Ekimetrics presents its scientific paper on Interpretability and NLP at the xAI World Conference

Date: July 24, 2023
Category: News

From July 26 to 28, Ekimetrics is in Lisbon, Portugal, for the 1st World Conference on eXplainable Artificial Intelligence (xAI). On July 27, 2023, we will present our latest scientific paper, “Evaluating self-attention interpretability through human-grounded experimental protocol”, written by our Eki.Lab team.

This first edition of the annual event brings together researchers, academics, and professionals to share knowledge, perspectives, experiences, and innovations in the field of Explainable Artificial Intelligence.

Led by our Head of Innovation, Research and Development, Nicolas Chesneau, our Eki.Lab Interpretability team produced the paper “Evaluating self-attention interpretability through human-grounded experimental protocol” as the result of our scientific research. We are pleased to announce that the paper was reviewed by the Scientific Committee and accepted at the xAI World Conference, a high-profile event, in the category of xAI and Natural Language Processing. It will also be included in the conference proceedings, published by Springer in the Communications in Computer and Information Science series.

The paper assesses how the attention coefficients of the Transformer architecture can support interpretability. It proposes a new attention-based interpretability method, CLaSsification-Attention (CLS-A), and evaluates it against other interpretability methods in a human-grounded experiment. The experimental protocol measures an interpretability method's capacity to provide explanations in line with human reasoning.
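For readers curious about the mechanics, here is a minimal sketch of an attention-based relevance score in the spirit of CLS-A: it reads the self-attention that a BERT-style model's [CLS] token pays to each input token and averages it over heads. The model name, the choice of the last layer, and the head-averaging are illustrative assumptions for this sketch, not the exact protocol from the paper.

```python
# Minimal sketch: token relevance from [CLS] self-attention.
# Assumptions (not the paper's exact method): bert-base-uncased,
# last attention layer only, simple mean over heads.
import torch
from transformers import AutoModel, AutoTokenizer

model_name = "bert-base-uncased"  # illustrative choice of backbone
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name, output_attentions=True)

text = "The movie was surprisingly good."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# outputs.attentions: one tensor per layer, shape (batch, heads, seq, seq)
last_layer = outputs.attentions[-1]     # assumption: use the last layer
cls_attention = last_layer[0, :, 0, :]  # attention from [CLS] (position 0)
scores = cls_attention.mean(dim=0)      # average over attention heads

tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, score in zip(tokens, scores):
    print(f"{token:>12s}  {score:.3f}")
```

Higher scores flag the tokens the classification token attends to most, which is the kind of explanation the human-grounded protocol then compares against human reasoning.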

The acceptance of our paper at the xAI Conference is a testament to our research capability and our expertise in explainable AI. This achievement also reflects our core values: curiosity, excellence, and transmission.

Ekimetrics will keep investing in forward-thinking innovation, including research on interpretability for better use of AI models. Today, our Eki.Lab team comprises 15 researchers, 2 PhDs, and 6 research leads who focus on artificial intelligence.

To discover more, please read the summary of this scientific paper, “Evaluating self-attention interpretability through human-grounded experimental protocol”.

Get in touch

Connect with our Data Science experts
