Ekimetrics presents its scientific paper on Interpretability and NLP at the xAI World Conference

Date: July 24, 2023

Category: News

From July 26 to 28, Ekimetrics is in Lisbon, Portugal, for the 1st World Conference on eXplainable Artificial Intelligence (xAI). On July 27, 2023, we will present our latest scientific paper, “Evaluating self-attention interpretability through human-grounded experimental protocol”, written by our Eki.Lab team.

This 1st edition of the annual event aims to bring together researchers, academics, and professionals to share knowledge, new perspectives, experiences, and innovations in the field of explainable artificial intelligence.

Led by our Head of Innovation, Research and Development, Nicolas Chesneau, our Eki.Lab Interpretability team produced the paper “Evaluating self-attention interpretability through human-grounded experimental protocol” as a result of our scientific research. We are pleased to announce that the paper was reviewed by the Scientific Committee and accepted at the xAI World Conference, a high-profile event, under the category xAI and Natural Language Processing. It will also appear in the conference proceedings, published by Springer in Communications in Computer and Information Science.

The paper assesses how the attention coefficients of the Transformer architecture can help provide interpretability. It proposes a new attention-based interpretability method, CLaSsification-Attention (CLS-A), and evaluates it against other interpretability methods through a human-grounded experiment. The experimental protocol relies on an interpretability method's capacity to provide explanations in line with human reasoning.
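To give a flavor of the underlying idea, the sketch below shows how a classification token's self-attention row can be read as per-token importance scores. This is a toy, numpy-only illustration under our own assumptions (random Q/K matrices, a single head, invented token names), not the authors' CLS-A implementation, which is defined in the paper itself.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Toy single-head self-attention over 4 tokens: [CLS], "great", "movie", [SEP].
# In a real Transformer, Q and K would come from learned projections of the
# token embeddings; here they are random placeholders.
rng = np.random.default_rng(0)
d = 8                                # head dimension (assumed)
tokens = ["[CLS]", "great", "movie", "[SEP]"]
Q = rng.normal(size=(len(tokens), d))
K = rng.normal(size=(len(tokens), d))

# Scaled dot-product attention: row i describes how token i attends to all tokens.
attn = softmax(Q @ K.T / np.sqrt(d), axis=-1)

# CLS-A-style reading (simplified): take the [CLS] row, i.e. how strongly the
# classification token attends to each input token, as importance scores.
cls_scores = attn[0]
for tok, score in zip(tokens, cls_scores):
    print(f"{tok:>7}: {score:.3f}")
```

Each row of the attention matrix is a probability distribution over tokens, so the [CLS] row sums to one and can be compared directly against human annotations of which words matter for a prediction.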

The acceptance of our paper by the xAI World Conference is a testament to our research capability and our expertise in explainable AI. This achievement also reflects our core values: curiosity, excellence, and transmission.

Ekimetrics is committed to continued investment in forward-thinking innovation, including research on interpretability for better use of AI models. Today, our Eki.Lab consists of 15 lead researchers, 2 PhDs, and 6 research leaders focused on artificial intelligence.

To discover more, please read the summary of this scientific paper, “Evaluating self-attention interpretability through human-grounded experimental protocol”.
