"The rollout of the AI Act should be smooth if companies are well-prepared"

Date: April 2, 2024
Category: News
Author: Annabelle Blangero

On March 13, 2024, the European Parliament adopted the AI Act, marking a decisive turning point in how Europe envisions the future of Artificial Intelligence (AI). Although designed primarily to safeguard individuals' rights and safety, the law raises significant questions about its impact on businesses. Annabelle Blangero, Responsible AI Lead at Ekimetrics, sheds light on the situation.

What are the potential impacts of artificial intelligence on today’s business landscape? Will the AI Act have a positive or detrimental effect on business?

Annabelle Blangero: Europe has an opportunity to lead in trustworthy AI, placing ethics at the heart of its concerns, in contrast to the major American and Chinese tech players. This regulation creates a conducive framework for developing more responsible AI. I’m convinced of the need to establish safeguards to prevent misuse and the negative consequences of AI on society.

AI is a powerful technology that carries many risks for individuals due to its complexity, probabilistic nature, and propensity for bias. These risks are varied: technical (acceleration of human biases, black box effect, model hallucinations…), societal (with discriminatory drifts, for example), or democratic (with the proliferation of fake news, in particular).

For most businesses, the most significant impact will be the need to comply with new transparency standards for AI systems. These new safety requirements are far from superfluous: they are fundamental to countering the risks mentioned above.

Should all businesses be concerned about this law?

A.B.: It’s important to note that all businesses operating in Europe are affected by this law. Those based outside the EU must comply as soon as they interact with the European market. Compliance will be phased in gradually over two years and will apply industry by industry in each country.

The AI Act categorizes AI applications into four levels of risk: low, medium, high, and unacceptable. Low-risk use cases, such as sales prediction or Marketing Mix Modeling, have a limited impact on individuals. Medium-risk applications include, for example, conversational agents and chatbots, as well as segmentation and lead-scoring techniques. High-risk applications, which directly affect individuals’ rights and safety (such as systems for recruitment, for evaluating people, or for granting rights, and systems used in the legal domain, bank lending, or medicine), will require more significant standardization.

Today, most businesses use AI for low- or medium-risk applications, so the necessary adjustments will not be drastic; they will mainly involve making their standardization processes transparent.

I only see positives with this law. It can only strengthen the control, transparency, and interpretability of models. It could also contribute to better understanding and enhanced skills in the field of AI.

When should businesses start preparing for the AI Act?

A.B.: Right now! The rollout of the AI Act should be smooth if companies are well-prepared and anticipate the changes. We can compare this transition to the one businesses went through in 2016 with the European Parliament’s adoption of the GDPR¹.

Businesses must move quickly to establish risk assessment mechanisms, for example through surveys or checklists aimed at Product Owners. They can prepare by precisely mapping their internal use of AI today to create a registry of AI models and systems, using detailed model cards, which will facilitate compliance. They must establish clear governance to delineate responsibilities, form committees, and define processes in close collaboration with the legal, IT, and security departments. Raising awareness and training teams in AI is also essential to support skill development. Choosing a suitable methodological and technological partner is crucial to guide businesses through these steps.
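To make this concrete, here is a minimal sketch (in Python) of what such an internal registry of AI systems, with lightweight model cards and risk tiers, could look like. It is purely illustrative: the class names, fields, and example systems below are our own assumptions, not a format prescribed by the AI Act.

from dataclasses import dataclass, field
from enum import Enum

class RiskTier(Enum):
    # Simplified internal tiers mirroring the low/medium/high/unacceptable split described above
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"
    UNACCEPTABLE = "unacceptable"

@dataclass
class ModelCard:
    # Minimal, illustrative model card; these fields are hypothetical, not prescribed by the AI Act
    name: str
    owner: str            # e.g. the Product Owner who answers the risk checklist
    purpose: str
    training_data: str
    known_limitations: list[str] = field(default_factory=list)
    risk_tier: RiskTier = RiskTier.LOW

# The internal registry is then simply a collection of model cards that
# governance and compliance teams can query.
registry = [
    ModelCard(
        name="mmm-sales-forecast",
        owner="marketing-analytics",
        purpose="Marketing Mix Modeling / sales prediction",
        training_data="Aggregated campaign and sales data",
        known_limitations=["No individual-level decisions"],
        risk_tier=RiskTier.LOW,
    ),
    ModelCard(
        name="cv-screening-assistant",
        owner="hr-tools",
        purpose="Ranking job applications",
        training_data="Historical hiring decisions",
        known_limitations=["Possible historical bias", "Requires human review"],
        risk_tier=RiskTier.HIGH,
    ),
]

# Which systems fall under the heavier high-risk obligations?
high_risk = [card.name for card in registry if card.risk_tier is RiskTier.HIGH]
print(high_risk)  # ['cv-screening-assistant']

In practice, a registry like this would feed the risk-assessment surveys mentioned above and make it easy to see at a glance which systems require the heavier high-risk compliance track.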

At Ekimetrics, like many data science experts, we didn’t wait for the European Parliament to adopt the AI Act before supporting our clients on the path to responsible AI. We have already assisted clients in establishing AI governance, model card systems and registries, and bias analyses of their algorithms. Our efforts focus on developing innovative tools, such as solutions to assess environmental impact and detect biases in large language models (LLMs). We have long advocated for the importance of the interpretability and explainability of AI systems.

We’re convinced that a responsible approach to artificial intelligence is not a constraint; on the contrary, it can create value and give businesses a real competitive advantage. It allows them to better control their systems through transparency and interpretability, to better monitor performance, and to reduce development and maintenance costs while minimizing environmental impact. We’ve established seven pillars for responsible AI: Safety, Transparency, Interpretability, Explainability, Vigilance, Robustness, and Sustainability. This foundation has allowed us to develop innovative tools, like the CO2 tracker, a script to semi-automate model cards, and risk evaluation checklists, setting advanced standards in responsible AI.

Our commitments have even received recognition. Last year, we were certified Advanced Level LabelIA by Labelia Labs², an independent association. This certification reflects a very high level of maturity in responsible AI practices, assessed against numerous criteria. Building on this recognition, our teams have consolidated our expertise and vision into a white paper designed to be a practical responsible AI tool for business leaders, helping them with their strategic prioritization. I invite you to download it to find out where and how to start.

Download the white paper

¹ General Data Protection Regulation
² An independent association that promotes the development of trustworthy data science ecosystems
