
Generative AI in your ecosystem: relevance, embedding, control


Beyond the hype of Generative AI, this article offers our perspective on the challenges that will arise when we try to integrate these new methodologies into information systems.

Author: Nicolas Chesneau, Head of Innovation, Research and Development

Date: July 4th, 2023

Category: Thought Leadership

Generative AI encompasses a set of deep learning algorithms that are capable of creating content based on one or more instructions. Examples include text summarization algorithms and models that generate photos from text descriptions (e.g., “a photo of a city in the rain”). The most famous representative of generative AI, which has gained significant media attention, is undoubtedly ChatGPT. It has been hailed as a new revolution, a major upheaval in the history of our societies. Beyond the hype, however, this article offers a concise account of a generative AI project under development, designed to solve concrete problems, along with our perspective on the challenges that arise when we try to integrate these new methodologies into information systems.

A generative AI project is just like any other data science project

The emergence of generative AI is creating new use cases whose existence we could not have foreseen a few years ago. The demonstrative power of ChatGPT leads many to believe that the era of “one size fits all” is upon us, suggesting that a single model could address all data-related challenges without requiring us to adapt in any way. However, our experience in the field shows that things are more complicated than that.

Generative AI is a component that needs to be integrated into an ecosystem. This requires us to process and structure data beforehand, and to search for useful information before presenting it to the generative AI algorithm. An interface must then be created to display the output, accompanied by KPIs if necessary, and so on. Such operations are time-consuming, just as they would be within a traditional data project. Where generative AI differs from traditional methodologies is in its ability to present results to humans in a clear and meaningful way. Its training has been designed with this goal in mind: Reinforcement Learning from Human Feedback (RLHF) is a technique that takes into account the corrections made by annotators to a ranking of answers proposed by the generative AI algorithm. Thanks to RLHF, the algorithm learns to generate responses that humans would validate.
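
As an illustration of what integrating such a component into an ecosystem can look like, here is a minimal sketch of a pipeline in Python: documents are structured and indexed, the most relevant ones are retrieved for a given question, and only then is the generative model called. The TF-IDF retrieval from scikit-learn is used purely for illustration, and call_generative_model is a hypothetical placeholder for whichever LLM API or in-house model the information system exposes.

```python
# Minimal sketch: structure documents, retrieve the most relevant ones,
# then pass them to the generative model together with the user's question.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Quarterly sales report for the retail division...",
    "Methodology note on carbon footprint estimation...",
    "Internal FAQ on data governance policies...",
]

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(documents)

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the question."""
    q_vec = vectorizer.transform([question])
    scores = cosine_similarity(q_vec, doc_matrix)[0]
    top_idx = scores.argsort()[::-1][:k]
    return [documents[i] for i in top_idx]

def answer(question: str) -> str:
    """Build a grounded prompt and hand it to the generative component."""
    context = "\n".join(retrieve(question))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    return call_generative_model(prompt)  # hypothetical placeholder: plug in your LLM of choice

# The retrieval step alone is runnable as-is:
print(retrieve("How do we estimate carbon footprints?"))
```

The retrieval step, the prompt construction and the display layer around the model are exactly the kind of "traditional" engineering work described above.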

A great deal of adaptation is always required to meet user needs. Generative AI creates new needs and propels decision-makers towards previously unknown use cases. During the scoping phase, requirements are refined with each iteration, and it quickly becomes clear that precision must be improved for certain documents or tasks, and that certain business-specific features may have been inadequately considered, or not considered at all. In the Climate Q&A project, feedback from our beta testers has been invaluable in making the tool more relevant to the public. Here are a few examples: “the tool is very sensitive to the question’s phrasing,” “footnotes are not taken into account, yet important information may be found there,” “some sources may be contradictory.” We then had to make corrections, test new approaches, add new components, and so on.

In summary, generative AI is a powerful tool that addresses newly emerging needs which have not yet been clearly outlined, or have been poorly articulated. It will revolutionize the way we work in many sectors. However, like any IT project, it requires integration into existing systems and clear specifications for the tasks the tool is meant to solve or automate.

Generative AI in production: The challenge of monitoring

In an IT project, deploying a new algorithm into production relies on proven methods. For a data science project, a few additions are necessary, such as monitoring the performance of the machine learning algorithm. The algorithm is trained on an initial database and then tested on another data set to evaluate its overall performance. For example, in a binary classification task, the accuracy metric is often used: an accuracy of 90% indicates that the model makes, on average, one mistake for every ten predictions. Monitoring performance is crucial. If we want to change the model to improve results, the metric serves as the judge; in our example, switching algorithms might be considered relevant if performance reaches 95%. Monitoring also ensures that results remain consistent over time. The distribution and volume of data change with usage and with the evolution of the information system, and new data may appear that needs to be incorporated. Ensuring that the algorithm remains effective and robust is essential.
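
By way of example, and assuming labelled production samples arrive periodically, a minimal monitoring sketch could look like the following. The baseline and alert threshold mirror the accuracy figures used in this article and are purely illustrative.

```python
# Minimal monitoring sketch: track accuracy on fresh labelled batches
# and flag a significant drop against the deployment-time baseline.
from sklearn.metrics import accuracy_score

BASELINE_ACCURACY = 0.90   # accuracy measured at deployment time (illustrative)
ALERT_THRESHOLD = 0.85     # investigate or retrain below this level (illustrative)

def monitor_batch(y_true, y_pred) -> float:
    """Compute accuracy on a fresh batch and flag a drop below the threshold."""
    acc = accuracy_score(y_true, y_pred)
    if acc < ALERT_THRESHOLD:
        print(f"ALERT: accuracy dropped to {acc:.2%} (baseline {BASELINE_ACCURACY:.2%})")
    return acc

# Example: one mistake in ten predictions -> 90% accuracy
print(monitor_batch([1, 0, 1, 1, 0, 1, 0, 0, 1, 1],
                    [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]))
```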

For generative AI algorithms, the question of monitoring will arise in the coming months. How can we ensure that the model remains just as good? And what do we mean when we say that a generative AI algorithm is a good model? In the previous example, discussing this with experts and understanding their perspective could lead to 85% being considered an acceptable performance threshold. But how can we ensure that an algorithm generating a document performs well? The key lies in user feedback. OpenAI has improved ChatGPT by allowing users to rate its responses (thumbs up or thumbs down). The same approach will apply to major upgrades, where documents will need to be compared (is the document generated by the new algorithm better than the one from the old algorithm, according to users?). Furthermore, users should always have the option to disregard algorithmic results to avoid incorporating irrelevant outputs. Generative AI can generate false information if not guided sufficiently (the phenomenon of hallucination is now known and documented), and the risk of bias is always present.
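
To make the feedback loop concrete, here is a hedged sketch of how user ratings might be logged and compared across model versions. This is not OpenAI’s actual mechanism; the names and values are illustrative.

```python
# Illustrative sketch: log thumbs up / thumbs down per model version
# and compare approval rates when a new version is rolled out.
from collections import defaultdict

feedback_log: dict[str, list[int]] = defaultdict(list)  # model_version -> 1 (up) / 0 (down)

def record_feedback(model_version: str, thumbs_up: bool) -> None:
    """Store one user rating for a given model version."""
    feedback_log[model_version].append(1 if thumbs_up else 0)

def approval_rate(model_version: str) -> float:
    """Share of positive ratings for a model version."""
    votes = feedback_log[model_version]
    return sum(votes) / len(votes) if votes else float("nan")

# Example: do users prefer the documents produced by the new version?
record_feedback("v1", True); record_feedback("v1", False); record_feedback("v1", True)
record_feedback("v2", True); record_feedback("v2", True); record_feedback("v2", False)
print(f"v1 approval: {approval_rate('v1'):.0%}, v2 approval: {approval_rate('v2'):.0%}")
```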

Generative AI will therefore require specific maintenance work in the coming years to ensure continuity in performance and usage. It is worth noting that the people using these tools will adapt and learn to use them, much as we have done with Google searches (if the first result doesn’t seem relevant, we adjust our search with a second query to reach the desired result).

 

Generative AI is creating new use cases and will revolutionize numerous sectors, but we still need to know how to integrate it into our information systems. This step may not be as straightforward as it seems, even though algorithms like ChatGPT appear to be all-purpose. The challenges encountered in a typical data science project are very much present in the context of generative AI. To ensure sustainable use, careful consideration must be given to monitoring and tracking these algorithms. The user must be at the center of it all, and a new task will be assigned to them: assessing the results produced by generative AI.
