
Putting the opportunities of generative AI into perspective (3/3)

The excitement surrounding the new generation of widely available generative AI tools – with ChatGPT the poster boy – has reached fever pitch. Bill Gates has declared that “the age of AI has begun”, claiming that “Artificial Intelligence is as revolutionary as mobile phones and the internet”. Meanwhile, some of the more enthusiastic members of the tech bloggerati are calling ChatGPT “another Gutenberg moment”, likening the advent of mass-availability generative AI to the invention of the printing press in the 15th century.

Date: April 28th, 2023

Category: Thought Leadership

Time to pause and reflect?

Without getting too carried away, the consensus is that 2022/23 does indeed mark a technological turning point rich with potential, and a turning point in attitudes to AI. Not a day goes by without the release of a new generative AI algorithm. Recently, Midjourney has topped even ChatGPT in terms of buzz, with its ability to create photorealistic fake-news images of the Pope in a puffer jacket and the bogus arrests of both Donald Trump and Boris Johnson.

 

However, because of the pace at which AI systems are able to learn, the risks posed by AI (see our article “The limitations and watch-outs of generative AI”), and the speed with which AI-powered technologies are being brought to market, there have been calls to pause and reflect. An open letter signed by a thousand industry experts was published on 28 March, calling for a six-month moratorium on the development of the most powerful AI systems to allow for a sober assessment of the potential impact of this new technology. The letter brings together a surprising combination of signatories, from the ultra-libertarian Elon Musk to long-time advocates of ethics and regulation.

 

Questions of performance versus impact

The wide variety of signatories to the open letter reveals fault lines between two opposing perspectives on technology and AI. These can be categorized, broadly, as North American vs European, performance vs impact, free market vs regulation. Since the advent of the commercial internet in the mid-1990s, Silicon Valley has imposed its vision of ever-increasing performance, with success measured by ever-greater volumes of data, computing power, and algorithmic complexity. What the open letter and its broad church of signatories suggest is that the rush to AI may have accelerated support for a more regulated, European way of thinking, one focused more on social and societal impact than on better and better technology at all costs.

This would be a turning point at least as significant as the technological potential unleashed by ChatGPT in particular and Large Language Models (LLMs) in general, because it would raise the importance of ethical and democratic impact in a sector that has for a generation been governed by the quest for ever-greater performance. Now that AI is available to all, it is as if we are considering its negative externalities for the first time. It is also a salutary moment, because it puts the question of usage back at the heart of the debate: it forces us to consider which use cases are genuinely useful. And it is a turning point in the general public’s understanding of these issues, a moment to grasp that technology is never an end in itself, but rather a means to accelerate virtuous uses.

 

Towards new regulation?

Self-regulation has proved relatively toothless in curbing the excesses of the previous tech boom in digital platforms, and social media in particular. This – and the hyper-rapid pace of development in AI – suggests that regulation may be the only lever capable of curbing the headlong rush towards ever-greater performance. European regulation has a unique opportunity here to imprint the European vision on a global scale, but we must move quickly and not miss our chance. The sector needs a pledge, and incentives to respect it, rather than a restrictive compliance framework. Only a pledge can keep pace with innovation, because it addresses principles and virtuous uses; compliance frameworks, by contrast, are condemned to specify everything exhaustively and to start again with each new technology, always with a delay.

This will represent an immense challenge in terms of governance, the framing and definition of rules, coordination, control, and incentives. In a geopolitical context – with AI tools and platforms having no respect for national boundaries – the challenge will be to determine who controls what, and how. Recent experience on data privacy suggests that it will be hard for the EU and the U.S. to align, not to mention China. While it may not be possible to control the pace of research and innovation in AI, a moratorium could contain the flow of public releases of new tools and updates. And whether or not it proves possible to impose a moratorium like the one proposed by the open letter, regulation – and European regulation in particular – has the chance to seize the current historical moment: to take the lead by focusing on real-world impact and use cases, applying the logic of incentives.

 

What ChatGPT tells us about the use of AI for business

The sudden arrival of ChatGPT, and rapid experimentation with it at scale, provide a unique opportunity to enhance understanding of the potential, uses, and issues surrounding AI. And because the technology is dominating the news agenda – and because so many businesses are keen to try it rather than lose the initiative to competitors – we also have the opportunity to highlight AI’s limitations.

Fear of missing out and a blind belief that all technology is good are still realities, however. Gartner’s Hype Cycle is the best visual representation of this, and it is important to remember that more than three-quarters of all data projects fail, while more than 95% are never really adopted and so never generate tangible gains. Gartner’s “Peak of Inflated Expectations” is still doing as much harm as good in the case of AI. Companies have remarkably short memories, failing to learn from failure.

Every era has its buzz, and every problem has its miracle solution. But just as Salesforce never solved the issues of relationship strategy or created clean, unified customer data, so ChatGPT will not be a miracle solution for creating relevant content or enabling companies to interact more effectively with their consumers. Companies would be well advised to capture the spirit of the open letter on AI: hit the pause button when a new technology arrives, and take six months to establish why and how they might use it, and what impacts it might have. Sometimes, this period of reflection will result in a business deciding not to adopt the technology at all.

 

Three areas where companies will find AI helpful

There are three clear functions or categories of task for which we believe businesses are likely to find AI most immediately helpful.

Functions that use a large volume of data to make decisions:

  • Financial analysis, summary notes on companies, enrichment/completion of company knowledge bases
  • Sustainability research and documentation from IPCC reports and other reference sources – as demonstrated in Ekimetrics’ Climate Q&A tool – plus completing ESG questionnaires (see the sketch after this list)
  • Health diagnostic tools, summarising research literature
  • Legal research assistance, integrating data from multiple cases.
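As a concrete illustration of this first category, here is a minimal sketch of document-grounded question answering, in the spirit of the Climate Q&A item above: the model is asked to answer only from excerpts supplied in the prompt, which keeps it anchored to the reference sources. It assumes the openai Python package (the chat-completions interface as of early 2023) and an OPENAI_API_KEY environment variable; the excerpts, question, and model name are illustrative placeholders, not a description of Ekimetrics’ actual tooling.

```python
import openai  # pip install openai; reads OPENAI_API_KEY from the environment

# Placeholder excerpts standing in for passages retrieved from reference reports.
excerpts = [
    "Global surface temperature in 2011-2020 was about 1.1°C above 1850-1900.",
    "Limiting warming to 1.5°C requires rapid and deep emissions reductions.",
]
question = "What do the excerpts say about current warming levels?"

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    temperature=0,  # keep answers deterministic and conservative
    messages=[
        {
            "role": "system",
            "content": "Answer using only the excerpts provided. "
                       "If the excerpts do not contain the answer, say so.",
        },
        {
            "role": "user",
            "content": "Excerpts:\n- " + "\n- ".join(excerpts)
                       + f"\n\nQuestion: {question}",
        },
    ],
)
print(response.choices[0].message.content)
```

Restricting the model to supplied excerpts is what makes this pattern suitable for decision support: answers can be traced back to the underlying documents rather than to the model’s general training data.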

 

Functions that use a human-machine text interface, including customer relations:

  • Customer service, complaint management
  • Synthesis of customer reviews (see the sketch after this list)
  • Enrichment of content provided by chatbots.
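A similarly minimal sketch for this second category, assuming the same openai package and hypothetical review texts, condenses a batch of customer reviews into recurring themes with an overall sentiment, ready to feed a customer-service dashboard or a chatbot knowledge base.

```python
import openai  # pip install openai; reads OPENAI_API_KEY from the environment

# Hypothetical customer reviews; in practice these would come from a CRM export.
reviews = [
    "Delivery was fast but the packaging arrived damaged.",
    "Great product, terrible customer service when I asked for a refund.",
    "Third order this year, always on time.",
]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    temperature=0,
    messages=[
        {
            "role": "system",
            "content": "Summarise these customer reviews into at most three "
                       "recurring themes, each labelled positive, negative, or mixed.",
        },
        {"role": "user", "content": "\n".join(f"- {r}" for r in reviews)},
    ],
)
print(response.choices[0].message.content)
```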

 

More universal applications for all businesses:

  • Labelling and enrichment of data (see the sketch after this list)
  • Creation of comprehensive data sets
  • Completion of fragmented and incomplete data sets.
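For this third category, a minimal sketch of data labelling and enrichment, again with the openai package and hypothetical product records: each record is classified into one of a fixed list of categories, so the model’s output can be joined straight back onto the data set.

```python
import json

import openai  # pip install openai; reads OPENAI_API_KEY from the environment

# Hypothetical, incomplete product records and a fixed category list.
records = [
    {"id": 1, "description": "Stainless steel water bottle, 750 ml"},
    {"id": 2, "description": "Wireless noise-cancelling headphones"},
]
categories = ["Kitchen & Dining", "Electronics", "Clothing", "Other"]

labelled = []
for record in records:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        temperature=0,
        messages=[
            {
                "role": "system",
                "content": "Classify the product into exactly one of these "
                           f"categories: {', '.join(categories)}. "
                           "Reply with the category name only.",
            },
            {"role": "user", "content": record["description"]},
        ],
    )
    labelled.append(
        {**record, "category": response.choices[0].message.content.strip()}
    )

print(json.dumps(labelled, indent=2))
```

Constraining the reply to a closed list of labels is the simplest way to keep this kind of enrichment auditable; anything the model returns outside the list can be routed to human review.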

 

In summary

The sudden arrival and rapid spread of mass-availability AI platforms and tools does mark the beginning of a new era in the ways companies work with technology and apply data science. It’s important not to get carried away by the hype or by fear of missing out, and the recent open letter calling for an industry-wide pause in the development and roll-out of AI provides a useful metaphor for how we believe companies should assess the potential impact of AI on their businesses. Some European-style regulation on why, how, and where AI is used would help ensure it’s used to the advantage of both corporations and citizens, and there are many areas where it is likely to bring immediate and sustained benefits to businesses without causing undue societal risks.

 

To find out more, see our articles “What is generative AI and why does it matter?” and “The limitations and watch-outs of generative AI”.
