
The limitations and watch-outs of generative AI (2/3)


It’s only a few months since the new generation of generative AI tools became widely available, with OpenAI’s ChatGPT in the vanguard. Because of their potential, they’ve attracted big investment from established players in Big Tech, including the US$11bn Microsoft has invested in OpenAI, the company behind ChatGPT. That backing has helped ChatGPT become the fastest-adopted application in software history: it took just five days to reach a million users, and today it has more than 100 million monthly active users.

Date: April 28th, 2023

Category: Thought Leadership

With companies across sectors keen to experiment with generative AI, there’s a pervasive atmosphere of corporate Fear Of Missing Out. But companies should be cautious before going “all-in” on AI. This revolution is not without its risks, not least because the technology is developing so quickly, without regulation, and often without comprehensive testing at scale. The most significant limitations and watch-outs for those keen to embrace ChatGPT are detailed below.

  • Hallucination: the model underpinning ChatGPT struggles to admit when it lacks the information needed to answer a question meaningfully. In the absence of valid data on which to base a response, ChatGPT tends to offer an answer anyway. It often struggles to cite sources, and even invents plausible-sounding ones: real authors active in the field, journals or publications that have covered the topic, but fabricated article titles, volumes, issues, and page numbers. Content and references should never be taken at face value, because however plausible they appear, they may be entirely fictional (a simple way to spot-check machine-generated references is sketched after this list).
  • Biases: there are three principal biases users need to be aware of when considering the reliability and usefulness of ChatGPT’s output. Cultural bias: the platform has been trained on a largely Anglo-Saxon (U.S. and British) corpus of data. Confirmation bias: the training method – Reinforcement Learning from Human Feedback (RLHF) – means the algorithm seeks out content that confirms, rather than challenges or contradicts, what it already knows. And authority bias: the algorithm attributes greater accuracy to the opinions of those considered experts in a given field, rather than to the opinions that are necessarily true. Clearly, ChatGPT has not done away with the statistician’s and data scientist’s maxim “garbage in, garbage out”, and these biases need to be borne in mind when reviewing the usefulness and veracity of content generated by the tool.
  • Privacy: the privacy rules of OpenAI’s public beta are not well understood. However, any data or code submitted to ChatGPT is likely to be used for further training, creating potential breaches of privacy and intellectual property. For this reason, the Italian data protection authority temporarily banned ChatGPT in March 2023 for non-compliance with legislation on personal data.
  • Incomplete data: even the latest update of ChatGPT – GPT-4, released in March 2023 – has only been trained on data up to September 2021, meaning that almost two years of recent content is missing from the training data. The current version of ChatGPT is therefore unable to generate content informed by what has happened in the recent past.
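
To make the hallucination watch-out concrete, here is a minimal sketch of how a reviewer might spot-check references produced by a model. It assumes the references carry DOIs and uses Crossref’s public lookup API; the DOI shown is a placeholder for illustration, not a real citation.

```python
import requests  # third-party HTTP library: pip install requests

def doi_resolves(doi: str) -> bool:
    """Return True if the DOI exists in the Crossref registry.

    A fabricated ("hallucinated") reference will typically come back 404.
    Note: a DOI that resolves only proves the work exists, not that it
    actually supports the claim it was cited for.
    """
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    return resp.status_code == 200

# Spot-check DOIs extracted from model output (placeholder DOI shown).
for doi in ["10.1000/example-placeholder"]:
    verdict = "found in Crossref" if doi_resolves(doi) else "NOT FOUND - check manually"
    print(f"{doi}: {verdict}")
```

Crossref only covers DOI-registered works, so a miss is grounds for manual checking rather than proof of fabrication – but this kind of lightweight verification can catch many invented citations before they propagate.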


Beware the black box

With little transparency on the underlying processes of ChatGPT’s algorithms, it is difficult for users to understand, master, and interpret how it makes its decisions. As a black box technology, ChatGPT presents companies with three major challenges.

  • Mastery – How to control for and eliminate hallucinations and biases, while respecting regulatory frameworks, when tolerance for error in decision-making is low.
  • Relevance – How to ensure that the content generated by the tool is both contextualized and actionable for specific sectors, companies, and use cases.
  • Cost – How to control the cost of internalizing such technologies. At a time of a global talent crunch for those with technical and data science expertise, companies need to assess the technical skills required to make best use of AI.


Societal risks

In addition, there are four notable societal domains in which AI platforms present risks, and which companies need to consider actively before putting these tools to work in their businesses.

  • Ethical – Whether through the spontaneous generation of hallucinations, the perpetuation of biases, the lack of recognised sources, or the vagueness around privacy, generative AI tools present a major risk of producing significant volumes of unverifiable information. This is exacerbated by the fact that the model could quickly spiral out of control by including the content it generates in its own future training data.
  • Democratic – If the ethical risks outlined above are exploited for malicious purposes, this poses risks for democracy, with the industrialization of fake news and the democratization of hacking. This is the risk most feared by OpenAI co-founder and CEO Sam Altman, who has said: “I’m particularly concerned that these models could be used for large-scale disinformation. Now that they’re getting better at writing computer code, they could be used for offensive cyberattacks.”
  • Environmental – The cost of training generative AI models is colossal, given the hundreds of billions of parameters involved, and each request to the algorithm carries its own footprint (estimated at around 0.2g of CO2). Digital technology’s share of global emissions is already worrying, projected to double to 8% by 2025, and the acceleration of AI adoption risks worsening it: if ChatGPT were to reach Google’s scale in daily requests, it would demand an exponential increase in computing power and the associated carbon impact (a rough back-of-envelope calculation follows this list). The environmental cost of generative AI is attracting serious analysis, including a recent report in the MIT Technology Review.
  • Social – Generative AI risks rendering obsolete human tasks that are currently highly valued and performed by qualified professionals – from consultants to auditors, from lawyers to mathematicians, from coders to data scientists. While Goldman Sachs’ estimate that generative AI could expose the equivalent of 300 million full-time jobs to automation may prove excessive, business leaders need to consider both the short-term impact of the rush to AI and what knowledge-economy workers would do in a post-AI workforce.
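
For a sense of the scale behind the environmental point above, here is a rough back-of-envelope calculation. It takes the ~0.2g of CO2 per request cited in this article and combines it with a hypothetical “Google-scale” request volume – the 8.5 billion requests per day used below is an illustrative assumption, not a measured figure for ChatGPT.

```python
# Back-of-envelope inference emissions at "Google scale".
# ASSUMPTIONS (illustrative only):
#   CO2_PER_REQUEST_G: ~0.2 g CO2 per request, the estimate cited above
#   REQUESTS_PER_DAY:  8.5 billion/day, a figure often quoted for Google
#                      Search, used here as a hypothetical volume
CO2_PER_REQUEST_G = 0.2
REQUESTS_PER_DAY = 8.5e9

daily_tonnes = CO2_PER_REQUEST_G * REQUESTS_PER_DAY / 1e6   # grams -> tonnes
yearly_tonnes = daily_tonnes * 365

print(f"Daily:  ~{daily_tonnes:,.0f} tonnes of CO2")   # ~1,700 t/day
print(f"Yearly: ~{yearly_tonnes:,.0f} tonnes of CO2")  # ~620,000 t/year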


In summary

The way that the current generation of AI tools is built means that businesses should not take the content they generate at face value. The black box nature of these tools presents major challenges in terms of mastery, relevance, and cost. And there are significant societal consequences that cannot be dismissed lightly – ethical and democratic, environmental and social. These must be assessed by individuals, corporations, and society as a whole before we go “all-in” on AI. Because of the way AI feeds on itself and learns from what it does, it could quickly become impossible to put the genie back in the bottle.


To find out more, see our articles “What is generative AI and why does it matter?” and “Putting the opportunities of generative AI into perspective”.
