GPT-4, like its predecessor GPT-3, is an example of a Large Language Model (LLM): a generative AI algorithm that creates new content based on the existing information and frameworks on which it has been trained. In the case of ChatGPT, this means online content created up until late 2021. This deep learning approach is a subset of machine learning, itself a subset of data science.
Until recently, the power, speed, and wide applicability of AI tools had been held back by the recurrent neural networks on which they were based. The leap forward embodied by ChatGPT rests on transformer technology. Transformers are distinguished both by the sheer number of parameters they can take into account – 175 billion for GPT-3, reportedly around a trillion for GPT-4 – and by the greatly increased volume of text on which they are trained. For GPT-4, this means some 500GB of text from the web; all of Wikipedia, by comparison, represents barely more than 20GB of data. Because ChatGPT learns at such speed, from so many more parameters and so much more training data, the answers it generates to user requests are far better informed and better structured.
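For readers who want a concrete sense of what a transformer actually does, the minimal sketch below shows scaled dot-product self-attention, the core operation that lets every token in a sequence weigh every other token in parallel rather than one at a time as in a recurrent network. It is a toy illustration only: it uses NumPy and random weights, where a production model such as GPT-4 stacks many such layers with billions of learned parameters.

```python
# Toy sketch of scaled dot-product self-attention, the core transformer operation.
# Uses NumPy and random weights purely for illustration; real LLMs learn these
# parameters during training and stack many attention layers.
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """Compute softmax(QK^T / sqrt(d)) V for a sequence of token vectors x."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v           # project tokens to queries, keys, values
    scores = q @ k.T / np.sqrt(k.shape[-1])       # how strongly each token attends to the others
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the sequence
    return weights @ v                            # weighted mix of value vectors

rng = np.random.default_rng(0)
seq_len, d_model = 4, 8                           # 4 tokens, 8-dimensional embeddings
x = rng.normal(size=(seq_len, d_model))           # stand-in for token embeddings
w_q, w_k, w_v = (rng.normal(size=(d_model, d_model)) for _ in range(3))
print(self_attention(x, w_q, w_k, w_v).shape)     # -> (4, 8): one updated vector per token
```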
As a result, ChatGPT has become the fastest-adopted software platform in history, taking fewer than five days to reach one million users. That compares with two-and-a-half months for DALL-E – the AI-driven image generator, also developed by OpenAI – ten months for Facebook, and two years for Twitter. In less than six months, ChatGPT had attracted more than 100 million active users, and today it welcomes a billion visitors a month.
The established major players in tech are investing heavily in generative AI. Microsoft has so far invested US$11bn in OpenAI, the company behind ChatGPT, and is reported to be entitled to as much as three-quarters of the company's profits in the long term. It has integrated OpenAI's services into both its Bing search engine and its suite of Office tools. In so doing, it has stolen a march on Google, which launched its own conversational AI chatbot, Bard, some months after ChatGPT, and to less than universal acclaim. In recent months, interest, activity, and investment have lurched from Web 3.0 and the metaverse, NFTs and cryptocurrencies, to focus squarely on generative AI.
ChatGPT enables users to manipulate, summarize, or repurpose large volumes of text. This makes it ideally suited to assignments where humans and machines share the load, increasing productivity by simplifying and accelerating tasks that require writing and synthesizing text. These include:
Additionally, ChatGPT can help businesses with human-machine-human tasks in which existing textual content is reformulated. These include writing articles, media content, and social media posts, as well as simplifying and popularizing content so it can move between different specialist business functions. Asked to write progressively more plainly and with less jargon, ChatGPT can help make the impenetrable both understandable and usable across traditional silos in a business.
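As a rough illustration of how such a jargon-reduction task might be automated, the sketch below asks a GPT model to rewrite the same technical passage for progressively less specialist audiences. It assumes the OpenAI Python SDK (v1.x) and an API key in the environment; the model name, prompts, and sample text are placeholders rather than a prescribed workflow.

```python
# Illustrative sketch: asking a GPT model to rewrite a technical passage with
# progressively less jargon. Assumes the OpenAI Python SDK (v1.x) and an
# OPENAI_API_KEY environment variable; model name and prompts are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

technical_text = (
    "Our churn-propensity model leverages gradient-boosted decision trees "
    "over a feature store of rolling 90-day engagement aggregates."
)

for audience in ["a colleague in another department", "a new hire", "a customer"]:
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder; any available chat model would do
        messages=[
            {"role": "system", "content": "You rewrite business text clearly and concisely."},
            {"role": "user", "content": f"Rewrite the following for {audience}, using less jargon:\n\n{technical_text}"},
        ],
    )
    print(f"--- For {audience} ---")
    print(response.choices[0].message.content)
```

Each pass trades precision for accessibility, which is exactly the human-machine-human pattern described above: a specialist supplies the source text, the model reformulates it, and a person reviews the result before it crosses a silo.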
There are four reasons why ChatGPT can be seen as revolutionary as a business tool and assistant.
According to a March 2023 report by investment bank Goldman Sachs, the equivalent of up to 300 million full-time jobs worldwide could be exposed to automation by generative AI tools such as ChatGPT. The report concludes that as many as two-thirds of knowledge-economy jobs in the U.S. and EU could be simplified and accelerated by some degree of AI-driven automation, with legal, administrative, and customer-support roles most exposed in the U.S., and management and administrative jobs most exposed in Europe.
The true impact on the workforce will almost certainly be significantly less than the Goldman Sachs report suggests. The automation of some tasks – such as information gathering and synthesis – will likely free up time and capacity for knowledge-economy workers to focus on working with these new technologies, enabling them to do more with less and, indeed, to take on completely new tasks. Content-development roles could shift, say, from editorial work to moderation, while data scientists could find themselves spending more time engineering the data they work with – briefing and refining the work of generative AI tools – rather than preparing and running the analyses themselves.
It is these kinds of shifts that have led involved, invested advocates of generative AI – including such significant figures as Bill Gates – to declare that the age of AI has finally begun. For Gates, the world of technology has not experienced such a revolutionary leap forward since the widespread adoption of the graphical user interface more than 40 years ago.
To find out more, see our article on the limitations and watch-outs surrounding generative AI and our separate piece titled “Putting the opportunities of generative AI into perspective”.