June 22, 2024


It's the Technology

ChatGPT’s Most Charming Trick Is Also Its Biggest Flaw

Like many other people over the past week, Bindu Reddy recently fell under the spell of ChatGPT, a free chatbot that can answer all manner of questions with stunning and unprecedented eloquence. 

Reddy, CEO of Abacus.AI, which develops tools for coders who use artificial intelligence, was charmed by ChatGPT’s ability to answer requests for definitions of love or creative new cocktail recipes. Her company is already exploring how to use ChatGPT to help write technical documents. “We have tested it, and it works great,” she says.

ChatGPT, created by startup OpenAI, has become the darling of the internet since its release last week. Early users have enthusiastically posted screenshots of their experiments, marveling at its ability to generate short essays on just about any theme, craft literary parodies, answer complex coding questions, and much more. It has prompted predictions that the service will make conventional search engines and homework assignments obsolete.

Yet the AI at the core of ChatGPT is not, in fact, very new. It is a version of an AI model called GPT-3 that generates text based on patterns it digested from huge quantities of text gathered from the web. That model, which is available as a commercial API for programmers, has already shown that it can answer questions and generate text very well some of the time. But getting the service to respond in a particular way required crafting the right prompt to feed into the software.
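To make that concrete, here is a minimal, hypothetical sketch of the kind of prompt crafting that working with GPT-3's raw completion interface required: embedding the user's question in a hand-written template with a worked example (so-called few-shot prompting) so the model would continue the pattern with a well-formed answer. The template and example question below are illustrative, not OpenAI's.

```python
# Illustrative few-shot prompt template. GPT-3's completion API simply
# continued whatever text it was given, so users steered it by showing
# the desired question-and-answer format before asking their real question.
FEW_SHOT_TEMPLATE = """\
Answer each question concisely.

Q: What is the boiling point of water at sea level?
A: 100 degrees Celsius.

Q: {question}
A:"""

def build_prompt(question: str) -> str:
    """Wrap a user's question in the few-shot template so the model's
    continuation lands after the final 'A:' in the expected format."""
    return FEW_SHOT_TEMPLATE.format(question=question)

prompt = build_prompt("Who wrote Hamlet?")
```

The resulting string would then be sent to the model; ChatGPT's contribution is that the user can skip this scaffolding and just ask the question directly.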

ChatGPT stands out because it can take a naturally phrased question and answer it using a new variant of GPT-3, called GPT-3.5. This tweak has unlocked a new capacity to respond to all kinds of questions, giving the powerful AI model a compelling new interface just about anyone can use. The fact that OpenAI has thrown the service open for free, and that its glitches can be good fun, also helped fuel the chatbot's viral debut—similar to how some tools for creating images using AI have proven ideal for meme-making.



OpenAI has not released full details on how it gave its text generation software a naturalistic new interface, but the company shared some information in a blog post. It says the team fed human-written answers to GPT-3.5 as training data, and then used a form of simulated reward and punishment known as reinforcement learning to push the model to provide better answers to example questions.
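The reinforcement step can be illustrated with a toy sketch. This is not OpenAI's actual training code—the real system fine-tunes a large neural network with a learned reward model—but the core idea is the same: sample candidate answers, score them with a reward signal standing in for human preference, and nudge the policy toward answers that score above average.

```python
import random

# Two canned candidate answers and a hand-coded reward acting as a
# stand-in for human preference ratings (both are illustrative).
ANSWERS = ["I don't know.", "Paris is the capital of France."]
REWARD = {ANSWERS[0]: 0.0, ANSWERS[1]: 1.0}

def train(steps=500, lr=0.1, seed=0):
    rng = random.Random(seed)
    weights = [1.0, 1.0]  # unnormalized preference for each answer
    for _ in range(steps):
        total = sum(weights)
        probs = [w / total for w in weights]
        # Sample an answer from the current policy.
        i = rng.choices(range(len(ANSWERS)), weights=probs)[0]
        reward = REWARD[ANSWERS[i]]
        # Baseline: the policy's current expected reward.
        baseline = sum(p * REWARD[a] for p, a in zip(probs, ANSWERS))
        # Policy-gradient-style update: reinforce answers that beat
        # the baseline, suppress answers that fall below it.
        weights[i] = max(1e-6, weights[i] * (1 + lr * (reward - baseline)))
    total = sum(weights)
    return [w / total for w in weights]

probs = train()
```

After training, the policy puts nearly all its probability on the higher-rated answer—a miniature version of how reward-and-punishment feedback pushes the model toward responses humans prefer.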

Christopher Potts, a professor at Stanford University, says the method used to help ChatGPT answer questions, which OpenAI has shown off previously, seems like a significant step forward in helping AI handle language in a way that is more relatable. “It’s extremely impressive,” Potts says of the technique, despite the fact that he thinks it may make his job more complicated. “It has got me thinking about what I’m going to do on my courses that require short answers on assignments,” Potts says.

Jacob Andreas, an assistant professor who works on AI and language at MIT, says the system seems likely to widen the pool of people able to tap into AI language tools. “Here’s a thing being presented to you in a familiar interface that causes you to apply a mental model that you are used to applying to other agents—humans—that you interact with,” he says.