In November, the research company OpenAI released a bot called ChatGPT. The bot has since gone viral, and has been used to write code, poetry and even an architectural review of the Perot Museum of Nature and Science.
ChatGPT composes everything from one-word responses to entire essays. But how does it actually work, and could its written responses replace human creations and ideas? Here’s what to know, according to scientists at the University of Texas at Dallas.
We encounter artificial intelligence every day. Apple’s face ID uses machine learning to confirm it’s us when we unlock our phones. Social media apps personalize our feeds based on what we like best. And tools such as Siri and Alexa answer our burning questions.
ChatGPT is a specific kind of artificial intelligence called a “large language model,” according to Xinya Du, an assistant computer science professor at UTD. It’s been trained on a large amount of data and text, including code and information from the internet.
If we ask ChatGPT what comes after the phrase “Four score and seven years ...,” it fills in the blank with the word most likely to go next based on the data it’s been trained on.
“It’s [predicting the next word] on a tremendously large scale, with a tremendously larger vocabulary of known texts than a human could possibly remember,” said Jessica Ouyang, an assistant computer science professor at UTD.
The tech that powers ChatGPT isn’t new; it’s the product of several previous OpenAI chatbots dating to 2018, including GPT, GPT-2 and GPT-3. ChatGPT is trained on an even bigger data set and is presented on an easy-to-use website — both of which have helped fuel its online popularity.
When we asked, "What is 2 plus 2?" ChatGPT came back with a swift answer: "The sum of 2 and 2 is 4."
UTD scientists said ChatGPT doesn’t solve math problems by crunching numbers. Instead, it determines the best way to answer based on what’s been previously said on the topic.
“Presumably, somewhere on the internet, someone has typed the sequence, ‘What’s two plus two? Oh, that’s four.’ Whether that’s in some sort of chat, some sort of online forum, Quora question or something,” Ouyang said.
This underscores a key limitation of ChatGPT: It's only as smart as the data it's been fed. The chatbot was trained on data through 2021, and while that could change, it has "limited knowledge of world and events" after that point, according to OpenAI's website.
ChatGPT generates its answers one word at a time, Ouyang said, and scans the words it’s already produced to decide on the next one. This allows the chatbot to produce conversational sentences that make logical sense.
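The word-by-word process the UTD researchers describe can be sketched with a toy model. This hypothetical example uses simple word-pair counts rather than ChatGPT's neural network, and a single sentence rather than internet-scale data, but the loop is the same idea: look at the words so far, pick a likely next word, append it and repeat.

```python
import random
from collections import defaultdict

# A tiny "training set" standing in for ChatGPT's internet-scale data.
corpus = (
    "four score and seven years ago our fathers brought forth "
    "on this continent a new nation"
).split()

# Record which word follows which (a bigram model -- vastly simpler than
# ChatGPT, but the same notion of likelihoods learned from past text).
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(prompt, length=5):
    """Produce text one word at a time, each word chosen from those
    seen after the previous word in the training data."""
    words = prompt.split()
    for _ in range(length):
        candidates = follows.get(words[-1])
        if not candidates:
            break  # the model never saw this word, so it has nothing to add
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("four score and seven years"))
```

Because every word in this toy corpus appears only once, the sketch fills in the blank deterministically: prompted with "four score and seven years," it continues with "ago our fathers brought forth." A real large language model does the same kind of fill-in-the-blank, just with billions of learned weights instead of a list of counts.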
Gopal Gupta, a computer science professor at UTD, said ChatGPT can be useful for creative projects like poems or as an aid for legal briefs or emails. The chatbot can generate formulaic posts or captions for social media and product descriptions.
Dale MacDonald, an associate dean of research and creative technologies at UTD, said he could see the chatbot being used to write copy for advertisements.
ChatGPT produces answers based on a huge amount of data, including information from the internet. Of course, not everything on the internet is true.
If we ask ChatGPT to answer a scientific question, or even to write the intro to a scientific paper, it can produce an answer. But Gupta said it’s difficult to verify whether the information is true. Users can ask ChatGPT to cite its sources, but it can still produce attributions that may not be accurate.
Online sources can also be biased. According to OpenAI’s website, while the company has attempted to make ChatGPT refuse inappropriate requests, “it will sometimes respond to harmful instructions or exhibit biased behavior.”
For artificial intelligence to be as smart as humans, Gupta said, it needs to be able to reason.
As humans, we can make a decision without all the necessary information, Gupta said. Replicating this phenomenon in a computer is no small feat, and ChatGPT hasn’t accomplished it yet.
ChatGPT isn’t the first of its kind, and it likely won’t be the last. While it can carry a conversation or answer a question, it doesn’t “know” or “feel” anything — all it can do is fill in the blank, word by word.
It can also comment on its own limitations. When asked to end this article in one sentence, ChatGPT responded: “While ChatGPT can generate creative outputs and provide answers to certain questions, its limitations and potential for biased and inaccurate responses highlight the need for caution when using AI language models.”
Interested in learning more about ChatGPT? RSVP for an upcoming panel discussion hosted by the University of Texas at Dallas, in partnership with The Dallas Morning News.
“ChatGPT: Fact vs Fiction”
WHAT: A new generation of artificial intelligence tools can generate text that sounds more human than ever. What are its advantages and pitfalls?
WHEN: Tuesday, March 21, 7-8:30 p.m.
WHERE: ATEC Lecture Hall at the University of Texas at Dallas
WHO: Panelists include UTD computer science professors Jessica Ouyang, Gopal Gupta and Xinya Du, and Dale MacDonald, an associate dean of research and creative technologies.
RSVP: The event is free, and everyone is welcome.
Adithi Ramakrishnan is a science reporting fellow at The Dallas Morning News. Her fellowship is supported by the University of Texas at Dallas. The News makes all editorial decisions.