We have to stop ignoring AI's hallucination problem

Artificial intelligence is being rapidly deployed across the technological landscape in the form of GPT-4o, Google Gemini, and Microsoft Copilot, and that would be cool if the AI weren't so stupid.

Google I/O introduced an AI assistant that can see and hear the world, while OpenAI put its version of a Her-like chatbot into an iPhone. Next week, Microsoft will host Build, where it's sure to show off some version of Copilot or Cortana that understands pivot tables. Then, a few weeks after that, Apple will host its own developer conference, and if the buzz is anything to go by, it'll be talking about artificial intelligence, too. (It's unclear whether Siri will be mentioned.)
AI is here! It’s no longer conceptual. It’s taking jobs, making a few new ones, and helping millions of students avoid doing their homework. According to most of the major tech companies investing in AI, we appear to be at the start of experiencing one of those rare monumental shifts in technology. Think the Industrial Revolution or the creation of the internet or personal computer. All of Silicon Valley — of Big Tech — is focused on taking large language models and other forms of artificial intelligence and moving them from the laptops of researchers into the phones and computers of average people. Ideally, they will make a lot of money in the process.
But I can’t really care about that because Meta AI thinks I have a beard.
I want to be very clear: I am a cis woman and do not have a beard. But if I type “show me a picture of Alex Cranz” into the prompt window, Meta AI inevitably returns images of very pretty dark-haired men with beards. I am only some of those things!
Meta AI isn’t the only one to struggle with the minutiae of The Verge’s masthead. ChatGPT told me yesterday I don’t work at The Verge. Google’s Gemini didn’t know who I was (fair), but after telling me Nilay Patel was a founder of The Verge, it then apologized and corrected itself, saying he was not. (I assure you he was.)
When you ask these bots about things that actually matter, they mess up, too. Meta's 2022 launch of Galactica went so badly that the company took the AI down after three days. Earlier this year, ChatGPT had a spell and started spouting absolute nonsense, and it also regularly makes up case law, leading to multiple lawyers getting into hot water with the courts.
The AI keeps screwing up because these computers are stupid. Extraordinary in their abilities and astonishing in their dimwittedness. I cannot get excited about the next turn in the AI revolution because that turn is into a place where computers cannot consistently maintain accuracy about even minor things.
I mean, they even screwed up during Google’s big AI keynote at I/O. In a commercial for Google’s new AI-ified search engine, someone asked how to fix a jammed film camera, and it suggested they “open the back door and gently remove the film.” That is the easiest way to destroy any photos you’ve already taken.
An AI's difficult relationship with the truth is called "hallucinating." In extremely simple terms: these machines are great at discovering patterns in information, but in their attempt to extrapolate and create, they occasionally get it wrong, effectively "hallucinating" a new reality that doesn't match the real one. It's a tricky problem, and every single person working on AI right now is aware of it.
One ex-Google researcher claimed the problem could be fixed within the next year (though he lamented that outcome), and Microsoft has a tool for some of its users that's supposed to help detect hallucinations. Google's head of Search, Liz Reid, told The Verge it's aware of the challenge, too. "There's a balance between creativity and factuality" with any language model, she told my colleague David Pierce. "We're really going to skew it toward the factuality side."
But notice how Reid said there was a balance? That’s because a lot of AI researchers don’t actually think hallucinations can be solved. A study out of the National University of Singapore suggested that hallucinations are an inevitable outcome of all large language models. Just as no person is 100 percent right all the time, neither are these computers.
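The "balance" Reid describes maps onto a real knob in how these models generate text: a sampling "temperature" that controls how strongly the model favors its single most likely next token. The sketch below is illustrative only (the token scores are invented, not taken from any real model), but it shows why skewing "toward the factuality side" can't fully fix hallucination: if the model's most probable guess is itself wrong, lowering the temperature just makes it state the error more confidently.

```python
import math

def softmax_with_temperature(logits, temperature):
    """Turn raw model scores (logits) into next-token probabilities.

    Dividing by a smaller temperature sharpens the distribution,
    concentrating probability on the highest-scoring token.
    """
    scaled = [l / temperature for l in logits]
    peak = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Invented scores for the prompt "Sputnik launched in ...":
# suppose the model rates the wrong year slightly higher than the right one.
tokens = ["1957", "1958", "1960"]   # 1957 is the correct answer
logits = [2.0, 2.2, 0.5]            # hypothetical scores; "1958" leads

for temperature in (1.0, 0.1):
    probs = softmax_with_temperature(logits, temperature)
    ranked = sorted(zip(tokens, probs), key=lambda pair: -pair[1])
    print(f"T={temperature}: " + ", ".join(f"{t}: {p:.2f}" for t, p in ranked))
```

With these made-up numbers, the wrong "1958" gets roughly half the probability mass at temperature 1.0 and climbs to nearly 90 percent at 0.1. Dialing temperature down trades creativity for consistency, but it can only make the model more faithful to its own training-derived guesses, not to reality, which is one intuition behind the view that hallucinations can't simply be engineered away.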
And that's probably why most of the major players in this field — the ones with real resources and a financial incentive to make us all embrace AI — think you shouldn't worry about it. During Google's I/O keynote, the company added, in tiny gray font, the phrase "check responses for accuracy" below nearly every new AI tool it showed off: a helpful reminder that its tools can't be trusted, and a sign that Google doesn't consider that a problem. ChatGPT operates similarly. In tiny font just below the prompt window, it says, "ChatGPT can make mistakes. Check important info."
That’s not a disclaimer you want to see from tools that are supposed to change our whole lives in the very near future! And the people making these tools do not seem to care too much about fixing the problem beyond a small warning.
Sam Altman, the CEO of OpenAI who was briefly ousted for prioritizing profit over safety, went a step further and said anyone who had an issue with AI’s accuracy was naive. “If you just do the naive thing and say, ‘Never say anything that you’re not 100 percent sure about,’ you can get them all to do that. But it won’t have the magic that people like so much,” he told a crowd at Salesforce’s Dreamforce conference last year.
This idea that there’s a kind of unquantifiable magic sauce in AI that will allow us to forgive its tenuous relationship with reality is brought up a lot by the people eager to hand-wave away accuracy concerns. Google, OpenAI, Microsoft, and plenty of other AI developers and researchers have dismissed hallucination as a small annoyance that should be forgiven because they’re on the path to making digital beings that might make our own lives easier.
But apologies to Sam and everyone else financially incentivized to get me excited about AI. I don’t come to computers for the inaccurate magic of human consciousness. I come to them because they are very accurate when humans are not. I don’t need my computer to be my friend; I need it to get my gender right when I ask and help me not accidentally expose film when fixing a busted camera. Lawyers, I assume, would like it to get the case law right.
I understand where Sam Altman and other AI evangelists are coming from. There is a possibility in some far future to create a real digital consciousness from ones and zeroes. Right now, the development of artificial intelligence is moving at an astounding speed that puts many previous technological revolutions to shame. There is genuine magic at work in Silicon Valley right now.
But the AI thinks I have a beard. It can’t consistently figure out the simplest tasks, and yet, it’s being foisted upon us with the expectation that we celebrate the incredible mediocrity of the services these AIs provide. While I can certainly marvel at the technological innovations happening, I would like my computers not to sacrifice accuracy just so I have a digital avatar to talk to. That is not a fair exchange — it’s only an interesting one.