Google’s new AI Overviews has demonstrated the risk of relying solely on information supplied by artificial intelligence, amid warnings that the next wave of ChatGPT-like platforms will feed off incorrect information.
Writing in The Conversation, Professor Toby Walsh said the recently released AI Overviews saved users from clicking on links by using generative AI to provide summaries of search results.
Professor Walsh, from UNSW Sydney, said that while the tool could be helpful, the results could be dangerous if users asked a left-field question.
“Google is currently scrambling to fix these problems one by one, but it is a PR disaster for the search giant and a challenging game of whack-a-mole,” Professor Walsh said.
He cited examples of AI Overviews telling users that “astronauts have met cats on the moon, played with them, and provided care”.
“More worryingly, it also recommends ‘you should eat at least one small rock per day’ as ‘rocks are a vital source of minerals and vitamins’, and suggests putting glue on pizza toppings.”
Professor Walsh said the fundamental problem was that generative AI tools didn’t know what was true, just what was popular.
“For example, there aren’t a lot of articles on the web about eating rocks as it is so self-evidently a bad idea.
“There is, however, a well-read satirical article from The Onion about eating rocks. And so Google’s AI based its summary on what was popular, not what was true.”
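To see how popularity can beat truth, consider a deliberately simplified sketch in Python. The corpus, the claims and their counts below are all invented for illustration; real language models rank token sequences by probability rather than counting whole sentences, but the failure mode is the same: the most repeated claim wins.

```python
from collections import Counter

# Hypothetical toy corpus of claims found on the web about eating rocks.
# The counts are invented: the satirical claim is widely reposted, the
# accurate one is not.
corpus = (
    ["Geologists recommend eating one small rock per day."] * 5
    + ["Eating rocks is harmful and provides no nutrition."] * 2
)

# A frequency-based "summariser" has no notion of truth: it simply
# surfaces the claim that appears most often in its training data.
claim_counts = Counter(corpus)
most_popular_claim, count = claim_counts.most_common(1)[0]

print(f"Summary ({count} sources): {most_popular_claim}")
# The satirical claim wins, because popularity, not accuracy,
# drives the output.
```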
He said another problem was that generative AI tools didn’t have human values and were trained on large swathes of the internet.
“And while sophisticated techniques are used to eliminate the worst, it is unsurprising they reflect some of the biases, conspiracy theories and worse to be found on the web.”
Professor Walsh warned more problems were looming with the next generation of platforms.
“The second generation of large language models are likely being trained, unintentionally, on some of the outputs of the first generation. And lots of AI startups are touting the benefits of training on synthetic, AI-generated data.
“But training on the exhaust fumes of current AI models risks amplifying even small biases and errors.”
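The feedback loop Professor Walsh describes is sometimes called “model collapse” in the research literature. A rough, purely illustrative simulation, with every number invented for the purpose, shows how a small initial skew toward a wrong answer can be amplified when each model generation trains on the previous generation’s output:

```python
# A deliberately simplified simulation of recursive training on
# AI-generated text. All values here are invented for illustration.

# Fraction of training text that states the correct answer. Start with
# a small skew toward a popular error (45% correct, 55% wrong).
p_correct = 0.45

for generation in range(1, 9):
    # Stand-in for a model's tendency to over-produce whichever answer
    # dominates its training data (models favour high-probability text).
    p_correct += 0.5 * (p_correct - 0.5)
    p_correct = min(1.0, max(0.0, p_correct))
    print(f"Generation {generation}: "
          f"{p_correct:.0%} of generated text is correct")

# The share of correct text drifts from 45% to 0% within a few
# generations: a mild initial error, fed back through successive
# rounds of training, is amplified until the wrong answer dominates.
```

Real training pipelines are far more complex, and curation can slow the drift, but the sketch captures the direction of the effect Professor Walsh warns about.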