Following incidents where a new feature suggested eating rocks or adding glue to pizza sauce, the company will limit which searches trigger summaries
Google announced on Thursday that it would refine its AI-generated summaries of search results after the feature produced bizarre and inaccurate answers, such as suggesting eating rocks or adding glue to pizza sauce. The company will now limit the types of searches that trigger an AI-written summary.
Liz Reid, Google’s head of search, said the company had added several restrictions on the kinds of searches that produce AI Overview results and had “limited the inclusion of satire and humor content” in the summaries. Google is also taking action against a small number of AI Overviews that violate its content policies, which it said occurred on fewer than one in every 7 million unique search queries where the feature appeared.
Google’s AI Overviews feature, which launched in the US this month, quickly produced viral examples of the tool misinterpreting information, at times drawing on satirical sources such as the Onion or joke Reddit posts to generate answers. Google’s AI mishaps soon became a meme, with fabricated screenshots of absurd and dark answers circulating widely on social media alongside genuine examples of the tool’s failures.
Google promoted AI Overviews as a key element of its broader push to integrate generative artificial intelligence into its core services, but the rollout instead left the company once again publicly embarrassed by a new AI product. Earlier this year, Google faced backlash and ridicule when its AI image generation tool placed people of color into historically inaccurate scenes, such as depicting Black individuals as World War II German soldiers.
In its blogpost, Google gave a brief account of the problems with AI Overviews and defended the feature. Reid said many of the inaccurate summaries stemmed from gaps in information for rare or unusual searches, and cited deliberate attempts to manipulate the feature into generating incorrect answers.
“There’s nothing quite like having millions of people using the feature with many novel searches,” Reid said in the post. “We’ve also seen nonsensical new searches, seemingly aimed at producing erroneous results.”
While many viral posts stemmed from odd searches like “how many rocks should I eat” – which was based on an Onion article titled “Geologists Recommend Eating at Least One Small Rock Per Day” – others seemed to originate from more reasonable queries. One AI expert shared an image of an AI Overview suggesting that Barack Obama had been the first Muslim US president, a common right-wing conspiracy theory.
“By examining examples from the past few weeks, we identified patterns where our performance fell short. As a result, we implemented over a dozen technical enhancements to our systems,” Reid explained.
While Google’s blogpost portrays the issues with AI Overviews as largely a matter of edge cases, several artificial intelligence experts have suggested that the problems reflect deeper challenges in AI’s ability to judge factual accuracy and in the complexities of automating access to information.
Google said in its blogpost that “user feedback shows” greater satisfaction with search results thanks to AI Overviews. But the wider impact of Google’s AI tools and changes to its search functions remains unclear. Website owners fear that AI summaries will harm online media by diverting traffic and advertising revenue from their sites, and some researchers worry that Google will further consolidate control over what information the public encounters online.