
New research shows AI assistants make widespread errors about the news

CGTN


Leading artificial intelligence (AI) assistants misrepresent news content in nearly half of their responses, according to new research published on Wednesday by the European Broadcasting Union (EBU) and the BBC.

The international study analyzed 3,000 responses to questions about the news from leading AI assistants, software applications that interpret natural language commands and complete tasks for users.

It assessed AI assistants including ChatGPT, Copilot, Gemini and Perplexity in 14 languages, judging their accuracy, sourcing and ability to distinguish opinion from fact.

Overall, the research showed that 45 percent of the AI responses studied contained at least one significant issue, with 81 percent having some form of problem.

Reuters has contacted the companies for comment on the findings.

Gemini, Google's AI assistant, has previously stated on its website that it welcomes feedback to keep improving the platform and make it more helpful for users.

OpenAI and Microsoft have previously stated that hallucinations—when an AI model produces incorrect or misleading information, often due to factors like insufficient data—are issues they are trying to address.

Perplexity states on its website that one of its "Deep Research" modes has 93.9 percent accuracy regarding factuality.

Sourcing errors

A third of AI assistants' responses contained serious sourcing errors, such as missing, misleading, or incorrect attribution, according to the study.

About 72 percent of Gemini's responses had major sourcing problems, compared with less than 25 percent for each of the other assistants, according to the report.

The report states that 20 percent of responses from all AI assistants studied contained accuracy issues, including outdated information.

Examples cited by the study included Gemini incorrectly describing changes to a law on disposable vapes and ChatGPT identifying Pope Francis as the current Pope several months after his death.

Twenty-two public-service media organizations from 18 countries, including France, Germany, Spain, Ukraine, Britain and the United States, participated in the study.

The EBU stated that as AI assistants increasingly replace traditional search engines for news, public trust could be at risk.

"When people don't know what to trust, they end up trusting nothing at all, and that can discourage democratic participation," EBU Media Director Jean Philip De Tender said in a statement.

Some 7 percent of all online news consumers and 15 percent of those under 25 use AI assistants to get their news, according to the Reuters Institute's Digital News Report 2025.

The new report urged that AI companies be held accountable and that they improve how their assistants respond to news-related queries.

Source(s): Reuters