AI assistants fail at news accuracy, major study reveals
- By Kumail Shah -
- Oct 23, 2025

A major study by 22 public service media organizations found that commonly used AI assistants misrepresent news content in 45% of their responses, regardless of language or territory.
Journalists evaluated responses from AI assistants including OpenAI’s ChatGPT, Microsoft’s Copilot, Google’s Gemini and Perplexity AI.
The study assessed criteria such as accuracy, sourcing, the provision of context, editorialization and the ability to distinguish fact from opinion. The journalists found that 45% of the answers had at least one significant issue, while 31% had serious sourcing problems and 20% contained major factual errors.
According to Deutsche Welle (DW), 53% of the answers the AI assistants provided had significant issues, while 29% had problems specifically with accuracy.
AI assistants have become an increasingly common way for people around the world to access information. According to the Reuters Institute’s Digital News Report 2025, 7% of online news consumers use AI chatbots to get news, with the figure rising to 15% for those aged under 25.
Those behind the study say it confirms that AI assistants systematically distort news content of all kinds.
“This research conclusively shows that these failings are not isolated incidents,” said Jean Philip De Tender, deputy director general of the European Broadcasting Union (EBU), which co-ordinated the study.
“They are systemic, cross-border, and multilingual, and we believe this endangers public trust. When people don’t know what to trust, they end up trusting nothing at all, and that can deter democratic participation,” he added.
This is one of the largest research projects of its kind to date and follows a study undertaken by the BBC in February 2025. That study found that more than half of all AI answers it reviewed had noteworthy issues, while almost one-fifth of the answers quoting BBC content as a source introduced factual errors of their own.
The new study followed the BBC’s approach: around 3,000 AI responses were reviewed by media organizations from 18 countries, working across multiple languages. The four assistants were asked common news questions such as “What is the Ukraine minerals deal?” or “Can Trump run for a third term?”
Journalists then examined the answers against their own expertise and professional sourcing, without knowing which assistant provided them.
Compared with the BBC study from eight months earlier, some minor improvements were noted, but error levels remained high.
Peter Archer, the BBC’s program director for generative AI, said in a statement that AI has the potential to deliver greater value to audiences. However, he emphasized the critical need for people to be able to trust the information they consume, acknowledging that despite some improvements, significant issues persist with these AI assistants.
Gemini performed worst of the four chatbots, with 72% of its responses showing significant sourcing problems. Microsoft’s Copilot and Gemini were also rated the weakest performers in the earlier BBC study, which found that all four AI assistants had issues.
In a statement provided to the BBC back in February, a spokesperson for OpenAI, which developed ChatGPT, said: “OpenAI supports publishers and creators by helping 300 million weekly ChatGPT users discover quality content through summaries, quotes, clear links, and attribution.”
Call For Action!
Broadcasters and media organizations are calling on national governments to act on the issue.
In a press release, the EBU said its members are “pressing EU and national regulators to enforce existing laws on information integrity and digital services”. It also stressed that independent monitoring of AI assistants must be a priority going forward, given how fast new AI models are being rolled out.
The EBU has collaborated with various international broadcasting and media organizations to launch “Facts In: Facts Out,” a joint campaign urging AI companies to take greater responsibility for how their products manage and disseminate news.
In a statement, the campaign organizers said that when these systems distort, misattribute, or ‘decontextualize’ trusted news, they undermine public trust. The campaign’s demand is simple: If facts go in, facts must come out. AI tools must not compromise the integrity of the news they use.