What’s happened? A major study led by the European Broadcasting Union (EBU) in coordination with the BBC has revealed serious flaws in how popular AI assistants handle news-related queries, with Google’s Gemini standing out as the worst performer overall.
- The research analyzed 3,000 answers across 14 languages from major assistants, including ChatGPT, Microsoft Copilot, Gemini, and Perplexity.
- Overall, 45% of AI responses contained at least one major error, including cases where the AI presented opinions as facts (81%) or added its own opinions (73%).
- Gemini performed the worst overall: 76% of its responses contained major sourcing or factual errors, roughly double the rate of the next-worst assistant, Copilot (37%), followed by ChatGPT (36%) and Perplexity (30%).
- Common mistakes included mixing up sources, using outdated information, or blurring the line between opinion and verified fact.
(Screenshot: EBU)
This is important because: If you’re turning to an AI assistant for news, these findings matter, especially when one model fares significantly worse than the rest.
- With AI tools increasingly replacing search engines or news summaries, faulty responses can mislead users.
- Sourcing errors occur when an AI states a fact without proper or accurate attribution, making it harder for users to verify and trust the response.
- With public trust in media already shaky, AI-generated inaccuracies can make people more cynical about what’s real and what’s not.
- The fact that Gemini underperformed by a large margin raises concerns about how different companies are handling verification and model transparency.
Why should I care? You might already be using an AI assistant to catch up on the news, but if that assistant happens to be Gemini, this study suggests you are at greater risk of misinformation.
- If you ask Gemini for current-affairs information, the study found sourcing or factual errors in roughly three out of four answers.
- While other assistants performed better, they still made plenty of mistakes, showing that no AI model is entirely reliable when it comes to factual news.
- Younger audiences, especially under 25, are among the fastest to adopt AI for news updates, which also makes them the most exposed to misinformation.
The bottom line is that AI assistants can help you stay informed, but they should not be your only source of truth.
