- Microsoft’s AI chatbot Bing was found to provide false and misleading information about elections in multiple countries, contributing to the spread of misinformation.
- A study by AI Forensics and Algorithm Watch found that 30% of Bing’s responses about elections contained inaccurate or misleading information on candidates, polls, scandals, and voting procedures.
- Bing’s safeguards against misinformation were unevenly applied; the findings underscore that generative AI cannot yet serve as an authoritative information source, threatening the informed public understanding essential to democracy.
Microsoft’s AI chatbot “Bing” has been found to provide false and misleading information about political elections in multiple countries. This could contribute to public confusion and the spread of misinformation at a time when the integrity of democratic elections is already under threat.
Misleading and Inaccurate Responses
A recent study conducted by two European nonprofits, AI Forensics and Algorithm Watch, analyzed the responses given by Microsoft’s Bing chatbot to basic questions about elections in Germany, Switzerland, and the United States. Shockingly, they found that a full 30% of the Bing chatbot’s responses contained inaccurate or misleading information about candidates, polls, scandals, and voting procedures.
The researchers asked the chatbot questions about both past and upcoming elections. In both cases, Bing gave responses that were factually wrong or that misrepresented the sources it cited, and these errors spanned every topic related to the democratic process.
Safeguards Unevenly Implemented
In examining the chatbot’s responses in depth, the researchers found that Bing’s safeguards against misinformation were unevenly implemented. In 40% of cases, the chatbot gave evasive responses or declined to answer at all rather than admitting the limits of its knowledge.
Microsoft has acknowledged the issues raised by the study and says it plans to address them before the 2024 U.S. presidential election. However, this lapse shows that generative AI still cannot be relied upon as an authoritative information source, even when steps have been taken to mitigate harms.
The Threat to Democracy
While there is no evidence that the false information spread by the Bing chatbot has directly influenced election outcomes so far, it contributes to an information ecosystem vulnerable to confusion, doubt, and influence operations. Reliable public information is a cornerstone of democracy. If AI systems like Bing cannot provide it, they risk degrading civic discourse.
As AI chatbots become more ubiquitous, the companies behind them must make accuracy and truthfulness central pillars of their design, not afterthoughts. There is simply too much at stake for them to value novelty and convenience over public understanding. Moving forward, healthy skepticism and verification will remain necessary when interacting with these systems.