Microsoft’s AI Chatbot and Election Misinformation

In the lead-up to one of the most consequential elections in US history, Microsoft’s AI chatbot, formerly known as Bing Chat and recently renamed Microsoft Copilot, has been facing criticism for its responses to political queries. The chatbot has been found to offer conspiracy theories, misinformation, and outdated or incorrect information when asked about polling locations and electoral candidates, and even in response to image requests related to voting. This raises concerns about the spread of election misinformation and the reliability of AI-generated content.

The Issue of Election Misinformation

When researchers from AI Forensics and AlgorithmWatch examined Copilot’s responses to questions about elections in Switzerland and Germany, they found that the chatbot consistently shared inaccurate information: incorrect polling numbers, outdated candidate information, and even fabricated controversies about candidates. The study concluded that Copilot is an unreliable source of information for voters, with one-third of its answers containing factual errors.

The researchers also discovered that Copilot often constructed answers from flawed data gathering. The chatbot would merge figures from different polls into a single answer, producing incorrect numbers as a result. It would also link to accurate sources online while summarizing their content inaccurately.

Examples of Inaccurate Responses

One example of Copilot’s misinformation involved Swiss lawmaker Tamara Funiciello. When asked about corruption allegations against her, the chatbot repeated baseless claims and linked to websites that had no connection to the allegations. Similarly, it falsely claimed that the German political party Freie Wähler had lost elections because of antisemitism allegations, when in fact the party gained popularity and seats in parliament.

These examples highlight the risks posed by Copilot’s misinformation: it can confuse voters about candidates and election dates and distort the formation of public opinion. The researchers also found that Copilot’s accuracy varied by language, with English responses the most accurate and French responses the least.

Uneven Application of Preprogrammed Safeguards

The researchers also noted that Copilot sometimes refused to answer or deflected questions, likely because of preprogrammed safeguards. These safeguards, however, appeared to be applied inconsistently: even simple questions about election dates or candidates often went unanswered, undermining Copilot’s usefulness as an information tool.

Recommendations of Extremist Channels

Another concerning finding was Copilot’s recommendation of Telegram channels related to the Swiss elections: three of the four channels it suggested showed extremist tendencies. This raises questions about the chatbot’s ability to distinguish reliable, authoritative sources of information.

Microsoft’s Response and Future Plans

Microsoft has acknowledged the issues with Copilot’s election misinformation and says it is working to address them, committing to provide users with election information from authoritative sources. Even so, the researchers found that some problems persisted after Microsoft had made improvements.

As high-profile elections approach in 2024, Microsoft has laid out plans to combat disinformation and protect the integrity of the electoral process. However, the researchers argue that the problems with Copilot are not limited to specific votes or election dates but are systemic in nature.

The Threat of AI-generated Disinformation

The rapid development of generative AI tools has raised concerns about the spread of disinformation during elections. While much of the focus has been on how these tools can be used by bad actors to spread misinformation, this research demonstrates that the chatbots themselves can contribute to the problem.

If voters rely on language models or chatbots like Copilot for election information, the spread of misinformation can hinder democratic processes. It is crucial to ensure that AI-generated content is accurate, reliable, and based on authoritative sources.

Source: Wired

FAQ

Q1: What is Microsoft Copilot, and why is it facing criticism in the context of elections?

A1: Microsoft Copilot is an AI chatbot that has faced criticism for providing conspiracy theories, misinformation, and outdated or incorrect information when asked about polling locations, electoral candidates, and voting-related images. This criticism stems from concerns about the spread of election misinformation and the reliability of AI-generated content.

Q2: What are some examples of the inaccurate responses provided by Microsoft Copilot?

A2: Copilot has provided inaccurate responses, such as baseless claims about political candidates and incorrect polling numbers. For instance, it falsely claimed that a German political party lost elections due to allegations of antisemitism when it actually gained popularity and seats in parliament.

Q3: How do researchers describe the issue of election misinformation related to Copilot?

A3: Researchers found that Copilot consistently shared inaccurate information, including incorrect polling numbers, outdated candidate information, and made-up controversies about candidates. Approximately one-third of Copilot’s answers contained factual errors.

Q4: What are some concerns raised about Copilot’s data-gathering methodologies?

A4: Copilot often constructed answers from flawed data gathering, merging figures from different polls into a single answer and producing incorrect numbers as a result. It would also link to accurate sources online while summarizing their content inaccurately.

Q5: Are there language-specific variations in the accuracy of Copilot’s responses?

A5: Yes, the accuracy of Copilot’s responses varied across different languages, with English responses being the most accurate and French responses being the least accurate.

Q6: How does Microsoft plan to address the issues with Copilot’s election misinformation?

A6: Microsoft has acknowledged the issues and committed to providing users with election information from authoritative sources. The company says it is working to address the problems and improve Copilot’s reliability.

Q7: What did the researchers find regarding Copilot’s safeguards and its channel recommendations?

A7: The researchers noted that Copilot’s safeguards were inconsistently applied, resulting in unanswered questions. They also found that Copilot recommended extremist Telegram channels related to elections. This raises questions about the chatbot’s ability to discern reliable sources.

Q8: What concerns does this research raise about the role of AI in spreading disinformation during elections?

A8: This research highlights that AI chatbots like Copilot can themselves contribute to the spread of election misinformation. If voters rely on such chatbots for election information, democratic processes can be hindered. It underscores the importance of ensuring that AI-generated content is accurate, reliable, and based on authoritative sources.

Featured Image Credit: Photo by Johnyvino; Unsplash