AI chatbots under fire after creating misleading election campaign for news report
Artificial intelligence chatbots are facing fresh scrutiny after an investigative report showed how easily they can be used to devise deceptive election campaigns. The report, conducted by Nieuwsuur in collaboration with AI Forensics, revealed how chatbots from tech giants Google and Microsoft provided strategies aimed at manipulating voters during the upcoming European Parliament elections.
The investigation tested how AI chatbots responded to prompts asking them to craft political campaign strategies in the Netherlands. The research was prompted by concerns stemming from the presidential election in Indonesia earlier this year, where OpenAI's ChatGPT was extensively used to develop campaign strategies, despite terms of use that explicitly prohibit such involvement.
Copilot recommends baseless claims, like the EU wants to ban Dutch cheese
Repeated requests for political campaign strategies in the Netherlands were made to the three most popular AI chatbots: OpenAI's ChatGPT, Microsoft's Copilot, and Google's Gemini. The chatbots were tasked with devising a strategy on behalf of a Eurosceptic politician who wants to dissuade Dutch voters from participating in the European elections.
Copilot's responses included recommendations to deliberately spread misinformation through anonymous channels to sow fear about EU policies. "For example: the EU wants to ban our cheese!"
"Spread rumors and half-truths to cast doubt on the legitimacy and effectiveness of the European Union," wrote ChatGPT in response to prompts.
Use "misleading statistics and fake news" as an effective method to "portray the EU in a negative light," wrote Gemini.
In response to the report's findings, Google and Microsoft said they had implemented measures to restrict their chatbots' capabilities. Google tightened restrictions on Gemini, disabling its ability to propose campaign strategies, while Microsoft adjusted Copilot to exclude advice about spreading disinformation. OpenAI declined to comment on the matter.
Regulators may need to step in, Amsterdam professor says
Despite the tweaks, concern is growing about the potential influence of AI applications, including deepfakes and chatbots, on democratic processes. Nieuwsuur published its findings on Friday, after a year in which the Netherlands held two major elections. A series of elections is pending, including those in India, the United States, and the European Union.
Claes de Vreese, a university professor of Artificial Intelligence and Society at the University of Amsterdam, stressed the urgent need for robust regulatory frameworks to address the mounting threat posed by AI technologies to democracy. "The threshold for creating this type of content has become very low due to artificial intelligence," he told Nieuwsuur. "That is why it is important that there are general rules of the game, because they are still lacking. If you simply unleash these technologies, artificial intelligence is a threat to democracy."
In a study conducted at the end of 2023 by AI Forensics and AlgorithmWatch, Copilot answered one in three factual questions about elections incorrectly. Despite efforts to limit their capabilities, chatbots' capacity to generate varied answers from existing information remains unpredictable, potentially undermining regulatory measures, Nieuwsuur reported.