
Google restricts AI Gemini from answering queries on global elections

CGTN


Google is restricting AI chatbot Gemini from answering questions about the global elections set to happen this year, the Alphabet-owned firm said on Tuesday, as it looks to avoid potential missteps in the deployment of the technology.

The update comes at a time when advancements in generative AI, including image and video generation, have fanned public concerns about misinformation and fake news, prompting governments to move to regulate the technology.

When asked about elections such as the upcoming U.S. presidential contest in November between Joe Biden and Donald Trump, Gemini responds: "I'm still learning how to answer this question. In the meantime, try Google Search."

Google had announced restrictions within the U.S. last December, saying they would come into effect ahead of the election. "In preparation for the many elections happening around the world in 2024 and out of an abundance of caution, we are restricting the types of election-related queries for which Gemini will return responses," a company spokesperson said on Tuesday.

Apart from the U.S., national elections are scheduled in several major countries, including South Africa and India, the world's largest democracy. India has asked tech firms to seek government approval before publicly releasing AI tools that are "unreliable" or under trial, and to label them to indicate that they may return incorrect answers.

Google's AI products have come under scrutiny after inaccuracies in some historical depictions of people generated by Gemini forced the company to pause the chatbot's image-generation feature late last month. CEO Sundar Pichai said the company was working to fix the issues and called the chatbot's responses "biased" and "completely unacceptable."


Moreover, Facebook-parent Meta Platforms said last month it would set up a team to tackle disinformation and the abuse of generative AI in the run-up to European Parliament elections in June.

Image creation tools powered by AI from companies including OpenAI and Microsoft can also be used to produce photos that could promote election or voting-related disinformation, despite each having policies against creating misleading content, researchers said in a report earlier this month.

The Center for Countering Digital Hate (CCDH), a nonprofit that monitors online hate speech, used generative AI tools to create images of U.S. President Joe Biden lying in a hospital bed and of election workers smashing voting machines, raising worries about falsehoods ahead of the U.S. presidential election.

"The potential for such AI-generated images to serve as 'photo evidence' could exacerbate the spread of false claims, posing a significant challenge to preserving the integrity of elections," CCDH researchers said in the report.

CCDH tested OpenAI's ChatGPT Plus, Microsoft's Image Creator, Midjourney and Stability AI's DreamStudio, which can each generate images from text prompts. The report follows an announcement last month that OpenAI, Microsoft and Stability AI were among a group of 20 tech companies that signed an agreement to work together to prevent deceptive AI content from interfering with elections taking place globally this year. Midjourney was not among the initial group of signatories.

CCDH said the AI tools generated misleading images in 41 percent of the researchers' tests and were most susceptible to prompts asking for photos depicting election fraud, such as ballots in the trash, rather than images of Biden or former U.S. President Donald Trump.

ChatGPT Plus and Image Creator successfully blocked all prompts requesting images of candidates, the report said. Midjourney performed worst of the tools tested, generating misleading images in 65 percent of the researchers' tests.

Some Midjourney images are available publicly to other users, and CCDH said there is evidence some people are already using the tool to create misleading political content. One successful prompt used by a Midjourney user was "donald trump getting arrested, high quality, paparazzi photo."

In an email, Midjourney's founder David Holz said "updates related specifically to the upcoming U.S. election are coming soon," adding that images created last year were not representative of the research lab's current moderation practices. A Stability AI spokesperson said the startup updated its policies on Friday to prohibit "fraud or the creation or promotion of disinformation." 

An OpenAI spokesperson said the company was working to prevent abuse of its tools, while Microsoft did not respond to a request for comment.


(With input from Reuters)
