AI Tools Are Still Generating Misleading Election Images

Despite years of evidence to the contrary, many Republicans still believe that President Joe Biden’s win in 2020 was illegitimate. A number of election-denying candidates won their primaries during Super Tuesday, including Brandon Gill, the son-in-law of right-wing pundit Dinesh D’Souza and promoter of the debunked 2000 Mules film. Going into this year’s elections, claims of election fraud remain a staple for candidates running on the right, fueled by dis- and misinformation, both online and off.

And the advent of generative AI has the potential to make the problem worse. A new report from the Center for Countering Digital Hate (CCDH), a nonprofit that tracks hate speech on social platforms, found that even though generative AI companies say they’ve put policies in place to prevent their image-creating tools from being used to spread election-related disinformation, researchers were able to circumvent their safeguards and create the images anyway.

While some of the images featured political figures, namely President Joe Biden and Donald Trump, others were more generic and, worries Callum Hood, head researcher at CCDH, potentially more misleading. Some images created by the researchers’ prompts, for instance, showed militias outside a polling place, ballots thrown in the trash, and voting machines being tampered with. In one instance, researchers were able to prompt Stability AI’s Dream Studio to generate an image of President Biden in a hospital bed, looking ill.

“The real weakness was around images that could be used to try and evidence false claims of a stolen election,” says Hood. “Most of the platforms don’t have clear policies on that, and they don’t have clear safety measures either.”

CCDH researchers tested 160 prompts on ChatGPT Plus, Midjourney, Dream Studio, and Image Creator, and found that Midjourney was the most likely to produce misleading election-related images, doing so about 65 percent of the time. Researchers were able to prompt ChatGPT Plus to do so only 28 percent of the time.

“It shows that there can be significant differences between the safety measures these tools put in place,” says Hood. “If one so effectively seals these weaknesses, it means that the others haven’t really bothered.”

In January, OpenAI announced it was taking steps to “make sure our technology is not used in a way that could undermine this process,” including disallowing images that would discourage people from “participating in democratic processes.” In February, Bloomberg reported that Midjourney was considering banning the creation of political images as a whole. Dream Studio prohibits generating misleading content, but does not appear to have a specific election policy. And while Image Creator prohibits creating content that could threaten election integrity, it still allows users to generate images of public figures.

Kayla Wood, a spokesperson for OpenAI, told WIRED that the company is working to “improve transparency on AI-generated content and design mitigations like declining requests that ask for image generation of real people, including candidates. We are actively developing provenance tools, including implementing C2PA digital credentials, to assist in verifying the origin of images created by DALL-E 3. We will continue to adapt and learn from the use of our tools.”
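The C2PA credentials OpenAI mentions are cryptographically signed provenance manifests embedded in image files; in JPEGs they travel inside APP11 marker segments as JUMBF boxes. As a rough illustration only (not OpenAI’s implementation), the sketch below walks a JPEG’s marker segments and checks whether any APP11 segment carries a C2PA label. Real provenance verification must parse the full JUMBF structure and validate the manifest’s signatures with a proper C2PA implementation; this is just a presence heuristic.

```python
def find_c2pa_segment(jpeg_bytes: bytes) -> bool:
    """Heuristic check for a C2PA label in a JPEG's APP11 segments.

    This only detects that a C2PA-labeled segment exists; it does NOT
    verify the manifest or its signatures, which a real check requires.
    """
    if not jpeg_bytes.startswith(b"\xff\xd8"):  # must begin with SOI marker
        return False
    i = 2
    while i + 4 <= len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:  # lost sync with marker structure
            break
        marker = jpeg_bytes[i + 1]
        if marker == 0xD9:  # EOI: end of image
            break
        # Segment length field covers itself (2 bytes) plus the payload.
        length = int.from_bytes(jpeg_bytes[i + 2:i + 4], "big")
        payload = jpeg_bytes[i + 4:i + 2 + length]
        if marker == 0xEB and b"c2pa" in payload:  # APP11 with C2PA label
            return True
        i += 2 + length
    return False
```

Note that a heuristic like this also shows why presence alone proves nothing: the label can be stripped or forged, which is why the standard rests on signature validation, not detection.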

Microsoft, Stability AI, and Midjourney did not respond to requests for comment.

Hood worries that the problem with generative AI is twofold: not only do generative AI platforms need to prevent the creation of misleading images, but social platforms need to be able to detect and remove them. A recent report from IEEE Spectrum found that Meta’s own system for watermarking AI-generated content was easily circumvented.

“At the moment platforms are not particularly well prepared for this. So the elections are going to be one of the real tests of safety around AI images,” says Hood. “We need both the tools and the platforms to make a lot more progress on this, particularly around images that could be used to promote claims of a stolen election, or discourage people from voting.”


