Google promises to fix Gemini’s image generation following complaints that it’s ‘woke’

Google’s Gemini chatbot, which was formerly called Bard, can whip up AI-generated illustrations based on a user’s text description. You can ask it to create pictures of happy couples, for instance, or people in period clothing walking modern streets. As the BBC notes, however, some users are criticizing Google because the chatbot depicts specific white figures and historically white groups of people as racially diverse individuals. Google has now issued a statement, saying that it’s aware Gemini “is offering inaccuracies in some historical image generation depictions” and that it’s working on a fix immediately.

According to the Daily Dot, a former Google employee kicked off the complaints when he tweeted images of women of color with a caption that read: “It’s embarrassingly hard to get Google Gemini to acknowledge that white people exist.” To get those results, he asked Gemini to generate pictures of American, British and Australian women. Other users, many of them known right-wing figures, chimed in with their own results, showing AI-generated images that depicted America’s founding fathers and the Catholic Church’s popes as people of color.

In our tests, asking Gemini to create illustrations of the founding fathers resulted in images of mostly white men, with a single person of color or woman among them. When we asked the chatbot to generate images of the pope throughout the ages, we got results depicting Black women and Native Americans as the leader of the Catholic Church. Asking Gemini to generate images of American women gave us one white, one East Asian, one Native American and one South Asian woman. The Verge says the chatbot also depicted Nazis as people of color, but we couldn’t get Gemini to generate Nazi images at all. “I am unable to fulfill your request due to the harmful symbolism and impact associated with the Nazi Party,” the chatbot responded.

Gemini’s behavior could be the result of overcorrection, since AI-trained chatbots and robots have tended to exhibit racist and sexist behavior in recent years. In one experiment from 2022, for instance, a robot repeatedly chose a Black man when asked which of the faces it scanned belonged to a criminal. In a statement posted on X, Gemini Product Lead Jack Krawczyk said Google designed its “image generation capabilities to reflect [its] global user base, and [it takes] representation and bias seriously.” He said Gemini will continue to generate racially diverse illustrations for open-ended prompts, such as images of people walking their dog. However, he admitted that “[h]istorical contexts have more nuance to them and [his team] will further tune to accommodate that.”
