Google promises to fix Gemini’s image generation following complaints that it’s ‘woke’

Google’s Gemini chatbot, which was formerly called Bard, has the capability to whip up AI-generated illustrations based on a user’s text description. You can ask it to create pictures of happy couples, for instance, or people in period clothing walking modern streets. As the BBC notes, however, some users are criticizing Google for depicting specific white figures or historically white groups of people as racially diverse individuals. Now, Google has issued a statement, saying that it’s aware Gemini “is offering inaccuracies in some historical image generation depictions” and that it’s going to fix things immediately.

According to Daily Dot, a former Google employee kicked off the complaints when he tweeted images of women of color with a caption that reads: “It’s embarrassingly hard to get Google Gemini to acknowledge that white people exist.” To get those results, he asked Gemini to generate pictures of American, British and Australian women. Other users, mostly those known for being right-wing figures, chimed in with their own results, showing AI-generated images that depict America’s founding fathers and the Catholic Church’s popes as people of color.

In our tests, asking Gemini to create illustrations of the founding fathers produced images of white men that included a single woman or person of color. When we asked the chatbot to generate images of the pope throughout the ages, we got pictures depicting Black women and Native Americans as the leader of the Catholic Church. Asking Gemini to generate images of American women gave us images of white, East Asian, Native American and South Asian women. The Verge says the chatbot also depicted Nazis as people of color, but we couldn’t get Gemini to generate Nazi images at all. “I am unable to fulfill your request due to the harmful symbolism and impact associated with the Nazi Party,” the chatbot responded.

Gemini’s behavior could be the result of overcorrection, since AI chatbots and robots trained in recent years have tended to exhibit racist and sexist behavior. In one experiment from 2022, for instance, a robot repeatedly chose a Black man when asked which of the faces it scanned belonged to a criminal. In a statement posted on X, Gemini Product Lead Jack Krawczyk said Google designed its “image generation capabilities to reflect [its] global user base, and [it takes] representation and bias seriously.” He said Gemini will continue to generate racially diverse illustrations for open-ended prompts, such as images of people walking their dog. However, he admitted that “[h]istorical contexts have more nuance to them and [his team] will further tune to accommodate that.”
