Elon Musk’s Criticism of ‘Woke AI’ Suggests ChatGPT Could Be a Trump Administration Target



Mittelsteadt adds that Trump could punish companies in a variety of ways. He cites, for example, the way the Trump administration canceled a major federal contract with Amazon Web Services, a decision likely influenced by the former president’s view of the Washington Post and its owner, Jeff Bezos.

It would not be hard for policymakers to point to evidence of political bias in AI models, even if it cuts both ways.

A 2023 study by researchers at the University of Washington, Carnegie Mellon University, and Xi’an Jiaotong University found a range of political leanings in different large language models. It also showed how this bias may affect the performance of hate speech or misinformation detection systems.

Another study, conducted by researchers at the Hong Kong University of Science and Technology, found bias in several open source AI models on polarizing issues such as immigration, reproductive rights, and climate change. Yejin Bang, a PhD candidate involved with the work, says that most models tend to lean liberal and US-centric, but that the same models can express a variety of liberal or conservative biases depending on the topic.

AI models capture political biases because they are trained on vast swaths of internet data that inevitably include all sorts of perspectives. Most users may not be aware of any bias in the tools they use because models incorporate guardrails that restrict them from generating certain harmful or biased content. These biases can leak out subtly, though, and the additional training that models receive to restrict their output can introduce further partisanship. “Developers could ensure that models are exposed to multiple perspectives on divisive topics, allowing them to respond with a balanced viewpoint,” Bang says.

The issue may become worse as AI systems become more pervasive, says Ashique KhudaBukhsh, a computer scientist at the Rochester Institute of Technology who developed a tool called the Toxicity Rabbit Hole Framework, which teases out the different societal biases of large language models. “We fear that a vicious cycle is about to start as new generations of LLMs will increasingly be trained on data contaminated by AI-generated content,” he says.

“I’m convinced that bias within LLMs is already an issue and will most likely be an even bigger one in the future,” says Luca Rettenberger, a postdoctoral researcher at the Karlsruhe Institute of Technology who conducted an analysis of LLMs for biases related to German politics.

Rettenberger suggests that political groups may also seek to influence LLMs in order to promote their own views above those of others. “If someone is very ambitious and has malicious intentions it could be possible to manipulate LLMs into certain directions,” he says. “I see the manipulation of training data as a real danger.”

There have already been some efforts to shift the balance of bias in AI models. Last March, one programmer developed a more right-leaning chatbot in an effort to highlight the subtle biases he saw in tools like ChatGPT. Musk has himself promised to make Grok, the AI chatbot built by xAI, “maximally truth-seeking” and less biased than other AI tools, although in practice it also hedges when it comes to tricky political questions. (As a staunch Trump supporter and immigration hawk, Musk may hold a view of “less biased” that translates into more right-leaning results.)

Next week’s election in the United States is hardly likely to heal the discord between Democrats and Republicans, but if Trump wins, talk of anti-woke AI could get a lot louder.

Musk offered an apocalyptic take on the issue at this week’s event, referring to an incident when Google’s Gemini said that nuclear war would be preferable to misgendering Caitlyn Jenner. “If you have an AI that’s programmed for things like that, it could conclude that the best way to ensure nobody is misgendered is to annihilate all humans, thus making the probability of a future misgendering zero,” he said.


