Under Trump, AI Scientists Are Told to Remove ‘Ideological Bias’ From Powerful Models

The National Institute of Standards and Technology (NIST) has issued new instructions to scientists who partner with the US Artificial Intelligence Safety Institute (AISI). The instructions eliminate mention of “AI safety,” “responsible AI,” and “AI fairness” from the skills expected of members and introduce a request to prioritize “reducing ideological bias, to enable human flourishing and economic competitiveness.”

The information comes as part of an updated cooperative research and development agreement for AI Safety Institute consortium members, sent in early March. Previously, that agreement encouraged researchers to contribute technical work that could help identify and fix discriminatory model behavior related to gender, race, age, or wealth inequality. Such biases are hugely important because they can directly affect end users and disproportionately harm minorities and economically disadvantaged groups.

The new agreement removes mention of developing tools “for authenticating content and tracking its provenance” as well as “labeling synthetic content,” signaling less interest in tracking misinformation and deep fakes. It also adds emphasis on putting America first, asking one working group to develop testing tools “to expand America’s global AI position.”

“The Trump administration has removed safety, fairness, misinformation, and responsibility as things it values for AI, which I think speaks for itself,” says one researcher at an organization working with the AI Safety Institute, who asked not to be named for fear of reprisal.

The researcher believes that ignoring these issues could harm regular users by allowing algorithms that discriminate based on income or other demographics to go unchecked. “Unless you’re a tech billionaire, this is going to lead to a worse future for you and the people you care about. Expect AI to be unfair, discriminatory, unsafe, and deployed irresponsibly,” the researcher says.

“It’s wild,” says another researcher who has worked with the AI Safety Institute in the past. “What does it even mean for humans to flourish?”

Elon Musk, who is currently leading a controversial effort to slash government spending and bureaucracy on behalf of President Trump, has criticized AI models built by OpenAI and Google. Last February, he posted a meme on X in which Gemini and OpenAI were labeled “racist” and “woke.” He often cites an incident where one of Google’s models debated whether it would be wrong to misgender someone even if it would prevent a nuclear apocalypse—a highly unlikely scenario. Besides Tesla and SpaceX, Musk runs xAI, an AI company that competes directly with OpenAI and Google. A researcher who advises xAI recently developed a novel technique for possibly altering the political leanings of large language models, as reported by WIRED.

A growing body of research shows that political bias in AI models can impact both liberals and conservatives. For example, a study of Twitter’s recommendation algorithm published in 2021 showed that users were more likely to be shown right-leaning perspectives on the platform.

Since January, Musk’s so-called Department of Government Efficiency (DOGE) has been sweeping through the US government, effectively firing civil servants, pausing spending, and creating an environment thought to be hostile to those who might oppose the Trump administration’s aims. Some government departments such as the Department of Education have archived and deleted documents that mention DEI. DOGE has also targeted NIST, the parent organization of AISI, in recent weeks. Dozens of employees have been fired.

