Anthropic refuses to bow to Pentagon despite Hegseth’s threats

Despite an ultimatum from Defense Secretary Pete Hegseth, Anthropic said that it can’t “in good conscience” comply with a Pentagon edict to remove guardrails on its AI, CEO Dario Amodei wrote in a blog post. The Department of Defense had threatened to cancel a $200 million contract and label Anthropic a “supply chain risk” if it didn’t agree to remove safeguards over mass surveillance and autonomous weapons.

“Our strong preference is to continue to serve the Department and our warfighters — with our two requested safeguards in place,” Amodei said. “We remain ready to continue our work to support the national security of the United States.”

In response, US Under Secretary of Defense Emil Michael accused Amodei in a post on X of wanting “nothing more than to try to personally control the US military and is OK putting our nation’s safety at risk.”

The standoff began when the Pentagon demanded that Anthropic make its Claude AI product available for “all lawful purposes” — including mass surveillance and the development of fully autonomous weapons that can kill without human supervision. Anthropic refused to offer its technology for those uses, even with a “safety stack” built into the model.

Yesterday, Axios reported that Hegseth gave Anthropic a deadline of 5:01 PM on Friday to agree to the Pentagon’s terms. At the same time, the DoD requested an assessment of its reliance on Claude, an initial step toward potentially labeling Anthropic a “supply chain risk” — a designation usually reserved for firms from adversaries like China and “never before applied to an American company,” Anthropic wrote.

Amodei declined to change his stance and stated that if the Pentagon chose to offboard Anthropic, “we will work to enable a smooth transition to another provider, avoiding any disruption to ongoing military planning, operations or other critical missions.” Grok is one of the other providers the DoD is reportedly considering, along with Google’s Gemini and OpenAI.

It may not be that simple for the military to disentangle itself from Claude, however. Up until now, Anthropic’s model has been the only one allowed for the military’s most sensitive tasks in intelligence, weapons development and battlefield operations. Claude was reportedly used in the Venezuelan raid in which the US military exfiltrated the country’s president, Nicolás Maduro, and his wife.

AI companies have been widely criticized for potential harm to users, but mass surveillance and weapons development would clearly take that to a new level. Anthropic’s reply to the Pentagon was seen as a test of its claim to be the most safety-forward AI company, particularly after it dropped its flagship safety pledge a few days ago. Now that Amodei has responded, the focus shifts to the Pentagon to see whether it follows through on its threats, which could seriously harm Anthropic.


