Anthropic Revokes OpenAI’s Access to Claude

Anthropic revoked OpenAI’s API access to its models on Tuesday, multiple sources familiar with the matter tell WIRED. OpenAI was informed that its access had been cut off for violating Anthropic’s terms of service.

“Claude Code has become the go-to choice for coders everywhere and so it was no surprise to learn OpenAI’s own technical staff were also using our coding tools ahead of the launch of GPT-5,” Anthropic spokesperson Christopher Nulty said in a statement to WIRED. “Unfortunately, this is a direct violation of our terms of service.”

According to Anthropic’s commercial terms of service, customers are barred from using the service to “build a competing product or service, including to train competing AI models” or “reverse engineer or duplicate” the services. This change in OpenAI’s access to Claude comes as the ChatGPT-maker is reportedly preparing to release a new AI model, GPT-5, which is rumored to be better at coding.

OpenAI was connecting Claude to its own internal tools through special developer access (APIs) rather than the regular chat interface, according to sources. This allowed the company to run tests evaluating Claude’s capabilities in areas like coding and creative writing against its own AI models, and to check how Claude responded to safety-related prompts involving categories like CSAM, self-harm, and defamation, the sources say. The results help OpenAI compare its own models’ behavior under similar conditions and make adjustments as needed.

“It’s industry standard to evaluate other AI systems to benchmark progress and improve safety. While we respect Anthropic’s decision to cut off our API access, it’s disappointing considering our API remains available to them,” OpenAI’s chief communications officer Hannah Wong said in a statement to WIRED.

Nulty says that Anthropic will “continue to ensure OpenAI has API access for the purposes of benchmarking and safety evaluations as is standard practice across the industry.” The company did not respond to WIRED’s request for clarification on whether and how OpenAI’s current Claude API restriction will affect this work.

Yanking API access from competitors has been a tactic among top tech companies for years. Facebook did the same to Twitter-owned Vine (a move that led to allegations of anticompetitive behavior), and last month Salesforce restricted competitors from accessing certain data through the Slack API. This isn’t even a first for Anthropic: last month, the company restricted the AI coding startup Windsurf’s direct access to its models amid rumors that OpenAI was set to acquire it. (That deal fell through.)

Anthropic’s chief science officer Jared Kaplan spoke to TechCrunch at the time about revoking Windsurf’s access to Claude, saying “I think it would be odd for us to be selling Claude to OpenAI.”

A day before cutting off OpenAI’s access to the Claude API, Anthropic announced new rate limits on Claude Code, its AI-powered coding tool, citing explosive usage and, in some cases, violations of its terms of service.

