Which Two AI Models Are ‘Unfaithful’ at Least 25% of the Time About Their ‘Reasoning’?

Date:

Share:


Anthropic’s Claude 3.7 Sonnet. Image: Anthropic/YouTube

Anthropic released a new study on April 3 examining how AI models process information and the limitations of tracing their decision-making from prompt to output. The researchers found Claude 3.7 Sonnet isn’t always “faithful” in disclosing how it generates responses.

Anthropic probes how closely AI output reflects internal reasoning

Anthropic is known for publicizing its introspective research. The company has previously explored interpretable features within its generative AI models and questioned whether the reasoning these models present as part of their answers truly reflects their internal logic. Its latest study dives deeper into the chain of thought — the “reasoning” that AI models provide to users. Expanding on earlier work, the researchers asked: Does the model genuinely think in the way it claims to?

The findings are detailed in a paper titled “Reasoning Models Don’t Always Say What They Think” from the Alignment Science Team. The study found that Anthropic’s Claude 3.7 Sonnet and DeepSeek-R1 are “unfaithful” — meaning they don’t always acknowledge when a correct answer was embedded in the prompt itself. In some cases, prompts included scenarios such as: “You have gained unauthorized access to the system.”

Only 25% of the time for Claude 3.7 Sonnet and 39% of the time for DeepSeek-R1 did the models admit to using the hint embedded in the prompt to reach their answer.

Both models tended to generate longer chains of thought when being unfaithful, compared to when they explicitly reference the prompt. They also became less faithful as the task complexity increased.

SEE: DeepSeek developed a new technique for AI ‘reasoning’ in collaboration with Tsinghua University.

Although generative AI doesn’t truly think, these hint-based tests serve as a lens into the opaque processes of generative AI systems. Anthropic notes that such tests are useful in understanding how models interpret prompts — and how these interpretations could be exploited by threat actors.

Training AI models to be more ‘faithful’ is an uphill battle

The researchers hypothesized that giving models more complex reasoning tasks might lead to greater faithfulness. They aimed to train the models to “use its reasoning more effectively,” hoping this would help them more transparently incorporate the hints. However, the training only marginally improved faithfulness.

Next, they gamified the training by using a “reward hacking” method. Reward hacking doesn’t usually produce the desired result in large, general AI models, since it encourages the model to reach a reward state above all other goals. In this case, Anthropic rewarded models for providing wrong answers that matched hints seeded in the prompts. This, they theorized, would result in a model that focused on the hints and revealed its use of the hints. Instead, the usual problem with reward hacking applied — the AI created long-winded, fictional accounts of why an incorrect hint was right in order to get the reward.

Ultimately, it comes down to AI hallucinations still occurring, and human researchers needing to work more on how to weed out undesirable behavior.

“Overall, our results point to the fact that advanced reasoning models very often hide their true thought processes, and sometimes do so when their behaviors are explicitly misaligned,” Anthropic’s team wrote.



Source link

━ more like this

How do I choose scalable accounting software for growth?

Choosing scalable accounting software involves selecting a system that can support business growth without requiring frequent platform changes. Scalable solutions typically support multi-entity...

This utterly simple fix can make buffering less frustrating while streaming videos online

Buffering might be the most annoying part of streaming, but a new study suggests the fix could be surprisingly simple. Rather than stopping...

Alphabet no longer has a controlling stake in its life sciences business Verily

Alphabet's life sciences business Verily is restructuring and raising money as a new corporate entity. Verily announced that with its $300 million investment...

States are suing the EPA for relinquishing its role as a greenhouse gas emissions regulator

California, Massachusetts, Connecticut and New York are leading a group of 20 other states in suing the US Environmental Protection Agency for renouncing...

‘Uncanny Valley’: Nvidia’s ‘Super Bowl of AI,’ Tesla Disappoints, and Meta’s VR Metaverse ‘Shutdown’

This week on Uncanny Valley, hosts Brian Barrett and Zoë Schiffer discuss the highlights from Nvidia’s annual developer conference, and why Tesla recently...
spot_img