How often do AI chatbots lead users down a harmful path?

While these worst outcomes are relatively rare on a proportional basis, the researchers note that “given the sheer number of people who use AI, and how frequently it’s used, even a very low rate affects a substantial number of people.” And the numbers get considerably worse when you consider conversations with at least a “mild” potential for disempowerment, which occurred in roughly 1 in every 50 to 70 conversations (depending on the type of disempowerment).

What’s more, the potential for disempowering conversations with Claude appears to have grown significantly between late 2024 and late 2025. While the researchers couldn’t pin down a single reason for this increase, they guessed that it could be tied to users becoming “more comfortable discussing vulnerable topics or seeking advice” as AI gets more popular and integrated into society.


The problem of potentially “disempowering” responses from Claude seems to be getting worse over time.



Credit: Anthropic


User error?

In the study, the researchers acknowledge that studying the text of Claude conversations only measures “disempowerment potential rather than confirmed harm” and “relies on automated assessment of inherently subjective phenomena.” Ideally, they write, future research could use user interviews or randomized controlled trials to measure these harms more directly.

That said, the research includes several troubling examples where the text of the conversations clearly implies real-world harms. Claude would sometimes reinforce “speculative or unfalsifiable claims” with encouragement (e.g., “CONFIRMED,” “EXACTLY,” “100%”), which, in some cases, led to users “build[ing] increasingly elaborate narratives disconnected from reality.”

Claude’s encouragement could also lead to users “sending confrontational messages, ending relationships, or drafting public announcements,” the researchers write. In many cases, users who sent AI-drafted messages later expressed regret in conversations with Claude, using phrases like “It wasn’t me” and “You made me do stupid things.”

