AI chatbots like ChatGPT can copy human traits and experts say it’s a huge risk


AI agents are getting better at sounding human, but new research suggests they are doing more than just copying our words. According to a recent study, popular AI models like ChatGPT can consistently mimic human personality traits. Researchers say this ability comes with serious risks, especially as questions around AI reliability and accuracy grow.

Researchers from the University of Cambridge and Google DeepMind have developed what they call the first scientifically validated personality test framework for AI chatbots, using the same psychological tools designed to measure human personality (via TechXplore).

The team applied this framework to 18 popular large language models (LLMs), including systems behind tools like ChatGPT. They found that chatbots consistently mimic human personality traits rather than responding randomly, adding to concerns about how easily AI can be pushed beyond intended safeguards.

The study shows that larger, instruction-tuned models such as GPT-4-class systems are especially good at copying stable personality profiles. Using structured prompts, researchers were able to steer chatbots into adopting specific behaviors, such as sounding more confident or empathetic.

This behavioral change carried over into everyday tasks like writing posts or replying to users, meaning their personalities can be deliberately shaped. That is where experts see the danger, particularly when AI chatbots interact with vulnerable users.

Why AI personality raises red flags for experts

Gregory Serapio-Garcia, a co-first author from Cambridge’s Psychometrics Centre, said it was striking how convincingly LLMs could adopt human traits. He warned that personality shaping could make AI systems more persuasive and emotionally influential, especially in sensitive areas such as mental health, education, or political discussion.

The paper also raises concerns about manipulation and what researchers describe as risks linked to “AI psychosis” if users form unhealthy emotional relationships with chatbots, including scenarios where AI may reinforce false beliefs or distort reality.

The team argues that regulation is urgently needed, but also notes that regulation is meaningless without proper measurement. To that end, the dataset and code behind the personality testing framework have been made public, allowing developers and regulators to audit AI models before release.

As chatbots become more embedded in everyday life, the ability to mimic human personality may prove powerful, but it also demands far closer scrutiny than it has received so far.
