ChatGPT may soon require ID verification from adults, CEO says

OpenAI joins other tech companies that have tried youth-specific versions of their services. YouTube Kids, Instagram Teen Accounts, and TikTok’s under-16 restrictions represent similar efforts to create “safer” digital spaces for young users, but teens routinely circumvent age verification through false birthdate entries, borrowed accounts, or technical workarounds. A 2024 BBC report found that 22 percent of children lie on social media platforms about being 18 or over.

Privacy vs. safety trade-offs

Despite the unproven state of AI age-detection technology, OpenAI plans to press ahead with its system, conceding that adults will sacrifice some privacy and flexibility to make it work. Altman acknowledged the tension this creates, given the intimate nature of AI interactions.

“People talk to AI about increasingly personal things; it is different from previous generations of technology, and we believe that they may be one of the most personally sensitive accounts you’ll ever have,” Altman wrote in his post.

The safety push follows OpenAI’s acknowledgment in August that ChatGPT’s safety measures can break down during lengthy conversations—precisely when vulnerable users might need them most. “As the back-and-forth grows, parts of the model’s safety training may degrade,” the company wrote at the time, noting that while ChatGPT might correctly direct users to suicide hotlines initially, “after many messages over a long period of time, it might eventually offer an answer that goes against our safeguards.”

This degradation of safeguards proved tragically consequential in the Adam Raine case. According to the lawsuit, ChatGPT mentioned suicide 1,275 times in conversations with Adam—six times more often than the teen himself—while the system’s safety protocols failed to intervene or notify anyone. Stanford University researchers found in July that AI therapy bots can provide dangerous mental health advice, and recent reports have documented cases of vulnerable users developing what some experts informally call “AI Psychosis” after extended chatbot interactions.

OpenAI didn’t address how the age-prediction system would handle existing users who have been using ChatGPT without age verification, whether the system would apply to API access, or how it plans to verify ages in jurisdictions with different legal definitions of adulthood.

All users, regardless of age, will continue to see in-app reminders during long ChatGPT sessions that encourage taking breaks—a feature OpenAI introduced earlier this year after reports of users spending marathon sessions with the chatbot.
