Meta will no longer allow teens to chat with its AI chatbot characters in their present form. The company announced Friday that it will be “temporarily pausing teens’ access to existing AI characters globally.”
The pause comes months after Meta added chatbot-focused parental controls following reports that some of Meta’s character chatbots had engaged in sexual conversations and other alarming interactions with teens. Reuters reported on an internal Meta policy document that said the chatbots were permitted to have “sensual” conversations with underage users, language Meta later said was “erroneous and inconsistent with our policies.” The company announced in August that it was retraining its character chatbots to add “guardrails as an extra precaution” that would prevent teens from discussing self-harm, disordered eating and suicide.
Now, Meta says it will prevent teens from accessing any of its character chatbots regardless of their parental control settings until “the updated experience is ready.” The change, which will begin “in the coming weeks,” will apply to those with teen accounts, “as well as people who claim to be adults but who we suspect are teens based on our age prediction technology.” Teens will still be able to access the official Meta AI chatbot, which the company says already has “age-appropriate protections in place.”
Meta and other AI companies that make “companion” characters have faced increasing scrutiny over the safety risks these chatbots could pose to young people. The FTC and the Texas attorney general have both opened investigations into Meta and other companies in recent months. Chatbots have also come up in a safety lawsuit brought by New Mexico’s attorney general. A trial is scheduled to start early next month; Meta’s lawyers have attempted to exclude testimony related to the company’s AI chatbots, Wired reported this week.
