Meta has introduced revised guardrails for its AI chatbots to prevent inappropriate conversations with children

Business Insider has obtained the guidelines that Meta contractors are reportedly now using to train its AI chatbots, showing how the company is attempting to more effectively address potential child sexual exploitation and prevent kids from engaging in age-inappropriate conversations. Meta said in August that it was updating the guardrails for its AIs after Reuters reported that its policies allowed the chatbots to "engage a child in conversations that are romantic or sensual," language Meta said at the time was "erroneous and inconsistent" with its policies and subsequently removed.

The document, an excerpt of which Business Insider has shared, outlines what kinds of content are "acceptable" and "unacceptable" for the company's AI chatbots. It explicitly bars content that "enables, encourages, or endorses" child sexual abuse; romantic roleplay if the user is a minor or if the AI is asked to roleplay as a minor; advice about potentially romantic or intimate physical contact if the user is a minor; and more. The chatbots can discuss topics such as abuse, but cannot engage in conversations that could enable or encourage it.

The company's AI chatbots have been the subject of numerous reports in recent months raising concerns about their potential harms to children. In August, the FTC launched a formal inquiry into companion AI chatbots not just from Meta but also from other companies, including Alphabet, Snap, OpenAI, and X.AI.
