Did Google lie about building a deadly chatbot? Judge finds it plausible.



Judge not ready to rule on whether AI outputs are speech

Google and Character Technologies also moved to dismiss the lawsuit based on First Amendment claims, arguing that C.AI users have a right to listen to chatbot outputs as supposed “speech.”

Conway agreed that Character Technologies can assert the First Amendment rights of its users in this case, but “the Court is not prepared to hold that the Character.AI LLM’s output is speech at this stage.”

C.AI had tried to argue that chatbot outputs should be protected like the speech of video game characters, but Conway said the company had not meaningfully advanced that argument. Garcia's team had pushed back, noting that video game characters' dialogue is written by humans, while chatbot outputs are simply the result of an LLM predicting which word should come next.

“Defendants fail to articulate why words strung together by an LLM are speech,” Conway wrote.

As the case advances, Character Technologies will have a chance to beef up the First Amendment claims, perhaps by better explaining how chatbot outputs are similar to other cases involving non-human speakers.

C.AI’s spokesperson provided a statement to Ars, suggesting that Conway seems confused.

“It’s long been true that the law takes time to adapt to new technology, and AI is no different,” C.AI’s spokesperson said. “In today’s order, the court made clear that it was not ready to rule on all of Character.AI’s arguments at this stage and we look forward to continuing to defend the merits of the case.”

C.AI also noted that it now provides a “separate version” of its LLM “for under-18 users,” along with “parental insights, filtered Characters, time spent notification, updated prominent disclaimers, and more.”

“Additionally, we have a number of technical protections aimed at detecting and preventing conversations about self-harm on the platform; in certain cases, that includes surfacing a specific pop-up directing users to the National Suicide and Crisis Lifeline,” C.AI’s spokesperson said.

If you or someone you know is feeling suicidal or in distress, please call the Suicide Prevention Lifeline number, 1-800-273-TALK (8255), which will put you in touch with a local crisis center.

