Did Google lie about building a deadly chatbot? Judge finds it plausible.

Judge not ready to rule on whether AI outputs are speech

Google and Character Technologies also moved to dismiss the lawsuit based on First Amendment claims, arguing that C.AI users have a right to listen to chatbot outputs as supposed “speech.”

Conway agreed that Character Technologies can assert the First Amendment rights of its users in this case, but “the Court is not prepared to hold that the Character.AI LLM’s output is speech at this stage.”

C.AI had tried to argue that chatbot outputs should be protected like speech from video game characters, but Conway found that the comparison was not meaningfully developed. Garcia’s team had pushed back, noting that video game characters’ dialogue is written by humans, while chatbot outputs are simply the result of an LLM predicting what word should come next.

“Defendants fail to articulate why words strung together by an LLM are speech,” Conway wrote.

As the case advances, Character Technologies will have a chance to beef up the First Amendment claims, perhaps by better explaining how chatbot outputs are similar to other cases involving non-human speakers.

C.AI’s spokesperson provided a statement to Ars, suggesting that Conway seems confused.

“It’s long been true that the law takes time to adapt to new technology, and AI is no different,” C.AI’s spokesperson said. “In today’s order, the court made clear that it was not ready to rule on all of Character.AI’s arguments at this stage and we look forward to continuing to defend the merits of the case.”

C.AI also noted that it now provides a “separate version” of its LLM “for under-18 users,” along with “parental insights, filtered Characters, time spent notification, updated prominent disclaimers, and more.”

“Additionally, we have a number of technical protections aimed at detecting and preventing conversations about self-harm on the platform; in certain cases, that includes surfacing a specific pop-up directing users to the National Suicide and Crisis Lifeline,” C.AI’s spokesperson said.

If you or someone you know is feeling suicidal or in distress, please call the Suicide Prevention Lifeline number, 1-800-273-TALK (8255), which will put you in touch with a local crisis center.

