Apple hit with $1.2B lawsuit after killing controversial CSAM-detecting tool

When Apple devices are used to spread CSAM, it’s a huge problem for survivors, who allegedly face a range of harms, including “exposure to predators, sexual exploitation, dissociative behavior, withdrawal symptoms, social isolation, damage to body image and self-worth, increased risky behavior, and profound mental health issues, including but not limited to depression, anxiety, suicidal ideation, self-harm, insomnia, eating disorders, death, and other harmful effects.” One survivor told The Times she “lives in constant fear that someone might track her down and recognize her.”

Survivors suing have also incurred medical and other expenses due to Apple’s inaction, the lawsuit alleged. And those expenses will keep piling up if the court battle drags on for years and Apple’s practices remain unchanged.

Apple could win, a lawyer and policy fellow at the Stanford Institute for Human-Centered Artificial Intelligence, Riana Pfefferkorn, told The Times, as survivors face “significant hurdles” seeking liability for mishandling content that Apple says Section 230 shields. And a win for survivors could “backfire,” Pfefferkorn suggested, if Apple proves that forced scanning of devices and services violates the Fourth Amendment.

Survivors, some of whom own iPhones, think that Apple has a responsibility to protect them. In a press release, Margaret E. Mabie, a lawyer representing survivors, praised survivors for raising “a call for justice and a demand for Apple to finally take responsibility and protect these victims.”

“Thousands of brave survivors are coming forward to demand accountability from one of the most successful technology companies on the planet,” Mabie said. “Apple has not only rejected helping these victims, it has advertised the fact that it does not detect child sex abuse material on its platform or devices, thereby exponentially increasing the ongoing harm caused to these victims.”

