A Lawsuit Against Perplexity Calls Out Fake News Hallucinations

Perplexity did not respond to requests for comment.

In a statement emailed to WIRED, News Corp chief executive Robert Thomson compared Perplexity unfavorably to OpenAI. “We applaud principled companies like OpenAI, which understands that integrity and creativity are essential if we are to realize the potential of Artificial Intelligence,” the statement says. “Perplexity is not the only AI company abusing intellectual property and it is not the only AI company that we will pursue with vigor and rigor. We have made clear that we would rather woo than sue, but, for the sake of our journalists, our writers and our company, we must challenge the content kleptocracy.”

OpenAI is facing its own accusations of trademark dilution, though. In New York Times v. OpenAI, the Times alleges that ChatGPT and Bing Chat will attribute made-up quotes to the Times, and accuses OpenAI and Microsoft of damaging its reputation through trademark dilution. In one example cited in the lawsuit, the Times alleges that Bing Chat claimed the Times called red wine (in moderation) a “heart-healthy” food, when in fact it did not; the Times argues that its actual reporting has debunked claims about the healthfulness of moderate drinking.

“Copying news articles to operate substitutive, commercial generative AI products is unlawful, as we made clear in our letters to Perplexity and our litigation against Microsoft and OpenAI,” says NYT director of external communications Charlie Stadtlander. “We applaud this lawsuit from Dow Jones and the New York Post, which is an important step toward ensuring that publisher content is protected from this kind of misappropriation.”

If publishers prevail in arguing that hallucinations can violate trademark law, AI companies could face “immense difficulties,” according to Matthew Sag, a professor of law and artificial intelligence at Emory University.

“It is absolutely impossible to guarantee that a language model will not hallucinate,” Sag says. In his view, the way language models operate, predicting words that sound correct in response to prompts, is always a type of hallucination; sometimes the output is just more plausible than others.

“We only call it a hallucination if it doesn’t match up with our reality, but the process is exactly the same whether we like the output or not.”
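A minimal sketch of the point Sag is making, using a toy next-word model with invented bigram counts (the vocabulary, counts, and function names here are illustrative assumptions, not anything from a real system): the generation loop samples whatever word is statistically likely to come next, and no step anywhere checks the output against reality, so a false sentence and a true one are produced by the identical procedure.

```python
import random

# Toy next-word model: invented bigram counts standing in for a real
# language model's learned distribution. Nothing below checks facts.
BIGRAM_COUNTS = {
    "red":  {"wine": 8, "meat": 2},
    "wine": {"is": 9, "was": 1},
    "is":   {"heart-healthy": 4, "risky": 6},
}

def next_word(word: str) -> str:
    """Sample the next word in proportion to how often it followed
    `word` in the (toy) training data. Plausibility, not truth,
    drives the choice."""
    candidates = BIGRAM_COUNTS.get(word)
    if candidates is None:
        return "<end>"
    words = list(candidates)
    weights = [candidates[w] for w in words]
    return random.choices(words, weights=weights, k=1)[0]

def generate(prompt: str, max_words: int = 5) -> str:
    """The same sampling loop runs whether the resulting sentence
    happens to be true or false; there is no separate truth check."""
    out = prompt.split()
    for _ in range(max_words):
        w = next_word(out[-1])
        if w == "<end>":
            break
        out.append(w)
    return " ".join(out)

print(generate("red"))  # e.g. "red wine is risky" or "red wine is heart-healthy"
```

Run repeatedly, the sketch emits both sentences from the same code path, which is the sense in which “the process is exactly the same whether we like the output or not.”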


