Cloudflare turns AI against itself with endless maze of irrelevant facts

On Wednesday, web infrastructure provider Cloudflare announced a new feature called “AI Labyrinth” that aims to combat unauthorized AI data scraping by serving fake AI-generated content to bots. The tool will attempt to thwart AI companies that crawl websites without permission to collect training data for large language models that power AI assistants like ChatGPT.

Cloudflare, founded in 2009, is probably best known as a company that provides infrastructure and security services for websites, particularly protection against distributed denial-of-service (DDoS) attacks and other malicious traffic.

Instead of simply blocking bots, Cloudflare’s new system lures them into a “maze” of realistic-looking but irrelevant pages, wasting the crawler’s computing resources. The approach is a notable shift from the standard block-and-defend strategy used by most website protection services. Cloudflare says blocking bots sometimes backfires because it alerts the crawler’s operators that they’ve been detected.

“When we detect unauthorized crawling, rather than blocking the request, we will link to a series of AI-generated pages that are convincing enough to entice a crawler to traverse them,” writes Cloudflare. “But while real looking, this content is not actually the content of the site we are protecting, so the crawler wastes time and resources.”

The company says the content served to bots is deliberately irrelevant to the website being crawled but is grounded in real scientific facts, such as neutral information about biology, physics, or mathematics, to avoid spreading misinformation (whether this approach actually prevents misinformation remains unproven). Cloudflare generates this content with its Workers AI service, a commercial platform for running AI tasks.
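Cloudflare has not published AI Labyrinth's code, but the general pattern is easy to picture. The sketch below uses the documented Workers AI binding to produce neutral, factual filler text and serve it as a decoy page; the model name, prompt, and page markup are assumptions for illustration, not details from Cloudflare's announcement.

```typescript
// Illustrative sketch only: generate neutral filler text with Workers AI and
// serve it as a decoy page. The model name and prompt are assumptions.

export interface Env {
  AI: Ai; // Workers AI binding declared in wrangler.toml
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    // Ask the model for factual but off-topic content (e.g., basic biology).
    const result = (await env.AI.run("@cf/meta/llama-3.1-8b-instruct", {
      prompt:
        "Write three short, factually accurate paragraphs about photosynthesis.",
    })) as { response: string };

    // Wrap the generated text in a minimal HTML page for suspected crawlers.
    const html = `<!doctype html><html><body><article>${result.response}</article></body></html>`;
    return new Response(html, {
      headers: { "content-type": "text/html;charset=UTF-8" },
    });
  },
};
```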

Cloudflare designed the trap pages and links to remain invisible and inaccessible to regular visitors, so people browsing the web don’t run into them by accident.

A smarter honeypot

AI Labyrinth functions as what Cloudflare calls a “next-generation honeypot.” Traditional honeypots are invisible links that human visitors can’t see but bots parsing HTML code might follow. But Cloudflare says modern bots have become adept at spotting these simple traps, necessitating more sophisticated deception. The false links contain appropriate meta directives to prevent search engine indexing while remaining attractive to data-scraping bots.
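Cloudflare has not detailed the exact markup it uses, but the honeypot pattern it describes can be sketched roughly: hide a link so humans never see it, point it at decoy pages, and mark those pages so search engines don't index them. The Worker below relies on Cloudflare's documented HTMLRewriter API; the "/labyrinth/" path, link text, and decoy content are hypothetical.

```typescript
// Illustrative honeypot sketch, not Cloudflare's actual implementation.
// The "/labyrinth/" path, markup, and decoy text are assumptions.

const DECOY_PAGE = `<!doctype html>
<html>
<head>
  <!-- Keep search engines from indexing the decoy page itself. -->
  <meta name="robots" content="noindex, nofollow">
  <title>Further reading</title>
</head>
<body>
  <p>Neutral, factual filler content would go here.</p>
  <!-- More decoy links keep a crawler walking deeper into the maze. -->
  <a href="/labyrinth/page-2">Continue</a>
</body>
</html>`;

export default {
  async fetch(request: Request): Promise<Response> {
    const url = new URL(request.url);

    // Requests that follow a trap link get a decoy page instead of real content.
    if (url.pathname.startsWith("/labyrinth/")) {
      return new Response(DECOY_PAGE, {
        headers: { "content-type": "text/html;charset=UTF-8" },
      });
    }

    // Otherwise serve the real page, appending a link that humans never see:
    // it is not rendered and is skipped by accessibility tools, yet a bot
    // parsing the raw HTML will find and follow it.
    const origin = await fetch(request);
    return new HTMLRewriter()
      .on("body", {
        element(el) {
          el.append(
            '<a href="/labyrinth/page-1" rel="nofollow" style="display:none" aria-hidden="true" tabindex="-1">Related articles</a>',
            { html: true },
          );
        },
      })
      .transform(origin);
  },
};
```

In this sketch, a real visitor's page is unchanged apart from one unrendered link, while any crawler that blindly follows it lands in pages that only link to more decoys.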


