ChatGPT and Gemini are nudging users towards illegal gambling, says investigation

A new investigation suggests that popular AI chatbots, including ChatGPT and Gemini, may inadvertently steer users toward illegal gambling websites. The analysis, conducted by journalists at The Guardian and Investigate Europe, tested several widely used AI systems and found that many could be prompted to recommend unlicensed offshore casinos operating outside UK regulations.

The tests involved five AI tools from major tech companies, including OpenAI, Google, Microsoft, Meta, and xAI (the maker of Grok). Researchers asked the chatbots questions about online casinos and gambling restrictions. In many cases, the systems returned lists of illegal betting sites, along with tips on how to use them. Some bots even suggested ways to bypass safeguards designed to protect vulnerable users.

Advice on bypassing gambling protections

One of the most troubling findings was how easily chatbots could be prompted to help users sidestep responsible-gambling systems. In the UK, for example, GamStop allows individuals to self-exclude from licensed gambling sites. But several AI systems reportedly offered guidance on finding casinos not connected to the scheme.

The investigation also found that some bots highlighted features designed to attract gamblers, such as large bonuses, quick payouts, or the ability to use cryptocurrency. These casinos often operate under minimal oversight in offshore jurisdictions like Curaçao, which regulators say can make it harder to protect users from fraud or addiction.

In response to the findings, the companies behind the chatbots say they are working to improve their safety systems. OpenAI stated that ChatGPT is designed to refuse requests that facilitate illegal behavior, while Microsoft said its Copilot assistant includes multiple layers of safeguards to prevent harmful recommendations.

Still, the findings add to growing scrutiny over how generative AI systems handle sensitive topics such as mental health, gambling, and illegal activity. Regulators in the UK have already warned that online platforms, including AI services, must do more to prevent harmful or illegal content under the country’s Online Safety Act.
