Millions of People Are Using Abusive AI ‘Nudify’ Bots on Telegram

Kate Ruane, director of the Center for Democracy and Technology’s free expression project, says most major technology platforms now have policies prohibiting the nonconsensual distribution of intimate images, and many of the biggest have agreed to principles for tackling deepfakes. “I would say that it’s actually not clear whether nonconsensual intimate image creation or distribution is prohibited on the platform,” Ruane says of Telegram’s terms of service, which are less detailed than those of other major tech platforms.

Telegram’s approach to removing harmful content has long been criticized by civil society groups, with the platform historically hosting scammers, extreme right-wing groups, and terrorism-related content. Since Telegram CEO and founder Pavel Durov was arrested and charged in France in August in connection with a range of alleged offenses, the company has started to make some changes to its terms of service and to provide data to law enforcement agencies. It did not respond to WIRED’s questions about whether it specifically prohibits explicit deepfakes.

Execute the Harm

Ajder, the researcher who discovered deepfake Telegram bots four years ago, says the app is almost uniquely positioned for deepfake abuse. “Telegram provides you with the search functionality, so it allows you to identify communities, chats, and bots,” Ajder says. “It provides the bot-hosting functionality, so it’s somewhere that provides the tooling in effect. Then it’s also the place where you can share it and actually execute the harm in terms of the end result.”

In late September, several deepfake channels started posting that Telegram had removed their bots. It is unclear what prompted the removals. On September 30, a channel with 295,000 subscribers posted that Telegram had “banned” its bots, but it shared a link to a new, replacement bot. (The channel was removed after WIRED sent questions to Telegram.)

“One of the things that’s really concerning about apps like Telegram is that it is so difficult to track and monitor, particularly from the perspective of survivors,” says Elena Michael, the cofounder and director of #NotYourPorn, a campaign group working to protect people from image-based sexual abuse.

Michael says Telegram has been “notoriously difficult” to engage with on safety issues, but notes there has been some progress from the company in recent years. However, she says the company should be more proactive in moderating and filtering out abusive content itself.

“Imagine if you were a survivor who’s having to do that themselves, surely the burden shouldn’t be on an individual,” Michael says. “Surely the burden should be on the company to put something in place that’s proactive rather than reactive.”
