Here’s how deepfake vishing attacks work, and why they can be hard to detect

By now, you’ve likely heard of fraudulent calls that use AI to clone the voices of people the call recipient knows. Often, the result is what sounds like a grandchild, CEO, or work colleague you’ve known for years reporting an urgent matter that requires immediate action: wire money, divulge login credentials, or visit a malicious website.

Researchers and government officials have been warning of the threat for years, with the Cybersecurity and Infrastructure Security Agency saying in 2023 that threats from deepfakes and other forms of synthetic media have increased “exponentially.” Last year, Google’s Mandiant security division reported that such attacks are being executed with “uncanny precision, creating for more realistic phishing schemes.”

Anatomy of a deepfake scam call

On Wednesday, security firm Group-IB outlined the basic steps involved in executing these sorts of attacks. The takeaway is that they’re easy to reproduce at scale and can be challenging to detect or repel.



The workflow of a deepfake vishing attack. Credit: Group-IB

The basic steps are:

1. Collecting voice samples of the person who will be impersonated. Samples as short as three seconds are sometimes adequate, and they can come from videos, online meetings, or previous voice calls.

2. Feeding the samples into AI-based speech-synthesis engines, such as Google’s Tacotron 2, Microsoft’s VALL-E, or services from ElevenLabs and Resemble AI. These engines give the attacker a text-to-speech interface that produces user-chosen words in the voice, tone, and conversational tics of the person being impersonated. Most services bar this kind of impersonation, but as Consumer Reports found in March, the safeguards these companies have in place could be bypassed with minimal effort. (A minimal sketch of such an interface appears at the end of this section.)

3. Optionally, spoofing the phone number of the person or organization being impersonated. These sorts of caller-ID techniques have been in use for decades.

4. Initiating the scam call. In some cases, the cloned voice follows a script. In more sophisticated attacks, the faked speech is generated in real time, using voice-masking or voice-transformation software. Real-time attacks can be more convincing because they allow the attacker to respond to questions a skeptical recipient may ask.

“Although real-time impersonation has been demonstrated by open source projects and commercial APIs, real-time deepfake vishing in-the-wild remains limited,” Group-IB said. “However, given ongoing advancements in processing speed and model efficiency, real-time usage is expected to become more common in the near future.”
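To give a sense of what the text-to-speech step looks like in practice, here is a minimal sketch using the open-source Coqui TTS Python library with a stock single-speaker Tacotron 2 model. The library choice, model name, and file paths are illustrative assumptions, and the example deliberately uses a generic prebuilt voice rather than any voice-cloning workflow.

```python
# Minimal text-to-speech sketch (illustrative assumption: the open-source
# Coqui TTS package, installed with `pip install TTS`, and a stock
# single-speaker English Tacotron 2 model -- a generic prebuilt voice,
# not a cloned one).
from TTS.api import TTS

# Load a pretrained model; the library downloads it on first use.
tts = TTS(model_name="tts_models/en/ljspeech/tacotron2-DDC")

# Synthesize arbitrary user-chosen text into a WAV file.
tts.tts_to_file(
    text="This is a synthesized voice reading user-chosen words.",
    file_path="output.wav",
)
```

Cloning-capable engines expose essentially the same kind of interface but additionally accept a short reference recording of the target voice, which is the step vendors attempt to gate with the safeguards described above.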


