Hassold says his grandmother was the victim of an impersonation scam in the mid-2000s, when attackers called pretending to be him and persuaded her to send them $1,500.
“With my grandmother, the scammer didn’t say who was calling initially, they just started talking about how they had been arrested while attending a music festival in Canada and needed her to send money for bail. Her response was ‘Crane, is that you?’ and then they had exactly what they needed,” he says. “Scammers are essentially priming their victims into believing what they want them to believe.”
As with many social engineering scams, voice-impersonation cons work best when the target is caught up in a sense of urgency and just trying to help someone or complete a task they believe is their responsibility.
“My grandmother left me a voicemail while I was driving to work saying something like ‘I hope you’re OK. Don’t worry, I sent the money, and I won’t tell anyone,’” Hassold says.
Justin Hutchens, director of research and development at the cybersecurity firm Set Solutions, says he sees deepfake voice scams as a rising concern, but he’s also worried about a future in which AI-powered scams become even more automated.
“I expect that in the near future, we will start seeing threat actors combining deepfake voice technology with the conversational interactions supported by large language models,” Hutchens says of platforms like OpenAI’s ChatGPT.
For now, though, Hassold cautions against being too quick to assume that voice-impersonation scams are being driven by deepfakes. After all, the analog version of the scam is still out there, and it is still convincing to the right target at the right time.