(Imposter scam wildebeest image from Imagen 3)
According to the Federal Trade Commission, fraud is being reported at the same rate, but more people are saying they are losing money from it.
In 2023, 27% of people who reported a fraud said they lost money, while in 2024, that figure jumped to 38%.
In a way this is odd, since you would think we would be better at detecting fraud attempts by now. But apparently we aren’t. (I’ll say why in a minute.)
Imposter scams
After investment scams, the category with the second-highest reported losses was imposter scams, at $2.95 billion. In 2024, consumers reported losing more money to scams where they paid with bank transfers or cryptocurrency than to all other payment methods combined.
Deepfakes
I’ve spent…a long time in the business of determining who people are, and who people aren’t. While the FTC summary didn’t detail the methods of imposter scams, at least some of them may have used deepfakes to perpetrate the scam.
The FTC addressed deepfakes two years ago, speaking of
…technology that simulates human activity, such as software that creates deepfake videos and voice clones…. They can use deepfakes and voice clones to facilitate imposter scams, extortion, and financial fraud. And that’s very much a non-exhaustive list.
Creating deepfakes
And the need for advanced skills to create deepfakes has disappeared. ZDNET reported on a Consumer Reports study that analyzed six voice cloning software packages:
The results found that four of the six products — from ElevenLabs, Speechify, PlayHT, and Lovo — did not have the technical mechanisms necessary to prevent cloning someone’s voice without their knowledge or to limit the AI cloning to only the user’s voice.
Instead, the protection was limited to a box users had to check off, confirming they had the legal right to clone the voice.
Which is just as effective as verifying someone’s identity by asking for their name and date of birth.
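To make that contrast concrete, here’s a minimal sketch (Python, with entirely hypothetical function names — no vendor’s actual API) of the gap between a check-the-box attestation and a technical mechanism, such as matching a live recording of a random challenge phrase against the account holder’s enrolled voice.

```python
from dataclasses import dataclass
import secrets

# --- The checkbox "safeguard" Consumer Reports describes ---
# The only gate is the user's own assertion; nothing is verified.
def checkbox_gate(i_confirm_i_have_the_right: bool) -> bool:
    return i_confirm_i_have_the_right   # a fraudster simply checks the box

# --- A sketch of an actual technical mechanism (all names hypothetical) ---
# Limit cloning to the account holder's own voice: issue a random challenge
# phrase, capture a live recording, and verify both that the phrase was
# spoken (not replayed) and that the speaker matches the enrolled voiceprint.
@dataclass
class Enrollment:
    voiceprint: bytes            # reference sample captured at account setup

def issue_challenge() -> str:
    # Random words prevent replaying an old recording of the victim.
    words = ["amber", "falcon", "harbor", "velvet", "quartz", "seventeen"]
    return " ".join(secrets.choice(words) for _ in range(4))

def transcribe(recording: bytes) -> str:
    # Stand-in for a speech-to-text step; hypothetical.
    return recording.decode(errors="ignore")

def same_speaker(recording: bytes, enrolled: Enrollment) -> bool:
    # Stand-in for a real speaker-verification model; hypothetical.
    return recording == enrolled.voiceprint

def technical_gate(recording: bytes, challenge: str, enrolled: Enrollment) -> bool:
    # Cloning is allowed only if the live speaker said the challenge phrase
    # AND matches the enrolled user.
    return transcribe(recording).strip() == challenge and same_speaker(recording, enrolled)
```

The point of the sketch: the first gate costs the fraudster one click; the second forces them to produce, live, a voice that matches the person they’re pretending to be.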
(Not) detecting deepfakes
And of course the identity/biometric vendor community is also addressing deepfakes. Research from iProov suggests one reason why 38% of those who reported fraud to the FTC lost money:
[M]ost people can’t identify deepfakes – those incredibly realistic AI-generated videos and images often designed to impersonate people. The study tested 2,000 UK and US consumers, exposing them to a series of real and deepfake content. The results are alarming: only 0.1% of participants could accurately distinguish real from fake content across all stimuli which included images and video… in a study where participants were primed to look for deepfakes. In real-world scenarios, where people are less aware, the vulnerability to deepfakes is likely even higher.
So what’s the solution? Throw more technology at the problem? Multi-factor authentication (requiring the fraudster to deepfake multiple items)? Something else?
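For what it’s worth, the multi-factor idea can be sketched in a few lines: if a high-risk action requires several independent checks, a deepfaked voice alone no longer gets the fraudster through. The function names below are hypothetical stand-ins, not any vendor’s API.

```python
import hmac

# Toy illustration of "make the fraudster deepfake multiple items":
# approve a transfer only when every independent factor passes.

def otp_matches(submitted: str, expected: str) -> bool:
    # Factor 1: one-time code delivered to a separately enrolled device.
    return hmac.compare_digest(submitted, expected)

def passes_liveness(video_frames: bytes) -> bool:
    # Factor 2: stand-in for a presentation-attack / deepfake-detection check.
    return len(video_frames) > 0

def voice_matches_enrollment(recording: bytes, voiceprint: bytes) -> bool:
    # Factor 3: stand-in for speaker verification against the enrolled user.
    return hmac.compare_digest(recording, voiceprint)

def approve_transfer(submitted_otp, expected_otp, frames, recording, voiceprint) -> bool:
    checks = [
        otp_matches(submitted_otp, expected_otp),
        passes_liveness(frames),
        voice_matches_enrollment(recording, voiceprint),
    ]
    return all(checks)   # every factor must pass, not just the voice
```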