In the PLoS One Voice Deepfake Detection Test, the Key Word is “Participants”

(Part of the biometric product marketing expert series)

A recent PYMNTS article entitled “AI Voices Are Now Indistinguishable From Humans, Experts Say” includes the following about voice deepfakes:

“A new PLoS One study found that artificial intelligence has reached a point where cloned voices are indistinguishable from genuine ones. In the experiment, participants were asked to tell human voices from AI-generated ones across 80 samples. Cloned voices were mistaken for real in 58% of cases, while human voices were correctly identified only 62% of the time.”
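Those quoted numbers imply the participants performed only slightly better than a coin flip. Here is a minimal sketch of that arithmetic, assuming the 80 samples were evenly split between cloned and human voices (an assumption; the article doesn't say):

```python
# Accuracy implied by the quoted PLoS One figures.
# Cloned voices were mistaken for real 58% of the time, so they
# were correctly flagged as AI only 42% of the time.
cloned_correct = 1 - 0.58   # 0.42
human_correct = 0.62

# Balanced accuracy, assuming an even split of cloned and human
# samples (an assumption for this sketch, not stated in the article).
balanced_accuracy = (cloned_correct + human_correct) / 2
print(f"{balanced_accuracy:.0%}")  # 52% -- barely above the 50% coin flip
```

In other words, "indistinguishable" here means the participants hovered around chance.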

What the study didn’t measure

Since you already read the title of this post, you know that I’m concentrating on the word “participants.”

The PLoS One experiment used PEOPLE to try to distinguish real voices from deepfake ones.

And people aren’t all that accurate. Never have been.

Picture from Google Gemini.

Before you decide that people can’t detect fake voices…

…why not have an ALGORITHM give it a try?

What the study did measure

But to be fair, that wasn’t the goal of the PLoS One study, which specifically focused on human perception.

“Recently, an intriguing effect was reported in AI-generated faces, where such face images were perceived as more human than images of real humans – a “hyperrealism effect.” Here, we tested whether a “hyperrealism effect” also exists for AI-generated voices.”

For the record, the researchers did NOT discover a hyperrealism effect in AI-generated voices.

Do you offer a solution?

But if future deepfake voices sound realer than real, then we will REALLY need the algorithms to spot the fakes.

And if your company has a voice deepfake detection solution, I could have talked about it right now in this post.

Or on your website.

Or on your social media.

Where your prospects can see it…and purchase it.

And money in your pocket is realer than real.

Let’s talk. https://bredemarket.com/mark/

Picture from Google Gemini.

Some Voice Deepfakes Are NOT Fraudulent

(Part of the biometric product marketing expert series)

I’ve spent a ton of time discussing naughty people who use technology to create deepfakes—including voice deepfakes—to defraud people.

But some deepfakes don’t use technology, and some deepfakes are not intended to defraud.

Take Mark Hamill’s impersonation of fellow actor Harrison Ford.

Mark Hamill as Harrison Ford, and Harrison Ford reacting to Mark Hamill.

And then there was a case that I guess could be classified as fraud…at least to Don Pardo’s sister-in-law.

Don Pardo was originally known as an announcer on NBC game shows, and his distinctive voice could be heard on many of them, including (non-embeddable) parodies of them.

Given his well-known voice, NBC jumped at the chance to employ him as the announcer for the decidedly non-game television show Saturday Night Live, where he traded dialogue with the likes of Frank Zappa.

“I’m the Slime.”

Except for a brief period after he ran afoul of Michael O’Donoghue, Pardo was a fixture on SNL for decades, through the reigns of various producers and executive producers.

Until one night in 1999 when laryngitis got the best of Don Pardo, and the show had to turn to Bill Clinton.

No, not the real Bill Clinton.

I’m talking about the SNL cast member who did a voice impression of Bill Clinton (and Jeopardy loser Sean Connery), Darrell Hammond. Who proceeded to perform an impression of Don Pardo.

An impression that even fooled Don Pardo’s sister-in-law.

This is Don Pardo saying this is Don Pardo…

Pardo continued to be Saturday Night Live’s announcer for years after that, sometimes live from New York, sometimes on tape from his home in Arizona.

And when Pardo passed away in 2014, he was succeeded as SNL’s announcer by former cast member Darrell Hammond.

Who used his own voice.

The Best Deepfake Defense is NOT Technological

I think about deepfakes a lot. As the identity/biometric product marketing consultant at Bredemarket, it comes with the territory.

When I’m not researching how fraudsters perpetrate deepfake faces, deepfake voices, and other deepfake modalities via presentation attacks (countered by liveness detection) and injection attacks

…I’m researching and describing how Bredemarket’s clients and prospects develop innovative technologies to expose these deepfake fraudsters.

You can spend good money on deepfake-fighting industry solutions, and you can often realize a positive return on investment when purchasing these technologies.

But the best defense against these deepfakes isn’t some whiz bang technology.

It’s common sense.

  • Would your CEO really call you at midnight to expedite an urgent financial transaction?
  • Would that Amazon recruiter want to schedule a Zoom call right now?

If you receive an out-of-the-ordinary request, the first and most important thing to do is to take a deep breath.

A real CEO or recruiter would understand.

And…

…if your company offers a fraud-fighting solution to detect and defeat deepfakes, Bredemarket can help you market your solution. My content, proposal, and analysis offerings are at your service. Let’s talk: https://bredemarket.com/cpa/

CPA

(Imagen 4)

Frictionless Friction Ridges and Other Biometric Modalities

I wanted to write a list of the biometric modalities for which I provide experience.

So I started my usual list from memory: fingerprint, face, iris, voice, and DNA.

Then I stopped myself.

My experience with skin goes way beyond fingerprints, since I’ve spent over two decades working with palm prints.

(Can you say “Cambridgeshire method”? I knew you could. It was a 1990s method to use the 10 standard rolled fingerprint boxes to input palm prints into an automated fingerprint identification system. Because Cambridgeshire had a bias to action and didn’t want to wait for the standards folks to figure out how to enter palm prints. But I digress.)

So instead of saying fingerprints, I thought about saying friction ridges.

But there are two problems with this.

First, many people don’t know what “friction ridges” are. They’re the ridges that form on a person’s fingers, palms, toes, and feet, all of which can conceivably identify individuals.

But there’s a second problem. The word “friction” has two meanings: the one mentioned above, and a meaning that describes how biometric data is captured.

No, there is not a friction method to capture faces.
From https://www.youtube.com/watch?v=4XhWFHKWCSE.


  • If you have to do something to provide your biometric data, such as press your fingers against a platen, that’s friction.
  • If you don’t have to do anything other than wave your fingers, hold your fingers in the air, or show your face as you stand near or walk by a camera, that’s frictionless.

More and more people capture friction ridges with frictionless methods. I did this years ago using MorphoWAVE at MorphoTrak facilities, and I did it today at Whole Foods Market.

So I could list my biometric modalities as friction ridge (fingerprint and palm print via both friction and frictionless capture methods), face, iris, voice, and DNA.

But I won’t.

Anyway, if you need content, proposal, or analysis assistance with any of these modalities, Bredemarket can help you. Book a meeting at https://bredemarket.com/cpa/

The “Biometric Digital Identity Deepfake and Synthetic Identity Prism Report” is Coming

As you may have noticed, I have talked about both deepfakes and synthetic identity ad nauseam.

But perhaps you would prefer to hear from someone who knows what they’re talking about.

On a webcast this morning, C. Maxine Most of The Prism Project reminded us that the “Biometric Digital Identity Deepfake and Synthetic Identity Prism Report” is scheduled for publication in May 2025, just a little over a month from now.

As with all other Prism Project publications, I expect a report that details the identity industry’s solutions to battle deepfakes and synthetic identities, and the vendors who provide them.

And the report is coming from one of the few industry researchers who knows the industry. Max doesn’t write synthetic identity reports one week and refrigerator reports the next, if you know what I mean.

At this point The Prism Project is soliciting sponsorships. Quality work doesn’t come for free, you know. If your company is interested in sponsoring the report, visit this link.

While waiting for Max, here are the Five Tops

And while you’re waiting for Max’s authoritative report on deepfakes and synthetic identity, you may want to take a look at Min’s (my) views, such as they are. Here are my current “five tops” posts on deepfakes and synthetic identity.

Update: A Little Harder to Create Voice Deepfakes?

(Imposter scam wildebeest image from Imagen 3)

(Part of the biometric product marketing expert series)

Remember my post early this morning entitled “Nearly $3 Billion Lost to Imposter Scams in the U.S. in 2024“?

The post touched on many items, one of which was the relative ease in using popular voice cloning programs to create fraudulent voices. Consumer Reports determined that four popular voice cloning programs “did not have the technical mechanisms necessary to prevent cloning someone’s voice without their knowledge or to limit the AI cloning to only the user’s voice.”

Reducing voice clone fraud?

Joel R. McConvey of Biometric Update wrote a piece (“Hi mom, it’s me,” an example of a popular fraudulent voice clone) that included an update on one of the four vendors cited by Consumer Reports.

In its responses, ElevenLabs – which was implicated in the deepfake Joe Biden robocall scam of November 2023 – says it is “implementing Coalition for Content Provenance and Authenticity (C2PA) standards by embedding cryptographically-signed metadata into the audio generated on our platform,” and lists customer screening, voice CAPTCHA and its No-Go Voice technology, which blocks the voices of hundreds of public figures, as among safeguards it already deploys.

Coalition for Content Provenance and Authenticity

So what are these C2PA standards? As a curious sort (I am ex-IDEMIA, after all), I investigated.

The Coalition for Content Provenance and Authenticity (C2PA) addresses the prevalence of misleading information online through the development of technical standards for certifying the source and history (or provenance) of media content. C2PA is a Joint Development Foundation project, formed through an alliance between Adobe, Arm, Intel, Microsoft and Truepic.

There are many other organizations whose logos appear on the website, including Amazon, Google, Meta, and Open AI.

Provenance

I won’t plunge into the entire specifications, but this excerpt from the “Explainer” highlights an important word, “provenance” (the P in C2PA).

Provenance generally refers to the facts about the history of a piece of digital content assets (image, video, audio recording, document). C2PA enables the authors of provenance data to securely bind statements of provenance data to instances of content using their unique credentials. These provenance statements are called assertions by the C2PA. They may include assertions about who created the content and how, when, and where it was created. They may also include assertions about when and how it was edited throughout its life. The content author, and publisher (if authoring provenance data) always has control over whether to include provenance data as well as what assertions are included, such as whether to include identifying information (in order to allow for anonymous or pseudonymous assets). Included assertions can be removed in later edits without invalidating or removing all of the included provenance data in a process called redaction.
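The core idea in that excerpt is cryptographically binding provenance assertions to a specific piece of content so that any tampering is detectable. Here is a toy Python illustration of that idea only; it is NOT the real C2PA manifest format, which uses signed JUMBF/CBOR structures and certificate-based credentials rather than the shared demo key assumed below:

```python
# Toy illustration of binding provenance assertions to content:
# hash the content, attach assertions, and sign the whole manifest.
# NOT the actual C2PA wire format -- just the underlying concept.
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key"  # stands in for the author's real credential

def bind_provenance(content: bytes, assertions: dict) -> dict:
    """Create a signed manifest tying assertions to this exact content."""
    manifest = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "assertions": assertions,  # e.g. who created it, when, and how
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify(content: bytes, manifest: dict) -> bool:
    """Check both the signature and that the content is unmodified."""
    claimed = dict(manifest)
    signature = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    ok_sig = hmac.compare_digest(signature, expected)
    ok_hash = claimed["content_sha256"] == hashlib.sha256(content).hexdigest()
    return ok_sig and ok_hash

audio = b"fake audio bytes"
m = bind_provenance(audio, {"generator": "example-tts", "created": "2025-04-01"})
print(verify(audio, m))       # True -- untouched content verifies
print(verify(b"edited", m))   # False -- any edit breaks the binding
```

The real standard solves the hard parts this sketch skips: who holds the signing credentials, and how assertions survive (or are redacted from) later edits.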

Providence

I would really have to get into the nitty gritty of the specifications to see exactly how ElevenLabs, or anyone else, can accurately assert that a voice recording alleged to have been made by Richard Nixon actually was made by Richard Nixon. Hint: this one wasn’t.

From https://www.youtube.com/watch?v=2rkQn-43ixs.

Incidentally, while this was obviously never spoken, and I don’t believe that Nixon ever saw it, the speech was drafted as a contingency by William Safire. And I think everyone can admit that Safire could soar as a speechwriter for Nixon, whose sense of history caused him to cast himself as an American Churchill (with 1961 to 1969 as Nixon’s “wilderness years”). Safire also wrote for Agnew, who was not known as a great strategic thinker.

And the Apollo 11 speech above is not the only contingency speech ever written. Someone should create a deepfake of this speech that was NEVER delivered by then-General Dwight D. Eisenhower after D-Day:

Our landings in the Cherbourg-Havre area have failed to gain a satisfactory foothold and I have withdrawn the troops. My decision to attack at this time and place was based upon the best information available. The troops, the air and the Navy did all that bravery and devotion to duty could do. If any blame or fault attaches to the attempt it is mine alone.

Nearly $3 Billion Lost to Imposter Scams in the U.S. in 2024

(Imposter scam wildebeest image from Imagen 3)

According to the Federal Trade Commission, fraud is being reported at the same rate, but more people are saying they are losing money from it.

In 2023, 27% of people who reported a fraud said they lost money, while in 2024, that figure jumped to 38%.

In a way this is odd, since you would think that we would better detect fraud attempts now. But I guess we don’t. (I’ll say why in a minute.)

Imposter scams

The second fraud category, after investment scams, was imposter scams.

The second highest reported loss amount came from imposter scams, with $2.95 billion reported lost. In 2024, consumers reported losing more money to scams where they paid with bank transfers or cryptocurrency than all other payment methods combined.

Deepfakes

I’ve spent…a long time in the business of determining who people are, and who people aren’t. While the FTC summary didn’t detail the methods of imposter scams, at least some of these may have used deepfakes to perpetrate the scam.

The FTC addressed deepfakes two years ago, speaking of

…technology that simulates human activity, such as software that creates deepfake videos and voice clones….They can use deepfakes and voice clones to facilitate imposter scams, extortion, and financial fraud. And that’s very much a non-exhaustive list.

Creating deepfakes

And the need for advanced skills to create deepfakes has disappeared. ZDNET reported on a Consumer Reports study that analyzed six voice cloning software packages:

The results found that four of the six products — from ElevenLabs, Speechify, PlayHT, and Lovo — did not have the technical mechanisms necessary to prevent cloning someone’s voice without their knowledge or to limit the AI cloning to only the user’s voice. 

Instead, the protection was limited to a box users had to check off, confirming they had the legal right to clone the voice.

Which is just as effective as verifying someone’s identity by asking for their name and date of birth.

(Not) detecting deepfakes

And of course the identity/biometric vendor community is addressing deepfakes. Research from iProov indicates one reason why 38% of the FTC reporters lost money to fraud:

[M]ost people can’t identify deepfakes – those incredibly realistic AI-generated videos and images often designed to impersonate people. The study tested 2,000 UK and US consumers, exposing them to a series of real and deepfake content. The results are alarming: only 0.1% of participants could accurately distinguish real from fake content across all stimuli which included images and video… in a study where participants were primed to look for deepfakes. In real-world scenarios, where people are less aware, the vulnerability to deepfakes is likely even higher.
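That 0.1% figure is less shocking than it first sounds: even a fairly good per-item detector rarely gets through an entire test without a single miss. A hedged illustration (the 70% per-item accuracy and 20-item count below are assumptions for the sketch, not figures from the iProov study):

```python
# Probability of classifying EVERY stimulus correctly, assuming
# independent items and a fixed per-item accuracy. The inputs are
# illustrative assumptions, not numbers from the iProov study.
def all_correct_probability(per_item_accuracy: float, n_items: int) -> float:
    return per_item_accuracy ** n_items

# Even someone right 70% of the time per item almost never runs the table:
p = all_correct_probability(0.70, 20)
print(f"{p:.2%}")  # 0.08% -- the same order of magnitude as iProov's 0.1%
```

Which is exactly why "only 0.1% got everything right" and "people are near chance on each item" are two sides of the same coin.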

So what’s the solution? Throw more technology at the problem? Multi-factor authentication (requiring the fraudster to deepfake multiple items)? Injection attack detection? Something else?

Don’t Miss the Boat

Bredemarket helps identity/biometric firms.

  • Finger, face, iris, voice, DNA, ID documents, geolocation, and even knowledge.
  • Content-Proposal-Analysis. (Bredemarket’s “CPA.”)

Don’t miss the boat.

Augment your team with Bredemarket.

Find out more.

Don’t miss the boat.

In Case You Missed My Incessant “Biometric Product Marketing Expert” Promotion

Biometric product marketing expert.

Modalities: Finger, face, iris, voice, DNA.

Plus other factors: IDs, data.

John E. Bredehoft has worked for Incode, IDEMIA, MorphoTrak, Motorola, Printrak, and a host of Bredemarket clients.

(Some images AI-generated by Google Gemini.)

Biometric product marketing expert.

Positioning, Messaging, and Your Facial Recognition Product Marketing

(Part of the biometric product marketing expert series)

By Original: Jack Ver at Dutch Wikipedia Vector: Ponor – Own work based on: Plaatsvector.png by Jack Ver at Dutch Wikipedia, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=95477901.

When marketing your facial recognition product (or any product), you need to pay attention to your positioning and messaging. This includes developing the answers to why, how, and what questions. But your positioning and your resulting messaging are deeply influenced by the characteristics of your product.

If facial recognition is your only modality

There are hundreds of facial recognition products on the market that are used for identity verification, authentication, crime solving (but ONLY as an investigative lead), and other purposes.

Some of these solutions ONLY use face as a biometric modality. Others use additional biometric modalities.

From Sandeep Kumar, A. Sony, Rahul Hooda, Yashpal Singh, in Journal of Advances and Scholarly Researches in Allied Education | Multidisciplinary Academic Research, “Multimodal Biometric Authentication System for Automatic Certificate Generation.”

Your positioning depends upon whether your solution only uses face, or uses other factors such as voice.

Of course, if you initially only offer a face solution and then offer a second biometric, you’ll have to rewrite all your material. “You know how we said that face is great? Well, face and gait are even greater!”

If biometrics is your only factor

It’s no secret that I am NOT a fan of the “passwords are dead” movement.

Too many of the tombstones are labeled “12345.” By GreatBernard – Own work, CC0, https://commons.wikimedia.org/w/index.php?curid=116933238.

It seems that many of the people who are awaiting the long-delayed death of the password think that biometrics is the magic solution that will completely replace passwords.

For this reason, your company might have decided to use biometrics as your sole factor of identity verification and authentication.

Or perhaps your company took a different approach, and believes that multiple factors—perhaps all five factors—are required to truly verify and/or authenticate an individual. Use some combination of biometrics, secure documents such as driver’s licenses, geolocation, “something you do” such as a particular swiping pattern, and even (horrors!) knowledge-based authentication such as passwords or PINs.

This naturally shapes your positioning and messaging.

  • The single-factor companies will argue that their approach is very fast, very secure, and completely frictionless. (Sound familiar?) No need to drag out your passport or your key fob, or to turn off your VPN to accurately indicate your location. Biometrics does it all!
  • The multiple-factor companies will argue that ANY single factor can be spoofed, but that it is much, much harder to spoof multiple factors at once. (Sound familiar?)
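The multi-factor argument rests on simple arithmetic: if the factors fail independently (a big assumption; real attacks can be correlated), the spoof probabilities multiply. The rates below are illustrative placeholders, not measured figures:

```python
# Why "harder to spoof multiple factors at once": under an
# independence assumption, per-factor spoof rates multiply.
# All rates here are made-up illustrative values.
def combined_spoof_rate(rates: list[float]) -> float:
    result = 1.0
    for r in rates:
        result *= r
    return result

face_only = combined_spoof_rate([0.05])                    # 5% alone
face_doc_location = combined_spoof_rate([0.05, 0.02, 0.10])
print(f"{face_only:.4f} vs {face_doc_location:.6f}")  # 0.0500 vs 0.000100
```

Going from one factor to three drops the (assumed) spoof rate from 1 in 20 to 1 in 10,000, which is the multiple-factor companies' whole pitch in one line of arithmetic.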

So position yourself however you need to position yourself. Again, be prepared to change if your single factor solution adopts a second factor.

A final thought

Every company has its own way of approaching a problem, and your company is no different. As you prepare to market your products, survey your product, your customers, and your prospects and choose the correct positioning (and messaging) for your own circumstances.

And if you need help with biometric positioning and messaging, feel free to contact the biometric product marketing expert, John E. Bredehoft. (Full-time employment opportunities via LinkedIn, consulting opportunities via Bredemarket.)

In the meantime, take care of yourself, and each other.

Jerry Springer. By Justin Hoch, CC BY 2.0, https://commons.wikimedia.org/w/index.php?curid=16673259.