It’s a Deepfake…World

Remember the Church Lady’s saying, “Well, how convenient”?

People weren’t laughing at Joel R. McConvey when he reminded us of a different saying:

“In Silicon Valley parlance, ‘create the problem, sell the solution.'”

Joel R. McConvey’s “tale of two platforms”

McConvey was referring to two different Sam Altman investments. One, OpenAI’s newly released Sora 2, amounts to a deepfake “slop machine” that is flooding our online, um, world with fakery. This concerns many, including SAG-AFTRA president Sean Astin. He doesn’t want his union members to lose their jobs to the Tilly Norwoods out there.

This deepfake “sea of slop” image was created by Google Gemini.

If only there were a way to tell the human content from the non-person entity (NPE) content. Another Sam Altman investment, World (formerly Worldcoin), just happens to provide a solution: humanness detection.

“What if we could reduce the efficacy of deepfakes? Proof of human technology provides a promising tool. By establishing cryptographic proof that you’re interacting with a real, unique human, this technology addresses the root of the problem. It doesn’t try to determine if content is fake; it ensures the source is real from the start.”

Google Gemini. Not an accurate depiction of the Orb, but it’s really cool.

All credit to McConvey for tying these differing Altman efforts together in his Biometric Update article.

World is not enough

But World’s solution is partial at best.

As I’ve said before, proof of humanness is only half the battle. Even if you’ve detected humanness, some humans are capable of their own slop, and to solve the human slop problem you need to prove WHICH human is responsible for something.

Which is something decidedly outside of World’s mission.
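To make the gap concrete, here is a minimal Python sketch of the difference between proving that a human signed something and proving WHICH human signed it. It uses ordinary Ed25519 signatures from the cryptography library; World’s actual system relies on iris biometrics and zero-knowledge proofs, so nothing below reflects its real protocol.

```python
# A minimal sketch (NOT World's actual protocol, which uses iris
# biometrics and zero-knowledge proofs). Ordinary Ed25519 signatures
# illustrate the gap between "a human made this" and "THIS human made this."
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

# An issuer (think: an Orb operator) attests that a key belongs to SOME
# verified, unique human -- deliberately without saying who.
issuer_key = Ed25519PrivateKey.generate()
human_key = Ed25519PrivateKey.generate()
human_pub = human_key.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)
attestation = issuer_key.sign(b"verified-human:" + human_pub)

# Proof of humanness: content signed with an attested key.
content = b"This post was written by a person."
content_sig = human_key.sign(content)

# Verifying both signatures proves a real, unique human produced the
# content -- but NOT which human. Pinning slop on a specific person
# would require a separate identity layer binding the key to a name.
issuer_key.public_key().verify(attestation, b"verified-human:" + human_pub)
human_key.public_key().verify(content_sig, content)
print("Some verified human signed this content. Identity: unknown.")
```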

But is it part of YOUR company’s mission? Talk to Bredemarket about getting your anti-fraud message out there: https://bredemarket.com/mark/

Deepfake Voices Have Been Around Since the 1980s

(Part of the biometric product marketing expert series)

Inland Empire locals know why THIS infamous song is stuck in my head today.

“Blame It On The Rain,” (not) sung by Milli Vanilli.

For those who don’t know the story, Rob Pilatus and Fab Morvan performed as the band Milli Vanilli and released an extremely successful album produced by Frank Farian. The title? “Girl You Know It’s True.”

But while we were listening to and watching Pilatus and Morvan sing, we were actually hearing the voices of Charles Shaw, John Davis, and Brad Howell. So technically this wasn’t a modern deepfake: rather than imitating the voice of a known person, Shaw et al. were providing the voice of an unknown person. But the purpose was still deception.

Anyway, the ruse was revealed, Pilatus and Morvan were sacrificed, and things got worse.

“Pilatus, in particular, found it hard to cope, battling substance abuse and legal troubles. His tragic death in 1998 from a suspected overdose marked a sad epilogue to the Milli Vanilli saga.”

But there were certainly other examples of voice deepfakes in the 20th century…take Rich Little.

So deepfake voices aren’t a new problem. It’s just that they’re a lot easier to create today…which means that a lot of fraudsters can use them easily.

And if you are an identity/biometric marketing leader who needs Bredemarket’s help to market your anti-deepfake product, schedule a free meeting with me at https://bredemarket.com/mark/.

Grok, Celebrities, and Music

As some of you know, my generative AI tool of choice has been Google Gemini, which incorporates guardrails against portraying celebrities. Grok has fewer guardrails.

My main purpose in creating the two Bill and Hillary Clinton videos (at the beginning of this compilation reel) was to see how Grok would handle references to copyrighted music. I didn’t expect to hear actual songs, but would Grok try to approximate the sounds of Lindsey-Stevie-Christine era Mac and the Sex Pistols? You be the judge.

And as for Prince and Johnny…you be the judge of that also.

AI created by Grok.
AI created by Grok.

Using Grok For Evil: Deepfake Celebrity Endorsement

Using Grok for evil: a deepfake celebrity endorsement of Bredemarket?

Although in the video the fake Taylor Swift ends up looking a little like a fake Drew Barrymore.

Needless to say, I’m taking great care to fully disclose that this is a deepfake.

But some people don’t.

What is Truth? (What you see may not be true.)

I just posted the latest edition of my LinkedIn newsletter, “The Wildebeest Speaks.” It examines the history of deepfakes / likenesses, including the Émile Cohl animated cartoon Fantasmagorie, my own deepfake / likeness creations, and the deepfake / likeness of Sam Altman committing a burglary, authorized by Altman himself. Unfortunately, some deepfakes are NOT authorized, and that’s a problem.

Read my article here: https://www.linkedin.com/pulse/what-truth-bredemarket-jetmc/


In the PLoS One Voice Deepfake Detection Test, the Key Word is “Participants”

(Part of the biometric product marketing expert series)

A recent PYMNTS article entitled “AI Voices Are Now Indistinguishable From Humans, Experts Say” includes the following about voice deepfakes:

“A new PLoS One study found that artificial intelligence has reached a point where cloned voices are indistinguishable from genuine ones. In the experiment, participants were asked to tell human voices from AI-generated ones across 80 samples. Cloned voices were mistaken for real in 58% of cases, while human voices were correctly identified only 62% of the time.”

What the study didn’t measure

Since you already read the title of this post, you know that I’m concentrating on the word “participants.”

The PLoS One experiment used PEOPLE to try to distinguish real voices from deepfake ones.

And people aren’t all that accurate. Never have been.

Picture from Google Gemini.

Before you decide that people can’t detect fake voices…

…why not have an ALGORITHM give it a try?
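To show what that might look like, here is a toy Python sketch of an algorithmic detector: crude spectral features feeding a logistic regression classifier. Everything in it, including the stand-in “real” and “fake” waveforms, is synthetic and invented for illustration; production voice deepfake detectors use far richer features and large labeled corpora.

```python
# Toy sketch of an algorithmic voice deepfake detector. NOT a production
# system: real detectors use rich spectro-temporal features and large
# labeled datasets. All "audio" below is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def spectral_features(wave: np.ndarray, sr: int = 16000) -> np.ndarray:
    """Crude features: log-magnitude spectrum statistics plus centroid."""
    spectrum = np.abs(np.fft.rfft(wave))
    log_spec = np.log1p(spectrum)
    freqs = np.fft.rfftfreq(len(wave), d=1.0 / sr)
    centroid = np.sum(freqs * spectrum) / np.sum(spectrum)
    return np.array([log_spec.mean(), log_spec.std(), centroid])

def fake_wave(n: int = 16000) -> np.ndarray:
    # Stand-in for a synthetic voice: an overly clean harmonic stack.
    t = np.arange(n) / 16000
    wave = sum(np.sin(2 * np.pi * f * t) for f in (220, 440, 660))
    return wave + 0.01 * rng.standard_normal(n)

def real_wave(n: int = 16000) -> np.ndarray:
    # Stand-in for a human voice: noisier, less regular signal.
    t = np.arange(n) / 16000
    jitter = 1 + 0.02 * rng.standard_normal()
    return np.sin(2 * np.pi * 220 * t * jitter) + 0.2 * rng.standard_normal(n)

X = np.array([spectral_features(fake_wave()) for _ in range(200)] +
             [spectral_features(real_wave()) for _ in range(200)])
y = np.array([1] * 200 + [0] * 200)  # 1 = fake, 0 = real

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"Toy detector accuracy: {clf.score(X_test, y_test):.0%}")
```

The point isn’t the toy accuracy number; it’s that an algorithm can be measured, tuned, and scaled in a way that human “participants” cannot.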

What the study did measure

But to be fair, that wasn’t the goal of the PLoS One study, which specifically focused on human perception.

“Recently, an intriguing effect was reported in AI-generated faces, where such face images were perceived as more human than images of real humans – a “hyperrealism effect.” Here, we tested whether a “hyperrealism effect” also exists for AI-generated voices.”

For the record, the researchers did NOT discover a hyperrealism effect in AI-generated voices.

Do you offer a solution?

But if future deepfake voices sound realer than real, then we will REALLY need the algorithms to spot the fakes.

And if your company has a voice deepfake detection solution, I could have talked about it right now in this post.

Or on your website.

Or on your social media.

Where your prospects can see it…and purchase it.

And money in your pocket is realer than real.

Let’s talk. https://bredemarket.com/mark/

Picture from Google Gemini.

The Semantics of “Likeness” vs. “Deepfake”

A quote from YK Hong, from the post at: https://www.instagram.com/p/DPWAy2mEoRF/

“My current recommendation is strongly against uploading your biometrics to OpenAI’s new social feed app, Sora (currently in early release).

“Sora’s Cameo option has the user upload their own biometrics to create voice/video Deepfakes of themselves. The user can also set their preferences to allow others to create Deepfakes of each other, too.

“This is a privacy and security nightmare.”

Deepfake.

As I read this, one thing hit me: the intentional use of the word “deepfake,” with its negative connotations.

I had the sneaking suspicion that the descriptions of Cameo didn’t use the word “deepfake” to describe the feature.

And when I read https://help.openai.com/en/articles/12435986-generating-content-with-cameos I discovered I was right. OpenAI calls it a “likeness” or a “character” or…a cameo.

“Cameos are reusable “characters” built from a short video-and-audio capture of you. They let you appear in Sora videos, made by you or by specific people you approve, using a realistic version of your likeness and voice. When you create a cameo, you choose who can use it (e.g., only you, people you approve, or broader access).”

Likeness.

The entire episode shows the power of words. If you substitute a positive word such as “likeness” for a negative word such as “deepfake”—or vice versa—you exercise the power of words to color the entire conversation.

Another example from many years ago was an ad from the sugar lobby that intentionally denigrated the “artificial” competitors to all-natural sugar. It was very effective for its time, when the old promise of better living through chemicals had come to be regarded with horror.

Google Gemini.

Remember this in your writing.

Or I can remember it for you if Bredemarket writes for you. Talk to me: https://bredemarket.com/mark/

The right words.

Battling deepfakes with…IAL3?

(Picture designed by Freepik.)

The information in this post is taken from the summary of this year’s Biometrics Institute Industry Survey and is presented under the following authority:

“You are welcome to use the information from this survey with a reference to its source, Biometrics Institute Industry Survey 2025. The full report, slides and graphics are available to Biometrics Institute members.”

But even the freebie stuff is valuable, including this citation of two concerns expressed by survey respondents:

“Against a backdrop of ongoing concerns around deepfakes, 85% agreed or agreed strongly that deepfake technology poses a significant threat to the future of biometric recognition, which was similar to 2024.

“And two thirds of respondents (67%) agreed or agreed strongly that supervised biometric capture is crucial to safeguard against spoofing and injection attacks.”

Supervised biometric capture? Where have we heard that before?

IAL3 (the highest Identity Assurance Level in NIST SP 800-63A) requires “[p]hysical presence” for identity proofing. However, the proofing agent may “attend the identity proofing session via a CSP-controlled kiosk or device.” In other words, supervised enrollment.
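For illustration only, that supervision rule boils down to a simple predicate. Here’s a minimal Python sketch; the field names are my own invention, not anything from a NIST schema.

```python
# Toy model of the supervision rule described above (NIST SP 800-63A
# IAL3, as paraphrased in this post). Field names are my own invention,
# not from any NIST schema.
from dataclasses import dataclass

@dataclass
class ProofingSession:
    applicant_physically_present: bool  # applicant is at a physical location
    agent_in_person: bool               # proofing agent in the same room
    agent_via_csp_kiosk: bool           # agent attends via a CSP-controlled kiosk or device

def satisfies_ial3_supervision(s: ProofingSession) -> bool:
    # Physical presence is required, and the proofing agent must supervise,
    # either in person or through equipment the CSP controls.
    # Unsupervised remote capture does not qualify.
    return s.applicant_physically_present and (
        s.agent_in_person or s.agent_via_csp_kiosk
    )

print(satisfies_ial3_supervision(ProofingSession(True, False, True)))   # True: kiosk-supervised
print(satisfies_ial3_supervision(ProofingSession(True, False, False)))  # False: unsupervised
```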

Now neither remote supervised enrollment nor in-person supervised enrollment is a 100.00000% guard against deepfakes. The subject could be wearing a REALLY REALLY good mask. But it’s better than unsupervised enrollment.

How does your company battle deepfakes?

How do you tell your clients about your product?

Do you need product marketing assistance? Talk to Bredemarket.

Us, Them, Pornographic Deepfakes, and Guardrails

(Imagen 4)

Some of you may remember the Pink Floyd song “Us and Them.” The band had a history of examining things from different perspectives, to the point where Roger Waters and the band subsequently conceived a very long-playing record (actually two records) derived from a single incident of Waters spitting on a member of the audience.

Is it OK to spit on the audience…or does this raise the threat of the audience spitting on you? Things appear different when you’re the recipient.

And yes, this has everything to do with generative artificial intelligence and pornographic deepfakes. Bear with me here.

Non-Consensual Activity in AI Apps

My former IDEMIA colleague Peter Kirkwood recently shared an observation on the pace of innovation and the accompanying risks.

“I’m a strong believer in the transformative potential of emerging technologies. The pace of innovation brings enormous benefits, but it also introduces risks we often can’t fully anticipate or regulate until the damage is already visible.”

Kirkwood then linked to an instance in which the technology is moving faster than the business and legal processes: specifically, Bernard Marr’s LinkedIn article “AI Apps Are Undressing Women Without Consent And It’s A Problem.”

Marr begins by explaining what “nudification apps” can do, and notes the significant financial profits that criminals can realize by employing them. Marr then outlines what various countries are doing to battle nudification apps and their derived content, including the United States, the United Kingdom, China, and Australia.

But then Marr notes why some people don’t take nudification all that seriously.

“One frustration for those campaigning for a solution is that authorities haven’t always seemed willing to treat AI-generated image abuse as seriously as they would photographic image abuse, due to a perception that ‘it isn’t real’.”

First they created the deepfakes of the hot women

After his experiences under the Nazi regime, in which he transitioned from sympathizer to prisoner, Martin Niemöller frequently discussed how those who first “came for the Socialists” would eventually come for the trade unionists, then the Jews…then ourselves.

And I’m sure that some of you believe I’m trivializing Niemöller’s statement by equating deepfake creation with persecution of socialists. After all, deepfakes aren’t real.

But the effects of deepfakes are real, as Psychology Today notes:

“Being targeted by deepfake nudes is profoundly distressing, especially for adolescents and young adults. Deepfake nudes violate an individual’s right to bodily autonomy—the control over one’s own body without interference. Victims experience a severe invasion of privacy and may feel a loss of control over their bodies, as their likeness is manipulated without consent. This often leads to shame, anxiety, and a decreased sense of self-worth. Fear of social ostracism can also contribute to anxiety, depression, and, in extreme cases, suicidal thoughts.”

And again I raise the question. If it’s OK to create realistic-looking pornographic deepfakes of hot women you don’t know, or of children you don’t know, then is it also OK to create realistic-looking pornographic deepfakes of your own family…or of you?

Guardrails

Imagen 4.

The difficulty, of course, is enforcing guardrails to stop this activity. Even if most governments are in agreement, and most businesses (such as Meta and Alphabet) are in agreement, “most” does not equal “all.” And as long as there is a market for pornographic deepfakes, someone will satisfy the demand.

My Appearances in Biometric Update in 2015, 2025…and 2035?

Depending upon your background, the fact that I’ve appeared in Biometric Update twice may or may not be a big deal to you. But I’m happy about it.

Biometric Update is a Canada-based publication that…um…self-identifies as follows:

“We provide the world’s leading news coverage and information on the global biometric technology market via the web and an exclusive daily newsletter. Our daily biometrics updates, industry perspectives, interviews, columns and in-depth features explore a broad range of modalities and methods, from fingerprint, voice, iris, and facial recognition, to cutting-edge technologies like DNA analysis and gait recognition, related identification tools such as behavioral biometrics, and non-biometric identification methods such as identity document verification and telephone forensics. Our coverage touches on all applications and issues dealt with in the sector, including national security, mobile identity, and border control, with a special emphasis on UN Sustainable Development Goal 16.9 to provide universal digital identification and the ID4Africa movement.”

Over the last ten years, there have been two instances in which I have been newsworthy.

2015 with MorphoTrak

The first occurred in 2015, when my then-employer MorphoTrak exhibited an airport gate called MorphoWay at a conference then known as connect:ID. At the 2015 show, I demonstrated MorphoWay for Biometric Update’s videographer.

Me at connect:ID, 2015.

“In the video, Bredehoft scans his passport through the document reader, which checks the passport against a database to verify that it is, in fact, a CBP-authorized document.

“Once verified, the gates automatically open to allow Bredehoft to exit the area.”

2025 with Bredemarket

The second occurred ten years later in 2025, when I wrote a guest opinion piece entitled “Opinion: Vendors must disclose responsible uses of biometric data.” As I previously mentioned, I discussed the need to obtain consent for use of biometric data in certain instances, and noted:

“Some government agencies, private organizations, and biometric vendors have well-established procedures for acquiring the necessary consents.

“Others? Well…”

Biometric Update didn’t create a video this time around, but I did.

Biometric vendors…

2035???

So now that I’ve established a regular cadence for my appearances in Biometric Update, I fully expect to make a third appearance in 2035.

Because of my extensive biometric background, I predict that my 2035 appearance will concern the use of quantum computing to distinguish between a person and their fabricated clone using QCID (quantum clone identification).

No video yet, because I don’t know what video technology will be like ten years from now. So here’s an old-fashioned 2D picture.

Imagen 4.