Grok, Celebrities, and Music

As some of you know, my generative AI tool of choice has been Google Gemini, which incorporates guardrails against portraying celebrities. Grok has fewer guardrails.

My main purpose in creating the two Bill and Hillary Clinton videos (at the beginning of this compilation reel) was to see how Grok would handle references to copyrighted music. I didn’t expect to hear actual songs, but would Grok try to approximate the sounds of Lindsey-Stevie-Christine era Mac and the Sex Pistols? You be the judge.

And as for Prince and Johnny…you be the judge of that also.

AI created by Grok.

Using Grok For Evil: Deepfake Celebrity Endorsement

Using Grok for evil: a deepfake celebrity endorsement of Bredemarket?

Although in the video the fake Taylor Swift ends up looking a little like a fake Drew Barrymore.

Needless to say, I’m taking great care to fully disclose that this is a deepfake.

But some people don’t.

What is Truth? (What you see may not be true.)

I just posted the latest edition of my LinkedIn newsletter, “The Wildebeest Speaks.” It examines the history of deepfakes / likenesses, including the Émile Cohl animated cartoon Fantasmagorie, my own deepfake / likeness creations, and the deepfake / likeness of Sam Altman committing a burglary, authorized by Altman himself. Unfortunately, some deepfakes are NOT authorized, and that’s a problem.

Read my article here: https://www.linkedin.com/pulse/what-truth-bredemarket-jetmc/

Office.

In the PLoS One Voice Deepfake Detection Test, the Key Word is “Participants”

(Part of the biometric product marketing expert series)

A recent PYMNTS article entitled “AI Voices Are Now Indistinguishable From Humans, Experts Say” includes the following about voice deepfakes:

“A new PLoS One study found that artificial intelligence has reached a point where cloned voices are indistinguishable from genuine ones. In the experiment, participants were asked to tell human voices from AI-generated ones across 80 samples. Cloned voices were mistaken for real in 58% of cases, while human voices were correctly identified only 62% of the time.”

What the study didn’t measure

Since you already read the title of this post, you know that I’m concentrating on the word “participants.”

The PLoS One experiment used PEOPLE to try to distinguish real voices from deepfake ones.

And people aren’t all that accurate. Never have been.
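To put those two percentages together, here’s a quick back-of-the-envelope calculation. (I’m assuming, which the quote doesn’t state, that the samples were split evenly between real and cloned voices.)

```python
# Back-of-the-envelope check of the quoted PLoS One figures.
# Assumption (not stated in the quote): real and cloned samples are split 50/50.

cloned_mistaken_for_real = 0.58   # cloned voices judged "real"
human_correct = 0.62              # human voices correctly judged "real"

cloned_correct = 1 - cloned_mistaken_for_real        # cloned voices correctly flagged as AI
overall_accuracy = (cloned_correct + human_correct) / 2

print(f"Overall listener accuracy: {overall_accuracy:.0%}")  # roughly 52%
```

Roughly 52% overall. Barely better than flipping a coin.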

Picture from Google Gemini.

Before you decide that people can’t detect fake voices…

…why not have an ALGORITHM give it a try?
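For the curious, here is a minimal sketch of what letting an algorithm give it a try could look like: summarize each clip with spectral features, then score it with a trained classifier. The file names, the MFCC features, and the logistic regression model are all illustrative assumptions on my part, not any particular vendor’s detection product.

```python
# Minimal sketch of automated voice deepfake detection (illustrative only).
# Assumes a small labeled set of audio clips; librosa and scikit-learn stand in
# for whatever a real detection vendor would actually deploy.
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression

def clip_features(path: str) -> np.ndarray:
    """Summarize a clip as the mean of its MFCC frames (a crude spectral fingerprint)."""
    audio, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=20)
    return mfcc.mean(axis=1)

# Hypothetical training data: clip paths and labels (1 = genuine human, 0 = AI-generated clone).
train_paths = ["real_01.wav", "real_02.wav", "clone_01.wav", "clone_02.wav"]
train_labels = [1, 1, 0, 0]

X_train = np.vstack([clip_features(p) for p in train_paths])
clf = LogisticRegression(max_iter=1000).fit(X_train, train_labels)

# Score a new clip: probability that the voice is a genuine human.
probe = clip_features("unknown_caller.wav").reshape(1, -1)
print(f"P(genuine human voice) = {clf.predict_proba(probe)[0, 1]:.2f}")
```

A real detection product would use far richer features, models, and training data; this only illustrates the workflow.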

What the study did measure

But to be fair, that wasn’t the goal of the PLoS One study, which specifically focused on human perception.

“Recently, an intriguing effect was reported in AI-generated faces, where such face images were perceived as more human than images of real humans – a “hyperrealism effect.” Here, we tested whether a “hyperrealism effect” also exists for AI-generated voices.”

For the record, the researchers did NOT discover a hyperrealism effect in AI-generated voices.

Do you offer a solution?

But if future deepfake voices sound realer than real, then we will REALLY need the algorithms to spot the fakes.

And if your company has a voice deepfake detection solution, I could have talked about it right now in this post.

Or on your website.

Or on your social media.

Where your prospects can see it…and purchase it.

And money in your pocket is realer than real.

Let’s talk. https://bredemarket.com/mark/

Picture from Google Gemini.

The Semantics of “Likeness” vs. “Deepfake”

A quote from YK Hong, from the post at: https://www.instagram.com/p/DPWAy2mEoRF/

“My current recommendation is strongly against uploading your biometrics to OpenAI’s new social feed app, Sora (currently in early release).

“Sora’s Cameo option has the user upload their own biometrics to create voice/video Deepfakes of themselves. The user can also set their preferences to allow others to create Deepfakes of each other, too.

“This is a privacy and security nightmare.”

Deepfake.

As I read this, one thing hit me: the intentional use of the word “deepfake,” with its negative connotations.

I had the sneaking suspicion that the descriptions of Cameo didn’t use the word “deepfake” to describe the feature.

And when I read https://help.openai.com/en/articles/12435986-generating-content-with-cameos I discovered I was right. OpenAI calls it a “likeness” or a “character” or…a cameo.

“Cameos are reusable “characters” built from a short video-and-audio capture of you. They let you appear in Sora videos, made by you or by specific people you approve, using a realistic version of your likeness and voice. When you create a cameo, you choose who can use it (e.g., only you, people you approve, or broader access).”

Likeness.

The entire episode shows the power of words. If you substitute a positive word such as “likeness” for a negative word such as “deepfake,” or vice versa, you exercise the power to color the entire conversation.

Another example from many years ago was an ad from the sugar lobby that intentionally denigrated the “artificial” competitors to all-natural sugar. It was very effective for its time, when the old promise of better living through chemicals had come to be regarded with horror.

Google Gemini.

Remember this in your writing.

Or I can remember it for you if Bredemarket writes for you. Talk to me: https://bredemarket.com/mark/

The right words.

Battling deepfakes with…IAL3?

(Picture designed by Freepik.)

The information in this post is taken from the summary of this year’s Biometrics Institute Industry Survey and is presented under the following authority:

“You are welcome to use the information from this survey with a reference to its source, Biometrics Institute Industry Survey 2025. The full report, slides and graphics are available to Biometrics Institute members.”

But even the freebie stuff is valuable, including this citation of two concerns expressed by survey respondents:

“Against a backdrop of ongoing concerns around deepfakes, 85% agreed or agreed strongly that deepfake technology poses a significant threat to the future of biometric recognition, which was similar to 2024.

“And two thirds of respondents (67%) agreed or agreed strongly that supervised biometric capture is crucial to safeguard against spoofing and injection attacks.”

Supervised biometric capture? Where have we heard that before?

IAL3 (Identity Assurance Level 3 in NIST SP 800-63A) requires “[p]hysical presence” for identity proofing. However, the proofing agent may “attend the identity proofing session via a CSP-controlled kiosk or device.” In other words, supervised enrollment.

Now, neither remote supervised enrollment nor in-person supervised enrollment is a 100.00000% guard against deepfakes. The subject could be wearing a REALLY REALLY good mask. But it’s better than unsupervised enrollment.

How does your company battle deepfakes?

How do you tell your clients about your product?

Do you need product marketing assistance? Talk to Bredemarket.

Us, Them, Pornographic Deepfakes, and Guardrails

(Imagen 4)

Some of you may remember the Pink Floyd song “Us and Them.” The band had a history of examining things from different perspectives, to the point where Roger Waters and the band subsequently conceived a very long playing record (actually two records) derived from a single incident of Waters spitting on a member of the audience.

Is it OK to spit on the audience…or does this raise the threat of the audience spitting on you? Things appear different when you’re the recipient.

And yes, this has everything to do with generative artificial intelligence and pornographic deepfakes. Bear with me here.

Non-Consensual Activity in AI Apps

My former IDEMIA colleague Peter Kirkwood recently shared an observation on the pace of innovation and the accompanying risks.

“I’m a strong believer in the transformative potential of emerging technologies. The pace of innovation brings enormous benefits, but it also introduces risks we often can’t fully anticipate or regulate until the damage is already visible.”

Kirkwood then linked to an instance in which the technology is moving faster than the business and legal processes: specifically, Bernard Marr’s LinkedIn article “AI Apps Are Undressing Women Without Consent And It’s A Problem.”

Marr begins by explaining what “nudification apps” can do and notes the significant financial profits that criminals can realize by employing them. Marr then outlines what various countries, including the United States, the United Kingdom, China, and Australia, are doing to battle nudification apps and their derived content.

But then Marr notes why some people don’t take nudification all that seriously.

“One frustration for those campaigning for a solution is that authorities haven’t always seemed willing to treat AI-generated image abuse as seriously as they would photographic image abuse, due to a perception that it isn’t ‘real’.”

First they created the deepfakes of the hot women

After his experiences under the Nazi regime, in which he transitioned from sympathizer to prisoner, Martin Niemoller frequently discussed how those who first “came for the Socialists” would eventually come for the trade unionists, then the Jews…then ourselves.

And I’m sure that some of you believe I’m trivializing Niemoller’s statement by equating deepfake creation with persecution of socialists. After all, deepfakes aren’t real.

But the effects of deepfakes are real, as Psychology Today notes:

“Being targeted by deepfake nudes is profoundly distressing, especially for adolescents and young adults. Deepfake nudes violate an individual’s right to bodily autonomy—the control over one’s own body without interference. Victims experience a severe invasion of privacy and may feel a loss of control over their bodies, as their likeness is manipulated without consent. This often leads to shame, anxiety, and a decreased sense of self-worth. Fear of social ostracism can also contribute to anxiety, depression, and, in extreme cases, suicidal thoughts.”

And again I raise the question. If it’s OK to create realistic-looking pornographic deepfakes of hot women you don’t know, or of children you don’t know, then is it also OK to create realistic-looking pornographic deepfakes of your own family…or of you?

Guardrails

Imagen 4.

The difficulty, of course, is enforcing guardrails to stop this activity. Even if most of the governments are in agreement, and most of the businesses (such as Meta and Alphabet) are in agreement, “most” does not equal “all.” And as long as there is a market for pornographic deepfakes, someone will satisfy the demand.

My Appearances in Biometric Update in 2015, 2025…and 2035?

Depending upon your background, the fact that I’ve appeared in Biometric Update twice may or may not be a big deal to you. But I’m happy about it.

Biometric Update is a Canadian-based publication that…um…self-identifies as follows:

“We provide the world’s leading news coverage and information on the global biometric technology market via the web and an exclusive daily newsletter. Our daily biometrics updates, industry perspectives, interviews, columns and in-depth features explore a broad range of modalities and methods, from fingerprint, voice, iris, and facial recognition, to cutting-edge technologies like DNA analysis and gait recognition, related identification tools such as behavioral biometrics, and non-biometric identification methods such as identity document verification and telephone forensics. Our coverage touches on all applications and issues dealt with in the sector, including national security, mobile identity, and border control, with a special emphasis on UN Sustainable Development Goal 16.9 to provide universal digital identification and the ID4Africa movement.”

Over the last ten years, there have been two instances in which I have been newsworthy.

2015 with MorphoTrak

The first occurred in 2015, when my then-employer MorphoTrak exhibited an airport gate called MorphoWay at a conference then known as connect:ID. At the 2015 show, I demonstrated MorphoWay for Biometric Update’s videographer.

Me at connect:ID, 2015.

“In the video, Bredehoft scans his passport through the document reader, which checks the passport against a database to verify that it is, in fact, a CBP-authorized document.

“Once verified, the gates automatically open to allow Bredehoft to exit the area.”

2025 with Bredemarket

The second occurred ten years later in 2025, when I wrote a guest opinion piece entitled “Opinion: Vendors must disclose responsible uses of biometric data.” As I previously mentioned, I discussed the need to obtain consent for use of biometric data in certain instances, and noted:

“Some government agencies, private organizations, and biometric vendors have well-established procedures for acquiring the necessary consents.

“Others? Well…”

Biometric Update didn’t create a video this time around, but I did.

Biometric vendors…

2035???

So now that I’ve established a regular cadence for my appearances in Biometric Update, I fully expect to make a third appearance in 2035.

Because of my extensive biometric background, I predict that my 2035 appearance will concern the use of quantum computing to distinguish between a person and their fabricated clone using QCID (quantum clone identification).

No video yet, because I don’t know what video technology will be like ten years from now. So here’s an old-fashioned 2D picture.

Imagen 4.

Some Voice Deepfakes Are NOT Fraudulent

(Part of the biometric product marketing expert series)

I’ve spent a ton of time discussing naughty people who use technology to create deepfakes—including voice deepfakes—to defraud people.

But some deepfakes don’t use technology, and some deepfakes are not intended to defraud.

Take Mark Hamill’s impersonation of fellow actor Harrison Ford.

Mark Hamill as Harrison Ford, and Harrison Ford reacting to Mark Hamill.

And then there was a case that I guess could be classified as fraud…at least to Don Pardo’s sister-in-law.

Don Pardo was originally known as an announcer on NBC game shows, and his distinctive voice could be heard on many of them, including (non-embeddable) parodies of them.

NBC jumped at the chance to employ that well-known voice as the announcer for the decidedly non-game television show Saturday Night Live, where Pardo traded dialogue with the likes of Frank Zappa.

“I’m the Slime.”

Except for a brief period after he ran afoul of Michael O’Donoghue, Pardo was a fixture on SNL for decades, through the reigns of various producers and executive producers.

Until one night in 1999 when laryngitis got the best of Don Pardo, and the show had to turn to Bill Clinton.

No, not the real Bill Clinton.

I’m talking about the SNL cast member who did a voice impression of Bill Clinton (and Jeopardy loser Sean Connery), Darrell Hammond. Who proceeded to perform an impression of Don Pardo.

An impression that even fooled Don Pardo’s sister-in-law.

This is Don Pardo saying this is Don Pardo…

Pardo continued to be Saturday Night Live’s announcer for years after that, sometimes live from New York, sometimes on tape from his home in Arizona.

And when Pardo passed away in 2014, he was succeeded as SNL’s announcer by former cast member Darrell Hammond.

Who used his own voice.