It’s a Deepfake…World

Remember the Church Lady’s saying, “Well, how convenient”?

People weren’t laughing at Joel R. McConvey when he reminded us of a different saying:

“In Silicon Valley parlance, ‘create the problem, sell the solution.'”

Joel R. McConvey’s “tale of two platforms”

McConvey was referring to two different Sam Altman investments. One, OpenAI’s newly released Sora 2, amounts to a deepfake “slop machine” that is flooding our online, um, world with fakery. This concerns many, including SAG-AFTRA president Sean Astin. He doesn’t want his union members to lose their jobs to the Tilly Norwoods out there.

The deepfake “sea of slop,” as depicted by Google Gemini.

If only there were a way to tell the human content from the non-person entity (NPE) content. Another Sam Altman investment, World (formerly Worldcoin), just happens to provide a solution to humanness detection.

“What if we could reduce the efficacy of deepfakes? Proof of human technology provides a promising tool. By establishing cryptographic proof that you’re interacting with a real, unique human, this technology addresses the root of the problem. It doesn’t try to determine if content is fake; it ensures the source is real from the start.”

Google Gemini. Not an accurate depiction of the Orb, but it’s really cool.

All credit to McConvey for tying these differing Altman efforts together in his Biometric Update article.

World is not enough

But World’s solution is partial at best.

As I’ve said before, proof of humanness is only half the battle. Even if you’ve detected humanness, some humans are capable of their own slop, and to solve the human slop problem you need to prove WHICH human is responsible for something.

Which is something decidedly outside of World’s mission.
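That second half of the battle, proving WHICH human is responsible, can be sketched in code. This is a minimal toy, not World’s method or any real product: it uses HMAC with shared secrets as a stand-in for the public-key content credentials a real system would use, and the registry names and keys are hypothetical.

```python
import hmac
import hashlib

# Hypothetical registry of verified humans. In a real deployment this would
# be a credential system (proof of humanness), not a dict of shared secrets.
REGISTRY = {
    "alice": b"alice-secret-key",
    "bob": b"bob-secret-key",
}

def sign_content(person: str, content: bytes) -> str:
    """Bind a piece of content to one specific registered human."""
    key = REGISTRY[person]  # raises KeyError if this "human" isn't verified
    return hmac.new(key, content, hashlib.sha256).hexdigest()

def verify_content(person: str, content: bytes, tag: str) -> bool:
    """Check both halves: a verified human, AND which human."""
    key = REGISTRY.get(person)
    if key is None:
        return False  # fails proof of humanness
    expected = hmac.new(key, content, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)  # fails proof of WHICH human

post = b"This post is mine."
tag = sign_content("alice", post)
print(verify_content("alice", post, tag))         # True: right human
print(verify_content("bob", post, tag))           # False: wrong human
print(verify_content("alice", b"tampered", tag))  # False: altered content
```

The point of the sketch: humanness detection alone only gets you past the `REGISTRY` lookup; attribution requires the tag to match a specific person’s key.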

But is it part of YOUR company’s mission? Talk to Bredemarket about getting your anti-fraud message out there: https://bredemarket.com/mark/

Deepfake Voices Have Been Around Since the 1980s

(Part of the biometric product marketing expert series)

Inland Empire locals know why THIS infamous song is stuck in my head today.

“Blame It On The Rain,” (not) sung by Milli Vanilli.

For those who don’t know the story, Rob Pilatus and Fab Morvan performed as the band Milli Vanilli and released an extremely successful album produced by Frank Farian. The title? “Girl You Know It’s True.”

But while we were listening to and watching Pilatus and Morvan sing, we were actually hearing the voices of Charles Shaw, John Davis, and Brad Howell. So technically this wasn’t a modern deepfake: rather than imitating the voice of a known person, Shaw et al. were providing the voice of an unknown person. But the purpose was still deception.

Anyway, the ruse was revealed, Pilatus and Morvan were sacrificed, and things got worse.

“Pilatus, in particular, found it hard to cope, battling substance abuse and legal troubles. His tragic death in 1998 from a suspected overdose marked a sad epilogue to the Milli Vanilli saga.”

But there were certainly other examples of voice deepfakes in the 20th century…take Rich Little.

So deepfake voices aren’t a new problem. It’s just that they’re a lot easier to create today…which means that a lot more fraudsters can use them.

And if you are an identity/biometric marketing leader who needs Bredemarket’s help to market your anti-deepfake product, schedule a free meeting with me at https://bredemarket.com/mark/.

Identity and Expression

(Part of the biometric product marketing expert series)

Whether you are a human or a non-person entity (NPE) with facial recognition capability, you rely on visual cues to positively identify or authenticate a person. Let’s face it: many people resemble each other, but specific facial expressions or emotions are not always shared by people who otherwise look alike.

All pictures Google Gemini.

But in one of those oddities that fill the biometric world, you can have TOO MUCH expression. Part 3 of International Civil Aviation Organization (ICAO) Document 9303, which governs machine readable travel documents, mandates that faces on travel documents maintain a neutral expression without smiling. At the time (2003) it was believed that facial recognition algorithms would work best if the subject were expressionless. I don’t know if that holds true today.

But beyond the banished smile, any further suppression of expression or emotion degrades identification capability significantly. For example, closing the eyes not only degrades facial recognition, but is obviously fatal to iris recognition.

And if you remove the landmarks upon which facial recognition depends, identification is impossible.

While expression or lack thereof does not invalidate the assumption of permanence of the biometric authentication factor, it does govern the ability of people and machines to perform identification or authentication.
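The expression rules above can be sketched as a simple acceptance gate. This is illustrative only: the smile and eyes-open scores are assumed to come from some upstream face-attribute estimator (scored 0.0 to 1.0), and the 0.2/0.8 thresholds are hypothetical, not values from ICAO Document 9303.

```python
def icao_portrait_ok(smile_score: float, eyes_open_score: float) -> tuple[bool, str]:
    """Gate a portrait on ICAO 9303-style expression rules:
    neutral expression (no smile) and eyes open.
    Thresholds are illustrative assumptions, not from the standard."""
    if smile_score > 0.2:
        return False, "non-neutral expression (smiling)"
    if eyes_open_score < 0.8:
        return False, "eyes closed or partially closed"
    return True, "acceptable"

print(icao_portrait_ok(0.05, 0.95))  # acceptable: neutral, eyes open
print(icao_portrait_ok(0.70, 0.95))  # rejected: smiling
print(icao_portrait_ok(0.05, 0.30))  # rejected: eyes closed
```

Real enrollment systems perform many more checks (pose, lighting, occlusion), but the shape is the same: expression doesn’t change who you are, yet it can still keep a sample from being usable.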

Grok, Celebrities, and Music

As some of you know, my generative AI tool of choice has been Google Gemini, which incorporates guardrails against portraying celebrities. Grok has fewer guardrails.

My main purpose in creating the two Bill and Hillary Clinton videos (at the beginning of this compilation reel) was to see how Grok would handle references to copyrighted music. I didn’t expect to hear actual songs, but would Grok try to approximate the sounds of Lindsey-Stevie-Christine era Mac and the Sex Pistols? You be the judge.

And as for Prince and Johnny…you be the judge of that also.

AI created by Grok.
AI created by Grok.

Using Grok For Evil: Deepfake Celebrity Endorsement

Using Grok for evil: a deepfake celebrity endorsement of Bredemarket?

Although in the video the fake Taylor Swift ends up looking a little like a fake Drew Barrymore.

Needless to say, I’m taking great care to fully disclose that this is a deepfake.

But some people don’t.

Removing the Guardrails: President Taylor Swift, Courtesy Grok

Most of my recent generative AI experiments have centered on Google Gemini…which has its limitations:

“Google Gemini imposes severe restrictions against creating pictures of famous figures. You can’t create a picture of President Taylor Swift, for example.”

Why does Google impose such limits? Because it is very sensitive to misleading the public, fearful that the average person would see such a picture and mistakenly assume that Taylor Swift IS the President. In our litigious society, perhaps this is valid.

But we know that other generative AI services don’t have such restrictions.

“One common accusation about Grok is that it lacks the guardrails that other AI services have.”

During a few spare moments this morning, I signed up for a Bredemarket Grok account. I have a personal X (Twitter) account, but haven’t used it in a long time, so this was a fresh sign-up.

And you know the first thing that I tried to do.

Grok.

Grok created it with no problem. Actually, there is one problem: Grok apparently is not a large multimodal model, and its image generator cannot precisely render text. But hey, no one will notice “TWIRSHIITE BOUSE,” will they?

But wait, there’s more! After I generated the image, I saw a button to generate a video. I thought that this required the paid service, but apparently the free service allows limited video generation.

Grok.

I may be conducting some video experiments some time soon. But will I maintain my ethics…and my sanity?