Is the Quantum Security Threat Solved Before It Arrives? Probably Not.

I’ll confess: there is a cybersecurity threat so…um…threatening that I didn’t even want to think about it.

You know the drill. The bad people use technology to come up with some security threat, and then the good people use technology to thwart it.

That’s what happens with antivirus. That’s what happens with deepfakes.

But I kept on hearing rumblings about a threat that would make all this obsolete.

The quantum threat and the possible 2029 “Q Day”

Today’s Q word is “quantum.”

But with great power comes great irresponsibility. Gartner said it:

“By 2029, ‘advances in quantum computing will make conventional asymmetric cryptography unsafe to use,’ Gartner said in a study.”

Frankly, this frightened me. Think of the possibilities that come from calculation superpowers: brute-force generation of passcodes, passwords, fingerprints, faces, ID cards, or whatever else is needed to hack into a security system. A billion different combinations? No problem.

So much for your unbreakable security system.

Thales’ implementation of NIST FIPS 204

Unless, that is, Thales has started to solve the problem. This is what Thales said:

“The good news is that technology companies, governments and standards agencies are well aware of the deadline. They are working on defensive strategies to meet the challenge — inventing cryptographic algorithms that run not just on quantum computers but on today’s conventional components.

“This technology has a name: post-quantum cryptography.

“There have already been notable breakthroughs. In the last few days, Thales launched a quantum-resistant smartcard: MultiApp 5.2 Premium PQC. It is the first smartcard to be certified by ANSSI, France’s national cybersecurity agency.

“The product uses new generation cryptographic signatures to protect electronic ID cards, health cards, driving licences and more from attacks by quantum computers.”

So what’s so special about the technology in the MultiApp 5.2 Premium PQC?

Thales used the NIST FIPS 204 standard “to define a digital signature algorithm for a new quantum-resistant smartcard: MultiApp 5.2 Premium PQC.”

Google Gemini.

The NIST FIPS 204 standard, “Module-Lattice-Based Digital Signature Standard,” can be found here. This is the abstract:

“Digital signatures are used to detect unauthorized modifications to data and to authenticate the identity of the signatory. In addition, the recipient of signed data can use a digital signature as evidence in demonstrating to a third party that the signature was, in fact, generated by the claimed signatory. This is known as non-repudiation since the signatory cannot easily repudiate the signature at a later time. This standard specifies ML-DSA, a set of algorithms that can be used to generate and verify digital signatures. ML-DSA is believed to be secure, even against adversaries in possession of a large-scale quantum computer.”

ML-DSA stands for “Module-Lattice-Based Digital Signature Algorithm.”

Google Gemini.

Now I’ll admit I don’t know a lattice from a vertical fence post, especially when it comes to quantum computing, so I’ll have to take NIST’s word for it that module lattices make for super-good security.
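The signing-and-verifying part is easier to grasp than the lattice math. Here is a minimal sketch of what generating and verifying an ML-DSA signature can look like in software. It assumes the open-source liboqs-python library (the `oqs` module) built with ML-DSA support; the library choice and the “ML-DSA-65” parameter set are my illustrative assumptions, and this is emphatically not Thales’ smartcard implementation.

```python
# Minimal ML-DSA (FIPS 204) sign/verify sketch.
# Assumes liboqs-python ("pip install liboqs-python") with ML-DSA enabled.
# Illustrative only; not Thales' MultiApp 5.2 Premium PQC implementation.
import oqs

ALG = "ML-DSA-65"  # one of the FIPS 204 parameter sets
message = b"Electronic ID record to be protected"

# Signer side: generate a quantum-resistant key pair and sign the message.
with oqs.Signature(ALG) as signer:
    public_key = signer.generate_keypair()
    signature = signer.sign(message)

# Verifier side: check the signature using only the public key.
with oqs.Signature(ALG) as verifier:
    if verifier.verify(message, signature, public_key):
        print("Signature verified: the record was not altered after signing.")
```

The whole point of FIPS 204 is that the math underneath those two calls is believed to hold up even against an adversary with a large-scale quantum computer.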

Certification, schmertification

The Thales technology was then tested by researchers to determine its Evaluation Assurance Level (EAL). The result? “Thales’ product won EAL6+ certification (the highest is EAL7).” (TechTarget explains the 7 evaluation assurance levels here.)

France’s national cybersecurity agency (ANSSI) then certified it.

However…

…remember that certifications mean squat.

For all we know, the fraudsters have already broken the protections in the FIPS 204 standard.

Google Gemini.

And the merry-go-round between fraudsters and fraud fighters continues.

If you need help spreading the word about YOUR anti-fraud solution, quantum or otherwise, schedule a free meeting with Bredemarket.

It’s a Deepfake…World

Remember the Church Lady’s saying, “Well, how convenient”?

People weren’t laughing at Joel R. McConvey when he reminded us of a different saying:

“In Silicon Valley parlance, ‘create the problem, sell the solution.'”

Joel R. McConvey’s “tale of two platforms”

McConvey was referring to two different Sam Altman investments. One, OpenAI’s newly released Sora 2, amounts to a deepfake “slop machine” that is flooding our online, um, world with fakery. This concerns many, including SAG-AFTRA president Sean Astin. He doesn’t want his union members to lose their jobs to the Tilly Norwoods out there.

The deepfake “sea of slop” was created by Google Gemini.

If only there were a way to tell the human content from the non-person entity (NPE) content. Another Sam Altman investment, World (formerly Worldcoin), just happens to provide a solution to humanness detection.

“What if we could reduce the efficacy of deepfakes? Proof of human technology provides a promising tool. By establishing cryptographic proof that you’re interacting with a real, unique human, this technology addresses the root of the problem. It doesn’t try to determine if content is fake; it ensures the source is real from the start.”

Google Gemini. Not an accurate depiction of the Orb, but it’s really cool.

All credit to McConvey for tying these differing Altman efforts together in his Biometric Update article.
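To make “cryptographic proof that you’re interacting with a real, unique human” a bit more concrete, here is a toy sketch of the general pattern: an issuer signs (attests to) a public key that belongs to a verified human, and content published under that key carries a signature anyone can check. This is my own simplified illustration using ordinary Ed25519 signatures; it is not World’s actual protocol (which involves the Orb and privacy machinery such as zero-knowledge proofs), and every name in it is hypothetical.

```python
# Toy "proof of human" sketch: a hypothetical issuer vouches for a key,
# and content signed with that key can be traced back to a verified human.
# Illustrative only; this is NOT World's protocol or API.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat
from cryptography.exceptions import InvalidSignature

# 1. The issuer (think: the humanness-verification step) has its own key pair.
issuer_key = Ed25519PrivateKey.generate()

# 2. A verified human gets a key pair, and the issuer signs the human's public key.
human_key = Ed25519PrivateKey.generate()
human_pub_bytes = human_key.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)
attestation = issuer_key.sign(human_pub_bytes)

# 3. The human signs a piece of content.
content = b"This post was written by a real person."
content_sig = human_key.sign(content)

# 4. Anyone can verify both links in the chain using public keys only.
try:
    issuer_key.public_key().verify(attestation, human_pub_bytes)  # key is attested
    human_key.public_key().verify(content_sig, content)           # content is signed
    print("Content verifiably comes from an attested human key.")
except InvalidSignature:
    print("Verification failed.")
```

The real systems add a lot on top of this (notably, ways to keep the attested key from being linked across sites), but the basic idea is the same: prove the source is real rather than trying to prove the content is fake.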

World is not enough

But World’s solution is partial at best.

As I’ve said before, proof of humanness is only half the battle. Even if you’ve detected humanness, some humans are capable of their own slop, and to solve the human slop problem you need to prove WHICH human is responsible for something.

Which is something decidedly outside of World’s mission.

But is it part of YOUR company’s mission? Talk to Bredemarket about getting your anti-fraud message out there: https://bredemarket.com/mark/

Deepfake Voices Have Been Around Since the 1980s

(Part of the biometric product marketing expert series)

Inland Empire locals know why THIS infamous song is stuck in my head today.

“Blame It On The Rain,” (not) sung by Milli Vanilli.

For those who don’t know the story, Rob Pilatus and Fab Morvan performed as the band Milli Vanilli and released an extremely successful album produced by Frank Farian. The title? “Girl You Know It’s True.”

But while we were listening to and watching Pilatus and Morvan sing, we were actually hearing the voices of Charles Shaw, John Davis, and Brad Howell. So technically this wasn’t a modern deepfake: rather than imitating the voice of a known person, Shaw et al. were providing the voice of an unknown person. But the purpose was still deception.

Anyway, the ruse was revealed, Pilatus and Morvan were sacrificed, and things got worse.

“Pilatus, in particular, found it hard to cope, battling substance abuse and legal troubles. His tragic death in 1998 from a suspected overdose marked a sad epilogue to the Milli Vanilli saga.”

But there were certainly other examples of voice deepfakes in the 20th century…take Rich Little.

So deepfake voices aren’t a new problem. It’s just that they’re a lot easier to create today…which means that a lot of fraudsters can use them easily.

And if you are an identity/biometric marketing leader who needs Bredemarket’s help to market your anti-deepfake product, schedule a free meeting with me at https://bredemarket.com/mark/.

Grok, Celebrities, and Music

As some of you know, my generative AI tool of choice has been Google Gemini, which incorporates guardrails against portraying celebrities. Grok has fewer guardrails.

My main purpose in creating the two Bill and Hillary Clinton videos (at the beginning of this compilation reel) was to see how Grok would handle references to copyrighted music. I didn’t expect to hear actual songs, but would Grok try to approximate the sounds of Lindsey-Stevie-Christine era Fleetwood Mac and the Sex Pistols? You be the judge.

And as for Prince and Johnny…you be the judge of that also.

AI created by Grok.
AI created by Grok.

Using Grok For Evil: Deepfake Celebrity Endorsement

Using Grok for evil: a deepfake celebrity endorsement of Bredemarket?

Although in the video the fake Taylor Swift ends up looking a little like a fake Drew Barrymore.

Needless to say, I’m taking great care to fully disclose that this is a deepfake.

But some people don’t.

What is Truth? (What you see may not be true.)

I just posted the latest edition of my LinkedIn newsletter, “The Wildebeest Speaks.” It examines the history of deepfakes / likenesses, including the Émile Cohl animated cartoon Fantasmagorie, my own deepfake / likeness creations, and the deepfake / likeness of Sam Altman committing a burglary, authorized by Altman himself. Unfortunately, some deepfakes are NOT authorized, and that’s a problem.

Read my article here: https://www.linkedin.com/pulse/what-truth-bredemarket-jetmc/


In the PLoS One Voice Deepfake Detection Test, the Key Word is “Participants”

(Part of the biometric product marketing expert series)

A recent PYMNTS article entitled “AI Voices Are Now Indistinguishable From Humans, Experts Say” includes the following about voice deepfakes:

“A new PLoS One study found that artificial intelligence has reached a point where cloned voices are indistinguishable from genuine ones. In the experiment, participants were asked to tell human voices from AI-generated ones across 80 samples. Cloned voices were mistaken for real in 58% of cases, while human voices were correctly identified only 62% of the time.”

What the study didn’t measure

Since you already read the title of this post, you know that I’m concentrating on the word “participants.”

The PLoS One experiment used PEOPLE to try to distinguish real voices from deepfake ones.

And people aren’t all that accurate. Never have been.

Picture from Google Gemini.

Before you decide that people can’t detect fake voices…

…why not have an ALGORITHM give it a try?
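What would that even look like? Here’s a hedged sketch of the simplest possible algorithmic approach: extract spectral features from labeled audio clips and train a classifier to separate real voices from cloned ones. The folder layout and labels are hypothetical, the feature and model choices (MFCCs plus logistic regression) are deliberately basic, and real voice deepfake detection products are far more sophisticated.

```python
# Toy voice-deepfake detector: MFCC features + logistic regression.
# Assumes a hypothetical folder of labeled .wav clips; purely illustrative.
import glob
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def clip_features(path):
    """Average MFCCs over time to get one fixed-length vector per clip."""
    audio, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=20)
    return mfcc.mean(axis=1)

# Hypothetical dataset: voices/real/ and voices/cloned/ folders of short clips.
real_paths = glob.glob("voices/real/*.wav")
cloned_paths = glob.glob("voices/cloned/*.wav")
X = np.array([clip_features(p) for p in real_paths + cloned_paths])
y = np.array([0] * len(real_paths) + [1] * len(cloned_paths))

model = LogisticRegression(max_iter=1000)
scores = cross_val_score(model, X, y, cv=5)
print(f"Cross-validated accuracy: {scores.mean():.0%}")  # compare to the humans' 58%/62%
```

Even a throwaway model like this can be scored the same way the PLoS One participants were, which is the point: an algorithm can be measured, tuned, and retrained as the fakes improve.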

What the study did measure

But to be fair, that wasn’t the goal of the PLoS One study, which specifically focused on human perception.

“Recently, an intriguing effect was reported in AI-generated faces, where such face images were perceived as more human than images of real humans – a “hyperrealism effect.” Here, we tested whether a “hyperrealism effect” also exists for AI-generated voices.”

For the record, the researchers did NOT discover a hyperrealism effect in AI-generated voices.

Do you offer a solution?

But if future deepfake voices sound realer than real, then we will REALLY need the algorithms to spot the fakes.

And if your company has a voice deepfake detection solution, I could have talked about it right now in this post.

Or on your website.

Or on your social media.

Where your prospects can see it…and purchase it.

And money in your pocket is realer than real.

Let’s talk. https://bredemarket.com/mark/

Picture from Google Gemini.

The Semantics of “Likeness” vs. “Deepfake”

A quote from YK Hong, from the post at: https://www.instagram.com/p/DPWAy2mEoRF/

“My current recommendation is strongly against uploading your biometrics to OpenAI’s new social feed app, Sora (currently in early release).

“Sora’s Cameo option has the user upload their own biometrics to create voice/video Deepfakes of themselves. The user can also set their preferences to allow others to create Deepfakes of each other, too.

“This is a privacy and security nightmare.”

Deepfake.

As I read this, one thing hit me: the intentional use of the word “deepfake,” with its negative connotations.

I had the sneaking suspicion that the descriptions of Cameo didn’t use the word “deepfake” to describe the feature.

And when I read https://help.openai.com/en/articles/12435986-generating-content-with-cameos I discovered I was right. OpenAI calls it a “likeness” or a “character” or…a cameo.

“Cameos are reusable “characters” built from a short video-and-audio capture of you. They let you appear in Sora videos, made by you or by specific people you approve, using a realistic version of your likeness and voice. When you create a cameo, you choose who can use it (e.g., only you, people you approve, or broader access).”

Likeness.

The entire episode shows the power of words. If you substitute a positive word such as “likeness” for a negative word such as “deepfake” (or vice versa), you exercise the power to color the entire conversation.

Another example from many years ago was an ad from the sugar lobby that intentionally denigrated the “artificial” competitors to all-natural sugar. It was very effective for its time, when the old promise of better living through chemicals had come to be regarded with horror.

Google Gemini.

Remember this in your writing.

Or I can remember it for you if Bredemarket writes for you. Talk to me: https://bredemarket.com/mark/

The right words.

Battling deepfakes with…IAL3?

(Picture designed by Freepik.)

The information in this post is taken from the summary of this year’s Biometrics Institute Industry Survey and is presented under the following authority:

“You are welcome to use the information from this survey with a reference to its source, Biometrics Institute Industry Survey 2025. The full report, slides and graphics are available to Biometrics Institute members.”

But even the freebie stuff is valuable, including this citation of two concerns expressed by survey respondents:

“Against a backdrop of ongoing concerns around deepfakes, 85% agreed or agreed strongly that deepfake technology poses a significant threat to the future of biometric recognition, which was similar to 2024.

“And two thirds of respondents (67%) agreed or agreed strongly that supervised biometric capture is crucial to safeguard against spoofing and injection attacks.”

Supervised biometric capture? Where have we heard that before?

IAL3 (Identity Assurance Level 3, the most stringent level in NIST SP 800-63A) requires “[p]hysical presence” for identity proofing. However, the proofing agent may “attend the identity proofing session via a CSP-controlled kiosk or device.” In other words, supervised enrollment.

Now, neither remote supervised enrollment nor even in-person supervised enrollment is a 100.00000% guard against deepfakes. The subject could be wearing a REALLY REALLY good mask. But it’s better than unsupervised enrollment.

How does your company battle deepfakes?

How do you tell your clients about your product?

Do you need product marketing assistance? Talk to Bredemarket.

Us, Them, Pornographic Deepfakes, and Guardrails

(Imagen 4)

Some of you may remember the Pink Floyd song “Us and Them.” The band had a history of examining things from different perspectives, to the point where Roger Waters and the band subsequently conceived a very long playing record (actually two records) derived from a single incident of Waters spitting on a member of the audience.

Is it OK to spit on the audience…or does this raise the threat of the audience spitting on you? Things appear different when you’re the recipient.

And yes, this has everything to do with generative artificial intelligence and pornographic deepfakes. Bear with me here.

Non-Consensual Activity in AI Apps

My former IDEMIA colleague Peter Kirkwood recently shared an observation on the pace of innovation and the accompanying risks.

“I’m a strong believer in the transformative potential of emerging technologies. The pace of innovation brings enormous benefits, but it also introduces risks we often can’t fully anticipate or regulate until the damage is already visible.”

Kirkwood then linked to an instance in which the technology is moving faster than the business and legal processes: specifically, Bernard Marr’s LinkedIn article “AI Apps Are Undressing Women Without Consent And It’s A Problem.”

Marr begins by explaining what “nudification apps” can do, and notes the significant financial profits that criminals can realize by employing them. Marr then outlines what various countries are doing to battle nudification apps and their derived content, including the United States, the United Kingdom, China, and Australia.

But then Marr notes why some people don’t take nudification all that seriously.

“One frustration for those campaigning for a solution is that authorities haven’t always seemed willing to treat AI-generated image abuse as seriously as they would photographic image abuse, due to a perception that it ‘isn’t real’.”

First they created the deepfakes of the hot women

After his experiences under the Nazi regime, in which he transitioned from sympathizer to prisoner, Martin Niemoller frequently discussed how those who first “came for the Socialists” would eventually come for the trade unionists, then the Jews…then ourselves.

And I’m sure that some of you believe I’m trivializing Niemoller’s statement by equating deepfake creation with persecution of socialists. After all, deepfakes aren’t real.

But the effects of deepfakes are real, as Psychology Today notes:

“Being targeted by deepfake nudes is profoundly distressing, especially for adolescents and young adults. Deepfake nudes violate an individual’s right to bodily autonomy—the control over one’s own body without interference. Victims experience a severe invasion of privacy and may feel a loss of control over their bodies, as their likeness is manipulated without consent. This often leads to shame, anxiety, and a decreased sense of self-worth. Fear of social ostracism can also contribute to anxiety, depression, and, in extreme cases, suicidal thoughts.”

And again I raise the question. If it’s OK to create realistic-looking pornographic deepfakes of hot women you don’t know, or of children you don’t know, then is it also OK to create realistic-looking pornographic deepfakes of your own family…or of you?

Guardrails

Imagen 4.

The difficulty, of course, is enforcing guardrails to stop this activity. Even if most of the governments are in agreement, and most of the businesses (such as Meta and Alphabet) are in agreement, “most” does not equal “all.” And as long as there is a market for pornographic deepfakes, someone will satisfy the demand.