Detecting Deceptively Authoritative Deepfakes

I referenced this on one of my LinkedIn showcase pages earlier this week, but I need to say more on it.

We all agree that deepfakes can (sometimes) result in bad things, but some deepfakes present particular dangers that may not be detected. Let’s look at how deepfakes can harm the healthcare and legal professions.

Arielle Waldman of Dark Reading pointed out these dangers in her post “Sora 2 Makes Videos So Believable, Reality Checks Are Required.”

But I don’t want to talk about the general issues with believable AI (whether it’s Sora 2, Nano Banana Pro, or something else). I want to home in on this:

“Sora 2 security risks will affect an array of industries, primarily the legal and healthcare sectors. AI generated evidence continues to pose challenges for lawyers and judges because it’s difficult to distinguish between reality and illusion. And deepfakes could affect healthcare, where many benefits are doled out virtually, including appointments and consultations.”

Actually, these are two separate issues, and I’ll deal with them both.

Health Deepfakes

It’s bad enough that people can access your health records just by knowing your name and birthdate. But what happens when your medical practitioner sends you a telehealth appointment link…except your medical practitioner didn’t send it?

Grok.

So here you are, sharing your protected health information with…who exactly?

And once you realize you’ve been duped, you turn to a lawyer.

This one is not a deepfake. From YouTube.

Or you think you turn to a lawyer.

Legal Deepfakes

First off, is that lawyer truly a lawyer? And are you speaking to the lawyer to whom you think you’re speaking?

Not Johnnie Cochran.

And even if you are, when the lawyer gathers information for the case, who knows if it’s real? And I’m not talking about the lawyers who cited hallucinated legal decisions. I’m talking about the lawyers whose eDiscovery platforms gather faked evidence.

Liquor store owner.

The detection of deepfakes is currently concentrated in particular industries, such as financial services. But many more industries require this detection.

Revisiting the Coalition for Content Provenance and Authenticity

Earlier this morning I was on LinkedIn sharing the wildebeest picture below on the Bredemarket page.

Google Gemini.

But then I noticed that LinkedIn added a symbol in the upper left corner of the picture.

LinkedIn.

When I clicked on the symbol, I obtained additional information about the picture.

LinkedIn.

Content credentials

Source or history information is available for this image.

AI was used to generate all of this image

  • App or device used: Google C2PA Core Generator Library
  • Content Credentials issued by: Google LLC
  • Content Credentials issued date: Nov 20, 2025

Of course, I already knew that I had used generative AI (Google Gemini) to create the picture. And now, thanks to the Coalition for Content Provenance and Authenticity, so do you.
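Incidentally, you don’t have to rely on LinkedIn’s symbol to inspect Content Credentials. The C2PA project publishes an open source command line tool, c2patool, that prints an image’s embedded manifest. Here’s a minimal sketch that calls it from Python; it assumes c2patool is installed on your PATH, “wildebeest.jpg” is a stand-in filename, and the JSON field names (which follow the c2pa-rs manifest store format) may vary by version.

```python
import json
import subprocess

# Hedged sketch: assumes the open source c2patool CLI is installed and
# that "wildebeest.jpg" (a stand-in filename) carries C2PA Content
# Credentials. c2patool prints the embedded manifest store as JSON.
result = subprocess.run(
    ["c2patool", "wildebeest.jpg"],
    capture_output=True, text=True, check=True,
)
store = json.loads(result.stdout)

# Field names may vary by version: the active manifest records who
# generated and signed the image, roughly what LinkedIn surfaced above.
active = store["manifests"][store["active_manifest"]]
print(active.get("claim_generator"))   # e.g., the generator library
print(active.get("signature_info"))    # e.g., issuer and certificate details
```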

When Fraud Is Too Obvious, the TSA Edition

On Tuesday I will write about a way to combat document signature fraud, but today I will focus on extremely obvious fraudulent activity.

You probably haven’t tried to alter your appearance before going through an airport security checkpoint, and as the video below shows, it’s hard to pull off.

Um…no.

The most obvious preventive measure is that airport security uses multi-factor authentication. Even if the woman in the video encountered a dumb Transportation Security Administration (TSA) expert who thought she truly was Richard Nixon, the driver’s license “Nixon” presented would fail a security check.
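To make the “Nixon” failure concrete, here’s a toy sketch. This is my own illustration, not the TSA’s actual logic: the point is simply that with multiple factors, an impostor must beat every check, so the forged license sinks her even if the face somehow passes.

```python
# A toy model, NOT the TSA's actual system: every factor must pass.
def checkpoint_clears(face_matches_id: bool,
                      id_is_authentic: bool,
                      id_matches_booking: bool) -> bool:
    """Multi-factor check: any single failure stops the traveler."""
    return face_matches_id and id_is_authentic and id_matches_booking

# The "Nixon" scenario: even if a gullible screener accepts the face,
# the forged driver's license fails the document authentication factor.
print(checkpoint_clears(face_matches_id=True,
                        id_is_authentic=False,
                        id_matches_booking=True))   # False
```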

But not all fraud is this easy to detect. Not for job applicants, not for travelers.

If Only Job Applicant Deepfake Detection Were This Easy

In reality, job applicant deepfake detection is (so far) unable to determine who the fraudster really is, but it can determine who the fraudster is NOT.

Something to remember when hiring people for sensitive positions. You don’t want to unknowingly hire a North Korean spy.
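Here’s a minimal sketch of that exclusion logic. The verify_applicant function and its threshold are hypothetical stand-ins for whatever face matching engine an employer licenses, not a real vendor API; the point is that a failed 1:1 verification rules out the claimed identity without revealing the fraudster’s real one.

```python
# verify_applicant() and THRESHOLD are hypothetical stand-ins, not a real
# vendor API. A 1:1 check compares the live applicant against the claimed
# identity's reference photo.
THRESHOLD = 0.90

def verify_applicant(match_score: float) -> str:
    """Exclusion, not identification: a failure rules the claim out but
    says nothing about who the impostor actually is."""
    if match_score >= THRESHOLD:
        return "matches the claimed identity"
    return "is NOT the claimed identity (real identity still unknown)"

print(verify_applicant(match_score=0.42))
```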

Fiction Overlaid on Fiction: What Will the Children Think?

Today, we know that many people are fooled by deepfakes, thinking they are real. But when we look at deepfake damage, we think of adults. What about children?

It’s probably just as well that Fred Rogers passed away in 2003, years before technology allowed us to create deepfakes of everything.

Including Fred Rogers.

Grok. Not Fred Rogers.

Rogers occupied a unique role. He transported his young viewers from their real world into a world of make-believe, but took care to explain to them that there was a difference between make-believe and reality. For example, he once hosted Margaret Hamilton, who played the Wicked Witch of the West in “The Wizard of Oz,” and who explained to the children that she was not really a witch.

Margaret Hamilton and Fred Rogers.

Note the intelligence with which Hamilton treats her audience, by the way.

But back in Mister Rogers’ day, some people imposed make-believe on Rogers’ own make-believe, something that distressed Rogers because of his fear that it would confuse the children. Rogers objected to most of these portrayals, with the exception of Eddie Murphy’s “Mister Robinson’s Neighborhood” on Saturday Night Live. Children were fast asleep by the time Mister Robinson appeared on TV on Saturday nights. And Murphy’s character addressed serious topics such as gentrification.

Mister Robinson on gentrification, 2019.

But today we see things that are not real, and even adults think they are real. And that’s the adults; how do today’s children respond to deepfakes? If children of the 1930s were confused by a witch in a movie, how do children of today respond to things that look all too real?

And if kids lack the discernment to view deepfakes critically, the kids who create them lack that discernment too.

“Last October, a 13-year-old boy in Wisconsin used a picture of his classmate celebrating her bat mitzvah to create a deepfake nude he then shared on Snapchat….

“[M]any of the state laws don’t apply to explicit AI-generated deepfakes. Fewer still appear to directly grapple with the fact that perpetrators of deepfake abuse are often minors.”

Once again, technology outpaces our efforts to regulate it or examine its ethical considerations. 

Fred would be horrified.

Grok’s Not-so-deepfake Willie Nelson, Rapper

While the deepfake video generators that fraudsters use can be persuasive, the 6-second videos created by the free version of Grok haven’t reached that level of fakery. Yet.

In my experience, Grok is better at re-creating well-known people with more distinctive appearances. Good at Gene Simmons and Taylor Swift. Bad at Ace Frehley and Gerald Ford.

So I present…Willie Nelson. 

Grok.

Willie with two turntables and a microphone, and one of his buds watching.

  • If you thought “Stardust” was odd for him, listen to this. 
  • Once Grok created the video, I customized it to have Willie rap about bud. 
  • Unfortunately, or perhaps fortunately, it doesn’t sound like the real Willie.

And for the, um, record, Nelson appeared in Snoop’s “My Medicine” video.

As an added bonus, here’s Grok’s version of Cher, without audio customization. It doesn’t make me believe…

Grok.

Reminder to marketing leaders: if you need Bredemarket’s content-proposal-analysis help, book a meeting at https://bredemarket.com/mark/

Is the Quantum Security Threat Solved Before It Arrives? Probably Not.

I’ll confess: there is a cybersecurity threat so…um…threatening that I didn’t even want to think about it.

You know the drill. The bad people use technology to come up with some security threat, and then the good people use technology to thwart it.

That’s what happens with antivirus. That’s what happens with deepfakes.

But I kept on hearing rumblings about a threat that would make all this obsolete.

The quantum threat and the possible 2029 “Q Day”

Today’s Q word is “quantum.”

But with great power comes great irresponsibility. Gartner said as much:

“By 2029, ‘advances in quantum computing will make conventional asymmetric cryptography unsafe to use,’ Gartner said in a study.”

Frankly, this frightened me. Think of the possibilities that come from calculation superpowers. Brute-force generation of passcodes, passwords, fingerprints, faces, ID cards, or whatever is necessary to hack into a security system. A billion different combinations? No problem.
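Some back-of-the-envelope arithmetic (mine, not Gartner’s) shows why. Grover’s algorithm searches N possibilities in roughly √N quantum steps, so a billion combinations collapse to about 31,623 tries, and a 128-bit key space drops to roughly 2^64 effective work. (Shor’s algorithm, which breaks asymmetric cryptography outright, is what the Gartner prediction is really about.)

```python
import math

# Grover's quadratic speedup: N classical guesses become ~sqrt(N) quantum steps.
billion = 10 ** 9
print(math.isqrt(billion))    # 31622: a billion combinations, ~31,623 tries

key_bits = 128
print(f"~2^{key_bits // 2} effective work")   # 128-bit search halves to ~2^64
```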

So much for your unbreakable security system.

Thales’ implementation of NIST FIPS 204

Unless Thales has started to solve the problem. This is what Thales said:

“The good news is that technology companies, governments and standards agencies are well aware of the deadline. They are working on defensive strategies to meet the challenge — inventing cryptographic algorithms that run not just on quantum computers but on today’s conventional components.

“This technology has a name: post-quantum cryptography.

“There have already been notable breakthroughs. In the last few days, Thales launched a quantum-resistant smartcard: MultiApp 5.2 Premium PQC. It is the first smartcard to be certified by ANSSI, France’s national cybersecurity agency.

“The product uses new generation cryptographic signatures to protect electronic ID cards, health cards, driving licences and more from attacks by quantum computers.”

So what’s so special about the technology in the MultiApp 5.2 Premium PQC?

Thales used the NIST “FIPS 204 standard to define a digital signature algorithm for a new quantum-resistant smartcard: MultiApp 5.2 Premium PQC.”

Google Gemini.

The NIST FIPS 204 standard, “Module-Lattice-Based Digital Signature Standard,” can be found here. This is the abstract:

“Digital signatures are used to detect unauthorized modifications to data and to authenticate the identity of the signatory. In addition, the recipient of signed data can use a digital signature as evidence in demonstrating to a third party that the signature was, in fact, generated by the claimed signatory. This is known as non-repudiation since the signatory cannot easily repudiate the signature at a later time. This standard specifies ML-DSA, a set of algorithms that can be used to generate and verify digital signatures. ML-DSA is believed to be secure, even against adversaries in possession of a large-scale quantum computer.”

ML-DSA stands for “Module-Lattice-Based Digital Signature Algorithm.”

Google Gemini.

Now I’ll admit I don’t know a lattice from a vertical fence post, especially when it comes to quantum computing, so I’ll have to take NIST’s word for it that modules and lattices are super-good security.
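If you want to poke at ML-DSA yourself, here’s a minimal sketch using the open source liboqs-python bindings. To be clear, this illustrates the algorithm family that FIPS 204 specifies, not Thales’ smartcard implementation (which isn’t public), and the “ML-DSA-65” identifier may differ across liboqs versions.

```python
import oqs  # open source liboqs-python bindings; NOT Thales' implementation

message = b"a driving licence data group to be signed"

# Sign with an ML-DSA private key (the "ML-DSA-65" parameter set is one
# of three defined in FIPS 204; older liboqs builds may name it differently)...
with oqs.Signature("ML-DSA-65") as signer:
    public_key = signer.generate_keypair()
    signature = signer.sign(message)

# ...and verify with only the public key, as a card reader would.
with oqs.Signature("ML-DSA-65") as verifier:
    assert verifier.verify(message, signature, public_key)
    print("ML-DSA signature verified")
```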

Certification, schmertification

The Thales technology was then tested by researchers to determine its Evaluation Assurance Level (EAL). The result? “Thales’ product won EAL6+ certification (the highest is EAL7).” (TechTarget explains the 7 evaluation assurance levels here.)

France’s national cybersecurity agency (ANSSI) then certified it.

However…

…remember that certifications mean squat.

For all we know, the fraudsters have already broken the protections in the FIPS 204 standard.

Google Gemini.

And the merry-go-round between fraudsters and fraud fighters continues.

If you need help spreading the word about YOUR anti-fraud solution, quantum or otherwise, schedule a free meeting with Bredemarket.

It’s a Deepfake…World

Remember the Church Lady’s saying, “Well, how convenient”?

People weren’t laughing at Joel R. McConvey when he reminded us of a different saying:

“In Silicon Valley parlance, ‘create the problem, sell the solution.'”

Joel R. McConvey’s “tale of two platforms”

McConvey was referring to two different Sam Altman investments. One, OpenAI’s newly released Sora 2, amounts to a deepfake “slop machine” that is flooding our online, um, world with fakery. This concerns many, including SAG-AFTRA president Sean Astin. He doesn’t want his union members to lose their jobs to the Tilly Norwoods out there.

The deepfake “sea of slop” was created by Google Gemini.

If only there were a way to tell the human content from the non-person entity (NPE) content. Another Sam Altman investment, World (formerly Worldcoin), just happens to provide a solution to humanness detection.

“What if we could reduce the efficacy of deepfakes? Proof of human technology provides a promising tool. By establishing cryptographic proof that you’re interacting with a real, unique human, this technology addresses the root of the problem. It doesn’t try to determine if content is fake; it ensures the source is real from the start.”

Google Gemini. Not an accurate depiction of the Orb, but it’s really cool.

All credit to McConvey for tying these differing Altman efforts together in his Biometric Update article.

World is not enough

But World’s solution is partial at best.

As I’ve said before, proof of humanness is only half the battle. Even if you’ve detected humanness, some humans are capable of their own slop, and to solve the human slop problem you need to prove WHICH human is responsible for something.

Which is something decidedly outside of World’s mission.
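To illustrate the distinction, here’s a sketch of what proving WHICH human could look like. This is my own illustration, not World’s design: a signing key issued only after identity verification binds the content to a named person, not just to “a human.”

```python
import hashlib

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# My illustration, not World's design: this key would only be issued
# after verifying WHO the signer is, the step humanness checks skip.
identity_key = Ed25519PrivateKey.generate()

content = b"an article, image, or video"
digest = hashlib.sha256(content).digest()
signature = identity_key.sign(digest)

# Anyone with the public key, published under a verified name, can now
# confirm not just that A human signed the content, but WHICH human.
identity_key.public_key().verify(signature, digest)  # raises if invalid
print("content bound to a specific, named identity")
```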

But is it part of YOUR company’s mission? Talk to Bredemarket about getting your anti-fraud message out there: https://bredemarket.com/mark/

Deepfake Voices Have Been Around Since the 1980s

(Part of the biometric product marketing expert series)

Inland Empire locals know why THIS infamous song is stuck in my head today.

“Blame It On The Rain,” (not) sung by Milli Vanilli.

For those who don’t know the story, Rob Pilatus and Fab Morvan performed as the band Milli Vanilli and released an extremely successful album produced by Frank Farian. The title? “Girl You Know It’s True.”

But while we were listening to and watching Pilatus and Morvan sing, we were actually hearing the voices of Charles Shaw, John Davis, and Brad Howell. So technically this wasn’t a modern deepfake: rather than imitating the voice of a known person, Shaw et al. were providing the voice of an unknown person. But the purpose was still deception.

Anyway, the ruse was revealed, Pilatus and Morvan were sacrificed, and things got worse.

“Pilatus, in particular, found it hard to cope, battling substance abuse and legal troubles. His tragic death in 1998 from a suspected overdose marked a sad epilogue to the Milli Vanilli saga.”

But there were certainly other examples of voice deepfakes in the 20th century…take Rich Little.

So deepfake voices aren’t a new problem. It’s just that they’re a lot easier to create today…which means that a lot of fraudsters can use them easily.

And if you are an identity/biometric marketing leader who needs Bredemarket’s help to market your anti-deepfake product, schedule a free meeting with me at https://bredemarket.com/mark/.