Tag Archives: deepfake
Step Into Christmas: Deepfake?
THIS VIDEO IS FAKE. U.S. Government pictures are public domain, so I altered and animated the original picture a bit. Inspired by a picture shared by Mitch Wagner of Kirk and Spock in ugly Christmas sweaters.
Deepfakes are not a 21st century invention. Take this 1973 promotional video of Elton John’s “Step Into Christmas.”
But here are the musician credits.
Elton: Piano and vocals
Davey Johnstone: Guitars and backing vocals
Dee Murray: Bass guitar and backing vocals
Nigel Olsson: Drums and backing vocals
Ray Cooper: Percussion
Kiki Dee: Backing vocals (uncredited)
Jo Partridge: Backing vocals (uncredited)
Roger Pope: Tambourine (uncredited)
David Hentschel: ARP 2500 synthesizer (uncredited)
The video doesn’t match this list. According to the video, Elton played more than the piano, and Bernie Taupin performed on the track.
So while we didn’t use the term “deepfake” in 1973, this promotional video meets at least some of the criteria of a deepfake.
And before you protest that everybody knew that Elton John didn’t play guitar…undoubtedly some people saw this video and believed that Elton was a guitarist. After all, they saw it with their own eyes.
Sounds like fraud to me!
Remember this when you watch things.
Detecting Deceptively Authoritative Deepfakes
I referenced this on one of my LinkedIn showcase pages earlier this week, but I need to say more on it.
We all agree that deepfakes can (sometimes) result in bad things, but some deepfakes present particular dangers that may not be detected. Let’s look at how deepfakes can harm the healthcare and legal professions.
Arielle Waldman of Dark Reading pointed out these dangers in her post “Sora 2 Makes Videos So Believable, Reality Checks Are Required.”
But I don’t want to talk about the general issues with believable AI (whether it’s Sora 2, Nano Banana Pro, or something else). I want to home in on this:
“Sora 2 security risks will affect an array of industries, primarily the legal and healthcare sectors. AI generated evidence continues to pose challenges for lawyers and judges because it’s difficult to distinguish between reality and illusion. And deepfakes could affect healthcare, where many benefits are doled out virtually, including appointments and consultations.”
Actually, these are two separate issues, and I’ll deal with them both.
Health Deepfakes
It’s bad enough that people can access your health records just by knowing your name and birthdate. But what happens when your medical practitioner sends you a telehealth appointment link…except your medical practitioner didn’t send it?
So here you are, sharing your protected health information with…who exactly?
And once you realize you’ve been duped, you turn to a lawyer.
Or you think you turn to a lawyer.
Legal Deepfakes
First off, is that lawyer truly a lawyer? And are you speaking to the lawyer to whom you think you’re speaking?
And even if you are, when the lawyer gathers information for the case, who knows if it’s real? And I’m not talking about the lawyers who cited hallucinated legal decisions. I’m talking about the lawyers whose eDiscovery platforms gather faked evidence.
The detection of deepfakes is currently concentrated in particular industries, such as financial services. But many more industries require this detection.
Revisiting the Coalition for Content Provenance and Authenticity
Earlier this morning I was on LinkedIn sharing the wildebeest picture below on the Bredemarket page.

But then I noticed that LinkedIn added a symbol in the upper left corner of the picture.

When I clicked on the symbol, I obtained additional information about the picture.

Content credentials
Source or history information is available for this image.
Learn more.
AI was used to generate all of this image
App or device used: Google C2PA Core Generator Library
Content Credentials issued by: Google LLC
Content Credentials issued date: Nov 20, 2025
Of course, I already knew that I had used generative AI (Google Gemini) to create the picture. And now, thanks to the Coalition for Content Provenance and Authenticity, so do you.
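You don’t have to rely on LinkedIn’s badge, either. The C2PA project publishes an open source command line tool, c2patool, that reads Content Credentials directly from a file. Here’s a minimal sketch in Python; it assumes c2patool is installed and on your PATH, and the wildebeest.jpg filename is hypothetical:

```python
# A minimal sketch: inspect an image's C2PA Content Credentials by
# calling the open source c2patool CLI (assumed installed on PATH).
import json
import subprocess

def read_content_credentials(image_path: str) -> dict | None:
    """Return the C2PA manifest store for image_path, or None if absent."""
    result = subprocess.run(
        ["c2patool", image_path],  # prints the manifest store as JSON
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        return None  # no credentials found, or tool error
    return json.loads(result.stdout)

manifest = read_content_credentials("wildebeest.jpg")  # hypothetical filename
if manifest:
    # The manifest records the issuer and generator, e.g. "Google LLC"
    # and the Google C2PA Core Generator Library shown above.
    print(json.dumps(manifest, indent=2))
else:
    print("No Content Credentials found.")
```

If the image carries no credentials, the tool exits with an error, which the sketch simply treats as “no credentials.”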
When Fraud Is Too Obvious, the TSA Edition
On Tuesday I will write about a way to combat document signature fraud, but today I will focus on extremely obvious fraudulent activity.
You probably haven’t tried to alter your appearance before going through an airport security checkpoint; if you had, you’d know it’s hard to pull off.
The most obvious preventive measure is that airport security uses multi-factor authentication. Even if the woman in the video encountered a dumb Transportation Security Administration (TSA) expert who thought she truly was Richard Nixon, the driver’s license “Nixon” presented would fail a security check.
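To make the multi-factor idea concrete, here’s a toy sketch. It is purely illustrative, not any real TSA system: the live face match and the document authenticity check are independent factors, and both must pass.

```python
# A toy illustration (not any real TSA system): checkpoint identity
# verification as multi-factor authentication. A disguise that fools
# one factor still fails the other.
from dataclasses import dataclass

@dataclass
class CheckpointResult:
    face_match: float         # similarity of live face to ID photo, 0..1
    document_authentic: bool  # did the license pass its security-feature check?

def clear_checkpoint(result: CheckpointResult, threshold: float = 0.8) -> bool:
    # BOTH factors must pass; neither alone is sufficient.
    return result.face_match >= threshold and result.document_authentic

# The fake "Nixon" license fails the document check, so even a fooled
# officer (a spoofed-high face match) doesn't get the traveler through.
print(clear_checkpoint(CheckpointResult(face_match=0.95, document_authentic=False)))  # False
```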
But not all fraud is this easy to detect. Not for job applicants, not for travelers.
If Only Job Applicant Deepfake Detection Were This Easy
In reality, job applicant deepfake detection is (so far) unable to determine who the fraudster really is, but it can determine who the fraudster is NOT.
Something to remember when hiring people for sensitive positions. You don’t want to unknowingly hire a North Korean spy.
Not Bono
Is Bono laughing with me, or at me?
By the way, this is fake. Via Grok.
Fiction Overlaid on Fiction: What Will the Children Think?
Today, we know that many people are fooled by deepfakes, thinking they are real. But when we look at deepfake damage, we think of adults. What about children?
It’s probably just as well that Fred Rogers passed away in 2003, years before technology allowed us to create deepfakes of everything.
Including Fred Rogers.
Rogers occupied a unique role. He transported his young viewers from their real world into a world of make-believe, but took care to explain to them that there was a difference between make-believe and reality. For example, he once hosted Margaret Hamilton, the actress who played the Wicked Witch of the West in 1939’s “The Wizard of Oz,” and she explained that she was not really a witch.
Note the intelligence with which Hamilton treats her audience, by the way.
But back in Mister Rogers’ day, some people imposed make-believe on Rogers’ own make-believe, something that distressed Rogers because of his fear that it would confuse the children. Rogers objected to most of these portrayals, with the exception of Eddie Murphy’s. Children were fast asleep by the time “Mister Robinson” appeared on TV on Saturday nights. And Murphy’s character addressed serious topics such as gentrification.
But today we see things that are not real, and even adults think they are real. And that’s the adults; how do today’s children respond to deepfakes? If children of the 1930s were confused by a witch in a movie, how do children of today respond to things that look all too real?
And if kids who view deepfakes do not have that discernment, kids who create them don’t have it either.
“Last October, a 13-year-old boy in Wisconsin used a picture of his classmate celebrating her bat mitzvah to create a deepfake nude he then shared on Snapchat….
“[M]any of the state laws don’t apply to explicit AI-generated deepfakes. Fewer still appear to directly grapple with the fact that perpetrators of deepfake abuse are often minors.”
Once again, technology outpaces our efforts to regulate it or examine its ethical considerations.
Fred would be horrified.
Grok’s Not-so-deepfake Willie Nelson, Rapper
While the deepfake video generators that fraudsters use can be persuasive, the 6-second videos created by the free version of Grok haven’t reached that level of fakery. Yet.
In my experience, Grok is better at re-creating well-known people with more distinctive appearances. Good at Gene Simmons and Taylor Swift. Bad at Ace Frehley and Gerald Ford.
So I present…Willie Nelson.
Willie with two turntables and a microphone, and one of his buds watching.
- If you thought “Stardust” was odd for him, listen to this.
- Once Grok created the video, I customized it to have Willie rap about bud.
- Unfortunately, or perhaps fortunately, it doesn’t sound like the real Willie.
And for the, um, record, Nelson appeared in Snoop’s “My Medicine” video.
As an added bonus, here’s Grok’s version of Cher, without audio customization. It doesn’t make me believe…
Reminder to marketing leaders: if you need Bredemarket’s content-proposal-analysis help, book a meeting at https://bredemarket.com/mark/
Is the Quantum Security Threat Solved Before It Arrives? Probably Not.
I’ll confess: there is a cybersecurity threat so…um…threatening that I didn’t even want to think about it.
You know the drill. The bad people use technology to come up with some security threat, and then the good people use technology to thwart it.
That’s what happens with antivirus. That’s what happens with deepfakes.
But I kept on hearing rumblings about a threat that would make all this obsolete.
The quantum threat and the possible 2029 “Q Day”
Today’s Q word is “quantum.”
But with great power comes great irresponsibility. Gartner said it:
“By 2029, ‘advances in quantum computing will make conventional asymmetric cryptography unsafe to use,’ Gartner said in a study.”
Frankly, this frightened me. Think of the possibilities that come from calculation superpowers. Brute force generation of passcodes, passwords, fingerprints, faces, ID cards, or whatever is necessary to hack into a security system. A billion different combinations? No problem.
So much for your unbreakable security system.
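For a back-of-the-envelope sense of why “a billion combinations” stops being a deterrent: Grover’s algorithm lets a quantum computer search an unstructured space of N possibilities in roughly √N steps, versus about N/2 expected guesses for classical brute force. (Strictly speaking, the asymmetric-cryptography break Gartner warns about comes from Shor’s algorithm; Grover’s is the brute-force speedup. The sketch below just shows the scale.)

```python
# Back-of-the-envelope arithmetic, not a quantum simulation:
# Grover's algorithm searches N possibilities in ~sqrt(N) iterations,
# versus ~N/2 expected guesses for classical brute force.
import math

N = 1_000_000_000                        # "a billion different combinations"
classical_tries = N // 2                 # expected classical guesses
grover_iterations = round(math.sqrt(N))  # roughly 31,623 iterations

print(f"Classical brute force: ~{classical_tries:,} guesses")
print(f"Grover search:         ~{grover_iterations:,} iterations")
```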
Thales implementation of NIST FIPS 204
Unless Thales has started to solve the problem. This is what Thales said:
“The good news is that technology companies, governments and standards agencies are well aware of the deadline. They are working on defensive strategies to meet the challenge — inventing cryptographic algorithms that run not just on quantum computers but on today’s conventional components.
“This technology has a name: post-quantum cryptography.
“There have already been notable breakthroughs. In the last few days, Thales launched a quantum-resistant smartcard: MultiApp 5.2 Premium PQC. It is the first smartcard to be certified by ANSSI, France’s national cybersecurity agency.
“The product uses new generation cryptographic signatures to protect electronic ID cards, health cards, driving licences and more from attacks by quantum computers.”
So what’s so special about the technology in the MultiApp 5.2 Premium PQC?
Thales used the NIST “FIPS 204 standard to define a digital signature algorithm for a new quantum-resistant smartcard: MultiApp 5.2 Premium PQC.”

The NIST FIPS 204 standard, “Module-Lattice-Based Digital Signature Standard,” can be found here. This is the abstract:
“Digital signatures are used to detect unauthorized modifications to data and to authenticate the identity of the signatory. In addition, the recipient of signed data can use a digital signature as evidence in demonstrating to a third party that the signature was, in fact, generated by the claimed signatory. This is known as non-repudiation since the signatory cannot easily repudiate the signature at a later time. This standard specifies ML-DSA, a set of algorithms that can be used to generate and verify digital signatures. ML-DSA is believed to be secure, even against adversaries in possession of a large-scale quantum computer.”
ML-DSA stands for “Module-Lattice-Based Digital Signature Algorithm.”

Now I’ll admit I don’t know a lattice from a vertical fence post, especially when it comes to quantum computing, so I’ll have to take NIST’s word for it that modules and lattices make for super-good security.
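But even without the math, you can see what ML-DSA looks like in use. Here’s a minimal sign-and-verify sketch using the Open Quantum Safe project’s liboqs-python bindings. This is purely illustrative, not Thales’ smartcard implementation, and it assumes your liboqs build exposes the “ML-DSA-65” algorithm name:

```python
# A minimal ML-DSA (FIPS 204) sign/verify round trip via liboqs-python.
# Illustrative only -- not Thales' implementation. Assumes liboqs and
# liboqs-python are installed and support the "ML-DSA-65" algorithm.
import oqs

message = b"Digital signatures detect unauthorized modifications to data."

# Signer side: generate a keypair and sign the message.
with oqs.Signature("ML-DSA-65") as signer:
    public_key = signer.generate_keypair()
    signature = signer.sign(message)

# Verifier side: only the public key is needed to check the signature.
with oqs.Signature("ML-DSA-65") as verifier:
    assert verifier.verify(message, signature, public_key)
    print("Signature verified: the signer cannot easily repudiate it.")
```

That non-repudiation property is exactly what the NIST abstract above describes.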
Certification, schmertification
The Thales technology was then tested by researchers to determine its Evaluation Assurance Level (EAL). The result? “Thales’ product won EAL6+ certification (the highest is EAL7).” (TechTarget explains the 7 evaluation assurance levels here.)
France’s national cybersecurity agency (ANSSI) then certified it.
However…
…remember that certifications mean squat.
For all we know, the fraudsters have already broken the protections in the FIPS 204 standard.

And the merry-go-round between fraudsters and fraud fighters continues.
If you need help spreading the word about YOUR anti-fraud solution, quantum or otherwise, schedule a free meeting with Bredemarket.
