Singer/songwriters…and Deepfakes

I was just talking about singers, songwriters, and one singer who pretended to be a songwriter.

Of course, some musicians can be both.

Willie Nelson has written songs for others, sung songs written by others, and sung his own songs.

But despite the Grok deepfake I shared last October, Willie is not known as a rapper.

This is fake. Grok.

Step Into Christmas: Deepfake?

Deepfakes are not a 21st-century invention. Take this video of “Step Into Christmas.”

But here are the musician credits.

Elton: Piano and vocals

Davey Johnstone: Guitars and backing vocals

Dee Murray: Bass guitar and backing vocals

Nigel Olsson: Drums and backing vocals

Ray Cooper: Percussion

Kiki Dee: Backing vocals (uncredited)

Jo Partridge: Backing vocals (uncredited)

Roger Pope: Tambourine (uncredited)

David Hentschel: ARP 2500 synthesizer (uncredited)

The video doesn’t match this list. It shows Elton playing more than the piano, and it shows lyricist Bernie Taupin performing on the track.

So while we didn’t use the term “deepfake” in 1973, this promotional video meets at least some of the criteria of a deepfake.

And before you protest that everybody knew that Elton John didn’t play guitar…undoubtedly some people saw this video and believed that Elton was a guitarist. After all, they saw it with their own eyes.

Sounds like fraud to me!

Remember this when you watch things.

Detecting Deceptively Authoritative Deepfakes

I referenced this on one of my LinkedIn showcase pages earlier this week, but I need to say more about it.

We all agree that deepfakes can (sometimes) cause harm, but some deepfakes present particular dangers that may go undetected. Let’s look at how deepfakes can harm the healthcare and legal professions.

Arielle Waldman of Dark Reading pointed out these dangers in her post “Sora 2 Makes Videos So Believable, Reality Checks Are Required.”

But I don’t want to talk about the general issues with believable AI (whether it’s Sora 2, Nano Banana Pro, or something else). I want to home in on this:

“Sora 2 security risks will affect an array of industries, primarily the legal and healthcare sectors. AI generated evidence continues to pose challenges for lawyers and judges because it’s difficult to distinguish between reality and illusion. And deepfakes could affect healthcare, where many benefits are doled out virtually, including appointments and consultations.”

Actually, these are two separate issues, and I’ll deal with them both.

Health Deepfakes

It’s bad enough that people can access your health records just by knowing your name and birthdate. But what happens when your medical practitioner sends you a telehealth appointment link…except your medical practitioner didn’t send it?

Grok.

So here you are, sharing your protected health information with…who exactly?

And once you realize you’ve been duped, you turn to a lawyer.

This one is not a deepfake. From YouTube.

Or you think you turn to a lawyer.

Legal Deepfakes

First off, is that lawyer truly a lawyer? And are you speaking to the lawyer to whom you think you’re speaking?

Not Johnnie Cochran.

And even if you are, when the lawyer gathers information for the case, who knows if it’s real? And I’m not talking about the lawyers who cited hallucinated legal decisions. I’m talking about the lawyers whose eDiscovery platforms gather faked evidence.

Liquor store owner.

The detection of deepfakes is currently concentrated in particular industries, such as financial services. But many more industries require this detection.

Revisiting the Coalition for Content Provenance and Authenticity

Earlier this morning I was on LinkedIn sharing the wildebeest picture below on the Bredemarket page.

Google Gemini.

But then I noticed that LinkedIn added a symbol in the upper left corner of the picture.

LinkedIn.

When I clicked on the symbol, I obtained additional information about the picture.

LinkedIn.

Content credentials

Source or history information is available for this image.

Learn more.

AI was used to generate all of this image

App or device used: Google C2PA Core Generator Library

Content Credentials issued by: Google LLC

Content Credentials issued date: Nov 20, 2025

Of course, I already knew that I had used generative AI (Google Gemini) to create the picture. And now, thanks to the Coalition for Content Provenance and Authenticity, you do also.
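The content-credential fields LinkedIn displayed above can be sketched in code. This is a minimal illustration of how a platform might decide what provenance badge to show; the dataclass and `badge_text` function are hypothetical, not the real C2PA API, though the field values mirror the LinkedIn display.

```python
# Hypothetical sketch of interpreting C2PA-style content credentials.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ContentCredentials:
    generator: str      # e.g., "Google C2PA Core Generator Library"
    issuer: str         # e.g., "Google LLC"
    issued_date: str    # e.g., "2025-11-20"
    ai_generated: bool  # the "AI was used to generate this image" assertion

def badge_text(creds: Optional[ContentCredentials]) -> str:
    """Decide what provenance label to show next to an image."""
    if creds is None:
        return "No Content Credentials available"
    if creds.ai_generated:
        return f"AI-generated (credentials issued by {creds.issuer})"
    return f"Credentials issued by {creds.issuer}"

wildebeest = ContentCredentials(
    generator="Google C2PA Core Generator Library",
    issuer="Google LLC",
    issued_date="2025-11-20",
    ai_generated=True,
)
print(badge_text(wildebeest))  # AI-generated (credentials issued by Google LLC)
```

The point of the sketch: the badge comes from metadata the generator itself attached, so disclosure doesn’t depend on the person posting the image remembering to mention it.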

When Fraud Is Too Obvious, the TSA Edition

On Tuesday I will write about a way to combat document signature fraud, but today I will focus on extremely obvious fraudulent activity.

You probably haven’t tried to alter your appearance before going through an airport security checkpoint, and as this video shows, it’s hard to pull off.

Um…no.

The most obvious preventive measure is that airport security uses multi-factor authentication. Even if the woman in the video encountered a dumb Transportation Security Administration (TSA) screener who thought she truly was Richard Nixon, the driver’s license “Nixon” presented would fail a security check.
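The multi-factor logic can be shown in a toy sketch: every factor has to pass, so defeating just one (say, with a disguise) isn’t enough. The function, factor names, and threshold below are all made up for illustration; real checkpoint systems are far more involved.

```python
# Toy illustration of multi-factor authentication at a checkpoint.
# All names and the 0.9 threshold are hypothetical.
def passes_checkpoint(face_match_score: float,
                      id_is_authentic: bool,
                      id_matches_reservation: bool,
                      threshold: float = 0.9) -> bool:
    """Every factor must succeed; factors are ANDed, not averaged."""
    return (face_match_score >= threshold
            and id_is_authentic
            and id_matches_reservation)

# Even if a disguise fools the human screener (high face score),
# the fake "Nixon" license still fails the document check:
print(passes_checkpoint(face_match_score=0.95,
                        id_is_authentic=False,
                        id_matches_reservation=True))  # False
```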

But not all fraud is this easy to detect. Not for job applicants, not for travelers.

If Only Job Applicant Deepfake Detection Were This Easy

In reality, job applicant deepfake detection is (so far) unable to determine who the fraudster really is, but it can determine who the fraudster is NOT.
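That asymmetry maps to how 1:1 verification works: you compare the live person against the claimed identity’s reference, and a failed match only tells you the claim is false, not who the person actually is. The function name, return strings, and similarity scores below are invented for illustration.

```python
# Hypothetical sketch of 1:1 identity verification for a job applicant.
# A low score proves only the negative: NOT the claimed identity.
def verify_claimed_identity(similarity_score: float,
                            threshold: float = 0.8) -> str:
    """Compare a live sample's score against the claimed identity's reference."""
    if similarity_score >= threshold:
        return "consistent with claimed identity"
    # Below threshold, the system knows only who the person is NOT.
    return "NOT the claimed identity (actual identity unknown)"

print(verify_claimed_identity(0.35))  # NOT the claimed identity (actual identity unknown)
```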

Something to remember when hiring people for sensitive positions. You don’t want to unknowingly hire a North Korean spy.