Deep Deepfakes vs. Shallow Shallowfakes

We toss words around until they lose all meaning, like the name of Jello Biafra’s most famous band. (IYKYK.)

So why are deepfakes deep?

And does the existence of deepfakes necessarily mean that shallowfakes exist?

Why are deepfakes deep?

The University of Virginia’s Information Security office explains how deepfakes are created, which also explains why they’re called that.

“A deepfake is an artificial image or video (a series of images) generated by a special kind of machine learning called “deep” learning (hence the name).”

UVA then launches into a technical explanation.

“Deep learning is a special kind of machine learning that involves “hidden layers.” Typically, deep learning is executed by a special class of algorithm called a neural network….A hidden layer is a series of nodes within the network that performs mathematical transformations to convert input signals to output signals (in the case of deepfakes, to convert real images to really good fake images). The more hidden layers a neural network has, the “deeper” the network is.”
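To make the “hidden layers” idea concrete, here’s a minimal sketch (my own illustration in PyTorch, not UVA’s) of a toy network whose “depth” is simply the number of hidden layers you stack. Real deepfake generators use far more elaborate architectures (autoencoders, GANs, diffusion models), but the depth concept is the same.

```python
import torch.nn as nn

def make_network(num_hidden_layers: int) -> nn.Sequential:
    """Build a toy classifier; more hidden layers = a 'deeper' network."""
    layers = [nn.Linear(784, 128), nn.ReLU()]      # input layer (a 28x28 image)
    for _ in range(num_hidden_layers):             # the "hidden layers" UVA describes
        layers += [nn.Linear(128, 128), nn.ReLU()] # each one transforms signals
    layers.append(nn.Linear(128, 10))              # output layer
    return nn.Sequential(*layers)

shallow_net = make_network(1)   # barely "deep" at all
deeper_net = make_network(20)   # a much "deeper" network, same idea
```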

Why are shallowfakes shallow?

So if you don’t use a multi-level neural network to create your fake, then it is by definition shallow, although you will most likely need cumbersome manual methods to create it.

  • To mount a presentation attack (the kind that liveness detection, whether active or passive, is designed to catch), you can dispense with the neural network and just use old-fashioned makeup.
(Image from NIST.)

Or a mask.

(Mask image from Imagen 4.)

It’s all semantics

In truth, we commonly refer to all face, voice, and finger fakes as “deep” fakes even when they don’t originate in a neural network.

But if someone wants to refer to shallowfakes, it’s OK with me.

Presentation Attack Injection, Injection Attack Detection, and Deepfakes on LinkedIn and Substack

Just letting my Bredemarket blog readers know of two items I wrote on other platforms.

  • “Presentation Attack Injection, Injection Attack Detection, and Deepfakes.” This LinkedIn article, part of The Wildebeest Speaks newsletter series, is directed toward people who already have some familiarity with deepfake attacks.
  • “Presentation Attack Injection, Injection Attack Detection, and Deepfakes (version 2).” This Substack post does NOT assume any deepfake attack background.

“Somewhat You Why,” and Whether Deepfakes are Evil or Good or Both

I debated whether or not I should publish this because it touches upon two controversial topics: U.S. politics, and my proposed sixth factor of authentication.

I eventually decided to share it on the Bredemarket blog but NOT link to it or quote it on my socials.

Although I could change my mind later.

Are deepfakes bad?

When I first discussed deepfakes in June 2023, I detailed two deepfake applications.

One deepfake was an audio-video creation purportedly showing Richard Nixon paying homage to the Apollo 11 astronauts who were stranded on the surface of the moon.

  • Of course, no Apollo 11 astronauts were ever stranded on the surface of the moon; Neil Armstrong and Buzz Aldrin returned to Earth safely.
  • So Nixon never had to pay homage to them, although William Safire wrote a speech as a contingency.
  • This deepfake is not in itself bad, unless it is taught in a history course as true history about “the failure of the U.S. moon program.” (The Apollo program had a fatal catastrophe, but not involving Apollo 11.)

The other deepfake was more sinister.

In early 2020, a branch manager of a Japanese company in Hong Kong received a call from a man whose voice he recognized—the director of his parent business. The director had good news: the company was about to make an acquisition, so he needed to authorize some transfers to the tune of $35 million….The manager, believing everything appeared legitimate, began making the transfers.

Except that the director wasn’t the director, and the company had just been swindled to the tune of $35 million.

I think everyone knows now that deepfakes can be used for bad things. So we establish standards to determine “content provenance and authenticity,” which is a fancy way to say whether content is real or a deepfake.
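The best known of these standards comes from the Coalition for Content Provenance and Authenticity (C2PA), whose core idea is a cryptographically signed manifest that travels with the content. Here’s a minimal sketch of that idea; the function and field names are my own hypothetical illustrations, not any real C2PA library’s API.

```python
import hashlib

def verify_provenance(content: bytes, manifest: dict) -> bool:
    """Hypothetical manifest check illustrating the C2PA-style idea."""
    # 1. The manifest records a hash of the content at signing time.
    #    If the content was altered afterward, the hashes won't match.
    if hashlib.sha256(content).hexdigest() != manifest["content_hash"]:
        return False
    # 2. The manifest must be signed by a trusted party (camera maker,
    #    editing tool, publisher) whose certificate checks out.
    return signature_is_valid(manifest["signature"], manifest["signer_cert"])

def signature_is_valid(signature: str, signer_cert: str) -> bool:
    # Stand-in for real certificate-chain and signature verification.
    return bool(signature) and bool(signer_cert)
```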

In addition to establishing standards, we do a lot of research to counter deepfakes, because they are bad.

Or are they?

What the National Science Foundation won’t do

Multiple sources, including both Nextgov and Biometric Update, are reporting on the cancellation of approximately 430 grants from the National Science Foundation. Among these grants are ones for deepfake research.

Around 430 federally-funded research grants covering topics like deepfake detection, artificial intelligence advancement and the empowerment of marginalized groups in scientific fields were among several projects terminated in recent days following a major realignment in research priorities at the National Science Foundation.

As you can probably guess, the cancellation of these grants is driven by the Trump Administration and the so-called Department of Government Efficiency (DOGE).

Why?

Because freedom:

Per the Presidential Action announced January 20, 2025, NSF will not prioritize research proposals that engage in or facilitate any conduct that would unconstitutionally abridge the free speech of any American citizen. NSF will not support research with the goal of combating “misinformation,” “disinformation,” and “malinformation” that could be used to infringe on the constitutionally protected speech rights of American citizens across the United States in a manner that advances a preferred narrative about significant matters of public debate.

The NSF argues that a person’s First Amendment rights permit them, I mean permit him, to share content without having the government prevent its dissemination by tagging it as misinformation, disinformation, or malinformation.

And it’s not the responsibility of the U.S. Government to fund research into combating so-called misinformation content. Hence the end of funding for deepfake research.

So deepfakes are good because they’re protected by the First Amendment.

But wait a minute…

Just because the U.S. Government doesn’t like it when patriotic citizens are censored from distributing deepfake videos for political purposes, that doesn’t necessarily mean that the U.S. Government objects to ALL deepfakes.

For example, let’s say that a Palm Beach, Florida golf course receives a video message from Tiger Woods reserving a tee time and paying a premium for it. The golf course holds the slot, letting no one else book that tee time, and waits for Tiger’s wire transfer to clear. After the fact, the golf course discovers that (a) the money was wired from a non-existent account, and (b) the person making the video call was not Tiger Woods, but a faked version of him.

I don’t think anyone in the U.S. Government or DOGE thinks that ripping off a Palm Beach, Florida golf course is a legitimate use of First Amendment free speech rights.

So deepfakes are bad because they lead to banking fraud and other forms of fraud.

This is not unique to deepfakes, but is also true of many other technologies. Nuclear technology can provide energy to homes, or it can kill people. Facial recognition (of real people) can find missing and abducted persons, or it can send Chinese Muslims to re-education camps.

Let’s go back to factors of authentication and liveness detection

Now let’s say that Tiger Woods’ face shows up on YOUR screen. You can use liveness detection and other technologies to determine whether it is truly Tiger Woods, and take action accordingly.

  • If the interaction with Woods is trivial, you may NOT want to spend time and resources to perform a robust authentication.
  • If the interaction with Woods is critical, you WILL want to perform a robust authentication.

It all boils down to something that I’ve previously called “somewhat you why.”

Why is Tiger Woods speaking?

  • If Tiger Woods is performing First Amendment-protected activity such as political talk, then “somewhat you why” asserts that whether this is REALLY Woods or not doesn’t matter.
  • If Tiger Woods is making a financial transaction with a Palm Beach, Florida golf course, then “somewhat you why” asserts that you MUST determine if this is really Woods, as sketched below.
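Here’s a hypothetical sketch of how “somewhat you why” might look in code. The tiers and example purposes are my own illustration, not an established standard.

```python
def required_assurance(purpose: str) -> str:
    """Hypothetical 'somewhat you why' logic: the WHY drives the rigor."""
    critical = {"wire_transfer", "tee_time_deposit"}   # money changes hands
    protected_speech = {"political_commentary"}        # First Amendment talk
    if purpose in critical:
        return "robust"   # liveness detection, document checks, step-up auth
    if purpose in protected_speech:
        return "none"     # whether it's REALLY Tiger Woods doesn't matter
    return "basic"        # everything in between gets a lightweight check

print(required_assurance("political_commentary"))  # none
print(required_assurance("tee_time_deposit"))      # robust
```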

It’s simple…right?

What about your deepfake solution?

Regardless of federal funding, companies are still going to offer deepfake detection products. Perhaps yours is one of them.

How will you market that product?

Do you have the resources to market your product, or are your resources already stretched thin?

If you need help with your facial recognition product marketing, Bredemarket has an opening for a facial recognition client. I can offer:

  • compelling content creation
  • winning proposal development
  • actionable analysis

If Bredemarket can help your stretched staff, book a free meeting with me: https://bredemarket.com/cpa/

(Lincoln’s laptop from Imagen 3)

Jobseekers and Know Your (Fill in the Blank)

I’ve noticed that my LinkedIn posts on jobseeking perform much better than my LinkedIn posts on the technical intricacies of multifactor identity verification.

But maybe I can achieve both mass appeal and niche engagement.

Private Equity Talent Hunt and Emma Emily

A year ago I reposted something on LinkedIn about a firm called Private Equity Talent Hunt (among other names). As Shelly Jones originally explained, their business model is to approach a jobseeker about an opportunity, ask for a copy of the jobseeker’s resume, and then spring the bad news that the resume is not “ATS friendly” but can be fixed…for a fee.

The repost has garnered over 20,000 impressions and over 200 comments—high numbers for me. 

It looks like a lot of people are encountering Jennifer Cona, Elizabeth Vardaman, Sarah Williams, Jessica Raymond, Emily Newman, Emma Emily (really), and who knows how many other recruiters…

…who say they work at Private Equity Talent Hunt, Private Equity Recruiting Firm, Private Equity Talent Seek, and who knows how many other firms.

If only there were a way to know if you’re communicating with a real person, at a real business.

Actually, there is.

Know Your Customer and Business

As financial institutions and other businesses have known for years, there are services such as “Know Your Customer” and “Know Your Business” that organizations can use. 

KYC and KYB let companies make sure they’re dealing with real people, and that the business is legitimate and not a front for another company—or for a drug cartel or terrorist organization.

So if a company is approached by Emma Emily at Private Equity Talent Hunt, what does it need to do?

The first step is to determine whether Emma Emily is a real person and not a synthetic identity. You can use a captured facial image, analyzed by liveness detection, coupled with a valid government ID, and possibly supported by home ownership information, utility bills, and other documentation.
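Here’s a sketch of that first step; every field below is a hypothetical stand-in for a real identity verification vendor’s check, not any specific product.

```python
from dataclasses import dataclass, field

@dataclass
class Evidence:
    selfie_passes_liveness: bool   # PAD result on the captured facial image
    id_document_authentic: bool    # government ID security features check out
    selfie_matches_id_photo: bool  # 1:1 face comparison, selfie vs. ID portrait
    supporting_docs: list = field(default_factory=list)  # utility bills, deeds, etc.

def is_real_person(e: Evidence) -> bool:
    # All three core checks must pass; supporting documents raise
    # confidence but cannot substitute for them.
    return (e.selfie_passes_liveness
            and e.id_document_authentic
            and e.selfie_matches_id_photo)

emma = Evidence(True, True, False)  # the face doesn't match the ID photo
print(is_real_person(emma))         # False: possibly a synthetic identity
```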

If there is no Emma Emily, you can stop there.

But if Emma Emily is a real person, you can check her credentials. Where is she employed today? Where was she employed before? What are her post-secondary degrees? What does her LinkedIn profile say? If her previous job was as a jewelry designer and her Oxford degree was in nuclear engineering, Emma Emily sounds risky.

And you can also check the business itself, such as Private Equity Talent Hunt. Check their website, business license, LinkedIn profile, and everything else about the firm.

But I’m not a business!

OK, I admit there’s an issue here.

There are over 100 businesses that provide identity verification services, and many of them provide KYC and KYB.

To other businesses.

Very few people purchase KYC and KYB per se for personal use.

So you have to improvise.

Ask Emma Emily some tough questions.

Ask her about the track record of her employer.

And if Emma Emily claims to be a recruiter for a well-known company like Amazon, ask for her corporate email address.

(Image from Microsoft Copilot)

How Much Does Synthetic Identity Fraud Cost?

Identity firms really hope that prospects understand the threat posed by synthetic identity fraud, or SIF.

I’m here to help.

(Synthetic identity AI image from Imagen 3.)

Estimated SIF costs in 2020

In an early synthetic identity fraud post in 2020, I referenced a Thomson Reuters (not Thomas Reuters) article from that year which quoted synthetic identity fraud figures all over the map.

  • My own post referenced the Auriemma Group estimate of a $6 billion cost to U.S. lenders.
  • McKinsey preferred to use a percentage estimate of “10–15% of charge offs in a typical unsecured lending portfolio.” However, this may not be restricted to synthetic identity fraud, but may include other types of fraud.
  • Thomson Reuters quoted Socure’s Johnny Ayers, who estimated that “20% of credit losses stem from synthetic identity fraud.”

Oh, and a later post that I wrote quoted a $20 billion figure for synthetic identity fraud losses in 2020. Plus this is where I learned the cool acronym “SIF” to refer to synthetic identity fraud. As far as I know, there is no government agency with the acronym SIF; if there were, it would of course cause confusion. (There was a Social Innovation Fund, but that may no longer exist in 2025.)

Never Search Alone, not National Security Agency. AI image from Imagen 3.

Back to synthetic identity fraud, which reportedly resulted in between $6 billion and $20 billion in losses in 2020.

Estimated SIF costs in 2025

But that was 2020.

What about now? Let’s visit Socure again:

The financial toll of AI-driven fraud is staggering, with projected global losses reaching $40 billion by 2027, up from US$12.3 billion in 2023 (CAGR 32%), driven by sophisticated fraud techniques and automation, such as synthetic identities created with AI tools.

Again, this includes non-synthetic fraud, but it’s a good number for the high end. While my FTC fraud post didn’t break out synthetic identity fraud figures, Plaid cited a $1.8 billion figure for 2023 for the auto industry alone, and Mastercard cited a $5 billion figure.
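As a quick sanity check on those growth figures (my arithmetic, not Socure’s):

```python
# $12.3 billion (2023) growing to $40 billion (2027), i.e., over four years.
start, end, years = 12.3, 40.0, 4

implied_cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {implied_cagr:.1%}")  # ~34.3%

# The quoted 32% CAGR lands in the same ballpark after rounding:
print(f"$12.3B at 32% for {years} years: ${start * 1.32 ** years:.1f}B")  # ~$37.3B
```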

But everyone agrees on a figure of billions and billions.

The real Carl Sagan.
The deepfake Carl Sagan.

(I had to stop writing this post for a minute because I received a phone call from “JP Morgan Chase,” but the person didn’t know who they were talking to, merely asking for the owner of the phone number. Back to fraud.)

Reducing SIF in 2025

In a 2023 post, I cataloged four ways to fight synthetic identity fraud:

  1. Private databases.
  2. Government documents.
  3. Government databases.
  4. A “who you are” test with facial recognition and liveness detection (presentation attack detection).

Ideally an identity verification solution should use multiple methods, and not just one. It doesn’t do you any good to forge a driver’s license if AAMVA (the American Association of Motor Vehicle Administrators) doesn’t know about the license in any state or provincial database.
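Here’s a hypothetical sketch of layering those four methods; the scoring is my own illustration, not any vendor’s algorithm.

```python
def synthetic_identity_checks(identity: dict) -> int:
    """Count how many of the four anti-SIF layers the identity clears."""
    layers = [
        identity.get("found_in_private_databases", False),     # 1. bureaus, telcos
        identity.get("government_document_authentic", False),  # 2. the ID itself
        identity.get("found_in_government_databases", False),  # 3. e.g., DMV records
        identity.get("passes_liveness_and_face_match", False), # 4. "who you are" + PAD
    ]
    return sum(layers)  # 4 of 4 = strong evidence of a real identity

forged = {"government_document_authentic": True}  # a beautiful fake license...
print(synthetic_identity_checks(forged))          # ...clears only 1 of 4 layers
```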

And if you need an identity content marketing expert to communicate how your firm fights synthetic identities, Bredemarket can help with its content-proposal-analysis services.

Find out more about Bredemarket’s “CPA” services.

More on Injection Attack Detection

(Injection attack syringe image from Imagen 3)

Not too long after I shared my February 7 post on injection attack detection, Biometric Update shared a post of its own, “Veridas introduces new injection attack detection feature for fraud prevention.”

I haven’t mentioned Veridas much in the Bredemarket blog, but it is one of the 40+ identity firms that are blogging. In Veridas’ case, in English and Spanish.

And of course I referenced Veridas in my February 7 post when it defined the difference between presentation attack detection and injection attack detection.

Biometric Update played up this difference:

To stay ahead of the curve, Spanish biometrics company Veridas has introduced an advanced injection attack detection capability into its system, to combat the growing threat of synthetic identities and deepfakes…. 

Veridas says that standard fraud detection only focuses on what it sees or hears – for example, face or voice biometrics. So-called Presentation Attack Detection (PAD) looks for fake images, videos and voices. Deepfake detection searches for the telltale artifacts that give away the work of generative AI. 

Neither are monitoring where the feed comes from or whether the device is compromised. 
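To restate the distinction as a sketch (my own conceptual illustration, not Veridas’ implementation):

```python
from dataclasses import dataclass

@dataclass
class Capture:
    frames: list               # the face/voice content itself
    camera_is_physical: bool   # False if a virtual camera driver was detected
    device_attested: bool      # OS/app integrity checks passed

def looks_like_live_face(frames: list) -> bool:
    # Stand-in for a vendor's liveness model; illustration only.
    return bool(frames)

def pad_check(capture: Capture) -> bool:
    """PAD looks AT the content: live face, or a photo/mask/replay?"""
    return looks_like_live_face(capture.frames)

def iad_check(capture: Capture) -> bool:
    """IAD looks AROUND the content: did the feed come from a real,
    uncompromised camera, or was it injected upstream of the app?"""
    return capture.camera_is_physical and capture.device_attested

injected = Capture(frames=["flawless_deepfake_frame"],
                   camera_is_physical=False, device_attested=False)
print(pad_check(injected))  # True: the content alone fools PAD
print(iad_check(injected))  # False: IAD catches the virtual camera
```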

I can revisit the arguments about whether you should get PAD and…IAD?…from the same vendor, or whether you should get best-in-class solutions to address each issue separately.

But they need to be addressed.