Deep Deepfakes vs. Shallow Shallowfakes

We toss words around until they lose all meaning, like the name of Jello Biafra’s most famous band. (IYKYK.)

So why are deepfakes deep?

And does the existence of deepfakes necessarily mean that shallowfakes exist?

Why are deepfakes deep?

The University of Virginia Information Security explains how deepfakes are created, which also explains why they’re called that.

“A deepfake is an artificial image or video (a series of images) generated by a special kind of machine learning called “deep” learning (hence the name).”

UVA then launches into a technical explanation.

“Deep learning is a special kind of machine learning that involves “hidden layers.” Typically, deep learning is executed by a special class of algorithm called a neural network….A hidden layer is a series of nodes within the network that performs mathematical transformations to convert input signals to output signals (in the case of deepfakes, to convert real images to really good fake images). The more hidden layers a neural network has, the “deeper” the network is.”
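To make the "hidden layers" idea concrete, here's a minimal sketch (mine, not UVA's) of a tiny feed-forward network in Python/NumPy. The layer sizes and activation function are arbitrary; the only point is that adding more hidden layers is what makes a network "deeper."

```python
import numpy as np

def relu(x):
    return np.maximum(0, x)

def forward(x, weights):
    """Pass an input vector through each hidden layer in turn.

    Every entry in `weights` is one hidden layer: a matrix that
    transforms the previous layer's output. More entries = a
    "deeper" network.
    """
    for w in weights:
        x = relu(w @ x)
    return x

# Three hidden layers of made-up sizes: this toy network is "deeper"
# than one with a single hidden layer, which is all "deep" means here.
rng = np.random.default_rng(0)
layers = [rng.standard_normal((16, 8)),
          rng.standard_normal((16, 16)),
          rng.standard_normal((4, 16))]
output = forward(rng.standard_normal(8), layers)
print(output.shape)  # (4,)
```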

Why are shallowfakes shallow?

So if you don’t use a multi-layer neural network to create your fake, then by definition it’s shallow, although you’ll most likely need cumbersome manual methods to create it.

  • For presentation attack detection (liveness detection, either active or passive), you can dispense with the neural network and just use old-fashioned makeup.
From NIST.

Or a mask.

Imagen 4.

It’s all semantics

In truth, we commonly refer to all face, voice, and finger fakes as “deep” fakes even when they don’t originate in a neural network.

But if someone wants to refer to shallowfakes, it’s OK with me.

IDV Differentiation as Measured in the Prism Project’s Deepfake and Synthetic Identity Report

Because I have talked about differentiation ad nauseum, I’m always looking for ways to see how identity/biometric and technology vendors have differentiated themselves. Yes, almost all of them overuse the word “trust,” but there is still some differentiation out there.

And I found a source that measured differentiation (or “unique positioning”) in various market segments. Using this source, I chose to concentrate on vendors who concentrate on identity verification (or “identity proofing & verification,” but close enough).

My source? The recently released “Biometric Digital Identity Deepfake and Synthetic Identity Prism Report” from The Prism Project, which you can download here by providing your business address.

Before you read this, I want to caution you that this is NOT a thorough evaluation of The Prism Project deepfake and synthetic identity report. After some preliminaries, it focuses on one small portion of the report, concentrating on ONLY one “beam” (IDV) and ONLY one evaluation factor (differentiation).

Four facts about the report

First, the report is comprehensive. It’s not merely a list of ranked vendors, but also provides a, um, deep dive into deepfakes and synthetic identity. Even if you don’t care about the industry players, I encourage you to (a) download the report, and (b) read the 8-page section entitled “Crash Course: The Identity Arms Race.”

  • The crash course starts by describing digital identity and the role that biometrics plays in digital identity. It explains how banks, government agencies, and others perform identity verification; we’ll return to this later.
  • Then it moves on to the bad people who try to use “counterfeit identity elements” in place of “authentic identity elements.” The report discusses spoofs, presentation attacks, countermeasures such as multi-factor authentication, and…
  • Well, just download the report and read it yourself. If you want to understand deepfakes and synthetic identities, the “Crash Course” section will educate you quickly and thoroughly, as will the remainder of the report.
Synthetic Identity Fraud Attacks. Copyright 2025 The Prism Project.

Second, the report is comprehensive. Yeah, I just said that, but it’s also comprehensive in the number of organizations that it covers.

  • In a previous life I led a team that conducted competitive analysis on over 80 identity organizations.
  • I then subsequently encountered others who estimated that there are over 100 organizations.
  • This report evaluates over 200 organizations. In part this is because it includes evaluations of “relying parties” that are part of the ecosystem. (Examples include Mastercard, PayPal, and the Royal Bank of Canada, which obviously don’t want to do business with deepfakes or synthetic identities.) Still, the report is amazing in its organizational coverage.

Third, the report is comprehensive. In a non-lunatic way, the report categorizes each organization into one or more “beams”:

  • The aforementioned relying parties
  • Core identity technology
  • Identity platforms
  • Integrators & solution providers
  • Passwordless authentication
  • Environmental risk signals
  • Infrastructure, community, culture
  • And last but first (for purposes of this post), identity proofing and verification.

Fourth, the report is comprehensive. Yes, I’m repetitive, but each of the 200+ organizations is evaluated on a 0-6 scale based upon seven factors. In listed order, they are:

  • Growth & Resources
  • Market Presence
  • Proof Points
  • Unique Positioning, defined as “Unique Value Proposition (UVP) along with differentiable technology and market innovation generally and within market sector.”
  • Business Model & Strategy
  • Biometrics and Document Authentication
  • Deepfakes & Synthetic Identity Leadership

In essence, the wealth of data makes this report look like a NIST report: there are so many individual “slices” of the prism that every one of the 200+ organizations can make a claim about how it was recognized by The Prism Project. And you’ve probably already seen some organizations make such claims, just like they do whenever a new NIST report comes out.

So let’s look at the tiny slice of the prism that is my, um, focus for this post.

Unique positioning in the IDV slice of the Prism

So, here’s the moment all of you have been waiting for. Which organizations are in the Biometric Digital Identity Deepfake and Synthetic Identity Prism?

Deepfake and Synthetic Identity Prism. Copyright 2025 The Prism Project.

Yeah, the text is small. Told you there were a lot of organizations.

For my purposes I’m going to concentrate on the “identity proofing and verification” beam in the lower left corner. But I’m going to dig deeper.

In the illustration above, organizations are nearer or farther from the center based upon their AVERAGE score for all 7 factors I listed previously. But because I want to concentrate on differentiation, I’m only going to look at the identity proofing and verification organizations with high scores (between 5 and the maximum of 6) for the “unique positioning” factor.
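To show what that cut looks like mechanically, here's a hypothetical sketch in Python. The vendor names and scores below are invented for illustration (the real values are in the Prism report); the logic only demonstrates the difference between ranking by the average of all seven factors and screening on the single "unique positioning" factor.

```python
# Hypothetical scores on the report's 0-6 scale; real values are in the Prism report.
idv_orgs = {
    "Vendor A": {"unique_positioning": 5.5, "average": 4.8},
    "Vendor B": {"unique_positioning": 4.9, "average": 5.1},
    "Vendor C": {"unique_positioning": 2.0, "average": 3.7},
}

# The Prism graphic places organizations by their AVERAGE score;
# my cut keeps only those scoring 5 or better on unique positioning.
differentiated = [name for name, scores in idv_orgs.items()
                  if scores["unique_positioning"] >= 5]
print(differentiated)  # ['Vendor A']
```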

I’ll admit my methodology is somewhat arbitrary.

  • There’s probably no great, um, difference between an organization with a score of 4.9 and one with a score of 5. But you can safely state that an organization with a “unique positioning” score of 2 isn’t as differentiated as one with a score of 5.
  • And this may not matter. For example, iBeta (in the infrastructure – culture – community beam) has a unique positioning score of 2, because a lot of organizations do what iBeta does. But at the same time iBeta has a biometric commitment of 4.5. They don’t evaluate refrigerators.

So, here’s my list of identity proofing and verification organizations who scored between 5 and 6 for the unique positioning factor:

  • ID.me
  • iiDENTIFii
  • Socure

Using the report as my source, these three identity verification companies have offerings that differentiate themselves from others in the pack.

Although I’m sure the other identity verification vendors can be, um, trusted.

Oh, by the way…did I remember to suggest that you download the report?

Deepfake App Secret Purposes and Age Non-verification

It’s nearly impossible to battle a tidal wave.

CBS News recently reported on the attempts of Meta and others to remove advertisements for “nudify” apps from their platforms. The intent of these apps is to take pictures of existing people—for example, “Scarlett Johansson and Anne Hathaway”—and create deepfake nudes based on the source material.

Two versions of “what does this app do”

But the apps may present their purposes differently when applying for Apple App Store and Google Play Store approval.

“The problem with apps is that they have this dual-use front where they present on the app store as a fun way to face swap, but then they are marketing on Meta as their primary purpose being nudification. So when these apps come up for review on the Apple or Google store, they don’t necessarily have the wherewithal to ban them.”

How old are you? If you say so

And there’s another problem. While the apps are marketed to adult men, their users extend beyond that.

“CBS News’ 60 Minutes reported on the lack of age verification on one of the most popular sites using artificial intelligence to generate fake nude photos of real people. 

“Despite visitors being told that they must be 18 or older to use the site…60 Minutes was able to immediately gain access to uploading photos once the user clicked “accept” on the age warning prompt, with no other age verification necessary.”

We’ve seen this so-called “age verification” before.

From another age-regulated industry.

But if whack-a-mole fighting against deepfake generators won’t work, what will?

I don’t have the answer. Even common sense won’t help here.

Deepfakes Slipping Through the Silos

(Imagen 4)

Sometimes common sense isn’t enough to stop deepfake fraud. Marc Ricker of iVALT asserts that a unified response also helps.

“Too often, network teams focus on availability, while security teams chase threats after the fact. That separation creates gaps — gaps that attackers exploit.”

Ricker’s solution:

“iVALT unifies remote access and identity security through:

Instant, passwordless biometric authentication

AI-resistant technology that stops deepfake and synthetic identity fraud”

iVALT trumpets the use of five factors: device ID, biometrics, geolocation, time window, and “app code.” (There’s a rough sketch of how those factors might combine after the list below.)

  • I was curious which biometric modalities and vendors iVALT supported, so I looked it up. 
  • iVALT appears to use PingOne DaVinci, which orchestrates everything. 
  • The only biometrics specifically mentioned by iVALT are those captured on a mobile phone.
  • But it’s unclear to me whether these are the biometrics captured by the phone’s operating system (for example, Touch ID or Face ID on iOS), third party biometrics, or all of the above.
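Setting the modality question aside, here's a rough sketch of what a five-factor check along the lines iVALT describes could look like. This is my own illustration, not iVALT's implementation; every field name, threshold, and constant here is hypothetical.

```python
from datetime import time

# Hypothetical trust anchors; a real product would verify each
# signal against a trusted backend, not hard-coded values.
TRUSTED_DEVICES = {"device-123"}
ALLOWED_COUNTRIES = {"US"}
EXPECTED_APP_CODE = "4821"

def five_factor_check(request):
    """Toy combination of the five signals: device ID, biometrics,
    geolocation, time window, and app code."""
    checks = {
        "device_id":   request["device_id"] in TRUSTED_DEVICES,
        "biometric":   request["biometric_match_score"] >= 0.95,
        "geolocation": request["country"] in ALLOWED_COUNTRIES,
        "time_window": time(8, 0) <= request["local_time"] <= time(18, 0),
        "app_code":    request["app_code"] == EXPECTED_APP_CODE,
    }
    return all(checks.values()), checks

ok, detail = five_factor_check({
    "device_id": "device-123",
    "biometric_match_score": 0.97,
    "country": "US",
    "local_time": time(9, 30),
    "app_code": "4821",
})
print(ok)  # True only when every one of the five factors checks out
```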

Of course, most people don’t care about the minutiae of supported biometric modalities. 

But some do…because not all biometric algorithms provide the same accuracy or performance.

Presentation Attack Injection, Injection Attack Detection, and Deepfakes on LinkedIn and Substack

Just letting my Bredemarket blog readers know of two items I wrote on other platforms.

  • “Presentation Attack Injection, Injection Attack Detection, and Deepfakes.” This LinkedIn article, part of The Wildebeest Speaks newsletter series, is directed toward people who already have some familiarity with deepfake attacks.
  • “Presentation Attack Injection, Injection Attack Detection, and Deepfakes (version 2).” This Substack post does NOT assume any deepfake attack background.

The Best Deepfake Defense is NOT Technological

I think about deepfakes a lot. As the identity/biometric product marketing consultant at Bredemarket, it comes with the territory.

When I’m not researching how fraudsters perpetrate deepfake faces, deepfake voices, and other deepfake modalities, and how presentation attack detection (liveness detection) and injection attack detection can stop them…

…I’m researching and describing how Bredemarket’s clients and prospects develop innovative technologies to expose these deepfake fraudsters.

You can spend good money on deepfake-fighting industry solutions, and you can often realize a positive return on investment when purchasing these technologies.

But the best defense against these deepfakes isn’t some whiz bang technology.

It’s common sense.

  • Would your CEO really call you at midnight to expedite an urgent financial transaction?
  • Would that Amazon recruiter want to schedule a Zoom call right now?

If you receive an out-of-the-ordinary request, the first and most important thing to do is to take a deep breath.

A real CEO or recruiter would understand.

And…

…if your company offers a fraud-fighting solution to detect and defeat deepfakes, Bredemarket can help you market your solution. My content, proposal, and analysis offerings are at your service. Let’s talk: https://bredemarket.com/cpa/

CPA

(Imagen 4)

“Somewhat You Why” in Minnesota

Remember my earlier post “‘Somewhat You Why,’ and Whether Deepfakes are Evil or Good or Both”?

When I posted it, I said:

I debated whether or not I should publish this because it touches upon two controversial topics: U.S. politics, and my proposed sixth factor of authentication. 

I eventually decided to share it on the Bredemarket blog but NOT link to it or quote it on my socials.

Well, I’m having the same debate with this post, which is ironic because I learned about the content via the socials. Not that I will identify the source, because it is from someone’s personal Facebook feed.

Just a random picture of Princess Diana. Public domain.

My earlier post analyzed my assumption that deepfakes are bad. It covered the end of National Science Foundation funding for deepfake research, apparently because deepfakes can be used as a form of First Amendment free speech.

Well, the same issue is appearing at the state level, according to the AP:

X Corp., the social media platform owned by Trump adviser Elon Musk, is challenging the constitutionality of a Minnesota ban on using deepfakes to influence elections and harm candidates, saying it violates First Amendment speech protections.

As I previously noted, this does NOT mean that X believes in a Constitutional right to financially defraud people.

  • Or do I have a Constitutional right to practice my freedom of religion by creating my own biometric-free voter identification card like John Wahl did?

Again, is it all about intent? Somewhat you why?

And if your firm provides facial recognition, how do you address such issues?

If you need help with your facial recognition product marketing, Bredemarket has an opening for a facial recognition client. I can offer

  • compelling content creation
  • winning proposal development
  • actionable analysis

If Bredemarket can help your stretched staff, book a free meeting with me: https://bredemarket.com/cpa/

(Lincoln’s laptop from Imagen 3)

“Somewhat You Why,” and Whether Deepfakes are Evil or Good or Both

I debated whether or not I should publish this because it touches upon two controversial topics: U.S. politics, and my proposed sixth factor of authentication.

I eventually decided to share it on the Bredemarket blog but NOT link to it or quote it on my socials.

Although I could change my mind later.

Are deepfakes bad?

When I first discussed deepfakes in June 2023, I detailed two deepfake applications.

One deepfake was an audio-video creation purportedly showing Richard Nixon paying homage to the Apollo 11 astronauts who were stranded on the surface of the moon.

  • Of course, no Apollo 11 astronauts were ever stranded on the surface of the moon; Neil Armstrong and Buzz Aldrin returned to Earth safely.
  • So Nixon never had to pay homage to them, although William Safire wrote a speech as a contingency.
  • This deepfake is not in itself bad, unless it is taught in a history course as true history about “the failure of the U.S. moon program.” (The Apollo program had a fatal catastrophe, but not involving Apollo 11.)

The other deepfake was more sinister.

In early 2020, a branch manager of a Japanese company in Hong Kong received a call from a man whose voice he recognized—the director of his parent business. The director had good news: the company was about to make an acquisition, so he needed to authorize some transfers to the tune of $35 million….The manager, believing everything appeared legitimate, began making the transfers.

Except that the director wasn’t the director, and the company had just been swindled to the tune of $35 million.

I think everyone knows now that deepfakes can be used for bad things. So we establish standards to determine “content provenance and authenticity,” which is a fancy way to say whether content is real or a deepfake.
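As a rough illustration of what “provenance and authenticity” means mechanically, here's a sketch of verifying that a piece of content still matches the hash recorded in a signed manifest. This is a generic HMAC-based example of my own, not the C2PA specification or any particular standard's API, and the key handling is deliberately simplified.

```python
import hashlib
import hmac

# Illustrative shared secret; real provenance standards use public-key
# signatures and certificate chains, not a hard-coded key.
SIGNING_KEY = b"shared-secret-for-illustration-only"

def make_manifest(content: bytes) -> dict:
    """Record the content's hash and sign the record."""
    digest = hashlib.sha256(content).hexdigest()
    signature = hmac.new(SIGNING_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return {"sha256": digest, "signature": signature}

def verify(content: bytes, manifest: dict) -> bool:
    """Check that the content is unchanged and the manifest wasn't forged."""
    digest = hashlib.sha256(content).hexdigest()
    expected_sig = hmac.new(SIGNING_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return (digest == manifest["sha256"]
            and hmac.compare_digest(expected_sig, manifest["signature"]))

original = b"frame data from a real video"
manifest = make_manifest(original)
print(verify(original, manifest))                       # True
print(verify(b"frame data from a deepfake", manifest))  # False
```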

In addition to establishing standards, we do a lot of research to counter deepfakes, because they are bad.

Or are they?

What the National Science Foundation won’t do

Multiple sources, including both Nextgov and Biometric Update, are reporting on the cancellation of approximately 430 grants from the National Science Foundation. Among these grants are ones for deepfake research.

Around 430 federally-funded research grants covering topics like deepfake detection, artificial intelligence advancement and the empowerment of marginalized groups in scientific fields were among several projects terminated in recent days following a major realignment in research priorities at the National Science Foundation.

As you can probably guess, the cancellation of these grants is driven by the Trump Administration and the so-called Department of Government Efficiency (DOGE).

Why?

Because freedom:

Per the Presidential Action announced January 20, 2025, NSF will not prioritize research proposals that engage in or facilitate any conduct that would unconstitutionally abridge the free speech of any American citizen. NSF will not support research with the goal of combating “misinformation,” “disinformation,” and “malinformation” that could be used to infringe on the constitutionally protected speech rights of American citizens across the United States in a manner that advances a preferred narrative about significant matters of public debate.

The NSF argues that a person’s First Amendment rights permit them, I mean permit him, to share content without having the government prevent its dissemination by tagging it as misinformation, disinformation, or malinformation.

And, in the government’s view, it’s not the responsibility of the U.S. Government to fund research into combating so-called misinformation content. Hence the end of funding for deepfake research.

So deepfakes are good because they’re protected by the First Amendment.

But wait a minute…

Just because the U.S. Government doesn’t like it when patriotic citizens are censored from distributing deepfake videos for political purposes, that doesn’t necessarily mean that the U.S. Government approves of ALL deepfakes.

For example, let’s say that a Palm Beach, Florida golf course receives a video message from Tiger Woods reserving a tee time and offering a lot of money to do so. The golf course blocks anyone else from booking that tee time and waits for Tiger’s wire transfer to clear. After the fact, the golf course discovers that (a) the money was wired from a non-existent account, and (b) the person making the video call was not Tiger Woods, but a faked version of him.

I don’t think anyone in the U.S. Government or DOGE thinks that ripping off a Palm Beach, Florida golf course is a legitimate use of First Amendment free speech rights.

So deepfakes are bad because they lead to banking fraud and other forms of fraud.

This is not unique to deepfakes, but is also true of many other technologies. Nuclear technology can provide energy to homes, or it can kill people. Facial recognition (of real people) can find missing and abducted persons, or it can send Chinese Muslims to re-education camps.

Let’s go back to factors of authentication and liveness detection

Now let’s say that Tiger Woods’ face shows up on YOUR screen. You can use liveness detection and other technologies to determine whether it is truly Tiger Woods, and take action accordingly.

  • If the interaction with Woods is trivial, you may NOT want to spend time and resources to perform a robust authentication.
  • If the interaction with Woods is critical, you WILL want to perform a robust authentication.

It all boils down to something that I’ve previously called “somewhat you why.”

Why is Tiger Woods speaking?

  • If Tiger Woods is performing First Amendment-protected activity such as political talk, then “somewhat you why” asserts that whether this is REALLY Woods or not doesn’t matter.
  • If Tiger Woods is making a financial transaction with a Palm Beach, Florida golf course, then “somewhat you why” asserts that you MUST determine if this is really Woods.
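Here's a minimal sketch of that “somewhat you why” decision expressed as code. The purpose categories and return values are mine, invented for illustration; the point is simply that the strength of identity verification should scale with why the person is interacting with you.

```python
def required_assurance(purpose: str) -> str:
    """Map WHY someone is interacting with you to how hard you verify them.

    The categories and assurance levels are illustrative only.
    """
    if purpose == "political_speech":
        return "none"    # protected expression; whether it's really them may not matter
    if purpose == "casual_chat":
        return "basic"   # low stakes; a lightweight check is enough
    if purpose == "financial_transaction":
        return "strong"  # liveness detection plus robust authentication
    return "strong"      # default to caution for anything unknown

print(required_assurance("financial_transaction"))  # strong
```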

It’s simple…right?

What about your deepfake solution?

Regardless of federal funding, companies are still going to offer deepfake detection products. Perhaps yours is one of them.

How will you market that product?

Do you have the resources to market your product, or are your resources already stretched thin?

If you need help with your facial recognition product marketing, Bredemarket has an opening for a facial recognition client. I can offer

  • compelling content creation
  • winning proposal development
  • actionable analysis

If Bredemarket can help your stretched staff, book a free meeting with me: https://bredemarket.com/cpa/

(Lincoln’s laptop from Imagen 3)

Zoom Scam With Faces

An interesting variant on fraudulent deepfake scams.

Kenny Li of Manta fame was drawn into a scam attempt, but recognized the scam before any damage was done.

Li responded to a message from a known contact, which resulted in a Telegram conversation, which resulted in a Zoom call.

“In the call, there were team members who had their cameras on, and [the] Manta founder could see their faces. He mentioned that “Everything looked very real. But I couldn’t hear them.” Then came the “Zoom update required” prompt…”

Li didn’t fall for it.

(Imagen 3)

And one more thing…

The formal announcement is embargoed until Monday, but Bredemarket has TWO openings to act as your on-demand marketing muscle for facial recognition or cybersecurity:

  • compelling content creation
  • winning proposal development
  • actionable analysis

Book a call: https://bredemarket.com/cpa/