“Somewhat You Why,” and Whether Deepfakes are Evil or Good or Both

I debated whether or not I should publish this because it touches upon two controversial topics: U.S. politics, and my proposed sixth factor of authentication.

I eventually decided to share it on the Bredemarket blog but NOT link to it or quote it on my socials.

Although I could change my mind later.

Are deepfakes bad?

When I first discussed deepfakes in June 2023, I detailed two deepfake applications.

One deepfake was an audio-video creation purportedly showing Richard Nixon paying homage to the Apollo 11 astronauts who were stranded on the surface of the moon.

  • Of course, no Apollo 11 astronauts were ever stranded on the surface of the moon; Neil Armstrong and Buzz Aldrin returned to Earth safely.
  • So Nixon never had to pay homage to them, although William Safire wrote a speech as a contingency.
  • This deepfake is not in itself bad, unless it is taught in a history course as true history about “the failure of the U.S. moon program.” (The Apollo program did have a fatal catastrophe, the Apollo 1 launch pad fire, but it did not involve Apollo 11.)

The other deepfake was more sinister.

In early 2020, a branch manager of a Japanese company in Hong Kong received a call from a man whose voice he recognized—the director of his parent business. The director had good news: the company was about to make an acquisition, so he needed to authorize some transfers to the tune of $35 million….The manager, believing everything appeared legitimate, began making the transfers.

Except that the director wasn’t the director, and the company had just been swindled to the tune of $35 million.

I think everyone knows now that deepfakes can be used for bad things. So we establish standards to determine “content provenance and authenticity,” which is a fancy way to say whether content is real or a deepfake.

In addition to establishing standards, we do a lot of research to counter deepfakes, because they are bad.

Or are they?

What the National Science Foundation won’t do

Multiple sources, including both Nextgov and Biometric Update, are reporting on the cancellation of approximately 430 grants from the National Science Foundation. Among these grants are ones for deepfake research.

Around 430 federally-funded research grants covering topics like deepfake detection, artificial intelligence advancement and the empowerment of marginalized groups in scientific fields were among several projects terminated in recent days following a major realignment in research priorities at the National Science Foundation.

As you can probably guess, the cancellation of these grants is driven by the Trump Administration and the so-called Department of Government Efficiency (DOGE).

Why?

Because freedom:

Per the Presidential Action announced January 20, 2025, NSF will not prioritize research proposals that engage in or facilitate any conduct that would unconstitutionally abridge the free speech of any American citizen. NSF will not support research with the goal of combating “misinformation,” “disinformation,” and “malinformation” that could be used to infringe on the constitutionally protected speech rights of American citizens across the United States in a manner that advances a preferred narrative about significant matters of public debate.

The NSF argues that a person’s First Amendment rights permit them, I mean permit him, to share content without having the government prevent its dissemination by tagging it as misinformation, disinformation, or malinformation.

And, under this view, it’s not the responsibility of the U.S. Government to fund research into detecting or combating so-called misinformation content. Hence the end of funding for deepfake detection research.

So deepfakes are good because they’re protected by the First Amendment.

But wait a minute…

Just because the U.S. Government doesn’t like it when patriotic citizens are censored from distributing deepfake videos for political purposes, that doesn’t necessarily mean that the U.S. Government approves of ALL deepfakes.

For example, let’s say that a Palm Beach, Florida golf course receives a video message from Tiger Woods reserving a tee time and offering a lot of money for the privilege. The golf course blocks off the tee time so no one else can book it and waits for Tiger’s wire transfer to clear. After the fact, the golf course discovers that (a) the money was wired from a non-existent account, and (b) the person in the video message was not Tiger Woods, but a faked version of him.

I don’t think anyone in the U.S. Government or DOGE thinks that ripping off a Palm Beach, Florida golf course is a legitimate use of First Amendment free speech rights.

So deepfakes are bad because they lead to banking fraud and other forms of fraud.

This is not unique to deepfakes, but is also true of many other technologies. Nuclear technology can provide energy to homes, or it can kill people. Facial recognition (of real people) can find missing and abducted persons, or it can send Chinese Muslims to re-education camps.

Let’s go back to factors of authentication and liveness detection

Now let’s say that Tiger Woods’ face shows up on YOUR screen. You can use liveness detection and other technologies to determine whether it is truly Tiger Woods, and take action accordingly.

  • If the interaction with Woods is trivial, you may NOT want to spend time and resources to perform a robust authentication.
  • If the interaction with Woods is critical, you WILL want to perform a robust authentication.

It all boils down to something that I’ve previously called “somewhat you why.”

Why is Tiger Woods speaking?

  • If Tiger Woods is performing First Amendment-protected activity such as political talk, then “somewhat you why” asserts that whether this is REALLY Woods or not doesn’t matter.
  • If Tiger Woods is making a financial transaction with a Palm Beach, Florida golf course, then “somewhat you why” asserts that you MUST determine if this is really Woods.

It’s simple…right?
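
Since “somewhat you why” is really a risk-based decision rule, here is a minimal sketch of how it might be encoded in a screening flow. This is purely illustrative; the risk tiers, function names, and check names are my own assumptions, not an established standard.

```python
from enum import Enum

class InteractionRisk(Enum):
    TRIVIAL = 1    # e.g., political speech, casual conversation
    ROUTINE = 2    # e.g., small account changes
    CRITICAL = 3   # e.g., wire transfers, big-money tee time reservations

def required_checks(risk: InteractionRisk) -> list[str]:
    """Map the 'why' of an interaction to the authentication effort it deserves."""
    if risk is InteractionRisk.TRIVIAL:
        # Protected speech: it doesn't matter whether it's really Woods.
        return []
    if risk is InteractionRisk.ROUTINE:
        return ["face_match"]
    # Critical transactions get the full battery.
    return ["face_match", "liveness_detection", "injection_attack_detection",
            "out_of_band_confirmation"]

print(required_checks(InteractionRisk.CRITICAL))
```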

What about your deepfake solution?

Regardless of federal funding, companies are still going to offer deepfake detection products. Perhaps yours is one of them.

How will you market that product?

Do you have the resources to market your product, or are your resources already stretched thin?

If you need help with your facial recognition product marketing, Bredemarket has an opening for a facial recognition client. I can offer

  • compelling content creation
  • winning proposal development
  • actionable analysis

If Bredemarket can help your stretched staff, book a free meeting with me: https://bredemarket.com/cpa/

(Lincoln’s laptop from Imagen 3)

Zoom Scam With Faces

An interesting variant on fraudulent deepfake scams.

Kenny Li of Manta fame was drawn into a scam attempt, but recognized it as a scam before any damage was done.

Li responded to a message from a known contact, which led to a Telegram conversation and then to a Zoom call.

“In the call, there were team members who had their cameras on, and [the] Manta founder could see their faces. He mentioned that “Everything looked very real. But I couldn’t hear them.” Then came the “Zoom update required” prompt…”

Li didn’t fall for it.

(Imagen 3)

And one more thing…

The formal announcement is embargoed until Monday, but Bredemarket has TWO openings to act as your on-demand marketing muscle for facial recognition or cybersecurity:

  • compelling content creation
  • winning proposal development
  • actionable analysis

Book a call: https://bredemarket.com/cpa/ 

Beyond Voice: Adobe Experience Manager and the Coalition for Content Provenance and Authenticity

I previously described how technology from the Coalition for Content Provenance and Authenticity protects you from voice deepfakes.

But as Krassimir Boyanov of KBWEB Consult notes, C2PA technology provides the “provenance” of other types of assets.

Adobe is implementing C2PA-compliant content credentials in AEM [Adobe Experience Manager]. These can tell you the facts about an asset (in C2PA terms, the “provenance”): whether the image of a product from a particular manufacturer was created by that manufacturer, whether a voice recording of Anil Chakravarthy was created by Anil Chakravarthy, or whether an image of a peaceful meadow was actually generated by artificial intelligence (such as Adobe Firefly).

(AI generated image of Richard Nixon from https://www.youtube.com/watch?v=2rkQn-43ixs)

The “Biometric Digital Identity Deepfake and Synthetic Identity Prism Report” is Coming

As you may have noticed, I have talked about both deepfakes and synthetic identity ad nauseam.

But perhaps you would prefer to hear from someone who knows what they’re talking about.

On a webcast this morning, C. Maxine Most of The Prism Project reminded us that the “Biometric Digital Identity Deepfake and Synthetic Identity Prism Report” is scheduled for publication in May 2025, just a little over a month from now.

As with all other Prism Project publications, I expect a report that details the identity industry’s solutions to battle deepfakes and synthetic identities, and the vendors who provide them.

And the report is coming from one of the few industry researchers who knows the industry. Max doesn’t write synthetic identity reports one week and refrigerator reports the next, if you know what I mean.

At this point The Prism Project is soliciting sponsorships. Quality work doesn’t come for free, you know. If your company is interested in sponsoring the report, visit this link.

While waiting for Max, here are the Five Tops

And while you’re waiting for Max’s authoritative report on deepfakes and synthetic identity, you may want to take a look at Min’s (my) views, such as they are. Here are my current “five tops” posts on deepfakes and synthetic identity.

How Much Does Synthetic Identity Fraud Cost?

Identity firms really hope that prospects understand the threat posed by synthetic identity fraud, or SIF.

I’m here to help.

(Synthetic identity AI image from Imagen 3.)

Estimated SIF costs in 2020

In an early synthetic identity fraud post in 2020, I referenced a Thomson Reuters (not Thomas Reuters) article from that year which quoted synthetic identity fraud figures all over the map.

  • My own post referenced the Auriemma Group estimate of a $6 billion cost to U.S. lenders.
  • McKinsey preferred to use a percentage estimate of “10–15% of charge offs in a typical unsecured lending portfolio.” However, this may not be restricted to synthetic identity fraud, but may include other types of fraud.
  • Thomson Reuters quoted Socure’s Johnny Ayers, who estimated that “20% of credit losses stem from synthetic identity fraud.”

Oh, and a later post that I wrote quoted a $20 billion figure for synthetic identity fraud losses in 2020. That post is also where I learned the cool acronym “SIF” for synthetic identity fraud. As far as I know, there is no government agency with the acronym SIF; if there were, it would of course cause confusion. (There was a Social Innovation Fund, but that may no longer exist in 2025.)

Never Search Alone, not National Security Agency. AI image from Imagen 3.

Back to synthetic identity fraud, which reportedly resulted in between $6 billion and $20 billion in losses in 2020.

Estimated SIF costs in 2025

But that was 2020.

What about now? Let’s visit Socure again:

The financial toll of AI-driven fraud is staggering, with projected global losses reaching $40 billion by 2027, up from US$12.3 billion in 2023 (CAGR 32%), driven by sophisticated fraud techniques and automation, such as synthetic identities created with AI tools.

Again, this includes non-synthetic fraud, but it’s a good number for the high end. While my FTC fraud post didn’t break out synthetic identity fraud figures, Plaid cited a $1.8 billion figure for the auto industry alone in 2023, and Mastercard cited a $5 billion figure.
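
As a quick sanity check on those projections (my own arithmetic, not Socure’s): growing $12.3 billion at roughly a third per year for the four years from 2023 to 2027 does land in the neighborhood of $40 billion.

```python
# Implied compound annual growth rate from $12.3B (2023) to $40B (2027)
start, end, years = 12.3, 40.0, 4
cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")                                            # ~34.3%
print(f"$12.3B at 32%/yr for {years} years: ${start * 1.32 ** years:.1f}B")   # ~$37.3B
```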

But everyone agrees on a figure of billions and billions.

The real Carl Sagan.
The deepfake Carl Sagan.

(I had to stop writing this post for a minute because I received a phone call from “JP Morgan Chase,” but the person didn’t know who they were talking to, merely asking for the owner of the phone number. Back to fraud.)

Reducing SIF in 2025

In a 2023 post, I cataloged four ways to fight synthetic identity fraud:

  1. Private databases.
  2. Government documents.
  3. Government databases.
  4. A “who you are” test with facial recognition and liveness detection (presentation attack detection).

Ideally an identity verification solution should use multiple methods, not just one. A forged driver’s license does the fraudster no good if AAMVA can’t find that license in any state or provincial database.
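
A minimal sketch of that layered approach appears below. The helper checks are stubs I invented for illustration; a real implementation would query credit bureaus and other private databases, a document authenticity service, an issuer database reachable through AAMVA, and a PAD-capable face matcher.

```python
from dataclasses import dataclass

@dataclass
class Applicant:
    name: str
    license_number: str
    selfie_is_live: bool          # result of liveness / presentation attack detection
    face_matches_document: bool   # result of comparing the selfie to the license photo

# Illustrative stubs only; real checks would call external services.
def check_private_databases(a: Applicant) -> bool: return bool(a.name)                # 1. private databases
def check_document_security(a: Applicant) -> bool: return a.license_number.isalnum()  # 2. government documents
def check_issuer_database(a: Applicant) -> bool: return len(a.license_number) > 5     # 3. government databases
def check_face_and_liveness(a: Applicant) -> bool:                                    # 4. "who you are" + PAD
    return a.selfie_is_live and a.face_matches_document

def verify_identity(a: Applicant) -> bool:
    """Every layer must agree; one forged artifact shouldn't be enough."""
    return all([
        check_private_databases(a),
        check_document_security(a),
        check_issuer_database(a),
        check_face_and_liveness(a),
    ])

applicant = Applicant("Pat Example", "D1234567", selfie_is_live=True, face_matches_document=True)
print(verify_identity(applicant))  # True only when all four methods pass
```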

And if you need an identity content marketing expert to communicate how your firm fights synthetic identities, Bredemarket can help with its content-proposal-analysis services.

Find out more about Bredemarket’s “CPA” services.

Hammering Imposter Fraud

(Imagen 3)

If you sell hammers, then hammers are the solution to everything, including tooth decay.

I previously wrote about the nearly $3 billion lost to imposter fraud in the United States in 2024.

But how do you mitigate it?

“Advanced, contextual fraud prevention” firm ThreatMark has the answer:

“As impersonation scams use a wide range of fraudulent methods, they require a comprehensive approach to detection and prevention. One of the most efficient in this regard is behavioral intelligence. Its advantages lie mainly in its ability to detect both authorized and unauthorized fraud in real time across all digital channels, based on a variety of signals.”

Perhaps.

Update: A Little Harder to Create Voice Deepfakes?

(Imposter scam wildebeest image from Imagen 3)

(Part of the biometric product marketing expert series)

Remember my post early this morning entitled “Nearly $3 Billion Lost to Imposter Scams in the U.S. in 2024”?

The post touched on many items, one of which was the relative ease in using popular voice cloning programs to create fraudulent voices. Consumer Reports determined that four popular voice cloning programs “did not have the technical mechanisms necessary to prevent cloning someone’s voice without their knowledge or to limit the AI cloning to only the user’s voice.”

Reducing voice clone fraud?

Joel R. McConvey of Biometric Update wrote a piece (“Hi mom, it’s me,” an example of a popular fraudulent voice clone) that included an update on one of the four vendors cited by Consumer Reports.

In its responses, ElevenLabs – which was implicated in the deepfake Joe Biden robocall scam of November 2023 – says it is “implementing Coalition for Content Provenance and Authenticity (C2PA) standards by embedding cryptographically-signed metadata into the audio generated on our platform,” and lists customer screening, voice CAPTCHA and its No-Go Voice technology, which blocks the voices of hundreds of public figures, as among safeguards it already deploys.

Coalition for Content Provenance and Authenticity

So what are these C2PA standards? As a curious sort (I am ex-IDEMIA, after all), I investigated.

The Coalition for Content Provenance and Authenticity (C2PA) addresses the prevalence of misleading information online through the development of technical standards for certifying the source and history (or provenance) of media content. C2PA is a Joint Development Foundation project, formed through an alliance between Adobe, Arm, Intel, Microsoft and Truepic.

There are many other organizations whose logos appear on the website, including Amazon, Google, Meta, and OpenAI.

Provenance

I won’t plunge into the entire specifications, but this excerpt from the “Explainer” highlights an important word, “provenance” (the P in C2PA).

Provenance generally refers to the facts about the history of a piece of digital content assets (image, video, audio recording, document). C2PA enables the authors of provenance data to securely bind statements of provenance data to instances of content using their unique credentials. These provenance statements are called assertions by the C2PA. They may include assertions about who created the content and how, when, and where it was created. They may also include assertions about when and how it was edited throughout its life. The content author, and publisher (if authoring provenance data) always has control over whether to include provenance data as well as what assertions are included, such as whether to include identifying information (in order to allow for anonymous or pseudonymous assets). Included assertions can be removed in later edits without invalidating or removing all of the included provenance data in a process called redaction.
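
I’m not reproducing the actual C2PA manifest format here (real manifests use X.509 credentials and COSE signatures), but this hand-rolled sketch shows the core idea: hash the asset, attach assertions about its history, and cryptographically bind the two. The field names and the HMAC stand-in are my own assumptions, purely to keep the example self-contained.

```python
import hashlib
import hmac
import json

def make_provenance_record(asset_bytes: bytes, assertions: dict, signing_key: bytes) -> dict:
    """Bind provenance assertions to a specific asset and sign the binding."""
    record = {
        "asset_sha256": hashlib.sha256(asset_bytes).hexdigest(),  # ties assertions to this exact content
        "assertions": assertions,  # who created it, how, when, what edits, AI involvement...
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(signing_key, payload, hashlib.sha256).hexdigest()
    return record

record = make_provenance_record(
    asset_bytes=b"synthesized audio bytes",
    assertions={"created_with": "AI text-to-speech", "claimed_speaker": "not Richard Nixon"},
    signing_key=b"demo-key",
)
print(json.dumps(record, indent=2))
```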

Providence

I would really have to get into the nitty gritty of the specifications to see exactly how ElevenLabs, or anyone else, can accurately assert that a voice recording alleged to have been made by Richard Nixon actually was made by Richard Nixon. Hint: this one wasn’t.

From https://www.youtube.com/watch?v=2rkQn-43ixs.

Incidentally, while this was obviously never spoken, and I don’t believe that Nixon ever saw it, the speech was drafted as a contingency by William Safire. And I think everyone can admit that Safire could soar as a speechwriter for Nixon, whose sense of history caused him to cast himself as an American Churchill (with 1961 to 1969 as Nixon’s “wilderness years”). Safire also wrote for Agnew, who was not known as a great strategic thinker.

And the Apollo 11 speech above is not the only contingency speech ever written. Someone should create a deepfake of this speech that was NEVER delivered by then-General Dwight D. Eisenhower after D-Day:

Our landings in the Cherbourg-Havre area have failed to gain a satisfactory foothold and I have withdrawn the troops. My decision to attack at this time and place was based upon the best information available. The troops, the air and the Navy did all that bravery and devotion to duty could do. If any blame or fault attaches to the attempt it is mine alone.

Nearly $3 Billion Lost to Imposter Scams in the U.S. in 2024

(Imposter scam wildebeest image from Imagen 3)

According to the Federal Trade Commission, fraud is being reported at the same rate, but more people are saying they are losing money from it.

In 2023, 27% of people who reported a fraud said they lost money, while in 2024, that figure jumped to 38%.

In a way this is odd, since you would think that we would be better at detecting fraud attempts by now. But apparently we aren’t. (I’ll say why in a minute.)

Imposter scams

The second-highest fraud category by reported losses, after investment scams, was imposter scams.

The second highest reported loss amount came from imposter scams, with $2.95 billion reported lost. In 2024, consumers reported losing more money to scams where they paid with bank transfers or cryptocurrency than all other payment methods combined.

Deepfakes

I’ve spent…a long time in the business of determining who people are, and who people aren’t. While the FTC summary didn’t detail the methods of imposter scams, at least some of these may have used deepfakes to perpetrate the scam.

The FTC addressed deepfakes two years ago, speaking of

…technology that simulates human activity, such as software that creates deepfake videos and voice clones…. They can use deepfakes and voice clones to facilitate imposter scams, extortion, and financial fraud. And that’s very much a non-exhaustive list.

Creating deepfakes

And the need for advanced skills to create deepfakes has disappeared. ZDNET reported on a Consumer Reports study that analyzed six voice cloning software packages:

The results found that four of the six products — from ElevenLabs, Speechify, PlayHT, and Lovo — did not have the technical mechanisms necessary to prevent cloning someone’s voice without their knowledge or to limit the AI cloning to only the user’s voice. 

Instead, the protection was limited to a box users had to check off, confirming they had the legal right to clone the voice.

Which is just as effective as verifying someone’s identity by asking for their name and date of birth.

(Not) detecting deepfakes

And of course the identity/biometric vendor community is also addressing deepfakes. Research from iProov indicates one reason why 38% of the FTC reporters lost money to fraud:

[M]ost people can’t identify deepfakes – those incredibly realistic AI-generated videos and images often designed to impersonate people. The study tested 2,000 UK and US consumers, exposing them to a series of real and deepfake content. The results are alarming: only 0.1% of participants could accurately distinguish real from fake content across all stimuli which included images and video… in a study where participants were primed to look for deepfakes. In real-world scenarios, where people are less aware, the vulnerability to deepfakes is likely even higher.

So what’s the solution? Throw more technology at the problem? Multi factor authentication (requiring the fraudster to deepfake multiple items)? Something else?

More on Injection Attack Detection

(Injection attack syringe image from Imagen 3)

Not too long after I shared my February 7 post on injection attack detection, Biometric Update shared a post of its own, “Veridas introduces new injection attack detection feature for fraud prevention.”

I haven’t mentioned Veridas much in the Bredemarket blog, but it is one of the 40+ identity firms that are blogging. In Veridas’ case, in English and Spanish.

And of course I referenced Veridas in my February 7 post, where it defined the difference between presentation attack detection and injection attack detection.

Biometric Update played up this difference:

To stay ahead of the curve, Spanish biometrics company Veridas has introduced an advanced injection attack detection capability into its system, to combat the growing threat of synthetic identities and deepfakes…. 

Veridas says that standard fraud detection only focuses on what it sees or hears – for example, face or voice biometrics. So-called Presentation Attack Detection (PAD) looks for fake images, videos and voices. Deepfake detection searches for the telltale artifacts that give away the work of generative AI. 

Neither are monitoring where the feed comes from or whether the device is compromised. 

I can revisit the arguments about whether you should get PAD and…IAD?…from the same vendor, or whether you should get best-in-class solutions to address each issue separately.

But they need to be addressed.

Injection Attack Detection

(Injection attack syringe image from Imagen 3)

Having realized that I have never discussed injection attacks on the Bredemarket blog, I decided I should rectify this.

Types of attacks

When considering how identity verification or authentication can be falsified, it’s helpful to see how Veridas defines two different types of falsification:

  1. Presentation Attacks: These involve an attacker presenting falsified evidence directly to the capture device’s camera. Examples include using photocopies, screenshots, or other forms of impersonation to deceive the system.
  2. Injection Attacks: These are more sophisticated, where the attacker introduces false evidence directly into the system without using the camera. This often involves manipulating the data capture or communication channels.

To be honest, most of my personal experience involves presentation attacks, in which the identity verification/authentication system remains secure but the information, um, presented to it is altered in some way. See my posts on Vision Transformer (ViT) Models and NIST IR 8491.

By JamesHarrison – Own work, Public Domain, https://commons.wikimedia.org/w/index.php?curid=4873863.

Injection attacks and the havoc they wreak

In an injection attack, the identity verification/authentication system itself is compromised. For example, instead of taking its data from the camera, data from some other source is, um, injected so that it looks like it came from the camera.

Incidentally, injection attacks differ greatly from scraping attacks, in which content from legitimate blogs is stolen and injected into scummy blogs that merely rip off the original writers. Speaking for myself, this repurposing is clearly not an honorable practice.

Note that injection attacks don’t only affect identity systems, but can affect ANY computer system. SentinelOne digs into the different types of injection attacks, including manipulation of SQL queries, cross-site scripting (XSS), and other types. Here’s an example from the health world that is pertinent to Bredemarket readers:

In May 2024, Advocate Aurora Health, a healthcare system in Wisconsin and Illinois, reported a data breach exposing the personal information of 3 million patients. The breach was attributed to improper use of Meta Pixel on the websites of the provider. After the breach, Advocate Health was faced with hefty fines and legal battles resulting from the exposure of Protected Health Information (PHI).
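
Since SentinelOne’s list starts with manipulated SQL queries, here is the textbook illustration of that flavor of injection (generic, not drawn from the Advocate case): never splice untrusted input into a query string; pass it as a parameter instead.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE patients (name TEXT, ssn TEXT)")
conn.execute("INSERT INTO patients VALUES ('Alice Example', '000-00-0000')")

user_input = "x' OR '1'='1"  # a classic injection payload

# Vulnerable: the payload becomes part of the SQL and matches every row.
vulnerable = conn.execute(
    f"SELECT * FROM patients WHERE name = '{user_input}'"
).fetchall()

# Safer: a parameterized query treats the payload as data, not SQL.
parameterized = conn.execute(
    "SELECT * FROM patients WHERE name = ?", (user_input,)
).fetchall()

print(len(vulnerable), len(parameterized))  # 1 0 -- the injected OR clause matched the whole table
```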

Returning to the identity sphere, Mitek Systems highlights a common injection.

Deepfakes utilize AI and machine learning to create lifelike videos of real people saying or doing things they never actually did. By injecting such videos into a system’s feed, fraudsters can mimic the appearance of a legitimate user, thus bypassing facial recognition security measures.

Again, this differs from someone with a mask getting in front of the system’s camera. Injections bypass the system’s camera.

Fight back, even when David Horowitz isn’t helping you

So how do you detect that you aren’t getting data from the camera or capture device that is supposed to be providing it? Many vendors offer tactics to attack the attackers; here’s what ID R&D (part of Mitek Systems) proposes.

These steps include creating a comprehensive attack tree, implementing detectors that cover all the attack vectors, evaluating potential security loopholes, and setting up a continuous improvement process for the attack tree and associated mitigation measures.
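
As a toy illustration of what an attack tree with detector coverage might look like in code (the vectors and detector names are my own assumptions, not ID R&D’s taxonomy), the continuous-improvement step amounts to repeatedly asking which branches of the tree no detector covers:

```python
# Map each known attack vector to the detectors that cover it.
attack_tree = {
    "presentation": {
        "printed_photo": ["texture_pad"],
        "replayed_video_on_screen": ["moire_pad", "depth_pad"],
        "3d_mask": ["depth_pad"],
    },
    "injection": {
        "virtual_camera_driver": ["device_integrity_check"],
        "tampered_api_payload": ["payload_signature_check"],
        "emulator_or_rooted_device": [],  # no detector yet -- flagged below
    },
}

def uncovered_vectors(tree: dict) -> list[str]:
    """List the attack vectors that no detector currently covers."""
    return [
        f"{branch}/{vector}"
        for branch, vectors in tree.items()
        for vector, detectors in vectors.items()
        if not detectors
    ]

print(uncovered_vectors(attack_tree))  # ['injection/emulator_or_rooted_device']
```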

And as long as I’m on a Mitek kick, here’s Chris Briggs telling Adam Bacia about how injection attacks relate to everything else.

From https://www.youtube.com/watch?v=ZXBHlzqtbdE.

As you can see, the tactics to fight injection attacks are far removed from the more forensic “liveness” procedures, such as detecting whether a presented finger comes from a living, breathing human.

Presentation attack detection can only go so far.

Injection attack detection is also necessary.

So if you’re a company guarding against spoofing, you need someone who can create content, proposals, and analysis that can address both biometric and non-biometric factors.

Learn how Bredemarket can help.

CPA

Not that I’m David Horowitz, but I do what I can. As did David Horowitz’s producer when he was threatened with a gun. (A fake gun.)

From https://www.youtube.com/watch?v=ZXP43jlbH_o.