Injection Attack Detection, CEN/TS 18099:2025, and iProov

Most identity and biometric marketing leaders know that their products should detect attacks, including injection attacks. But do the products detect attacks? And do prospects know that the products detect attacks? (iProov prospects know. Or should know.)

I’ve mentioned injection attack detection a couple of times on the Bredemarket blog, noting its difference from presentation attack detection. A presentation attack manipulates what is shown to the biometric reader, while an injection attack bypasses the biometric reader entirely.

But I haven’t mentioned how vendors can secure independent confirmation of their injection attack defenses.

European Committee for Standardization (CEN)

Here’s part of what ID Tech Wire said a year ago.

“A new European technical standard, CEN/TS 18099:2025, has been published to address the growing concern of biometric data injection attacks. The standard provides a framework for evaluating the effectiveness of identity verification (IDV) vendors in detecting and mitigating these attacks, filling a critical gap left by existing regulations.”

Being a baseball, hot dogs, apple pie guy, I had never heard of CEN. Now I have.

“CEN, the European Committee for Standardization, is an association that brings together the National Standardization Bodies of 34 European countries.

“CEN provides a platform for the development of European Standards and other technical documents in relation to various kinds of products, materials, services and processes.”

And before you say that them furriner Europeans couldn’t possibly understand the nuances of good ol’ Murican injection attacks, look at all the countries that follow biometric interchange guidance from the American National Standards Institute (ANSI) and the National Institute of Standards and Technology (NIST).

So CEN is good.

But let’s get to THIS standard.

More on CEN/TS 18099:2025

The Biometric Data Injection Attack Detection standard can be found at multiple locations, including the aforementioned ANSI. From the current 2025 version:

“This document provides an overview of: 

– Definitions of biometric data injection attacks; 

– Use cases for injection attacks with biometric data on essential hardware components of biometric systems used for enrollment and verification; 

– Tools for injection attacks on systems using one or more biometric modalities. 

This document provides guidance for: 

– Injection Attack Instrument Detection System (defined in 3.12); 

– adequate risk mitigation for injection attack tools; 

– Creation of a test plan for the evaluation of an injection attack detection system (defined in 3.9).”

Like (most) good standards, this one isn’t free. The current Murican price is $99.

You can see how this parallels the existing standard for presentation attack detection testing.

Which brings us to iProov…and Ingenium

iProov is a company in the United Kingdom. This post does not address whether the United Kingdom is part of Europe; I assigned that thankless task to Bredebot. But iProov does pay attention to European standards, according to this statement:

“[iProov] announced that its Dynamic Liveness technology is the first and only solution to successfully achieve an Ingenium Level 4 evaluation and the CEN/TS 18099 High technical specification for Injection Attack Detection, following an independent evaluation by the ISO/IEC 17025-accredited, Ingenium Biometric Laboratories. Ingenium Level 4 builds on the requirements outlined in CEN/TS 18099, providing an increased level of assurance with an extended period of active testing and inclusion of complex, highly-weighted attack types.”

Ingenium’s injection attack detection testing is arranged in five levels/tiers. The first two correspond to the “substantial” and “high” evaluation levels in CEN/TS 18099:2025. The final three levels exceed the standard.

Level 4:

“Level 4: A 40-day FTE evaluation that further exceeds the CEN TS 18099:2025 standard. Level 4 maintains a high attack weighting while specifically targeting the IAI detection capabilities of your system. Although not a formal PAD (Presentation Attack Detection) assessment, this level offers valuable insights into your system’s PAD subsystem resilience.”

Because while they are technically different, injection attack detection and presentation attack detection are intertwined. 

Does your product detect attacks?

And if you adopt a customer focus, the customer doesn’t really care about the TYPE of attack. The customer ONLY cares about the attack itself, and whether or not the vendor detected and prevented it.

Identity/biometric marketing leaders, does your product offer independent confirmation of its attack detection capabilities? If not, do you publicize your own self-assertion of detection?

Because if you DON’T explicitly address attack detection, your prospects are forced to assume that you can’t detect attacks at all. And your prospects will avoid you as dangerous and gravitate to vendors who DO assert attack detection in some way.

And you will lose money.

Regardless of whether you are in the United States, United Kingdom, or the European continent…losing money is not good.

So don’t lose money. Tell your prospects about your attack detection. Or have Bredemarket help you tell them. Talk to me.

Biometric product marketing expert. This is NOT in the United Kingdom.

Postscript: Non-iProov injection attack detection here.

Yoti iBeta Confirmation of Presentation Attack Detection Level 3

We’ve talked about Levels 1 and 2 of iBeta’s confirmation that particular biometric implementations meet the requirements of ISO 30107-3. But now with Yoti’s confirmation, we can talk about iBeta Level 3.

From iBeta:

“The test method was to apply 1 bona fide subject presentation that alternated with 3 artefact presentations such that the presentation of each species consisted of 150 Presentation Attacks (PAs) and 50 bona fide presentations, or until 56 hours had passed per species. The results were displayed for the tester on the device as “Liveness check: Passed” for a successful attempt or “Liveness check: Failed” for an unsuccessful attempt.

“iBeta was not able to gain a liveness classification with the presentation attacks (PAs) on the Apple iPhone 16 Pro. With 150 PAs for each of 3 species, the total number of attacks was 450, and the overall Attack Presentation Classification Error Rate (APCER) was 0%. The Bona Fide Presentation Classification Error Rate (BPCER) was also calculated and may be found in the final report.

“Yoti Limited’s myface12122025 application and supporting backend components were tested by iBeta to the ISO 30107-3 Biometric Presentation Attack Detection Standard and found to be in compliance with Level 3.”
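As a rough illustration of the arithmetic in the iBeta quote, here is a minimal sketch of the APCER and BPCER formulas from ISO/IEC 30107-3. The function names and the way the results are encoded are my own framing, not iBeta’s:

```python
# Sketch of the APCER/BPCER arithmetic described in ISO/IEC 30107-3.
# The counts (150 attacks x 3 species = 450, 50 bona fide presentations)
# come from the iBeta quote above; everything else is illustrative.

def apcer(attack_results):
    """Attack Presentation Classification Error Rate: the fraction of
    attack presentations wrongly classified as bona fide (i.e., the
    attack passed the liveness check)."""
    return sum(1 for passed in attack_results if passed) / len(attack_results)

def bpcer(bona_fide_results):
    """Bona Fide Presentation Classification Error Rate: the fraction of
    bona fide presentations wrongly rejected by the liveness check."""
    return sum(1 for passed in bona_fide_results if not passed) / len(bona_fide_results)

# In the reported test, none of the 450 attacks passed the liveness
# check, so the APCER works out to 0%.
attacks = [False] * 450   # False = "Liveness check: Failed" for the attack
print(apcer(attacks))     # 0.0
```

In other words, a 0% APCER over 450 attempts means every single presentation attack was caught.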

More from Yoti itself.

“Yoti’s MyFace is the first passive, single-selfie liveness technology in the world to conform to iBeta’s Level 3 testing under ISO/IEC 30107-3 – their highest level for liveness checks.”

Also see Biometric Update and UK Tech.

After all, facial age estimation means nothing if the face is fake. So it was important for Yoti to receive this confirmation.

On Acquired Identities

Most of my discussions regarding identity assume the REAL identity of a person.

But what if someone acquires the identity of another? For example, when the late Steve Bridges impersonated George W. Bush?

White House photo by Kimberlee Hewitt – whitehouse.gov, President George W. Bush and comedian Steve Bridges, Public Domain, https://commons.wikimedia.org/w/index.php?curid=3052515

Or better still, what about when multiple people adopt an identity?

Google Gemini.

And by the way, legend has it that Charlie Chaplin once entered a Charlie Chaplin lookalike contest…and came in third. (Chaplin reportedly denied it ever happened.)

Grok.

Of course, these assumed identities require alterations that liveness detection should detect.

As a biometric product marketing expert should know.

Landscape.

Deep Deepfakes vs. Shallow Shallowfakes

We toss words around until they lose all meaning, like the name of Jello Biafra’s most famous band. (IYKYK.)

So why are deepfakes deep?

And does the existence of deepfakes necessarily mean that shallowfakes exist?

Why are deepfakes deep?

The University of Virginia Information Security explains how deepfakes are created, which also explains why they’re called that.

“A deepfake is an artificial image or video (a series of images) generated by a special kind of machine learning called “deep” learning (hence the name).”

UVA then launches into a technical explanation.

“Deep learning is a special kind of machine learning that involves “hidden layers.” Typically, deep learning is executed by a special class of algorithm called a neural network….A hidden layer is a series of nodes within the network that performs mathematical transformations to convert input signals to output signals (in the case of deepfakes, to convert real images to really good fake images). The more hidden layers a neural network has, the “deeper” the network is.”
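To make the UVA quote concrete, here is a minimal sketch of what “hidden layers” means: each layer applies a mathematical transformation to the previous layer’s output, and stacking more of them makes the network “deeper.” This is a toy forward pass for illustration only, not a deepfake generator:

```python
# Toy illustration of hidden layers: each layer converts input signals
# to output signals via a linear transform plus a nonlinearity.
import numpy as np

rng = np.random.default_rng(0)

def layer(x, n_out):
    """One layer: random weights and bias, then a ReLU nonlinearity.
    (Real networks LEARN these weights; here they are random.)"""
    w = rng.standard_normal((x.shape[-1], n_out))
    b = rng.standard_normal(n_out)
    return np.maximum(0.0, x @ w + b)

x = rng.standard_normal(8)   # input signal (e.g., image features)
h1 = layer(x, 16)            # hidden layer 1
h2 = layer(h1, 16)           # hidden layer 2 -- a "deeper" network
out = layer(h2, 4)           # output signal
print(out.shape)             # (4,)
```

The more of those hidden layers you stack between input and output, the deeper the network, hence “deep” learning and “deep” fakes.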

Why are shallowfakes shallow?

So if you don’t use a multi-layer neural network to create your fake, then it is by definition shallow, although you will most likely need cumbersome manual methods to create it.

  • For presentation attack detection (liveness detection, either active or passive), you can dispense with the neural network and just use old-fashioned makeup.
From NIST.

Or a mask.

Imagen 4.

It’s all semantics

In truth, we commonly refer to all face, voice, and finger fakes as “deep” fakes even when they don’t originate in a neural network.

But if someone wants to refer to shallowfakes, it’s OK with me.

Presentation Attack Injection, Injection Attack Detection, and Deepfakes on LinkedIn and Substack

Just letting my Bredemarket blog readers know of two items I wrote on other platforms.

  • “Presentation Attack Injection, Injection Attack Detection, and Deepfakes.” This LinkedIn article, part of The Wildebeest Speaks newsletter series, is directed toward people who already have some familiarity with deepfake attacks.
  • “Presentation Attack Injection, Injection Attack Detection, and Deepfakes (version 2).” This Substack post does NOT assume any deepfake attack background.

“Somewhat You Why,” and Whether Deepfakes are Evil or Good or Both

I debated whether or not I should publish this because it touches upon two controversial topics: U.S. politics, and my proposed sixth factor of authentication.

I eventually decided to share it on the Bredemarket blog but NOT link to it or quote it on my socials.

Although I could change my mind later.

Are deepfakes bad?

When I first discussed deepfakes in June 2023, I detailed two deepfake applications.

One deepfake was an audio-video creation purportedly showing Richard Nixon paying homage to the Apollo 11 astronauts who were stranded on the surface of the moon.

  • Of course, no Apollo 11 astronauts were ever stranded on the surface of the moon; Neil Armstrong and Buzz Aldrin returned to Earth safely.
  • So Nixon never had to pay homage to them, although William Safire wrote a speech as a contingency.
  • This deepfake is not in itself bad, unless it is taught in a history course as true history about “the failure of the U.S. moon program.” (The Apollo program had a fatal catastrophe, but not involving Apollo 11.)

The other deepfake was more sinister.

“In early 2020, a branch manager of a Japanese company in Hong Kong received a call from a man whose voice he recognized—the director of his parent business. The director had good news: the company was about to make an acquisition, so he needed to authorize some transfers to the tune of $35 million….The manager, believing everything appeared legitimate, began making the transfers.”

Except that the director wasn’t the director, and the company had just been swindled to the tune of $35 million.

I think everyone knows now that deepfakes can be used for bad things. So we establish standards to determine “content provenance and authenticity,” which is a fancy way to say whether content is real or a deepfake.

In addition to establishing standards, we do a lot of research to counter deepfakes, because they are bad.

Or are they?

What the National Science Foundation won’t do

Multiple sources, including both Nextgov and Biometric Update, are reporting on the cancellation of approximately 430 grants from the National Science Foundation. Among these grants are ones for deepfake research.

“Around 430 federally-funded research grants covering topics like deepfake detection, artificial intelligence advancement and the empowerment of marginalized groups in scientific fields were among several projects terminated in recent days following a major realignment in research priorities at the National Science Foundation.”

As you can probably guess, the cancellation of these grants is driven by the Trump Administration and the so-called Department of Government Efficiency (DOGE).

Why?

Because freedom:

“Per the Presidential Action announced January 20, 2025, NSF will not prioritize research proposals that engage in or facilitate any conduct that would unconstitutionally abridge the free speech of any American citizen. NSF will not support research with the goal of combating “misinformation,” “disinformation,” and “malinformation” that could be used to infringe on the constitutionally protected speech rights of American citizens across the United States in a manner that advances a preferred narrative about significant matters of public debate.”

The NSF argues that a person’s First Amendment rights permit them, I mean permit him, to share content without having the government prevent its dissemination by tagging it as misinformation, disinformation, or malinformation.

And it’s not the responsibility of the U.S. Government to fund research into so-called misinformation content. Hence the end of funding for deepfake research.

So deepfakes are good because they’re protected by the First Amendment.

But wait a minute…

Just because the U.S. Government doesn’t like it when patriotic citizens are censored from distributing deepfake videos for political purposes, that doesn’t necessarily mean that the U.S. Government objects to ALL deepfakes.

For example, let’s say that a Palm Beach, Florida golf course receives a video message from Tiger Woods reserving a tee time and paying a lot of money to reserve it. The golf course doesn’t allow anyone else to book the tee time and waits for Tiger’s wire transfer to clear. After the fact, the golf course discovers that (a) the money was wired from a non-existent account, and (b) the person making the video call was not Tiger Woods, but a faked version of him.

I don’t think anyone in the U.S. Government or DOGE thinks that ripping off a Palm Beach, Florida golf course is a legitimate use of First Amendment free speech rights.

So deepfakes are bad because they lead to banking fraud and other forms of fraud.

This is not unique to deepfakes, but is also true of many other technologies. Nuclear technology can provide energy to homes, or it can kill people. Facial recognition (of real people) can find missing and abducted persons, or it can send Chinese Muslims to re-education camps.

Let’s go back to factors of authentication and liveness detection

Now let’s say that Tiger Woods’ face shows up on YOUR screen. You can use liveness detection and other technologies to determine whether it is truly Tiger Woods, and take action accordingly.

  • If the interaction with Woods is trivial, you may NOT want to spend time and resources to perform a robust authentication.
  • If the interaction with Woods is critical, you WILL want to perform a robust authentication.

It all boils down to something that I’ve previously called “somewhat you why.”

Why is Tiger Woods speaking?

  • If Tiger Woods is performing First Amendment-protected activity such as political talk, then “somewhat you why” asserts that whether this is REALLY Woods or not doesn’t matter.
  • If Tiger Woods is making a financial transaction with a Palm Beach, Florida golf course, then “somewhat you why” asserts that you MUST determine if this is really Woods.
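The “somewhat you why” logic above can be sketched as a risk-based policy: the WHY of the interaction determines how much authentication effort you spend. This is a hypothetical illustration; the interaction categories and check names are my own, not any standard’s:

```python
# Hypothetical sketch of "somewhat you why": the reason for the
# interaction drives the required authentication rigor.

def required_checks(interaction: str) -> list[str]:
    """Map an interaction type to the checks it warrants."""
    if interaction == "political_speech":
        # Protected expression: whether this is REALLY Woods may not matter.
        return []
    if interaction == "financial_transaction":
        # High stakes: verify the person, and verify the person is live.
        return ["document_verification", "face_match", "liveness_detection"]
    # Everything else: something lightweight.
    return ["liveness_detection"]

print(required_checks("financial_transaction"))
```

Trivial interactions get little or no checking; critical interactions get the full robust treatment.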

It’s simple…right?

What about your deepfake solution?

Regardless of federal funding, companies are still going to offer deepfake detection products. Perhaps yours is one of them.

How will you market that product?

Do you have the resources to market your product, or are your resources already stretched thin?

If you need help with your facial recognition product marketing, Bredemarket has an opening for a facial recognition client. I can offer:

  • compelling content creation
  • winning proposal development
  • actionable analysis

If Bredemarket can help your stretched staff, book a free meeting with me: https://bredemarket.com/cpa/

(Lincoln’s laptop from Imagen 3)