Since I’m talking about presentation attack detection and injection attack detection a lot lately, I should briefly explain the difference between the two. This is from a Substack post I wrote last June.
Let’s say that you have an app on your smartphone that verifies that you are who you say you are.
Maybe it’s a banking app.
Maybe it’s an app that provides access to a government benefits account.
Maybe it’s an app that lets you enter a football stadium.
As part of its workflow, the app uses the smartphone camera to take a picture of your face.
But is that really YOUR face?
Presentation attack detection
A “presentation attack” occurs when the item presented to the biometric capture device is altered or substituted. In the case of a face presented to a smartphone camera, here are three examples of presentation attacks:
Your face is altered by makeup, a mask, or another disguise.
Your face is replaced by a printed photo of someone else’s face.
Your face is replaced by a digital photo or video on a monitor or screen.
Injection attack detection
But what if the image is NOT from the smartphone camera?
What if it is “injected” from another source, bypassing the camera altogether?
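Here’s a toy Python sketch of why the bypass matters. Everything in it is invented for illustration (the function names, the frame dictionaries); a real capture pipeline is obviously far more involved. The point is that a naive pipeline only sees pixels, so it can’t tell a genuine camera frame from an injected one.

```python
# Hypothetical sketch: a verification pipeline that trusts whatever
# frame source hands it pixels. All names here are illustrative.

def camera_capture():
    """Legitimate path: a frame actually read from the device camera."""
    return {"pixels": "live-selfie", "source": "device-camera"}

def injected_feed():
    """Injection attack: a pre-made frame fed in via malware or a
    virtual camera driver, bypassing the real camera entirely."""
    return {"pixels": "deepfake-video-frame", "source": "virtual-camera"}

def verify(frame):
    # A naive pipeline only inspects the pixels, so the two sources
    # look identical to it. Injection attack detection would also
    # have to vet the provenance of the frame, not just its content.
    return frame["pixels"] is not None

print(verify(camera_capture()), verify(injected_feed()))  # True True
```

Both calls succeed, which is exactly the problem: without provenance checks, the injected frame is indistinguishable from the live one.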
The victim doesn’t care
From the fraud victim perspective, it doesn’t matter whether a presentation attack or an injection attack is used.
The only thing that matters is that some type of deepfake fraud was used to fool the system.
While much of the world continues to play football, American “football” wrapped up this month at the professional level with the “Commercials, Concerts, And a Sports Show”(tm).
During the game, New England Patriots quarterback Drake Maye threw two interceptions, or throws that were caught by players on the opposing team (the Seattle Seahawks).
But what if Maye were throwing iris templates? And what if the defending Seahawks used the intercepted data in injection attacks?
Bet you didn’t think I was going there.
Iris template replay attacks
Face data (protected by companies such as FaceTec and iProov) isn’t the only type of data that can be guarded by injection attack detection. You can inject data from any biometric modality to bypass the capture device.
One type of injection attack is a template replay attack. It works something like this:
For this example assume that I am a legitimate subject and an authorized user, and the biometric workstation captures my iris.
Rather than sending the entire iris image to the server, it converts the image into a template, or a much smaller mathematical representation.
The biometric workstation transmits this template to the server. BUT…
The evil fraudsters use some type of malware to intercept my iris template and save it for future mischief. Unfortunately, unlike a football interception seen by over 100 million people, no one realizes that this iris “interception” happened.
Later, when a fraudster wants to gain access to the biometric system, they perform an injection attack. Rather than capturing the fraudster’s iris at a workstation and sending that template to the server, the fraudster performs a “replay” and “injects” my intercepted iris template into the workflow.
The server receives my iris template, thinks I am accessing the system, and authorizes access.
The fraudster does bad things.
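The replay sequence above can be sketched in a few lines of Python. Everything here is a toy of my own construction, not any vendor’s real API: `make_template` stands in for real template extraction (a hash is not a biometric template), and the `Server` class is deliberately naive.

```python
import hashlib

def make_template(iris_image: bytes) -> str:
    """Stand-in for template extraction: reduce the image to a compact
    representation (here just a hash, purely for illustration)."""
    return hashlib.sha256(iris_image).hexdigest()

class Server:
    def __init__(self, enrolled):
        self.enrolled = enrolled  # user id -> enrolled template

    def authorize(self, user_id, template):
        # Naive server: it matches the template, but has no way to
        # tell a fresh capture from a replayed interception.
        return self.enrolled.get(user_id) == template

# Enrollment: the legitimate user's iris is captured once.
server = Server({"john": make_template(b"john-iris-scan")})

# Legitimate login: the workstation captures the iris, converts it to
# a template, and transmits the template to the server.
legit_template = make_template(b"john-iris-scan")
assert server.authorize("john", legit_template)

# Interception: malware records the template in transit. Unlike a
# football interception, nobody sees this one happen.
stolen = legit_template

# Later replay: the fraudster injects the stolen template directly,
# never touching the iris capture device. The server cannot tell.
print(server.authorize("john", stolen))  # True: the replay is accepted
```

The final line is the whole attack: the server happily authorizes the fraudster because nothing distinguishes the replayed template from a fresh one.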
Iris template replay attack detection
How do you prevent an iris template replay attack?
First you have to detect it. Perhaps the system can detect that the template is not from a current iris capture, or that the template originated somewhere other than an iris workstation.
Once you detect it, you can reject it. Fraudster denied.
Of course this applies to any biometric template: fingerprint, face, whatever.
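One way to sketch such detection (and this is only one possible approach, with invented names; the standard and the testing labs cover many others) is a challenge-response scheme: the server issues a single-use nonce, and the workstation must bind that nonce to the template with a keyed MAC. A replayed message then fails, because its nonce has already been consumed.

```python
import hashlib
import hmac
import secrets

# Hypothetical sketch; the key, class, and function names are illustrative.
DEVICE_KEY = b"shared-secret-provisioned-to-the-workstation"

class ReplayAwareServer:
    def __init__(self, enrolled):
        self.enrolled = enrolled
        self.pending = {}  # user_id -> outstanding challenge nonce

    def issue_challenge(self, user_id):
        nonce = secrets.token_hex(16)
        self.pending[user_id] = nonce
        return nonce

    def authorize(self, user_id, template, nonce, tag):
        expected = hmac.new(DEVICE_KEY, (nonce + template).encode(),
                            hashlib.sha256).hexdigest()
        # Each nonce is single-use: popping it means a second message
        # carrying the same nonce can never be fresh.
        fresh = self.pending.pop(user_id, None) == nonce
        return (fresh and hmac.compare_digest(tag, expected)
                and self.enrolled.get(user_id) == template)

def workstation_capture(user_id, server):
    """Legitimate workstation: capture, templatize, bind to the nonce."""
    nonce = server.issue_challenge(user_id)
    template = hashlib.sha256(b"john-iris-scan").hexdigest()
    tag = hmac.new(DEVICE_KEY, (nonce + template).encode(),
                   hashlib.sha256).hexdigest()
    return template, nonce, tag

server = ReplayAwareServer({"john": hashlib.sha256(b"john-iris-scan").hexdigest()})

# A legitimate capture succeeds once.
t, n, tag = workstation_capture("john", server)
assert server.authorize("john", t, n, tag)

# Replaying the exact intercepted message fails: the nonce was consumed.
print(server.authorize("john", t, n, tag))  # False
```

Fraudster denied, in code form: the intercepted message is detected as stale and rejected.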
Injection attack detection, when implemented, is just another tool embedded in the biometric product.
[Image: Biometric product marketing expert. Look at his eyes.]
There are numerous independent testing laboratories, accredited by various entities, that test a product’s conformance to the requirements of a particular standard.
For presentation attack detection (liveness), organizations such as iBeta and BixeLab test conformance to ISO 30107-3.
Vendors who submit their products to iBeta may choose to have the results published; iBeta publishes these confirmation letters here.
In a similar manner, BixeLab publishes its confirmation letters here.
For injection attack detection, Ingenium tests conformance to CEN/TS 18099:2025, as well as testing that exceeds the requirements of that standard.
Unfortunately, I was unable to locate a central source of all of Ingenium’s testing results. So I had to hunt around.
Known Ingenium Injection Attack Detection Testing Results
Ingenium’s testing is relatively new, as is the whole idea of performing injection attack detection testing in general, so it shouldn’t be surprising that vendors haven’t rushed to get independent confirmation of injection attack capabilities.
But they should.
A brief reminder on Ingenium’s five testing levels
Level 1: CEN Substantial: This tier is equivalent to the CEN TS 18099:2025 ‘substantial’ evaluation level. A Level 1 test requires 25 FTE days and includes a focus on 2 or more IAMs and 10 or more IAI species. It’s a great starting point for assessing your system’s resilience to common injection attacks.
Level 2: CEN High: Exceeding the substantial level, this tier aligns with the CEN TS 18099:2025 ‘high’ evaluation level. This 30-day FTE evaluation expands the scope to include 3 or more IAMs and a higher attack weighting, providing a more rigorous test of your system’s defenses.
Level 3: This level goes beyond the CEN TS 18099:2025 standard to provide an even more robust evaluation. The 35-day FTE program focuses on a higher attack weighting, with a greater emphasis on sophisticated IAMs and IAI species to ensure a more thorough assessment of your system’s resilience.
Level 4: A 40-day FTE evaluation that further exceeds the CEN TS 18099:2025 standard. Level 4 maintains a high attack weighting while specifically targeting the IAI detection capabilities of your system. Although not a formal PAD (Presentation Attack Detection) assessment, this level offers valuable insights into your system’s PAD subsystem resilience.
Level 5: Our most comprehensive offering, this 50-day FTE evaluation goes well beyond the CEN TS 18099:2025 requirements. Level 5 includes the highest level of Ingenium-created IAI species, which are specifically tailored to the unique functionality of your system. This intensive testing provides the deepest insight into your system’s resilience to injection attacks.
Oh, and there’s a video
As I was publicizing my iProov injection attack detection post, I used Grok to create an injection attack detection video. Not for the squeamish, but injection attacks are nasty anyway.
Most identity and biometric marketing leaders know that their products should detect attacks, including injection attacks. But do the products detect attacks? And do prospects know that the products detect attacks? (iProov prospects know. Or should know.)
I’ve mentioned injection attack detection a couple of times on the Bredemarket blog, noting its difference from presentation attack detection. While the latter affects what is shown to the biometric reader, the former bypasses the biometric reader entirely.
But I haven’t mentioned how vendors can secure independent confirmation of their injection attack defenses.
“A new European technical standard, CEN/TS 18099:2025, has been published to address the growing concern of biometric data injection attacks. The standard provides a framework for evaluating the effectiveness of identity verification (IDV) vendors in detecting and mitigating these attacks, filling a critical gap left by existing regulations.”
“CEN, the European Committee for Standardization, is an association that brings together the National Standardization Bodies of 34 European countries.
“CEN provides a platform for the development of European Standards and other technical documents in relation to various kinds of products, materials, services and processes.”
And before you say that them furriner Europeans couldn’t possibly understand the nuances of good ol’ Murican injection attacks, look at all the countries that follow biometric interchange guidance from the American National Standards Institute (ANSI) and the National Institute of Standards and Technology (NIST).
So CEN is good.
But let’s get to THIS standard.
More on CEN/TS 18099:2025
The Biometric Data Injection Attack Detection standard can be found at multiple locations, including the aforementioned ANSI. From the current 2025 version:
“This document provides an overview of:
– Definitions of biometric data injection attacks;
– Use cases for injection attacks with biometric data on essential hardware components of biometric systems used for enrollment and verification;
– Tools for injection attacks on systems using one or more biometric modalities.
This document provides guidance for:
– Injection Attack Instrument Detection System (defined in 3.12);
– adequate risk mitigation for injection attack tools;
– Creation of a test plan for the evaluation of an injection attack detection system (defined in 3.9).”
Like (most) good standards, this one isn’t free; the current Murican price is $99.
You can see how this parallels the existing standard for presentation attack detection testing.
Which brings us to iProov…and Ingenium
iProov is a company in the United Kingdom. This post does not address whether the United Kingdom is part of Europe; I assigned that thankless task to Bredebot. But iProov does pay attention to European standards, according to this statement:
“[iProov] announced that its Dynamic Liveness technology is the first and only solution to successfully achieve an Ingenium Level 4 evaluation and the CEN/TS 18099 High technical specification for Injection Attack Detection, following an independent evaluation by the ISO/IEC 17025-accredited, Ingenium Biometric Laboratories. Ingenium Level 4 builds on the requirements outlined in CEN/TS 18099, providing an increased level of assurance with an extended period of active testing and inclusion of complex, highly-weighted attack types.”
Ingenium’s injection attack detection testing is arranged in five levels/tiers. The first two correspond to the “substantial” and “high” evaluation levels in CEN/TS 18099:2025. The final three levels exceed the standard.
Level 4:
“Level 4: A 40-day FTE evaluation that further exceeds the CEN TS 18099:2025 standard. Level 4 maintains a high attack weighting while specifically targeting the IAI detection capabilities of your system. Although not a formal PAD (Presentation Attack Detection) assessment, this level offers valuable insights into your system’s PAD subsystem resilience.”
Because while they are technically different, injection attack detection and presentation attack detection are intertwined.
Does your product detect attacks?
And if you adopt a customer focus, the customer doesn’t really care about the TYPE of attack. The customer ONLY cares about the attack itself, and whether or not the vendor detected and prevented it.
Identity/biometric marketing leaders, does your product offer independent confirmation of its attack detection capabilities? If not, do you publicize your own self-assertion of detection?
Because if you DON’T explicitly address attack detection, your prospects are forced to assume that you can’t detect attacks at all. And your prospects will avoid you as dangerous and gravitate to vendors who DO assert attack detection in some way.
And you will lose money.
Regardless of whether you are in the United States, United Kingdom, or the European continent…losing money is not good.
So don’t lose money. Tell your prospects about your attack detection. Or have Bredemarket help you tell them. Talk to me.
[Image: Biometric product marketing expert. This is NOT in the United Kingdom.]
“A deepfake is an artificial image or video (a series of images) generated by a special kind of machine learning called “deep” learning (hence the name).”
UVA then launches into a technical explanation.
“Deep learning is a special kind of machine learning that involves “hidden layers.” Typically, deep learning is executed by a special class of algorithm called a neural network….A hidden layer is a series of nodes within the network that performs mathematical transformations to convert input signals to output signals (in the case of deepfakes, to convert real images to really good fake images). The more hidden layers a neural network has, the “deeper” the network is.”
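As a toy illustration of those hidden layers (mine, not UVA’s), here is a forward pass through a small untrained network. The weights are random, so it generates nothing useful; the point is simply that each hidden layer applies a mathematical transformation to the previous layer’s output, and stacking more of them makes the network “deeper.”

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, n_out):
    """One layer: a matrix multiply (the mathematical transformation)
    followed by a nonlinearity, converting input signals to output signals."""
    w = rng.normal(size=(x.shape[-1], n_out))
    return np.tanh(x @ w)

x = rng.normal(size=(1, 64))   # stand-in for flattened image pixels
h1 = layer(x, 32)              # hidden layer 1
h2 = layer(h1, 16)             # hidden layer 2: the network gets "deeper"
out = layer(h2, 8)             # output layer

print(out.shape)  # (1, 8)
```

A deepfake generator stacks many such layers (with trained, not random, weights) to convert real images into convincing fake ones.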
Why are shallowfakes shallow?
So if you don’t use a multi-layer neural network to create your fake, then it is by definition shallow, although you will most likely need cumbersome manual methods to create it.
In an injection attack, no fakery of the image is necessary. You can inject a real image of the person.
And in a presentation attack (the kind countered by active or passive liveness detection), you can dispense with the neural network and just use old-fashioned makeup.
[Image: makeup example. From NIST.]
Or a mask.
[Image: mask example. Imagen 4.]
It’s all semantics
In truth, we commonly refer to all face, voice, and finger fakes as “deep” fakes even when they don’t originate in a neural network.
But if someone wants to refer to shallowfakes, it’s OK with me.