Additional Ingenium Injection Attack Detection Testing…Results

There are numerous independent testing laboratories, holding testing certifications from various entities, that test a product’s conformance to the requirements of a particular standard.

For presentation attack detection (liveness), organizations such as iBeta and BixeLab test conformance to ISO 30107-3.

  • Vendors who submit their products to iBeta may optionally choose to have the results published; iBeta publishes these confirmation letters here.
  • In a similar manner, BixeLab publishes its confirmation letters here.

For injection attack detection, Ingenium tests conformance to CEN/TS 18099:2025, as well as testing that exceeds the requirements of that standard.

Unfortunately, I was unable to locate a central source of all of Ingenium’s testing results. So I had to hunt around.

Known Ingenium Injection Attack Detection Testing Results

Biometric Vendor | Ingenium Injection Attack Detection Test Level | Notes
FaceTec | 2 | Ingenium letter on FaceTec website
iProov | 4 | Bredemarket blog post “Injection Attack Detection, CEN/TS 18099:2025, and iProov”

And…that’s all I could find.

Ingenium’s testing is relatively new, as is the whole idea of performing injection attack detection testing in general, so it shouldn’t be surprising that vendors haven’t rushed to get independent confirmation of their injection attack detection capabilities.

But they should.

A brief reminder on Ingenium’s five testing levels

I’ve mentioned this before, but it’s worth exploring in more detail, since I previously only discussed Level 4. Here’s a complete list of all five of Ingenium’s testing evaluation tiers (summarized in a brief sketch after the list):

  • Level 1: CEN Substantial: This tier is equivalent to the CEN/TS 18099:2025 ‘substantial’ evaluation level. A Level 1 test requires 25 FTE days and includes a focus on 2 or more injection attack methods (IAMs) and 10 or more injection attack instrument (IAI) species. It’s a great starting point for assessing your system’s resilience to common injection attacks.
  • Level 2: CEN High: Exceeding the substantial level, this tier aligns with the CEN/TS 18099:2025 ‘high’ evaluation level. This 30-FTE-day evaluation expands the scope to include 3 or more IAMs and a higher attack weighting, providing a more rigorous test of your system’s defenses.
  • Level 3: This level goes beyond the CEN/TS 18099:2025 standard to provide an even more robust evaluation. The 35-FTE-day program focuses on a higher attack weighting, with a greater emphasis on sophisticated IAMs and IAI species to ensure a more thorough assessment of your system’s resilience.
  • Level 4: A 40-FTE-day evaluation that further exceeds the CEN/TS 18099:2025 standard. Level 4 maintains a high attack weighting while specifically targeting the IAI detection capabilities of your system. Although not a formal PAD (presentation attack detection) assessment, this level offers valuable insights into your system’s PAD subsystem resilience.
  • Level 5: Ingenium’s most comprehensive offering, this 50-FTE-day evaluation goes well beyond the CEN/TS 18099:2025 requirements. Level 5 includes the highest level of Ingenium-created IAI species, which are specifically tailored to the unique functionality of your system. This intensive testing provides the deepest insight into your system’s resilience to injection attacks.
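To make the five tiers easier to compare, here is the same information expressed as a small data structure. This is purely my own summary of the list above (the field names are mine, and a None means the published description doesn’t state a number); it is not anything Ingenium itself publishes.

```python
# My own summary of Ingenium's five published tiers, expressed as a small
# Python data structure so the parameters can be compared side by side.
# Field names are mine; None means the tier description above does not
# state a specific number.
from dataclasses import dataclass
from typing import Optional

@dataclass
class IngeniumLevel:
    level: int
    cen_ts_18099_alignment: str      # how the tier relates to CEN/TS 18099:2025
    fte_days: int                    # evaluation effort in FTE days
    min_iams: Optional[int]          # minimum injection attack methods, if stated
    min_iai_species: Optional[int]   # minimum IAI species, if stated

LEVELS = [
    IngeniumLevel(1, "equivalent to 'substantial'", 25, 2, 10),
    IngeniumLevel(2, "equivalent to 'high'", 30, 3, None),
    IngeniumLevel(3, "exceeds the standard", 35, None, None),
    IngeniumLevel(4, "exceeds the standard; targets IAI detection", 40, None, None),
    IngeniumLevel(5, "well beyond the standard; bespoke IAI species", 50, None, None),
]

for lv in LEVELS:
    print(f"Level {lv.level}: {lv.fte_days} FTE days, {lv.cen_ts_18099_alignment}")
```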

Oh, and there’s a video

As I was publicizing my iProov injection attack detection post, I used Grok to create an injection attack detection video. Not for the squeamish, but injection attacks are nasty anyway.

Grok.

Let’s Talk About Occluded Face Expression Reconstruction

ORFE, OAFR, ORecFR, OFER. Let’s go!

As you may know, I’ve often used Grok to convert static images to 6-second videos. But I’ve never tried to do this with an occluded face, because I feared the attempt would fail. Grok isn’t perfect, after all.

Facia’s 2024 definition of occlusion is “an extraneous object that hinders the view of a face, for example, a beard, a scarf, sunglasses, or a mustache covering lips.” Facia also mentions the COVID practice of wearing masks.

Occlusion limits the data available to facial recognition algorithms, which adversely affects accuracy. According to Facia, at the time “lower chin and mouth occlusions caused an inaccuracy rate increase of 8.2%.” Occlusion of the eyes naturally caused greater inaccuracies.

So how do we account for occlusions? Facia offers three tactics:

  • Occlusion Robust Feature Extraction (ORFE)
  • Occlusion Aware Facial Recognition (OAFR)
  • Occlusion Recovery-Based Facial Recognition (ORecFR)

But those acronyms aren’t enough, so we’ll add one more.

At the 2025 Computer Vision and Pattern Recognition conference, a group of researchers led by Pratheba Selvaraju presented a paper entitled “OFER: Occluded Face Expression Reconstruction.” This gives us one more acronym to play around with.

Here’s the abstract of the paper:

Reconstructing 3D face models from a single image is an inherently ill-posed problem, which becomes even more challenging in the presence of occlusions. In addition to fewer available observations, occlusions introduce an extra source of ambiguity where multiple reconstructions can be equally valid. Despite the ubiquity of the problem, very few methods address its multi-hypothesis nature. In this paper we introduce OFER, a novel approach for single-image 3D face reconstruction that can generate plausible, diverse, and expressive 3D faces, even under strong occlusions. Specifically, we train two diffusion models to generate the shape and expression coefficients of a face parametric model, conditioned on the input image. This approach captures the multi-modal nature of the problem, generating a distribution of solutions as output. However, to maintain consistency across diverse expressions, the challenge is to select the best matching shape. To achieve this, we propose a novel ranking mechanism that sorts the outputs of the shape diffusion network based on predicted shape accuracy scores. We evaluate our method using standard benchmarks and introduce CO-545, a new protocol and dataset designed to assess the accuracy of expressive faces under occlusion. Our results show improved performance over occlusion-based methods, while also enabling the generation of diverse expressions for a given image.

Cool. I was just writing about multimodal biometrics for a client project, but the paper uses “multi-modal” in a different sense altogether.

In my non-advanced brain, the process of creating multiple options and choosing the one with the “best” fit (however that is defined) seems promising.
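Here’s a minimal sketch of that generate-then-rank idea, in the spirit of what the OFER abstract describes. All of the names are hypothetical and the scoring is a dummy stand-in; this is my illustration of the concept, not the authors’ code.

```python
import numpy as np

# Minimal sketch of "generate many hypotheses, then rank them."
# All names are hypothetical stand-ins; this illustrates the concept
# described in the OFER abstract, not the authors' implementation.

def sample_candidates(rng, num_candidates=16, dim=100):
    """Stand-in for a diffusion model sampling candidate face-model
    coefficients conditioned on an (occluded) input image."""
    return rng.normal(size=(num_candidates, dim))

def score_candidates(candidates):
    """Stand-in for a ranking network that predicts an accuracy score for
    each candidate shape. Here it's a dummy score (negative norm) purely so
    the sketch runs end to end."""
    return -np.linalg.norm(candidates, axis=1)

rng = np.random.default_rng(0)
candidates = sample_candidates(rng)    # many plausible reconstructions
scores = score_candidates(candidates)  # predicted quality of each one
best = candidates[np.argmax(scores)]   # keep the best-ranked hypothesis
print(f"Selected candidate with score {scores.max():.3f}")
```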

Grok didn’t do too badly with this one, though. Not perfect, but pretty good.

Grok.

Singer/songwriters…and Deepfakes

I was just talking about singers, songwriters, and one singer who pretended to be a songwriter.

Of course, some musicians can be both.

Willie Nelson has written songs for others, sung songs written by others, and sung his own songs.

But despite the Grok deepfake I shared last October, Willie is not known as a rapper.

This is fake. Grok.

So Much For Fake IDs

So someone used generative AI to create a “European Union – United Kingdom” identity card. And if that itself wasn’t a clear enough indication of fakery, they included a watermark saying it was generated.

So I tried something similar.

But Google Gemini blocked my attempt.

“I cannot create images of identification documents, including driver’s licenses, or include text that identifies the image as fake. I am also unable to generate images that depict an impossible or future date of birth, as requested.”

As did Grok.

“I’m sorry, but I can’t create or generate any image that replicates or imitates an official government-issued ID (even with “FAKE” written on it). This includes California REAL ID driver’s licenses or any other state/federal identification document.”

So I had to make it a little less real.

A lot less real.

Google Gemini.

Grok’s Not-so-deepfake Willie Nelson, Rapper

While the deepfake video generators that fraudsters use can be persuasive, the 6-second videos created by the free version of Grok haven’t reached that level of fakery. Yet.

In my experience, Grok is better at re-creating well-known people with more distinctive appearances. Good at Gene Simmons and Taylor Swift. Bad at Ace Frehley and Gerald Ford.

So I present…Willie Nelson. 

Grok.

Willie with two turntables and a microphone, and one of his buds watching.

  • If you thought “Stardust” was odd for him, listen to this. 
  • Once Grok created the video, I customized it to have Willie rap about bud. 
  • Unfortunately, or perhaps fortunately, it doesn’t sound like the real Willie.

And for the, um, record, Nelson appeared in Snoop’s “My Medicine” video.

As an added bonus, here’s Grok’s version of Cher, without audio customization. It doesn’t make me believe…

Grok.

Reminder to marketing leaders: if you need Bredemarket’s content, proposal, or analysis help, book a meeting at https://bredemarket.com/mark/

Grok, Celebrities, and Music

As some of you know, my generative AI tool of choice has been Google Gemini, which incorporates guardrails against portraying celebrities. Grok has fewer guardrails.

My main purpose in creating the two Bill and Hillary Clinton videos (at the beginning of this compilation reel) was to see how Grok would handle references to copyrighted music. I didn’t expect to hear actual songs, but would Grok try to approximate the sounds of Lindsey-Stevie-Christine era Mac and the Sex Pistols? You be the judge.

And as for Prince and Johnny…you be the judge of that also.

AI created by Grok.
AI created by Grok.

Using Grok For Evil: Deepfake Celebrity Endorsement

Using Grok for evil: a deepfake celebrity endorsement of Bredemarket?

Although in the video the fake Taylor Swift ends up looking a little like a fake Drew Barrymore.

Needless to say, I’m taking great care to fully disclose that this is a deepfake.

But some people don’t.

Removing the Guardrails: President Taylor Swift, Courtesy Grok

Most of my recent generative AI experiments have centered on Google Gemini…which has its limitations:

“Google Gemini imposes severe restrictions against creating pictures of famous figures. You can’t create a picture of President Taylor Swift, for example.”

Why does Google impose such limits? Because it is very sensitive about misleading the public, fearful that the average person would see such a picture and mistakenly assume that Taylor Swift IS the President. In our litigious society, perhaps this is valid.

But we know that other generative AI services don’t have such restrictions.

“One common accusation about Grok is that it lacks the guardrails that other AI services have.”

During a few spare moments this morning, I signed up for a Bredemarket Grok account. I have a personal X (Twitter) account, but haven’t used it in a long time, so this was a fresh sign-up.

And you know the first thing that I tried to do.


Grok.

Grok created it with no problem. Actually, there is a problem: Grok’s image generator apparently cannot render text precisely. But hey, no one will notice “TWIRSHIITE BOUSE,” will they?

But wait, there’s more! After I generated the image, I saw a button to generate a video. I thought that this required the paid service, but apparently the free service allows limited video generation.

Grok.

I may be conducting some video experiments some time soon. But will I maintain my ethics…and my sanity?

Conceptualization of the Planet Bredemarket and Its Rings

Inspired by the Constant Contact session I attended at the Small Business Expo, I wanted to conceptualize the Bredemarket online presence, and decided to adopt a “planet with rings” model.

Think of Bredemarket as a planet. Like Saturn, Uranus, Neptune, and Jupiter, the planet Bredemarket is surrounded by rings.

Google Gemini.

The closest ring to the planet is the Bredemarket mailing list (MailChimp).

The next closest ring is the Bredemarket website (WordPress).

Moving outward, we find the following rings:

  • Search engines and generative AI tools, including Bing, ChatGPT, Google, Grok, Perplexity, and others.
  • The Bredemarket Facebook page and associated groups.
  • The Bredemarket LinkedIn page and associated showcase pages.
  • A variety of social platforms, including Bluesky, Instagram, Substack, and Threads.
  • Additional social platforms, including TikTok, WhatsApp, and YouTube.

While this conceptualization is really only useful to me, I thought a few of you may be interested in some of the “inner rings.”

And if you’re wondering why your favorite way cool platform is banished to the outer edges…well, that’s because it doesn’t make Bredemarket any money. I’ve got a business to run here, and TikTok doesn’t help me pay the bills…

Veo 3 and Deepfakes

(Not a video, but a still image from Imagen 4)

My Google Gemini account does not include access to Google’s new video generation tool Veo 3. But I’m learning about its capabilities from sources such as TIME magazine.

Which claims to be worried.

“TIME was able to use Veo 3 to create realistic videos, including a Pakistani crowd setting fire to a Hindu temple; Chinese researchers handling a bat in a wet lab; an election worker shredding ballots; and Palestinians gratefully accepting U.S. aid in Gaza. While each of these videos contained some noticeable inaccuracies, several experts told TIME that if shared on social media with a misleading caption in the heat of a breaking news event, these videos could conceivably fuel social unrest or violence.”

However, TIME notes that the ability to create fake videos has existed for years. So why worry now?

“Veo 3 videos can include dialogue, soundtracks and sound effects. They largely follow the rules of physics, and lack the telltale flaws of past AI-generated imagery.”

Some of this could be sensationalism. After all, simple text can communicate misinformation.

And you can use common sense to detect deepfakes…sometimes.

Mom’s spaghetti 

Then again, some of the Veo 3 deepfakes look pretty good. Take this example of Will Smith slapping down some pasta at Eminem’s restaurant. The first part of the short was generated with old technology, the last part with Veo 3.

Now I am certain that Google will attempt to put guardrails on Veo 3, as it has attempted to do with other products.

But what will happen if a guardrail-lacking Grok video generator is released?

Or if someone creates a non-SaaS video generator that a user can run on their own with all guardrails disabled?

Increase the impact of your deepfake detection technology

In that case, deepfake detection technology will become even more critical.

Does your firm offer deepfake detection technology?

Do you want your prospects to know how your technology benefits them?

Here’s how Bredemarket can help you help your prospects: https://bredemarket.com/cpa/