Know Your…Passenger

(Part of the biometric product marketing expert series)

OK, here’s another “KYx” acronym, courtesy of Facephi…Know Your Passenger.

And this one is critical, and has been since…well, about September 11, 2001.

I saw Steve Craig’s reshare of the Facephi press release, which includes the following:

Currently, passengers must verify their identity at multiple checkpoints throughout a single journey, leading to delays and increased congestion at airports. To address this challenge, Facephi has developed technology that enables identity validation before arriving at the airport, reducing wait times and ensuring a seamless and secure travel experience. This innovation has already been successfully tested in collaboration with IATA through a proof of concept conducted last November.

More here.

The idea of creating an ecosystem in which identity is known throughout the entire passenger journey is not new to Facephi, of course. I remember that Safran developed a similar concept in the 2010s before it sold off Morpho, MorphoTrust, MorphoTrak, and Morpho Detection. And I’ve previously discussed the SITA-IDEMIA-Indico “Digital Travel Ecosystem.”

But however it’s accomplished, seamless travel benefits everyone…except the terrorists.

Have You Been Falsely Accused of NPE Use? You May Be Entitled To Compensation.

(From imgflip)

Yes, I broke a cardinal rule by placing an undefined acronym in the blog post title.

99% of all readers probably concluded that the “NPE” in the title was some kind of dangerous drug.

And there actually is something called Norpseudoephedrine that uses the acronym NPE. It was discussed in a 1998 study shared by the National Library of Medicine within the National Institutes of Health. (TL;DR: NPE “enhances the analgesic and rate decreasing effects of morphine, but inhibits its discriminative properties.”)

From the National Library of Medicine.

But I wasn’t talking about THAT NPE.

I was talking about the NPEs that are non-person entities. 

But not in the context of attribute-based access control or rivers or robo-docs.

I was speaking of using generative artificial intelligence to write text.

My feelings on this have been expressed before, including my belief that generative AI should NEVER write the first draft of any published piece.

A false accusation

A particular freelance copywriter holds similar beliefs, so she was shocked when she received a rejection notice from a company that included the following:

“We try to avoid employing people who use AI for their writing.

“Although you answered ‘No’ to our screening question, the text of your proposal is AI-generated.”

There’s only one teeny problem: the copywriter wrote her proposal herself.

(This post doesn’t name the company that made the false accusation, so if you DON’T want to know who the company is, don’t click on this link.)

Face it. (Yes, I used that word intentionally; I’ve got a business to run.) Some experts—well, self-appointed “experts”—who delve into the paragraph you’re reading right now will conclude that its use of proper grammar, em dashes, the word “delve,” and the Oxford comma PROVE that I didn’t write it. Maybe I’ll add a rocket emoji to help them perpetuate their misinformation. 🚀

Heck, I used the word “delve” for years before ChatGPT became a verb. And now I use it on purpose just to irritate the “experts.”

The ramifications of a false accusation

And the company’s claim about the copywriter’s authorship is not only misinformation.

It’s libel.

I have some questions for the company that falsely accused the copywriter of using generative AI to write her proposal.

  • How did the company conclude that the copywriter did not write her proposal, but used a generative AI tool to write it?
  • What is the measured accuracy of the method employed by the company?
  • Has the copywriter been placed on a blocklist by the company based upon this false accusation?
  • Has the company shared this false accusation with other companies, thus endangering the copywriter’s ability to make a living?

If this rises to the level of personal injury, perhaps an attorney should get involved.

From imgflip.

A final thought

Seriously: if you’re accused of something you didn’t do, push back.

After all, humans who claim to detect AI have not been independently measured regarding their AI detection accuracy.

And AI-powered AI detectors can hallucinate.
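Even a detector with a seemingly modest error rate can generate a surprising share of false accusations once you account for the base rate of AI-written submissions. A back-of-the-envelope Bayes calculation illustrates the point; every number below is an assumption for illustration, not a measured accuracy figure for any real detector:

```python
# Illustrative base-rate arithmetic. All rates are invented assumptions,
# not measured figures for any real AI-text detector.
def false_accusation_probability(sensitivity, false_positive_rate, base_rate):
    """P(human-written | flagged as AI-written), via Bayes' rule."""
    # Overall probability that a submission gets flagged:
    p_flag = sensitivity * base_rate + false_positive_rate * (1 - base_rate)
    # Share of flagged submissions that were actually human-written:
    return false_positive_rate * (1 - base_rate) / p_flag

# Suppose a detector catches 90% of AI text, wrongly flags 10% of human
# text, and 20% of submitted proposals are actually AI-written.
p = false_accusation_probability(0.9, 0.1, 0.2)
print(f"{p:.0%} of flagged proposals are false accusations")  # about 31%
```

In other words, under these assumed rates, roughly three in ten writers the detector flags would be falsely accused, which is exactly why the burden of proof matters.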

So be safe, and take care of yourself, and each other.


Jerry Springer. By Justin Hoch, CC BY 2.0, https://commons.wikimedia.org/w/index.php?curid=16673259.

How to Recognize People From Quite a Long Way Away

I can’t find it, and I failed to blog about it (because reasons), but several years ago there was a U.S. effort to recognize people from quite a long way away.

Recognize, not recognise.

From https://www.youtube.com/watch?v=ug8nHaelWtc.

The U.S. effort was not a juvenile undertaking, but from what I recall was seeking solutions to wartime use cases, in which the enemy (or a friend) might be quite a long way away.

I was reminded of this U.S. long-distance biometric effort when Biometric Update reported on efforts by Heriot-Watt University in Edinburgh, Scotland and other entities to use light detection and ranging (LiDAR) to capture and evaluate faces from as far as a kilometer away.

At 325 metres – the length of around three soccer pitches – researchers were able to 3D image the face of one of their co-authors in millimetre-scale detail.

The same system could be used to accurately detect faces and human activity at distances of up to one kilometre – equivalent to the length of 10 soccer pitches – the researchers say.

(I’m surprised they said “soccer.” Maybe it’s a Scots vs. English thing.)

More important than the distance is the fact that since they didn’t depend upon visible light, they could capture faces shrouded by the environment.

“The results of our research show the enormous potential of such a system to construct detailed high-resolution 3D images of scenes from long distances in daylight or darkness conditions.

“For example, if someone is standing behind camouflage netting, this system has the potential to determine whether they are on their mobile phone, holding something, or just standing there idle. So there are a number of potential applications from a security and defence perspective.”

So much for camouflage.

But this is still in the research stage. Among other things, the tested “superconducting nanowire single-photon detector (SNSPD)” only works at about 1 kelvin.

That’s cold.

More on Injection Attack Detection

(Injection attack syringe image from Imagen 3)

Not too long after I shared my February 7 post on injection attack detection, Biometric Update shared a post of its own, “Veridas introduces new injection attack detection feature for fraud prevention.”

I haven’t mentioned Veridas much in the Bredemarket blog, but it is one of the 40+ identity firms that are blogging. In Veridas’ case, in English and Spanish.

And of course I referenced Veridas in my February 7 post when it defined the difference between presentation attack detection and injection attack detection.

Biometric Update played up this difference:

To stay ahead of the curve, Spanish biometrics company Veridas has introduced an advanced injection attack detection capability into its system, to combat the growing threat of synthetic identities and deepfakes…. 

Veridas says that standard fraud detection only focuses on what it sees or hears – for example, face or voice biometrics. So-called Presentation Attack Detection (PAD) looks for fake images, videos and voices. Deepfake detection searches for the telltale artifacts that give away the work of generative AI. 

Neither are monitoring where the feed comes from or whether the device is compromised. 

I can revisit the arguments about whether you should get PAD and…IAD?…from the same vendor, or whether you should get best-in-class solutions to address each issue separately.

But they need to be addressed.

Injection Attack Detection

(Injection attack syringe image from Imagen 3)

Having realized that I have never discussed injection attacks on the Bredemarket blog, I decided I should rectify this.

Types of attacks

When considering falsifying identity verification or authentication, it’s helpful to see how Veridas defines two different types of falsification:

  1. Presentation Attacks: These involve an attacker presenting falsified evidence directly to the capture device’s camera. Examples include using photocopies, screenshots, or other forms of impersonation to deceive the system.
  2. Injection Attacks: These are more sophisticated, where the attacker introduces false evidence directly into the system without using the camera. This often involves manipulating the data capture or communication channels.

To be honest, most of my personal experience involves presentation attacks, in which the identity verification/authentication system remains secure but the information, um, presented to it is altered in some way. See my posts on Vision Transformer (ViT) Models and NIST IR 8491.

By JamesHarrison – Own work, Public Domain, https://commons.wikimedia.org/w/index.php?curid=4873863.

Injection attacks and the havoc they wreak

In an injection attack, the identity verification/authentication system itself is compromised. For example, instead of taking its data from the camera, data from some other source is, um, injected so that it looks like it came from the camera.

Incidentally, I should tangentially note that injection attacks greatly differ from scraping attacks, in which content from legitimate blogs is stolen and republished on scummy blogs that merely rip off their original writers. Speaking for myself, it is clear that this repurposing is not an honorable practice.

Note that injection attacks don’t only affect identity systems, but can affect ANY computer system. SentinelOne digs into the different types of injection attacks, including manipulation of SQL queries, cross-site scripting (XSS), and other types. Here’s an example from the health world that is pertinent to Bredemarket readers:

In May 2024, Advocate Aurora Health, a healthcare system in Wisconsin and Illinois, reported a data breach exposing the personal information of 3 million patients. The breach was attributed to improper use of Meta Pixel on the websites of the provider. After the breach, Advocate Health was faced with hefty fines and legal battles resulting from the exposure of Protected Health Information (PHI).
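SQL injection, one of the attack types SentinelOne names, is easy to illustrate. Here is a minimal sketch using Python’s built-in sqlite3 module; the table, patient name, and payload are all invented for illustration:

```python
import sqlite3

# Invented example table for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE patients (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO patients (name) VALUES ('Jane Doe')")

user_input = "Jane Doe' OR '1'='1"  # a classic SQL injection payload

# Vulnerable pattern: string concatenation lets the payload rewrite the
# query so it matches every row. (Shown as a string, deliberately not run.)
vulnerable = f"SELECT * FROM patients WHERE name = '{user_input}'"

# Safe pattern: the placeholder binds the payload as data, never as SQL.
rows = conn.execute(
    "SELECT * FROM patients WHERE name = ?", (user_input,)
).fetchall()
print(rows)  # [] -- the literal payload string matches no patient name
```

The parameterized query treats the attacker’s input as an opaque value, which is the standard defense against this class of injection.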

Returning to the identity sphere, Mitek Systems highlights a common injection.

Deepfakes utilize AI and machine learning to create lifelike videos of real people saying or doing things they never actually did. By injecting such videos into a system’s feed, fraudsters can mimic the appearance of a legitimate user, thus bypassing facial recognition security measures.

Again, this differs from someone with a mask getting in front of the system’s camera. Injections bypass the system’s camera.

Fight back, even when David Horowitz isn’t helping you

So how do you detect that you aren’t getting data from the camera or capture device that is supposed to be providing it? Many vendors offer tactics to attack the attackers; here’s what ID R&D (part of Mitek Systems) proposes.

These steps include creating a comprehensive attack tree, implementing detectors that cover all the attack vectors, evaluating potential security loopholes, and setting up a continuous improvement process for the attack tree and associated mitigation measures.
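One detector in such an attack tree might check data provenance. Purely as a hedged sketch (this is not ID R&D’s or any vendor’s actual method): if a trusted capture device signs each frame with a key shared with the verification server, the server can reject feeds injected downstream of the camera.

```python
import hmac
import hashlib
import os

# Assumed setup: the key is provisioned to the trusted capture device
# and known to the verification server. Everything here is illustrative.
DEVICE_KEY = os.urandom(32)

def sign_frame(frame_bytes: bytes, key: bytes = DEVICE_KEY) -> bytes:
    # The capture device tags each frame with an HMAC at the source.
    return hmac.new(key, frame_bytes, hashlib.sha256).digest()

def verify_frame(frame_bytes: bytes, tag: bytes, key: bytes = DEVICE_KEY) -> bool:
    # The server rejects any frame whose tag doesn't verify -- e.g. a
    # deepfake video injected after the camera carries no valid tag.
    expected = hmac.new(key, frame_bytes, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

genuine = b"raw sensor frame from the camera"
tag = sign_frame(genuine)
print(verify_frame(genuine, tag))               # True
print(verify_frame(b"injected deepfake", tag))  # False
```

A real deployment would also need key protection on the device and replay defenses (nonces or timestamps), but the sketch shows the basic provenance check.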

And as long as I’m on a Mitek kick, here’s Chris Briggs telling Adam Bacia about how injection attacks relate to everything else.

From https://www.youtube.com/watch?v=ZXBHlzqtbdE.

As you can see, the tactics to fight injection attacks are far removed from the more forensic “liveness” procedures such as detecting whether a presented finger is from a living breathing human.

Presentation attack detection can only go so far.

Injection attack detection is also necessary.

So if you’re a company guarding against spoofing, you need someone who can create content, proposals, and analysis that can address both biometric and non-biometric factors.

Learn how Bredemarket can help.

CPA

Not that I’m David Horowitz, but I do what I can. As did David Horowitz’s producer when he was threatened with a gun. (A fake gun.)

From https://www.youtube.com/watch?v=ZXP43jlbH_o.

Clean Fast Contactless Biometrics

(Image from DW)

The COVID-19 pandemic may be a fading memory, but contactless biometrics remains popular.

Back in the 1980s, you had to touch something to get the then-new “livescan” machines to capture your fingerprints. While you no longer had messy ink-stained fingers, you still had to put your fingers on a surface that a bunch of other people had touched. What if they had the flu? Or AIDS (the health scare of that decade)?

As we began to see facial recognition in the 1990s and early 2000s, one advantage of that biometric modality was that it was CONTACTLESS. Unlike fingerprints, you didn’t have to press your face against a surface.

But then fingerprints also became contactless after someone asked an unusual question in 2004.

“Actually this effort launched before that, as there were efforts in 2004 and following years to capture a complete set of fingerprints within 15 seconds…”

This WAS an unusual question, considering that it took a minute or more to capture inked prints or livescan prints. And the government expected this to happen in 15 seconds?

A decade later several companies were pursuing this in conjunction with NIST. There were two solutions: dedicated kiosks such as MorphoWave from my then-employer MorphoTrak, and solutions that used a standard smartphone camera such as SlapShot from Sciometrics and Integrated Biometrics.

The, um, upshot is that now contactless fingerprint and face capture are both a thing. Contactless capture provides speed, and even the impossible 15-second capture target was blown away.

Fingers and faces can be captured “on the move” in airports, border crossings, stadiums, and university lunchrooms and other educational facilities.

Perhaps iris and voice can be considered contactless and fast.

But even “rapid” DNA isn’t that rapid.

Hospital Patient Facial Recognition

(Hospitalized wildebeest facial recognition image from Google Gemini)

It’s no secret that I detest the practice of identifying a patient by their name and birthdate. A fraudster can easily acquire this knowledge and impersonate a patient.

The people that I hang around with promote biometrics as a better solution to authentication of a hospital patient whose identity was previously verified. Of course, this crowd promotes biometrics as the solution to EVERYTHING. My former Motorola coworker Edward Chen has established a company called Biometrics4ALL.

But the need to identify patients is real. Are you about to remove Jane’s appendix? You’d better make sure that’s Jane on the operating table. And yes, that mistake has happened. (The hospital was very sorry.)

Of the various biometric modalities, face seems the most promising for the health use case, particularly for hospital patients.

  • Fingerprints require you or a medical professional to move your finger(s) to a contact or contactless reader. 
  • Hand geometry is even more difficult.
  • For iris or retinal scans, your eyes have to be open.
  • For voice, you have to be awake. And coherent—I’m not sure if a person can be identified by a moan of pain.
  • DNA takes at least 90 minutes.
  • Gait? Um…no.

Unlike the other modalities, the patient doesn’t have to do anything for facial recognition. Even if asleep or sedated, a medical professional can capture an image of a patient’s face. There are some accuracy considerations; I don’t know how well the algorithms work with closed eyes or a wide open mouth. But it looks promising.

Imprivata agrees that facial recognition is a valuable patient identification method.

“By capturing and analyzing unique facial characteristics such as the distance between the eyes and the shape of the nose, this technology can generate a unique identifier for each patient. This identifier is then linked to the patient’s electronic health record (EHR), ensuring that medical staff access the correct records. This method significantly reduces the risk of misidentification and the occurrence of duplicate records, thereby enhancing patient safety.”
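The enrollment-then-lookup flow Imprivata describes can be sketched in a few lines. This is a toy illustration, not Imprivata’s implementation: the gallery, embeddings, and threshold are invented, and real systems use vendor-specific templates rather than tiny vectors.

```python
import math

# Invented enrollment gallery: EHR record IDs mapped to hypothetical
# face embeddings captured at enrollment.
gallery = {
    "EHR-1001": [0.9, 0.1, 0.2],   # Jane's enrolled embedding
    "EHR-1002": [0.1, 0.8, 0.5],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def identify(probe, threshold=0.95):
    # Return the EHR record whose enrolled embedding best matches the
    # probe, or None if nothing clears the threshold -- declining to
    # match is safer than misidentifying a patient.
    best_id, best_score = max(
        ((pid, cosine(probe, emb)) for pid, emb in gallery.items()),
        key=lambda pair: pair[1],
    )
    return best_id if best_score >= threshold else None

print(identify([0.88, 0.12, 0.21]))  # EHR-1001 (close to Jane's enrollment)
print(identify([0.5, 0.5, 0.5]))     # None (no confident match)
```

The threshold embodies the safety trade-off: set it too low and you risk pulling up the wrong patient’s chart, which is the very misidentification problem biometrics is supposed to solve.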

However, I can think of one instance in which patient facial recognition would be challenging.

Burn victims.

If the patient were enrolled before the injury, the combination of disfigurement and bandaging would limit the ability to compare the current face to the previously enrolled one.

But this can be overcome. After all, we figured out how to recognize the faces of people wearing masks.

Don’t Miss the Boat

Bredemarket helps identity/biometric firms.

  • Finger, face, iris, voice, DNA, ID documents, geolocation, and even knowledge.
  • Content-Proposal-Analysis. (Bredemarket’s “CPA.”)

Don’t miss the boat.

Augment your team with Bredemarket.

Find out more.

Don’t miss the boat.

In Case You Missed My Incessant “Biometric Product Marketing Expert” Promotion

Biometric product marketing expert.

Modalities: Finger, face, iris, voice, DNA.

Plus other factors: IDs, data.

John E. Bredehoft has worked for Incode, IDEMIA, MorphoTrak, Motorola, Printrak, and a host of Bredemarket clients.

(Some images AI-generated by Google Gemini.)

Biometric product marketing expert.

Well, the Writer Was 60% Correct (Face-Iris Pixels Per Inch)

(Part of the biometric product marketing expert series)

I recently read a web page (I won’t name the site) that included the following text:

…fingerprints, palm prints, latents, faces, and irises at 500 or 1000 ppi.

Which is partially correct.

Yes, fingerprints, palm prints, and latent prints are measured in pixels per inch (ppi), with older systems capturing 500 ppi images, some newer systems capturing 1,000 ppi images, and other systems capturing 2,000 ppi or larger images. 2,000 ppi resolution is used in some images in NIST Special Database 300 because why not?
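Pixels per inch is meaningful here because contact capture covers a known physical area, so resolution translates directly into image size. A quick sketch of the arithmetic, assuming an illustrative single-finger platen of roughly 1.6″ × 1.5″ (platen dimensions vary by device):

```python
# Image dimensions for a fingerprint capture area at common scan
# resolutions. The platen size is an assumed example, not a standard.
WIDTH_IN, HEIGHT_IN = 1.6, 1.5

for ppi in (500, 1000, 2000):
    w, h = int(WIDTH_IN * ppi), int(HEIGHT_IN * ppi)
    print(f"{ppi:>4} ppi -> {w} x {h} pixels ({w * h / 1e6:.1f} MP)")
    # 500 ppi -> 800 x 750; 1000 ppi -> 1600 x 1500; 2000 ppi -> 3200 x 3000
```

Doubling the ppi quadruples the pixel count, which is why moving from 500 to 2,000 ppi has real storage and transmission consequences.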

I don’t know of any latent fingerprint examiner who is capturing 4,000 ppi friction ridge prints, but I bet that someone out there is doing it.

But faces and irises are not measured in pixels per inch.

Why not?

Because, at least until recently, friction ridge impressions were captured differently than faces and irises.

  • Since the 19th century, we’ve naturally assumed that friction ridges are captured via a contact method, whether by inking the fingers and palms and pressing against a paper card, pressing the fingers and palms against a livescan platen, or pressing a finger on a designated spot on a smartphone.
  • You don’t press your face or iris against a camera. Yes, you often have to place your iris very close to a camera, but it’s still a contactless method.

This is not a recommended method of facial image acquisition. From https://www.youtube.com/watch?v=4XhWFHKWCSE.

Obviously things have changed in the friction ridge world over the last decade, as more companies support contactless methods of fingerprint capture, either through dedicated devices or standard smartphone cameras.

And that has caused issues for organizations such as the U.S. Federal Bureau of Investigation, which has very deep concerns about how contactless fingerprints will function in its current contact-based systems.

For example, how will Electronic Biometric Transmission Specification Appendix F (version 11.2 here) compliance work in the world where the friction ridges are NOT pressed against a surface?