And of course I referenced Veridas in my February 7 post, which defined the difference between presentation attack detection and injection attack detection.
Biometric Update played up this difference:
To stay ahead of the curve, Spanish biometrics company Veridas has introduced an advanced injection attack detection capability into its system, to combat the growing threat of synthetic identities and deepfakes….
Veridas says that standard fraud detection only focuses on what it sees or hears – for example, face or voice biometrics. So-called Presentation Attack Detection (PAD) looks for fake images, videos and voices. Deepfake detection searches for the telltale artifacts that give away the work of generative AI.
Neither are monitoring where the feed comes from or whether the device is compromised.
I can revisit the arguments about whether you should get PAD and…IAD?…from the same vendor, or whether you should get best-in-class solutions to address each issue separately.
When considering how identity verification or authentication can be falsified, it’s helpful to see how Veridas defines two different types of falsification:
Presentation Attacks: These involve an attacker presenting falsified evidence directly to the capture device’s camera. Examples include using photocopies, screenshots, or other forms of impersonation to deceive the system.
Injection Attacks: These are more sophisticated, where the attacker introduces false evidence directly into the system without using the camera. This often involves manipulating the data capture or communication channels.
To be honest, most of my personal experience involves presentation attacks, in which the identity verification/authentication system remains secure but the information, um, presented to it is altered in some way. See my posts on Vision Transformer (ViT) Models and NIST IR 8491.
In an injection attack, the identity verification/authentication system itself is compromised. For example, instead of taking its data from the camera, the system receives data from some other source that is, um, injected so that it looks like it came from the camera.
Incidentally, I should note that injection attacks greatly differ from scraping attacks, in which content from legitimate blogs is stolen and injected into scummy blogs that merely rip off content from the original writers. Speaking for myself, it is clear that this repurposing is not an honorable practice.
Note that injection attacks don’t only affect identity systems, but can affect ANY computer system. SentinelOne digs into the different types of injection attacks, including manipulation of SQL queries, cross-site scripting (XSS), and other types. Here’s an example from the health world that is pertinent to Bredemarket readers:
In May 2024, Advocate Aurora Health, a healthcare system in Wisconsin and Illinois, reported a data breach exposing the personal information of 3 million patients. The breach was attributed to improper use of Meta Pixel on the websites of the provider. After the breach, Advocate Health was faced with hefty fines and legal battles resulting from the exposure of Protected Health Information (PHI).
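To make the SQL query manipulation concrete, here is a minimal sketch (hypothetical schema, using Python’s built-in sqlite3 module) of how a string-built query is injectable while a parameterized query is not:

```python
import sqlite3

# Minimal illustration (hypothetical schema) of why string-built SQL
# queries are injectable and parameterized queries are not.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")

malicious = "nobody' OR '1'='1"

# Vulnerable: the attacker's input becomes part of the SQL itself,
# so the OR '1'='1' clause matches every row.
rows = conn.execute(
    f"SELECT name FROM users WHERE name = '{malicious}'"
).fetchall()
assert rows == [("alice",)]  # injection succeeded

# Safe: the driver passes the input as data, never as SQL,
# so the literal string matches no user.
rows = conn.execute(
    "SELECT name FROM users WHERE name = ?", (malicious,)
).fetchall()
assert rows == []  # injection neutralized
```

Parameterized queries are the standard mitigation for this class of attack; the same principle (never let untrusted input become executable instructions or a trusted data stream) underlies defenses against the biometric injection attacks discussed here.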
Deepfakes utilize AI and machine learning to create lifelike videos of real people saying or doing things they never actually did. By injecting such videos into a system’s feed, fraudsters can mimic the appearance of a legitimate user, thus bypassing facial recognition security measures.
Again, this differs from someone with a mask getting in front of the system’s camera. Injections bypass the system’s camera.
Fight back, even when David Horowitz isn’t helping you
So how do you detect that you aren’t getting data from the camera or capture device that is supposed to be providing it? Many vendors offer tactics to attack the attackers; here’s what ID R&D (part of Mitek Systems) proposes.
These steps include creating a comprehensive attack tree, implementing detectors that cover all the attack vectors, evaluating potential security loopholes, and setting up a continuous improvement process for the attack tree and associated mitigation measures.
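As a rough illustration of that attack-tree bookkeeping (the structure and vector names below are my own hypothetical sketch, not ID R&D’s actual taxonomy), you could represent attack vectors alongside their detectors and flag the uncovered loopholes:

```python
# Hypothetical sketch of the attack-tree bookkeeping described above:
# enumerate injection attack vectors, then check that every leaf vector
# is covered by at least one deployed detector. All names are
# illustrative, not ID R&D's actual taxonomy.
attack_tree = {
    "injection": {
        "virtual_camera": ["driver_signature_check"],
        "javascript_feed_replacement": ["browser_integrity_check"],
        "hardware_video_injector": [],  # no detector yet -> loophole
    }
}

def uncovered_vectors(tree):
    """Return leaf attack vectors with no associated detector."""
    gaps = []
    for branch, vectors in tree.items():
        for vector, detectors in vectors.items():
            if not detectors:
                gaps.append(f"{branch}/{vector}")
    return gaps

print(uncovered_vectors(attack_tree))
```

The continuous improvement step then amounts to re-running this coverage check whenever a new attack vector is added to the tree.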
As you can see, the tactics to fight injection attacks are far removed from the more forensic “liveness” procedures such as detecting whether a presented finger is from a living breathing human.
Presentation attack detection can only go so far.
Injection attack detection is also necessary.
So if you’re a company guarding against spoofing, you need someone who can create content, proposals, and analysis that can address both biometric and non-biometric factors.
The COVID-19 pandemic may be a fading memory, but contactless biometrics remains popular.
Back in the 1980s, you had to touch something to get the then-new “livescan” machines to capture your fingerprints. While you no longer had messy ink-stained fingers, you still had to put your fingers on a surface that a bunch of other people had touched. What if they had the flu? Or AIDS (the health scare of that decade)?
As we began to see facial recognition in the 1990s and early 2000s, one advantage of that biometric modality was that it was CONTACTLESS. Unlike fingerprints, you didn’t have to press your face against a surface.
“Actually this effort launched before that, as there were efforts in 2004 and following years to capture a complete set of fingerprints within 15 seconds…”
This WAS an unusual question, considering that it took a minute or more to capture inked prints or livescan prints. And the government expected this to happen in 15 seconds?
A decade later, several companies were pursuing this in conjunction with NIST. There were two types of solutions: dedicated kiosks such as MorphoWave from my then-employer MorphoTrak, and solutions that used a standard smartphone camera, such as SlapShot from Sciometrics and Integrated Biometrics.
The, um, upshot is that now contactless fingerprint and face capture are both a thing. Contactless capture provides speed, and even the seemingly impossible 15-second capture target was blown away.
Fingers and faces can be captured “on the move” in airports, border crossings, stadiums, and university lunchrooms and other educational facilities.
Perhaps iris and voice can also be considered contactless and fast.
The people that I hang around with promote biometrics as a better solution to authentication of a hospital patient whose identity was previously verified. Of course, this crowd promotes biometrics as the solution to EVERYTHING. My former Motorola coworker Edward Chen has established a company called Biometrics4ALL.
But the need to identify patients is real. Are you about to remove Jane’s appendix? You’d better make sure that’s Jane on the operating table. And yes, that mistake has happened. (The hospital was very sorry.)
Of the various biometric modalities, face seems the most promising for the health use case, particularly for hospital patients.
Fingerprints require you or a medical professional to move your finger(s) to a contact or contactless reader.
Hand geometry is even more difficult.
For iris or retinal scans, your eyes have to be open.
For voice, you have to be awake. And coherent—I’m not sure if a person can be identified by a moan of pain.
DNA takes at least 90 minutes.
Gait? Um…no.
Unlike the other modalities, the patient doesn’t have to do anything for facial recognition. Even if asleep or sedated, a medical professional can capture an image of a patient’s face. There are some accuracy considerations; I don’t know how well the algorithms work with closed eyes or a wide open mouth. But it looks promising.
Imprivata agrees that facial recognition is a valuable patient identification method.
“By capturing and analyzing unique facial characteristics such as the distance between the eyes and the shape of the nose, this technology can generate a unique identifier for each patient. This identifier is then linked to the patient’s electronic health record (EHR), ensuring that medical staff access the correct records. This method significantly reduces the risk of misidentification and the occurrence of duplicate records, thereby enhancing patient safety.”
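A minimal sketch of the flow Imprivata describes (capture a template, match it against enrolled patients, resolve to an EHR identifier) might look like the following. The templates, threshold, and identifiers are all hypothetical; real systems use high-dimensional embeddings from trained models, not three-element vectors.

```python
import math

# Illustrative sketch (not Imprivata's actual implementation):
# a face capture yields a numeric template, the template is matched
# against enrolled patients, and the best match above a threshold
# resolves to that patient's EHR identifier.
enrolled = {
    "EHR-1001": [0.12, 0.80, 0.33],  # hypothetical face templates
    "EHR-1002": [0.90, 0.10, 0.45],
}

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def identify(probe, threshold=0.95):
    """Return the EHR id of the best match above threshold, else None."""
    best_id, best_score = None, threshold
    for ehr_id, template in enrolled.items():
        score = cosine_similarity(probe, template)
        if score > best_score:
            best_id, best_score = ehr_id, score
    return best_id

print(identify([0.11, 0.82, 0.31]))  # close to EHR-1001's template
```

The threshold matters: set it too low and you risk exactly the misidentification the technology is supposed to prevent; too high and legitimate patients (such as the burn victims discussed below) fail to match.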
However, I can think of one instance in which patient facial recognition would be challenging.
Burn victims.
If the patient were enrolled before the injury, the combination of disfigurement and bandaging would limit the ability to compare the current face to the previously enrolled one.
I recently read a web page (I won’t name the site) that included the following text:
…fingerprints, palm prints, latents, faces, and irises at 500 or 1000 ppi.
Which is partially correct.
Yes, fingerprints, palm prints, and latent prints are measured in pixels per inch (ppi), with older systems capturing 500 ppi images, some newer systems capturing 1,000 ppi images, and other systems capturing 2,000 ppi or higher images. 2,000 ppi resolution is used in some images in NIST Special Database 300 because why not?
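For a sense of scale, here’s the arithmetic for a hypothetical one-inch-square capture area at those resolutions:

```python
# Quick arithmetic on what the resolutions above mean in practice,
# assuming a hypothetical one-inch-square capture area
# (ppi = pixels per inch).
capture_inches = 1.0

pixel_counts = {
    ppi: int(ppi * capture_inches) ** 2
    for ppi in (500, 1000, 2000)
}
print(pixel_counts)
# -> {500: 250000, 1000: 1000000, 2000: 4000000}
```

Doubling the ppi quadruples the pixel count, which is why higher-resolution friction ridge images carry real storage and bandwidth costs.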
I don’t know of any latent fingerprint examiner who is capturing 4,000 ppi friction ridge prints, but I bet that someone out there is doing it.
But faces and irises are not measured in pixels per inch.
Why not?
Because, at least until recently, friction ridge impressions were captured differently than faces and irises.
Since the 19th century, we’ve naturally assumed that friction ridges are captured via a contact method, whether by inking the fingers and palms and pressing against a paper card, pressing the fingers and palms against a livescan platen, or pressing a finger on a designated spot on a smartphone.
You don’t press your face or iris against a camera. Yes, you often have to place your iris very close to a camera, but it’s still a contactless method.
Obviously things have changed in the friction ridge world over the last decade, as more companies support contactless methods of fingerprint capture, either through dedicated devices or standard smartphone cameras.
And that has caused issues for organizations such as the U.S. Federal Bureau of Investigation, which has very deep concerns about how contactless fingerprints will function in its current contact-based systems.
For example, how will Electronic Biometric Transmission Specification Appendix F (version 11.2 here) compliance work in the world where the friction ridges are NOT pressed against a surface?
Both the U.S. National Institute of Standards and Technology and the Digital Benefits Hub made important announcements this morning. I will quote portions of the latter announcement.
In response to heightened fraud and related cybersecurity threats during the COVID-19 pandemic, some benefits-administering agencies began to integrate new safeguards such as individual digital accounts and identity verification, also known as identity proofing, into online applications. However, the use of certain approaches, like those reliant upon facial recognition or data brokers, has raised questions about privacy and data security, due process issues, and potential biases in systems that disproportionately impact communities of color and marginalized groups. Simultaneously, adoption of more effective, evidence-based methods of identity verification has lagged, despite recommendations from NIST (Question A4) and the Government Accountability Office.
There’s a ton to digest here. This impacts a number of issues that I and others have been discussing for years.
Image from the mid-2010s. “John, how do you use the CrowdCompass app for this Users Conference?” Well, let me tell you…
Because of my former involvement with the biometric user conference managed by IDEMIA, MorphoTrak, Sagem Morpho, Motorola, and older entities, I always like to peek and see what they’re doing these days. And it looks like they’re still prioritizing the educational element of the conference.
Although the 2024 Justice and Public Safety Conference won’t take place until September, the agenda is already online.
Subject to change, presumably.
This Joseph Courtesis session, scheduled for the afternoon of Thursday, September 12, caught my eye. It’s entitled “Ethical Use of Facial Recognition in Law Enforcement: Policy Before Technology.” Here’s an excerpt from the abstract:
This session will focus on post investigative image identification with the assistance of Facial Recognition Technology (FRT). It’s important to point out that FRT, by itself, does not produce Probable Cause to arrest.
Re-read that last sentence, then re-read it one more time. The publicized wrongful arrest cases would have been avoided if everyone had adopted this one practice. FRT is ONLY an investigative lead.
And Courtesis makes one related point:
Any image identification process that includes FRT should put policy before the technology.
Any technology that could deprive a person of their liberty needs a clear policy on its proper use.
September conference attendees will definitely receive a comprehensive education from an authority on the topic.
But now I’m having flashbacks, and visions of Excel session planning workbooks are dancing in my head. Maybe they plan with Asana today.
HID Global has teamed up with Amazon Web Services to enhance biometric face imaging capabilities by utilizing the Amazon Rekognition computer vision cloud service on its U.ARE.U camera system.
And I also don’t know whether HID Global will be prevented from providing the U.ARE.U face product to law enforcement, given Amazon’s 2020-2021 ban on law enforcement use of Amazon Rekognition’s face capabilities.
Amazon Rekognition and the FBI
Especially since Fedscoop revealed in January that the FBI was in the “initiation” phase of using Amazon Rekognition. Neither Amazon nor the FBI would say whether facial recognition was part of the deal.
If Alphabet or Amazon reverse their current reluctance to market their biometric offerings to governments, the entire landscape could change again.
If they wished, Alphabet, Amazon, and the other tech powers could shut IDEMIA, NEC, and Thales completely out of the biometric business with a minimal (to them) investment. If you’re familiar with SWOT analyses, this definitely falls into the “threat” category.
But the Really Big Bunch still fear public reaction to any so-called “police state” involvement.