I Guess I Was Fated to Write About NIST IR 8491 on Passive Presentation Attack Detection

Remember in mid-August when I said that the U.S. National Institute of Standards and Technology was splitting its FRVT tests into FRTE and FATE tests?

Well, the FATE side of the house has released its first two studies, including one entitled “Face Analysis Technology Evaluation (FATE) Part 10: Performance of Passive, Software-Based Presentation Attack Detection (PAD) Algorithms” (NIST Internal Report NIST IR 8491; PDF here).

By JamesHarrison – Own work, Public Domain, https://commons.wikimedia.org/w/index.php?curid=4873863

I’ve written all about this study in a LinkedIn article under my own name that answers the following questions:

  • What is a presentation attack?
  • How do you detect presentation attacks?
  • Why does NIST care about presentation attacks?
  • And why should you?

My LinkedIn article, “Why NIST Cares About Presentation Attack Detection…and Why You Should Also,” can be found at the link https://www.linkedin.com/pulse/why-nist-cares-presentation-attack-detectionand-you-should-bredehoft/.

ICYMI: Gummy Fingers

In case you missed it…

My recent post “Why Apple Vision Pro Is a Technological Biometric Advance, but Not a Revolutionary Biometric Event” included the following sentence:

But the iris security was breached by a “dummy eye” just a month later, in the same way that gummy fingers and face masks have defeated other biometric technologies.

From https://bredemarket.com/2023/06/12/vision-pro-not-revolutionary-biometrics-event/

A biometrics industry colleague noticed the rhyming words “dummy” and “gummy” and wondered if the latter was a typo. It turns out it wasn’t.

To my knowledge, these gummy fingers do NOT have ridges. From https://www.candynation.com/gummy-fingers

Back in 2002, researcher Tsutomu Matsumoto used the gelatin found in “gummy bears” to create a fake finger that fooled a fingerprint reader.

Back in 2002, this news WAS really “scary,” since it suggested that you could access a site protected by a fingerprint reader with something that wasn’t a finger. Gelatin. A piece of metal. A photograph.

Except that the fingerprint reader world didn’t stand still after 2002, and the industry developed ways to detect spoofed fingers.

For the rest of the story, see “We Survived Gummy Fingers. We’re Surviving Facial Recognition Inaccuracy. We’ll Survive Voice Spoofing.”


The Great Renaming: FRVT is now FRTE and FATE

Face professionals, your world just changed.

I and countless others have spent the last several years referring to the National Institute of Standards and Technology’s Face Recognition Vendor Test, or FRVT. I guess some people have spent almost a quarter century referring to FRVT, because the term has been in use since 1999.

Starting now, you’re not supposed to use the FRVT acronym any more.

From NIST:

Face Technology Evaluations – FRTE/FATE

To bring clarity to our testing scope and goals, what was formerly known as FRVT has been rebranded and split into FRTE (Face Recognition Technology Evaluation) and FATE (Face Analysis Technology Evaluation).  Tracks that involve the processing and analysis of images will run under the FATE activity, and tracks that pertain to identity verification will run under FRTE.  All existing participation and submission procedures remain unchanged.

From https://www.nist.gov/programs-projects/face-technology-evaluations-frtefate

So, for example, the former “FRVT 1:1” and “FRVT 1:N” are now named “FRTE 1:1” and “FRTE 1:N,” respectively. At least at present, the old links https://pages.nist.gov/frvt/html/frvt11.html and https://pages.nist.gov/frvt/html/frvt1N.html still work.

The change actually makes sense, since tasks such as age estimation and presentation attack detection (liveness detection) do not directly relate to the identification of individuals.

Us old folks just have to get used to the change.

I just hope that the new “FATE” acronym doesn’t mean that some algorithms are destined to perform better than others.

Communicating How Your Firm Fights Synthetic Identities

(Updated question count 10/23/2023)

Does your firm fight crooks who try to fraudulently use synthetic identities? If so, how do you communicate your solution?

This post explains what synthetic identities are (with examples), describes four ways to detect them, and closes by answering the communication question.

While this post is primarily intended for identity firms who can use Bredemarket’s marketing and writing services, anyone else who is interested in synthetic identities can read along.

What are synthetic identities?

To explain what synthetic identities are, let me start by telling you about Jason Brown.

Jason Brown wasn’t Jason Brown

You may not have heard of him unless you lived in Atlanta, Georgia in 2019, near the apartment he rented.

Jason Brown’s renting of an apartment isn’t all that unusual.

If you had visited Brown’s apartment in February 2019, you would have found credit cards and financial information for Adam M. Lopez and Carlos Rivera.

Now that’s a little unusual, especially since Lopez and Rivera never existed.

For that matter, Jason Brown never existed either.

Brown was synthetically created from a stolen social security number and a fake California driver’s license. The creator was a man named Corey Cato, who was engaged in massive synthetic identity fraud. If you want to talk about a case that emphasizes the importance of determining financial identity, this is it.

A Georgia man was sentenced Sept. 1 (2022) to more than seven years in federal prison for participating in a nationwide fraud ring that used stolen social security numbers, including those belonging to children, to create synthetic identities used to open lines of credit, create shell companies, and steal nearly $2 million from financial institutions….

Cato joined conspiracies to defraud banks and illegally possess credit cards. Cato and his co-conspirators created “synthetic identities” by combining false personal information such as fake names and dates of birth with the information of real people, such as their social security numbers. Cato and others then used the synthetic identities and fake ID documents to open bank and credit card accounts at financial institutions. Cato and his co-conspirators used the unlawfully obtained credit cards to fund their lifestyles.

From https://www.ice.gov/news/releases/hsi-investigates-synthetic-identities-scheme-defrauded-banks-nearly-2m

Talking about synthetic identity at Victoria Gardens

Here’s a video that I created on Saturday that describes, at a very high level, how synthetic identities can be used fraudulently. People who live near Rancho Cucamonga, California will recognize the Victoria Gardens shopping center, proof that synthetic identity theft can occur far away from Georgia.

From https://www.youtube.com/watch?v=oDrSBlDJVCk

Note that synthetic identity theft is different from stealing someone else’s existing identity. In this case, a new identity is created.

So how do you catch these fraudsters?

Catching the identity synthesizers

If you’re renting out an apartment, and Jason Brown shows you his driver’s license and provides his Social Security Number, how can you detect if Brown is a crook? There are four methods to verify that Jason Brown exists, and that he’s the person renting your apartment.

Method One: Private Databases

One way to check Jason Brown’s story is to perform credit checks and other data investigations using financial databases.

  • Did Jason Brown just spring into existence within the past year, with no earlier credit record? That seems suspicious.
  • Does Jason Brown’s credit record appear TOO clean? That seems suspicious.
  • Does Jason Brown share information such as a common social security number with other people? Are any of those other identities also fraudulent? That is DEFINITELY suspicious.

This is one way that many firms detect synthetic identities, and for some firms it is the ONLY way they detect synthetic identities. And these firms have to tell their story to their prospects.
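To make the heuristics above concrete, here is a minimal sketch in Python. The record structure, field names, and thresholds are entirely hypothetical; they simply illustrate the three checks, not any vendor’s actual logic.

```python
from datetime import date

def synthetic_identity_flags(record: dict, today: date = date.today()) -> list[str]:
    """Return reasons a credit record looks like a possible synthetic identity.

    The ``record`` dict and its field names are illustrative only.
    """
    flags = []

    # 1. Did the identity spring into existence recently, with no earlier history?
    oldest = record.get("oldest_tradeline_opened")  # a date, or None
    if oldest is None or (today - oldest).days < 365:
        flags.append("credit history is less than a year old")

    # 2. Does the record look TOO clean? (no inquiries, no late payments, thin file)
    if (record.get("inquiries_24mo", 0) == 0
            and record.get("delinquencies", 0) == 0
            and record.get("tradeline_count", 0) <= 2):
        flags.append("file is suspiciously thin and spotless")

    # 3. Is the SSN shared with other identities, some of them already flagged?
    linked = record.get("identities_sharing_ssn", [])
    if linked:
        flags.append(f"SSN shared with {len(linked)} other identities")
        if any(i.get("flagged_fraud") for i in linked):
            flags.append("a linked identity was previously flagged as fraudulent")

    return flags
```

No single flag proves fraud, of course; in practice, firms weight and combine signals like these (and many more) before declining an applicant.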

If your firm offers a tool to verify identities via private databases, how do you let your prospects know the benefits of your tool, and why your solution is better than all other solutions?

Method Two: Check That Driver’s License (or other government document)

What about that driver’s license that Brown presented? There are a wide variety of software tools that can check the authenticity of driver’s licenses, passports, and other government-issued documents. Some of these tools existed back in 2019 when “Brown” was renting his apartment, and a number of them exist today.

Maybe your firm has created such a tool, or uses a tool from a third party.

If your firm offers this capability, how can your prospects learn about its benefits, and why your solution excels?

Method Three: Check Government Databases

Checking the authenticity of a government-issued document may not be enough, since the document itself may be legitimate, but the implied credentials may no longer be legitimate. For example, if my California driver’s license expires in 2025, but I move to Minnesota in 2023 and get a new license, my California driver’s license is no longer valid, even though I have it in my possession.

Why not check the database of the Department of Motor Vehicles (or the equivalent in your state) to see if there is still an active driver’s license for that person?

The American Association of Motor Vehicle Administrators (AAMVA) maintains a Driver’s License Data Verification (DLDV) Service in which participating jurisdictions allow other entities to verify the license data for individuals. Your firm may be able to access the DLDV data for selected jurisdictions, providing an extra identity verification tool.
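I haven’t used the DLDV interface myself, so the endpoint, field names, and response format below are hypothetical. The sketch just shows the shape of such a check: you send the license data the applicant presented, and the issuing jurisdiction answers match or no match.

```python
import requests  # assumes the 'requests' package is installed

# Hypothetical endpoint and fields -- the real AAMVA DLDV interface differs.
DLDV_URL = "https://example.com/dldv/verify"

def verify_license(license_number: str, last_name: str,
                   date_of_birth: str, jurisdiction: str) -> bool:
    """Ask the issuing jurisdiction whether the presented license data matches its records.

    DLDV-style services return a match/no-match answer rather than the record itself,
    so no personal data is disclosed to the verifier.
    """
    payload = {
        "jurisdiction": jurisdiction,   # e.g. "CA"
        "licenseNumber": license_number,
        "lastName": last_name,
        "dateOfBirth": date_of_birth,   # e.g. "1985-07-04"
    }
    response = requests.post(DLDV_URL, json=payload, timeout=10)
    response.raise_for_status()
    return response.json().get("match", False)
```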

If your firm offers this capability, how can your prospects learn where it is available, what its benefits are, and why it is an important part of your solution?

Method Four: Conduct the “Who You Are” Test

There is one more way to confirm that a person is real, and that is to check the person. Literally.

If someone on a smartphone or videoconference says that they are Jason Brown, how do you know that it’s the real Jason Brown and not Jim Smith, or a previous recording or simulation of Jason Brown?

This is where tools such as facial recognition and liveness detection come into play.

  • You can ensure that the live face matches any face on record.
  • You can also confirm that the face is truly a live face.

In addition to these two tests, you can compare the face against the face on the presented driver’s license or passport to offer additional confirmation of true identity.
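Here is a minimal sketch of how those three checks might be chained in an onboarding flow. The matcher and liveness functions, along with the thresholds, are hypothetical placeholders for whatever vendor SDKs you actually use.

```python
# Hypothetical building blocks -- real deployments use vendor SDKs for each step.
MATCH_THRESHOLD = 0.80      # illustrative similarity threshold
LIVENESS_THRESHOLD = 0.90   # illustrative PAD (liveness) threshold

def verify_applicant(selfie, document_portrait, enrolled_face,
                     face_similarity, liveness_score) -> bool:
    """Return True only if the selfie is live AND matches both the ID photo
    and any face already on record."""
    # 1. Presentation attack detection: is this a live face, not a photo, screen, or mask?
    if liveness_score(selfie) < LIVENESS_THRESHOLD:
        return False

    # 2. Match the live face against the portrait on the presented driver's license or passport.
    if face_similarity(selfie, document_portrait) < MATCH_THRESHOLD:
        return False

    # 3. Match the live face against any face already on record for this identity.
    if enrolled_face is not None and face_similarity(selfie, enrolled_face) < MATCH_THRESHOLD:
        return False

    return True
```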

Now some companies offer facial recognition, others offer liveness detection, others match the live face to a face on a government ID, and many companies offer two or three of these capabilities.

One more time: if your firm offers these capabilities—either your own or someone else’s—what are the benefits of your algorithms? (For example, are they more accurate than competing algorithms? And under what conditions?) And why is your solution better than the others?

This is for the firms who fight synthetic identities

While most of this post is of general interest to anyone dealing with synthetic identities, this part of this post is specifically addressed to identity and biometric firms who provide synthetic identity-fighting solutions.

When you communicate about your solutions, your communicator needs to have certain types of experience.

  • Industry experience. Perhaps you sell your identity solution to financial institutions, educational institutions, or a host of other industries (gambling/gaming, healthcare, hospitality, retail, sport/concert venues, or others). You need someone with this industry experience.
  • Solution experience. Perhaps your communications require someone with 29 years of experience in identity, biometrics, and technology marketing, including experience with all five factors of authentication (and verification).
  • Communication experience. Perhaps you need to effectively communicate with your prospects in a customer-focused, benefits-oriented way. (Content that is all about you and your features won’t win business.)

Perhaps you can use Bredemarket, the identity content marketing expert. I work with you (and I have worked with others) to ensure that your content meets your awareness, consideration, and/or conversion goals.

How can I work with you to communicate your firm’s anti-synthetic identity message? For example, I can apply my identity/biometric blog expert knowledge to create an identity blog post for your firm. Blog posts provide an immediate business impact to your firm, and are easy to reshare and repurpose. For B2B needs, LinkedIn articles provide similar benefits.

If Bredemarket can help your firm convey your message about synthetic identity, let’s talk.

And thirteen more things

If you haven’t read a Bredemarket blog post before, or even if you have, you may not realize that this post is jam-packed with additional information well beyond the post itself. This post alone links to the following Bredemarket posts and other content. You may want to follow one or more of the 13 links below if you need additional information on a particular topic:

  1. Synthetic Identity video (YouTube), August 12, 2023. https://www.youtube.com/watch?v=oDrSBlDJVCk
  2. Using “Multispectral” and “Liveness” in the Same Sentence (Bredemarket blog), June 6, 2023. https://bredemarket.com/2023/06/06/using-multispectral-and-liveness-in-the-same-sentence/
  3. Who is THE #1 NIST facial recognition vendor? (Bredemarket blog), February 23, 2022. https://bredemarket.com/2022/02/23/number1frvt/
  4. Financial Identity (Bredemarket website). https://bredemarket.com/financial-identity/
  5. Educational Identity (Bredemarket website). https://bredemarket.com/educational-identity/
  6. The five authentication factors (Bredemarket blog), March 2, 2021. https://bredemarket.com/2021/03/02/the-five-authentication-factors/
  7. Customer Focus (Bredemarket website). https://bredemarket.com/customer-focus/
  8. Benefits (Bredemarket website). https://bredemarket.com/benefits/
  9. Seven Questions Your Content Creator Should Ask You: the e-book version (Bredemarket blog and e-book), October 22, 2023. https://bredemarket.com/2023/10/22/seven-questions-your-content-creator-should-ask-you-the-e-book-version/
  10. Four Mini-Case Studies for One Inland Empire Business—My Own (Bredemarket blog and e-book), April 16, 2023. https://bredemarket.com/2023/04/16/four-mini-case-studies-for-one-inland-empire-business-my-own/
  11. Identity blog post writing (Bredemarket website). https://bredemarket.com/identity-blog-post-writing/
  12. Blog About Your Identity Firm’s Benefits Now. Why Wait? (Bredemarket blog), August 11, 2023. https://bredemarket.com/2023/08/11/blog-about-your-identity-firms-benefits-now-why-wait/
  13. Why Your Company Should Write LinkedIn Articles (Bredemarket LinkedIn article), July 31, 2023. https://www.linkedin.com/pulse/why-your-company-should-write-linkedin-articles-bredemarket/

That’s twelve more things than the Cupertino guys do, although my office isn’t as cool as theirs.

Well, why not one more?

Here’s my latest brochure for the Bredemarket 400 Short Writing Service, my standard package to create your 400 to 600 word blog posts and LinkedIn articles. Be sure to check the Bredemarket 400 Short Writing Service page for updates.

If that doesn’t fit your needs, I have other offerings.

Plus, I’m real. I’m not a bot.

We Survived Gummy Fingers. We’re Surviving Facial Recognition Inaccuracy. We’ll Survive Voice Spoofing.

(Part of the biometric product marketing expert series)

Some of you are probably going to get into an automobile today.

Are you insane?

The National Highway Traffic Safety Administration has released its latest projections for traffic fatalities in 2022, estimating that 42,795 people died in motor vehicle traffic crashes.

From https://www.nhtsa.gov/press-releases/traffic-crash-death-estimates-2022

When you have tens of thousands of people dying, then the only conscionable response is to ban automobiles altogether. Any other action or inaction is completely irresponsible.

After all, you can ask the experts who want us to ban biometrics because they can be spoofed and are racist, and who therefore conclude that we shouldn’t use biometrics at all.

I disagree with the calls to ban biometrics, and I’ll go through three “biometrics are bad” examples and say why banning biometrics is NOT justified.

  • Even some identity professionals may not know about the old “gummy fingers” story from 20+ years ago.
  • And yes, I know that I’ve talked about Gender Shades ad nauseam, but it bears repeating again.
  • And voice deepfakes are always a good topic to discuss in our AI-obsessed world.

Example 1: Gummy fingers

My recent post “Why Apple Vision Pro Is a Technological Biometric Advance, but Not a Revolutionary Biometric Event” included the following sentence:

But the iris security was breached by a “dummy eye” just a month later, in the same way that gummy fingers and face masks have defeated other biometric technologies.

From https://bredemarket.com/2023/06/12/vision-pro-not-revolutionary-biometrics-event/

A biometrics industry colleague noticed the rhyming words “dummy” and “gummy” and wondered if the latter was a typo. It turns out it wasn’t.

To my knowledge, these gummy fingers do NOT have ridges. From https://www.candynation.com/gummy-fingers

Back in 2002, researcher Tsutomu Matsumoto used the gelatin found in “gummy bears” to create a fake finger that fooled a fingerprint reader.

Back in 2002, this news WAS really “scary,” since it suggested that you could access a site protected by a fingerprint reader with something that wasn’t a finger. Gelatin. A piece of metal. A photograph.

Except that the fingerprint reader world didn’t stand still after 2002, and the industry developed ways to detect spoofed fingers. Here’s a recent example of presentation attack detection (liveness detection) from TECH5:

TECH5 participated in the 2023 LivDet Non-contact Fingerprint competition to evaluate its latest NN-based fingerprint liveness detection algorithm and has achieved first and second ranks in the “Systems” category for both single- and four-fingerprint liveness detection algorithms respectively. Both submissions achieved the lowest error rates on bonafide (live) fingerprints. TECH5 achieved 100% accuracy in detecting complex spoof types such as Ecoflex, Playdoh, wood glue, and latex with its groundbreaking Neural Network model that is only 1.5MB in size, setting a new industry benchmark for both accuracy and efficiency.

From https://tech5.ai/tech5s-mobile-fingerprint-liveness-detection-technology-ranked-the-most-accurate-in-the-market/

TECH5 excelled in detecting fake fingers for “non-contact” reading, where the fingers don’t even touch a capture surface such as an optical platen. That’s appreciably harder than detecting fake fingers presented to contact devices.

I should note that LivDet is an independent assessment. As I’ve said before, independent technology assessments provide some guidance on the accuracy and performance of technologies.

So gummy fingers and future threats can be addressed as they arrive.

But at least gummy fingers aren’t racist.

Example 2: Gender shades

In 2017-2018, the Algorithmic Justice League set out to answer this question:

How well do IBM, Microsoft, and Face++ AI services guess the gender of a face?

From http://gendershades.org/. Yes, that’s “http,” not “https.” But I digress.

Let’s stop right there for a moment and address two items before we continue. Trust me; it’s important.

  1. This study evaluated only three algorithms: one from IBM, one from Microsoft, and one from Face++. It did not evaluate the hundreds of other facial recognition algorithms that existed in 2018 when the study was released.
  2. The study focused on gender classification and race classification. Back in those primitive innocent days of 2018, the world assumed that you could look at a person and tell whether the person was male or female, or tell the race of a person. (The phrase “self-identity” had not yet become popular, despite the Rachel Dolezal episode which happened before the Gender Shades study). Most importantly, the study did not address identification of individuals at all.

However, the study did find something:

While the companies appear to have relatively high accuracy overall, there are notable differences in the error rates between different groups. Let’s explore.

All companies perform better on males than females with an 8.1% – 20.6% difference in error rates.

All companies perform better on lighter subjects as a whole than on darker subjects as a whole with an 11.8% – 19.2% difference in error rates.

When we analyze the results by intersectional subgroups – darker males, darker females, lighter males, lighter females – we see that all companies perform worst on darker females.

From http://gendershades.org/overview.html

What does this mean? It means that if you are using one of these three algorithms solely for the purpose of determining a person’s gender and race, some results are more accurate than others.

Three algorithms do not predict hundreds of algorithms, and classification is not identification. If you’re interested in more information on the differences between classification and identification, see Bredemarket’s November 2021 submission to the Department of Homeland Security. (Excerpt here.)

And all the stories about people such as Robert Williams being wrongfully arrested based upon faulty facial recognition results have nothing to do with Gender Shades. I’ll address this briefly (for once):

  • In the United States, facial recognition identification results should only be used by the police as an investigative lead, and no one should be arrested solely on the basis of facial recognition. (The city of Detroit stated that Williams’ arrest resulted from “sloppy” detective work.)
  • If you are using facial recognition for criminal investigations, your people had better have forensic face training. (Then they would know, as Detroit investigators apparently didn’t know, that the quality of surveillance footage is important.)
  • If you’re going to ban computerized facial recognition (even when only used as an investigative lead, and even when only used by properly trained individuals), consider the alternative of human witness identification. Or witness misidentification. Roeling Adams, Reggie Cole, Jason Kindle, Adam Riojas, Timothy Atkins, Uriah Courtney, Jason Rivera, Vondell Lewis, Guy Miles, Luis Vargas, and Rafael Madrigal can tell you how inaccurate (and racist) human facial recognition can be. See my LinkedIn article “Don’t ban facial recognition.”

Obviously, facial recognition has been the subject of independent assessments, including continuous bias testing by the National Institute of Standards and Technology as part of its Face Recognition Vendor Test (FRVT), specifically within the 1:1 verification testing. And NIST has measured the identification bias of hundreds of algorithms, not just three.

In fact, people who were calling for facial recognition to be banned just a few years ago are now questioning the wisdom of those decisions.

But those days were quaint. Men were men, women were women, and artificial intelligence was science fiction.

The latter has certainly changed.

Example 3: Voice spoofs

Perhaps it’s an exaggeration to say that recent artificial intelligence advances will change the world. Perhaps it isn’t. Personally I’ve been concentrating on whether AI writing can adopt the correct tone of voice, but what if we take the words “tone of voice” literally? Let’s listen to President Richard Nixon:

From https://www.youtube.com/watch?v=2rkQn-43ixs

Richard Nixon never spoke those words in public, although it’s possible that he may have rehearsed William Safire’s speech, composed in case Apollo 11 had not resulted in one giant leap for mankind. As noted in the video, Nixon’s voice and appearance were spoofed using artificial intelligence to create a “deepfake.”

It’s one thing to alter the historical record. It’s another thing altogether when a fraudster spoofs YOUR voice and takes money out of YOUR bank account. By definition, you will take that personally.

In early 2020, a branch manager of a Japanese company in Hong Kong received a call from a man whose voice he recognized—the director of his parent business. The director had good news: the company was about to make an acquisition, so he needed to authorize some transfers to the tune of $35 million. A lawyer named Martin Zelner had been hired to coordinate the procedures and the branch manager could see in his inbox emails from the director and Zelner, confirming what money needed to move where. The manager, believing everything appeared legitimate, began making the transfers.

What he didn’t know was that he’d been duped as part of an elaborate swindle, one in which fraudsters had used “deep voice” technology to clone the director’s speech…

From https://www.forbes.com/sites/thomasbrewster/2021/10/14/huge-bank-fraud-uses-deep-fake-voice-tech-to-steal-millions/?sh=8e8417775591

Now I’ll grant that this is an example of human voice verification, which can be as inaccurate as the previously referenced human witness misidentification. But are computerized systems any better, and can they detect spoofed voices?

Well, in the same way that fingerprint readers worked to overcome gummy fingers, voice readers are working to overcome deepfake voices. Here’s what one company, ID R&D, is doing to combat voice spoofing:

IDVoice Verified combines ID R&D’s core voice verification biometric engine, IDVoice, with our passive voice liveness detection, IDLive Voice, to create a high-performance solution for strong authentication, fraud prevention, and anti-spoofing verification.

Anti-spoofing verification technology is a critical component in voice biometric authentication for fraud prevention services. Before determining a match, IDVoice Verified ensures that the voice presented is not a recording.

From https://www.idrnd.ai/idvoice-verified-voice-biometrics-and-anti-spoofing/
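The pattern ID R&D describes, checking liveness before attempting a match, is worth spelling out. Here’s a minimal sketch of that decision flow; the scoring functions and thresholds are hypothetical and are not ID R&D’s API.

```python
# A minimal sketch of the "liveness before match" pattern described above.
SPOOF_THRESHOLD = 0.5   # illustrative: above this, treat the sample as a spoof
MATCH_THRESHOLD = 0.8   # illustrative: below this, the voice doesn't match

def authenticate_voice(audio_sample, enrolled_voiceprint,
                       spoof_score, match_score) -> str:
    # Step 1: passive liveness -- reject recordings and synthetic speech first.
    if spoof_score(audio_sample) > SPOOF_THRESHOLD:
        return "rejected: presentation attack suspected"

    # Step 2: only then compare the live voice against the enrolled voiceprint.
    if match_score(audio_sample, enrolled_voiceprint) < MATCH_THRESHOLD:
        return "rejected: voice does not match"

    return "accepted"
```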

This is only the beginning of the war against voice spoofing. Other companies will pioneer new advances that will tell the real voices from the fake ones.

As for independent testing:

A final thought

Yes, fraudsters can use advanced tools to do bad things.

But the people who battle fraudsters can also use advanced tools to defeat the fraudsters.

Take care of yourself, and each other.

Jerry Springer. By Justin Hoch, CC BY 2.0, https://commons.wikimedia.org/w/index.php?curid=16673259

Why Apple Vision Pro Is a Technological Biometric Advance, but Not a Revolutionary Biometric Event

(Part of the biometric product marketing expert series)

(UPDATE JUNE 24: CORRECTED THE YEAR THAT COVID BEGAN.)

I haven’t said anything publicly about Apple Vision Pro, so it’s time for me to be “how do you do fellow kids” trendy and jump on the bandwagon.

Actually…

It ISN’T time for me to jump on the Apple Vision Pro bandwagon, because while Apple Vision Pro affects the biometric industry, it’s not a REVOLUTIONARY biometric event.

The four revolutionary biometric events in the 21st century

How do I define a “revolutionary biometric event”?

By Alberto Korda – Museo Che Guevara, Havana Cuba, Public Domain, https://commons.wikimedia.org/w/index.php?curid=6816940

I define it as something that completely transforms the biometric industry.

When I mention three of the four revolutionary biometric events in the 21st century, you will understand what I mean.

  • 9/11. After 9/11, orders of biometric devices skyrocketed, and biometrics were incorporated into identity documents such as passports and driver’s licenses. Who knows, maybe someday we’ll actually implement REAL ID in the United States. The latest extension of the REAL ID enforcement date moved it out to May 7, 2025. (Subject to change, of course.)
  • The Boston Marathon bombings, April 2013. After the bombings, the FBI was challenged in managing and analyzing countless hours of video evidence. Companies such as IDEMIA National Security Solutions, MorphoTrak, Motorola, Paravision, Rank One Computing, and many others have worked tirelessly to address this challenge, ensuring that facial recognition results accurately identify perpetrators while protecting the privacy of others in the video feeds.
  • COVID-19, spring 2020 and beyond. COVID accelerated changes that were already taking place in the biometric industry. COVID prioritized mobile, remote, and contactless interactions and forced businesses to address issues that were not as critical previously, such as liveness detection.

These three are cataclysmic world events that had a profound impact on biometrics. The fourth one, which occurred after the Boston Marathon bombings but before COVID, was…an introduction of a product feature.

  • Touch ID, September 2013. When Apple introduced the iPhone 5s, it also introduced a new way to log in to the device. Rather than entering a passcode, iPhone 5s users could just use their finger to log in. The technical accomplishment was dwarfed by the legitimacy that this brought to using fingerprints for identification. Before 2013, attempts to implement fingerprint verification for benefits recipients were resisted because fingerprinting was something that criminals did. After September 2013, fingerprinting was something that the cool Apple kids did. The biometric industry changed overnight.

Of course, Apple followed Touch ID with Face ID, with adherents of the competing biometric modalities sparring over which was better. But Face ID wouldn’t have been accepted as widely if Touch ID hadn’t paved the way.

So why hasn’t iris verification taken off?

Iris verification has been around for decades (I remember Iridian before L-1; it’s now part of IDEMIA), but iris verification is nowhere near as popular in the general population as finger and face verification. There are two reasons for this:

  • Compared to other biometrics, irises are hard to capture. To capture a fingerprint, you can lay your finger on a capture device, or “slap” your four fingers on a capture device, or even “wave” your fingers across a capture device. Faces are even easier to capture; while older face capture systems required you to stand close to the camera, modern face devices can capture your face as you are walking by the camera, or even if you are some distance from the camera.
  • Compared to other biometrics, irises are expensive to capture. Many years ago, my then-employer developed a technological marvel, an iris capture device that could accurately capture irises for people of any height. Unfortunately, the technological marvel cost thousands upon thousands of dollars, and no customers were going to use it when they could acquire fingerprint and face capture devices that were much less costly.

So while people rushed to implement finger and face capture on phones and other devices, iris capture was reserved for narrow verticals that required iris accuracy.

With one exception. Samsung incorporated Princeton Identity technology into its Samsung Galaxy S8 in 2017. But the iris security was breached by a “dummy eye” just a month later, in the same way that gummy fingers and face masks have defeated other biometric technologies. (This is why liveness detection is so important.) While Samsung continues to sell iris verification today, it hadn’t been adopted by Apple and therefore wasn’t cool.

Until now.

About the Apple Vision Pro and Optic ID

The Apple Vision Pro is not the first headset that was ever created, but the iPhone wasn’t the first smartphone either. And coming late to the game doesn’t matter. Apple’s visibility among trendsetters ensures that when Apple releases something, people take notice.

And when all of us heard about Vision Pro, one of the things that Apple shared about it was its verification technique. Not Touch ID or Face ID, but Optic ID. (I like naming consistency.)

According to Apple, Optic ID works by analyzing a user’s iris through LED light exposure and then comparing it with an enrolled Optic ID stored on the device’s Secure Enclave….Optic ID will be used for everything from unlocking Vision Pro to using Apple Pay in your own headspace.

From The Verge, https://www.theverge.com/2023/6/5/23750147/apple-optic-id-vision-pro-iris-biometrics

So why did Apple incorporate Optic ID on this device and not the others?

There are multiple reasons, but one key reason is that the Vision Pro retails for US$3,499, which makes it easier for Apple to justify the cost of the iris components.

But the high price of the Vision Pro comes at…a price

However, that high price is also the reason why the Vision Pro is not going to revolutionize the biometric industry. CNET admitted that the Vision Pro is a niche item:

At $3,499, Apple’s Vision Pro costs more than three weeks worth of pay for the average American, according to Bureau of Labor Statistics data. It’s also significantly more expensive than rival devices like the upcoming $500 Meta Quest 3, $550 Sony PlayStation VR 2 and even the $1,000 Meta Quest Pro

From CNET, https://www.cnet.com/tech/computing/why-apple-vision-pros-3500-price-makes-more-sense-than-you-think/

Now CNET did go on to say the following:

With Vision Pro, Apple is trying to establish what it believes will be the next major evolution of the personal computer. That’s a bigger goal than selling millions of units on launch day, and a shift like that doesn’t happen overnight, no matter what the price is. The version of Vision Pro that Apple launches next year likely isn’t the one that most people will buy.

From CNET, https://www.cnet.com/tech/computing/why-apple-vision-pros-3500-price-makes-more-sense-than-you-think/

Certainly Vision Pro and Optic ID have the potential to revolutionize the computing industry…in the long term. And as that happens, the use of iris biometrics will become more popular with the general public…in the long term.

But not today. You’ll have to wait a little longer for the next biometric revolution. And hopefully it won’t be a catastrophic event like three of the previous revolutions.

Using “Multispectral” and “Liveness” in the Same Sentence

(Part of the biometric product marketing expert series)

Now that I’m plunging back into the fingerprint world, I’m thinking about all the different types of fingerprint readers.

  • The optical fingerprint and palm print readers are still around.
  • And the capacitive fingerprint readers still, um, persist.
  • And of course you have the contactless fingerprint readers such as MorphoWave, one that I know about.
  • And then you have the multispectral fingerprint readers.

What is multispectral?

Bayometric offers a web page that covers some of these fingerprint reader types, and points out the drawbacks of some of the readers they discuss.

Latent prints are usually produced by sweat, skin debris or other sebaceous excretions that cover up the palmar surface of the fingertips. If a latent print is on the glass platen of the optical sensor and light is directed on it, this print can fool the optical scanner….

Capacitive sensors can be spoofed by using gelatin based soft artificial fingers.

From https://www.bayometric.com/fingerprint-reader-technology-comparison/

There is another weakness of these types of readers. Some professions damage and wear away a person’s fingerprint ridges. Examples of professions whose practitioners exhibit worn ridges include construction workers and biometric content marketing experts (who, at least in the old days, handled a lot of paper).

The solution is to design a fingerprint reader that not only examines the surface of the finger, but goes deeper.

From HID Global, “A Guide to MSI Technology: How It Works,” https://blog.hidglobal.com/2022/10/guide-msi-technology-how-it-works

The specialty of multispectral sensors is that it can capture the features of the tissue that lie below the skin surface as well as the usual features on the finger surface. The features under the skin surface are able to provide a second representation of the pattern on the fingerprint surface.

From https://www.bayometric.com/fingerprint-reader-technology-comparison/

Multispectral sensors are nothing new. When I worked for Motorola, Motorola Ventures had invested in a company called Lumidigm that produced multispectral fingerprint sensors; they were much more expensive than your typical optical or capacitive sensor, but were much more effective in capturing true fingerprints to the subdermal level.

Lumidigm was eventually acquired in 2014: not by Motorola (which sold off its biometric assets such as Printrak and Symbol), but by HID Global. HID Global continues to produce Lumidigm-branded multispectral fingerprint sensors to this day.

But let’s take a look at the other word I bandied about.

What is liveness?

KISS, Alive! Obtained from allmusic.com, fair use, https://en.wikipedia.org/w/index.php?curid=2194847

“Gelatin based soft artificial fingers” aren’t the only way to fool a biometric sensor, whether you’re talking about a fingerprint sensor or some other sensor such as a face sensor.

Regardless of the biometric modality, the intent is the same: the biometric sensor is fooled into capturing a fake biometric (an artificial finger, a face with a mask on it, or a face on a video screen) rather than a true biometric from a live person.

This tomfoolery is called a “presentation attack” (because you’re attacking security with a fake presentation).

But the standards folks have developed ISO/IEC 30107-3:2023, Information technology — Biometric presentation attack detection — Part 3: Testing and reporting.

And an organization called iBeta is one of the testing facilities authorized to test in accordance with the standard and to determine whether a biometric reader can detect the “liveness” of a biometric sample.
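For reference, ISO/IEC 30107-3 reports PAD performance using two headline error rates: APCER (the share of attack presentations wrongly accepted as bona fide) and BPCER (the share of bona fide presentations wrongly rejected as attacks). Here’s a minimal sketch of that arithmetic; representing the test outcomes as lists of accept/reject decisions is my simplification, not the standard’s notation.

```python
def apcer(attack_decisions: list[bool]) -> float:
    """Attack Presentation Classification Error Rate: the share of attack
    presentations (spoofs) wrongly classified as bona fide (True = accepted)."""
    return sum(attack_decisions) / len(attack_decisions)

def bpcer(bonafide_decisions: list[bool]) -> float:
    """Bona fide Presentation Classification Error Rate: the share of genuine
    presentations wrongly rejected as attacks (False = rejected)."""
    return sum(1 for accepted in bonafide_decisions if not accepted) / len(bonafide_decisions)

# Example: 2 of 100 spoofs slipped through, 3 of 100 genuine samples were rejected.
# apcer([True]*2 + [False]*98)  -> 0.02
# bpcer([True]*97 + [False]*3)  -> 0.03
```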

(Friends, I’m not going to get into passive liveness and active liveness. That’s best saved for another day.)

[UPDATE 4/24/2024: I FINALLY ADDRESSED THE DIFFERENCE BETWEEN ACTIVE AND PASSIVE LIVENESS HERE.]

Multispectral liveness

While multispectral fingerprint readers aren’t the only fingerprint readers, or the only biometric readers, that iBeta has tested for liveness, the HID Global Lumidigm readers conform to Level 2 (the higher level) of iBeta testing.

There’s a confirmation letter and everything.

From the iBeta website.

This letter was issued in 2021. For some odd reason, HID Global decided to publicize this in 2023.

Oh well. It’s good to occasionally remind people of stuff.