Iris Recognition, Apple, and Worldcoin

(Part of the biometric product marketing expert series)

Iris recognition continues to make the news. Let’s review what iris recognition is and its benefits (and drawbacks), why Apple made the news last month, and why Worldcoin is making the news this month.

What is iris recognition?

There are a number of biometric modalities that can identify individuals by “who they are” (one of the five factors of authentication). A few examples include fingerprints, faces, voices, and DNA. All of these modalities purport to uniquely (or nearly uniquely) identify an individual.

One other way to identify individuals is via the irises in their eyes. I’m not a doctor, but presumably the Cleveland Clinic employs medical professionals who are qualified to define what the iris is.

The iris is the colored part of your eye. Muscles in your iris control your pupil — the small black opening that lets light into your eye.

From https://my.clevelandclinic.org/health/body/22502-iris

And here’s what else the Cleveland Clinic says about irises.

The color of your iris is like your fingerprint. It’s unique to you, and nobody else in the world has the exact same colored eye.

From https://my.clevelandclinic.org/health/body/22502-iris

John Daugman and irises

But why use irises rather than, say, fingerprints and faces? The best person to answer this is John Daugman. (At this point several of you are intoning, “John Daugman.” With reason. He’s the inventor of iris recognition.)

Here’s an excerpt from John Daugman’s 2004 paper on iris recognition:

(I)ris patterns become interesting as an alternative approach to reliable visual recognition of persons when imaging can be done at distances of less than a meter, and especially when there is a need to search very large databases without incurring any false matches despite a huge number of possibilities. Although small (11 mm) and sometimes problematic to image, the iris has the great mathematical advantage that its pattern variability among different persons is enormous.

Daugman, John, “How Iris Recognition Works.” IEEE Transactions on Circuits and Systems for Video Technology, Vol. 14, No. 1, January 2004. Quoted from page 21. (PDF)

Or in non-scientific speak, one benefit of iris recognition is that it remains accurate even when you submit a pair of irises in a one-to-many search against a huge database. How huge? We’ll discuss that later.
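Daugman’s method compares iris images by converting each into a binary “IrisCode” and measuring the fraction of bits that disagree (the fractional Hamming distance). Here is a minimal sketch of that comparison; real IrisCodes are 2,048 bits with occlusion masks, so these 16-bit codes and the 0.32 threshold are purely illustrative:

```python
# Minimal sketch of Daugman-style iris comparison via fractional
# Hamming distance. Real IrisCodes are 2048 bits with occlusion
# masks; these 16-bit codes are illustrative only.

def fractional_hamming_distance(code_a: int, code_b: int, bits: int) -> float:
    """Fraction of bits that disagree between two binary codes."""
    return bin(code_a ^ code_b).count("1") / bits

# Two captures of the same iris differ in only a few bits; a
# different iris looks like a random code (distance near 0.5).
enrolled   = 0b1011001110001011
same_iris  = 0b1011001110001111   # one bit flipped by capture noise
other_iris = 0b0110100101110010

THRESHOLD = 0.32  # a commonly cited decision threshold for IrisCodes

print(fractional_hamming_distance(enrolled, same_iris, 16))   # 0.0625, a match
print(fractional_hamming_distance(enrolled, other_iris, 16))  # 0.6875, a non-match
```

The enormous pattern variability Daugman describes is what pushes impostor comparisons toward that 0.5 random-disagreement region, far from genuine matches.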

Brandon Mayfield and fingerprints

Remember that Daugman’s paper was released roughly two months before Brandon Mayfield was misidentified in a fingerprint comparison. (Everyone now intone “Brandon Mayfield.”)

If you want to know the details of that episode, the Department of Justice Office of the Inspector General issued a 330-page report (PDF) on it. If you don’t have time to read 330 pages, here’s Al Jazeera’s shorter version of Brandon Mayfield’s story.

While some of the issues associated with Mayfield’s misidentification had nothing to do with forensic science (Al Jazeera spends some time discussing bias, and Itiel Dror also looked at bias post-Mayfield), the episode still shows that fingerprints from two different people can be remarkably similar, and that it takes care to identify people properly.

Police agencies, witnesses, and faces

And of course there are recent examples of facial misidentifications (both by police agencies and by witnesses), again not necessarily related to forensic science, and again showing that faces from two different people can look very similar.

Iris “data richness” and independent testing

Why are irises more accurate than fingerprints and faces? Here’s what one vendor, Iris ID, claims about irises vs. other modalities:

At the root of iris recognition’s accuracy is the data-richness of the iris itself. The IrisAccess system captures over 240 degrees of freedom or unique characteristics in formulating its algorithmic template. Fingerprints, facial recognition and hand geometry have far less detailed input in template construction.

Iris ID, “How It Compares.” (Link)

Enough about claims. What about real results? The IREX 10 test, independently administered by the U.S. National Institute of Standards and Technology, measures the identification (one-to-many) accuracy of submitted algorithms. At the time I am writing this, the ten most accurate algorithms provide false negative identification rates (FNIR) between 0.0022 ± 0.0004 and 0.0037 ± 0.0005 when two eyes are used. (Single-eye accuracy is lower.) By the time you see this, the top ten algorithms may have changed, because the vendors are always improving.

IREX 10 two-eye accuracy, top ten algorithms as of July 28, 2023. (Link)

While the IREX 10 one-to-many tests are conducted against databases of less than a million records, it is estimated that iris one-to-many accuracy remains high even with databases of a billion people—something we will return to later in this post.
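To put those FNIR figures in concrete terms: FNIR is the fraction of mated searches (searches for a person who really is enrolled) that fail to return that person. A quick back-of-envelope sketch of what the numbers above imply (my own arithmetic, not NIST’s test methodology):

```python
# Reading an FNIR (false negative identification rate) figure:
# it is the fraction of mated 1:N searches -- searches for someone
# who really is enrolled -- that fail to return that person.
# Illustrative arithmetic only, not NIST's test methodology.

def expected_misses(fnir: float, mated_searches: int) -> float:
    """Expected number of mated searches that fail to find the person."""
    return fnir * mated_searches

# At the best reported two-eye FNIR (0.0022), roughly 2 misses per 1,000:
print(expected_misses(0.0022, 1_000))
# At the tenth-best reported FNIR (0.0037), roughly 4 misses per 1,000:
print(expected_misses(0.0037, 1_000))
```

In other words, even the tenth-best algorithm fails to find an enrolled person in only a fraction of a percent of searches.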

Iris drawbacks

OK, so if irises are so accurate, why aren’t we dumping our fingerprint readers and face readers and just using irises?

In short, because of the high friction in capturing irises. You can use high-resolution cameras to capture fingerprints and faces from far away, but as of now iris capture usually requires you to get very close to the capture device.

Iris image capture circa 2020 from the U.S. Federal Bureau of Investigation. (Link)

Which I guess is better than the old days when you had to put your eye right up against the capture device, but it’s still not as friction-free as face capture, which (for better or worse, intrusiveness-wise) can be achieved as you’re walking down a passageway in an airport or sports stadium.

Irises and Apple Vision Pro

So how are irises being used today? You may or may not have heard last month’s hoopla about the Apple Vision Pro, which uses irises for one-to-one authentication.

I’m not going to spend a ton of time delving into this, because I just discussed Apple Vision Pro in June. In fact, I’m just going to quote from what I already said.

And when all of us heard about Vision Pro, one of the things that Apple shared about it was its verification technique. Not Touch ID or Face ID, but Optic ID. (I like naming consistency.)

From https://bredemarket.com/2023/06/12/vision-pro-not-revolutionary-biometrics-event/
From Apple, https://www.apple.com/105/media/us/apple-vision-pro/2023/7e268c13-eb22-493d-a860-f0637bacb569/anim/drawer-privacy-optic-id/large.mp4

In short, as you wear the headset (which by definition is right on your head, not far away), the headset captures your iris images and uses them to authenticate you.

It’s a one-to-one comparison, not the one-to-many comparison that I discussed earlier in this post, but it is used to uniquely identify an individual.

But iris recognition doesn’t have to be used for identification.

Irises and Worldcoin

“But wait a minute, John,” you’re saying. “If you’re not using irises to determine if a person is who they say they are, then why would anyone use irises?”

Enter Worldcoin, which I mentioned in passing in my early July age estimation post.

Over the past several years, I’ve analyzed a variety of identity firms. Earlier this year I took a look at Worldcoin….Worldcoin’s World ID emphasizes privacy so much that it does not conclusively prove a person’s identity (it only proves a person’s uniqueness)…

From https://bredemarket.com/2023/07/03/age-estimation/

That’s the only thing that I’ve said about Worldcoin, at least publicly. (I looked at Worldcoin privately earlier in 2023, but that report is not publicly accessible and even I don’t have it any more.)

Worldcoin’s July 24 announcement

I guess it’s time for me to revisit Worldcoin, since the company made a super-big splashy announcement on Monday, July 24.

The Worldcoin Foundation today announced that Worldcoin, a project co-founded by Sam Altman, Alex Blania and Max Novendstern, is now live and in a production-grade state. 

The launch includes the release of the World ID SDK and plans to scale Orb operations to 35+ cities across 20+ countries around the world. In tandem, the Foundation’s subsidiary, World Assets Ltd., minted and released the Worldcoin token (WLD) to the millions of eligible people who participated in the beta; WLD is now transactable on the blockchain….

“In the age of AI, the need for proof of personhood is no longer a topic of serious debate; instead, the critical question is whether or not the proof of personhood solutions we have can be  privacy-first, decentralized and maximally inclusive,” said Worldcoin co-founder and Tools for Humanity CEO Alex Blania. “Through its unique technology, Worldcoin aims to provide anyone in the world, regardless of background, geography or income, access to the growing digital and global economy in a privacy preserving and decentralized way.”

From https://worldcoin.org/blog/announcements/worldcoin-project-launches

Worldcoin does NOT positively identify people…but it can still pay you

A very important note: Worldcoin’s purpose is not to determine identity (that a person is who they say they are). Worldcoin’s purpose is to determine uniqueness: namely, that a person (whoever they are) is unique among all the billions of people in the world. Once uniqueness is determined, the person can get money money money with an assurance that the same person won’t get money twice.
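Determining uniqueness is effectively a one-to-many search where “no match found” is the desired outcome: a new enrollee gets paid only if their irises match nobody already in the gallery. A toy sketch of that logic follows; the 16-bit codes, Hamming-distance matcher, and 0.32 threshold are illustrative stand-ins of my own, not Worldcoin’s actual system:

```python
# Sketch of a uniqueness (deduplication) check: a new iris code is
# enrolled only if it matches nothing already in the gallery.
# Codes, matcher, and threshold are illustrative, not Worldcoin's.

def hamming_fraction(a: int, b: int, bits: int = 16) -> float:
    """Fraction of bits on which two binary codes disagree."""
    return bin(a ^ b).count("1") / bits

def is_unique(candidate: int, gallery: list[int], threshold: float = 0.32) -> bool:
    """True if the candidate code matches no enrolled code."""
    return all(hamming_fraction(candidate, g) >= threshold for g in gallery)

gallery = [0b1011001110001011, 0b0110100101110010]

new_person = 0b0001011011010110
returning  = 0b1011001110001111  # near-duplicate of the first gallery entry

print(is_unique(new_person, gallery))  # True: no match anywhere, enroll and pay
print(is_unique(returning, gallery))   # False: already enrolled, no second payout
```

Note that nothing in this check needs to know *who* the person is; it only needs to know whether the same irises have shown up before.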

OK, so how are you going to determine the uniqueness of a person among all of the billions of people in the world?

Using the Orb to capture irises

As far as Worldcoin is concerned, irises are the best way to determine uniqueness, echoing what others have said.

Iris biometrics outperform other biometric modalities and already achieved false match rates beyond 1.2×10⁻¹⁴ (one false match in one trillion[9]) two decades ago[10]—even without recent advancements in AI. This is several orders of magnitude more accurate than the current state of the art in face recognition.

From https://worldcoin.org/blog/engineering/humanness-in-the-age-of-ai
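Why does the per-comparison false match rate need to be so tiny? Because deduplicating a gallery of N people implies roughly N*(N-1)/2 cross-comparisons, so even a minuscule rate gets multiplied by an enormous number. A back-of-envelope sketch (my own arithmetic, not Worldcoin’s published analysis):

```python
# Back-of-envelope: expected false matches when deduplicating a
# gallery of n people, given a per-comparison false match rate (FMR).
# Illustrative arithmetic only, not Worldcoin's published analysis.

def expected_false_matches(fmr: float, n_people: int) -> float:
    comparisons = n_people * (n_people - 1) / 2  # every pair compared once
    return fmr * comparisons

# At the quoted iris FMR, a billion-person gallery still produces only
# a few thousand expected collisions across roughly 5e17 comparisons:
print(expected_false_matches(1.2e-14, 1_000_000_000))
# A coarser modality with an FMR of 1e-6 would drown in false matches:
print(expected_false_matches(1e-6, 1_000_000_000))
```

That gap of several orders of magnitude is the whole argument for irises over faces when the gallery is the entire planet.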

So how is Worldcoin going to capture millions, and eventually billions, of iris pairs?

By using the Orb. (You may intone “the Orb” now.)

To complete your Worldcoin registration, you need to find an Orb that will capture your irises and verify your uniqueness.

Now you probably won’t find an Orb at your nearby 7-Eleven; as I write this, there are only a little over 100 listed locations in the entire world where Orbs are deployed. I happen to live within 50 miles of Santa Monica, where an Orb was recently deployed (by appointment only, unavailable on weekends, and you know how I feel about driving on Southern California freeways on a weekday).

But now that you can get crypto for enrolling at an Orb, people are getting more excited about the process, and there will be wider adoption.

Whether this will make a difference in the world or just be a fad remains to be seen.

The Difference Between Identity Assurance Levels 2 and 3

It’s been years since I talked about Identity Assurance Levels (IALs) in any detail, but I wanted to delve into two of the levels and see when IAL3 is necessary, and when it is not.

But first, a review

If the term “identity assurance level” is new to you, let me reprint what they are. This is taken from my December 3, 2020 post on identity assurance levels and digital identity.

The U.S. National Institute of Standards and Technology has defined “identity assurance levels” (IALs) that can be used when dealing with digital identities. It’s helpful to review how NIST has defined the IALs.

Assurance in a subscriber’s identity is described using one of three IALs:

IAL1: There is no requirement to link the applicant to a specific real-life identity. Any attributes provided in conjunction with the subject’s activities are self-asserted or should be treated as self-asserted (including attributes a [Credential Service Provider] CSP asserts to an [Relying Party] RP). Self-asserted attributes are neither validated nor verified.

IAL2: Evidence supports the real-world existence of the claimed identity and verifies that the applicant is appropriately associated with this real-world identity. IAL2 introduces the need for either remote or physically-present identity proofing. Attributes could be asserted by CSPs to RPs in support of pseudonymous identity with verified attributes. A CSP that supports IAL2 can support IAL1 transactions if the user consents.

IAL3: Physical presence is required for identity proofing. Identifying attributes must be verified by an authorized and trained CSP representative. As with IAL2, attributes could be asserted by CSPs to RPs in support of pseudonymous identity with verified attributes. A CSP that supports IAL3 can support IAL1 and IAL2 identity attributes if the user consents.

For purposes of this post, IAL1 is (if I may use a technical term) a nothingburger. It may be good enough for a Gmail account, but these days even social media accounts are more likely to require IAL2.

And it’s worthwhile to mention (as I did before) that in practice, IAL3 may not require physical presence.

IAL3: In-person or supervised-remote identity proofing is required.

From https://id4d.worldbank.org/guide/levels-assurance-loas

So what’s the practical difference between IAL2 and IAL3?

If we ignore IAL1 and concentrate on IAL2 and IAL3, we can see one difference between the two. IAL2 allows remote, unsupervised identity proofing, while IAL3 requires (in practice) that any remote identity proofing is supervised.

Designed by Freepik.

Much of my time at my previous employer Incode Technologies involved unsupervised remote identity proofing (IAL2). For example, if a woman wants to set up an account at a casino, she can complete the onboarding process to set up the account on her phone, without anyone from the casino being present to make sure she isn’t faking her face or her ID. (Fraud detection is the “technologies” part of Incode Technologies, and that’s how they make sure she isn’t faking.)

From https://www.youtube.com/watch?v=w4Y725Pn5HE

But what if you need supervised remote identity proofing for legal or other reasons? Another company called NextgenID offers this.

From https://www.youtube.com/watch?v=ykDdCgkrMKs

But is this good enough? Yes it is, according to NextgenID.

SRIP provides remote supervision of in-person proofing using NextgenID’s Identity Stations, an all-in-one system designed to securely perform all enrollment processes and workflow requirements. The station facilitates the complete and accurate capture at IAL levels 1, 2 and 3 of all required personal identity documentations and includes a full complement of biometric capture support for face, fingerprint, and iris.

From https://www.nextgenid.com/markets-srip.php

Now there are some other differences between IAL2 and IAL3 in terms of the proofing, so NIST came up with a handy dandy chart that allows you to decide which IAL you need.

From NIST Special Publication 800-63 Revision 3, Section 6.1, “Selecting IAL.”

When deciding between IAL2 and IAL3, question 3 in the table above is the most critical. NIST explains the purpose of question 3:

At this point, the agency understands that some level of proofing is required. Step 3 is intended to look at the potential impacts of an identity proofing failure to determine if IAL2 or IAL3 is the most appropriate selection. The primary identity proofing failure an agency may encounter is accepting a falsified identity as true, therefore providing a service or benefit to the wrong or ineligible person. In addition, proofing, when not required, or collecting more information than needed is a risk in and of itself. Hence, obtaining verified attribute information when not needed is also considered an identity proofing failure. This step should identify if the agency answered Step 1 and 2 incorrectly, realizing they do not need personal information to deliver the service. Risk should be considered from the perspective of the organization and to the user, since one may not be negatively impacted while the other could be significantly harmed. Agency risk management processes should commence with this step.

From https://pages.nist.gov/800-63-3/sp800-63-3.html#sec6

Even with the complexity of the flowchart, some determinations can be pretty simple. For example, if any of the six risks listed under question 3 are determined to be “high,” then you must use IAL3.
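That “any high risk means IAL3” shortcut is simple enough to express directly. Here is a sketch; the category names are my paraphrases, and the real six categories and the rest of the flowchart live in SP 800-63, so this is only the Step 3 shortcut, not the full exercise:

```python
# Sketch of the Step 3 shortcut described above: if any assessed
# impact of an identity proofing failure is "high", IAL3 is required;
# otherwise IAL2 suffices. Category names are paraphrased from
# SP 800-63-3; this is not a substitute for the full flowchart.

def select_ial(risk_impacts: dict[str, str]) -> int:
    """risk_impacts maps each risk category to 'low', 'moderate', or 'high'."""
    if any(level == "high" for level in risk_impacts.values()):
        return 3
    return 2

assessment = {
    "inconvenience_or_distress": "moderate",
    "financial_loss": "low",
    "harm_to_agency_programs": "moderate",
    "unauthorized_release_of_sensitive_info": "high",
    "personal_safety": "low",
    "civil_or_criminal_violations": "low",
}

print(select_ial(assessment))  # 3: one impact is rated high
```

If every impact had come back low or moderate, the same assessment would have landed on IAL2.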

But the whole exercise is a lot to work through, and you need to work through it yourself. When I pasted the PNG file for the flowchart above into this blog post, I noticed that the filename is “IAL_CYOA.png.” And we all know what “CYOA” means.

But if you do the work, you’ll be better informed on the procedures you need to use to verify the identities of people.

One footnote: although NIST is a U.S. organization, its identity assurance levels (including IAL2 and IAL3) are used worldwide, including by the World Bank. So everyone should be familiar with them.

Pangiam/Trueface: when version 1.0 of the SDK is the REVISED version

After an absence from the Bredemarket blog (no appearances since November), Pangiam is making an appearance again, based on announcements by Biometric Update and by Trueface itself about a new revision of the Trueface facial recognition SDK.

The new revision includes a number of features, including a new model for masked faces and some technical improvements.

So what is this revision called?

Version 1.0.

“Wait,” you’re asking yourself. “Version 1.0 is the NEW version? It sounds like the ORIGINAL version. Shouldn’t the new version be 2.0?”

Well, no. The original version was V0. Trueface is now ready to release V1.

Well, almost ready.

If you go to the Trueface SDK reference page, you’ll see that Trueface releases are categorized as “alpha,” “beta,” and “stable.”

  • When I viewed the page on the afternoon of March 28, the latest stable release was 0.33.14634.
  • If you want to use the version 1.0 that is being “introduced” (Pangiam’s word), you have to go to the latest beta release, which was 1.0.16286.
  • And if you want to go bleeding edge alpha, you can get release 1.1.16419.

(Again, this was on the afternoon of March 28, and may change by the time you read this.)

Now most biometric vendors don’t expose this much detail about their software. Some don’t even provide any release information, especially for products with long delivery times where the version that a customer will eventually get doesn’t even have locked-down requirements yet. But Pangiam has chosen to provide this level of detail.

Oh, and Pangiam/Trueface also actively participates in the ongoing NIST FRVT testing. Information on the 1:1 performance of the trueface-003 algorithm can be found here. Information on the 1:N performance of the trueface-000 algorithm can be found here.

Who is THE #1 NIST facial recognition vendor?

(Part of the biometric product marketing expert series)

(When I wrote this in 2022 I used the then-current FRVT terminology. I’ve updated to FRTE as warranted.)

As I’ve noted before, there are a number of facial recognition companies that claim to be the #1 NIST facial recognition vendor. I’m here to help you cut through the clutter so you know who the #1 NIST facial recognition vendor truly is.

You can confirm this information yourself by visiting the NIST FRTE 1:1 Verification and FRTE 1:N Identification pages. The old FRVT, by the way, stood for “Face Recognition Vendor Test”—and has subsequently been replaced by FRTE, “Face Recognition Technology Evaluation.”


From https://www.nist.gov/programs-projects/face-technology-evaluations-frtefate.

So I can announce to you that as of February 23, 2022, the #1 NIST facial recognition vendor is Cloudwalk.

And Sensetime.

And Beihang University ERCACAT.

And Cubox.

And Adera.

And Chosun University.

And iSAP Solution Corporation.

And Bitmain.

And Visage Technologies.

And Expasoft LLC.

And Paravision.

And NEC.

And Ptakuratsatu.

And Ayonix.

And Rank One.

And Dermalog.

And Innovatrics.

Now how can ALL dozen-plus of these entities be number 1?

Easy.

The NIST 1:1 and 1:N tests include many different accuracy and performance measurements, and each of the entities listed above placed #1 in at least one of these measurements. And all of the databases, database sizes, and use cases measure very different things.

Transportation Security Administration Checkpoint at John Glenn Columbus International Airport. By Michael Ball – Own work, CC0, https://commons.wikimedia.org/w/index.php?curid=77279000

For example:

  • Visage Technologies was #1 in the 1:1 performance measurements for template generation time, in milliseconds, for 480×720 and 960×1440 data.
  • Meanwhile, NEC was #1 in the 1:N Identification (T>0) accuracy measurements for gallery border, probe border with a delta T greater than or equal to 10 years, N = 1.6 million.
  • Not to be confused with the 1:N Identification (T>0) accuracy measurements for gallery visa, probe border, N = 1.6 million, where the #1 algorithm was not from NEC.
  • And not to be confused with the 1:N Investigation (R = 1, T = 0) accuracy measurements for gallery border, probe border with a delta T greater than or equal to 10 years, N = 1.6 million, where the #1 algorithm was not from NEC.
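The mechanics of “everyone is #1 somewhere” are easy to see with toy data: each measurement category crowns its own leader. (The vendors and numbers below are invented for illustration; they are not real FRVT/FRTE results.)

```python
# Toy illustration of how a dozen-plus vendors can each claim a #1
# NIST ranking: every measurement category has its own best performer.
# Vendors and scores below are invented, not real FRVT/FRTE results.

results = {
    # measurement (lower score is better): {vendor: score}
    "1:N FNIR, mugshot, N=1.6M": {"VendorA": 0.0030, "VendorB": 0.0025, "VendorC": 0.0041},
    "1:N FNIR, border 10+yrs, N=1.6M": {"VendorA": 0.0190, "VendorB": 0.0240, "VendorC": 0.0210},
    "1:1 template time (ms), 480x720": {"VendorA": 180.0, "VendorB": 95.0, "VendorC": 220.0},
}

leaders = {
    measurement: min(scores, key=scores.get)  # lowest score wins the category
    for measurement, scores in results.items()
}

print(leaders)
# Three measurements yield two different "#1 vendors" -- each with a
# legitimate press release.
```

Scale that up to the dozens of real database/use-case combinations NIST reports, and the long list of #1 vendors above stops looking strange.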

And can I add a few more caveats?

First caveat: Since all of these tests are ongoing tests, you can probably find a slightly different set of #1 algorithms if you look at the January data, and you will probably find a slightly different set of #1 algorithms when the March data is available.

Second caveat: These are the results for the unqualified #1 NIST categories. You can add qualifiers, such as “#1 non-Chinese vendor” or “#1 western vendor” or “#1 U.S. vendor” to vault a particular algorithm to the top of the list.

Third caveat: You can add even more qualifiers, such as “within the top five NIST vendors” and (one I admit to having used before) “a top tier NIST vendor in multiple categories.” This can mean whatever you want it to mean. (As can “dramatically improved” algorithm, which may mean that you vaulted from position #300 to position #200 in one of the categories.)

Fourth caveat: Even if a particular NIST test applies to your specific use case, #1 performance on a NIST test does not guarantee that a facial recognition system supplied by that entity will yield #1 performance with your database in your environment. The algorithm sent to NIST may or may not make it into a production system. And even if it does, performance against a particular NIST test database may not yield the same results as performance against a Rhode Island criminal database, a French driver’s license database, or a Nigerian passport database. For more information on this, see Mike French’s LinkedIn article “Why agencies should conduct their own AFIS benchmarks rather than relying on others.”

So now that you know who the #1 NIST facial recognition vendor is, do you feel more knowledgeable?

Although I’ll grant that a NIST accuracy or performance claim is better than some other claims, such as self-test results.

Why isn’t there a Pharmaceutical Justice League?

In case you missed it, Blake Hall of ID.me recently shared an article by Stewart Baker about “The Flawed Claims About Bias in Facial Recognition.”

As many of you know, there have been many claims about bias in facial recognition, which have even led to the formation of an Algorithmic Justice League.

By Jason Fabok and Alex Sinclair / DC Comics – [1], Fair use, https://en.wikipedia.org/w/index.php?curid=54168863

Whoops, wrong Justice League. But you get the idea. “Gender Shades” and stuff like that, which I’ve written about before.

Back to Baker’s article, which makes a number of excellent points about bias in facial recognition, including the studies performed by NIST (referenced later in this post). I loved one comparison in particular that Baker wrote about.

So technical improvements may narrow but not entirely eliminate disparities in face recognition. Even if that’s true, however, treating those disparities as a moral issue still leads us astray. To see how, consider pharmaceuticals. The world is full of drugs that work a bit better or worse in men than in women. Those drugs aren’t banned as the evil sexist work of pharma bros. If the gender differential is modest, doctors may simply ignore the difference, or they may recommend a different dose for women. And even when the differential impact is devastating—such as a drug that helps men but causes birth defects when taken by pregnant women—no one wastes time condemning those drugs for their bias. Instead, they’re treated like any other flawed tool, minimizing their risks by using a variety of protocols from prescription requirements to black box warnings. 

From https://www.lawfareblog.com/flawed-claims-about-bias-facial-recognition

As a (tangential) example of this, I recently read an article entitled “To begin addressing racial bias in medicine, start with the skin.” This article does not argue that we should ban dermatology because conditions are more often misdiagnosed in people with darker skin. Instead, the article argues that we should improve dermatology to reduce these biases.

In the same manner, the biometric industry and its stakeholders should strive to minimize bias in facial recognition and other biometrics, not ban the technology. See NIST’s study (NISTIR 8280, PDF) in this regard, referenced in Baker’s article.

In addition to what Baker said, let me again note that when judging the use of facial recognition, it should be compared against the alternatives. While I believe that alternatives (even passwords) should be offered, consider that automated facial recognition supported by trained examiner review is much more accurate than eyewitness (mis)identification. I don’t think we want to rely solely on witnesses.

Because falsely imprisoning someone due to non-algorithmic witness misidentification is as bad as kryptonite.

By Apparent scan made by the original uploader User:Kryptoman., Fair use, https://en.wikipedia.org/w/index.php?curid=11736865

Tech5: Updating my contactless fingerprint capture post from October 2021

I’ve worked in the general area of contactless fingerprint capture for years, initially while working for a NIST CRADA partner. While most of the NIST CRADA partners are still pursuing contactless fingerprint technology, there are also new entrants.

In the pre-COVID days, the primary advantage of contactless fingerprint capture was speed. As I noted in an October 2021 post:

Actually this effort launched before that, as there were efforts in 2004 and following years to capture a complete set of fingerprints within 15 seconds; those efforts led, among other things, to the smartphone software we are seeing today.

From https://bredemarket.com/2021/10/04/contactless-fingerprint-scanning-almost-software-at-connectid/

By 2016, several companies had entered into cooperative research and development agreements with NIST to develop contactless fingerprint capture software, either for dedicated devices or for smartphones. Most of those early CRADA participants are still around today, albeit under different names.

Of the CRADA partners, MorphoTrak is now IDEMIA, Diamond Fortress is now Telos ID, Hoyos Labs is now Veridium, AOS is no longer in operation, and 3M’s biometric holdings are now part of Thales.

Slide 10 from the NIST presentation posted at https://www.nist.gov/system/files/documents/2016/12/14/iai_2016-nist_contactless_fingerprints-distro-20160811.pdf

I’ve previously written posts about two of these CRADA partners, Telos ID (previously Diamond Fortress) and Sciometrics (the supplier for Integrated Biometrics).

But these aren’t the only players in the contactless fingerprint market. There are always new entrants in a market where there is opportunity.

A month before I wrote my post about Integrated Biometrics/Sciometrics’ SlapShot, a company called Tech5 released its own product.

T5-AirSnap Finger uses a smartphone’s built-in camera to perform finger detection, enhancement, image processing and scaling, generating images that can be transmitted for identity verification or registration within seconds, according to the announcement. The resulting images are suitable for use with standard AFIS solutions, and comparison against legacy datasets…

From https://www.biometricupdate.com/202109/tech5-contactless-fingerprint-biometrics-for-mobile-devices-unveiled
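The announcement sketches a pipeline: detect fingers in a camera frame, enhance the images, and scale them so legacy AFIS systems can use them. A skeletal sketch of that flow follows; every function name and field here is a placeholder of my own, not Tech5’s actual API:

```python
# Skeletal sketch of the capture flow described above: detect finger
# regions in a camera frame, enhance each one, then scale to the
# 500 ppi resolution that legacy AFIS systems expect.
# All names and fields are placeholders, not Tech5's actual API.

def detect_fingers(frame):
    """Return the candidate regions in the frame that are fingers."""
    return [region for region in frame if region.get("is_finger")]

def enhance(region):
    """Placeholder for ridge/contrast enhancement."""
    return {**region, "enhanced": True}

def scale_to_500ppi(region):
    """Placeholder rescale so output is comparable to contact prints."""
    return {**region, "ppi": 500}

def capture(frame):
    """Run the full detect -> enhance -> scale pipeline."""
    return [scale_to_500ppi(enhance(region)) for region in detect_fingers(frame)]

# A "frame" here is just a list of candidate regions from a detector.
frame = [
    {"is_finger": True, "ppi": 310},
    {"is_finger": False},            # e.g. background clutter
    {"is_finger": True, "ppi": 280},
]
prints = capture(frame)
print(len(prints))  # 2 finger images, both enhanced and at 500 ppi
```

The point of the last stage is the one the quote makes: the output has to be comparable against legacy contact-print datasets, not just pretty camera photos.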

This particular article quoted Tech5 co-founder and CEO Machiel van der Harst. A subsequent article quoted Tech5 co-founder and CTO Rahul Parthe. Both co-founders previously worked for L-1 Identity Solutions (now part of IDEMIA).

Parthe has noted the importance of smartphone-based contactless fingerprint capture:

“We all carry these awesome computers in our hands,” Parthe explains. “It’s a perfectly packaged hardware device that is ideal for any capture technology. Smartphones are powerful compute devices on the edge, with a nice integrated camera with auto-focus and flash. And now phones also come with multiple cameras which can help with better focus and depth estimation. This allows the users to take photos of their fingers and the software takes care of the rest. I’d just like to point out here that we’re talking about using the phone’s camera to capture biometrics and using a smartphone to take the place of a dedicated reader. We’re not talking about the in-built fingerprint acquisition we’re all familiar with on many devices which is the means of accessing the device itself.”

From https://www.biometricupdate.com/202202/contactless-fingerprinting-maturation-allows-the-unification-of-biometric-capture-using-smartphones

I’ve made a similar point before. While dedicated devices may not completely disappear, multi-purpose devices that we already have are the preferable way to go.

For more information about T5-AirSnap Finger, visit this page.

Tech5’s results for NIST’s Proprietary Fingerprint Template (PFT) Evaluation III, possibly using an algorithm similar to that in T5-AirSnap Finger, are detailed here.

DNA mixture interpretation outside of the forensic laboratory? Apparently not yet.

(Part of the biometric product marketing expert series)

The National Institute of Standards and Technology has published a draft report entitled DNA Mixture Interpretation: A Scientific Foundation Review.

As NIST explains:

This report, currently published in draft form, reviews the methods that forensic laboratories use to interpret evidence containing a mixture of DNA from two or more people.

From https://www.nist.gov/dna-mixture-interpretation-nist-scientific-foundation-review

The problem of mixtures is more pronounced in DNA analysis than in analysis of other biometrics. You aren’t going to encounter two overlapping irises or two overlapping faces in the real world. (Well, not normally.)

By Olli Niemitalo – Own work, CC0, https://commons.wikimedia.org/w/index.php?curid=18707318

You can certainly encounter overlapping voices (in a recorded conversation) or overlapping fingerprints (when two or more people touched the same item).

But there are methods to separate one biometric sample from another.

It’s a little more complicated when you’re dealing with DNA.

Distinguishing one person’s DNA from another in these mixtures, estimating how many individuals contributed DNA, determining whether the DNA is even relevant or is from contamination, or whether there is a trace amount of suspect or victim DNA make DNA mixture interpretation inherently more challenging than examining single-source samples. These issues, if not properly considered and communicated, can lead to misunderstandings regarding the strength and relevance of the DNA evidence in a case.

From the Abstract in https://doi.org/10.6028/NIST.IR.8351-draft%C2%A0

As some of you know, I have experience with “rapid DNA” instruments that provide a mostly-automated way to analyze DNA samples. Because these instruments are mostly automated and designed for use by non-scientific personnel, they are not able to analyze all of the types of DNA that would be analyzed by a forensic laboratory.

Perhaps for this reason, the draft document is silent on the topic of rapid DNA, despite the fact that co-author Peter Vallone has years of experience with rapid DNA.

I am not a scientist, but in my view the absence of any reference to rapid DNA strongly suggests that it is premature to apply these instruments to DNA mixtures, such as samples from rape cases that contain both the assailant's and the victim's DNA.

Granted, there may be rape cases in which the assailant's DNA is present without any mixture.

You have to be REALLY careful before claiming that rapid DNA instruments can be used to wipe out the backlog of untested rape kits. However, rapid DNA can be used to clear less complicated DNA cases so that the laboratories can concentrate on the more complex cases.

So who is Cubox?

Some people like to look at baseball statistics or movie reviews for fun.

Here at Bredemarket, we scan the latest one-to-many (identification) results from NIST's Ongoing Face Recognition Vendor Test (FRVT).

Hey, SOMEBODY has to do it.

Dicing and slicing the FRVT tests

For those who have never looked at FRVT before, it does not merely report the accuracy of searches against a single database; it reports accuracy results for searches against eight different databases, of different types and different sizes (N).

  • Mugshot, Mugshot, N = 12000000
  • Mugshot, Mugshot, N = 1600000
  • Mugshot, Webcam, N = 1600000
  • Mugshot, Profile, N = 1600000
  • Visa, Border, N = 1600000
  • Visa, Kiosk, N = 1600000
  • Border, Border 10+YRS, N = 1600000
  • Mugshot, Mugshot 12+YRS, N = 3000000

This is actually good for the vendors who submit their biometric algorithms, because even if the algorithm performs poorly on one of the databases, it may perform wonderfully on one of the other seven. That’s how so many vendors can trumpet that their algorithm is the best. When you throw in other qualifiers such as “top five,” “best non-Chinese vendor,” and even “vastly improved,” you can see how dozens of vendors can issue “NIST says we’re the best” press releases.
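This "every vendor is the best at something" effect is easy to see with a toy example. The vendor names and error rates below are entirely made up (FRVT reports metrics such as FNIR, where lower is better); the point is only that with multiple tracks, different vendors can each hold a number-one ranking somewhere.

```python
# Hypothetical error rates (lower is better) for four made-up vendors
# across three of the FRVT database tracks. These numbers are
# illustrative only, not actual FRVT results.
results = {
    "VendorA": {"mugshot": 0.003, "visa_border": 0.010, "profile": 0.150},
    "VendorB": {"mugshot": 0.004, "visa_border": 0.006, "profile": 0.200},
    "VendorC": {"mugshot": 0.010, "visa_border": 0.020, "profile": 0.090},
    "VendorD": {"mugshot": 0.002, "visa_border": 0.030, "profile": 0.300},
}

def ranks_per_track(results):
    """For each track, rank vendors by ascending error rate (rank 1 = best)."""
    tracks = next(iter(results.values())).keys()
    table = {}
    for track in tracks:
        ordered = sorted(results, key=lambda v: results[v][track])
        table[track] = {vendor: i + 1 for i, vendor in enumerate(ordered)}
    return table

ranks = ranks_per_track(results)

# Three different vendors hold rank 1 on the three tracks, which is
# exactly how several vendors can each issue a "NIST says we're the
# best" press release at the same time.
best = {track: min(r, key=r.get) for track, r in ranks.items()}
print(best)  # {'mugshot': 'VendorD', 'visa_border': 'VendorB', 'profile': 'VendorC'}
```

The same table also shows why the qualifiers matter: VendorA never ranks first, yet it can truthfully claim "top two on mugshot and visa border."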

Not that I knock the practice; after all, I myself have done this for years. But you need to know how to interpret these press releases, and what they’re really saying. Remember this when you read the vendor announcement toward the end of this post.

Anyway, I went to check the current results, which, when you first visit the page, are sorted in order of the fifth database, the Visa Border database. And this is what I saw this morning (October 27):

For the most part, the top five for the Visa Border test contain the usual players. North Americans will be most familiar with IDEMIA and NEC, and CloudWalk and SenseTime have been around for a while.

A new algorithm from a not-so-new provider

But I had never noticed Cubox in the NIST testing before. And the number attached to the Cubox algorithm, “000,” indicates that this is Cubox’s first submission.

And Cubox did exceptionally well, especially for a first submission.

As you can see by the superscripts attached to each numeric value, Cubox had the second most accurate algorithm for the Visa Border test, the most accurate algorithm for the Visa Kiosk test, and placed no lower than 12th in the six (of eight) tests in which it participated. Considering that 302 algorithms have been submitted over the years, that’s pretty remarkable for a first-time submission.

Well, as an ex-IDEMIA employee, my curious nature kicked in.

Who is Cubox?

I’ll start by telling you who Cubox is not. Specifically, Cubox is not CuBox the low-power computer.

The Cubox that submitted an algorithm to NIST is a South Korean firm with the website cubox.aero, self-described as “The Leading Provider in Biometrics” (aren’t they all?) with fingerprint and face solutions. Cubox competes in the access control and border control markets.

Cubox’s ten-year history and its “overseas” page detail the company’s growth in its markets and the solutions it has provided in South Korea, Mongolia, and Vietnam.

And although Cubox hasn’t trumpeted its performance on its own website (at least in the English version; I don’t know about the Korean version), Cubox has publicized its accomplishment in a LinkedIn post.

Why NIST tests aren’t important

But before you get excited about the NIST results from Cubox, Sensetime, or any of the algorithm providers, remember that the NIST test is just a test. NIST cautions people about this, I have cautioned people about this (see the fourth point in this post), and Mike French has also discussed this.

However, it is also important to remember that NIST does not test operational systems, but rather technology submitted as software development kits or SDKs. Sometimes these submissions are labeled as research (or just not labeled), but in reality it cannot be known if these algorithms are included in the product that an agency will ultimately receive when they purchase a biometric system. And even if they are “the same”, the operational architecture could produce different results with the same core algorithms optimized for use in a NIST study.

The very fact that test results vary between the NIST databases explicitly tells you that a number one ranking on one database does not mean that you’ll get a number one ranking on every database. And as French reminds us, when you take an operational algorithm in an operational system with a customer database, the results may be quite different.

Which is why French recommends that any government agency purchasing a biometric system should conduct its own test, with vendor operational systems (rather than test systems) loaded with the agency’s own data.

Incidentally, if your agency needs a forensic expert to help with a biometric procurement or implementation, check out the consulting services offered by French’s company, Applied Forensic Services.

And if you need help communicating the benefits of your biometric solution, check out the consulting services offered by my own company, Bredemarket. After all, I am a biometric content marketing expert.

Contactless fingerprint scanning (almost) software at #connectID

Let me kick off this post by quoting from another post that I wrote:

I’ve always been of the opinion that technology is moving away from specialized hardware to COTS hardware. For example, the fingerprint processing and matching that used to require high-end UNIX computers with custom processor boards in the 1990s can now be accomplished on consumer-grade smartphones.

Further evidence of this was promoted in advance of #connectID by Integrated Biometrics.

And yes, for those following Integrated Biometrics’ naming conventions, there IS a 1970s movie called “Slap Shot,” but I don’t think it has anything to do with crime solving. Unless you count hockey “enforcers” as law enforcement. And the product apparently wasn’t named by Integrated Biometrics anyway.

But back to the product:

SlapShot supports the collection of Fingerprint and facial images suitable for use with state of the art matching algorithms. Fingerprints can now be captured by advanced software that enables the camera in your existing smart phones to generate images with a quality capable of precise identification. Facial recognition and metadata supplement the identification process for any potential suspect or person of interest.

This groundbreaking approach turns almost any smart phone into a biometric capture device, and with minimal integration, your entire force can leverage their existing smart phones to capture fingerprints for identification and verification, receiving matching results in seconds from a centralized repository.

Great, you say! But there’s one more thing. Two more things, actually:

SlapShot functions on Android devices that support Lollipop or later operating systems and relies on the device’s rear high-resolution camera. Images captured from the camera are automatically processed on the device in the background and converted into EBTS files. Once the fingerprint image is taken, the fingerprint matcher in the cloud returns results instantly.

The SlapShot SDK allows developers to capture contactless fingerprints and other biometrics within their own apps via calls to the SlapShot APIs.

Note that SlapShot is NOT intended for end users, but for developers to incorporate into existing applications. Also note that it is (currently) ONLY supported on Android, not iOS.

But this does illustrate the continuing move away from dedicated devices, including Integrated Biometrics’ own line of dedicated devices, toward multi-use devices that can also capture forensic samples and perform, or receive the results of, forensic matching.

And no, Integrated Biometrics is not cannibalizing its own market. I say this for two reasons.

  1. First, there are still going to be customers who will want dedicated devices, for a variety of reasons.
  2. Second, if Integrated Biometrics doesn’t compete in the smartphone contactless fingerprint capture market, it will lose sales to the companies that DO compete in this market.

Contactless fingerprint capture has been pursued by multiple companies for years, ever since NIST established its contactless fingerprint CRADA a few years ago. (Integrated Biometrics’ partner Sciometrics was one of those early CRADA participants, along with others.) Actually the effort launched even earlier: beginning in 2004 there were initiatives to capture a complete set of fingerprints within 15 seconds, and those initiatives led, among other things, to the smartphone software we are seeing today. Not only from Integrated Biometrics/Sciometrics, but also from other CRADA participants. (Don’t forget this one.)

Of the CRADA partners, MorphoTrak is now IDEMIA, Diamond Fortress is now Telos ID, Hoyos Labs is now Veridium, AOS is no longer in operation, and 3M’s biometric holdings are now part of Thales.

Slide 10 from the NIST presentation posted at https://www.nist.gov/system/files/documents/2016/12/14/iai_2016-nist_contactless_fingerprints-distro-20160811.pdf

Of course these smartphone capture software packages aren’t Electronic Biometric Transmission Specification (EBTS) Appendix F certified, but that’s another story entirely.

More on the Israeli master faces study

Eric Weiss of FindBiometrics has opined on the Tel Aviv master faces study that I previously discussed.

Oops, wrong “Faces.” Oh well. By Warner Bros. Records – Billboard, page 18, 14 November 1970, Public Domain, https://commons.wikimedia.org/w/index.php?curid=27031391

While he does not explicitly talk about the myriad of facial recognition algorithms that were NOT addressed in the study, he does have some additional details about the test dataset.

The three algorithms that were tested

Here’s what FindBiometrics says about the three algorithms that were tested in the Israeli study.

The researchers described (the master faces) as master keys that could unlock the three facial recognition systems that were used to test the theory. In that regard, they challenged the Dlib, FaceNet, and SphereFace systems, and their nine master faces were able to impersonate more than 40 percent of the 5,749 people in the LFW set.

While it initially sounds impressive to say that three facial recognition algorithms were fooled by the master faces, bear in mind that there are hundreds of facial recognition algorithms tested by NIST alone, and (as I said earlier) the test has NOT been duplicated against any algorithms other than the three open source algorithms mentioned.

…let’s look at the algorithms themselves and evaluate the claim that results for the three algorithms Dlib, FaceNet, and SphereFace can naturally be extrapolated to ALL facial recognition algorithms….NIST’s subsequent study…evaluated 189 algorithms specially for 1:1 and 1:N use cases….“Tests showed a wide range in accuracy across developers, with the most accurate algorithms producing many fewer errors.”

In short, just because the three open source algorithms were fooled by master faces doesn’t mean that commercial grade algorithms would also be fooled by master faces. Maybe they would be fooled…or maybe they wouldn’t.
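The kind of “coverage” metric behind the earlier claim (nine master faces impersonating over 40 percent of 5,749 identities) can be sketched as follows. This is a minimal illustration, not the researchers’ actual method: the embeddings here are random vectors standing in for real face templates, the 0.3 threshold is arbitrary, and a real evaluation would use actual Dlib/FaceNet/SphereFace embeddings of LFW images and each system’s own verification threshold.

```python
import numpy as np

rng = np.random.default_rng(0)

def unit(v):
    """Normalize vectors to unit length so dot products are cosine similarities."""
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

# Stand-ins for real templates: 9 "master" embeddings and a gallery of
# 200 enrolled identities (playing the role of LFW subjects).
gallery = unit(rng.normal(size=(200, 128)))
masters = unit(rng.normal(size=(9, 128)))

def coverage(masters, gallery, threshold):
    """Fraction of gallery identities matched (cosine similarity at or
    above the verification threshold) by at least one master face."""
    sims = masters @ gallery.T          # (9, 200) cosine similarity matrix
    matched = (sims >= threshold).any(axis=0)
    return float(matched.mean())

# With random embeddings the number itself is meaningless; the point is
# the structure of the metric: coverage rises as the threshold drops.
print(coverage(masters, gallery, threshold=0.3))
```

The structure of the metric also shows why the result is so threshold-sensitive: a commercial system operating at a stricter verification threshold could see dramatically lower coverage from the very same nine faces.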

What about the dataset?

The three open source algorithms were tested against the dataset from Labeled Faces in the Wild. As I noted in my prior post, the LFW people emphasize some important caveats about their dataset, including the following:

Many groups are not well represented in LFW. For example, there are very few children, no babies, very few people over the age of 80, and a relatively small proportion of women. In addition, many ethnicities have very minor representation or none at all.

In the FindBiometrics article, Weiss provides some additional detail about dataset representation.

…there is good reason to question the researchers’ conclusion. Only two of the nine master faces belong to women, and most depicted white men over the age of 60. In plain terms, that means that the master faces are not representative of the global public, and they are not nearly as effective when applied to anyone that falls outside one particular demographic.

That discrepancy can largely be attributed to the limitations of the LFW dataset. Women make up only 22 percent of the dataset, and the numbers are even lower for children, the elderly (those over the age of 80), and for many ethnic groups.

Valid points to be sure, although the definition of a “representative” dataset varies depending upon the use case. For example, a representative dataset for a law enforcement database in the city of El Paso, Texas, will differ from a representative dataset for an airport database catering to Air France customers.

So what conclusion can be drawn?

Perhaps it’s just me, but scientific entities that conduct studies are always motivated by the need for additional funding. After a study is concluded, it seems that the entities always conclude that “more research is needed”…which can be self-serving, because as long as more research is needed, the scientific entities can continue to receive necessary funding. Imagine the scientific entity that would dare to say “Well, all necessary research has been conducted. We’re closing down our research center.”

But in this case, there IS a need to perform additional research, to test the master faces against different algorithms and against different datasets. Then we’ll know whether this statement from the FindBiometrics article (emphasis mine) is actually true:

Any face-based identification system would be extremely vulnerable to spoofing…