The Double Loop Podcast Discusses Research From the Self-Styled “Inventor of Cross-Fingerprint Recognition”

(Part of the biometric product marketing expert series)

Apologies in advance: if you’re NOT interested in fingerprints, you’ll want to skip over this Bredemarket identity/biometrics post, my THIRD one about fingerprint uniqueness and/or similarity. Or whatever; the difference between uniqueness and similarity really isn’t important, is it?

Yes, one more post about the study whose principal author was Gabe Guo, the self-styled “inventor of cross-fingerprint recognition.”

In case you missed it

In case you missed my previous writings on this topic:

But don’t miss this

Well, two other people have weighed in on the paper: Glenn Langenburg and Eric Ray, co-presenters on the Double Loop Podcast. (“Double loop” is a fingerprint thing.)

So who are Langenburg and Ray? You can read their full biographies here, but both of them are certified latent print examiners. This certification, administered by the International Association for Identification (IAI), is designed to ensure that the certified person is knowledgeable about both latent (crime scene) fingerprints and known fingerprints, and about how to determine whether two prints come from the same person. If someone is going to testify in court about fingerprint comparison, this certification is a recognized way to designate someone as an expert on the subject, as opposed to, say, a college undergraduate. (The December 2023 list of IAI certified latent print examiners can be found here in PDF form.)

Podcast episode 264 dives into the Columbia study in detail, including what the study said, what it didn’t say, and what the publicity for the study said that doesn’t match the study.

Eric and Glenn respond to the recent allegations that a computer science undergraduate at Columbia University, using Artificial Intelligence, has “proven that fingerprints aren’t unique” or at least…that’s how the media is mischaracterizing a new published paper by Guo, et al. The guys dissect the actual publication (“Unveiling intra-person fingerprint similarity via deep contrastive learning” in Science Advances, 2024 by Gabe Guo, et al.). They state very clearly what the paper actually does show, which is a far cry from the headlines and even public dissemination originating from Columbia University and the author. The guys talk about some of the important limitations of the study and how limited the application is to real forensic investigations. They then explore some of the media and social media outlets that have clearly misunderstood this paper and seem to have little understanding of forensic science. Finally, Eric and Glenn look at some quotes and comments from knowledgeable sources who also have recognized the flaws in the paper, the authors’ exaggerations, and lack of understanding of the value of their findings.

From https://doublelooppodcast.com/2024/01/fingerprints-proven-by-ai-to-not-be-unique-episode-264/.

Yes, the episode is over an hour long, but if you want to hear a good discussion of the paper that goes beyond the headlines, I strongly recommend that you listen to it.

TL;DR

If you’re in a TL;DR frame of mind, I’ll just offer one tidbit: “uniqueness” and “similarity” are not identical. Frankly, they’re not even similar.

Will Ferrell and Chad Smith, or maybe vice versa. Fair use. From https://www.billboard.com/music/music-news/will-ferrell-chad-smith-red-hot-benefit-chili-peppers-6898348/, originally from NBC.

Did the Columbia Study “Discover” Fingerprint Patterns?

As you may have seen elsewhere, I’ve been wondering whether the widely-publicized Columbia University study on the uniqueness of fingerprints is anything more than a simple “discovery” of fingerprint patterns, which we’ve known about for years. But to prove or refute my suspicions, I had to read the study first.

My initial exposure to the Columbia study

I’ve been meaning to delve into the minutiae of the Columbia University fingerprint study ever since I initially wrote about it last Thursday.

(And yes, that’s a joke. The so-called experts say that the word “delve” is a mark of AI-generated content. And “minutiae”…well, you know.)

If you missed my previous post, “Claimed AI-detected Similarity in Fingerprints From the Same Person: Are Forensic Examiners Truly ‘Doing It Wrong’,” I discussed a widely-publicized study by a team led by Columbia University School of Engineering and Applied Science undergraduate senior Gabe Guo. Columbia Engineering itself publicized the study with the attention-grabbing headline “AI Discovers That Not Every Fingerprint Is Unique,” coupled with the sub-head “we’ve been comparing fingerprints the wrong way!”

There are three ways to react to the article:

  1. Gabe Guo, who freely admits that he knows nothing about forensic science, is an idiot. For decades we have known that fingerprints ARE unique, and the original forensic journals were correct in not publishing this drivel.
  2. The brave new world of artificial intelligence is fundamentally disproving previously sacred assumptions, and anyone who resists these assumptions is denying scientific knowledge and should go back to their caves.
  3. Well, let’s see what the study actually SAYS.

Until today, I hadn’t had a chance to read the study. But I wanted to do this, because a paragraph in the article that described the study got me thinking. I needed to see the study itself to confirm my suspicions.

“The AI was not using ‘minutiae,’ which are the branchings and endpoints in fingerprint ridges – the patterns used in traditional fingerprint comparison,” said Guo, who began the study as a first-year student at Columbia Engineering in 2021. “Instead, it was using something else, related to the angles and curvatures of the swirls and loops in the center of the fingerprint.” 

From https://www.newswise.com/articles/ai-discovers-that-not-every-fingerprint-is-unique

Hmm. Are you thinking what I am thinking?

What were you thinking?

I’ll preface this by saying that while I have worked with fingerprints for 29 years, I am nowhere near a forensic expert. I know enough to cause trouble.

But I know who the real forensic experts are, so I’m going to refer to a page on onin.com, the site created by Ed German. German, who is talented at explaining fingerprint concepts to lay people, created a page to explain “Level 1, 2 and 3 Details.” (It also explains ACE-V, for people interested in that term.)

Here are German’s quick explanations of Level 1, 2, and 3 detail. These are illustrated at the original page, but I’m just putting the textual definitions here.

  • “Level 1 includes the general ridge flow and pattern configuration. Level 1 detail is not sufficient for individualization, but can be used for exclusion. Level 1 detail may include information enabling orientation, core and delta location, and distinction of finger versus palm.”
  • “Level 2 detail includes formations, defined as a ridge ending, bifurcation, dot, or combinations thereof. The relationship of Level 2 detail enables individualization.”
  • “Level 3 detail includes all dimensional attributes of a ridge, such as ridge path deviation, width, shape, pores, edge contour, incipient ridges, breaks, creases, scars and other permanent details.”

We’re not going to get into Level 3 in this post. But if you look at German’s summary of Level 2, you’ll see that he is discussing the aforementioned MINUTIAE (which, according to German, “enables individualization”). And if you look at German’s summary of Level 1, he’s discussing RIDGE FLOW, or perhaps “the angles and curvatures of the swirls and loops in the center of the fingerprint” (which, according to German, “is not sufficient for individualization”).
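Incidentally, Level 1 ridge flow is something a computer can estimate directly. Here is a toy sketch, my own illustration rather than German’s or anyone else’s method, that estimates the dominant orientation in each block of a grayscale fingerprint image using a standard gradient-based estimator:

```python
import numpy as np

def ridge_orientation_field(img, block=8):
    """Estimate the dominant orientation in each block of a grayscale image.

    Classic gradient-based estimator: averaging doubled angles avoids the
    0/180-degree wraparound. The returned angle is the dominant *gradient*
    orientation; ridge flow (Level 1 detail) runs perpendicular to it.
    """
    gy, gx = np.gradient(img.astype(float))  # axis 0 (rows) first, then axis 1
    h, w = img.shape
    angles = np.zeros((h // block, w // block))
    for i in range(h // block):
        for j in range(w // block):
            sl = (slice(i * block, (i + 1) * block),
                  slice(j * block, (j + 1) * block))
            bx, by = gx[sl], gy[sl]
            angles[i, j] = 0.5 * np.arctan2(
                2 * (bx * by).sum(), (bx ** 2 - by ** 2).sum())
    return angles
```

On a real fingerprint image, the returned grid of angles amounts to a crude Level 1 ridge flow map; it says nothing at all about Level 2 minutiae.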

Did Gabe Guo simply “discover” fingerprint patterns? On a separate onin.com page, common fingerprint patterns are cited (arch, loop, whorl). Is this the same thing that Guo (who possibly has never heard of loops and whorls in his life) is talking about?

From Antheus Technology page, from NIST’s Appendix B to the FpVTE 2003 test document. I remember that test very well.

I needed to read the original study to see what Guo actually said, and to determine if AI discovered something novel beyond what forensic scientists consider the information “in the center of the fingerprint.”

So let’s look at the study

I finally took the time to read the study, “Unveiling intra-person fingerprint similarity via deep contrastive learning,” as published in Science Advances on January 12. While there is a lot to read here, I’m going to skip to Guo et al’s description of the fingerprint comparison method used by AI. Central to this comparison is the concept of a “feature map.”

Figure 2A shows that all the feature maps exhibit a statistically significant ability to distinguish between pairs of distinct fingerprints from the same person and different people. However, some are clearly better than others. In general, the more fingerprint-like a feature map looks, the more strongly it shows the similarity. We highlight that the binarized images performed almost as well as the original images, meaning that the similarity is due mostly to inherent ridge patterns, rather than spurious characteristics (e.g., image brightness, image background noise, and pressure applied by the user when providing the sample). Furthermore, it is very interesting that ridge orientation maps perform almost as well as the binarized and original images—this suggests that most of the cross-finger similarity can actually be explained by ridge orientation.

From https://www.science.org/doi/10.1126/sciadv.adi0329.

(The implied reversal of the forensic order of things is interesting. In the AI model, ridge orientation, which yields rich data that actually looks like a fingerprint, carries more weight than mere minutiae points, which are just teeny little dots. Forensic examiners weight it the other way around, considering the minutiae more authoritative than the ridge detail.)

Based upon the initial findings, Guo et al delved deeper. (Sorry, couldn’t help myself.) Specifically, they interrogated the feature maps.

We observe a trend in the filter visualizations going from the beginning to the end of the network: filters in earlier layers exhibit simpler ridge/minutia patterns, the middle layers show more complex multidirectional patterns, and filters in the last layer display high-level patterns that look much like fingerprints—this increasing complexity is expected of deep neural networks that process images. Furthermore, the ridge patterns in the filter visualizations are all generally the same shade of gray, meaning that we can rule out image brightness as a source of similarity. Overall, each of these visualizations resembles recognizable parts of fingerprint patterns (rather than random noise or background patterns), bolstering our confidence that the similarity learned by our deep models is due to genuine fingerprint patterns, and not spurious similarities.

From https://www.science.org/doi/10.1126/sciadv.adi0329.
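If “deep contrastive learning” sounds opaque, the core idea fits in a few lines: train a network so that embeddings of same-person prints score as similar and different-person prints score as dissimilar. Here is a generic sketch of the scoring and loss, not the paper’s actual twin-network architecture, and the margin value is invented:

```python
import numpy as np

def cosine_similarity(a, b):
    """Score two embedding vectors; higher means more similar."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def contrastive_loss(a, b, same_person, margin=0.5):
    """Toy contrastive loss (illustrative only). Same-person pairs are
    penalized for distance; different-person pairs are penalized only
    when they fall inside the margin."""
    d = np.linalg.norm(a - b)
    return d ** 2 if same_person else max(0.0, margin - d) ** 2
```

Minimizing this loss over many labeled pairs is what pulls same-person embeddings together and pushes different-person embeddings apart.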

So what’s the conclusion?

(W)e show above 99.99% confidence that fingerprints from different fingers of the same person share very strong similarities. 

From https://www.science.org/doi/10.1126/sciadv.adi0329.

And what are Guo et al’s derived ramifications? I’ll skip to the most eye-opening one, related to digital authentication.

In addition, our work can be useful in digital authentication scenarios. Using our fingerprint processing pipeline, a person can enroll into their device’s fingerprint scanner with one finger (e.g., left index) and unlock it with any other finger (e.g., right pinky). This increases convenience, and it is also useful in scenarios where the original finger a person enrolled with becomes temporarily or permanently unreadable (e.g., occluded by bandages or dirt, ridge patterns have been rubbed off due to traumatic event), as they can still access their device with their other fingers.

From https://www.science.org/doi/10.1126/sciadv.adi0329.
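Note that the unlock scenario boils down to a threshold decision on a similarity score. Here is a minimal sketch of just that decision logic; the embeddings and the 0.8 threshold are hypothetical placeholders, not anything from the paper:

```python
def similarity(a, b):
    """Cosine similarity between two embedding vectors (plain Python)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb)

def can_unlock(enrolled, probe, threshold=0.8):
    """Accept if the probe finger's embedding is similar enough to the
    enrolled finger's embedding. With cross-finger similarity, the probe
    need not come from the same finger that was enrolled.
    NOTE: threshold=0.8 is a made-up value; a real system must tune it
    against the false-accept risk."""
    return similarity(enrolled, probe) >= threshold
```

The entire security question is what happens at that threshold, which is exactly where the study leaves questions open.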

However, the researchers caution that (as any good researcher would say when angling for funds) more research is needed. Their biggest concern was the small sample size they used in their experiments (60,000 prints), coupled with the fact that the prints were full and not partial fingerprints.

What is unanswered?

So let’s assume that the study shows a strong similarity between the ridges of fingerprints from the same person. Is this enough to show:

  • that the prints from two fingers on the same person ARE THE SAME, and
  • that the prints from two fingers on the same person are more alike than a print from ANY OTHER PERSON?

Or to use a specific example, if we have Mike French’s fingers 2 (right index) and 7 (left index), are those demonstrably from the same person, while my own finger 2 is demonstrably NOT from Mike French?

And what happens if my finger 2 has the same ridge pattern as French’s finger 2, yet is different from French’s finger 7? Does that mean that my finger 2 and French’s finger 2 are from the same person?

If this happens, then the digital authentication example above wouldn’t work, because I could use my finger 2 to get access to French’s data.

This could get messy.

More research IS needed, and here’s what it should be

If you have an innovative idea for a new way to build an automobile, is it best never to talk to an existing automobile expert at all?

Same with fingerprints. Don’t just leave the study with the AI folks. Bring the forensic people on board.

And the doctors also.

Initiate a conversation between the people who found this new AI technique, the forensic people who have used similar techniques to classify prints as arches, loops, whorls, etc., and the medical people who understand how the ridges are formed in the womb in the first place.

If you get all the involved parties in one room, then perhaps they can work together to decide whether the technique can truly be used to identify people.

I don’t expect that this discussion will settle once and for all whether every fingerprint is unique. At least not to the satisfaction of scientists.

But bringing the parties together is better than not listening to critical stakeholders at all.

Login.gov and IAL2 #realsoonnow

Back in August 2023, the U.S. General Services Administration published a blog post that included the following statement:

Login.gov is on a path to providing an IAL2-compliant identity verification service to its customers in a responsible, equitable way. Building on the strong evidence-based identity verification that Login.gov already offers, Login.gov is on a path to providing IAL2-compliant identity verification that ensures both strong security and broad and equitable access.

From https://www.gsa.gov/blog/2023/08/18/reducing-fraud-and-increasing-access-drives-record-adoption-and-usage-of-logingov

It’s nice to know…NOW…that Login.gov is working to achieve IAL2.

This post explains what the August 2023 GSA post said, and what it didn’t say.

But first, I’ll define what Login.gov and “IAL2” are.

What is Login.gov?

Here is what Login.gov says about itself:

Login.gov is a secure sign in service used by the public to sign in to participating government agencies. Participating agencies will ask you to create a Login.gov account to securely access your information on their website or application.

You can use the same username and password to access any agency that partners with Login.gov. This streamlines your process and eliminates the need to remember multiple usernames and passwords.

From https://www.login.gov/what-is-login/

Obviously there are a number of private companies (over 80 last I counted) that provide secure access to information, but Login.gov is provided by the government itself—specifically by the General Services Administration’s Technology Transformation Services. Agencies at the federal, state, and local level can work with the GSA TTS’ “18F” organization to implement solutions such as Login.gov.

Why would agencies implement Login.gov? Because the agencies want to protect their constituents’ information. If fraudsters capture the personally identifiable information (PII) of someone applying for government services, the breached government agency will face severe repercussions. Login.gov is supposed to protect its partner agencies from these nightmares.

How does Login.gov do this?

  • Sometimes you might use two-factor authentication consisting of a password and a second factor such as an SMS code or the use of an authentication app.
  • In more critical cases, Login.gov requests a more reliable method of identification, such as a government-issued photo ID (driver’s license, passport, etc.).
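The “authentication app” in the first bullet typically implements the time-based one-time password (TOTP) algorithm from RFC 6238. Here is a self-contained sketch of the generic standard (I’m not claiming this is Login.gov’s implementation):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, t=None, digits=6, step=30):
    """RFC 6238 time-based one-time password, as generated by
    authenticator apps."""
    key = base64.b32decode(secret_b32, casefold=True)
    # Count how many 30-second windows have elapsed since the epoch.
    counter = int((time.time() if t is None else t) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): pick 4 bytes based on the last nibble.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Both the server and the app derive the same short-lived code from a shared secret and the current 30-second window, so entering the code proves possession of the enrolled device.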

What is IAL2?

At the risk of repeating myself, I’ll briefly go over what “Identity Assurance Level 2” (IAL2) is.

The U.S. National Institute of Standards and Technology, in its publication NIST SP 800-63A, has defined “identity assurance levels” (IALs) that can be used when dealing with digital identities. It’s helpful to review how NIST has defined the IALs. (I’ll define the other acronyms as we go along.)

Assurance in a subscriber’s identity is described using one of three IALs:

IAL1: There is no requirement to link the applicant to a specific real-life identity. Any attributes provided in conjunction with the subject’s activities are self-asserted or should be treated as self-asserted (including attributes a [Credential Service Provider] CSP asserts to an [Relying Party] RP). Self-asserted attributes are neither validated nor verified.

IAL2: Evidence supports the real-world existence of the claimed identity and verifies that the applicant is appropriately associated with this real-world identity. IAL2 introduces the need for either remote or physically-present identity proofing. Attributes could be asserted by CSPs to RPs in support of pseudonymous identity with verified attributes. A CSP that supports IAL2 can support IAL1 transactions if the user consents.

IAL3: Physical presence is required for identity proofing. Identifying attributes must be verified by an authorized and trained CSP representative. As with IAL2, attributes could be asserted by CSPs to RPs in support of pseudonymous identity with verified attributes. A CSP that supports IAL3 can support IAL1 and IAL2 identity attributes if the user consents.

From https://pages.nist.gov/800-63-3/sp800-63a.html#sec2

So in its simplest terms, IAL2 requires evidence of a verified credential so that an online person can be linked to a real-life identity. If someone says they’re “John Bredehoft” and fills in an online application to receive government services, IAL2 compliance helps to ensure that the person filling out the online application truly IS John Bredehoft, and not Bernie Madoff.

As more and more of us conduct business—including government business—online, IAL2 compliance is essential to reduce fraud.

One more thing about IAL2 compliance. The mere possession of a valid government issued photo ID is NOT sufficient for IAL2 compliance. After all, Bernie Madoff may be using John Bredehoft’s driver’s license. To make sure that it’s John Bredehoft using John Bredehoft’s driver’s license, an additional check is needed.

This has been explained by ID.me, a private company that happens to compete with Login.gov to provide identity proofing services to government agencies.

Biometric comparison (e.g., selfie with liveness detection or fingerprint) of the strongest piece of evidence to the applicant

From https://network.id.me/article/what-is-nist-ial2-identity-verification/

So you basically take the photo on a driver’s license and perform a 1:1 facial recognition comparison against the person presenting the license, ideally using liveness detection, to make sure that the presented person is not a fake.
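Put together, the remote-proofing decision described above has three gates: the document must validate, the selfie must match the document photo, and liveness must pass. Here is a sketch of that decision logic only; the score scale and the 0.9 threshold are invented for illustration, not NIST requirements:

```python
def ial2_proofing_decision(doc_valid, face_match_score, liveness_passed,
                           match_threshold=0.9):
    """Combine the three checks into a single pass/fail.
    match_threshold is a placeholder value, not a NIST requirement."""
    if not doc_valid:
        return False  # the license itself failed validation
    if not liveness_passed:
        return False  # the presented face may be a photo or replay attack
    return face_match_score >= match_threshold
```

All three gates must pass; a genuine license in the wrong hands (the Bernie Madoff scenario) fails at the face-match step.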

So what?

So the GSA was apparently claiming how secure Login.gov was. Guess who challenged the claim?

The GSA.

Now sometimes it’s ludicrous to think that the government can police itself, but in some cases government actually identifies government faults.

Of course, this works best when you can identify problems with some other government entity.

Which is why the General Services Administration has an Inspector General. And in March 2023, the GSA Inspector General released a report with the following title: “GSA Misled Customers on Login.gov’s Compliance with Digital Identity Standards.”

The title is pretty clear, but Fedscoop summarized the findings for those who missed the obvious:

As part of an investigation that has run since last April (2022), GSA’s Office of the Inspector General found that the agency was billing agencies for IAL2-compliant services, even though Login.gov did not meet Identity Assurance Level 2 (IAL2) standards.

GSA knowingly billed over $10 million for services provided through contracts with other federal agencies, even though Login.gov is not IAL2 compliant, according to the watchdog.

From https://fedscoop.com/gsa-login-gov-watchdog-report/

So now GSA is explicitly saying that Login.gov ISN’T IAL2-compliant.

Which helps its private sector competitors.

Clean Data is the New Oxygen, and Dirty Data is the New Carbon Monoxide

I have three questions for you, but don’t sweat it; I’m giving you the answers.

  1. How long can you survive without pizza? Years (although your existence will be hellish).
  2. OK, how long can you survive without water? From 3 days to 7 days.
  3. OK, how long can you survive without oxygen? Only 10 minutes.

This post asks how long a 21st century firm can survive without data, and what can happen if the data is “dirty.”

How does Mika survive?

Have you heard of Mika? Here’s her LinkedIn profile.

From Mika’s LinkedIn profile at https://www.linkedin.com/in/mika-ai-ceo/

Yes, you already know that I don’t like LinkedIn profiles that don’t belong to real people. But this one is a bit different.

Mika is the Chief Executive Officer of Dictador, a Polish-Colombian spirits firm, and is responsible for “data insight, strategic provocation and DAO community liaison.” Regarding data insight, Mika described her approach in an interview with Inside Edition:

My decision making process relies on extensive data analysis and aligning with the company’s strategic objectives. It’s devoid of personal bias ensuring unbiased and strategic choices that prioritize the organization’s best interests.

From the transcript of https://www.youtube.com/watch?v=8BQEyQ2-awc

Mika was brought to my attention by accomplished product marketer/artist Danuta (Dana) Debogorska. (She’s appeared in the Bredemarket blog before, though not by name.) Dana is also Polish (but not Colombian) and clearly takes pride in the artificial intelligence accomplishments of this Polish-headquartered company. You can read her LinkedIn post to see her thoughts, one of which was as follows:

Data is the new oxygen, and we all know that we need clean data to innovate and sustain business models.

From Dana Debogorska’s LinkedIn post.

Dana succinctly made two points:

  1. Data is the new oxygen.
  2. We need clean data.

Point one: data is the new oxygen

There’s a reference to oxygen again, but it’s certainly appropriate. Just as people cannot survive without oxygen, Generative AI cannot survive without data.

But the need for data predates AI models. From 2017:

Reliance Industries Chairman Mukesh Ambani said India is poised to grow…but to make that happen the country’s telecoms and IT industry would need to play a foundational role and create the necessary digital infrastructure.

Calling data the “oxygen” of the digital economy, Ambani said the telecom industry had the urgent task of empowering 1.3 billion Indians with the tools needed to flourish in the digital marketplace.

From India Times.

And we can go back centuries in history and find examples when a lack of data led to catastrophe. Like the time in 1776 when the Hessians didn’t know that George Washington and his troops had crossed the Delaware.

Point two: we need clean data

Of course, the presence or absence of data alone is not enough. As Debogorska notes, we don’t just need any data; we need CLEAN data, without error and without bias. Dirty data is like carbon monoxide, and as you know carbon monoxide is harmful…well, most of the time.

That’s been the challenge not only with artificial intelligence, but with ALL aspects of data gathering.
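What does “dirty” look like in practice? Here is a toy audit along the lines of Debogorska’s point; the field names are made up, and real data-quality tooling goes much further:

```python
def audit_records(records, required_fields, group_field):
    """Count records with missing or blank required fields (error), and
    tally how each group is represented (a crude visibility check for
    sampling bias) before any model is trained on the data."""
    dirty = 0
    group_counts = {}
    for record in records:
        if any(not record.get(field) for field in required_fields):
            dirty += 1
        group = record.get(group_field, "unknown")
        group_counts[group] = group_counts.get(group, 0) + 1
    return dirty, group_counts
```

If the dirty count is high, or the group tallies are lopsided, fix the data before training on it; garbage in, garbage out.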

The all-male board of directors of a fertilizer company in 1960. Fair use. From the New York Times.

In each of these cases (Amazon’s AI recruiting tool, Enron’s books, and the facial recognition algorithms tested by NIST), someone asked questions about the cleanliness of the data, and then set out to answer those questions.

  • In the case of Amazon’s recruitment tool and the company Enron, the answers caused Amazon to abandon the tool and Enron to abandon its existence.
  • Despite the entreaties of so-called privacy advocates (who prefer the privacy nightmare of physical driver’s licenses to the privacy-preserving features of mobile driver’s licenses), we have not abandoned facial recognition, but we’re definitely monitoring it in a statistical (not an anecdotal) sense.

The cleanliness of the data will continue to be the challenge as we apply artificial intelligence to new applications.

Clean room of a semiconductor manufacturing facility. Uploaded by Duk 08:45, 16 Feb 2005 (UTC) – http://www.grc.nasa.gov/WWW/ictd/content/labmicrofab.html (original) and https://images.nasa.gov/details/GRC-1998-C-01261 (high resolution), Public Domain, https://commons.wikimedia.org/w/index.php?curid=60825

Point three: if you’re not saying anything, then you’re not selling

(Yes, this is the surprise point.)

Dictador is talking about Mika.

Are you talking about your product, or are you keeping mum about it?

I have more to…um…say about this. Follow this link.

Pangiam May Be Acquired Next Year

Things change. Pangiam, a company that didn’t even exist a few years ago, and that started off by acquiring a one-off project from a local government agency, is now itself a friendly acquisition target (pending stockholder and regulatory approvals).

From MWAA to Pangiam

Back when I worked for IDEMIA and helped to market its border control solutions, one of our competitors for airport business was an airport itself—specifically, the Metropolitan Washington Airports Authority. Rather than buying a biometric exit solution from someone else, the MWAA developed its own, called veriScan.

2021 image from the former airportveriscan website.

After I left IDEMIA, the MWAA decided that it didn’t want to be in the software business any more, and sold veriScan to a new company, Pangiam. I posted about this decision and the new company in this blog.

ALEXANDRIA, Va., March 19, 2021 /PRNewswire/ — Pangiam, a technology-based security and travel services provider, announced today that it has acquired veriScan, an integrated biometric facial recognition system for airports and airlines, from the Metropolitan Washington Airports Authority (“Airports Authority”). Terms of the transaction were not disclosed.

From PR Newswire.

But Pangiam was just getting started.

Trueface, FRTE, stadiums, and artificial intelligence

Results for the NIST FRTE 1:N pangiam-000 algorithm, captured November 6, 2023 from NIST.

A few months later Pangiam acquired Trueface and therefore earned a spot on the NIST FRTE 1:N (formerly FRVT 1:N) rankings and an interest in the stadium/venue identity verification/authentication market.

By Chris6d – Own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=101751795

Meanwhile Pangiam continued to build up its airport business and also improved its core facial recognition technology.

After that I personally concentrated on other markets, and therefore missed the announcements of Pangiam Bridge (introducing artificial intelligence into Pangiam’s border crossing offering) and Project DARTMOUTH (devoted to applying artificial intelligence and pattern analysis to airline baggage, cargo, and shipments).

So what will Pangiam work on next? Where will it expand? What will it acquire?

Nothing.

Enter BigBear.ai

Pangiam itself is now an acquisition target.

COLUMBIA, MD.— November 6, 2023 — BigBear.ai (NYSE: BBAI), a leading provider of AI-enabled business intelligence solutions, today announced a definitive merger agreement to acquire Pangiam Intermediate Holdings, LLC (Pangiam), a leader in Vision AI for the global trade, travel, and digital identity industries, for approximately $70 million in an all-stock transaction. The combined company will create one of the industry’s most comprehensive Vision AI portfolios, combining Pangiam’s facial recognition and advanced biometrics with BigBear.ai’s computer vision capabilities, positioning the company as a foundational leader in one of the fastest growing categories for the application of AI. The proposed acquisition is expected to close in the first quarter of 2024, subject to customary closing conditions, including approval by the holders of a majority of BigBear.ai’s outstanding common shares and receipt of regulatory approval.

From bigbear.ai.

Yet another example of how biometrics is now just a minor part of general artificial intelligence efforts. Identify a face or a grenade, it’s all the same.

Anyway, let’s check back in a few months. Because of the technology involved, this proposed acquisition will DEFINITELY merit government review.

The Death of the Bicycle Will Triumph!

By 齐健 from Peking, People’s Republic of China – Down the Hutong, CC BY 2.0, https://commons.wikimedia.org/w/index.php?curid=18200257

This is taking forever.

We’ve been talking about the death of the bicycle since the time of the Wright Brothers and Henry Ford.

But we still haven’t achieved it.

Wilbur Wright building a bicycle more than a century ago, before he came to his senses. By Wright brothers – Library of Congress CALL NUMBER: LC-W85- 81 [P&P]REPRODUCTION NUMBER: LC-DIG-ppprs-00540 (digital file from original)LC-W851-81 (b&w film copy, Public Domain, https://commons.wikimedia.org/w/index.php?curid=2217030

What will it take to make the death of the bicycle a reality?

Why does the bicycle need to die?

I think that all intelligent people agree that the bicycle needs to die. But just to be extra-cautious, I will again enumerate the reasons why the death of the bicycle is absolutely necessary.

By Photo by Adam Coppola. – Photo by Adam Coppola taken under contract for PeopleForBikes, released into the public domain with the consent of the subjects.[1][2], CC0, https://commons.wikimedia.org/w/index.php?curid=46251073
  • The bicycle is too slow. Perhaps the bicycle was suitable for 19th century life, but today it’s an embarrassment. The speed of the bicycle has long been surpassed by automobiles from the aforementioned Ford, and airplanes from the aforementioned Wrights. It poses a danger as slow-moving bicycle traffic risks getting hit by faster-moving vehicles, unless extraordinary measures are undertaken to separate bicycles from normal traffic. For this reason alone the bicycle must die.
  • The bicycle is too weak. If that weren’t enough, take a look at the weakness of the bicycle and the huge threat from this weakness. You can completely destroy the bicycle and its rider with a simple puddle of oil, a nail, or a misplaced brick that a bicycle hits. This is yet another reason why the bicycle must die.
Illustrating the inherent weakness of the bicycle. By Björn Appel – Own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=366355
  • The bicycle is too inefficient. Other factors of transportation are much better equipped to carry loads of people and goods. The bicycle? Forget it. Any attempt to carry a reasonable load of goods on a bicycle is doomed to failure.
An accident waiting to happen. By Emesik – Own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=30211326
  • The bicycle is too easy to steal. It takes some effort to steal other factors of transportation, but it is pitifully easy to steal a bike, or part of a bike.
A bicycle wheel remains chained in a bike rack after the rest of the bicycle has been stolen. By Ildar Sagdejev (Specious) – Own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=4741181

Despite everyone knowing about these security and personal threats for years if not decades, use of the bicycle continues to persist.

And we have to put a stop to it.

Why does the bicycle continue to live?

The problem is that a few wrongheaded individuals continue to promote bicycle use in a misguided way.

  • Some of them argue that bicycles provide health benefits that you can’t realize with other factors of transportation. Any so-called health benefits are completely erased by the damage that could happen when a bicycle rider ends up face down on the pavement.
  • Others argue that you can mitigate the problems with bicycles by requiring riders to change to a new bicycle every 90 days. This is also misguided, because even if you do this, the threats from bicycle use continue to occur from day one.
Make sure your bicycle has a wheel, spokes, seat, and drink holder, and don’t use any of the last six bicycles you previously used. By Havang(nl) – Own work, Public Domain, https://commons.wikimedia.org/w/index.php?curid=2327525

How do we solve this?

People have tried to hasten the death of the bicycle, but its use still persists.

Kill the bicycle in favor of superior factors of transportation. By Greg Gjerdingen from Willmar, USA – 59 Edsel Villager, CC BY 2.0, https://commons.wikimedia.org/w/index.php?curid=40125742

We have continued to advance other factors of transportation, both from the efforts of vendors, as well as the efforts of industry associations such as the International Bus and Infiniti Association (IBIA) and the MANX (Moving At Necessary eXpress) Alliance.

Mascot of the MANX Alliance. By Michelle Weigold – Own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=55459524

Yet resistance persists. Even the National Institute of Standards and Technology (NIST), which should know better, continues to define bicycle use as a standard factor of transportation.

The three most recognized factors of transportation include “something you pedal” (such as a bicycle), “something you drive” (such as an automobile), and “something you ride” (such as a bus).

NIST Special Publication 800-8-2. Link unavailable.

It is imperative that both governments and businesses completely ban use of the bicycle in favor of other forms of transportation. Our security as a nation depends on this.

Bill Gates has long championed the automobile as his preferred factor of transportation. From https://www.snopes.com/fact-check/mugshot-bill-gates-arrested/

Do your part to bring about the death of the bicycle in favor of other factors of transportation, and ensure that we will enjoy a bicycleless future.

A personal note

I don’t agree with anything I just wrote.

Despite its faults, I still believe that the bicycle has a proper place in our society, perhaps as one of several factors of transportation in an MFT (multi-factor transportation) arrangement.

And, if you haven’t figured it out yet, I’m not on board with the complete death of the password either. Passwords (and PINs) have their place. And when used properly they’re not that bad (even if these 2021 figures are off by an order of magnitude today).

Feel free to share the images and interactive content found on this page. When doing so, please attribute the authors by providing a link back to this page and Better Buys, so your readers can learn more about this project and the related research.

Oh, and about the title of this post. If you’ve heard of Triumph Motorcycles, you may already know that Triumph started as a bicycle manufacturer.

More on NIST’s FRTE-FATE Split

(Part of the biometric product marketing expert series)

I’ve talked about why NIST separated its FRVT efforts into FRTE and FATE.

But I haven’t talked about how NIST did this.

And as you all know, the second most important question after why is how.

Why the great renaming took place

As I noted back in August, NIST chose to split its Face Recognition Vendor Test (FRVT) into two parts—FRTE (Face Recognition Technology Evaluation) and FATE (Face Analysis Technology Evaluation).

At the time, NIST explained why it did this:

To bring clarity to our testing scope and goals

From https://www.nist.gov/programs-projects/face-technology-evaluations-frtefate

In essence, the Face Recognition Vendor Test had become a hodgepodge of different things. Some of the older tests were devoted to identification of individuals (face recognition), while some of the newer tests were looking at issues other than individual identification (face analysis).

Of course, this confusion between identification and non-identification is nothing new, which is why some of the people who read Gender Shades falsely concluded that if the three algorithms couldn’t classify people by sex or race, they couldn’t identify them as individuals.

But I digress. (I won’t do it again.)

NIST explained at the time:

Tracks that involve the processing and analysis of images will run under the FATE activity, and tracks that pertain to identity verification will run under FRTE.

From https://www.nist.gov/programs-projects/face-technology-evaluations-frtefate

How the great renaming happened in practice

What is in FRTE?

To date, most of my personal attention (and probably most of yours) has been paid to what were previously called FRVT 1:1 and FRVT 1:N.

These two tests are now part of FRTE, and were simply renamed to FRTE 1:1 and FRTE 1:N. They’ve even (for now) retained the same URLs, although that may change in the future.

Other tests are now part of the FRTE bucket as well. For example, the “Still Face and Iris 1:N Identification” effort (PDF) has apparently also been reclassified as an FRTE effort.

What is in FATE?

Obviously, presentation attack detection (PAD) testing falls into the FATE category, since this does not measure the identification of an individual, but whether a person is truly there or not. The first results have been released; I previously wrote about this here.

The next obvious category is age estimation testing, which again does not try to identify an individual, but estimate how old the person is. This testing has not yet started, but I talked about the concept of age estimation previously.

There are other parts of FATE as well.

It is very possible that NIST will add additional FRTE and FATE tests in the future. These may be brand new tests, or variations of existing tests. For example, when all of us started wearing face masks a couple of years ago, NIST simulated face masks on its existing facial images and created the data for the face mask test described above.

What do you think NIST should test next, either in the FRTE or the FATE category?

More on morphing

And yes, I’m concluding this post with this video. By the way, this is the full version that (possibly intentionally) caused a ton of controversy and was immediately banned for nearly a quarter century. The morphing starts at 5:30. The crotch-grabbing starts right after the 7:00 mark.

From https://www.youtube.com/watch?v=pTFE8cirkdQ

But on a less controversial note, let’s give equal time to Godley & Creme.

From https://www.youtube.com/watch?v=ypMnBuvP5kA

Perhaps because of the lack of controversy with Godley & Creme’s earlier effort, Ashley Clark prefers it to the later Michael Jackson/John Landis effort.

Whereas Godley & Creme used editing technology to embrace and reflect the ambiguous murk of thwarted love, Jackson and Landis imposed an artificial sheen on the complexity of identity; a sheen that feels poignant if not outright tragic in the wake of Jackson’s ultimate appearance and fate. Really, it did matter if he was black or white.

From https://ashleyclark.substack.com/p/black-or-white-and-crying-all-over

But I digress. (I lied.)

Sadly, morphing escaped from the hands of music video directors and artists and entered the world of fraudsters, as Regula explains.

One of the main application areas of facial morphing for criminal purposes is forging identity documents. The attack targets face-based identity verification systems and procedures. Most often it involves passports; however, any ID document with a photo can be compromised.

One well-known case happened in 2018 when a group of activists merged together a photo of Federica Mogherini, the High Representative of the European Union for Foreign Affairs and Security Policy, and a member of their group. Using this morphed photo, they managed to obtain an authentic German passport.

From https://regulaforensics.com/blog/facial-morphing/

Which is why NIST didn’t just cry about the problem. They tested it to assist the vendors in solving the problem.
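To make the mechanics of a morph concrete, here is a minimal sketch of the blend step. Real morphing tools also warp the two faces so their landmarks align before blending; the tiny 2×2 “images” and the `blend` function below are purely hypothetical illustrations, not any vendor’s algorithm.

```python
# A naive morph is a pixel-wise weighted blend of two aligned grayscale
# images. The 2x2 "faces" below are made-up numbers for illustration only.
def blend(img_a, img_b, alpha=0.5):
    """Return the alpha-weighted blend of two same-sized grayscale images."""
    return [
        [round(alpha * a + (1 - alpha) * b) for a, b in zip(row_a, row_b)]
        for row_a, row_b in zip(img_a, img_b)
    ]

face_a = [[0, 100], [200, 255]]  # hypothetical 2x2 image of person A
face_b = [[100, 100], [0, 55]]   # hypothetical 2x2 image of person B

morph = blend(face_a, face_b)
print(morph)  # [[50, 100], [100, 155]] -- halfway between both faces
```

Because every pixel of the morph sits between the two contributors, a face matcher can end up close enough to both of them, which is exactly what makes a morphed passport photo dangerous.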

I Guess I Was Fated to Write About NIST IR 8491 on Passive Presentation Attack Detection

Remember in mid-August when I said that the U.S. National Institute of Standards and Technology was splitting its FRVT tests into FRTE and FATE tests?

Well, the FATE side of the house has released its first two studies, including one entitled “Face Analysis Technology Evaluation (FATE) Part 10: Performance of Passive, Software-Based Presentation Attack Detection (PAD) Algorithms” (NIST Internal Report NIST IR 8491; PDF here).

By JamesHarrison – Own work, Public Domain, https://commons.wikimedia.org/w/index.php?curid=4873863

I’ve written all about this study in a LinkedIn article under my own name that answers the following questions:

  • What is a presentation attack?
  • How do you detect presentation attacks?
  • Why does NIST care about presentation attacks?
  • And why should you?

My LinkedIn article, “Why NIST Cares About Presentation Attack Detection…and Why You Should Also,” can be found at the link https://www.linkedin.com/pulse/why-nist-cares-presentation-attack-detectionand-you-should-bredehoft/.

The Great Renaming: FRVT is now FRTE and FATE

Face professionals, your world just changed.

I and countless others have spent the last several years referring to the National Institute of Standards and Technology’s Face Recognition Vendor Test, or FRVT. I guess some people have spent almost a quarter century referring to FRVT, because the term has been in use since 1999.

Starting now, you’re not supposed to use the FRVT acronym any more.

From NIST:

Face Technology Evaluations – FRTE/FATE

To bring clarity to our testing scope and goals, what was formerly known as FRVT has been rebranded and split into FRTE (Face Recognition Technology Evaluation) and FATE (Face Analysis Technology Evaluation).  Tracks that involve the processing and analysis of images will run under the FATE activity, and tracks that pertain to identity verification will run under FRTE.  All existing participation and submission procedures remain unchanged.

From https://www.nist.gov/programs-projects/face-technology-evaluations-frtefate

So, for example, the former “FRVT 1:1” and “FRVT 1:N” are now named “FRTE 1:1” and “FRTE 1:N,” respectively. At least at present, the old links https://pages.nist.gov/frvt/html/frvt11.html and https://pages.nist.gov/frvt/html/frvt1N.html still work.

The change actually makes sense, since tasks such as age estimation and presentation attack detection (liveness detection) do not directly relate to the identification of individuals.

Us old folks just have to get used to the change.

I just hope that the new “FATE” acronym doesn’t mean that some algorithms are destined to perform better than others.

Time to Check the Current NIST Face Recognition Vendor Test Results (well, three of them)

It’s been a while since I’ve peeked at the NIST Face Recognition Vendor Test (FRVT) results.

As I’ve stated before, the results can be sliced and diced in so many ways that many vendors can claim to be the #1 NIST FRVT vendor.

What’s more, these results change on a monthly basis, so it’s quite possible that the #1 vendor in some category in February 2022 was no longer the #1 vendor in March 2022. (And if your company markets years-old FRVT results, stop it!)

This is the August 15, 2023 peek at three ways to slice and dice the NIST FRVT results.

And a bunch of vendors will be mad at me because I didn’t choose THEIR preferred slicing and dicing, or their ways to exclude results (not including Chinese algorithms, not including algorithms used in surveillance, etc.). The mad vendors can write their own blog posts (or ask Bredemarket to ghostwrite them on their behalf).

NIST FRVT 1:1, VISABORDER

The phrase “NIST FRVT 1:1, VISABORDER” is shorthand for the NIST one-to-one version of the Face Recognition Vendor Test, using the VISABORDER probe and gallery data. This happens to be the default way in which NIST sorts the 1:1 accuracy results, but of course you can sort them against any other probe/gallery combination, and get a different #1 vendor.

As of August 15, the top two accuracy algorithms for VISABORDER came from Cloudwalk. Here are all of the top ten.

Captured 8/15/2023, sorted by VISABORDER. From https://pages.nist.gov/frvt/html/frvt11.html

NIST FRVT 1:1, Comparison Time (Mate)

But NIST doesn’t just measure accuracy for a bunch of different probe-target combinations. It also measures performance, since the most accurate algorithm in the world won’t do you any good if it takes forever to compare the face templates.

One caveat regarding these measures is that NIST conducts the tests on a standardized set of equipment, so that results between vendors can be compared. This is important to note, because a comparison that takes 103 milliseconds on NIST’s equipment will yield a different time on a customer’s equipment.

One of the many performance measures is “Comparison Time (Mate).” There is also a performance measure for “Comparison Time (Non-mate).”

So in this test, the fastest vendor algorithm comes from Trueface. Again, here are the top 10.

Captured 8/15/2023, sorted by Comparison Time (Mate). From https://pages.nist.gov/frvt/html/frvt11.html

NIST FRVT 1:N, VISABORDER 1.6M

Now I know what some of you are saying. “John,” you say, “the 1:1 test only measures a comparison of one face against one other face, or what NIST calls verification. What if you’re searching against a database of faces, or identification?”

Well, NIST has a 1:N test to measure that particular use case. Or use cases, because again you can slice and dice the results in so many different ways.

When looking at accuracy, the default NIST 1:N sort is by:

  • Probe images from the BORDER database.
  • Gallery images from a 1,600,000 record VISA database.

Cloudwalk happens to be the #1 vendor in this slicing and dicing of the test. Here are the top ten.

Captured 8/15/2023, sorted by Visa, Border, N=1600000. From https://pages.nist.gov/frvt/html/frvt1N.html
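The effect of all this slicing and dicing can be sketched in a few lines. The vendors and numbers below are entirely made up (they are not actual FRVT/FRTE results); the point is simply that changing the sort key changes which algorithm is “number one.”

```python
# Hypothetical leaderboard rows: accuracy (lower FNMR is better) and
# comparison speed (lower milliseconds is better). All numbers invented.
results = [
    {"vendor": "AlgoA", "fnmr_visaborder": 0.0030, "compare_ms": 0.9},
    {"vendor": "AlgoB", "fnmr_visaborder": 0.0021, "compare_ms": 2.4},
    {"vendor": "AlgoC", "fnmr_visaborder": 0.0025, "compare_ms": 0.4},
]

by_accuracy = sorted(results, key=lambda r: r["fnmr_visaborder"])
by_speed = sorted(results, key=lambda r: r["compare_ms"])

print(by_accuracy[0]["vendor"])  # AlgoB is #1 when sorted by accuracy
print(by_speed[0]["vendor"])     # AlgoC is #1 when sorted by speed
```

Every additional probe/gallery combination adds another sort key, and therefore another opportunity for a different vendor to claim the top spot.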

Test data is test data

The usual cautions apply that everyone, including NIST, emphasizes that these test results do not guarantee similar results in an operational environment. Even if the algorithm author ported its algorithm to an operational system with absolutely no changes, the operational system will have a different hardware configuration and will have different data.

For example, none of the NIST 1:N tests use databases with more than 12 million records. Even 20 years ago, Behnam Bavarian correctly noted that biometric databases would eventually surpass hundreds of millions of records, or even billions of records. There is no way that NIST could assemble a test database that large.
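As a back-of-the-envelope sketch of why gallery size matters, assume a purely hypothetical one microsecond per template comparison and a naive linear scan (real ABIS products use indexing and partitioning rather than comparing against every record):

```python
def linear_scan_seconds(gallery_size, per_compare_us):
    """Seconds for a naive one-by-one scan of the whole gallery,
    given a (hypothetical) per-comparison time in microseconds."""
    return gallery_size * per_compare_us / 1_000_000

# A NIST-sized test gallery vs. a billion-record operational gallery.
print(linear_scan_seconds(1_600_000, 1))      # 1.6 seconds
print(linear_scan_seconds(1_000_000_000, 1))  # 1000.0 seconds
```

The arithmetic is trivial, but it shows why results on a 1.6-million-record test gallery cannot simply be extrapolated to a billion-record operational database.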

So you should certainly consider the NIST tests, but before you deploy an operational ABIS, you should follow Mike French’s advice and conduct an ABIS benchmark on your own equipment, with your own data.