This post concentrates on IDENTIFICATION perfection, or the ability to enjoy zero errors when identifying individuals.
The risk of claiming identification perfection (or any perfection) is that a SINGLE counter-example disproves the claim.
If you assert that your biometric solution offers 100% accuracy, a SINGLE false positive or false negative shatters the assertion.
If you claim that your presentation attack detection solution exposes deepfakes (face, voice, or other), then a SINGLE deepfake that gets past your solution disproves your claim.
And as for the pre-2009 claim that latent fingerprint examiners never make a mistake in an identification…well, ask Brandon Mayfield about that one.
In fact, I go so far as to avoid using the phrase “no two fingerprints are alike.” Many years ago (before 2009) in an International Association for Identification meeting, I heard someone justify the claim by saying, “We haven’t found a counter-example yet.” That doesn’t mean that we’ll NEVER find one.
At first glance, it appears that Motorola would be the last place to make a boneheaded mistake like that. After all, Motorola is known for its focus on quality.
But in actuality, Motorola was the perfect place to make such a mistake, since it was one of the champions of the “Six Sigma” philosophy (which targets a maximum of 3.4 defects per million opportunities). Motorola realized that manufacturing perfection is impossible, so manufacturers (and the people in Motorola’s weird Biometric Business Unit) should instead concentrate on reducing the error rate as much as possible.
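If you’re wondering where that oddly precise 3.4 figure comes from, here’s a back-of-the-envelope sketch of my own. It assumes Six Sigma’s conventional 1.5-sigma long-term shift (a convention, not a law of nature), which turns a “six sigma” process into a 4.5-sigma tail calculation:

```python
# A minimal sketch of where Six Sigma's "3.4 defects per million
# opportunities" figure comes from. Assumes the conventional 1.5-sigma
# long-term shift, so a "six sigma" process is evaluated at 4.5 sigma.
from scipy.stats import norm

sigma_level = 6.0
long_term_shift = 1.5  # Six Sigma convention, not a law of nature

# One-sided tail probability at the shifted sigma level
defect_rate = norm.sf(sigma_level - long_term_shift)
dpmo = defect_rate * 1_000_000

print(f"Defects per million opportunities: {dpmo:.1f}")  # ~3.4
```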
So one misspelling could be tolerated, but I shudder to think what would have happened if I had misspelled “quality” a second time.
Back in August 2023, the U.S. General Services Administration published a blog post that included the following statement:
Login.gov is on a path to providing an IAL2-compliant identity verification service to its customers in a responsible, equitable way. Building on the strong evidence-based identity verification that Login.gov already offers, Login.gov is on a path to providing IAL2-compliant identity verification that ensures both strong security and broad and equitable access.
Login.gov is a secure sign in service used by the public to sign in to participating government agencies. Participating agencies will ask you to create a Login.gov account to securely access your information on their website or application.
You can use the same username and password to access any agency that partners with Login.gov. This streamlines your process and eliminates the need to remember multiple usernames and passwords.
Why would agencies implement Login.gov? Because the agencies want to protect their constituents’ information. If fraudsters capture personally identifiable information (PII) of someone applying for government services, the breached government agency will face severe repercussions. Login.gov is supposed to protect its partner agencies from these nightmares.
How does Login.gov do this?
Sometimes you might use two-factor authentication consisting of a password and a second factor such as an SMS code or the use of an authentication app.
In more critical cases, Login.gov requests a more reliable method of identification, such as a government-issued photo ID (driver’s license, passport, etc.).
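If you’re curious what the “authentication app” second factor actually computes, here’s a minimal sketch of a time-based one-time password (TOTP) per RFC 6238. This is my own illustration of the general mechanism, not Login.gov’s actual implementation:

```python
# A minimal TOTP (RFC 6238) sketch illustrating the "authentication app"
# second factor. Illustrative only; not Login.gov's actual implementation.
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32)
    counter = int(time.time()) // interval          # current 30-second time step
    msg = struct.pack(">Q", counter)                # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # RFC 4226 dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# The server and the authentication app share the secret; the user types in
# whatever the app displays, and the server recomputes and compares.
shared_secret = "JBSWY3DPEHPK3PXP"  # example Base32 secret
print(totp(shared_secret))
```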
The U.S. National Institute of Standards and Technology, in its publication NIST SP 800-63a, has defined “identity assurance levels” (IALs) that can be used when dealing with digital identities. It’s helpful to review how NIST has defined the IALs. (I’ll define the other acronyms as we go along.)
Assurance in a subscriber’s identity is described using one of three IALs:
IAL1: There is no requirement to link the applicant to a specific real-life identity. Any attributes provided in conjunction with the subject’s activities are self-asserted or should be treated as self-asserted (including attributes a [Credential Service Provider] CSP asserts to an [Relying Party] RP). Self-asserted attributes are neither validated nor verified.
IAL2: Evidence supports the real-world existence of the claimed identity and verifies that the applicant is appropriately associated with this real-world identity. IAL2 introduces the need for either remote or physically-present identity proofing. Attributes could be asserted by CSPs to RPs in support of pseudonymous identity with verified attributes. A CSP that supports IAL2 can support IAL1 transactions if the user consents.
IAL3: Physical presence is required for identity proofing. Identifying attributes must be verified by an authorized and trained CSP representative. As with IAL2, attributes could be asserted by CSPs to RPs in support of pseudonymous identity with verified attributes. A CSP that supports IAL3 can support IAL1 and IAL2 identity attributes if the user consents.
So in its simplest terms, IAL2 requires evidence of a verified credential so that an online person can be linked to a real-life identity. If someone says they’re “John Bredehoft” and fills in an online application to receive government services, IAL2 compliance helps to ensure that the person filling out the online application truly IS John Bredehoft, and not Bernie Madoff.
As more and more of us conduct business—including government business—online, IAL2 compliance is essential to reduce fraud.
One more thing about IAL2 compliance. The mere possession of a valid government-issued photo ID is NOT sufficient for IAL2 compliance. After all, Bernie Madoff may be using John Bredehoft’s driver’s license. To make sure that it’s John Bredehoft using John Bredehoft’s driver’s license, an additional check is needed.
This has been explained by ID.me, a private company that happens to compete with Login.gov to provide identity proofing services to government agencies.
Biometric comparison (e.g., selfie with liveness detection or fingerprint) of the strongest piece of evidence to the applicant
So you basically take the information on a driver’s license and perform a facial recognition 1:1 comparison with the person possessing the driver’s license, ideally using liveness detection, to make sure that the presented person is not a fake.
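In code terms, that 1:1 check might look something like the following sketch. The structure and thresholds are illustrative assumptions of mine, not any vendor’s actual API:

```python
# A hedged sketch of 1:1 identity verification: compare the photo on the
# ID document to a live selfie. Thresholds and structure are illustrative
# assumptions, not any vendor's actual API.
import numpy as np

MATCH_THRESHOLD = 0.80      # tuned per deployment; illustrative value
LIVENESS_THRESHOLD = 0.90   # illustrative value

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify_identity(doc_embedding: np.ndarray,
                    selfie_embedding: np.ndarray,
                    liveness_score: float) -> bool:
    """1:1 comparison: is the live person the one pictured on the document?"""
    if liveness_score < LIVENESS_THRESHOLD:
        return False  # possible presentation attack (photo, mask, deepfake)
    return cosine_similarity(doc_embedding, selfie_embedding) >= MATCH_THRESHOLD
```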
As part of an investigation that has run since last April (2022), GSA’s Office of the Inspector General found that the agency was billing agencies for IAL2-compliant services, even though Login.gov did not meet Identity Assurance Level 2 (IAL2) standards.
GSA knowingly billed over $10 million for services provided through contracts with other federal agencies, even though Login.gov is not IAL2 compliant, according to the watchdog.
My decision making process relies on extensive data analysis and aligning with the company’s strategic objectives. It’s devoid of personal bias ensuring unbiased and strategic choices that prioritize the organization’s best interests.
Mika was brought to my attention by accomplished product marketer/artist Danuta (Dana) Debogorska. (She’s appeared in the Bredemarket blog before, though not by name.) Dana is also Polish (but not Colombian) and clearly takes pride in the artificial intelligence accomplishments of this Polish-headquartered company. You can read her LinkedIn post to see her thoughts, one of which was as follows:
Data is the new oxygen, and we all know that we need clean data to innovate and sustain business models.
There’s a reference to oxygen again, but it’s certainly appropriate. Just as people cannot survive without oxygen, Generative AI cannot survive without data.
But the need for data predates AI models. From 2017:
Reliance Industries Chairman Mukesh Ambani said India is poised to grow…but to make that happen the country’s telecoms and IT industry would need to play a foundational role and create the necessary digital infrastructure.
Calling data the “oxygen” of the digital economy, Ambani said the telecom industry had the urgent task of empowering 1.3 billion Indians with the tools needed to flourish in the digital marketplace.
Of course, the presence or absence of data alone is not enough. As Debogorska notes, we don’t just need any data; we need CLEAN data, without error and without bias. Dirty data is like carbon monoxide, and as you know carbon monoxide is harmful…well, most of the time.
That’s been the challenge not only with artificial intelligence, but with ALL aspects of data gathering.
The all-male board of directors of a fertilizer company in 1960. Fair use. From the New York Times.
In all of these cases, someone (Amazon, Enron’s shareholders, or NIST) asked questions about the cleanliness of the data, and then set out to answer those questions.
In the cases of Amazon’s recruitment tool and Enron, the answers caused Amazon to abandon the tool and Enron to abandon its existence.
Despite the entreaties of so-called privacy advocates (who prefer the privacy nightmare of physical driver’s licenses to the privacy-preserving features of mobile driver’s licenses), we have not abandoned facial recognition, but we’re definitely monitoring it in a statistical (not an anecdotal) sense.
The cleanliness of the data will continue to be the challenge as we apply artificial intelligence to new applications.
Things change. Pangiam, a company that didn’t even exist a few years ago, and that started off by acquiring a one-off project from a local government agency, is now itself a friendly acquisition target (pending stockholder and regulatory approvals).
From MWAA to Pangiam
Back when I worked for IDEMIA and helped to market its border control solutions, one of our competitors for airport business was an airport itself—specifically, the Metropolitan Washington Airports Authority. Rather than buying a biometric exit solution from someone else, the MWAA developed its own, called veriScan.
2021 image from the former airportveriscan website.
ALEXANDRIA, Va., March 19, 2021 /PRNewswire/ — Pangiam, a technology-based security and travel services provider, announced today that it has acquired veriScan, an integrated biometric facial recognition system for airports and airlines, from the Metropolitan Washington Airports Authority (“Airports Authority”). Terms of the transaction were not disclosed.
So what will Pangiam work on next? Where will it expand? What will it acquire?
Nothing.
Enter BigBear.ai
Pangiam itself is now an acquisition target.
COLUMBIA, MD.— November 6, 2023 — BigBear.ai (NYSE: BBAI), a leading provider of AI-enabled business intelligence solutions, today announced a definitive merger agreement to acquire Pangiam Intermediate Holdings, LLC (Pangiam), a leader in Vision AI for the global trade, travel, and digital identity industries, for approximately $70 million in an all-stock transaction. The combined company will create one of the industry’s most comprehensive Vision AI portfolios, combining Pangiam’s facial recognition and advanced biometrics with BigBear.ai’s computer vision capabilities, positioning the company as a foundational leader in one of the fastest growing categories for the application of AI. The proposed acquisition is expected to close in the first quarter of 2024, subject to customary closing conditions, including approval by the holders of a majority of BigBear.ai’s outstanding common shares and receipt of regulatory approval.
Yet another example of how biometrics is now just a minor part of general artificial intelligence efforts. Identify a face or a grenade, it’s all the same.
Anyway, let’s check back in a few months. Because of the technology involved, this proposed acquisition will DEFINITELY merit government review.
As identity/biometric professionals well know, there are five authentication factors that you can use to gain access to a person’s account. (You can also use these factors for identity verification to establish the person’s account in the first place.)
Something You Are. I’ve spent…a long time with this factor, since this is the factor that includes biometric modalities (finger, face, iris, DNA, voice, vein, etc.). It also includes behavioral biometrics, provided that they are truly behavioral and relatively static.
As I mentioned in August, there are a number of biometric modalities, including face, fingerprint, iris, hand geometry, palm print, signature, voice, gait, and many more.
If your firm offers an identity solution that partially depends upon “something you are,” then you need to create content (blog, case study, social media, white paper, etc.) that converts prospects for your identity/biometric product/service and drives content results.
For example, when biometric companies want to justify the use of their technology, they have found that it is very effective to position biometrics as a way to combat sex trafficking.
Similarly, moves to rein in social media are positioned as a way to preserve mental health.
Now that’s a not-so-pretty picture, but it effectively speaks to emotions.
“If poor vulnerable children are exposed to addictive, uncontrolled social media, YOUR child may end up in a straitjacket!”
In New York state, four government officials have declared that the ONLY way to preserve the mental health of underage social media users is via two bills, one of which is the “New York Child Data Protection Act.”
But there is a challenge to enforce ALL of the bill’s provisions…and only one way to solve it. An imperfect way—age estimation.
Because they want to protect the poor vulnerable children.
By Paolo Monti – Available in the BEIC digital library and uploaded in partnership with BEIC Foundation. The image comes from the Fondo Paolo Monti, owned by BEIC and located in the Civico Archivio Fotografico of Milan. CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=48057924
And because the major U.S. social media companies are headquartered in California. But I digress.
So why do they say that children need protection?
Recent research has shown devastating mental health effects associated with children and young adults’ social media use, including increased rates of depression, anxiety, suicidal ideation, and self-harm. The advent of dangerous, viral ‘challenges’ being promoted through social media has further endangered children and young adults.
Of course one can also argue that social media is harmful to adults, but the New Yorkers aren’t going to go that far.
So they are just going to protect the poor vulnerable children.
CC BY-SA 4.0.
This post isn’t going to deeply analyze one of the two bills the quartet have championed, but I will briefly mention that bill now.
The “Stop Addictive Feeds Exploitation (SAFE) for Kids Act” (S7694/A8148) defines “addictive feeds” as those that are arranged by a social media platform’s algorithm to maximize the platform’s use.
Those of us who are flat-out elderly vaguely recall that this replaced the former “chronological feed” in which the most recent content appeared first, and you had to scroll down to see that really cool post from two days ago. New York wants the chronological feed to be the default for social media users under 18.
The bill also proposes to limit under-18 access to social media without parental consent, especially between midnight and 6:00 a.m.
And those who love Illinois BIPA will be pleased to know that the bill allows parents (and their lawyers) to sue for damages.
Previous efforts to control underage use of social media have faced legal scrutiny, but since Attorney General James has sworn to uphold the U.S. Constitution, presumably she has thought about all this.
Enough about SAFE for Kids. Let’s look at the other bill.
The New York Child Data Protection Act
The second bill, and the one that concerns me, is the “New York Child Data Protection Act” (S7695/A8149). Here is how the quartet describes how this bill will protect the poor vulnerable children.
CC BY-SA 4.0.
With few privacy protections in place for minors online, children are vulnerable to having their location and other personal data tracked and shared with third parties. To protect children’s privacy, the New York Child Data Protection Act will prohibit all online sites from collecting, using, sharing, or selling personal data of anyone under the age of 18 for the purposes of advertising, unless they receive informed consent or unless doing so is strictly necessary for the purpose of the website. For users under 13, this informed consent must come from a parent.
And again, this bill provides a BIPA-like mechanism for parents or guardians (and their lawyers) to sue for damages.
But let’s dig into the details. With apologies to the New York State Assembly, I’m going to dig into the Senate version of the bill (S7695). Bear in mind that this bill could be amended after I post this, and some of the portions that I cite could change.
This only applies to natural persons. So the bots are safe, regardless of age.
Speaking of age, the age of 18 isn’t the only age referenced in the bill. Here’s a part of the “privacy protection by default” section:
§ 899-FF. PRIVACY PROTECTION BY DEFAULT.
1. EXCEPT AS PROVIDED FOR IN SUBDIVISION SIX OF THIS SECTION AND SECTION EIGHT HUNDRED NINETY-NINE-JJ OF THIS ARTICLE, AN OPERATOR SHALL NOT PROCESS, OR ALLOW A THIRD PARTY TO PROCESS, THE PERSONAL DATA OF A COVERED USER COLLECTED THROUGH THE USE OF A WEBSITE, ONLINE SERVICE, ONLINE APPLICATION, MOBILE APPLICATION, OR CONNECTED DEVICE UNLESS AND TO THE EXTENT:
(A) THE COVERED USER IS TWELVE YEARS OF AGE OR YOUNGER AND PROCESSING IS PERMITTED UNDER 15 U.S.C. § 6502 AND ITS IMPLEMENTING REGULATIONS; OR
(B) THE COVERED USER IS THIRTEEN YEARS OF AGE OR OLDER AND PROCESSING IS STRICTLY NECESSARY FOR AN ACTIVITY SET FORTH IN SUBDIVISION TWO OF THIS SECTION, OR INFORMED CONSENT HAS BEEN OBTAINED AS SET FORTH IN SUBDIVISION THREE OF THIS SECTION.
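Translated into decision logic, the default rule reads something like this sketch. This is my own reading of the bill as introduced (not legal advice), and the bill could be amended:

```python
# A sketch of S7695's "privacy protection by default" rule as I read it.
# My own illustrative reading of the bill as introduced; not legal advice.
def may_process_personal_data(age: int,
                              coppa_permits: bool,
                              strictly_necessary: bool,
                              informed_consent: bool) -> bool:
    if age >= 18:
        return True  # not a "covered user" under the bill
    if age <= 12:
        # Under-13 processing falls back to COPPA (15 U.S.C. § 6502)
        return coppa_permits
    # Ages 13-17: strictly necessary activity, or informed consent
    return strictly_necessary or informed_consent
```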
So a lot of this bill depends upon whether a person is over or under the age of eighteen, or over or under the age of thirteen.
And that’s a problem.
How old are you?
The bill needs to know whether or not a person is at least 18 years old. And I don’t think the quartet will be satisfied with the way that alcohol websites determine whether someone is at least 21 years old.
Attorney General James and the others would presumably prefer that the social media companies verify ages with a government-issued ID such as a state driver’s license, a state identification card, or a national passport. This is how most entities verify ages when they have to satisfy legal requirements.
For some people, even some minors, this is not that much of a problem. Anyone who wants to drive in New York State must have a driver’s license, and you have to be at least 16 years old to get a driver’s license. Admittedly some people in the city never bother to get a driver’s license, but at some point these people will probably get a state ID card.
However, there are going to be some 17 year olds who don’t have a driver’s license, government ID or passport.
And some 16 year olds.
And once you look at younger people—15 year olds, 14 year olds, 13 year olds, 12 year olds—the chances of them having a government-issued identification document are much lower.
What are these people supposed to do? Provide a birth certificate? And how will the social media companies know if the birth certificate is legitimate?
But there’s another way to determine ages—age estimation.
How old are you, part 2
As long-time readers of the Bredemarket blog know, I have struggled with the issue of age verification, especially for people who do not have driver’s licenses or other government identification. Age estimation in the absence of a government ID is still an inexact science, as even Yoti has stated.
Our technology is accurate for 6 to 12 year olds, with a mean absolute error (MAE) of 1.3 years, and of 1.4 years for 13 to 17 year olds. These are the two age ranges regulators focus upon to ensure that under 13s and 18s do not have access to age restricted goods and services.
So if a minor does not have a government ID, and the social media firm has to use age estimation to determine a minor’s age for purposes of the New York Child Data Protection Act, the following two scenarios are possible:
An 11 year old may be incorrectly allowed to give informed consent for purposes of the Act.
A 14 year old may be incorrectly denied the ability to give informed consent for purposes of the Act.
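To see why a mean absolute error of 1.3 to 1.4 years matters at a hard legal boundary, consider this toy calculation of mine. The ages below are illustrative, and since MAE is only an average, individual errors can be larger still, which is how even an 11 year old could land above the cutoff:

```python
# A toy sketch of why a ~1.4-year mean absolute error matters at a hard
# legal boundary (age 13). MAE is an *average*, so individual errors can
# be larger; the figures below are illustrative, not Yoti's actual outputs.
MAE = 1.4  # years, Yoti's published figure for 13 to 17 year olds

def consent_rule(estimated_age: float) -> str:
    return "may give informed consent" if estimated_age >= 13 else "needs parental consent"

cases = [
    (12, 12 + MAE),  # estimated one MAE high -> crosses the boundary
    (14, 14 - MAE),  # estimated one MAE low  -> crosses the boundary
]
for true_age, estimate in cases:
    print(f"true age {true_age}, estimated {estimate:.1f}: {consent_rule(estimate)}")
# true age 12, estimated 13.4: may give informed consent   (wrongly allowed)
# true age 14, estimated 12.6: needs parental consent      (wrongly denied)
```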
Is age estimation “good enough for government work”?
If you ask any one of us in the identity verification industry, we’ll tell you how identity verification proves that you know who is accessing your service.
During the identity verification/onboarding step, one common technique is to capture the live face of the person who is being onboarded, then compare that to the face captured from the person’s government identity document. As long as you have assurance that (a) the face is live and not a photo, and (b) the identity document has not been tampered with, you positively know who you are onboarding.
The authentication step usually captures a live face and compares it to the face that was captured during onboarding, thus positively showing that the right person is accessing the previously onboarded account.
Sounds like the perfect solution, especially in industries that rely on age verification to ensure that people are old enough to access the service.
Therefore, if you are employing robust identity verification and authentication that includes age verification, underage use should never happen.
Eduardo Montanari, who manages delivery logistics at a burger shop north of São Paulo, has noticed a pattern: Every time an order pickup is assigned to a female driver, there’s a good chance the worker is a minor.
On YouTube, a tutorial — one of many — explains “how to deliver as a minor.” It has over 31,000 views. “You have to create an account in the name of a person who’s the right age. I created mine in my mom’s name,” says a boy, who identifies himself as a minor in the video.
Once a cooperative parent or older sibling agrees to help, the account is created in the older person’s name, the older person’s face and identity document are used to create the account, and everything is valid.
Outsmarting authentication
Yes, but what about authentication?
That’s why it’s helpful to use a family member, or someone who lives in the minor’s home.
Let’s say little Maria is at home doing her homework when her gig economy app rings with a delivery request. Now Maria was smart enough to have her older sister Irene or her mama Cecile perform the onboarding with the delivery app. If she’s at home, she can go to Irene or Cecile, have them perform the authentication, and then she’s off on her bike to make money.
(Alternatively, if the app does not support liveness detection, Maria can just hold a picture of Irene or Cecile up to the camera and authenticate.)
The onboarding process was completed by the account holder.
The authentication was completed by the account holder.
But the account holder isn’t the one that’s actually using the service. Once authentication is complete, anyone can access the service.
So how do you stop underage gig economy use?
According to Rest of World, one possible solution is to tattle on underage delivery people. If you see something, say something.
But what’s the incentive for a restaurant owner or delivery recipient to report that their deliveries are being performed by a kid?
“The feeling we have is that, at least this poor boy is working. I know this is horrible, but here in Brazil we end up seeing it as an opportunity … It’s ridiculous,” (psychologist Regiane Couto) said.
A much better solution is to replace one-time authentication with continuous authentication, or at least to be smarter about authentication. For example, a gig delivery worker could be required to authenticate at multiple points in the process:
When the worker receives the delivery request.
When the worker arrives at the restaurant.
When the worker makes the delivery.
It’s too difficult to drag big sister Irene or mama Cecile to ALL of these points.
As an added bonus, these authentications provide timestamps of critical points in the delivery process, which the delivery company and/or restaurant can use for their analytics.
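Here’s a minimal sketch of what such checkpoint authentication might look like. The function names and record structure are hypothetical, not any delivery platform’s actual API:

```python
# A minimal sketch of checkpoint (multi-point) authentication for a gig
# delivery flow. Function names and structure are hypothetical, not any
# delivery platform's actual API.
from datetime import datetime, timezone

CHECKPOINTS = ["request_accepted", "arrived_at_restaurant", "delivery_made"]

def authenticate_at_checkpoint(worker_id: str, checkpoint: str,
                               selfie_matches_enrollment: bool) -> dict:
    """Re-verify the worker's face at each step and log a timestamp."""
    if not selfie_matches_enrollment:
        raise PermissionError(f"{checkpoint}: face does not match enrolled account")
    return {
        "worker_id": worker_id,
        "checkpoint": checkpoint,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

# Each successful check yields an audit record the platform can analyze.
audit_log = [authenticate_at_checkpoint("worker-123", cp, True) for cp in CHECKPOINTS]
```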
Problem solved.
Except that little Maria doesn’t have any excuse and has to complete her homework.
At the highest level, debates regarding government and enterprise use of biometric technology boil down to a debate about whether to keep people safe, or whether to preserve individual privacy.
In the state of Montana, school safety is winning over school privacy—for now.
The state Legislature earlier this year passed a law barring state and local governments from continuous use of facial recognition technology, typically in the form of cameras capable of reading and collecting a person’s biometric data, like the identifiable features of their face and body. A bipartisan group of legislators went toe-to-toe with software companies and law enforcement in getting Senate Bill 397 over the finish line, contending public safety concerns raised by the technology’s supporters don’t overcome individual privacy rights.
School districts, however, were specifically carved out of the definition of state and local governments to which the facial recognition technology law applies.
At a minimum, Montana school districts seek to abide by two existing federal laws when installing facial recognition and video surveillance systems.
Without many state-level privacy protection laws in place, school policies typically lean on the Children’s Online Privacy Protection Act (COPPA), a federal law requiring parental consent in order for websites to collect data on their children, or the Family Educational Rights and Privacy Act (FERPA), which protects the privacy of student education records.
If a vendor doesn’t agree to abide by these laws, then the Montana School Board Association recommends that the school district not do business with the vendor.
The Family Educational Rights and Privacy Act was passed by the US federal government to protect the privacy of students’ educational records. This law requires public schools and school districts to give families control over any personally identifiable information about the student.
(The Sun River Valley School District’s) use of the technology is more focused on keeping people who shouldn’t be on school property away, he said, such as a parent who lost custody of their child.
(Simms) High School Principal Luke McKinley said it’s been more frequent to use the facial recognition technology during extra-curricular activities, when football fans get too rowdy for a high school sports event.
Technology (in this case from Verkada) helps the Sun River Valley School District, especially in its rural setting. Back in 2022, it took law enforcement an estimated 45 minutes to respond to school incidents. The hope is that the technology could identify those who engaged in illegal activity, or at least deter it.
What about other school districts?
When I created my educational identity page, I included the four key words “When permitted by law.” While Montana school districts are currently permitted to use facial recognition and video surveillance, other school districts need to check their local laws before implementing such a system, and also need to ensure that they comply with federal laws such as COPPA and FERPA.
I may be, um, biased in my view, but as long as the school district (or law enforcement agency, or apartment building owner, or whoever) complies with all applicable laws, and implements the technology with a primary purpose of protecting people rather than spying on them, facial recognition is a far superior tool to protect people than manual recognition methods that rely on all-too-fallible human beings.
I tend to view presentation attack detection (PAD) through the lens of iBeta or occasionally of BixeLab. But I need to remind myself that these are not the only entities examining PAD.
A recent paper authored by Koushik Srivatsan, Muzammal Naseer, and Karthik Nandakumar of the Mohamed Bin Zayed University of Artificial Intelligence (MBZUAI) addresses PAD from a research perspective. I honestly don’t understand the research, but perhaps you do.
Flip spoofing his natural appearance by portraying Geraldine. Some were unable to detect the attack. By NBC Television – eBay item (photo front, photo back), Public Domain, https://commons.wikimedia.org/w/index.php?curid=16476809
Face anti-spoofing (FAS) or presentation attack detection is an essential component of face recognition systems deployed in security-critical applications. Existing FAS methods have poor generalizability to unseen spoof types, camera sensors, and environmental conditions. Recently, vision transformer (ViT) models have been shown to be effective for the FAS task due to their ability to capture long-range dependencies among image patches. However, adaptive modules or auxiliary loss functions are often required to adapt pre-trained ViT weights learned on large-scale datasets such as ImageNet. In this work, we first show that initializing ViTs with multimodal (e.g., CLIP) pre-trained weights improves generalizability for the FAS task, which is in line with the zero-shot transfer capabilities of vision-language pre-trained (VLP) models. We then propose a novel approach for robust cross-domain FAS by grounding visual representations with the help of natural language. Specifically, we show that aligning the image representation with an ensemble of class descriptions (based on natural language semantics) improves FAS generalizability in low-data regimes. Finally, we propose a multimodal contrastive learning strategy to boost feature generalization further and bridge the gap between source and target domains. Extensive experiments on three standard protocols demonstrate that our method significantly outperforms the state-of-the-art methods, achieving better zero-shot transfer performance than five-shot transfer of “adaptive ViTs”.
CLIP is the first multimodal (in this case, vision and text) model tackling computer vision and was recently released by OpenAI on January 5, 2021….CLIP is a bridge between computer vision and natural language processing.
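To make that bridge concrete, here’s a hedged sketch of CLIP-style zero-shot classification applied to the live-versus-spoof question, using the Hugging Face transformers library. It illustrates the general CLIP mechanism only; it is NOT the MBZUAI authors’ method:

```python
# A hedged sketch of CLIP-style zero-shot "live vs. spoof" scoring using
# the Hugging Face transformers API. This illustrates the general CLIP
# mechanism only; it is NOT the MBZUAI paper's actual method.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("face.jpg")  # hypothetical input image
prompts = ["a photo of a real human face",
           "a photo of a spoofed or fake face"]

inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)
probs = outputs.logits_per_image.softmax(dim=1)  # image-text similarity -> probabilities
print(dict(zip(prompts, probs[0].tolist())))
```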
Sadly, Brems didn’t address ViT, so I turned to Chinmay Bhalerao.
Vision Transformers work by first dividing the image into a sequence of patches. Each patch is then represented as a vector. The vectors for each patch are then fed into a Transformer encoder. The Transformer encoder is a stack of self-attention layers. Self-attention is a mechanism that allows the model to learn long-range dependencies between the patches. This is important for image classification, as it allows the model to learn how the different parts of an image contribute to its overall label.
The output of the Transformer encoder is a sequence of vectors. These vectors represent the features of the image. The features are then used to classify the image.
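Bhalerao’s description maps to code fairly directly. Here’s my own minimal NumPy illustration of the first step (dividing the image into patches and representing each patch as a vector); the Transformer encoder itself is omitted:

```python
# A minimal NumPy sketch of the ViT patch-embedding step described above:
# split the image into patches, then flatten each patch into a vector.
# The Transformer encoder that consumes these vectors is omitted here.
import numpy as np

def image_to_patch_vectors(image: np.ndarray, patch: int = 16) -> np.ndarray:
    """(H, W, C) image -> (num_patches, patch*patch*C) sequence of vectors."""
    h, w, c = image.shape
    assert h % patch == 0 and w % patch == 0, "image must divide evenly into patches"
    patches = image.reshape(h // patch, patch, w // patch, patch, c)
    patches = patches.transpose(0, 2, 1, 3, 4)     # (rows, cols, patch, patch, c)
    return patches.reshape(-1, patch * patch * c)  # flatten each patch

img = np.random.rand(224, 224, 3)  # stand-in for a face image
vectors = image_to_patch_vectors(img)
print(vectors.shape)               # (196, 768): 14x14 patches, each a 768-dim vector
```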
On September 30, FindBiometrics and Acuity Market Intelligence released the production version of the Biometric Digital Identity Prism Report. You can request to download it here.
But FindBiometrics and Acuity Market Intelligence didn’t invent the Big 3. The concept has been around for 40 years. And two of today’s Big 3 weren’t in the Big 3 when things started. Oh, and there weren’t always 3; sometimes there were 4, and some could argue that there were 5.
So how did we get from the Big 3 of 40 years ago to the Big 3 of today?
The Big 3 in the 1980s
Back in 1986 (eight years before I learned how to spell AFIS) the American National Standards Institute, in conjunction with the National Bureau of Standards, issued ANSI/NBS-ICST 1-1986, a data format for information interchange of fingerprints. The PDF of this long-superseded standard is available here.
When creating this standard, ANSI and the NBS worked with a number of law enforcement agencies, as well as companies in the nascent fingerprint industry. There is a whole list of companies cited at the beginning of the standard, but I’d like to name four of them.
De La Rue Printrak, Inc.
Identix, Inc.
Morpho Systems
NEC Information Systems, Inc.
While all four of these companies produced computerized fingerprinting equipment, three of them had successfully produced automated fingerprint identification systems, or AFIS. As Chapter 6 of the Fingerprint Sourcebook subsequently noted:
Morpho Systems resulted from French AFIS efforts, separate from those of the FBI. These efforts launched Morpho’s long-standing relationship with the French National Police, as well as a similar relationship (now former relationship) with Pierce County, Washington.
NEC had deployed AFIS equipment for the National Police Academy of Japan, and (after some prodding; read Chapter 6 for the story) the city of San Francisco. Eventually the state of California obtained an NEC system, which played a part in the identification of “Night Stalker” Richard Ramirez.
After the success of the San Francisco and California AFIS systems, many other jurisdictions began clamoring for AFIS of their own, and turned to these three vendors to supply them.
The Big 4 in the 1990s
But in 1990, these three firms were joined by an upstart fourth, Cogent Systems of South Pasadena, California.
While customers initially preferred the Big 3 to the upstart, Cogent Systems eventually installed a statewide system in Ohio and a border control system for the U.S. government, plus a vast number of local systems at the county and city level.
Between 1991 and 1994, the (Immigration and Naturalization Service) conducted several studies of automated fingerprint systems, primarily in the San Diego, California, Border Patrol Sector. These studies demonstrated to the INS the feasibility of using a biometric fingerprint identification system to identify apprehended aliens on a large scale. In September 1994, Congress provided almost $30 million for the INS to deploy its fingerprint identification system. In October 1994, the INS began using the system, called IDENT, first in the San Diego Border Patrol Sector and then throughout the rest of the Southwest Border.
I was a proposal writer for Printrak (divested by De La Rue) in the 1990s, and competed against Cogent, Morpho, and NEC in AFIS procurements. By the time I moved from proposals to product management, the next redefinition of the “big” vendors occurred.
The Big 3 in 2003
There were a lot of name changes that affected AFIS participants, one of which was the 1988 renaming of the National Bureau of Standards to the National Institute of Standards and Technology (NIST). As fingerprints and other biometric modalities were increasingly employed by government agencies, NIST began conducting tests of biometric systems. These tests continue to this day, as I have previously noted.
One of NIST’s first tests was the Fingerprint Vendor Technology Evaluation of 2003 (FpVTE 2003).
For those who are familiar with NIST testing, it’s no surprise that the test was thorough:
FpVTE 2003 consists of multiple tests performed with combinations of fingers (e.g., single fingers, two index fingers, four to ten fingers) and different types and qualities of operational fingerprints (e.g., flat livescan images from visa applicants, multi-finger slap livescan images from present-day booking or background check systems, or rolled and flat inked fingerprints from legacy criminal databases).
Eighteen vendors submitted their fingerprint algorithms to NIST for one or more of the various tests, including Bioscrypt, Cogent Systems, Identix, SAGEM MORPHO (SAGEM had acquired Morpho Systems), NEC, and Motorola (which had acquired Printrak). And at the conclusion of the testing, the FpVTE 2003 summary (PDF) made this statement:
Of the systems tested, NEC, SAGEM, and Cogent produced the most accurate results.
Which would have been great news if I were a product manager at NEC, SAGEM, or Cogent.
Unfortunately, I was a product manager at Motorola.
The effect of this report was…not good, and at least partially (but not fully) contributed to Motorola’s loss of its long-standing client, the Royal Canadian Mounted Police, to Cogent.
The Big 3, 4, or 5 after 2003
So what happened in the years after FpVTE was released? Opinions vary, but here are three possible explanations for what happened next.
Did the Big 3 become the Big 4 again?
Now I probably have a bit of bias in this area since I was a Motorola employee, but I maintain that Motorola overcame this temporary setback and vaulted back into the Big 4 within a couple of years. Among other things, Motorola deployed a national 1000 pixels-per-inch (PPI) system in Sweden several years before the FBI did.
Did the Big 3 remain the Big 3?
Motorola’s arch-enemies at Sagem Morpho had a different opinion, which was revealed when the state of West Virginia finally got around to deploying its own AFIS. A bit ironic, since the national FBI AFIS system IAFIS was located in West Virginia, or perhaps not.
Anyway, Motorola had a very effective sales staff, as was apparent when the state issued its Request for Proposal (RFP) and explicitly said that the state wanted a Motorola AFIS.
That didn’t stop Cogent, Identix, NEC, and Sagem Morpho from bidding on the project.
After the award, Dorothy Bullard and I requested copies of all of the proposals for evaluation. While Motorola (to no one’s surprise) won the competition, Dorothy and I believed that we shouldn’t have won. In particular, our arch-enemies at Sagem Morpho raised a compelling argument that it should be the chosen vendor.
Their argument? Here’s my summary: “Your RFP says that you want a Motorola AFIS. The states of Kansas (see page 6 of this PDF) and New Mexico (see this PDF) USED to have a Motorola AFIS…but replaced their systems with our MetaMorpho AFIS because it’s BETTER than the Motorola AFIS.”
But were Cogent, Motorola, NEC, and Sagem Morpho the only “big” players?
Did the Big 3 become the Big 5?
While the Big 3/Big 4 took a lot of the headlines, there were a number of other companies vying for attention. (I’ve talked about this before, but it’s worthwhile to review it again.)
Identix, while making some efforts in the AFIS market, concentrated on creating live scan fingerprinting machines, where it competed (sometimes in court) against companies such as Digital Biometrics and Bioscrypt.
The fingerprint companies started to compete against facial recognition companies, including Viisage and Visionics.
Oh, and there were also iris companies such as Iridian.
And there were other ways to identify people. Even before 9/11 mandated REAL ID (which we may get any year now), Polaroid was making great efforts to improve driver’s licenses to serve as a reliable form of identification.
In short, there were a bunch of small identity companies all over the place.
But in the course of a few short years, Dr. Joseph Atick (initially) and Robert LaPenta (subsequently) concentrated on acquiring and merging those companies into a single firm, L-1 Identity Solutions.
These multiple mergers resulted in former competitors Identix and Digital Biometrics, and former competitors Viisage and Visionics, becoming part of one big happy family. (A multinational big happy family when you count Bioscrypt.) Eventually this company offered fingerprint, face, iris, driver’s license, and passport solutions, something that none of the Big 3/Big 4 could claim (although Sagem Morpho had a facial recognition offering). And L-1 had federal contracts and state contracts that could match anything that the Big 3/Big 4 offered.
So while L-1 didn’t have a state AFIS contract like Cogent, Motorola, NEC, and Sagem Morpho did, you could argue that L-1 was important enough to be ranked with the big boys.
So for the sake of argument let’s assume that there was a Big 5, and L-1 Identity Solutions was part of it, along with the three big boys Motorola, NEC, and Safran (who had acquired Sagem and thus now owned Sagem Morpho), and the independent Cogent Systems. These five companies competed fiercely with each other (see West Virginia, above).
In a two-year period, everything would change.
The Big 3 after 2009
Hang on to your seats.
The Motorola RAZR was hugely popular…until it wasn’t. Eventually Motorola split into two companies and sold off others, including the “Printrak” Biometric Business Unit. By NextG50 – Own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=130206087
By 2009, Safran (resulting from the merger of Sagem and Snecma) was an international powerhouse in aerospace and defense and also had identity/biometric interests. Motorola, in the meantime, was no longer enjoying the success of its RAZR phone and was looking at trimming down (prior to its eventual, um, bifurcation). In response to these dynamics, Safran announced its intent to purchase Motorola’s Biometric Business Unit in October 2008, an effort that was finalized in April 2009. The Biometric Business Unit (adopting its former name Printrak) was acquired by Sagem Morpho and became MorphoTrak. On a personal level, Dorothy Bullard moved out of Proposals and I moved into Proposals, where I got to work with my new best friends that had previously slammed Motorola for losing the Kansas and New Mexico deals. (Seriously, Cindy and Ron are great folks.)
By 2011, Safran decided that it needed additional identity capabilities, so it acquired L-1 Identity Solutions and renamed the acquisition as MorphoTrust.
If you’re keeping notes, the Big 5 have now become the Big 3: 3M (which acquired Cogent Systems in 2010), Safran, and NEC (the one constant in all of this).
While there were subsequent changes (3M sold Cogent and other pieces to Gemalto, Safran sold all of Morpho to Advent International/Oberthur to form IDEMIA, and Gemalto was acquired by Thales), the Big 3 has remained constant over the last decade.
And that’s where we are today…pending future developments.
If Alphabet or Amazon reverse their current reluctance to market their biometric offerings to governments, the entire landscape could change again.
Or perhaps a new AI-fueled competitor could emerge.
The 1 Biometric Content Marketing Expert
This was written by John Bredehoft of Bredemarket.
If you work for the Big 3 or the Little 80+ and need marketing and writing services, the biometric content marketing expert can help you. There are several ways to get in touch:
Book a meeting with me at calendly.com/bredemarket. Be sure to fill out the information form so I can best help you.