Login.gov and IAL2 #realsoonnow

Back in August 2023, the U.S. General Services Administration published a blog post that included the following statement:

Login.gov is on a path to providing an IAL2-compliant identity verification service to its customers in a responsible, equitable way. Building on the strong evidence-based identity verification that Login.gov already offers, Login.gov is on a path to providing IAL2-compliant identity verification that ensures both strong security and broad and equitable access.

From https://www.gsa.gov/blog/2023/08/18/reducing-fraud-and-increasing-access-drives-record-adoption-and-usage-of-logingov

It’s nice to know…NOW…that Login.gov is working to achieve IAL2.

This post explains what the August 2023 GSA post said, and what it didn’t say.

But first, I’ll define what Login.gov and “IAL2” are.

What is Login.gov?

Here is what Login.gov says about itself:

Login.gov is a secure sign in service used by the public to sign in to participating government agencies. Participating agencies will ask you to create a Login.gov account to securely access your information on their website or application.

You can use the same username and password to access any agency that partners with Login.gov. This streamlines your process and eliminates the need to remember multiple usernames and passwords.

From https://www.login.gov/what-is-login/

Obviously there are a number of private companies (over 80 last I counted) that provide secure access to information, but Login.gov is provided by the government itself—specifically by the General Services Administration’s Technology Transformation Services. Agencies at the federal, state, and local level can work with the GSA TTS’ “18F” organization to implement solutions such as Login.gov.

Why would agencies implement Login.gov? Because the agencies want to protect their constituents’ information. If fraudsters capture personally identifiable information (PII) of someone applying for government services, the breached government agency will face severe repercussions. Login.gov is supposed to protect its partner agencies from these nightmares.

How does Login.gov do this?

  • Sometimes you might use two-factor authentication consisting of a password and a second factor such as an SMS code or the use of an authentication app.
  • In more critical cases, Login.gov requests a more reliable method of identification, such as a government-issued photo ID (driver’s license, passport, etc.).

What is IAL2?

At the risk of repeating myself, I’ll briefly go over what “Identity Assurance Level 2” (IAL2) is.

The U.S. National Institute of Standards and Technology, in its publication NIST SP 800-63A, has defined “identity assurance levels” (IALs) that can be used when dealing with digital identities. It’s helpful to review how NIST has defined the IALs. (I’ll define the other acronyms as we go along.)

Assurance in a subscriber’s identity is described using one of three IALs:

IAL1: There is no requirement to link the applicant to a specific real-life identity. Any attributes provided in conjunction with the subject’s activities are self-asserted or should be treated as self-asserted (including attributes a [Credential Service Provider] CSP asserts to a [Relying Party] RP). Self-asserted attributes are neither validated nor verified.

IAL2: Evidence supports the real-world existence of the claimed identity and verifies that the applicant is appropriately associated with this real-world identity. IAL2 introduces the need for either remote or physically-present identity proofing. Attributes could be asserted by CSPs to RPs in support of pseudonymous identity with verified attributes. A CSP that supports IAL2 can support IAL1 transactions if the user consents.

IAL3: Physical presence is required for identity proofing. Identifying attributes must be verified by an authorized and trained CSP representative. As with IAL2, attributes could be asserted by CSPs to RPs in support of pseudonymous identity with verified attributes. A CSP that supports IAL3 can support IAL1 and IAL2 identity attributes if the user consents.

From https://pages.nist.gov/800-63-3/sp800-63a.html#sec2

So in its simplest terms, IAL2 requires evidence of a verified credential so that an online person can be linked to a real-life identity. If someone says they’re “John Bredehoft” and fills in an online application to receive government services, IAL2 compliance helps to ensure that the person filling out the online application truly IS John Bredehoft, and not Bernie Madoff.
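To make the distinction between the levels concrete, here’s a minimal Python sketch of how the three IALs might be modeled. The mapping from proofing properties to levels is my illustrative simplification, not NIST’s normative logic:

```python
from enum import IntEnum

class IAL(IntEnum):
    """NIST SP 800-63A Identity Assurance Levels (simplified sketch)."""
    IAL1 = 1  # attributes self-asserted; no link to a real-life identity
    IAL2 = 2  # identity evidence verified, remotely or in person
    IAL3 = 3  # evidence verified in person by a trained representative

def minimum_ial(evidence_verified: bool, physically_present: bool) -> IAL:
    """Rough mapping from proofing properties to the IAL they support.

    This is an illustrative simplification for this blog post, not
    the normative requirements of SP 800-63A.
    """
    if not evidence_verified:
        return IAL.IAL1  # self-asserted only
    if physically_present:
        return IAL.IAL3  # in-person proofing of verified evidence
    return IAL.IAL2      # remote (or in-person) proofing of verified evidence
```

Note that, per the NIST text quoted above, a CSP operating at a higher level can still support lower-level transactions if the user consents.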

As more and more of us conduct business—including government business—online, IAL2 compliance is essential to reduce fraud.

One more thing about IAL2 compliance. The mere possession of a valid government issued photo ID is NOT sufficient for IAL2 compliance. After all, Bernie Madoff may be using John Bredehoft’s driver’s license. To make sure that it’s John Bredehoft using John Bredehoft’s driver’s license, an additional check is needed.

This has been explained by ID.me, a private company that happens to compete with Login.gov to provide identity proofing services to government agencies.

Biometric comparison (e.g., selfie with liveness detection or fingerprint) of the strongest piece of evidence to the applicant

From https://network.id.me/article/what-is-nist-ial2-identity-verification/

So you basically take the information on a driver’s license and perform a facial recognition 1:1 comparison with the person possessing the driver’s license, ideally using liveness detection, to make sure that the presented person is not a fake.
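As a sketch of that 1:1 comparison (assuming a hypothetical face-recognition model that produces embedding vectors, plus a separate liveness-detection step), the check might look like this. The `MATCH_THRESHOLD` value is invented for illustration; real systems tune it per algorithm:

```python
import math

MATCH_THRESHOLD = 0.6  # illustrative value only; tuned per algorithm in practice

def cosine_similarity(a, b):
    """Cosine similarity between two face-embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def verify_document_holder(doc_embedding, selfie_embedding, is_live):
    """1:1 comparison of the ID photo's face to a live selfie.

    Both embeddings are assumed to come from the same (hypothetical)
    face-recognition model; is_live is the output of a separate
    presentation-attack-detection (liveness) step.
    """
    if not is_live:
        return False  # reject photos-of-photos, screens, masks, etc.
    return cosine_similarity(doc_embedding, selfie_embedding) >= MATCH_THRESHOLD
```

The point of the liveness gate is exactly the Bernie Madoff scenario above: without it, a stolen license plus a printed photo could pass the face match.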

So what?

So the GSA was apparently claiming that Login.gov was quite secure. Guess who challenged the claim?

The GSA.

Now sometimes it’s ludicrous to think that the government can police itself, but in some cases government actually identifies government faults.

Of course, this works best when you can identify problems with some other government entity.

Which is why the General Services Administration has an Inspector General. And in March 2023, the GSA Inspector General released a report with the following title: “GSA Misled Customers on Login.gov’s Compliance with Digital Identity Standards.”

The title is pretty clear, but Fedscoop summarized the findings for those who missed the obvious:

As part of an investigation that has run since last April (2022), GSA’s Office of the Inspector General found that the agency was billing agencies for IAL2-compliant services, even though Login.gov did not meet Identity Assurance Level 2 (IAL2) standards.

GSA knowingly billed over $10 million for services provided through contracts with other federal agencies, even though Login.gov is not IAL2 compliant, according to the watchdog.

From https://fedscoop.com/gsa-login-gov-watchdog-report/

So now GSA is explicitly saying that Login.gov ISN’T IAL2-compliant.

Which helps its private sector competitors.

Clean Data is the New Oxygen, and Dirty Data is the New Carbon Monoxide

I have three questions for you, but don’t sweat; I’m giving you the answers.

  1. How long can you survive without pizza? Years (although your existence will be hellish).
  2. OK, how long can you survive without water? From 3 days to 7 days.
  3. OK, how long can you survive without oxygen? Only 10 minutes.

This post asks how long a 21st century firm can survive without data, and what can happen if the data is “dirty.”

How does Mika survive?

Have you heard of Mika? Here’s her LinkedIn profile.

From Mika’s LinkedIn profile at https://www.linkedin.com/in/mika-ai-ceo/

Yes, you already know that I don’t like LinkedIn profiles that don’t belong to real people. But this one is a bit different.

Mika is the Chief Executive Officer of Dictador, a Polish-Colombian spirits firm, and is responsible for “data insight, strategic provocation and DAO community liaison.” Regarding data insight, Mika described her approach in an interview with Inside Edition:

My decision making process relies on extensive data analysis and aligning with the company’s strategic objectives. It’s devoid of personal bias ensuring unbiased and strategic choices that prioritize the organization’s best interests.

From the transcript to https://www.youtube.com/watch?v=8BQEyQ2-awc

Mika was brought to my attention by accomplished product marketer/artist Danuta (Dana) Debogorska. (She’s appeared in the Bredemarket blog before, though not by name.) Dana is also Polish (but not Colombian) and clearly takes pride in the artificial intelligence accomplishments of this Polish-headquartered company. You can read her LinkedIn post to see her thoughts, one of which was as follows:

Data is the new oxygen, and we all know that we need clean data to innovate and sustain business models.

From Dana Debogorska’s LinkedIn post.

Dana succinctly made two points:

  1. Data is the new oxygen.
  2. We need clean data.

Point one: data is the new oxygen

There’s a reference to oxygen again, but it’s certainly appropriate. Just as people cannot survive without oxygen, Generative AI cannot survive without data.

But the need for data predates AI models. From 2017:

Reliance Industries Chairman Mukesh Ambani said India is poised to grow…but to make that happen the country’s telecoms and IT industry would need to play a foundational role and create the necessary digital infrastructure.

Calling data the “oxygen” of the digital economy, Ambani said the telecom industry had the urgent task of empowering 1.3 billion Indians with the tools needed to flourish in the digital marketplace.

From India Times.

And we can go back centuries in history and find examples when a lack of data led to catastrophe. Like the time in 1776 when the Hessians didn’t know that George Washington and his troops had crossed the Delaware.

Point two: we need clean data

Of course, the presence or absence of data alone is not enough. As Debogorska notes, we don’t just need any data; we need CLEAN data, without error and without bias. Dirty data is like carbon monoxide, and as you know carbon monoxide is harmful…well, most of the time.

That’s been the challenge not only with artificial intelligence, but with ALL aspects of data gathering.

The all-male board of directors of a fertilizer company in 1960. Fair use. From the New York Times.

Think of Amazon’s biased recruitment tool, Enron’s financial data, or NIST’s testing of facial recognition algorithms. In all of these cases, someone (Amazon itself, Enron’s shareholders, or NIST) asked questions about the cleanliness of the data, and then set out to answer those questions.

  • In the case of Amazon’s recruitment tool and the company Enron, the answers caused Amazon to abandon the tool and Enron to abandon its existence.
  • Despite the entreaties of so-called privacy advocates (who prefer the privacy nightmare of physical driver’s licenses to the privacy-preserving features of mobile driver’s licenses), we have not abandoned facial recognition, but we’re definitely monitoring it in a statistical (not an anecdotal) sense.

The cleanliness of the data will continue to be the challenge as we apply artificial intelligence to new applications.

Clean room of a semiconductor manufacturing facility. Uploaded by Duk 08:45, 16 Feb 2005 (UTC) – http://www.grc.nasa.gov/WWW/ictd/content/labmicrofab.html (original) and https://images.nasa.gov/details/GRC-1998-C-01261 (high resolution), Public Domain, https://commons.wikimedia.org/w/index.php?curid=60825

Point three: if you’re not saying things, then you’re not selling

(Yes, this is the surprise point.)

Dictador is talking about Mika.

Are you talking about your product, or are you keeping mum about it?

I have more to…um…say about this. Follow this link.

Pangiam May Be Acquired Next Year

Things change. Pangiam, a company that didn’t even exist a few years ago, and that started off by acquiring a one-off project from a local government agency, is now itself a friendly acquisition target (pending stockholder and regulatory approvals).

From MWAA to Pangiam

Back when I worked for IDEMIA and helped to market its border control solutions, one of our competitors for airport business was an airport itself—specifically, the Metropolitan Washington Airports Authority. Rather than buying a biometric exit solution from someone else, the MWAA developed its own, called veriScan.

2021 image from the former airportveriscan website.

After I left IDEMIA, the MWAA decided that it didn’t want to be in the software business any more, and sold veriScan to a new company, Pangiam. I posted about this decision and the new company in this blog.

ALEXANDRIA, Va., March 19, 2021 /PRNewswire/ — Pangiam, a technology-based security and travel services provider, announced today that it has acquired veriScan, an integrated biometric facial recognition system for airports and airlines, from the Metropolitan Washington Airports Authority (“Airports Authority”). Terms of the transaction were not disclosed.

From PR Newswire.

But Pangiam was just getting started.

Trueface, FRTE, stadiums, and artificial intelligence

Results for the NIST FRTE 1:N pangiam-000 algorithm, captured November 6, 2023 from NIST.

A few months later Pangiam acquired Trueface and therefore earned a spot on the NIST FRTE 1:N (formerly FRVT 1:N) rankings and an interest in the stadium/venue identity verification/authentication market.

By Chris6d – Own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=101751795

Meanwhile Pangiam continued to build up its airport business and also improved its core facial recognition technology.

After that I personally concentrated on other markets, and therefore missed the announcements of Pangiam Bridge (introducing artificial intelligence into Pangiam’s border crossing offering) and Project DARTMOUTH (devoted to using artificial intelligence and pattern analysis to airline baggage, cargo, and shipments).

So what will Pangiam work on next? Where will it expand? What will it acquire?

Nothing.

Enter BigBear.ai

Pangiam itself is now an acquisition target.

COLUMBIA, MD.— November 6, 2023 — BigBear.ai (NYSE: BBAI), a leading provider of AI-enabled business intelligence solutions, today announced a definitive merger agreement to acquire Pangiam Intermediate Holdings, LLC (Pangiam), a leader in Vision AI for the global trade, travel, and digital identity industries, for approximately $70 million in an all-stock transaction. The combined company will create one of the industry’s most comprehensive Vision AI portfolios, combining Pangiam’s facial recognition and advanced biometrics with BigBear.ai’s computer vision capabilities, positioning the company as a foundational leader in one of the fastest growing categories for the application of AI. The proposed acquisition is expected to close in the first quarter of 2024, subject to customary closing conditions, including approval by the holders of a majority of BigBear.ai’s outstanding common shares and receipt of regulatory approval.

From bigbear.ai.

Yet another example of how biometrics is now just a minor part of general artificial intelligence efforts. Identify a face or a grenade, it’s all the same.

Anyway, let’s check back in a few months. Because of the technology involved, this proposed acquisition will DEFINITELY merit government review.

Converting Prospects For Your Firm’s “Something You Are” Solution

As identity/biometric professionals well know, there are five authentication factors that you can use to gain access to a person’s account. (You can also use these factors for identity verification to establish the person’s account in the first place.)

I described one of these factors, “something you are,” in a 2021 post on the five authentication factors.

Something You Are. I’ve spent…a long time with this factor, since this is the factor that includes biometrics modalities (finger, face, iris, DNA, voice, vein, etc.). It also includes behavioral biometrics, provided that they are truly behavioral and relatively static.

From https://bredemarket.com/2021/03/02/the-five-authentication-factors/

As I mentioned in August, there are a number of biometric modalities, including face, fingerprint, iris, hand geometry, palm print, signature, voice, gait, and many more.

From Sandeep Kumar, A. Sony, Rahul Hooda, Yashpal Singh, in Journal of Advances and Scholarly Researches in Allied Education | Multidisciplinary Academic Research, “Multimodal Biometric Authentication System for Automatic Certificate Generation.”

If your firm offers an identity solution that partially depends upon “something you are,” then you need to create content (blog, case study, social media, white paper, etc.) that converts prospects for your identity/biometric product/service and drives content results.

Bredemarket can help.

Click below for details.

The Imperfect Way to Enforce New York’s Child Data Protection Act

It’s often good to use emotion in your marketing.

For example, when biometric companies want to justify the use of their technology, they have found that it is very effective to position biometrics as a way to combat sex trafficking.

Similarly, moves to rein in social media are positioned as a way to preserve mental health.

By Marc NL at English Wikipedia – Transferred from en.wikipedia to Commons., Public Domain, https://commons.wikimedia.org/w/index.php?curid=2747237

Now that’s a not-so-pretty picture, but it effectively speaks to emotions.

“If poor vulnerable children are exposed to addictive, uncontrolled social media, YOUR child may end up in a straitjacket!”

In New York state, four government officials have declared that the ONLY way to preserve the mental health of underage social media users is via two bills, one of which is the “New York Child Data Protection Act.”

But there is a challenge to enforce ALL of the bill’s provisions…and only one way to solve it. An imperfect way—age estimation.

This post only briefly addresses the alleged mental health issues of social media before plunging into one of the two proposed bills to solve the problem. It then examines a potentially unenforceable part of the bill and a possible solution.

Does social media make children sick?

Letitia “Tish” James is the 67th Attorney General for the state of New York. From https://ag.ny.gov/about/meet-letitia-james

On October 11, a host of New York State government officials, led by New York State Attorney General Letitia James, jointly issued a release with the title “Attorney General James, Governor Hochul, Senator Gounardes, and Assemblymember Rozic Take Action to Protect Children Online.”

Because they want to protect the poor vulnerable children.

By Paolo Monti – Available in the BEIC digital library and uploaded in partnership with BEIC Foundation.The image comes from the Fondo Paolo Monti, owned by BEIC and located in the Civico Archivio Fotografico of Milan., CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=48057924

And because the major U.S. social media companies are headquartered in California. But I digress.

So why do they say that children need protection?

Recent research has shown devastating mental health effects associated with children and young adults’ social media use, including increased rates of depression, anxiety, suicidal ideation, and self-harm. The advent of dangerous, viral ‘challenges’ being promoted through social media has further endangered children and young adults.

From https://ag.ny.gov/child-online-safety

Of course one can also argue that social media is harmful to adults, but the New Yorkers aren’t going to go that far.

So they are just going to protect the poor vulnerable children.

CC BY-SA 4.0.

This post isn’t going to deeply analyze one of the two bills the quartet have championed, but I will briefly mention that bill now.

  • The “Stop Addictive Feeds Exploitation (SAFE) for Kids Act” (S7694/A8148) defines “addictive feeds” as those that are arranged by a social media platform’s algorithm to maximize the platform’s use.
  • Those of us who are flat-out elderly vaguely recall that this replaced the former “chronological feed” in which the most recent content appeared first, and you had to scroll down to see that really cool post from two days ago. New York wants the chronological feed to be the default for social media users under 18.
  • The bill also proposes to limit under 18 access to social media without parental consent, especially between midnight and 6:00 am.
  • And those who love Illinois BIPA will be pleased to know that the bill allows parents (and their lawyers) to sue for damages.

Previous efforts to control underage use of social media have faced legal scrutiny, but since Attorney General James has sworn to uphold the U.S. Constitution, presumably she has thought about all this.

Enough about SAFE for Kids. Let’s look at the other bill.

The New York Child Data Protection Act

The second bill, and the one that concerns me, is the “New York Child Data Protection Act” (S7695/A8149). Here is how the quartet describes how this bill will protect the poor vulnerable children.

CC BY-SA 4.0.

With few privacy protections in place for minors online, children are vulnerable to having their location and other personal data tracked and shared with third parties. To protect children’s privacy, the New York Child Data Protection Act will prohibit all online sites from collecting, using, sharing, or selling personal data of anyone under the age of 18 for the purposes of advertising, unless they receive informed consent or unless doing so is strictly necessary for the purpose of the website. For users under 13, this informed consent must come from a parent.

From https://ag.ny.gov/child-online-safety

And again, this bill provides a BIPA-like mechanism for parents or guardians (and their lawyers) to sue for damages.

But let’s dig into the details. With apologies to the New York State Assembly, I’m going to dig into the Senate version of the bill (S7695). Bear in mind that this bill could be amended after I post this, and some of the portions that I cite could change.

The “definitions” section of the bill includes the following:

“MINOR” SHALL MEAN A NATURAL PERSON UNDER THE AGE OF EIGHTEEN.

From https://www.nysenate.gov/legislation/bills/2023/S7695, § 899-EE, 2.

This only applies to natural persons. So the bots are safe, regardless of age.

Speaking of age, the age of 18 isn’t the only age referenced in the bill. Here’s a part of the “privacy protection by default” section:

§ 899-FF. PRIVACY PROTECTION BY DEFAULT.

1. EXCEPT AS PROVIDED FOR IN SUBDIVISION SIX OF THIS SECTION AND SECTION EIGHT HUNDRED NINETY-NINE-JJ OF THIS ARTICLE, AN OPERATOR SHALL NOT PROCESS, OR ALLOW A THIRD PARTY TO PROCESS, THE PERSONAL DATA OF A COVERED USER COLLECTED THROUGH THE USE OF A WEBSITE, ONLINE SERVICE, ONLINE APPLICATION, MOBILE APPLICATION, OR CONNECTED DEVICE UNLESS AND TO THE EXTENT:

(A) THE COVERED USER IS TWELVE YEARS OF AGE OR YOUNGER AND PROCESSING IS PERMITTED UNDER 15 U.S.C. § 6502 AND ITS IMPLEMENTING REGULATIONS; OR

(B) THE COVERED USER IS THIRTEEN YEARS OF AGE OR OLDER AND PROCESSING IS STRICTLY NECESSARY FOR AN ACTIVITY SET FORTH IN SUBDIVISION TWO OF THIS SECTION, OR INFORMED CONSENT HAS BEEN OBTAINED AS SET FORTH IN SUBDIVISION THREE OF THIS SECTION.

From https://www.nysenate.gov/legislation/bills/2023/S7695

So a lot of this bill depends upon whether a person is over or under the age of eighteen, or over or under the age of thirteen.
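The bill’s age-based branching can be sketched in Python. The function and parameter names are my own shorthand for the statutory conditions quoted above, and this is obviously an illustration, not legal advice:

```python
def processing_allowed(age, *, coppa_permitted, strictly_necessary, informed_consent):
    """Sketch of S7695 section 899-FF's age-based branching (not legal advice).

    Under 13: processing only as permitted under COPPA (15 U.S.C. 6502).
    13 through 17: processing only if strictly necessary for a covered
    activity, or if informed consent has been obtained.
    18 and over: outside the bill's definition of a "minor."
    """
    if age >= 18:
        return True  # not a minor under the bill
    if age <= 12:
        return coppa_permitted  # COPPA governs, with parental consent
    return strictly_necessary or informed_consent  # ages 13-17
```

Notice that every branch of this logic depends on knowing the user’s age in the first place, which is exactly where the enforcement difficulty lies.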

And that’s a problem.

How old are you?

Enforcing the bill requires knowing whether a person is under 18 (and whether they are under 13). And I don’t think the quartet will be satisfied with the way that alcohol websites determine whether someone is 21 years old.

This age verification method is…not that robust.

Attorney General James and the others would presumably prefer that the social media companies verify ages with a government-issued ID such as a state driver’s license, a state identification card, or a national passport. This is how most entities verify ages when they have to satisfy legal requirements.

For some people, even some minors, this is not that much of a problem. Anyone who wants to drive in New York State must have a driver’s license, and you have to be at least 16 years old to get a driver’s license. Admittedly some people in the city never bother to get a driver’s license, but at some point these people will probably get a state ID card.

You don’t need a driver’s license to ride the New York City subway, but if the guitarist wants to open a bank account for his cash it would help him prove his financial identity. By David Shankbone – Own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=2639495
  • However, there are going to be some 17 year olds who don’t have a driver’s license, government ID or passport.
  • And some 16 year olds.
  • And once you look at younger people—15 year olds, 14 year olds, 13 year olds, 12 year olds—the chances of them having a government-issued identification document are much less.

What are these people supposed to do? Provide a birth certificate? And how will the social media companies know if the birth certificate is legitimate?

But there’s another way to determine ages—age estimation.

How old are you, part 2

As long-time readers of the Bredemarket blog know, I have struggled with the issue of age verification, especially for people who do not have driver’s licenses or other government identification. Age estimation in the absence of a government ID is still an inexact science, as even Yoti has stated.

Our technology is accurate for 6 to 12 year olds, with a mean absolute error (MAE) of 1.3 years, and of 1.4 years for 13 to 17 year olds. These are the two age ranges regulators focus upon to ensure that under 13s and 18s do not have access to age restricted goods and services.

From https://www.yoti.com/wp-content/uploads/Yoti-Age-Estimation-White-Paper-March-2023.pdf

So if a minor does not have a government ID, and the social media firm has to use age estimation to determine a minor’s age for purposes of the New York Child Data Protection Act, the following two scenarios are possible:

  • An 11 year old may be incorrectly allowed to give informed consent for purposes of the Act.
  • A 14 year old may be incorrectly denied the ability to give informed consent for purposes of the Act.
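Here’s a quick sketch of why that mean absolute error matters near a legal threshold. The “uncertain” buffer policy below is my own illustration, not anything Yoti or a regulator has endorsed:

```python
def age_decision(estimated_age, threshold, mae):
    """Classify an age estimate against a legal threshold, flagging the
    band where the estimator's mean absolute error makes the call unsafe.

    mae is the estimator's reported mean absolute error (e.g. Yoti's
    1.3 to 1.4 years for minors); the buffer logic is illustrative only.
    """
    if estimated_age >= threshold + mae:
        return "over"
    if estimated_age <= threshold - mae:
        return "under"
    return "uncertain"  # too close to the line; fall back to stronger evidence
```

With a 1.3-year MAE, everyone estimated between roughly 11.7 and 14.3 years old lands in the “uncertain” band around the age-13 threshold, which is a lot of 12, 13, and 14 year olds.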

Is age estimation “good enough for government work”?

Why Age-Restricted Gig Economy Companies Need Continuous Authentication (and Liveness Detection)

If you ask any one of us in the identity verification industry, we’ll tell you how identity verification proves that you know who is accessing your service.

  • During the identity verification/onboarding step, one common technique is to capture the live face of the person who is being onboarded, then compare that to the face captured from the person’s government identity document. As long as you have assurance that (a) the face is live and not a photo, and (b) the identity document has not been tampered with, you positively know who you are onboarding.
  • The authentication step usually captures a live face and compares it to the face that was captured during onboarding, thus positively showing that the right person is accessing the previously onboarded account.
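These two steps can be sketched in Python. Here `matcher` stands in for whatever 1:1 face comparison (plus liveness detection) a vendor actually uses; the names are mine, not any particular vendor’s API:

```python
from dataclasses import dataclass

@dataclass
class Account:
    name: str
    enrolled_embedding: list  # face template captured at onboarding

def onboard(doc_name, doc_embedding, selfie_embedding, matcher):
    """Onboarding: bind the live selfie to the ID document's face.

    Returns an Account only if the selfie matches the document photo.
    """
    if matcher(doc_embedding, selfie_embedding):
        return Account(name=doc_name, enrolled_embedding=selfie_embedding)
    return None  # selfie does not match the document; reject

def authenticate(account, live_embedding, matcher):
    """Later sessions: compare a fresh live capture to the enrolled template."""
    return matcher(account.enrolled_embedding, live_embedding)
```

The flaw the rest of this post exploits is visible right here: both functions verify *the account holder*, and neither can tell who picks up the phone after authentication succeeds.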

Sounds like the perfect solution, especially in industries that rely on age verification to ensure that people are old enough to access the service.

Therefore, if you are employing robust identity verification and authentication that includes age verification, this should never happen.

By LukaszKatlewa – Own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=49248622

Eduardo Montanari, who manages delivery logistics at a burger shop north of São Paulo, has noticed a pattern: Every time an order pickup is assigned to a female driver, there’s a good chance the worker is a minor.

From https://restofworld.org/2023/underage-gig-workers-brazil/

An underage delivery person who has been onboarded and authenticated, and whose age has been verified? That’s impossible, you say! Read on.

31,000 people already know how to bypass onboarding and authentication

Rest of World wrote an article (tip of the hat to Bianca Gonzalez of Biometric Update) entitled “Underage gig workers keep outsmarting facial recognition.”

Outsmarting onboarding

How do the minors do it?

On YouTube, a tutorial — one of many — explains “how to deliver as a minor.” It has over 31,000 views. “You have to create an account in the name of a person who’s the right age. I created mine in my mom’s name,” says a boy, who identifies himself as a minor in the video.

From https://restofworld.org/2023/underage-gig-workers-brazil/
From https://www.youtube.com/watch?v=59vaKab4g2M. “Botei no da minha mãe não conta da minha.” (“I put it on my mother’s account, it doesn’t count on mine.”)

Once a cooperative parent or older sibling agrees to help, the account is created in the older person’s name, the older person’s face and identity document are used to create the account, and everything is valid.

Outsmarting authentication

Yes, but what about authentication?

That’s why it’s helpful to use a family member, or someone who lives in the minor’s home.

Let’s say little Maria is at home doing her homework when her gig economy app rings with a delivery request. Now Maria was smart enough to have her older sister Irene or her mama Cecile perform the onboarding with the delivery app. If she’s at home, she can go to Irene or Cecile, have them perform the authentication, and then she’s off on her bike to make money.

(Alternatively, if the app does not support liveness detection, Maria can just hold a picture of Irene or Cecile up to the camera and authenticate.)

  • The onboarding process was completed by the account holder.
  • The authentication was completed by the account holder.
  • But the account holder isn’t the one that’s actually using the service. Once authentication is complete, anyone can access the service.

So how do you stop underage gig economy use?

According to Rest of World, one possible solution is to tattle on underage delivery people. If you see something, say something.

But what’s the incentive for a restaurant owner or delivery recipient to report that their deliveries are being performed by a kid?

“The feeling we have is that, at least this poor boy is working. I know this is horrible, but here in Brazil we end up seeing it as an opportunity … It’s ridiculous,” (psychologist Regiane Couto) said.

From https://restofworld.org/2023/underage-gig-workers-brazil/

A much better solution is to replace one-time authentication with continuous authentication, or at least to authenticate more intelligently. For example, a gig delivery worker could be required to authenticate at multiple points in the process:

  • When the worker receives the delivery request.
  • When the worker arrives at the restaurant.
  • When the worker makes the delivery.

It’s too difficult to drag big sister Irene or mama Cecile to ALL of these points.

As an added bonus, these authentications provide timestamps of critical points in the delivery process, which the delivery company and/or restaurant can use for their analytics.
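The checkpoint approach can be sketched like this. `authenticate` is a placeholder for a live facial match with liveness detection; the checkpoint names are my own:

```python
import time

# The three points at which a fresh authentication is required.
CHECKPOINTS = ("request_accepted", "arrived_at_restaurant", "delivered")

def run_delivery(worker_id, authenticate):
    """Require a fresh authentication at each checkpoint and record when
    it happened.

    authenticate(worker_id, checkpoint) stands in for a live facial
    match with liveness detection; the returned timestamps are the
    analytics bonus mentioned above.
    """
    timestamps = {}
    for checkpoint in CHECKPOINTS:
        if not authenticate(worker_id, checkpoint):
            raise PermissionError(f"authentication failed at {checkpoint}")
        timestamps[checkpoint] = time.time()  # when each step occurred
    return timestamps
```

A failed (or skipped) match at the restaurant or at the customer’s door ends the delivery, which is precisely what makes dragging Irene or Cecile along impractical.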

Problem solved.

Except that little Maria doesn’t have any excuse and has to complete her homework.

Safety vs. Privacy in Montana School Video Surveillance

At the highest level, debates regarding government and enterprise use of biometric technology boil down to a debate about whether to keep people safe, or whether to preserve individual privacy.

In the state of Montana, school safety is winning over school privacy—for now.

The one exception in Montana Senate Bill 397

Biometric Update links to a Helena Independent Record article on how Montana’s far-reaching biometric ban has one significant exception.

The state Legislature earlier this year passed a law barring state and local governments from continuous use of facial recognition technology, typically in the form of cameras capable of reading and collecting a person’s biometric data, like the identifiable features of their face and body. A bipartisan group of legislators went toe-to-toe with software companies and law enforcement in getting Senate Bill 397 over the finish line, contending public safety concerns raised by the technology’s supporters don’t overcome individual privacy rights. 

School districts, however, were specifically carved out of the definition of state and local governments to which the facial recognition technology law applies.

From the Helena Independent Record.

At a minimum, Montana school districts seek to abide by two existing federal laws when installing facial recognition and video surveillance systems.

Without many state-level privacy protection laws in place, school policies typically lean on the Children’s Online Privacy Protection Act (COPPA), a federal law requiring parental consent in order for websites to collect data on their children, or the Family Educational Rights and Privacy Act (FERPA), which protects the privacy of student education records. 

From the Helena Independent Record.

If a vendor doesn’t agree to abide by these laws, then the Montana School Board Association recommends that the school district not do business with the vendor.

Other vendors agree. Here is the statement of one vendor, Verkada (you’ll see them again later) on FERPA:

The Family Educational Rights and Privacy Act was passed by the US federal government to protect the privacy of students’ educational records. This law requires public schools and school districts to give families control over any personally identifiable information about the student.

Verkada provides educational organizations the tools they need to maintain FERPA compliance, such as face blurring for archived footage.

From https://www.verkada.com/security/#compliance

Simms High School’s use of the technology

How are the schools using these systems? In ways you may expect.

(The Sun River Valley School District’s) use of the technology is more focused on keeping people who shouldn’t be on school property away, he said, such as a parent who lost custody of their child.

(Simms) High School Principal Luke McKinley said it’s been more frequent to use the facial recognition technology during extra-curricular activities, when football fans get too rowdy for a high school sports event. 

From the Helena Independent Record.

Technology (in this case from Verkada) helps the Sun River Valley School District, especially in its rural setting. Back in 2022, it took law enforcement an estimated 45 minutes to respond to school incidents. The hope is that the technology could identify those who engaged in illegal activity, or at least deter it.

What about other school districts?

When I created my educational identity page, I included the four key words “When permitted by law.” While Montana school districts are currently permitted to use facial recognition and video surveillance, other school districts need to check their local laws before implementing such a system, and also need to ensure that they comply with federal laws such as COPPA and FERPA.

I may be, um, biased in my view, but as long as the school district (or law enforcement agency, or apartment building owner, or whoever) complies with all applicable laws, and implements the technology with a primary purpose of protecting people rather than spying on them, facial recognition is a far superior tool to protect people than manual recognition methods that rely on all-too-fallible human beings.

Vision Transformer (ViT) Models and Presentation Attack Detection

I tend to view presentation attack detection (PAD) through the lens of iBeta or occasionally of BixeLab. But I need to remind myself that these are not the only entities examining PAD.

A recent paper authored by Koushik Srivatsan, Muzammal Naseer, and Karthik Nandakumar of the Mohamed Bin Zayed University of Artificial Intelligence (MBZUAI) addresses PAD from a research perspective. I honestly don’t understand the research, but perhaps you do.

Flip spoofing his natural appearance by portraying Geraldine. Some were unable to detect the attack. By NBC Television – eBay item, photo front, photo back. Public Domain, https://commons.wikimedia.org/w/index.php?curid=16476809

Here is the abstract from “FLIP: Cross-domain Face Anti-spoofing with Language Guidance.”

Face anti-spoofing (FAS) or presentation attack detection is an essential component of face recognition systems deployed in security-critical applications. Existing FAS methods have poor generalizability to unseen spoof types, camera sensors, and environmental conditions. Recently, vision transformer (ViT) models have been shown to be effective for the FAS task due to their ability to capture long-range dependencies among image patches. However, adaptive modules or auxiliary loss functions are often required to adapt pre-trained ViT weights learned on large-scale datasets such as ImageNet. In this work, we first show that initializing ViTs with multimodal (e.g., CLIP) pre-trained weights improves generalizability for the FAS task, which is in line with the zero-shot transfer capabilities of vision-language pre-trained (VLP) models. We then propose a novel approach for robust cross-domain FAS by grounding visual representations with the help of natural language. Specifically, we show that aligning the image representation with an ensemble of class descriptions (based on natural language semantics) improves FAS generalizability in low-data regimes. Finally, we propose a multimodal contrastive learning strategy to boost feature generalization further and bridge the gap between source and target domains. Extensive experiments on three standard protocols demonstrate that our method significantly outperforms the state-of-the-art methods, achieving better zero-shot transfer performance than five-shot transfer of “adaptive ViTs”.

From https://koushiksrivats.github.io/FLIP/?utm_source=tldrai

FLIP, by the way, stands for “Face Anti-Spoofing with Language-Image Pretraining.” CLIP stands for “Contrastive Language-Image Pre-training.”

While I knew I couldn’t master this, I did want to know what LIP and ViT were.

However, I couldn’t find something that just talked about LIP: all the sources I found talked about FLIP, CLIP, PLIP, GLIP, etc. So I gave up and looked at Matthew Brems’ easy-to-read explainer on CLIP:

CLIP is the first multimodal (in this case, vision and text) model tackling computer vision and was recently released by OpenAI on January 5, 2021….CLIP is a bridge between computer vision and natural language processing.

From https://www.kdnuggets.com/2021/03/beginners-guide-clip-model.html
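Brems’ “bridge” is easier to see in code than in prose. Here’s a toy, pure-Python sketch of CLIP’s core scoring step: normalize image and text embeddings, take pairwise cosine similarities, and softmax each image’s scores over the candidate text descriptions. The embedding vectors here are made up; a real CLIP model produces them with trained image and text encoders.

```python
import math

def clip_style_scores(image_embs, text_embs, temperature=0.07):
    """Score each image embedding against candidate text embeddings,
    CLIP-style: L2-normalize both sides, take cosine similarities,
    scale by a temperature, and softmax over the text candidates."""

    def normalize(v):
        norm = math.sqrt(sum(x * x for x in v))
        return [x / norm for x in v]

    images = [normalize(v) for v in image_embs]
    texts = [normalize(v) for v in text_embs]

    rows = []
    for img in images:
        logits = [
            sum(a * b for a, b in zip(img, txt)) / temperature for txt in texts
        ]
        peak = max(logits)  # subtract the max for numerical stability
        exps = [math.exp(l - peak) for l in logits]
        total = sum(exps)
        rows.append([e / total for e in exps])
    return rows
```

In the FLIP setting, the candidate descriptions would be class descriptions along the lines of “a photo of a real face” versus “a photo of a spoof face,” and the image representation would be scored against that ensemble.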

Sadly, Brems didn’t address ViT, so I turned to Chinmay Bhalerao.

Vision Transformers work by first dividing the image into a sequence of patches. Each patch is then represented as a vector. The vectors for each patch are then fed into a Transformer encoder. The Transformer encoder is a stack of self-attention layers. Self-attention is a mechanism that allows the model to learn long-range dependencies between the patches. This is important for image classification, as it allows the model to learn how the different parts of an image contribute to its overall label.

The output of the Transformer encoder is a sequence of vectors. These vectors represent the features of the image. The features are then used to classify the image.

From https://medium.com/data-and-beyond/vision-transformers-vit-a-very-basic-introduction-6cd29a7e56f3
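Bhalerao’s first step (dividing the image into patches and representing each patch as a vector) is simple enough to sketch in a few lines of Python. This toy version works on a plain single-channel grid of pixel values; a real ViT operates on multi-channel image tensors and applies a learned linear projection to each flattened patch before the Transformer encoder sees it.

```python
def image_to_patch_vectors(image, patch_size):
    """Split an image (a plain H x W grid of pixel values) into
    non-overlapping patch_size x patch_size patches, flattening each
    patch into one vector (the first step Bhalerao describes)."""
    height, width = len(image), len(image[0])
    assert height % patch_size == 0 and width % patch_size == 0
    vectors = []
    for top in range(0, height, patch_size):
        for left in range(0, width, patch_size):
            # Flatten the patch row by row into a single vector.
            vectors.append([
                image[top + r][left + c]
                for r in range(patch_size)
                for c in range(patch_size)
            ])
    return vectors
```

For a 224×224 image and 16×16 patches, this yields 196 patch vectors, which is the sequence the self-attention layers then operate over.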

So Srivatsan et al combined tiny little bits of images with language representations to determine which images are (using my words) “fake fake fake.”

From https://www.youtube.com/shorts/7B9EiNHohHE

Because a bot can’t always recognize a mannequin.

Or perhaps the bot and the mannequin are in on the shenanigans together.

The devil made them do it.

The Big 3, or 4, or 5? Through the Years

On September 30, FindBiometrics and Acuity Market Intelligence released the production version of the Biometric Digital Identity Prism Report. You can request to download it here.

From https://findbiometrics.com/prism/ as of 9/30/2023.

Central to the concept of the Biometric Digital Identity Prism is the idea of the “Big 3 ID,” which the authors define as follows:

These firms have a global presence, a proven track record, and moderate-to-advanced activity in every other prism beam.

From “The Biometric Digital Identity Prism Report, September 2023.”

The Big 3 are IDEMIA, NEC, and Thales.

Whoops, wrong Big Three, although the Soviet Union/Russia and the United Kingdom have also been heavily involved in fingerprint identification. By U.S. Signal Corps photo. – http://hdl.loc.gov/loc.pnp/cph.3a33351 http://teachpol.tcnj.edu/amer_pol_hist/thumbnail381.html, Public Domain, https://commons.wikimedia.org/w/index.php?curid=538831

But FindBiometrics and Acuity Market Intelligence didn’t invent the Big 3. The concept has been around for 40 years. And two of today’s Big 3 weren’t in the Big 3 when things started. Oh, and there weren’t always 3; sometimes there were 4, and some could argue that there were 5.

So how did we get from the Big 3 of 40 years ago to the Big 3 of today?

The Big 3 in the 1980s

Back in 1986 (eight years before I learned how to spell AFIS) the American National Standards Institute, in conjunction with the National Bureau of Standards, issued ANSI/NBS-ICST 1-1986, a data format for information interchange of fingerprints. The PDF of this long-superseded standard is available here.

Cover page of ANSI/NBS-ICST 1-1986. PDF here.

When creating this standard, ANSI and the NBS worked with a number of law enforcement agencies, as well as companies in the nascent fingerprint industry. There is a whole list of companies cited at the beginning of the standard, but I’d like to name four of them.

  • De La Rue Printrak, Inc.
  • Identix, Inc.
  • Morpho Systems
  • NEC Information Systems, Inc.

While all four of these companies produced computerized fingerprinting equipment, three of them had successfully produced automated fingerprint identification systems, or AFIS. As Chapter 6 of the Fingerprint Sourcebook subsequently noted:

  • De La Rue Printrak (formerly part of Rockwell, which was formerly Autonetics) had deployed AFIS equipment for the U.S. Federal Bureau of Investigation and for the cities of Minneapolis and St. Paul as well as other cities. Dorothy Bullard (more about her later) has written about Printrak’s history, as has Reference for Business.
  • Morpho Systems resulted from French AFIS efforts, separate from those of the FBI. These efforts launched Morpho’s long-standing relationship with the French National Police, as well as a similar (now former) relationship with Pierce County, Washington.
  • NEC had deployed AFIS equipment for the National Police Academy of Japan, and (after some prodding; read Chapter 6 for the story) the city of San Francisco. Eventually the state of California obtained an NEC system, which played a part in the identification of “Night Stalker” Richard Ramirez.
Richard Ramirez mug shot, taken on 12 December 1984 after an arrest for car theft. By Los Angeles Police Department – [1], Public Domain, https://commons.wikimedia.org/w/index.php?curid=29431687

After the success of the San Francisco and California AFIS systems, many other jurisdictions began clamoring for AFIS of their own, and turned to these three vendors to supply them.

The Big 4 in the 1990s

But in 1990, these three firms were joined by a fourth firm, the upstart Cogent Systems of South Pasadena, California.

While customers initially preferred the Big 3 to the upstart, Cogent Systems eventually installed a statewide system in Ohio and a border control system for the U.S. government, plus a vast number of local systems at the county and city level.

Between 1991 and 1994, the (Immigration and Naturalization Service) conducted several studies of automated fingerprint systems, primarily in the San Diego, California, Border Patrol Sector. These studies demonstrated to the INS the feasibility of using a biometric fingerprint identification system to identify apprehended aliens on a large scale. In September 1994, Congress provided almost $30 million for the INS to deploy its fingerprint identification system. In October 1994, the INS began using the system, called IDENT, first in the San Diego Border Patrol Sector and then throughout the rest of the Southwest Border.

From https://oig.justice.gov/reports/plus/e0203/back.htm

I was a proposal writer for Printrak (divested by De La Rue) in the 1990s, and competed against Cogent, Morpho, and NEC in AFIS procurements. By the time I moved from proposals to product management, the next redefinition of the “big” vendors occurred.

The Big 3 in 2003

There are a lot of name changes that affected AFIS participants, one of which was the 1988 name change of the National Bureau of Standards to the National Institute of Standards and Technology (NIST). As fingerprints and other biometric modalities were increasingly employed by government agencies, NIST began conducting tests of biometric systems. These tests continue to this day, as I have previously noted.

One of NIST’s first tests was the Fingerprint Vendor Technology Evaluation of 2003 (FpVTE 2003).

For those who are familiar with NIST testing, it’s no surprise that the test was thorough:

FpVTE 2003 consists of multiple tests performed with combinations of fingers (e.g., single fingers, two index fingers, four to ten fingers) and different types and qualities of operational fingerprints (e.g., flat livescan images from visa applicants, multi-finger slap livescan images from present-day booking or background check systems, or rolled and flat inked fingerprints from legacy criminal databases).

From https://www.nist.gov/itl/iad/image-group/fingerprint-vendor-technology-evaluation-fpvte-2003

Eighteen vendors submitted their fingerprint algorithms to NIST for one or more of the various tests, including Bioscrypt, Cogent Systems, Identix, SAGEM MORPHO (SAGEM had acquired Morpho Systems), NEC, and Motorola (which had acquired Printrak). And at the conclusion of the testing, the FpVTE 2003 summary (PDF) made this statement:

Of the systems tested, NEC, SAGEM, and Cogent produced the most accurate results.

Which would have been great news if I had been a product manager at NEC, SAGEM, or Cogent.

Unfortunately, I was a product manager at Motorola.

The effect of this report was…not good, and it at least partially (but not fully) contributed to Motorola’s loss of its long-standing client, the Royal Canadian Mounted Police, to Cogent.

The Big 3, 4, or 5 after 2003

So what happened in the years after FpVTE was released? Opinions vary, but here are three possible explanations for what happened next.

Did the Big 3 become the Big 4 again?

Now I probably have a bit of bias in this area since I was a Motorola employee, but I maintain that Motorola overcame this temporary setback and vaulted back into the Big 4 within a couple of years. Among other things, Motorola deployed a national 1000 pixels-per-inch (PPI) system in Sweden several years before the FBI did.

Did the Big 3 remain the Big 3?

Motorola’s arch-enemies at Sagem Morpho had a different opinion, which was revealed when the state of West Virginia finally got around to deploying its own AFIS. A bit ironic, since the national FBI AFIS system IAFIS was located in West Virginia, or perhaps not.

Anyway, Motorola had a very effective sales staff, as was apparent when the state issued its Request for Proposal (RFP) and explicitly said that the state wanted a Motorola AFIS.

That didn’t stop Cogent, Identix, NEC, and Sagem Morpho from bidding on the project.

After the award, Dorothy Bullard and I requested copies of all of the proposals for evaluation. While Motorola (to no one’s surprise) won the competition, Dorothy and I believed that we shouldn’t have won. In particular, our arch-enemies at Sagem Morpho raised a compelling argument that it should be the chosen vendor.

Their argument? Here’s my summary: “Your RFP says that you want a Motorola AFIS. The states of Kansas (see page 6 of this PDF) and New Mexico (see this PDF) USED to have a Motorola AFIS…but replaced their systems with our MetaMorpho AFIS because it’s BETTER than the Motorola AFIS.”

But were Cogent, Motorola, NEC, and Sagem Morpho the only “big” players?

Did the Big 3 become the Big 5?

While the Big 3/Big 4 took a lot of the headlines, there were a number of other companies vying for attention. (I’ve talked about this before, but it’s worthwhile to review it again.)

  • Identix, while making some efforts in the AFIS market, concentrated on creating live scan fingerprinting machines, where it competed (sometimes in court) against companies such as Digital Biometrics and Bioscrypt.
  • The fingerprint companies started to compete against facial recognition companies, including Viisage and Visionics.
  • Oh, and there were also iris companies such as Iridian.
  • And there were other ways to identify people. Even before 9/11 mandated REAL ID (which we may get any year now), Polaroid was making great efforts to improve driver’s licenses to serve as a reliable form of identification.

In short, there were a bunch of small identity companies all over the place.

But in the course of a few short years, Dr. Joseph Atick (initially) and Robert LaPenta (subsequently) concentrated on acquiring and merging those companies into a single firm, L-1 Identity Solutions.

These multiple mergers resulted in former competitors Identix and Digital Biometrics, and former competitors Viisage and Visionics, becoming part of one big happy family. (A multinational big happy family when you count Bioscrypt.) Eventually this company offered fingerprint, face, iris, driver’s license, and passport solutions, something that none of the Big 3/Big 4 could claim (although Sagem Morpho had a facial recognition offering). And L-1 had federal contracts and state contracts that could match anything that the Big 3/Big 4 offered.

So while L-1 didn’t have a state AFIS contract like Cogent, Motorola, NEC, and Sagem Morpho did, you could argue that L-1 was important enough to be ranked with the big boys.

So for the sake of argument let’s assume that there was a Big 5, and L-1 Identity Solutions was part of it, along with the three big boys Motorola, NEC, and Safran (which had acquired Sagem and thus now owned Sagem Morpho), and the independent Cogent Systems. These five companies competed fiercely with each other (see West Virginia, above).

In a two-year period, everything would change.

The Big 3 after 2009

Hang on to your seats.

The Motorola RAZR was hugely popular…until it wasn’t. Eventually Motorola split into two companies and sold off others, including the “Printrak” Biometric Business Unit. By NextG50 – Own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=130206087

If you’re keeping notes: between 2009 and 2011, Safran acquired Motorola’s biometrics business (renaming it MorphoTrak), 3M acquired Cogent Systems, and Safran then acquired L-1 Identity Solutions. The Big 5 had become the Big 3: 3M, Safran, and NEC (the one constant in all of this).

While there were subsequent changes (3M sold Cogent and other pieces to Gemalto, Safran sold all of Morpho to Advent International/Oberthur to form IDEMIA, and Gemalto was acquired by Thales), the Big 3 has remained constant over the last decade.

And that’s where we are today…pending future developments.

  • If Alphabet or Amazon reverse their current reluctance to market their biometric offerings to governments, the entire landscape could change again.
  • Or perhaps a new AI-fueled competitor could emerge.

The 1 Biometric Content Marketing Expert

This was written by John Bredehoft of Bredemarket.

If you work for the Big 3 or the Little 80+ and need marketing and writing services, the biometric content marketing expert can help you. There are several ways to get in touch:

  • Book a meeting with me at calendly.com/bredemarket. Be sure to fill out the information form so I can best help you. 

More on NIST’s FRTE-FATE Split

(Part of the biometric product marketing expert series)

I’ve talked about why NIST separated its FRVT efforts into FRTE and FATE.

But I haven’t talked about how NIST did this.

And as you all know, the second most important question after why is how.

Why the great renaming took place

As I noted back in August, NIST chose to split its Face Recognition Vendor Test (FRVT) into two parts—FRTE (Face Recognition Technology Evaluation) and FATE (Face Analysis Technology Evaluation).

At the time, NIST explained why it did this:

To bring clarity to our testing scope and goals

From https://www.nist.gov/programs-projects/face-technology-evaluations-frtefate

In essence, the Face Recognition Vendor Test had become a hodgepodge of different things. Some of the older tests were devoted to identification of individuals (face recognition), while some of the newer tests were looking at issues other than individual identification (face analysis).

Of course, this confusion between identification and non-identification is nothing new, which is why some of the people who read Gender Shades falsely concluded that if the three algorithms couldn’t classify people by sex or race, they couldn’t identify them as individuals.

But I digress. (I won’t do it again.)

NIST explained at the time:

Tracks that involve the processing and analysis of images will run under the FATE activity, and tracks that pertain to identity verification will run under FRTE.

From https://www.nist.gov/programs-projects/face-technology-evaluations-frtefate

How the great renaming happened in practice

What is in FRTE?

To date, most of my personal attention (and probably most of yours) has been paid to what was previously called FRVT 1:1 and FRVT 1:N.

These two tests are now part of FRTE, and were simply renamed to FRTE 1:1 and FRTE 1:N. They’ve even (for now) retained the same URLs, although that may change in the future.

Other tests that are now part of the FRTE bucket include the “Still Face and Iris 1:N Identification” effort (PDF), which has apparently also been reclassified as an FRTE effort.

What is in FATE?

Obviously, presentation attack detection (PAD) testing falls into the FATE category, since this does not measure the identification of an individual, but whether a person is truly there or not. The first results have been released; I previously wrote about this here.

The next obvious category is age estimation testing, which again does not try to identify an individual, but estimate how old the person is. This testing has not yet started, but I talked about the concept of age estimation previously.

Other parts of FATE include morph detection, a topic I’ll return to at the end of this post.

It is very possible that NIST will add additional FRTE and FATE tests in the future. These may be brand new tests, or variations of existing tests. For example, when all of us started wearing face masks a couple of years ago, NIST simulated face masks on its existing facial images and created the data for a face mask test.

What do you think NIST should test next, either in the FRTE or the FATE category?

More on morphing

And yes, I’m concluding this post with this video. By the way, this is the full version that (possibly intentionally) caused a ton of controversy and was immediately banned for nearly a quarter century. The morphing starts at 5:30. The crotch-grabbing starts right after the 7:00 mark.

From https://www.youtube.com/watch?v=pTFE8cirkdQ

But on a less controversial note, let’s give equal time to Godley & Creme.

From https://www.youtube.com/watch?v=ypMnBuvP5kA

Perhaps because of the lack of controversy with Godley & Creme’s earlier effort, Ashley Clark prefers it to the later Michael Jackson/John Landis effort.

Whereas Godley & Creme used editing technology to embrace and reflect the ambiguous murk of thwarted love, Jackson and Landis imposed an artificial sheen on the complexity of identity; a sheen that feels poignant if not outright tragic in the wake of Jackson’s ultimate appearance and fate. Really, it did matter if he was black or white.

From https://ashleyclark.substack.com/p/black-or-white-and-crying-all-over

But I digress. (I lied.)

Sadly, morphing escaped from the hands of music video directors and artists and entered the world of fraudsters, as Regula explains.

One of the main application areas of facial morphing for criminal purposes is forging identity documents. The attack targets face-based identity verification systems and procedures. Most often it involves passports; however, any ID document with a photo can be compromised.

One well-known case happened in 2018 when a group of activists merged together a photo of Federica Mogherini, the High Representative of the European Union for Foreign Affairs and Security Policy, and a member of their group. Using this morphed photo, they managed to obtain an authentic German passport.

From https://regulaforensics.com/blog/facial-morphing/

Which is why NIST didn’t just cry about the problem. They tested it to assist the vendors in solving the problem.