Identification Perfection is Impossible

(Part of the biometric product marketing expert series)

There are many different types of perfection.

Jehan Cauvin (we don’t spell his name like he spelled it). By Titian – Bridgeman Art Library: Object 80411, Public Domain, https://commons.wikimedia.org/w/index.php?curid=6016067

This post concentrates on IDENTIFICATION perfection, or the ability to enjoy zero errors when identifying individuals.

The risk of claiming identification perfection (or any perfection) is that a SINGLE counter-example disproves the claim.

  • If you assert that your biometric solution offers 100% accuracy, a SINGLE false positive or false negative shatters the assertion.
  • If you claim that your presentation attack detection solution exposes deepfakes (face, voice, or other), then a SINGLE deepfake that gets past your solution disproves your claim.
  • And as for the pre-2009 claim that latent fingerprint examiners never make a mistake in an identification…well, ask Brandon Mayfield about that one.

In fact, I go so far as to avoid using the phrase “no two fingerprints are alike.” Many years ago (before 2009) in an International Association for Identification meeting, I heard someone justify the claim by saying, “We haven’t found a counter-example yet.” That doesn’t mean that we’ll NEVER find one.

You’ve probably heard me tell the story before about how I misspelled the word “quality.”

In a process improvement document.

While employed by Motorola (pre-split).

At first glance, it appears that Motorola would be the last place to make a boneheaded mistake like that. After all, Motorola is known for its focus on quality.

But in actuality, Motorola was the perfect place to make such a mistake, since it was one of the champions of the “Six Sigma” philosophy (which targets a maximum of 3.4 defects per million opportunities). Motorola realized that manufacturing perfection is impossible, so manufacturers (and the people in Motorola’s weird Biometric Business Unit) should instead concentrate on reducing the error rate as much as possible.

So one misspelling could be tolerated, but I shudder to think what would have happened if I had misspelled “quality” a second time.

Login.gov and IAL2 #realsoonnow

Back in August 2023, the U.S. General Services Administration published a blog post that included the following statement:

Login.gov is on a path to providing an IAL2-compliant identity verification service to its customers in a responsible, equitable way. Building on the strong evidence-based identity verification that Login.gov already offers, Login.gov is on a path to providing IAL2-compliant identity verification that ensures both strong security and broad and equitable access.

From https://www.gsa.gov/blog/2023/08/18/reducing-fraud-and-increasing-access-drives-record-adoption-and-usage-of-logingov

It’s nice to know…NOW…that Login.gov is working to achieve IAL2.

This post explains what the August 2023 GSA post said, and what it didn’t say.

But first, I’ll define what Login.gov and “IAL2” are.

What is Login.gov?

Here is what Login.gov says about itself:

Login.gov is a secure sign in service used by the public to sign in to participating government agencies. Participating agencies will ask you to create a Login.gov account to securely access your information on their website or application.

You can use the same username and password to access any agency that partners with Login.gov. This streamlines your process and eliminates the need to remember multiple usernames and passwords.

From https://www.login.gov/what-is-login/

Obviously there are a number of private companies (over 80 last I counted) that provide secure access to information, but Login.gov is provided by the government itself—specifically by the General Services Administration’s Technology Transformation Services. Agencies at the federal, state, and local level can work with the GSA TTS’ “18F” organization to implement solutions such as Login.gov.

Why would agencies implement Login.gov? Because the agencies want to protect their constituents’ information. If fraudsters capture personally identifiable information (PII) of someone applying for government services, the breached government agency will face severe repercussions. Login.gov is supposed to protect its partner agencies from these nightmares.

How does Login.gov do this?

  • Sometimes you might use two-factor authentication consisting of a password and a second factor such as an SMS code or the use of an authentication app.
  • In more critical cases, Login.gov requests a more reliable method of identification, such as a government-issued photo ID (driver’s license, passport, etc.).
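To make the first bullet concrete, here is a minimal sketch of a password-plus-TOTP check. The user record fields and the use of the pyotp library are my own illustrative assumptions; Login.gov has not published its implementation.

```python
# Minimal sketch of a password-plus-TOTP second factor check. The user record
# fields and the pyotp usage are illustrative assumptions, not anything
# Login.gov has published about its own implementation.
import hashlib
import hmac

import pyotp  # pip install pyotp


def verify_two_factors(user: dict, password: str, totp_code: str) -> bool:
    """Return True only if BOTH factors check out."""
    # Factor 1: something you know (a password, stored as a salted hash).
    candidate = hashlib.pbkdf2_hmac(
        "sha256", password.encode(), user["salt"], 100_000
    )
    if not hmac.compare_digest(candidate, user["password_hash"]):
        return False
    # Factor 2: something you have (an authenticator app generating codes).
    totp = pyotp.TOTP(user["totp_secret"])
    return totp.verify(totp_code, valid_window=1)  # tolerate 30s clock drift
```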

What is IAL2?

At the risk of repeating myself, I’ll briefly go over what “Identity Assurance Level 2” (IAL2) is.

The U.S. National Institute of Standards and Technology, in its publication NIST SP 800-63A, has defined “identity assurance levels” (IALs) that can be used when dealing with digital identities. It’s helpful to review how NIST has defined the IALs. (I’ll define the other acronyms as we go along.)

Assurance in a subscriber’s identity is described using one of three IALs:

IAL1: There is no requirement to link the applicant to a specific real-life identity. Any attributes provided in conjunction with the subject’s activities are self-asserted or should be treated as self-asserted (including attributes a [Credential Service Provider] CSP asserts to an [Relying Party] RP). Self-asserted attributes are neither validated nor verified.

IAL2: Evidence supports the real-world existence of the claimed identity and verifies that the applicant is appropriately associated with this real-world identity. IAL2 introduces the need for either remote or physically-present identity proofing. Attributes could be asserted by CSPs to RPs in support of pseudonymous identity with verified attributes. A CSP that supports IAL2 can support IAL1 transactions if the user consents.

IAL3: Physical presence is required for identity proofing. Identifying attributes must be verified by an authorized and trained CSP representative. As with IAL2, attributes could be asserted by CSPs to RPs in support of pseudonymous identity with verified attributes. A CSP that supports IAL3 can support IAL1 and IAL2 identity attributes if the user consents.

From https://pages.nist.gov/800-63-3/sp800-63a.html#sec2

So in its simplest terms, IAL2 requires evidence of a verified credential so that an online person can be linked to a real-life identity. If someone says they’re “John Bredehoft” and fills in an online application to receive government services, IAL2 compliance helps to ensure that the person filling out the online application truly IS John Bredehoft, and not Bernie Madoff.

As more and more of us conduct business—including government business—online, IAL2 compliance is essential to reduce fraud.

One more thing about IAL2 compliance. The mere possession of a valid government-issued photo ID is NOT sufficient for IAL2 compliance. After all, Bernie Madoff may be using John Bredehoft’s driver’s license. To make sure that it’s John Bredehoft using John Bredehoft’s driver’s license, an additional check is needed.

This has been explained by ID.me, a private company that happens to compete with Login.gov to provide identity proofing services to government agencies.

Biometric comparison (e.g., selfie with liveness detection or fingerprint) of the strongest piece of evidence to the applicant

From https://network.id.me/article/what-is-nist-ial2-identity-verification/

So you basically take the information on a driver’s license and perform a facial recognition 1:1 comparison with the person possessing the driver’s license, ideally using liveness detection, to make sure that the presented person is not a fake.
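If you want to picture what that 1:1 comparison looks like under the hood, here is a minimal sketch. The embedding vectors come from whatever face model you use, and the 0.6 threshold is an illustrative placeholder, not a published standard.

```python
# Minimal sketch of a 1:1 face comparison: represent each face image as an
# embedding vector, then compare the two embeddings with cosine similarity
# against a tuned threshold. The 0.6 value is illustrative, not a standard.
import numpy as np


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def is_same_person(id_photo_embedding: np.ndarray,
                   selfie_embedding: np.ndarray,
                   threshold: float = 0.6) -> bool:
    """True if the live selfie and the ID document photo likely match."""
    return cosine_similarity(id_photo_embedding, selfie_embedding) >= threshold
```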

So what?

So the GSA was apparently claiming how secure Login.gov was. Guess who challenged the claim?

The GSA.

Now sometimes it’s ludicrous to think that the government can police itself, but in some cases government actually identifies government faults.

Of course, this works best when you can identify problems with some other government entity.

Which is why the General Services Administration has an Inspector General. And in March 2023, the GSA Inspector General released a report with the following title: “GSA Misled Customers on Login.gov’s Compliance with Digital Identity Standards.”

The title is pretty clear, but FedScoop summarized the findings for those who missed the obvious:

As part of an investigation that has run since last April (2022), GSA’s Office of the Inspector General found that the agency was billing agencies for IAL2-compliant services, even though Login.gov did not meet Identity Assurance Level 2 (IAL2) standards.

GSA knowingly billed over $10 million for services provided through contracts with other federal agencies, even though Login.gov is not IAL2 compliant, according to the watchdog.

From https://fedscoop.com/gsa-login-gov-watchdog-report/

So now GSA is explicitly saying that Login.gov ISN’T IAL2-compliant.

Which helps its private sector competitors.

In Which I “Nyah Nyah” Tongue Identification

(Part of the biometric product marketing expert series)

If you listen closely, you can hear about all sorts of wonderful biometric identifiers. They range from the common (such as fingerprint ridges and minutiae) to the esoteric (my favorite was the 2013 story about Japanese car seats that captured butt prints).

The butt sensor at work in a Japanese lab. (Advanced Institute of Industrial Technology photo). From https://www.cartalk.com/content/bottom-line-japanese-butt-sensor-protect-your-car

A former coworker who left the biometric world for the world of Adobe Experience Manager (AEM) expert consulting brought one of the latter to my attention.

Tongue prints.

This article, shared with me by Krassimir Boyanov of KBWEB Consult, links to a piece from Science ABC.

As is usual with such articles, the title is breathless: “How Tongue Prints Are Going To Revolutionize Identification Methods.”

Forget about fingerprints and faces and irises and DNA and gait recognition and butt prints. Tongue prints are the answer!

Benefits of tongue print biometrics

To its credit, the article does point out two benefits of using tongue prints as a biometric identifier.

  • Consent and privacy. Unlike fingerprints and irises (and faces) which are always exposed and can conceivably be captured without the person’s knowledge, the subject has to provide consent before a tongue image is captured. For the most part, tongues are privacy-perfect.
  • Liveness. The article claims that “sticking out one’s tongue is an undeniable ‘proof of life.'” Perhaps that’s an exaggeration, but it is admittedly much harder to fake a tongue than it is to fake a finger or a face.

Are tongues unique?

But the article also makes these claims.

Two main attributes are measured for a tongue print. First is the tongue shape, as the shape of the tongue is unique to everyone.

From https://www.scienceabc.com/innovation/how-tongue-prints-are-going-to-revolutionize-identification-methods.html

The other notable feature is the texture of the tongue. Tongues consist of a number of ridges, wrinkles, seams and marks that are unique to every individual.

From https://www.scienceabc.com/innovation/how-tongue-prints-are-going-to-revolutionize-identification-methods.html

So tongue shape and tongue texture are unique to every individual?

Prove it.

Even for the more common biometric identifiers, we lack scientific proof that they are unique to every individual.

But at least those modalities are under study. Has anyone conducted a rigorous study to prove or disprove the uniqueness of tongues? By “rigorous,” I mean a study that evaluates millions of tongues, in the same way that NIST has evaluated millions of fingerprints, faces, and irises.

We know that NIST hasn’t studied tongues.

I did find this 2017 tongue identification pilot study, but it included a whopping 20 participants. And the study authors (who are always seeking funding anyway) admitted that “large-scale studies are required to validate the results.”

Conclusion

So if a police officer tells you to stick out your tongue for identification purposes, think twice.

Why Age-Restricted Gig Economy Companies Need Continuous Authentication (and Liveness Detection)

If you ask any one of us in the identity verification industry, we’ll tell you how identity verification ensures that you know who is accessing your service.

  • During the identity verification/onboarding step, one common technique is to capture the live face of the person who is being onboarded, then compare that to the face captured from the person’s government identity document. As long as you have assurance that (a) the face is live and not a photo, and (b) the identity document has not been tampered with, you positively know who you are onboarding.
  • The authentication step usually captures a live face and compares it to the face that was captured during onboarding, thus positively showing that the right person is accessing the previously onboarded account.
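Putting the onboarding bullet into code form, here is a minimal sketch of the three gates: liveness, document authenticity, and the 1:1 face match. All three helper functions are hypothetical stand-ins for real vendor tools.

```python
# Sketch of the onboarding gates described above. The three helpers are
# hypothetical stand-ins; wire in your own liveness (PAD), document
# inspection, and 1:1 face matching tools.
def check_liveness(selfie) -> bool:
    raise NotImplementedError("plug in a presentation attack detection tool")


def check_document_authenticity(doc_image) -> bool:
    raise NotImplementedError("plug in a document inspection tool")


def faces_match(doc_image, selfie) -> bool:
    raise NotImplementedError("plug in a 1:1 face comparison tool")


def onboard(selfie, doc_image) -> bool:
    """Onboard only if the face is live, the document is untampered,
    and the document photo matches the live selfie."""
    return (check_liveness(selfie)
            and check_document_authenticity(doc_image)
            and faces_match(doc_image, selfie))
```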

Sounds like the perfect solution, especially in industries that rely on age verification to ensure that people are old enough to access the service.

Therefore, if you are employing robust identity verification and authentication that includes age verification, this should never happen.

By LukaszKatlewa – Own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=49248622

Eduardo Montanari, who manages delivery logistics at a burger shop north of São Paulo, has noticed a pattern: Every time an order pickup is assigned to a female driver, there’s a good chance the worker is a minor.

From https://restofworld.org/2023/underage-gig-workers-brazil/

An underage delivery person who has been onboarded and authenticated, and whose age has been verified? That’s impossible, you say! Read on.

31,000 people already know how to bypass onboarding and authentication

Rest of World wrote an article (tip of the hat to Bianca Gonzalez of Biometric Update) entitled “Underage gig workers keep outsmarting facial recognition.”

Outsmarting onboarding

How do the minors do it?

On YouTube, a tutorial — one of many — explains “how to deliver as a minor.” It has over 31,000 views. “You have to create an account in the name of a person who’s the right age. I created mine in my mom’s name,” says a boy, who identifies himself as a minor in the video.

From https://restofworld.org/2023/underage-gig-workers-brazil/
From https://www.youtube.com/watch?v=59vaKab4g2M. “Botei no da minha mãe não conta da minha.” (“I put it on my mother’s account, it doesn’t count on mine.”)

Once a cooperative parent or older sibling agrees to help, the account is created in the older person’s name, the older person’s face and identity document are used to create the account, and everything is valid.

Outsmarting authentication

Yes, but what about authentication?

That’s why it’s helpful to use a family member, or someone who lives in the minor’s home.

Let’s say little Maria is at home, doing her homework, when her gig economy app rings with a delivery request. Now Maria was smart enough to have her older sister Irene or her mama Cecile perform the onboarding with the delivery app. If she’s at home, she can go to Irene or Cecile, have them perform the authentication, and then she’s off on her bike to make money.

(Alternatively, if the app does not support liveness detection, Maria can just hold a picture of Irene or Cecile up to the camera and authenticate.)

  • The onboarding process was completed by the account holder.
  • The authentication was completed by the account holder.
  • But the account holder isn’t the one that’s actually using the service. Once authentication is complete, anyone can access the service.

So how do you stop underage gig economy use?

According to Rest of World, one possible solution is to tattle on underage delivery people. If you see something, say something.

But what’s the incentive for a restaurant owner or delivery recipient to report that their deliveries are being performed by a kid?

“The feeling we have is that, at least this poor boy is working. I know this is horrible, but here in Brazil we end up seeing it as an opportunity … It’s ridiculous,” (psychologist Regiane Couto) said.

From https://restofworld.org/2023/underage-gig-workers-brazil/

A much better solution is to replace one-time authentication with continuous authentication, or at least be smarter in authentication. For example, a gig delivery worker could be required to authenticate at multiple points in the process (see the sketch after the list):

  • When the worker receives the delivery request.
  • When the worker arrives at the restaurant.
  • When the worker makes the delivery.

It’s too difficult to drag big sister Irene or mama Cecile to ALL of these points.
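Here is a minimal sketch of what such checkpoint authentication might look like. The checkpoint names, job record fields, and authenticate_face() helper are all my own illustrative assumptions, not any delivery company’s actual system.

```python
# Sketch of checkpoint-based (continuous) authentication for one delivery job.
# authenticate_face() is a hypothetical stand-in for a selfie capture with
# liveness detection plus a 1:1 match against the onboarded face.
from datetime import datetime, timezone

CHECKPOINTS = ("request_accepted", "arrived_at_restaurant", "delivery_made")


def authenticate_face(enrolled_embedding, selfie) -> bool:
    raise NotImplementedError("plug in liveness detection + 1:1 face matching")


def record_checkpoint(job: dict, checkpoint: str, selfie) -> None:
    """Suspend the job unless the onboarded account holder is present NOW."""
    if checkpoint not in CHECKPOINTS:
        raise ValueError(f"unknown checkpoint: {checkpoint}")
    if not authenticate_face(job["account_holder_embedding"], selfie):
        raise PermissionError("re-authentication failed; job suspended")
    # Bonus: each timestamp doubles as delivery-analytics data.
    job.setdefault("audit_log", []).append(
        (checkpoint, datetime.now(timezone.utc).isoformat())
    )
```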

As an added bonus, these authentications provide timestamps of critical points in the delivery process, which the delivery company and/or restaurant can use for their analytics.

Problem solved.

Except that little Maria doesn’t have any excuse and has to complete her homework.

Vision Transformer (ViT) Models and Presentation Attack Detection

I tend to view presentation attack detection (PAD) through the lens of iBeta or occasionally of BixeLab. But I need to remind myself that these are not the only entities examining PAD.

A recent paper authored by Koushik Srivatsan, Muzammal Naseer, and Karthik Nandakumar of the Mohamed Bin Zayed University of Artificial Intelligence (MBZUAI) addresses PAD from a research perspective. I honestly don’t understand the research, but perhaps you do.

Flip spoofing his natural appearance by portraying Geraldine. Some were unable to detect the attack. By NBC Television – eBay item (photo front, photo back), Public Domain, https://commons.wikimedia.org/w/index.php?curid=16476809

Here is the abstract from “FLIP: Cross-domain Face Anti-spoofing with Language Guidance.”

Face anti-spoofing (FAS) or presentation attack detection is an essential component of face recognition systems deployed in security-critical applications. Existing FAS methods have poor generalizability to unseen spoof types, camera sensors, and environmental conditions. Recently, vision transformer (ViT) models have been shown to be effective for the FAS task due to their ability to capture long-range dependencies among image patches. However, adaptive modules or auxiliary loss functions are often required to adapt pre-trained ViT weights learned on large-scale datasets such as ImageNet. In this work, we first show that initializing ViTs with multimodal (e.g., CLIP) pre-trained weights improves generalizability for the FAS task, which is in line with the zero-shot transfer capabilities of vision-language pre-trained (VLP) models. We then propose a novel approach for robust cross-domain FAS by grounding visual representations with the help of natural language. Specifically, we show that aligning the image representation with an ensemble of class descriptions (based on natural language semantics) improves FAS generalizability in low-data regimes. Finally, we propose a multimodal contrastive learning strategy to boost feature generalization further and bridge the gap between source and target domains. Extensive experiments on three standard protocols demonstrate that our method significantly outperforms the state-of-the-art methods, achieving better zero-shot transfer performance than five-shot transfer of “adaptive ViTs”.

From https://koushiksrivats.github.io/FLIP/?utm_source=tldrai

FLIP, by the way, stands for “Face Anti-Spoofing with Language-Image Pretraining.” CLIP is “contrastive language-image pre-training.”

While I knew I couldn’t master this, I did want to know what LIP and ViT were.

However, I couldn’t find something that just talked about LIP: all the sources I found talked about FLIP, CLIP, PLIP, GLIP, etc. So I gave up and looked at Matthew Brems’ easy-to-read explainer on CLIP:

CLIP is the first multimodal (in this case, vision and text) model tackling computer vision and was recently released by OpenAI on January 5, 2021….CLIP is a bridge between computer vision and natural language processing.

From https://www.kdnuggets.com/2021/03/beginners-guide-clip-model.html

Sadly, Brems didn’t address ViT, so I turned to Chinmay Bhalerao.

Vision Transformers work by first dividing the image into a sequence of patches. Each patch is then represented as a vector. The vectors for each patch are then fed into a Transformer encoder. The Transformer encoder is a stack of self-attention layers. Self-attention is a mechanism that allows the model to learn long-range dependencies between the patches. This is important for image classification, as it allows the model to learn how the different parts of an image contribute to its overall label.

The output of the Transformer encoder is a sequence of vectors. These vectors represent the features of the image. The features are then used to classify the image.

From https://medium.com/data-and-beyond/vision-transformers-vit-a-very-basic-introduction-6cd29a7e56f3
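Since a few lines of code may explain ViTs better than prose, here is a toy PyTorch sketch of that pipeline: patchify, encode with self-attention, classify. This is my own illustration with toy dimensions, not the FLIP authors’ model.

```python
# Toy PyTorch sketch of the ViT pipeline described above: cut the image into
# patches, project each patch to a vector, run self-attention over the patch
# sequence, and classify from the pooled features. Dimensions are toy values.
import torch
import torch.nn as nn


class TinyViT(nn.Module):
    def __init__(self, image_size=224, patch_size=16, dim=128, num_classes=2):
        super().__init__()
        num_patches = (image_size // patch_size) ** 2
        # A conv with stride == patch size both splits the image into patches
        # and projects each patch to a dim-sized vector.
        self.patchify = nn.Conv2d(3, dim, kernel_size=patch_size, stride=patch_size)
        self.pos_embedding = nn.Parameter(torch.randn(1, num_patches, dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.classifier = nn.Linear(dim, num_classes)  # e.g., live vs. spoof

    def forward(self, images):                    # (batch, 3, 224, 224)
        x = self.patchify(images)                 # (batch, dim, 14, 14)
        x = x.flatten(2).transpose(1, 2)          # (batch, 196, dim) patch seq
        x = self.encoder(x + self.pos_embedding)  # self-attention over patches
        return self.classifier(x.mean(dim=1))     # pool patches, then classify
```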

So Srivatsan et al. combined tiny little bits of images with language representations to determine which images are (using my words) “fake fake fake.”

From https://www.youtube.com/shorts/7B9EiNHohHE

Because a bot can’t always recognize a mannequin.

Or perhaps the bot and the mannequin are in shenanigans.

The devil made them do it.

I Guess I Was Fated to Write About NIST IR 8491 on Passive Presentation Attack Detection

Remember in mid-August when I said that the U.S. National Institute of Standards and Technology was splitting its FRVT tests into FRTE and FATE tests?

Well, the FATE side of the house has released its first two studies, including one entitled “Face Analysis Technology Evaluation (FATE) Part 10: Performance of Passive, Software-Based Presentation Attack Detection (PAD) Algorithms” (NIST Internal Report NIST IR 8491; PDF here).

By JamesHarrison – Own work, Public Domain, https://commons.wikimedia.org/w/index.php?curid=4873863

I’ve written all about this study in a LinkedIn article under my own name that answers the following questions:

  • What is a presentation attack?
  • How do you detect presentation attacks?
  • Why does NIST care about presentation attacks?
  • And why should you?

My LinkedIn article, “Why NIST Cares About Presentation Attack Detection…and Why You Should Also,” can be found at the link https://www.linkedin.com/pulse/why-nist-cares-presentation-attack-detectionand-you-should-bredehoft/.

ICYMI: Gummy Fingers

In case you missed it…

My recent post “Why Apple Vision Pro Is a Technological Biometric Advance, but Not a Revolutionary Biometric Event” included the following sentence:

But the iris security was breached by a “dummy eye” just a month later, in the same way that gummy fingers and face masks have defeated other biometric technologies.

From https://bredemarket.com/2023/06/12/vision-pro-not-revolutionary-biometrics-event/

A biometrics industry colleague noticed the rhyming words “dummy” and “gummy” and wondered if the latter was a typo. It turns out it wasn’t.

To my knowledge, these gummy fingers do NOT have ridges. From https://www.candynation.com/gummy-fingers

Back in 2002, researcher Tsutomu Matsumoto used “gummy bears” gelatin to create a fake finger that fooled a fingerprint reader.

At the time, this news WAS really “scary,” since it suggested that you could access a fingerprint reader-protected site with something that wasn’t a finger. Gelatin. A piece of metal. A photograph.

Except that the fingerprint reader world didn’t stand still after 2002, and the industry developed ways to detect spoofed fingers.

For the rest of the story, see “We Survived Gummy Fingers. We’re Surviving Facial Recognition Inaccuracy. We’ll Survive Voice Spoofing.”


The Great Renaming: FRVT is now FRTE and FATE

Face professionals, your world just changed.

I and countless others have spent the last several years referring to the National Institute of Standards and Technology’s Face Recognition Vendor Test, or FRVT. I guess some people have spent almost a quarter century referring to FRVT, because the term has been in use since 1999.

Starting now, you’re not supposed to use the FRVT acronym anymore.

From NIST:

Face Technology Evaluations – FRTE/FATE

To bring clarity to our testing scope and goals, what was formerly known as FRVT has been rebranded and split into FRTE (Face Recognition Technology Evaluation) and FATE (Face Analysis Technology Evaluation).  Tracks that involve the processing and analysis of images will run under the FATE activity, and tracks that pertain to identity verification will run under FRTE.  All existing participation and submission procedures remain unchanged.

From https://www.nist.gov/programs-projects/face-technology-evaluations-frtefate

So, for example, the former “FRVT 1:1” and “FRVT 1:N” are now named “FRTE 1:1” and “FRTE 1:N,” respectively. At least at present, the old links https://pages.nist.gov/frvt/html/frvt11.html and https://pages.nist.gov/frvt/html/frvt1N.html still work.

The change actually makes sense, since tasks such as age estimation and presentation attack detection (liveness detection) do not directly relate to the identification of individuals.

Us old folks just have to get used to the change.

I just hope that the new “FATE” acronym doesn’t mean that some algorithms are destined to perform better than others.

Communicating How Your Firm Fights Synthetic Identities

(Updated question count 10/23/2023)

Does your firm fight crooks who try to fraudulently use synthetic identities? If so, how do you communicate your solution?

This post explains what synthetic identities are (with examples), describes four ways to detect them, and closes by answering the communication question.

While this post is primarily intended for identity firms who can use Bredemarket’s marketing and writing services, anyone else who is interested in synthetic identities can read along.

What are synthetic identities?

To explain what synthetic identities are, let me start by telling you about Jason Brown.

Jason Brown wasn’t Jason Brown

You may not have heard of him unless you were in Atlanta, Georgia in 2019 and lived near the apartment he rented.

Jason Brown’s renting of an apartment isn’t all that unusual.

If you were to visit Brown’s apartment in February 2019, you would find credit cards and financial information for Adam M. Lopez and Carlos Rivera.

Now that’s a little unusual, especially since Lopez and Rivera never existed.

For that matter, Jason Brown never existed either.

Brown was synthetically created from a stolen social security number and a fake California driver’s license. The creator was a man named Corey Cato, who was engaged in massive synthetic identity fraud. If you want to talk about a case that emphasizes the importance of determining financial identity, this is it.

A Georgia man was sentenced Sept. 1 (2022) to more than seven years in federal prison for participating in a nationwide fraud ring that used stolen social security numbers, including those belonging to children, to create synthetic identities used to open lines of credit, create shell companies, and steal nearly $2 million from financial institutions….

Cato joined conspiracies to defraud banks and illegally possess credit cards. Cato and his co-conspirators created “synthetic identities” by combining false personal information such as fake names and dates of birth with the information of real people, such as their social security numbers. Cato and others then used the synthetic identities and fake ID documents to open bank and credit card accounts at financial institutions. Cato and his co-conspirators used the unlawfully obtained credit cards to fund their lifestyles.

From https://www.ice.gov/news/releases/hsi-investigates-synthetic-identities-scheme-defrauded-banks-nearly-2m

Talking about synthetic identity at Victoria Gardens

Here’s a video that I created on Saturday that describes, at a very high level, how synthetic identities can be used fraudulently. People who live near Rancho Cucamonga, California will recognize the Victoria Gardens shopping center, proof that synthetic identity theft can occur far away from Georgia.

From https://www.youtube.com/watch?v=oDrSBlDJVCk

Note that synthetic identity theft is different from stealing someone else’s existing identity. In this case, a new identity is created.

So how do you catch these fraudsters?

Catching the identity synthesizers

If you’re renting out an apartment, and Jason Brown shows you his driver’s license and provides his Social Security Number, how can you detect if Brown is a crook? There are four methods to verify that Jason Brown exists, and that he’s the person renting your apartment.

Method One: Private Databases

One way to check Jason Brown’s story is to perform credit checks and other data investigations using financial databases.

  • Did Jason Brown just spring into existence within the past year, with no earlier credit record? That seems suspicious.
  • Does Jason Brown’s credit record appear TOO clean? That seems suspicious.
  • Does Jason Brown share information such as a common social security number with other people? Are any of those other identities also fraudulent? That is DEFINITELY suspicious.

This is one way that many firms detect synthetic identities, and for some firms it is the ONLY way they detect synthetic identities. And these firms have to tell their story to their prospects.
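For illustration, those three red flags could be combined into a crude risk score, as in the sketch below. The record fields and weights are invented; real fraud models are far more sophisticated.

```python
# A toy scoring sketch for the three red flags above. Record fields and
# weights are invented for illustration; real fraud models are far richer.
def synthetic_identity_risk(record: dict) -> int:
    """Score the red flags; higher means more suspicious."""
    score = 0
    months = record.get("credit_history_months", 0)
    if months < 12:
        score += 1  # the identity sprang into existence recently
    if record.get("derogatory_marks", 1) == 0 and months < 24:
        score += 1  # the record looks TOO clean for its age
    if record.get("ssn_shared_with_other_identities", False):
        score += 3  # DEFINITELY suspicious: shared social security number
    return score  # e.g., treat a score of 3 or more as likely synthetic
```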

If your firm offers a tool to verify identities via private databases, how do you let your prospects know the benefits of your tool, and why your solution is better than all other solutions?

Method Two: Check That Driver’s License (or other government document)

What about that driver’s license that Brown presented? There are a wide variety of software tools that can check the authenticity of driver’s licenses, passports, and other government-issued documents. Some of these tools existed back in 2019 when “Brown” was renting his apartment, and a number of them exist today.

Maybe your firm has created such a tool, or uses a tool from a third party.

If your firm offers this capability, how can your prospects learn about its benefits, and why your solution excels?

Method Three: Check Government Databases

Checking the authenticity of a government-issued document may not be enough, since the document itself may be legitimate, but the implied credentials may no longer be legitimate. For example, if my California driver’s license expires in 2025, but I move to Minnesota in 2023 and get a new license, my California driver’s license is no longer valid, even though I have it in my possession.

Why not check the database of the Department of Motor Vehicles (or the equivalent in your state) to see if there is still an active driver’s license for that person?

The American Association of Motor Vehicle Administrators (AAMVA) maintains a Driver’s License Data Verification (DLDV) Service in which participating jurisdictions allow other entities to verify the license data for individuals. Your firm may be able to access the DLDV data for selected jurisdictions, providing an extra identity verification tool.
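As a thought experiment only, such a verification call might look something like the sketch below. I must stress that the endpoint, request fields, and response shape are placeholders I invented; the actual DLDV interface is specified only to participating entities.

```python
# Entirely hypothetical sketch of querying an authoritative driver's license
# record service. The URL, request fields, and response shape below are
# placeholders I invented; the real AAMVA DLDV interface is specified only
# to participating entities.
import requests

VERIFY_ENDPOINT = "https://example.invalid/dldv-style/verify"  # placeholder


def license_still_active(jurisdiction: str, license_number: str, dob: str) -> bool:
    """A physically genuine card may already be superseded, so ask the
    issuing jurisdiction whether the license is still active."""
    response = requests.post(
        VERIFY_ENDPOINT,
        json={
            "jurisdiction": jurisdiction,
            "document_number": license_number,
            "date_of_birth": dob,
        },
        timeout=10,
    )
    response.raise_for_status()
    return response.json().get("status") == "active"  # placeholder field
```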

If your firm offers this capability, how can your prospects learn where it is available, what its benefits are, and why it is an important part of your solution?

Method Four: Conduct the “Who You Are” Test

There is one more way to confirm that a person is real, and that is to check the person. Literally.

If someone on a smartphone or videoconference says that they are Jason Brown, how do you know that it’s the real Jason Brown and not Jim Smith, or a previous recording or simulation of Jason Brown?

This is where tools such as facial recognition and liveness detection come into play.

  • You can ensure that the live face matches any face on record.
  • You can also confirm that the face is truly a live face.

In addition to these two tests, you can compare the face against the face on the presented driver’s license or passport to offer additional confirmation of true identity.

Now some companies offer facial recognition, others offer liveness detection, others match the live face to a face on a government ID, and many companies offer two or three of these capabilities.

One more time: if your firm offers these capabilities—either your own or someone else’s—what are the benefits of your algorithms? (For example, are they more accurate than competing algorithms? And under what conditions?) And why is your solution better than the others?

This is for the firms who fight synthetic identities

While most of this post is of general interest to anyone dealing with synthetic identities, this part of this post is specifically addressed to identity and biometric firms who provide synthetic identity-fighting solutions.

When you communicate about your solutions, your communicator needs to have certain types of experience.

  • Industry experience. Perhaps you sell your identity solution to financial institutions, or educational institutions, or a host of other industries (gambling/gaming, healthcare, hospitality, retail, sport/concert venues, and others). You need someone with this industry experience.
  • Solution experience. Perhaps your communications require someone with 29 years of experience in identity, biometrics, and technology marketing, including experience with all five factors of authentication (and verification).
  • Communication experience. Perhaps you need to effectively communicate with your prospects in a customer focused, benefits-oriented way. (Content that is all about you and your features won’t win business.)

Perhaps you can use Bredemarket, the identity content marketing expert. I work with you (and I have worked with others) to ensure that your content meets your awareness, consideration, and/or conversion goals.

How can I work with you to communicate your firm’s anti-synthetic identity message? For example, I can apply my identity/biometric blog expert knowledge to create an identity blog post for your firm. Blog posts provide an immediate business impact to your firm, and are easy to reshare and repurpose. For B2B needs, LinkedIn articles provide similar benefits.

If Bredemarket can help your firm convey your message about synthetic identity, let’s talk.

And thirteen more things

If you haven’t read a Bredemarket blog post before, or even if you have, you may not realize that this post is jam-packed with additional information well beyond the post itself. This post alone links to the following Bredemarket posts and other content. You may want to follow one or more of the 13 links below if you need additional information on a particular topic:

  1. Synthetic Identity video (YouTube), August 12, 2023. https://www.youtube.com/watch?v=oDrSBlDJVCk
  2. Using “Multispectral” and “Liveness” in the Same Sentence (Bredemarket blog), June 6, 2023. https://bredemarket.com/2023/06/06/using-multispectral-and-liveness-in-the-same-sentence/
  3. Who is THE #1 NIST facial recognition vendor? (Bredemarket blog), February 23, 2022. https://bredemarket.com/2022/02/23/number1frvt/
  4. Financial Identity (Bredemarket website). https://bredemarket.com/financial-identity/
  5. Educational Identity (Bredemarket website). https://bredemarket.com/educational-identity/
  6. The five authentication factors (Bredemarket blog), March 2, 2021. https://bredemarket.com/2021/03/02/the-five-authentication-factors/
  7. Customer Focus (Bredemarket website). https://bredemarket.com/customer-focus/
  8. Benefits (Bredemarket website). https://bredemarket.com/benefits/
  9. Seven Questions Your Content Creator Should Ask You: the e-book version (Bredemarket blog and e-book), October 22, 2023. https://bredemarket.com/2023/10/22/seven-questions-your-content-creator-should-ask-you-the-e-book-version/
  10. Four Mini-Case Studies for One Inland Empire Business—My Own (Bredemarket blog and e-book), April 16, 2023. https://bredemarket.com/2023/04/16/four-mini-case-studies-for-one-inland-empire-business-my-own/
  11. Identity blog post writing (Bredemarket website). https://bredemarket.com/identity-blog-post-writing/
  12. Blog About Your Identity Firm’s Benefits Now. Why Wait? (Bredemarket blog), August 11, 2023. https://bredemarket.com/2023/08/11/blog-about-your-identity-firms-benefits-now-why-wait/
  13. Why Your Company Should Write LinkedIn Articles (Bredemarket LinkedIn article), July 31, 2023. https://www.linkedin.com/pulse/why-your-company-should-write-linkedin-articles-bredemarket/

That’s twelve more things than the Cupertino guys do, although my office isn’t as cool as theirs.

Well, why not one more?

Here’s my latest brochure for the Bredemarket 400 Short Writing Service, my standard package to create your 400 to 600 word blog posts and LinkedIn articles. Be sure to check the Bredemarket 400 Short Writing Service page for updates.

If that doesn’t fit your needs, I have other offerings.

Plus, I’m real. I’m not a bot.

We Survived Gummy Fingers. We’re Surviving Facial Recognition Inaccuracy. We’ll Survive Voice Spoofing.

(Part of the biometric product marketing expert series)

Some of you are probably going to get into an automobile today.

Are you insane?

The National Highway Traffic Safety Administration has released its latest projections for traffic fatalities in 2022, estimating that 42,795 people died in motor vehicle traffic crashes.

From https://www.nhtsa.gov/press-releases/traffic-crash-death-estimates-2022

When you have tens of thousands of people dying, then the only conscionable response is to ban automobiles altogether. Any other action or inaction is completely irresponsible.

After all, you can ask the experts who want us to ban biometrics because the technology can be spoofed and is racist, so we shouldn’t use biometrics at all.

I disagree with the calls to ban biometrics, and I’ll go through three “biometrics are bad” examples and say why banning biometrics is NOT justified.

  • Even some identity professionals may not know about the old “gummy fingers” story from 20+ years ago.
  • And yes, I know that I’ve talked about Gender Shades ad nauseam, but it bears repeating again.
  • And voice deepfakes are always a good topic to discuss in our AI-obsessed world.

Example 1: Gummy fingers

My recent post “Why Apple Vision Pro Is a Technological Biometric Advance, but Not a Revolutionary Biometric Event” included the following sentence:

But the iris security was breached by a “dummy eye” just a month later, in the same way that gummy fingers and face masks have defeated other biometric technologies.

From https://bredemarket.com/2023/06/12/vision-pro-not-revolutionary-biometrics-event/

A biometrics industry colleague noticed the rhyming words “dummy” and “gummy” and wondered if the latter was a typo. It turns out it wasn’t.

To my knowledge, these gummy fingers do NOT have ridges. From https://www.candynation.com/gummy-fingers

Back in 2002, researcher Tsutomu Matsumoto used “gummy bears” gelatin to create a fake finger that fooled a fingerprint reader.

At the time, this news WAS really “scary,” since it suggested that you could access a fingerprint reader-protected site with something that wasn’t a finger. Gelatin. A piece of metal. A photograph.

Except that the fingerprint reader world didn’t stand still after 2002, and the industry developed ways to detect spoofed fingers. Here’s a recent example of presentation attack detection (liveness detection) from TECH5:

TECH5 participated in the 2023 LivDet Non-contact Fingerprint competition to evaluate its latest NN-based fingerprint liveness detection algorithm and has achieved first and second ranks in the “Systems” category for both single- and four-fingerprint liveness detection algorithms respectively. Both submissions achieved the lowest error rates on bonafide (live) fingerprints. TECH5 achieved 100% accuracy in detecting complex spoof types such as Ecoflex, Playdoh, wood glue, and latex with its groundbreaking Neural Network model that is only 1.5MB in size, setting a new industry benchmark for both accuracy and efficiency.

From https://tech5.ai/tech5s-mobile-fingerprint-liveness-detection-technology-ranked-the-most-accurate-in-the-market/

TECH5 excelled in detecting fake fingers in “non-contact” reading, where the fingers never touch a capture surface such as an optical platen. That’s appreciably harder than detecting fake fingers that touch contact devices.

I should note that LivDet is an independent assessment. As I’ve said before, independent technology assessments provide some guidance on the accuracy and performance of technologies.

So gummy fingers and future threats can be addressed as they arrive.

But at least gummy fingers aren’t racist.

Example 2: Gender shades

In 2017-2018, the Algorithmic Justice League set out to answer this question:

How well do IBM, Microsoft, and Face++ AI services guess the gender of a face?

From http://gendershades.org/. Yes, that’s “http,” not “https.” But I digress.

Let’s stop right there for a moment and address two items before we continue. Trust me; it’s important.

  1. This study evaluated only three algorithms: one from IBM, one from Microsoft, and one from Face++. It did not evaluate the hundreds of other facial recognition algorithms that existed in 2018 when the study was released.
  2. The study focused on gender classification and race classification. Back in those primitive, innocent days of 2018, the world assumed that you could look at a person and tell whether the person was male or female, or tell the race of a person. (The phrase “self-identity” had not yet become popular, despite the Rachel Dolezal episode, which happened before the Gender Shades study.) Most importantly, the study did not address identification of individuals at all.

However, the study did find something:

While the companies appear to have relatively high accuracy overall, there are notable differences in the error rates between different groups. Let’s explore.

All companies perform better on males than females with an 8.1% – 20.6% difference in error rates.

All companies perform better on lighter subjects as a whole than on darker subjects as a whole with an 11.8% – 19.2% difference in error rates.

When we analyze the results by intersectional subgroups – darker males, darker females, lighter males, lighter females – we see that all companies perform worst on darker females.

From http://gendershades.org/overview.html

What does this mean? It means that if you are using one of these three algorithms solely for the purpose of determining a person’s gender and race, some results are more accurate than others.

Three algorithms do not predict hundreds of algorithms, and classification is not identification. If you’re interested in more information on the differences between classification and identification, see Bredemarket’s November 2021 submission to the Department of Homeland Security. (Excerpt here.)

And all the stories about people such as Robert Williams being wrongfully arrested based upon faulty facial recognition results have nothing to do with Gender Shades. I’ll address this briefly (for once):

  • In the United States, facial recognition identification results should only be used by the police as an investigative lead, and no one should be arrested solely on the basis of facial recognition. (The city of Detroit stated that Williams’ arrest resulted from “sloppy” detective work.)
  • If you are using facial recognition for criminal investigations, your people had better have forensic face training. (Then they would know, as Detroit investigators apparently didn’t know, that the quality of surveillance footage is important.)
  • If you’re going to ban computerized facial recognition (even when only used as an investigative lead, and even when only used by properly trained individuals), consider the alternative of human witness identification. Or witness misidentification. Roeling Adams, Reggie Cole, Jason Kindle, Adam Riojas, Timothy Atkins, Uriah Courtney, Jason Rivera, Vondell Lewis, Guy Miles, Luis Vargas, and Rafael Madrigal can tell you how inaccurate (and racist) human facial recognition can be. See my LinkedIn article “Don’t ban facial recognition.”

Obviously, facial recognition has been the subject of independent assessments, including continuous bias testing by the National Institute of Standards and Technology as part of its Face Recognition Vendor Test (FRVT), specifically within the 1:1 verification testing. And NIST has measured the identification bias of hundreds of algorithms, not just three.

In fact, people that were calling for facial recognition to be banned just a few years ago are now questioning the wisdom of those decisions.

But those days were quaint. Men were men, women were women, and artificial intelligence was science fiction.

The latter has certainly changed.

Example 3: Voice spoofs

Perhaps it’s an exaggeration to say that recent artificial intelligence advances will change the world. Perhaps it isn’t. Personally I’ve been concentrating on whether AI writing can adopt the correct tone of voice, but what if we take the words “tone of voice” literally? Let’s listen to President Richard Nixon:

From https://www.youtube.com/watch?v=2rkQn-43ixs

Richard Nixon never spoke those words in public, although it’s possible that he may have rehearsed William Safire’s speech, composed in case Apollo 11 had not resulted in one giant leap for mankind. As noted in the video, Nixon’s voice and appearance were spoofed using artificial intelligence to create a “deepfake.”

It’s one thing to alter the historical record. It’s another thing altogether when a fraudster spoofs YOUR voice and takes money out of YOUR bank account. By definition, you will take that personally.

In early 2020, a branch manager of a Japanese company in Hong Kong received a call from a man whose voice he recognized—the director of his parent business. The director had good news: the company was about to make an acquisition, so he needed to authorize some transfers to the tune of $35 million. A lawyer named Martin Zelner had been hired to coordinate the procedures and the branch manager could see in his inbox emails from the director and Zelner, confirming what money needed to move where. The manager, believing everything appeared legitimate, began making the transfers.

What he didn’t know was that he’d been duped as part of an elaborate swindle, one in which fraudsters had used “deep voice” technology to clone the director’s speech…

From https://www.forbes.com/sites/thomasbrewster/2021/10/14/huge-bank-fraud-uses-deep-fake-voice-tech-to-steal-millions/?sh=8e8417775591

Now I’ll grant that this is an example of human voice verification, which can be as inaccurate as the previously referenced human witness misidentification. But are computerized systems any better, and can they detect spoofed voices?

Well, in the same way that fingerprint readers worked to overcome gummy fingers, voice systems are working to overcome deepfake voices. Here’s what one company, ID R&D, is doing to combat voice spoofing:

IDVoice Verified combines ID R&D’s core voice verification biometric engine, IDVoice, with our passive voice liveness detection, IDLive Voice, to create a high-performance solution for strong authentication, fraud prevention, and anti-spoofing verification.

Anti-spoofing verification technology is a critical component in voice biometric authentication for fraud prevention services. Before determining a match, IDVoice Verified ensures that the voice presented is not a recording.

From https://www.idrnd.ai/idvoice-verified-voice-biometrics-and-anti-spoofing/
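Here is a generic sketch of that “liveness gate before match” flow. To be clear, this illustrates the pattern only; it is not ID R&D’s actual API, and both helpers are placeholders for real engines.

```python
# Generic sketch of the "liveness gate before match" flow the quote describes.
# This illustrates the pattern only; it is not ID R&D's actual API, and both
# helpers are placeholders for real voice PAD and voice matching engines.
def voice_is_live(audio) -> bool:
    raise NotImplementedError("plug in a voice liveness (PAD) engine")


def voiceprints_match(enrolled_voiceprint, audio) -> bool:
    raise NotImplementedError("plug in a voice verification engine")


def verify_speaker(enrolled_voiceprint, audio) -> bool:
    """Reject recordings and clones first; only then compare voices."""
    if not voice_is_live(audio):
        return False
    return voiceprints_match(enrolled_voiceprint, audio)
```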

This is only the beginning of the war against voice spoofing. Other companies will pioneer new advances that will tell the real voices from the fake ones.

As for independent testing:

A final thought

Yes, fraudsters can use advanced tools to do bad things.

But the people who battle fraudsters can also use advanced tools to defeat the fraudsters.

Take care of yourself, and each other.

Jerry Springer. By Justin Hoch, CC BY 2.0, https://commons.wikimedia.org/w/index.php?curid=16673259