Don’t Miss the Boat

Bredemarket helps identity/biometric firms.

  • Finger, face, iris, voice, DNA, ID documents, geolocation, and even knowledge.
  • Content-Proposal-Analysis. (Bredemarket’s “CPA.”)

Don’t miss the boat.

Augment your team with Bredemarket.

Find out more.

Don’t miss the boat.

Do All 5 Identity Factors Apply to Non-Human Identities?

I’ve talked ad nauseam about the five factors of identity verification and authentication. In case you’ve forgotten, these factors are:

  • Something you know.
  • Something you have.
  • Something you are.
  • Something you do.
  • Somewhere you are.

I’ll leave “somewhat you why” out of the discussion for now, but perhaps I’ll bring it back later.

These five (or six) factors are traditionally used to identify people.
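
To make the factor list concrete, here is a minimal sketch of how the five factors might be modeled as a checklist. The factor names follow the list above; the "two or more factors" rule is the usual definition of multi-factor authentication, not any particular product's logic.

```python
# Hypothetical sketch: the five authentication factors as a checklist.
# Names mirror the bullet list above; the policy is illustrative only.

FACTORS = [
    "something_you_know",   # password, PIN
    "something_you_have",   # token, ID document
    "something_you_are",    # biometrics
    "something_you_do",     # behavioral patterns
    "somewhere_you_are",    # geolocation
]

def is_multi_factor(passed: set[str]) -> bool:
    """True if at least two distinct factors were verified."""
    return len(passed & set(FACTORS)) >= 2

# A password plus a fingerprint is multi-factor...
assert is_multi_factor({"something_you_know", "something_you_are"})
# ...but a password alone is a single factor.
assert not is_multi_factor({"something_you_know"})
```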

Identifying “Non-Person Entities”

But what happens when the entity you want to identify is not a person? I’ll give two examples:

Kwebbelkop AI? https://www.youtube.com/watch?v=3l4KCbTyXQ4.
  • Kwebbelkop AI, discussed in “Human Cloning Via Artificial Intelligence: It’s Starting,” is not a human. But is there a way to identify the “real” Kwebbelkop AI from a “fake” one?
  • In “On Attribute-Based Access Control,” I noted that NIST defined a subject as “a human user or NPE (Non-Person Entity), such as a device that issues access requests to perform operations on objects.” Again, there’s a need to determine that the NPE has the right attributes, and is not a fake, deep or shallow.

There’s clearly a need to identify non-person entities. If I work for IBM and have a computer issued by IBM, the internal network needs to know that this is my computer, and not the computer of a North Korean hacker.
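
An attribute-based check for an NPE, in the spirit of the NIST definition above, might look something like the following sketch. The attribute names and the policy are invented for illustration; a real ABAC deployment would evaluate far richer attributes.

```python
# Hypothetical ABAC-style check for a non-person entity (NPE) such as a
# corporate-issued laptop. Attribute names and policy are illustrative only.

def npe_may_access(device: dict, resource: str) -> bool:
    """Grant access only if the device's attributes satisfy the policy."""
    return (
        device.get("issued_by") == "IBM"                 # corporate hardware
        and device.get("assigned_user") is not None      # bound to an employee
        and resource in device.get("allowed_resources", [])
    )

laptop = {
    "issued_by": "IBM",
    "assigned_user": "jbredehoft",
    "allowed_resources": ["internal_network"],
}
assert npe_may_access(laptop, "internal_network")
# A device with the wrong attributes (say, a North Korean hacker's) fails.
assert not npe_may_access({"issued_by": "unknown"}, "internal_network")
```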

But I was curious. Can the five (or six) factors identify non-person entities?

Let’s consider factor applicability, going from the easiest to the hardest.

The easy factors

  • Somewhere you are. Not only is this extremely applicable to non-person entities, but in truth this factor doesn’t identify persons, but non-person entities. Think about it: a standard geolocation application doesn’t identify where YOU are. It identifies where YOUR SMARTPHONE is. Unless you have a chip implant, there is nothing on your body that can identify your location. So obviously “somewhere you are” applies to NPEs.
  • Something you have. Another no-brainer. If a person has “something,” that something is by definition an NPE. So “something you have” applies to NPEs.
  • Something you do. NPEs can do things. My favorite example is Kraftwerk’s pocket calculator. You will recall that “by pressing down this special key it plays a little melody.” I actually had a Casio pocket calculator that did exactly that, playing a tune that is associated with Casio. Later, Brian Eno composed a startup sound for Windows 95. So “something you do” applies to NPEs. (Although I’m forced to admit that an illegal clone computer and operating system could reproduce the Eno sound.)
Something you do, 1980s version. Advance to 1:49 to hear the little melody. https://www.youtube.com/watch?v=6ozWOe9WEU8.
Something you do, 1990s version. https://www.youtube.com/watch?v=miZHa7ZC6Z0.

Those three were easy. Now it gets harder.

The hard factors

Something you know. This one is a conceptual challenge. What does an NPE “know”? For artificial intelligence creations such as Kwebbelkop AI, you can look at the training data used to create it and maintain it. For a German musician’s (or an Oregon college student’s) pocket calculator, you can look at the code used in the device, from the little melody itself to the action to take when the user enters a 1, a plus sign, and another 1. But is this knowledge? I lean toward saying yes—I can teach a bot my mother’s maiden name just as easily as I learned it myself. But perhaps some would disagree.

Something you are. For simplicity’s sake, I’ll stick to physical objects here, ranging from pocket calculators to hand-made ceramic plates. The major reason that we like to use “something you are” as a factor is the promise of uniqueness. We believe that fingerprints are unique (well, most of us), and that irises are unique, and that DNA is unique except for identical twins. But is a pocket calculator truly unique, given that the same assembly line manufactures many pocket calculators? Perhaps ceramic plates exhibit uniqueness, perhaps not.

That’s all five factors, right?

Well, let’s look at the sixth one.

Somewhat you why

You know that I like the “why” question, and some time ago I tried to apply it to identity.

  • Why is a person using a credit card at a McDonald’s in Atlantic City? (Link) Or, was the credit card stolen, or was it being used legitimately?
  • Why is a person boarding a bus? (Link) Or, was the bus pass stolen, or was it being used legitimately?
  • Why is a person standing outside a corporate office with a laptop and monitor? (Link) Or, is there a legitimate reason for an ex-employee to gain access to the corporate office?

The first example is fundamental from an identity standpoint. It’s taken from real life, because I had never used any credit card in Atlantic City before. However, there was data that indicated that someone with my name (but not my REAL ID; REAL IDs didn’t exist yet) flew to Atlantic City, so a reasonable person (or identity verification system) could conclude that I might want to eat while I was there.

But can you measure intent for an NPE?

  • Does Kwebbelkop AI have a reason to perform a particular activity?
  • Does my pocket calculator have a reason to tell me that 1 plus 1 equals 3?
  • Does my ceramic plate have a reason to stay intact when I drop it ten meters?

I’m not sure.

By Bundesarchiv, Bild 102-13018 / CC-BY-SA 3.0, CC BY-SA 3.0 de, https://commons.wikimedia.org/w/index.php?curid=5480820.

Who Is IN With IDEMIA?

Unlike the other rumors over the last few years, this is official. 

From IDEMIA:

“IN Groupe and IDEMIA Group have entered into exclusive negotiations regarding the acquisition of IDEMIA Smart Identity, one of the three divisions of IDEMIA Group.”

But discussions are one thing, and government approvals are another. By the way, IN Groupe’s sole shareholder is the French state…

Plus IDEMIA, like Motorola before it, will have to figure out how the, um, bifurcated components will work with each other. After all, IDEMIA Smart Identity is intertwined with the other parts of IDEMIA. 

Again, from IDEMIA:

“IDEMIA Smart Identity, a division of IDEMIA Group, is a leader in physical and digital identity solutions. We have fostered longstanding relationships with governments across the globe, based on the shared understanding that a secured legal identity enables citizens to access their fundamental rights in the physical and digital worlds.”

Regardless, this process will take some time.

And what will Advent International eventually do with the other parts of IDEMIA? That will take even more time to figure out.

Oh, Florida (mobile driver’s licenses)

I should properly open this post by stating any necessary disclosures…but I don’t have any. I know NOTHING about the goings-on reported in this post other than what I read in the papers.

“I know NOTHING.” By CBS Television (eBay, front and back), Public Domain, https://commons.wikimedia.org/w/index.php?curid=73578107.

However, I do know the history of Thales and mobile driver’s licenses. Which makes the recent announcements from Florida and Thales even more surprising.

Gemalto’s pioneering mobile driver’s license pilots

Back when I worked for IDEMIA from 2017 to 2020, many states were performing some level of testing of mobile driver’s licenses. Rather than having to carry a physical driver’s license card, you would be able to carry a virtual one on your phone.

While Louisiana was the first state to release an operational mobile driver’s license (with Envoc’s “LA Wallet”), several states were working on pilot projects.

Some of these states were working with the company Gemalto to create pilots for mobile driver’s licenses. As early as 2016, Gemalto announced its participation in pilot mDL projects in Colorado, Idaho, Maryland, and Washington DC. As I recall, at the time Gemalto had more publicly-known pilots in process than any other vendor, and appeared to be leading the pack in the effort to transition driver’s licenses from the (physical) wallet to the smartphone.

Thales’ operational mobile driver’s license

After Gemalto was acquired by and absorbed into Thales, the combined company won the opportunity to provide an operational (as opposed to pilot) mobile driver’s license. The Florida Smart ID app has been available to both iPhone and Android users since 2021.

From https://www.flhsmv.gov/floridasmartid/ as of July 12. No idea whether this image will still be there on July 15.

What just happened?

This morning I woke up to a slew of articles (such as the LinkedIn post from PEAK IDV’s Steve Craig, and the Biometric Update post from Chris Burt) that indicated the situation had changed.

One of the most important pieces of new information was a revised set of Frequently Asked Questions (or “Question,” or “Statement”) on the “Florida Smart ID” section of the Florida Highway Safety and Motor Vehicles website.

The Florida Smart ID applications will be updated and improved by a new vendor. At this time, the Florida Department of Highway Safety and Motor Vehicles is removing the current Florida Smart ID application from the app store. Please email FloridaSmartID@flhsmv.gov to receive notification of future availability.

Um…that was abrupt.

But a second piece of information, a Thales statement shared by PC Mag, explained the abruptness…in part.

In a statement provided to PCMag, a Thales spokesperson said the company’s contract with the FLHSMV expired on June 30, 2024.

“The project has now entered a new phase in which the FLHSMV requirements have evolved, necessitating a retender,” Thales says. “Thales chose not to compete in this tender. However, we are pleased to have been a part of this pioneering solution and wishes it continued success.”

Now normally when a government project transitions from one vendor to another, the old vendor continues to provide the service until the date that the new vendor’s system is operational. This is true even in contentious cases, such as the North Carolina physical driver’s license transition from IDEMIA to CBN Secure Technologies.

But in the Florida case:

  • Thales chose not to bid on the contract renewal.
  • The new vendor and/or the State of Florida chose not to begin providing services when the Thales contract expired on June 30.
  • Thales and/or the State of Florida chose not to temporarily renew the existing contract until the new vendor was providing services in 2025.

This third point is especially odd. I’ve known of situations where Company A lost a renewal bid to Company B, Company B was unable to deliver the new system on time, and Company A was all too happy to continue to provide service until Company B (or in some cases the government agency itself) got its act together.

Anyway, for whatever reason, those who had Florida mobile driver’s licenses have now lost them, and will presumably have to go through an entirely new process (with an as-yet unknown vendor) to get their mobile driver’s licenses again.

I’m not sure how much more we will learn publicly, and I don’t know how much is being whispered privately. Presumably the new vendor, whoever it is, has some insight, but they’re not talking.

Who You Are, Plus What You Have, Equals What You Are

(Part of the biometric product marketing expert series)

Yes, I know the differences between the various factors of authentication.

Let me focus on two of the factors.

  • Something You Are. This is the factor that identifies people. It includes biometrics modalities (finger, face, iris, DNA, voice, vein, etc.). It also includes behavioral biometrics, provided that they are truly behavioral and relatively static.
  • Something You Have. While this is used to identify people, in truth this is the factor that identifies things. It includes driver’s licenses and hardware or software tokens.

There’s a very clear distinction between these two factors of authentication: “something you are” for people, and “something you have” for things.

But what happens when we treat the things as beings?

Who, or what, possesses identity?

License Plate Recognition

I’ve spent a decade working with automatic license plate recognition (ALPR), sometimes known as automatic number plate recognition (ANPR).

Actually more than a decade, since my car’s picture was taken in Montclair, California a couple of decades ago doing something it shouldn’t have been doing. I ended up in traffic school for that one.

But my traffic school didn’t have a music soundtrack. From https://www.imdb.com/title/tt0088847/mediaviewer/rm1290438144/?ref_=tt_md_2.

Now license plate recognition isn’t that reliable of an identifier, since within a minute I can remove a license plate from a vehicle and substitute another one in its place. However, it’s deemed to be reliable enough that it is used to identify who a car is.

Note my intentional use of the word “who” in the sentence above.

  • Because when my car made a left turn against a red light all those years ago, the police didn’t haul MY CAR into court.
  • Using then-current technology, it identified the car, looked up the registered owner, and hauled ME into court.

These days, it’s theoretically possible (where legally allowed) to identify the license plate of the car AND identify the face of the person driving the car.

But you still have this strange merger of who and what in which the non-human characteristics of an entity are used to identify the entity.

What you are.

But that’s nothing compared to what’s emerged over the past few years.

We Are The Robots

When the predecessors to today’s Internet were conceived in the 1960s, they were intended as a way for people to communicate with each other electronically.

And for decades the Internet continued to operate this way.

Until the Internet of Things (IoT) became more and more prominent.

From LINK REMOVED 2025-01-20

How prominent? The Hacker News explains:

Application programming interfaces (APIs) are the connective tissue behind digital modernization, helping applications and databases exchange data more effectively. The State of API Security in 2024 Report from Imperva, a Thales company, found that the majority of internet traffic (71%) in 2023 was API calls.

Couple this with the increasing use of chatbots and other artificial intelligence bots to generate content, and the result is that when you are communicating with someone on the Internet, there is often no “who.” There’s a “what.”

What you are.

Between the cars and the bots, there’s a lot going on.

What does this mean?

There are numerous legal and technical ramifications, but I want to concentrate on the higher meaning of all this. I’ve spent 29 years professionally devoted to the identification of who people are, but this focus on people is undergoing a seismic change.

KITT. By Tabercil – Own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=14927883.

The science fiction stories of the past, including TV shows such as Knight Rider and its car KITT, are becoming the present as we interact with automobiles, refrigerators, and other things. None of them have true sentience, but it doesn’t matter because they have the power to do things.

The late Dr. Frank Poole, who died in 2001. From https://cinemorgue.fandom.com/wiki/Gary_Lockwood.

In the meantime, the identification industry not only has to identify people, but also identify things.

And it’s becoming more crucial that we do so, and do it accurately.

Positioning, Messaging, and Your Facial Recognition Product Marketing

(Part of the biometric product marketing expert series)

By Original: Jack Ver at Dutch Wikipedia Vector: Ponor – Own work based on: Plaatsvector.png by Jack Ver at Dutch Wikipedia, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=95477901.

When marketing your facial recognition product (or any product), you need to pay attention to your positioning and messaging. This includes developing the answers to why, how, and what questions. But your positioning and your resulting messaging are deeply influenced by the characteristics of your product.

If facial recognition is your only modality

There are hundreds of facial recognition products on the market that are used for identity verification, authentication, crime solving (but ONLY as an investigative lead), and other purposes.

Some of these solutions ONLY use face as a biometric modality. Others use additional biometric modalities.

From Sandeep Kumar, A. Sony, Rahul Hooda, Yashpal Singh, in Journal of Advances and Scholarly Researches in Allied Education | Multidisciplinary Academic Research, “Multimodal Biometric Authentication System for Automatic Certificate Generation.”

Your positioning depends upon whether your solution only uses face, or uses other factors such as voice.

Of course, if you initially only offer a face solution and then offer a second biometric, you’ll have to rewrite all your material. “You know how we said that face is great? Well, face and gait are even greater!”

If biometrics is your only factor

It’s no secret that I am NOT a fan of the “passwords are dead” movement.

Too many of the tombstones are labeled “12345.” By GreatBernard – Own work, CC0, https://commons.wikimedia.org/w/index.php?curid=116933238.

It seems that many of the people who are awaiting the long-delayed death of the password think that biometrics is the magic solution that will completely replace passwords.

For this reason, your company might have decided to use biometrics as your sole factor of identity verification and authentication.

Or perhaps your company took a different approach, and believes that multiple factors—perhaps all five factors—are required to truly verify and/or authenticate an individual. Use some combination of biometrics, secure documents such as driver’s licenses, geolocation, “something you do” such as a particular swiping pattern, and even (horrors!) knowledge-based authentication such as passwords or PINs.

This naturally shapes your positioning and messaging.

  • The single-factor companies will argue that their approach is very fast, very secure, and completely frictionless. (Sound familiar?) No need to drag out your passport or your key fob, or to turn off your VPN to accurately indicate your location. Biometrics does it all!
  • The multiple-factor companies will argue that ANY single factor can be spoofed, but that it is much, much harder to spoof multiple factors at once. (Sound familiar?)

So position yourself however you need to position yourself. Again, be prepared to change if your single factor solution adopts a second factor.

A final thought

Every company has its own way of approaching a problem, and your company is no different. As you prepare to market your products, survey your product, your customers, and your prospects and choose the correct positioning (and messaging) for your own circumstances.

And if you need help with biometric positioning and messaging, feel free to contact the biometric product marketing expert, John E. Bredehoft. (Full-time employment opportunities via LinkedIn, consulting opportunities via Bredemarket.)

In the meantime, take care of yourself, and each other.

Jerry Springer. By Justin Hoch, CC BY 2.0, https://commons.wikimedia.org/w/index.php?curid=16673259.

LMM vs. LMM (Acronyms Are Funner)

Do you recall my October 2023 post “LLM vs. LMM (Acronyms Are Fun)“?

It discussed both large language models and large multimodal models. In this case “multimodal” is used in a way that I normally DON’T use it, namely to refer to the different modes in which humans interact (text, images, sounds, videos). Of course, I gravitated to a discussion in which an image of a person’s face was one of the modes.

Document processing with GPT-4V. The model’s mistake is highlighted in red. From https://huyenchip.com/2023/10/10/multimodal.html?utm_source=tldrai.

In this post I will look at LMMs…and I will also look at LMMs. There’s a difference. And a ton of power when LMMs and LMMs work together for the common good.

Revisiting the Large Multimodal Model (LMM)

Since I wrote that piece last year, large multimodal models continue to be discussed. Harry Guinness just wrote a piece for Zapier in March.

When Google announced its Gemini series of AI models, it made a big deal about how they were “natively multimodal.” Instead of having different modules tacked on to give the appearance of multimodality, they were apparently trained from the start to be able to handle text, images, audio, video, and more. 

Other AI models are starting to function in a TRULY multimodal way, rather than using separate models to handle the different modes.

So now that we know that LMMs are large multimodal models, we need to…

…um, wait a minute…

Introducing the Large Medical Model (LMM)

It turns out that the health people have a DIFFERENT definition of the acronym LMM. Rather than using it to refer to a large multimodal model, they refer to a large MEDICAL model.

As you can probably guess, the GenHealth.AI model is trained for medical purposes.

Our first of a kind Large Medical Model or LMM for short is a type of machine learning model that is specifically designed for healthcare and medical purposes. It is trained on a large dataset of medical records, claims, and other healthcare information including ICD, CPT, RxNorm, Claim Approvals/Denials, price and cost information, etc.

I don’t think I’m stepping out on a limb if I state that medical records cannot be classified as “natural” language. So the GenHealth.AI model is trained specifically on those attributes found in medical records, and not on people hemming and hawing and asking what a Pekingese dog looks like.

But there is still more work to do.

What about the LMM that is also an LMM?

Unless I’m missing something, the Large Medical Model described above is designed to work with only one mode of data, textual data.

But what if the Large Medical Model were also a Large Multimodal Model?

By Piotr Bodzek, MD – Uploaded from http://www.ginbytom.slam.katowice.pl/25.html with author permission., CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=372117
  • Rather than converting a medical professional’s voice notes to text, the LMM-LMM would work directly with the voice data. This could lead to increased accuracy: compare the tone of voice of an offhand comment “This doesn’t look good” with the tone of voice of a shocked comment “This doesn’t look good.” They appear the same when reduced to text format, but the original voice data conveys significant differences.
  • Rather than just using the textual codes associated with an X-ray, the LMM-LMM would read the X-ray itself. If the image model has adequate training, it will again pick up subtleties in the X-ray data that are not present when the data is reduced to a single medical code.
  • In short, the LMM-LMM (large medical model-large multimodal model) would accept ALL the medical outputs: text, voice, image, video, biometric readings, and everything else. And the LMM-LMM would deal with all of it natively, increasing the speed and accuracy of healthcare by removing the need to convert everything to textual codes.

A tall order, but imagine how healthcare would be revolutionized if you didn’t have to convert everything into text format to get things done. And if you could use the actual image, video, audio, or other data rather than someone’s textual summation of it.
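
To make the idea a little more concrete, a record for such an LMM-LMM might preserve each modality natively instead of reducing everything to text. This is purely a speculative sketch; the field names are invented.

```python
# Speculative sketch: a patient encounter record that keeps every modality
# (text, raw voice audio, raw imaging, vitals) for a hypothetical
# multimodal medical model. Field names are invented for illustration.

from dataclasses import dataclass, field

@dataclass
class Encounter:
    note_text: str = ""
    voice_audio: bytes = b""   # clinician's raw voice note, not a transcript
    xray_image: bytes = b""    # raw imaging data, not just its billing code
    vitals: dict = field(default_factory=dict)

    def modalities(self) -> list[str]:
        """List which modalities this encounter actually contains."""
        present = []
        if self.note_text:
            present.append("text")
        if self.voice_audio:
            present.append("audio")
        if self.xray_image:
            present.append("image")
        if self.vitals:
            present.append("vitals")
        return present

e = Encounter(note_text="This doesn't look good", voice_audio=b"\x00\x01")
assert e.modalities() == ["text", "audio"]
```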

Obviously you’d need a ton of training data to develop an LMM-LMM that could perform all these tasks. And you’d have to obtain the training data in a way that conforms to privacy requirements: in this case protected health information (PHI) requirements such as HIPAA requirements.

But if someone successfully pulls this off, the benefits are enormous.

You’ve come a long way, baby.

Robert Young (“Marcus Welby”) and Jane Wyatt (“Margaret Anderson” on a different show). By ABC Television; uploaded by We hope at en.wikipedia (eBay item, photo information); transferred from en.wikipedia by SreeBot. Public Domain, https://commons.wikimedia.org/w/index.php?curid=16472486.

Defeating Synthetic Identity Fraud

I’ve talked about synthetic identity fraud a lot in the Bredemarket blog over the past several years. I’ll summarize a few examples in this post, talk about how to fight synthetic identity fraud, and wrap up by suggesting how to get the word out about your anti-synthetic identity solution.

But first let’s look at a few examples of synthetic identity.

Synthetic identities pop up everywhere

As far back as December 2020, I discussed Kris Rides’ encounter with a synthetic employee from a company with a number of synthetic employees (many of whom were young females).

More recently, I discussed attempts to create synthetic identities using gummy fingers and fake/fraudulent voices. The topic of deepfakes continues to be hot across all biometric modalities.

I shared a video I created about synthetic identities and their use to create fraudulent financial identities.

From https://www.youtube.com/watch?v=oDrSBlDJVCk.

I even discussed Kelly Shepherd, the fake vegan mom created by HBO executive Casey Bloys to respond to HBO critics.

And that’s just some of what Bredemarket has written about synthetic identity. You can find the complete list of my synthetic identity posts here.

So what? You must fight!

It isn’t enough to talk about the fact that synthetic identities exist: sometimes for innocent reasons, sometimes for outright fraudulent reasons.

You need to communicate how to fight synthetic identities, especially if your firm offers an anti-fraud solution.

Here are four ways to fight synthetic identities:

  1. Checking the purported identity against private databases, such as credit records.
  2. Checking the person’s driver’s license or other government document to ensure it’s real and not a fake.
  3. Checking the purported identity against government databases, such as driver’s license databases. (What if the person presents a real driver’s license, but that license was subsequently revoked?)
  4. Perform a “who you are” biometric test against the purported identity.

If you conduct all four tests, then you have used multiple factors of authentication to confirm that the person is who they say they are. If the identity is synthetic, chances are the purported person will fail at least one of these tests.
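
The four tests can be sketched as a pipeline that flags a likely synthetic identity when any check fails. The check names mirror the numbered list above; everything else is a stand-in for real services.

```python
# Illustrative sketch of the four-test pipeline above. Each check would call
# a real service in practice; here they are stand-in callables returning bool.

def verify_identity(identity: dict, checks: list) -> bool:
    """Pass only if every configured check passes; a synthetic identity
    is likely to fail at least one of them."""
    return all(check(identity) for check in checks)

# Stand-in checks mirroring the list (1: private databases, 2: document
# authenticity, 3: government databases, 4: biometric match).
checks = [
    lambda i: i.get("credit_history", False),
    lambda i: i.get("document_genuine", False),
    lambda i: i.get("license_valid", False),
    lambda i: i.get("biometric_match", False),
]

real = {"credit_history": True, "document_genuine": True,
        "license_valid": True, "biometric_match": True}
synthetic = dict(real, credit_history=False)  # no real credit footprint

assert verify_identity(real, checks)
assert not verify_identity(synthetic, checks)
```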

Do you fight synthetic identity fraud?

If you fight synthetic identity fraud, you should let people know about your solution.

Perhaps you can use Bredemarket, the identity content marketing expert. I work with you (and I have worked with others) to ensure that your content meets your awareness, consideration, and/or conversion goals.

How can I work with you to communicate your firm’s anti-synthetic identity message? For example, I can apply my identity/biometric blog expert knowledge to create an identity blog post for your firm. Blog posts provide an immediate business impact to your firm, and are easy to reshare and repurpose. For B2B needs, LinkedIn articles provide similar benefits.

If Bredemarket can help your firm convey your message about synthetic identity, let’s talk.

Reasonable Minds Vehemently Disagree On Three Biometric Implementation Choices

(Part of the biometric product marketing expert series)

There are a LOT of biometric companies out there.

The Prism Project’s home page at https://www.the-prism-project.com/, illustrating the Biometric Digital Identity Prism as of March 2024. From Acuity Market Intelligence and FindBiometrics.

With over 100 firms in the biometric industry, their offerings are going to naturally differ—even if all the firms are TRYING to copy each other and offer “me too” solutions.

Will Ferrell and Chad Smith, or maybe vice versa. Fair use. From https://www.billboard.com/music/music-news/will-ferrell-chad-smith-red-hot-benefit-chili-peppers-6898348/, originally from NBC.

I’ve worked for over a dozen biometric firms as an employee or independent contractor, and I’ve analyzed over 80 biometric firms in competitive intelligence exercises, so I’m well aware of the vast implementation differences between the biometric offerings.

Some of the implementation differences provoke vehement disagreements between biometric firms regarding which choice is correct. Yes, we FIGHT.

MMA stands for Messy Multibiometric Authentication. Public Domain, https://commons.wikimedia.org/w/index.php?curid=607428

Let’s look at three (out of many) of these implementation differences and see how they affect YOUR company’s content marketing efforts—whether you’re engaging in identity blog post writing, or some other content marketing activity.

The three biometric implementation choices

Firms that develop biometric solutions make (or should make) the following choices when implementing their solutions.

  1. Presentation attack detection. Assuming the solution incorporates presentation attack detection (liveness detection), or a way of detecting whether the presented biometric is real or a spoof, the firm must decide whether to use active or passive liveness detection.
  2. Age assurance. When choosing age assurance solutions that determine whether a person is old enough to access a product or service, the firm must decide whether or not age estimation is acceptable.
  3. Biometric modality. Finally, the firm must choose which biometric modalities to support. While there are a number of modality wars involving all the biometric modalities, this post is going to limit itself to the question of whether or not voice biometrics are acceptable.

I will address each of these questions in turn, highlighting the pros and cons of each implementation choice. After that, we’ll see how this affects your firm’s content marketing.

Choice 1: Active or passive liveness detection?

Back in June 2023 I defined what a “presentation attack” is.

(I)nstead of capturing a true biometric from a person, the biometric sensor is fooled into capturing a fake biometric: an artificial finger, a face with a mask on it, or a face on a video screen (rather than a face of a live person).

This tomfoolery is called a “presentation attack” (because you’re attacking security with a fake presentation).

Then I talked about standards and testing.

But the standards folks have developed ISO/IEC 30107-3:2023, Information technology — Biometric presentation attack detection — Part 3: Testing and reporting.

And an organization called iBeta is one of the testing facilities authorized to test in accordance with the standard and to determine whether a biometric reader can detect the “liveness” of a biometric sample.

(Friends, I’m not going to get into passive liveness and active liveness. That’s best saved for another day.)

Well…that day is today.

A balanced assessment

Now I could cite a firm using active liveness detection to say why it’s great, or I could cite a firm using passive liveness detection to say why it’s great. But perhaps the most balanced assessment comes from facia, which offers both types of liveness detection. How does facia define the two types of liveness detection?

Active liveness detection, as the name suggests, requires some sort of activity from the user. If a system is unable to detect liveness, it will ask the user to perform some specific actions such as nodding, blinking or any other facial movement. This allows the system to detect natural movements and separate it from a system trying to mimic a human being….

Passive liveness detection operates discreetly in the background, requiring no explicit action from the user. The system’s artificial intelligence continuously analyses facial movements, depth, texture, and other biometric indicators to detect an individual’s liveness.

Pros and cons

Briefly, the pros and cons of the two methods are as follows:

  • While active liveness detection offers robust protection, requires clear consent, and acts as a deterrent, it is hard to use, complex, and slow.
  • Passive liveness detection offers an enhanced user experience via ease of use and speed and is easier to integrate with other solutions, but it raises privacy concerns (passive liveness detection can be implemented without the user’s knowledge) and may not be appropriate for high-risk situations.

So in truth the choice is up to each firm. I’ve worked with firms that used both liveness detection methods, and while I’ve spent most of my time with passive implementations, the active ones can work as well.

A perfect wishy-washy statement that will get BOTH sides angry at me. (Except perhaps for companies like facia that use both.)
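One way to reconcile the two approaches is to select the liveness method per transaction based on risk, reserving the higher-friction active challenge for the cases that warrant it. Here’s a minimal sketch of that policy logic; the function, risk levels, and consent flag are hypothetical illustrations, not any vendor’s actual API:

```python
from enum import Enum

class LivenessMode(Enum):
    PASSIVE = "passive"  # frictionless: analyze texture, depth, movement in the background
    ACTIVE = "active"    # challenge-response: ask the user to blink, nod, or turn

def choose_liveness_mode(transaction_risk: str, requires_explicit_consent: bool) -> LivenessMode:
    """Hypothetical policy: use active (challenge-based) detection for high-risk
    or consent-sensitive flows; default to passive for a smoother user experience."""
    if transaction_risk == "high" or requires_explicit_consent:
        return LivenessMode.ACTIVE
    return LivenessMode.PASSIVE

print(choose_liveness_mode("low", False).value)   # passive
print(choose_liveness_mode("high", False).value)  # active
```

The design choice mirrors the pros and cons above: passive where user experience dominates, active where deterrence and explicit consent matter.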

Choice 2: Age estimation, or no age estimation?

Designed by Freepik.

There are a lot of applications for age assurance, or knowing how old a person is. These include smoking tobacco or marijuana, buying firearms, driving a car, drinking alcohol, gambling, viewing adult content, using social media, or buying garden implements.

If you need to know a person’s age, you can ask them. Because people never lie.

Well, maybe they do. There are two better age assurance methods:

  • Age verification, where you obtain a person’s government-issued identity document with a confirmed birthdate, confirm that the document truly belongs to the person, and then check the date of birth to determine whether the person is old enough to access the product or service.
  • Age estimation, where you don’t use a government-issued identity document and instead examine the face and estimate the person’s age.

I changed my mind on age estimation

I’ve gone back and forth on this. As I previously mentioned, my employment history includes time with a firm that produces driver’s licenses for the majority of U.S. states. And back when that firm was providing my paycheck, I was financially incentivized to champion age verification based upon the driver’s licenses that my company (or occasionally some inferior company) produced.

But as age assurance applications moved into other areas such as social media use, a problem arose: 13-year-olds usually don’t have government IDs. A few of them may have passports or other government-issued documents, but none of them have driver’s licenses.

By Adrian Pingstone – Transferred from en.wikipedia, Public Domain, https://commons.wikimedia.org/w/index.php?curid=112727.

Pros and cons

But does age estimation work? I’m not sure if ANYONE has posted a non-biased view, so I’ll try to do so myself.

  • The pros of age estimation include its applicability to all ages including young people, its protection of privacy since it requires no information about the individual’s identity, and its ease of use since you don’t have to dig for your physical driver’s license or your mobile driver’s license—your face is already there.
  • The huge con of age estimation is that it is by definition an estimate. If I show a bartender my driver’s license before buying a beer, they will know whether I am 20 years and 364 days old and ineligible to purchase alcohol, or whether I am 21 years and 0 days old and eligible. Estimates aren’t that precise.

How precise is age estimation? We’ll find out soon, once NIST releases the results of its Face Analysis Technology Evaluation (FATE) Age Estimation & Verification test. The release of results is expected in early May.

Choice 3: Is voice an acceptable biometric modality?

From Sandeep Kumar, A. Sony, Rahul Hooda, Yashpal Singh, in Journal of Advances and Scholarly Researches in Allied Education | Multidisciplinary Academic Research, “Multimodal Biometric Authentication System for Automatic Certificate Generation.”

Fingerprints, palm prints, faces, irises, and everything up to gait. (And behavioral biometrics.) There are a lot of biometric modalities out there, and one that has been around for years is the voice biometric.

I’ve discussed this topic before, and the partial title of the post (“We’ll Survive Voice Spoofing”) gives away how I feel about the matter, but I’ll present both sides of the issue.

White House photo by Kimberlee Hewitt – whitehouse.gov, President George W. Bush and comedian Steve Bridges, Public Domain, https://commons.wikimedia.org/w/index.php?curid=3052515

No one can deny that voice spoofing exists and is effective, but many of the examples cited by the popular press are cases in which a HUMAN (rather than an ALGORITHM) was fooled by a deepfake voice. But voice recognition software can also be fooled.

(Incidentally, there is a difference between voice recognition and speech recognition. Voice recognition attempts to determine who a person is. Speech recognition attempts to determine what a person says.)

Finally facing my Waterloo

Take a study from the University of Waterloo, summarized here, that proclaims: “Computer scientists at the University of Waterloo have discovered a method of attack that can successfully bypass voice authentication security systems with up to a 99% success rate after only six tries.”

If you re-read that sentence, you will notice that it includes the words “up to.” Those words are significant if you actually read the article.

In a recent test against Amazon Connect’s voice authentication system, they achieved a 10 per cent success rate in one four-second attack, with this rate rising to over 40 per cent in less than thirty seconds. With some of the less sophisticated voice authentication systems they targeted, they achieved a 99 per cent success rate after six attempts.

Other voice spoofing studies

Similar to Gender Shades, the University of Waterloo study does not appear to have tested hundreds of voice recognition algorithms. But there are other studies.

  • The 2021 NIST Speaker Recognition Evaluation (PDF here) tested results from 15 teams, but this test was not specific to spoofing.
  • A test that was specific to spoofing was the ASVspoof 2021 test with 54 team participants, but the ASVspoof 2021 results are only accessible in abstract form, with no detailed results.
  • Another test, this one with results, is the SASV2022 challenge, with 23 valid submissions. Here are the top 10 performers and their error rates.

You’ll note that the top performers don’t have error rates anywhere near the University of Waterloo’s 99 percent.
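For context, speaker verification evaluations such as SASV2022 typically report an equal error rate (EER): the operating point where the rate of accepted impostors equals the rate of rejected genuine speakers. Here’s a minimal sketch of how an EER can be computed from match scores; the score lists are hypothetical, and real evaluations use far larger trial sets:

```python
def equal_error_rate(genuine, impostor):
    """Approximate the EER: the threshold where the false accept rate (FAR)
    equals the false reject rate (FRR)."""
    best_gap, eer = 1.0, None
    for t in sorted(genuine + impostor):
        far = sum(s >= t for s in impostor) / len(impostor)  # impostors wrongly accepted
        frr = sum(s < t for s in genuine) / len(genuine)     # genuine speakers wrongly rejected
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2
    return eer

# Hypothetical similarity scores (higher = more likely the same speaker)
print(equal_error_rate([0.9, 0.8, 0.85, 0.7, 0.95],
                       [0.2, 0.75, 0.4, 0.1, 0.6]))  # 0.2
```

A low EER means the algorithm rarely confuses impostors with genuine speakers, which is why the top SASV2022 performers look nothing like a 99 percent spoofing success rate.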

So some firms will argue that voice recognition can be spoofed and thus cannot be trusted, while other firms will argue that the best voice recognition algorithms are rarely fooled.

What does this mean for your company?

Obviously, different firms are going to respond to the three questions above in different ways.

  • For example, a firm that offers face biometrics but not voice biometrics will argue that voice is not a secure modality, due to the ease of spoofing. “Do you want to lose tens of millions of dollars?”
  • A firm that offers voice biometrics but not face biometrics will emphasize its spoof detection capabilities (and cast shade on face spoofing). “We tested our algorithm against that voice fake that was in the news, and we detected the voice as a deepfake!”

There is no universal truth here, and the message your firm conveys depends upon your firm’s unique characteristics.

And those characteristics can change.

  • Once when I was working for a client, this firm had made a particular choice with one of these three questions. Therefore, when I was writing for the client, I wrote in a way that argued the client’s position.
  • After I stopped working for this particular client, the client’s position changed and the firm adopted the opposite view of the question.
  • Therefore I had to message the client and say, “Hey, remember that piece I wrote for you that said this? Well, you’d better edit it, now that you’ve changed your mind on the question…”

Bear this in mind as you create your blog, white paper, case study, or other identity/biometric content, or have someone like the biometric content marketing expert Bredemarket work with you to create your content. There are people who sincerely hold the opposite belief of your firm…but your firm needs to argue that those people are, um, misinformed.

And as a postscript I’ll provide two videos that feature voices. The first is for those who detected my reference to the ABBA song “Waterloo.”

From https://www.youtube.com/watch?v=4XJBNJ2wq0Y.

The second features the late Steve Bridges as President George W. Bush at the White House Correspondents Dinner.

From https://www.youtube.com/watch?v=u5DpKjlgoP4.

Take Me to the (Login.gov IAL2) Pilot

As further proof that I am celebrating, rather than hiding, my “seasoned” experience—and you know what the code word “seasoned” means—I am entitling this blog post “Take Me to the Pilot.”

Although I’m thinking about a different type of “pilot”—a pilot to establish that Login.gov can satisfy Identity Assurance Level 2 (IAL2).

A recap of Login.gov and IAL2 non-compliance

I just mentioned IAL2 in a blog post on Wednesday, with this seemingly throwaway sentence.

So if you think you can use Login.gov to access a porn website, think again.

From https://bredemarket.com/2024/04/10/age-assurance-meets-identity-assurance-level-2/.

The link in that sentence directs the kind reader to a post I wrote in November 2023, detailing the fact that the GSA Inspector General criticized…the GSA…for implying that Login.gov was IAL2-compliant when it was not. The November post references a GSA-authored August blog post which reads in part (in bold):

Login.gov is on a path to providing an IAL2-compliant identity verification service to its customers in a responsible, equitable way.

From https://www.gsa.gov/blog/2023/08/18/reducing-fraud-and-increasing-access-drives-record-adoption-and-usage-of-logingov.

Because it obviously wouldn’t be good to do it in an irresponsible inequitable way.

But the GSA didn’t say how long that path would be. Would Login.gov be IAL2-compliant by the end of 2023? By mid-2024?

It turns out the answer is neither.

Eight months later we have…a pilot

You would think that achieving IAL2 compliance would be a top priority. After all, the longer that Login.gov doesn’t comply, the more government agencies will flock to the IAL2-compliant ID.me.

Enter Steve Craig of PEAK.IDV and the weekly news summaries that he posts on LinkedIn. Today’s summary includes the following item:

4/ GSA’s Login.gov Pilots Enhanced Identity Verification

Login.gov’s pilot will allow users to match a live selfie with the photo on a self-supplied form of photo ID, such as a driver’s license

Other interesting updates in the press release 👇

From https://www.linkedin.com/posts/stevenbcraig_digitalidentity-aml-compliance-activity-7184539504504930306-LVPF/.

And here’s what GSA’s April 11 press release says.

Specifically, over the next few months, Login.gov will:

Pilot facial matching technology consistent with the National Institute of Standards and Technology’s Digital Identity Guidelines (800-63-3) to achieve evidence-based remote identity verification at the IAL2 level….

Using proven facial matching technology, Login.gov’s pilot will allow users to match a live selfie with the photo on a self-supplied form of photo ID, such as a driver’s license. Login.gov will not allow these images to be used for any purpose other than verifying identity, an approach which reflects Login.gov’s longstanding commitment to ensuring the privacy of its users. This pilot is slated to start in May with a handful of existing agency-partners who have expressed interest, with the pilot expanding to additional partners over the summer. GSA will simultaneously seek an independent third party assessment (Kantara) of IAL2 compliance, which GSA expects will be completed later this year. 

From https://www.gsa.gov/about-us/newsroom/news-releases/general-services-administrations-logingov-pilot-04112024#.

In short, GSA’s April 11 press release about the Login.gov pilot says that it expects to complete IAL2 compliance later this year. So it’s going to take more than a year for the GSA to repair the gap that its Inspector General identified.

My seasoned response

Once I saw Steve’s update this morning, I felt it sufficiently important to share the news among Bredemarket’s various social channels.

With a picture.

B-side of Elton John “Your Song” single issued 1970.

For those of you who are not as “seasoned” as I am, the picture depicts the B-side of a 1970 vinyl 7″ single (not a compact disc) from Elton John, taken from the album that broke Elton in the United States. (Not literally; that would come a few years later.)

By the way, while the original orchestrated studio version is great, the November 1970 live version with just the Elton John – Dee Murray – Nigel Olsson trio is OUTSTANDING.

From https://www.youtube.com/watch?v=cC1ocO0pVgs.

Back to Bredemarket social media. If you go to my Instagram post on this topic, I was able to incorporate an audio snippet from “Take Me to the Pilot” (studio version) into the post. (You may have to go to the Instagram post to actually hear the audio.)

Not that the song has anything to do with identity verification using government ID documents paired with facial recognition. Or maybe it does; Elton John doesn’t know what the song means, and even lyricist Bernie Taupin doesn’t know what the song means.

So from now on I’m going to say that “Take Me to the Pilot” documents future efforts toward IAL2 compliance. Although frankly the lyrics sound like they describe a successful iris spoofing attempt.

Through a glass eye, your throne
Is the one danger zone

From https://genius.com/Elton-john-take-me-to-the-pilot-lyrics.

Postscript

For you young whippersnappers who don’t understand why the opening image mentioned “54 Years On,” this is a reference to another Elton John song.

And it’s no surprise that the live version is better.

From https://www.youtube.com/watch?v=rRngmF-AcFQ.

Now I’m going to listen to this all day. Cue the Instagram post (if Instagram has access to the 17-11-70/11-17-70 version).