Both the U.S. National Institute of Standards and Technology and the Digital Benefits Hub made important announcements this morning. I will quote portions of the latter announcement.
In response to heightened fraud and related cybersecurity threats during the COVID-19 pandemic, some benefits-administering agencies began to integrate new safeguards such as individual digital accounts and identity verification, also known as identity proofing, into online applications. However, the use of certain approaches, like those reliant upon facial recognition or data brokers, has raised questions about privacy and data security, due process issues, and potential biases in systems that disproportionately impact communities of color and marginalized groups. Simultaneously, adoption of more effective, evidence-based methods of identity verification has lagged, despite recommendations from NIST (Question A4) and the Government Accountability Office.
There’s a ton to digest here. The announcement touches on a number of issues that I and others have been discussing for years.
Image from the mid-2010s. “John, how do you use the CrowdCompass app for this Users Conference?” Well, let me tell you…
Because of my former involvement with the biometric user conference managed over the years by IDEMIA, MorphoTrak, Sagem Morpho, Motorola, and older entities, I always like to peek at what they’re doing these days. And it looks like they’re still prioritizing the educational element of the conference.
Although the 2024 Justice and Public Safety Conference won’t take place until September, the agenda is already online.
Subject to change, presumably.
This Joseph Courtesis session, scheduled for the afternoon of Thursday, September 12, caught my eye. It’s entitled “Ethical Use of Facial Recognition in Law Enforcement: Policy Before Technology.” Here’s an excerpt from the abstract:
This session will focus on post investigative image identification with the assistance of Facial Recognition Technology (FRT). It’s important to point out that FRT, by itself, does not produce Probable Cause to arrest.
Re-read that last sentence, then re-read it one more time. 100% of the wrongful arrest cases would be eliminated if everyone adopted this one practice. FRT is ONLY an investigative lead.
And Courtesis makes one related point:
Any image identification process that includes FRT should put policy before the technology.
Any technology that could deprive a person of their liberty needs a clear policy on its proper use.
September conference attendees will definitely receive a comprehensive education from an authority on the topic.
But now I’m having flashbacks, and visions of Excel session planning workbooks are dancing in my head. Maybe they plan with Asana today.
HID Global has teamed up with Amazon Web Services to enhance biometric face imaging capabilities by utilizing the Amazon Rekognition computer vision cloud service on its U.ARE.U camera system.
I also don’t know whether HID Global will be prevented from providing the U.ARE.U face product to law enforcement, given Amazon’s 2020-2021 ban on law enforcement use of Amazon Rekognition’s face capabilities.
Amazon Rekognition and the FBI
Especially since FedScoop revealed in January that the FBI was in the “initiation” phase of using Amazon Rekognition. Neither Amazon nor the FBI would say whether facial recognition was part of the deal.
If Alphabet or Amazon reverse their current reluctance to market their biometric offerings to governments, the entire landscape could change again.
If they wished, Alphabet, Amazon, and the other tech powers could shut IDEMIA, NEC, and Thales completely out of the biometric business with a minimal (to them) investment. If you’re familiar with SWOT analyses, this definitely falls into the “threat” category.
But the Really Big Bunch still fear public reaction to any so-called “police state” involvement.
Biometric security company Facewatch…is facing a lawsuit after its system wrongly flagged a 19-year-old girl as a shoplifter…. (The girl) was shopping at Home Bargains in Manchester in February when staff confronted her and threw her out of the store…. ‘I have never stolen in my life and so I was confused, upset and humiliated to be labeled as a criminal in front of a whole shop of people,’ she said in a statement.
While Big Brother Watch and others are using this story to conclude that facial recognition is evil and no one should ever use it, the problem isn’t the technology. The problem is the misuse of the technology.
Were the Home Bargains staff trained in forensic face examination, so that they could confirm that the customer was the shoplifter? I doubt it.
Even if they were forensically trained, did the Home Bargains staff follow accepted practices and use the face recognition results ONLY as an investigative lead, and seek other corroborating evidence to identify the girl as a shoplifter? I doubt it.
Again, the problem is NOT the technology. The problem is MISUSE of the technology—by this English store, by a certain chain of U.S. stores, and even by U.S. police agencies who fail to use facial recognition results solely as an investigative lead.
A prospect approached me some time ago to have Bredemarket help tell this story. However, the prospect has delayed moving forward with the project, and so their story has not yet been told.
Does YOUR firm have a story that you have failed to tell?
When marketing your facial recognition product (or any product), you need to pay attention to your positioning and messaging. This includes developing the answers to why, how, and what questions. But your positioning and your resulting messaging are deeply influenced by the characteristics of your product.
If facial recognition is your only modality
There are hundreds of facial recognition products on the market that are used for identity verification, authentication, crime solving (but ONLY as an investigative lead), and other purposes.
Some of these solutions ONLY use face as a biometric modality. Others use additional biometric modalities.
Similarly, if face is your only modality, you will argue that facial recognition is a very fast, very secure, and completely frictionless method of verification and authentication. When opponents bring up the demonstrated spoofs against faces, you will argue that your iBeta-conformant presentation attack detection methodology guards against such spoofing attempts.
Of course, if you initially only offer a face solution and then offer a second biometric, you’ll have to rewrite all your material. “You know how we said that face is great? Well, face and gait are even greater!”
It seems that many of the people awaiting the long-delayed death of the password think that biometrics is the magic solution that will completely replace passwords.
For this reason, your company might have decided to use biometrics as your sole factor of identity verification and authentication.
Or perhaps your company took a different approach, and believes that multiple factors—perhaps all five factors—are required to truly verify and/or authenticate an individual: some combination of biometrics, secure documents such as driver’s licenses, geolocation, “something you do” such as a particular swiping pattern, and even (horrors!) knowledge-based authentication such as passwords or PINs.
This naturally shapes your positioning and messaging.
The single-factor companies will argue that their approach is very fast, very secure, and completely frictionless. (Sound familiar?) No need to drag out your passport or your key fob, or to turn off your VPN to accurately indicate your location. Biometrics does it all!
The multiple-factor companies will argue that ANY single factor can be spoofed, but that it is much, much harder to spoof multiple factors at once. (Sound familiar?)
So position yourself however you need to position yourself. Again, be prepared to change if your single factor solution adopts a second factor.
A final thought
Every company has its own way of approaching a problem, and your company is no different. As you prepare to market your products, survey your product, your customers, and your prospects and choose the correct positioning (and messaging) for your own circumstances.
And if you need help with biometric positioning and messaging, feel free to contact the biometric product marketing expert, John E. Bredehoft. (Full-time employment opportunities via LinkedIn, consulting opportunities via Bredemarket.)
In the meantime, take care of yourself, and each other.
So how do you detect a synthetic identity? Conduct four tests:
Checking the purported identity against private databases, such as credit records.
Checking the person’s driver’s license or other government document to ensure it’s real and not a fake.
Checking the purported identity against government databases, such as driver’s license databases. (What if the person presents a real driver’s license, but that license was subsequently revoked?)
Performing a “who you are” biometric test against the purported identity.
If you conduct all four tests, then you have used multiple factors of authentication to confirm that the person is who they say they are. If the identity is synthetic, chances are the purported person will fail at least one of these tests.
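For illustration, here’s a minimal sketch of such a four-test pipeline in Python. Every function below is a hypothetical placeholder for a real service call (credit bureau, document authenticator, DMV lookup, face matcher); none of this is any vendor’s actual API.

```python
from dataclasses import dataclass

@dataclass
class Applicant:
    name: str
    date_of_birth: str  # "YYYY-MM-DD"
    id_document: bytes  # scan of the driver's license or other document
    selfie: bytes       # live capture for the "who you are" test

# Hypothetical placeholders; each would call a real external service.
def check_private_databases(a: Applicant) -> bool:
    return True  # e.g., does the identity appear in credit records?

def check_document_is_genuine(a: Applicant) -> bool:
    return True  # e.g., do the license's security features check out?

def check_government_databases(a: Applicant) -> bool:
    return True  # e.g., is the license still valid, or was it revoked?

def check_biometric_match(a: Applicant) -> bool:
    return True  # e.g., does the selfie match the document portrait?

def identity_passes(a: Applicant) -> bool:
    """Require ALL four tests; a synthetic identity will likely fail one."""
    checks = (check_private_databases, check_document_is_genuine,
              check_government_databases, check_biometric_match)
    return all(check(a) for check in checks)
```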
Do you fight synthetic identity fraud?
If you fight synthetic identity fraud, you should let people know about your solution.
The Prism Project’s home page at https://www.the-prism-project.com/, illustrating the Biometric Digital Identity Prism as of March 2024. From Acuity Market Intelligence and FindBiometrics.
With over 100 firms in the biometric industry, offerings will naturally differ—even if all the firms are TRYING to copy each other and offer “me too” solutions.
I’ve worked for over a dozen biometric firms as an employee or independent contractor, and I’ve analyzed over 80 biometric firms in competitive intelligence exercises, so I’m well aware of the vast implementation differences between the biometric offerings.
Some of the implementation differences provoke vehement disagreements between biometric firms regarding which choice is correct. Yes, we FIGHT.
Let’s look at three (out of many) of these implementation differences and see how they affect YOUR company’s content marketing efforts—whether you’re engaging in identity blog post writing, or some other content marketing activity.
The three biometric implementation choices
Firms that develop biometric solutions make (or should make) the following choices when implementing their solutions.
Presentation attack detection. Assuming the solution incorporates presentation attack detection (liveness detection), or a way of detecting whether the presented biometric is real or a spoof, the firm must decide whether to use active or passive liveness detection.
Age assurance. When choosing age assurance solutions that determine whether a person is old enough to access a product or service, the firm must decide whether or not age estimation is acceptable.
Biometric modality. Finally, the firm must choose which biometric modalities to support. While there are a number of modality wars involving all the biometric modalities, this post is going to limit itself to the question of whether or not voice biometrics are acceptable.
I will address each of these questions in turn, highlighting the pros and cons of each implementation choice. After that, we’ll see how this affects your firm’s content marketing.
(I)nstead of capturing a true biometric from a person, the biometric sensor is fooled into capturing a fake biometric: an artificial finger, a face with a mask on it, or a face on a video screen (rather than a face of a live person).
This tomfoolery is called a “presentation attack” (because you’re attacking security with a fake presentation).
And an organization called iBeta is one of the testing facilities authorized to test in accordance with the standard (ISO/IEC 30107-3) and to determine whether a biometric reader can detect the “liveness” of a biometric sample.
(Friends, I’m not going to get into passive liveness and active liveness. That’s best saved for another day.)
Now I could cite a firm using active liveness detection to say why it’s great, or I could cite a firm using passive liveness detection to say why it’s great. But perhaps the most balanced assessment comes from facia, which offers both types of liveness detection. How does facia define the two types of liveness detection?
Active liveness detection, as the name suggests, requires some sort of activity from the user. If a system is unable to detect liveness, it will ask the user to perform some specific actions such as nodding, blinking or any other facial movement. This allows the system to detect natural movements and separate it from a system trying to mimic a human being….
Passive liveness detection operates discreetly in the background, requiring no explicit action from the user. The system’s artificial intelligence continuously analyses facial movements, depth, texture, and other biometric indicators to detect an individual’s liveness.
Pros and cons
Briefly, the pros and cons of the two methods are as follows:
While active liveness detection offers robust protection, requires clear consent, and acts as a deterrent, it is hard to use, complex, and slow.
Passive liveness detection offers an enhanced user experience via ease of use and speed and is easier to integrate with other solutions, but it raises privacy concerns (passive liveness detection can be implemented without the user’s knowledge) and may not be suitable for high-risk situations.
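To make the contrast concrete, here’s a toy sketch of the two approaches. Nothing below is a real vendor API; every class and method name is hypothetical.

```python
import random

class ActiveLiveness:
    """Challenge-response: the user must perform a requested action."""
    CHALLENGES = ("blink", "nod", "turn head left")

    def verify(self, respond) -> bool:
        challenge = random.choice(self.CHALLENGES)
        # A real system would use computer vision to confirm the action
        # in the camera feed; here `respond` stands in for the user.
        return respond(challenge) == challenge

class PassiveLiveness:
    """Background analysis: no explicit action required from the user."""

    def verify(self, frames) -> bool:
        # A real implementation would analyze depth, texture, and
        # micro-movement across frames; this placeholder only checks
        # that multiple frames were captured for analysis.
        return len(frames) > 1

# Active adds friction (the challenge); passive shifts the burden to
# the algorithm, which is why it can run without the user's knowledge.
print(ActiveLiveness().verify(lambda challenge: challenge))  # True
print(PassiveLiveness().verify(["frame1", "frame2"]))        # True
```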
So in truth the choice is up to each firm. I’ve worked with firms that used both liveness detection methods, and while I’ve spent most of my time with passive implementations, the active ones work as well.
A perfect wishy-washy statement that will get BOTH sides angry at me. (Except perhaps for companies like facia that use both.)
If you need to know a person’s age, you can ask them. Because people never lie.
Well, maybe they do. There are two better age assurance methods:
Age verification, where you obtain a person’s government-issued identity document with a confirmed birthdate, confirm that the identity document truly belongs to the person, and then simply check the date of birth on the identity document and determine whether the person is old enough to access the product or service.
Age estimation, where you don’t use a government-issued identity document and instead examine the face and estimate the person’s age.
I changed my mind on age estimation
I’ve gone back and forth on this. As I previously mentioned, my employment history includes time with a firm that produces driver’s licenses for the majority of U.S. states. And back when that firm was providing my paycheck, I was financially incentivized to champion age verification based upon the driver’s licenses that my company (or occasionally some inferior company) produced.
But as age assurance applications moved into other areas such as social media use, a problem arose: 13-year-olds usually don’t have government IDs. A few of them may have passports or other government documents, but none of them have driver’s licenses.
But does age estimation work? I’m not sure if ANYONE has posted an unbiased view, so I’ll try to do so myself.
The pros of age estimation include its applicability to all ages including young people, its protection of privacy since it requires no information about the individual’s identity, and its ease of use since you don’t have to dig for your physical driver’s license or your mobile driver’s license—your face is already there.
The huge con of age estimation is that it is by definition an estimate. If I show a bartender my driver’s license before buying a beer, they will know whether I am 20 years and 364 days old and ineligible to purchase alcohol, or whether I am 21 years and 0 days old and eligible. Estimates aren’t that precise.
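The precision gap is easy to show in code. A confirmed birthdate supports exact date arithmetic; an estimate only supports a number with an error band. A minimal sketch (the dates are made up for illustration):

```python
from datetime import date

def is_21_per_document(birth_date: date, today: date) -> bool:
    """Exact check against a confirmed birthdate from an ID document."""
    # (A production version would also handle February 29 birthdays.)
    twenty_first_birthday = birth_date.replace(year=birth_date.year + 21)
    return today >= twenty_first_birthday

# 20 years and 364 days old: not eligible.
print(is_21_per_document(date(2003, 6, 2), date(2024, 6, 1)))  # False
# 21 years and 0 days old: eligible.
print(is_21_per_document(date(2003, 6, 1), date(2024, 6, 1)))  # True

# Age estimation, by contrast, returns something like "21, give or take
# two years." No threshold on an estimate can separate the two cases above.
```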
Fingerprints, palm prints, faces, irises, and everything up to gait. (And behavioral biometrics.) There are a lot of biometric modalities out there, and one that has been around for years is the voice biometric.
I’ve discussed this topic before, and the partial title of the post (“We’ll Survive Voice Spoofing”) gives away how I feel about the matter, but I’ll present both sides of the issue.
No one can deny that voice spoofing exists and is effective, but many of the examples cited by the popular press are cases in which a HUMAN (rather than an ALGORITHM) was fooled by a deepfake voice. Still, voice recognition software can also be fooled.
Take a study from the University of Waterloo, summarized here, that proclaims: “Computer scientists at the University of Waterloo have discovered a method of attack that can successfully bypass voice authentication security systems with up to a 99% success rate after only six tries.”
If you re-read that sentence, you will notice that it includes the words “up to.” Those words are significant if you actually read the article.
In a recent test against Amazon Connect’s voice authentication system, they achieved a 10 per cent success rate in one four-second attack, with this rate rising to over 40 per cent in less than thirty seconds. With some of the less sophisticated voice authentication systems they targeted, they achieved a 99 per cent success rate after six attempts.
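A hedged back-of-the-envelope shows why “up to” matters. If each attempt succeeded independently with probability p (an assumption the study doesn’t state), the chance of at least one success in n attempts is 1 - (1 - p)^n:

```python
def success_after(p: float, n: int) -> float:
    """Probability of at least one success in n independent attempts."""
    return 1 - (1 - p) ** n

# At the Amazon Connect rate (10% for a single four-second attack),
# six independent tries succeed less than half the time:
print(f"{success_after(0.10, 6):.1%}")  # 46.9%

# Hitting 99% within six tries requires each try to succeed more than
# half the time, which held only for the weakest systems tested:
p_needed = 1 - (1 - 0.99) ** (1 / 6)
print(f"{p_needed:.1%}")  # 53.6%
```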
Other voice spoofing studies
Similar to Gender Shades, the University of Waterloo study does not appear to have tested hundreds of voice recognition algorithms. But there are other studies.
The 2021 NIST Speaker Recognition Evaluation (PDF here) tested results from 15 teams, but this test was not specific to spoofing.
A test that was specific to spoofing was the ASVspoof 2021 test with 54 team participants, but the ASVspoof 2021 results are only accessible in abstract form, with no detailed results.
Another test, this one with results, is the SASV2022 challenge, with 23 valid submissions. Here are the top 10 performers and their error rates.
You’ll note that the top performers don’t have error rates anywhere near the University of Waterloo’s 99 percent.
So some firms will argue that voice recognition can be spoofed and thus cannot be trusted, while other firms will argue that the best voice recognition algorithms are rarely fooled.
What does this mean for your company?
Obviously, different firms are going to respond to the three questions above in different ways.
For example, a firm that offers face biometrics but not voice biometrics will argue that voice is not a secure modality due to the ease of spoofing. “Do you want to lose tens of millions of dollars?”
A firm that offers voice biometrics but not face biometrics will emphasize its spoof detection capabilities (and throw shade on face spoofing). “We tested our algorithm against that voice fake that was in the news, and we detected the voice as a deepfake!”
There is no universal truth here, and the message your firm conveys depends upon your firm’s unique characteristics.
And those characteristics can change.
Once when I was working for a client, the firm had made a particular choice on one of these three questions. Therefore, when I was writing for the client, I wrote in a way that argued the client’s position.
After I stopped working for this particular client, the client’s position changed and the firm adopted the opposite view of the question.
Therefore I had to message the client and say, “Hey, remember that piece I wrote for you that said this? Well, you’d better edit it, now that you’ve changed your mind on the question…”
Bear this in mind as you create your blog, white paper, case study, or other identity/biometric content, or have someone like the biometric content marketing expert Bredemarket work with you to create your content. There are people who sincerely hold the opposite belief of your firm…but your firm needs to argue that those people are, um, misinformed.
As further proof that I am celebrating, rather than hiding, my “seasoned” experience—and you know what the code word “seasoned” means—I am entitling this blog post “Take Me to the Pilot.”
Although I’m thinking about a different type of “pilot”—a pilot to establish that Login.gov can satisfy Identity Assurance Level 2 (IAL2).
The link in that sentence directs the kind reader to a post I wrote in November 2023, detailing the fact that the GSA Inspector General criticized…the GSA…for implying that Login.gov was IAL2-compliant when it was not. The November post references a GSA-authored August blog post which reads in part (in bold):
Login.gov is on a path to providing an IAL2-compliant identity verification service to its customers in a responsible, equitable way.
Specifically, over the next few months, Login.gov will:
Pilot facial matching technology consistent with the National Institute of Standards and Technology’s Digital Identity Guidelines (800-63-3) to achieve evidence-based remote identity verification at the IAL2 level….
Using proven facial matching technology, Login.gov’s pilot will allow users to match a live selfie with the photo on a self-supplied form of photo ID, such as a driver’s license. Login.gov will not allow these images to be used for any purpose other than verifying identity, an approach which reflects Login.gov’s longstanding commitment to ensuring the privacy of its users. This pilot is slated to start in May with a handful of existing agency-partners who have expressed interest, with the pilot expanding to additional partners over the summer. GSA will simultaneously seek an independent third party assessment (Kantara) of IAL2 compliance, which GSA expects will be completed later this year.
In short, GSA’s April 11 press release about the Login.gov pilot says that GSA expects the independent IAL2 compliance assessment to be completed later this year. So it’s going to take more than a year for the GSA to close the gap that its Inspector General identified.
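Stripped of the policy language, the pilot’s core operation is one-to-one face verification: compare a live selfie against the portrait on a self-supplied ID document. Here’s a minimal sketch of that flow. GSA has not published its implementation, so every function, threshold, and embedding size below is a hypothetical stand-in:

```python
import numpy as np

MATCH_THRESHOLD = 0.80  # hypothetical; real thresholds are tuned per algorithm

def face_embedding(image: bytes) -> np.ndarray:
    """Stand-in for a real face recognition model's feature extractor."""
    rng = np.random.default_rng(abs(hash(image)) % (2**32))
    vector = rng.normal(size=128)
    return vector / np.linalg.norm(vector)

def verify(selfie: bytes, id_portrait: bytes) -> bool:
    """One-to-one verification: cosine similarity against a threshold."""
    score = float(face_embedding(selfie) @ face_embedding(id_portrait))
    return score >= MATCH_THRESHOLD

# In a real deployment, the selfie capture would also run presentation
# attack detection, and the document would pass its own authenticity
# checks before its portrait is trusted.
print(verify(b"live_selfie.jpg", b"license_portrait.jpg"))
```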
My seasoned response
Once I saw Steve’s update this morning, I felt it sufficiently important to share the news among Bredemarket’s various social channels.
With a picture.
B-side of the Elton John “Your Song” single, issued in 1970.
For those of you who are not as “seasoned” as I am, the picture depicts the B-side of a 1970 vinyl 7″ single (not a compact disc) from Elton John, taken from the album that broke Elton in the United States. (Not literally; that would come a few years later.)
By the way, while the original orchestrated studio version is great, the November 1970 live version with just the Elton John – Dee Murray – Nigel Olsson trio is OUTSTANDING.
Back to Bredemarket social media. If you go to my Instagram post on this topic, I was able to incorporate an audio snippet from “Take Me to the Pilot” (studio version) into the post. (You may have to go to the Instagram post to actually hear the audio.)
Not that the song has anything to do with identity verification using government ID documents paired with facial recognition. Or maybe it does; Elton John doesn’t know what the song means, and even lyricist Bernie Taupin doesn’t know what the song means.
So from now on I’m going to say that “Take Me to the Pilot” documents future efforts toward IAL2 compliance. Although frankly the lyrics sound like they describe a successful iris spoofing attempt.
Through a glass eye, your throne
Is the one danger zone
We get all sorts of great tools, but do we know how to use them? And what are the consequences if we don’t know how to use them? Could we lose the use of those tools entirely due to bad publicity from misuse?
According to a report released in September by the US Government Accountability Office, only 5 percent of the 196 FBI agents who have access to facial recognition technology from outside vendors have completed any training on how to properly use the tools.
It turns out that the study is NOT limited to FBI use of facial recognition services, but also addresses six other federal agencies: the Bureau of Alcohol, Tobacco, Firearms and Explosives (the guvmint doesn’t believe in the Oxford comma); U.S. Customs and Border Protection; the Drug Enforcement Administration; Homeland Security Investigations; the U.S. Marshals Service; and the U.S. Secret Service.
Initially, none of the seven agencies required users to complete facial recognition training. As of April 2023, two of the agencies (Homeland Security Investigations and the U.S. Marshals Service) required training, two (the FBI and Customs and Border Protection) did not, and the other three had quit using the facial recognition services in question.
The FBI stated that facial recognition training was recommended as a “best practice,” but not mandatory. And when something isn’t mandatory, you can guess what happened:
GAO found that few of these staff completed the training, and across the FBI, only 10 staff completed facial recognition training of 196 staff that accessed the service. FBI said they intend to implement a training requirement for all staff, but have not yet done so.
Although not a requirement, FBI officials said they recommend (as a best practice) that some staff complete FBI’s Face Comparison and Identification Training when using Clearview AI. The recommended training course, which is 24 hours in length, provides staff with information on how to interpret the output of facial recognition services, how to analyze different facial features (such as ears, eyes, and mouths), and how changes to facial features (such as aging) could affect results.
However, this type of training was not recommended for all FBI users of Clearview AI, and was not recommended for any FBI users of Marinus Analytics or Thorn.
I should note that the report was issued in September 2023, based upon data gathered earlier in the year, and that for all I know the FBI now mandates such training.
Or maybe it doesn’t.
What about your state and local facial recognition users?
Of course, training for federal facial recognition users is only a small part of the story, since most of the law enforcement activity takes place at the state and local level. State and local users need training so that they can understand:
The anatomy of the face, and how it affects comparisons between two facial images.
How cameras work, and how this affects comparisons between two facial images.
How poor quality images can adversely affect facial recognition.
How facial recognition should ONLY be used as an investigative lead.
If facial recognition users had been trained, none of the false arrests over the last few years would have taken place.
The users would have realized that the poor images were not of sufficient quality to determine a match.
The users would have realized that even if they had been of sufficient quality, facial recognition must only be used as an investigative lead, and once other data had been checked, the cases would have fallen apart.
But the false arrests gave the privacy advocates the ammunition they needed.
Not to insist upon proper training in the use of facial recognition, but to demand an all-out ban:
Like nuclear or biological weapons, facial recognition’s threat to human society and civil liberties far outweighs any potential benefits. Silicon Valley lobbyists are disingenuously calling for regulation of facial recognition so they can continue to profit by rapidly spreading this surveillance dragnet. They’re trying to avoid the real debate: whether technology this dangerous should even exist. Industry-friendly and government-friendly oversight will not fix the dangers inherent in law enforcement’s discriminatory use of facial recognition: we need an all-out ban.
(And just wait until the anti-facial recognition forces discover that this is not only a plot of evil Silicon Valley, but also a plot of evil non-American foreign interests located in places like Paris and Tokyo.)
Because the anti-facial recognition forces want us to remove the use of technology and go back to the good old days…of eyewitness misidentification.
Eyewitnesses are often expected to identify perpetrators of crimes based on memory, which is incredibly malleable. Under intense pressure, through suggestive police practices, or over time, an eyewitness is more likely to find it difficult to correctly recall details about what they saw.
This post concentrates on IDENTIFICATION perfection, or the ability to enjoy zero errors when identifying individuals.
The risk of claiming identification perfection (or any perfection) is that a SINGLE counter-example disproves the claim.
If you assert that your biometric solution offers 100% accuracy, a SINGLE false positive or false negative shatters the assertion.
If you claim that your presentation attack detection solution exposes deepfakes (face, voice, or other), then a SINGLE deepfake that gets past your solution disproves your claim.
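The arithmetic behind this is unforgiving: measured accuracy is 1 minus errors over trials, so a single error permanently caps the claim below 100 percent.

```python
def accuracy(errors: int, trials: int) -> float:
    return 1 - errors / trials

print(f"{accuracy(0, 1_000_000):.4%}")  # 100.0000% -- so far
print(f"{accuracy(1, 1_000_000):.4%}")  # 99.9999% -- perfection disproven
```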
And as for the pre-2009 claim that latent fingerprint examiners never make a mistake in an identification…well, ask Brandon Mayfield about that one.
In fact, I go so far as to avoid using the phrase “no two fingerprints are alike.” Many years ago (before 2009) in an International Association for Identification meeting, I heard someone justify the claim by saying, “We haven’t found a counter-example yet.” That doesn’t mean that we’ll NEVER find one.
At first glance, it appears that Motorola would be the last place to tolerate a boneheaded mistake like misspelling the word “quality.” After all, Motorola is known for its focus on quality.
But in actuality, Motorola was the perfect place to make such a mistake, since it was one of the champions of the “Six Sigma” philosophy (which targets a maximum of 3.4 defects per million opportunities). Motorola realized that manufacturing perfection is impossible, so manufacturers (and the people in Motorola’s weird Biometric Business Unit) should instead concentrate on reducing the error rate as much as possible.
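That 3.4 figure isn’t arbitrary; it’s the normal-distribution tail at 4.5 standard deviations, which is the conventional 6 sigma specification minus an assumed 1.5 sigma long-term drift in the process mean. You can check the arithmetic yourself:

```python
from scipy.stats import norm

sigma_level = 6.0       # the "Six Sigma" specification limit
long_term_shift = 1.5   # conventional allowance for process drift
effective_sigma = sigma_level - long_term_shift  # 4.5

# One-sided tail probability, scaled to defects per million opportunities.
dpmo = norm.sf(effective_sigma) * 1_000_000
print(f"{dpmo:.1f} DPMO")  # 3.4
```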
So one misspelling could be tolerated, but I shudder to think what would have happened if I had misspelled “quality” a second time.