Well, the FATE side of the house has released its first two studies, including one entitled “Face Analysis Technology Evaluation (FATE) Part 10: Performance of Passive, Software-Based Presentation Attack Detection (PAD) Algorithms” (NIST Internal Report NIST IR 8491; PDF here).
Back in 2002, this news WAS really “scary,” since it suggested that you could access a fingerprint reader-protected site with something that wasn’t a finger. Gelatin. A piece of metal. A photograph.
Except that the fingerprint reader world didn’t stand still after 2002, and the industry developed ways to detect spoofed fingers.
Like countless others, I have spent the last several years referring to the National Institute of Standards and Technology’s Face Recognition Vendor Test, or FRVT. Some people have spent almost a quarter century referring to FRVT, since the term has been in use since 1999.
Starting now, you’re not supposed to use the FRVT acronym any more.
To bring clarity to our testing scope and goals, what was formerly known as FRVT has been rebranded and split into FRTE (Face Recognition Technology Evaluation) and FATE (Face Analysis Technology Evaluation). Tracks that involve the processing and analysis of images will run under the FATE activity, and tracks that pertain to identity verification will run under FRTE. All existing participation and submission procedures remain unchanged.
The change actually makes sense, since tasks such as age estimation and presentation attack detection (liveness detection) do not directly relate to the identification of individuals.
Us old folks just have to get used to the change.
I just hope that the new “FATE” acronym doesn’t mean that some algorithms are destined to perform better than others.
Does your firm fight crooks who try to fraudulently use synthetic identities? If so, how do you communicate your solution?
This post explains what synthetic identities are (with examples), describes four ways to detect synthetic identities, and closes by answering the communication question.
While this post is primarily intended for identity firms who can use Bredemarket’s marketing and writing services, anyone else who is interested in synthetic identities can read along.
What are synthetic identities?
To explain what synthetic identities are, let me start by telling you about Jason Brown.
Jason Brown wasn’t Jason Brown
You may not have heard of him unless you lived near the apartment he rented in Atlanta, Georgia in 2019.
Jason Brown’s renting of an apartment isn’t all that unusual.
If you were to visit Brown’s apartment in February 2019, you would find credit cards and financial information for Adam M. Lopez and Carlos Rivera.
Now that’s a little unusual, especially since Lopez and Rivera never existed.
For that matter, Jason Brown never existed either.
A Georgia man was sentenced Sept. 1 (2022) to more than seven years in federal prison for participating in a nationwide fraud ring that used stolen social security numbers, including those belonging to children, to create synthetic identities used to open lines of credit, create shell companies, and steal nearly $2 million from financial institutions….
Cato joined conspiracies to defraud banks and illegally possess credit cards. Cato and his co-conspirators created “synthetic identities” by combining false personal information such as fake names and dates of birth with the information of real people, such as their social security numbers. Cato and others then used the synthetic identities and fake ID documents to open bank and credit card accounts at financial institutions. Cato and his co-conspirators used the unlawfully obtained credit cards to fund their lifestyles.
Talking about synthetic identity at Victoria Gardens
Here’s a video that I created on Saturday that describes, at a very high level, how synthetic identities can be used fraudulently. People who live near Rancho Cucamonga, California will recognize the Victoria Gardens shopping center, proof that synthetic identity theft can occur far away from Georgia.
Note that synthetic identity theft is different from stealing someone else’s existing identity. In this case, a new identity is created.
So how do you catch these fraudsters?
Catching the identity synthesizers
If you’re renting out an apartment, and Jason Brown shows you his driver’s license and provides his Social Security Number, how can you detect if Brown is a crook? There are four methods to verify that Jason Brown exists, and that he’s the person renting your apartment.
Method One: Private Databases
One way to check Jason Brown’s story is to perform credit checks and other data investigations using financial databases.
Did Jason Brown just spring into existence within the past year, with no earlier credit record? That seems suspicious.
Does Jason Brown’s credit record appear TOO clean? That seems suspicious.
Does Jason Brown share information such as a common social security number with other people? Are any of those other identities also fraudulent? That is DEFINITELY suspicious.
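The three checks above can be sketched as simple rules. The record fields, thresholds, and function name below are all hypothetical, a minimal illustration of the idea rather than how any real credit bureau actually scores identities:

```python
from collections import Counter

# Hypothetical record format: each identity is a dict with an SSN,
# the age of its credit file in years, a count of derogatory marks,
# and a count of credit inquiries.
def synthetic_identity_flags(record, all_records):
    """Return a list of warning flags for a possibly synthetic identity."""
    flags = []

    # 1. An identity that sprang into existence within the past year,
    #    with no earlier credit record, is suspicious.
    if record["credit_file_age_years"] < 1:
        flags.append("new_credit_file")

    # 2. A record that looks TOO clean (no derogatory marks, no
    #    inquiries) is also suspicious.
    if record["derogatory_marks"] == 0 and record["inquiries"] == 0:
        flags.append("too_clean")

    # 3. An SSN shared with other identities is DEFINITELY suspicious.
    ssn_counts = Counter(r["ssn"] for r in all_records)
    if ssn_counts[record["ssn"]] > 1:
        flags.append("shared_ssn")

    return flags
```

A real system would weigh dozens of signals and cross-reference other fraudulent identities sharing the same data, but the shape of the logic is the same: accumulate independent suspicion signals rather than relying on any single one.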
This is one way that many firms detect synthetic identities, and for some firms it is the ONLY way they detect synthetic identities. And these firms have to tell their story to their prospects.
If your firm offers a tool to verify identities via private databases, how do you let your prospects know the benefits of your tool, and why your solution is better than all other solutions?
Method Two: Check That Driver’s License (or other government document)
What about that driver’s license that Brown presented? There are a wide variety of software tools that can check the authenticity of driver’s licenses, passports, and other government-issued documents. Some of these tools existed back in 2019 when “Brown” was renting his apartment, and a number of them exist today.
Maybe your firm has created such a tool, or uses a tool from a third party.
If your firm offers this capability, how can your prospects learn about its benefits, and why your solution excels?
Method Three: Check Government Databases
Checking the authenticity of a government-issued document may not be enough, since the document itself may be legitimate, but the implied credentials may no longer be legitimate. For example, if my California driver’s license expires in 2025, but I move to Minnesota in 2023 and get a new license, my California driver’s license is no longer valid, even though I have it in my possession.
Why not check the database of the Department of Motor Vehicles (or the equivalent in your state) to see if there is still an active driver’s license for that person?
The American Association of Motor Vehicle Administrators (AAMVA) maintains a Driver’s License Data Verification (DLDV) Service in which participating jurisdictions allow other entities to verify the license data for individuals. Your firm may be able to access the DLDV data for selected jurisdictions, providing an extra identity verification tool.
If your firm offers this capability, how can your prospects learn where it is available, what its benefits are, and why it is an important part of your solution?
Method Four: Conduct the “Who You Are” Test
There is one more way to confirm that a person is real, and that is to check the person. Literally.
If someone on a smartphone or videoconference says that they are Jason Brown, how do you know that it’s the real Jason Brown and not Jim Smith, or a previous recording or simulation of Jason Brown?
This is where tools such as facial recognition and liveness detection come into play.
You can ensure that the live face matches the face on record.
You can also confirm that the face is truly a live face.
In addition to these two tests, you can compare the face against the face on the presented driver’s license or passport to offer additional confirmation of true identity.
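Those three checks (match the live face to the face on record, confirm liveness, and match against the document photo) can be combined into a single decision. The scores, thresholds, and function name below are hypothetical placeholders for whatever face-matching and liveness engines a vendor actually supplies:

```python
def verify_who_you_are(match_score, liveness_score, doc_match_score,
                       match_threshold=0.80, liveness_threshold=0.90):
    """Combine three hypothetical engine scores into one accept/reject decision.

    match_score:     similarity of the live face to the face on record
    liveness_score:  confidence that the presented face is a live person
    doc_match_score: similarity of the live face to the photo on the ID document
    """
    checks = {
        "face_on_record": match_score >= match_threshold,
        "liveness": liveness_score >= liveness_threshold,
        "document_photo": doc_match_score >= match_threshold,
    }
    # Reject if ANY check fails: a perfect face match against a spoofed
    # (non-live) presentation is exactly the attack we want to stop.
    return all(checks.values()), checks
```

The design point is that the checks are conjunctive: a fraudster must defeat all three at once, not just the one his spoof happens to be good at.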
Now some companies offer facial recognition, others offer liveness detection, others match the live face to a face on a government ID, and many companies offer two or three of these capabilities.
One more time: if your firm offers these capabilities—either your own or someone else’s—what are the benefits of your algorithms? (For example, are they more accurate than competing algorithms? And under what conditions?) And why is your solution better than the others?
This is for the firms who fight synthetic identities
While most of this post is of general interest to anyone dealing with synthetic identities, this part of this post is specifically addressed to identity and biometric firms who provide synthetic identity-fighting solutions.
When you communicate about your solutions, your communicator needs to have certain types of experience.
Industry experience. Perhaps you sell your identity solution to financial institutions, or educational institutions, or a host of other industries (gambling/gaming, healthcare, hospitality, retailers, sport/concert venues, or others). You need someone with this industry experience.
Solution experience. Perhaps your communications require someone with 29 years of experience in identity, biometrics, and technology marketing, including experience with all five factors of authentication (and verification).
Communication experience. Perhaps you need to effectively communicate with your prospects in a customer focused, benefits-oriented way. (Content that is all about you and your features won’t win business.)
If you haven’t read a Bredemarket blog post before, or even if you have, you may not realize that this post is jam-packed with additional information well beyond the post itself. This post alone links to the following Bredemarket posts and other content. You may want to follow one or more of the 13 links below if you need additional information on a particular topic:
Here’s my latest brochure for the Bredemarket 400 Short Writing Service, my standard package to create your 400 to 600 word blog posts and LinkedIn articles. Be sure to check the Bredemarket 400 Short Writing Service page for updates.
When you have tens of thousands of people dying, then the only conscionable response is to ban automobiles altogether. Any other action or inaction is completely irresponsible.
After all, you can ask the experts who want us to ban biometrics because the technology can be spoofed and is racist, and therefore we shouldn’t use biometrics at all.
I disagree with the calls to ban biometrics, and I’ll go through three “biometrics are bad” examples and say why banning biometrics is NOT justified.
Even some identity professionals may not know about the old “gummy fingers” story from 20+ years ago.
And yes, I know that I’ve talked about Gender Shades ad nauseam, but it bears repeating again.
And voice deepfakes are always a good topic to discuss in our AI-obsessed world.
But the iris security was breached by a “dummy eye” just a month later, in the same way that gummy fingers and face masks have defeated other biometric technologies.
TECH5 participated in the 2023 LivDet Non-contact Fingerprint competition to evaluate its latest NN-based fingerprint liveness detection algorithm and has achieved first and second ranks in the “Systems” category for both single- and four-fingerprint liveness detection algorithms respectively. Both submissions achieved the lowest error rates on bonafide (live) fingerprints. TECH5 achieved 100% accuracy in detecting complex spoof types such as Ecoflex, Playdoh, wood glue, and latex with its groundbreaking Neural Network model that is only 1.5MB in size, setting a new industry benchmark for both accuracy and efficiency.
TECH5 excelled at detecting fake fingers in “non-contact” reading, where the fingers never touch a surface such as an optical platen. That’s appreciably harder than detecting fake fingers presented to contact devices.
I should note that LivDet is an independent assessment. As I’ve said before, independent technology assessments provide some guidance on the accuracy and performance of technologies.
So gummy fingers and future threats can be addressed as they arrive.
Let’s stop right there for a moment and address two items before we continue. Trust me; it’s important.
This study evaluated only three algorithms: one from IBM, one from Microsoft, and one from Face++. It did not evaluate the hundreds of other facial recognition algorithms that existed in 2018 when the study was released.
The study focused on gender classification and race classification. Back in those primitive innocent days of 2018, the world assumed that you could look at a person and tell whether the person was male or female, or tell the race of a person. (The phrase “self-identity” had not yet become popular, despite the Rachel Dolezal episode which happened before the Gender Shades study). Most importantly, the study did not address identification of individuals at all.
However, the study did find something:
While the companies appear to have relatively high accuracy overall, there are notable differences in the error rates between different groups. Let’s explore.
All companies perform better on males than females with an 8.1% – 20.6% difference in error rates.
All companies perform better on lighter subjects as a whole than on darker subjects as a whole with an 11.8% – 19.2% difference in error rates.
When we analyze the results by intersectional subgroups – darker males, darker females, lighter males, lighter females – we see that all companies perform worst on darker females.
What does this mean? It means that if you are using one of these three algorithms solely for the purpose of determining a person’s gender and race, some results are more accurate than others.
And all the stories about people such as Robert Williams being wrongfully arrested based upon faulty facial recognition results have nothing to do with Gender Shades. I’ll address this briefly (for once):
In the United States, facial recognition identification results should only be used by the police as an investigative lead, and no one should be arrested solely on the basis of facial recognition. (The city of Detroit stated that Williams’ arrest resulted from “sloppy” detective work.)
If you are using facial recognition for criminal investigations, your people had better have forensic face training. (Then they would know, as Detroit investigators apparently didn’t know, that the quality of surveillance footage is important.)
If you’re going to ban computerized facial recognition (even when only used as an investigative lead, and even when only used by properly trained individuals), consider the alternative of human witness identification. Or witness misidentification. Roeling Adams, Reggie Cole, Jason Kindle, Adam Riojas, Timothy Atkins, Uriah Courtney, Jason Rivera, Vondell Lewis, Guy Miles, Luis Vargas, and Rafael Madrigal can tell you how inaccurate (and racist) human facial recognition can be. See my LinkedIn article “Don’t ban facial recognition.”
Obviously, facial recognition has been the subject of independent assessments, including continuous bias testing by the National Institute of Standards and Technology as part of its Face Recognition Vendor Test (FRVT), specifically within the 1:1 verification testing. And NIST has measured the identification bias of hundreds of algorithms, not just three.
Richard Nixon never spoke those words in public, although it’s possible that he may have rehearsed William Safire’s speech, composed in case Apollo 11 had not resulted in one giant leap for mankind. As noted in the video, Nixon’s voice and appearance were spoofed using artificial intelligence to create a “deepfake.”
In early 2020, a branch manager of a Japanese company in Hong Kong received a call from a man whose voice he recognized—the director of his parent business. The director had good news: the company was about to make an acquisition, so he needed to authorize some transfers to the tune of $35 million. A lawyer named Martin Zelner had been hired to coordinate the procedures and the branch manager could see in his inbox emails from the director and Zelner, confirming what money needed to move where. The manager, believing everything appeared legitimate, began making the transfers.
What he didn’t know was that he’d been duped as part of an elaborate swindle, one in which fraudsters had used “deep voice” technology to clone the director’s speech…
Now I’ll grant that this is an example of human voice verification, which can be as inaccurate as the previously referenced human witness misidentification. But are computerized systems any better, and can they detect spoofed voices?
IDVoice Verified combines ID R&D’s core voice verification biometric engine, IDVoice, with our passive voice liveness detection, IDLive Voice, to create a high-performance solution for strong authentication, fraud prevention, and anti-spoofing verification.
Anti-spoofing verification technology is a critical component in voice biometric authentication for fraud prevention services. Before determining a match, IDVoice Verified ensures that the voice presented is not a recording.
This is only the beginning of the war against voice spoofing. Other companies will pioneer new advances that will tell the real voices from the fake ones.
As for independent testing:
ID R&D has participated in multiple ASVspoof tests, and performed well in them.
It ISN’T time for me to jump on the Apple Vision Pro bandwagon, because while Apple Vision Pro affects the biometric industry, it’s not a REVOLUTIONARY biometric event.
The four revolutionary biometric events in the 21st century
How do I define a “revolutionary biometric event”?
I define it as something that completely transforms the biometric industry.
When I mention three of the four revolutionary biometric events in the 21st century, you will understand what I mean.
9/11. After 9/11, orders of biometric devices skyrocketed, and biometrics were incorporated into identity documents such as passports and driver’s licenses. Who knows, maybe someday we’ll actually implement REAL ID in the United States. The latest extension of the REAL ID enforcement date moved it out to May 7, 2025. (Subject to change, of course.)
The Boston Marathon bombings, April 2013. After the bombings, the FBI was challenged in managing and analyzing countless hours of video evidence. Companies such as IDEMIA National Security Solutions, MorphoTrak, Motorola, Paravision, Rank One Computing, and many others have tirelessly worked to address this challenge, while ensuring that facial recognition results accurately identify perpetrators while protecting the privacy of others in the video feeds.
COVID-19, spring 2020 and beyond. COVID accelerated changes that were already taking place in the biometric industry. COVID prioritized mobile, remote, and contactless interactions and forced businesses to address issues that were not as critical previously, such as liveness detection.
These three are cataclysmic world events that had a profound impact on biometrics. The fourth one, which occurred after the Boston Marathon bombings but before COVID, was…the introduction of a product feature.
Touch ID, September 2013. When Apple introduced the iPhone 5s, it also introduced a new way to log in to the device. Rather than entering a passcode, iPhone 5s users could just use their finger to log in. The technical accomplishment was dwarfed by the legitimacy that this brought to using fingerprints for identification. Before 2013, attempts to implement fingerprint verification for benefits recipients were resisted because fingerprinting was something that criminals did. After September 2013, fingerprinting was something that the cool Apple kids did. The biometric industry changed overnight.
Of course, Apple followed Touch ID with Face ID, with adherents of the competing biometric modalities sparring over which was better. But Face ID wouldn’t have been accepted as widely if Touch ID hadn’t paved the way.
So why hasn’t iris verification taken off?
Iris verification has been around for decades (I remember Iridian before L-1; it’s now part of IDEMIA), but iris verification is nowhere near as popular in the general population as finger and face verification. There are two reasons for this:
Compared to other biometrics, irises are hard to capture. To capture a fingerprint, you can lay your finger on a capture device, or “slap” your four fingers on a capture device, or even “wave” your fingers across a capture device. Faces are even easier to capture; while older face capture systems required you to stand close to the camera, modern face devices can capture your face as you are walking by the camera, or even if you are some distance from the camera.
Compared to other biometrics, irises are expensive to capture. Many years ago, my then-employer developed a technological marvel, an iris capture device that could accurately capture irises for people of any height. Unfortunately, the technological marvel cost thousands upon thousands of dollars, and no customers were going to use it when they could acquire fingerprint and face capture devices that were much less costly.
So while people rushed to implement finger and face capture on phones and other devices, iris capture was reserved for narrow verticals that required iris accuracy.
The Apple Vision Pro is not the first headset that was ever created, but the iPhone wasn’t the first smartphone either. And coming late to the game doesn’t matter. Apple’s visibility among trendsetters ensures that when Apple releases something, people take notice.
According to Apple, Optic ID works by analyzing a user’s iris through LED light exposure and then comparing it with an enrolled Optic ID stored on the device’s Secure Enclave….Optic ID will be used for everything from unlocking Vision Pro to using Apple Pay in your own headspace.
So why did Apple incorporate Optic ID on this device and not the others?
There are multiple reasons, but one key reason is that the Vision Pro retails for US$3,499, which makes it easier for Apple to justify the cost of the iris components.
But the high price of the Vision Pro comes at…a price
However, that high price is also the reason why the Vision Pro is not going to revolutionize the biometric industry. CNET admitted that the Vision Pro is a niche item:
With Vision Pro, Apple is trying to establish what it believes will be the next major evolution of the personal computer. That’s a bigger goal than selling millions of units on launch day, and a shift like that doesn’t happen overnight, no matter what the price is. The version of Vision Pro that Apple launches next year likely isn’t the one that most people will buy.
Certainly Vision Pro and Optic ID have the potential to revolutionize the computing industry…in the long term. And as that happens, the use of iris biometrics will become more popular with the general public…in the long term.
But not today. You’ll have to wait a little longer for the next biometric revolution. And hopefully it won’t be a catastrophic event like three of the previous revolutions.
Latent prints are usually produced by sweat, skin debris or other sebaceous excretions that cover up the palmar surface of the fingertips. If a latent print is on the glass platen of the optical sensor and light is directed on it, this print can fool the optical scanner….
Capacitive sensors can be spoofed by using gelatin based soft artificial fingers.
There is another weakness of these types of readers. Some professions damage and wear away a person’s fingerprint ridges. Examples of professions whose practitioners exhibit worn ridges include construction workers and biometric content marketing experts (who, at least in the old days, handled a lot of paper).
The solution is to design a fingerprint reader that not only examines the surface of the finger, but goes deeper.
The specialty of multispectral sensors is that they can capture the features of the tissue that lie below the skin surface as well as the usual features on the finger surface. The features under the skin surface are able to provide a second representation of the pattern on the fingerprint surface.
Multispectral sensors are nothing new. When I worked for Motorola, Motorola Ventures had invested in a company called Lumidigm that produced multispectral fingerprint sensors; they were much more expensive than your typical optical or capacitive sensor, but were much more effective in capturing true fingerprints to the subdermal level.
“Gelatin based soft artificial fingers” aren’t the only way to fool a biometric sensor, whether you’re talking about a fingerprint sensor or some other sensor such as a face sensor.
Regardless of the biometric modality, the intent is the same; instead of capturing a true biometric from a person, the biometric sensor is fooled into capturing a fake biometric: an artificial finger, a face with a mask on it, or a face on a video screen (rather than a face of a live person).
This tomfoolery is called a “presentation attack” (because you’re attacking security with a fake presentation).
And an organization called iBeta is one of the testing facilities authorized to test in accordance with the ISO/IEC 30107-3 presentation attack detection standard and to determine whether a biometric reader can detect the “liveness” of a biometric sample.
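Presentation attack detection performance is commonly summarized with two error rates defined in ISO/IEC 30107-3: APCER (the share of attack presentations wrongly accepted as live) and BPCER (the share of genuine, bona fide presentations wrongly rejected as attacks). A minimal sketch of how those two numbers fall out of labeled test results — the input format here is my own, not anything from the standard or from iBeta:

```python
def pad_error_rates(results):
    """Compute APCER and BPCER from (is_attack, classified_as_live) pairs.

    APCER: fraction of attack presentations misclassified as bona fide.
    BPCER: fraction of bona fide presentations misclassified as attacks.
    """
    attacks = [live for is_attack, live in results if is_attack]
    bonafide = [live for is_attack, live in results if not is_attack]
    apcer = sum(attacks) / len(attacks) if attacks else 0.0
    bpcer = (sum(1 for live in bonafide if not live) / len(bonafide)
             if bonafide else 0.0)
    return apcer, bpcer
```

The two rates trade off against each other: tune a detector to reject every gummy finger and it will start rejecting some real ones too.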
(Friends, I’m not going to get into passive liveness and active liveness. That’s best saved for another day.)
[UPDATE 4/24/2024: I FINALLY ADDRESSED THE DIFFERENCE BETWEEN ACTIVE AND PASSIVE LIVENESS HERE.]
Multispectral liveness
While multispectral fingerprint readers aren’t the only fingerprint readers, or the only biometric readers, that iBeta has tested for liveness, the HID Global Lumidigm readers conform to Level 2 (the higher level) of iBeta testing.