This post concentrates on IDENTIFICATION perfection, or the ability to enjoy zero errors when identifying individuals.
The risk of claiming identification perfection (or any perfection) is that a SINGLE counter-example disproves the claim.
If you assert that your biometric solution offers 100% accuracy, a SINGLE false positive or false negative shatters the assertion.
If you claim that your presentation attack detection solution exposes deepfakes (face, voice, or other), then a SINGLE deepfake that gets past your solution disproves your claim.
And as for the pre-2009 claim that latent fingerprint examiners never make a mistake in an identification…well, ask Brandon Mayfield about that one.
In fact, I go so far as to avoid using the phrase “no two fingerprints are alike.” Many years ago (before 2009) in an International Association for Identification meeting, I heard someone justify the claim by saying, “We haven’t found a counter-example yet.” That doesn’t mean that we’ll NEVER find one.
At first glance, it appears that Motorola would be the last place to make a boneheaded mistake like that. After all, Motorola is known for its focus on quality.
But in actuality, Motorola was the perfect place to make such a mistake, since it was one of the champions of the “Six Sigma” philosophy (which targets a maximum of 3.4 defects per million opportunities). Motorola realized that manufacturing perfection is impossible, so manufacturers (and the people in Motorola’s weird Biometric Business Unit) should instead concentrate on reducing the error rate as much as possible.
So one misspelling could be tolerated, but I shudder to think what would have happened if I had misspelled “quality” a second time.
Back in August 2023, the U.S. General Services Administration published a blog post that included the following statement:
Login.gov is on a path to providing an IAL2-compliant identity verification service to its customers in a responsible, equitable way. Building on the strong evidence-based identity verification that Login.gov already offers, Login.gov is on a path to providing IAL2-compliant identity verification that ensures both strong security and broad and equitable access.
Login.gov is a secure sign in service used by the public to sign in to participating government agencies. Participating agencies will ask you to create a Login.gov account to securely access your information on their website or application.
You can use the same username and password to access any agency that partners with Login.gov. This streamlines your process and eliminates the need to remember multiple usernames and passwords.
Why would agencies implement Login.gov? Because the agencies want to protect their constituents’ information. If fraudsters capture the personally identifiable information (PII) of someone applying for government services, the breached government agency will face severe repercussions. Login.gov is supposed to protect its partner agencies from these nightmares.
How does Login.gov do this?
Sometimes you might use two-factor authentication consisting of a password and a second factor such as an SMS code or the use of an authentication app.
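(For the code-minded: here’s a minimal sketch of how an authentication app’s one-time codes work under the hood, using the open-source pyotp library. This illustrates the general TOTP mechanism only; I’m not claiming it reflects Login.gov’s actual implementation.)

```python
# Minimal sketch of the TOTP mechanism behind authentication apps,
# using the open-source pyotp library. Illustration only; this is
# NOT a description of Login.gov's actual implementation.
import pyotp

# At enrollment, the service generates a shared secret and hands it
# to the user's authentication app (typically via a QR code).
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# The app derives a six-digit code from the secret and the current time.
code = totp.now()
print("Current code:", code)

# At sign-in, the server recomputes the code from its own copy of the
# secret; valid_window=1 tolerates one 30-second step of clock drift.
print("Verified:", totp.verify(code, valid_window=1))
```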
In more critical cases, Login.gov requests a more reliable method of identification, such as a government-issued photo ID (driver’s license, passport, etc.).
The U.S. National Institute of Standards and Technology, in its publication NIST SP 800-63A, has defined “identity assurance levels” (IALs) that can be used when dealing with digital identities. It’s helpful to review how NIST has defined the IALs. (I’ll define the other acronyms as we go along.)
Assurance in a subscriber’s identity is described using one of three IALs:
IAL1: There is no requirement to link the applicant to a specific real-life identity. Any attributes provided in conjunction with the subject’s activities are self-asserted or should be treated as self-asserted (including attributes a CSP [Credential Service Provider] asserts to an RP [Relying Party]). Self-asserted attributes are neither validated nor verified.
IAL2: Evidence supports the real-world existence of the claimed identity and verifies that the applicant is appropriately associated with this real-world identity. IAL2 introduces the need for either remote or physically-present identity proofing. Attributes could be asserted by CSPs to RPs in support of pseudonymous identity with verified attributes. A CSP that supports IAL2 can support IAL1 transactions if the user consents.
IAL3: Physical presence is required for identity proofing. Identifying attributes must be verified by an authorized and trained CSP representative. As with IAL2, attributes could be asserted by CSPs to RPs in support of pseudonymous identity with verified attributes. A CSP that supports IAL3 can support IAL1 and IAL2 identity attributes if the user consents.
So in its simplest terms, IAL2 requires evidence of a verified credential so that an online person can be linked to a real-life identity. If someone says they’re “John Bredehoft” and fills in an online application to receive government services, IAL2 compliance helps to ensure that the person filling out the online application truly IS John Bredehoft, and not Bernie Madoff.
As more and more of us conduct business—including government business—online, IAL2 compliance is essential to reduce fraud.
One more thing about IAL2 compliance. The mere possession of a valid government-issued photo ID is NOT sufficient for IAL2 compliance. After all, Bernie Madoff may be using John Bredehoft’s driver’s license. To make sure that it’s John Bredehoft using John Bredehoft’s driver’s license, an additional check is needed.
This has been explained by ID.me, a private company that happens to compete with Login.gov to provide identity proofing services to government agencies.
Biometric comparison (e.g., selfie with liveness detection or fingerprint) of the strongest piece of evidence to the applicant
So you basically take the photo on the driver’s license and perform a 1:1 facial recognition comparison against the person presenting the license, ideally using liveness detection to make sure that the presented person is not a fake.
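To make that comparison step concrete, here’s a minimal sketch using the open-source face_recognition library. It illustrates a generic 1:1 comparison only (no liveness detection here), it is not Login.gov’s or any vendor’s actual matcher, and the image file names are invented.

```python
# Minimal 1:1 comparison sketch using the open-source face_recognition
# library. Generic illustration only (no liveness detection here), and
# the image file names are invented.
import face_recognition

# The portrait cropped from the driver's license, and the live selfie.
id_image = face_recognition.load_image_file("license_portrait.jpg")
selfie_image = face_recognition.load_image_file("live_selfie.jpg")

# face_encodings() returns one 128-dimensional vector per detected face.
id_encoding = face_recognition.face_encodings(id_image)[0]
selfie_encoding = face_recognition.face_encodings(selfie_image)[0]

# Lower distance = more similar; the library's default threshold is 0.6.
distance = face_recognition.face_distance([id_encoding], selfie_encoding)[0]
print("match" if distance < 0.6 else "no match", f"(distance {distance:.3f})")
```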
As part of an investigation that has run since last April (2022), GSA’s Office of the Inspector General found that GSA was billing agencies for IAL2-compliant services even though Login.gov did not meet IAL2 standards.
GSA knowingly billed over $10 million for services provided through contracts with other federal agencies, even though Login.gov is not IAL2 compliant, according to the watchdog.
If you listen closely, you can hear about all sorts of wonderful biometric identifiers. They range from the common (such as fingerprint ridges and detail) to the esoteric (my favorite was the 2013 story about Japanese car seats that captured butt prints).
Forget about fingerprints and faces and irises and DNA and gait recognition and butt prints. Tongue prints are the answer!
Benefits of tongue print biometrics
To its credit, the article does point out two benefits of using tongue prints as a biometric identifier.
Consent and privacy. Unlike fingerprints and irises (and faces), which are always exposed and can conceivably be captured without the person’s knowledge, the subject has to provide consent before a tongue image is captured. For the most part, tongues are privacy-perfect.
Liveness. The article claims that “sticking out one’s tongue is an undeniable ‘proof of life.'” Perhaps that’s an exaggeration, but it is admittedly much harder to fake a tongue than it is to fake a finger or a face.
Are tongues unique?
But the article also makes these claims.
Two main attributes are measured for a tongue print. First is the tongue shape, as the shape of the tongue is unique to everyone.
The other notable feature is the texture of the tongue. Tongues consist of a number of ridges, wrinkles, seams and marks that are unique to every individual.
There is serious doubt (if not outright denial) that everyone has a unique face (although NIST is investigating this via the FRTE Twins Demonstration).
But at least these modalities are under study. Has anyone conducted a rigorous study to prove or disprove the uniqueness of tongues? By “rigorous,” I mean a study that has evaluated millions of tongues in the same way that NIST has evaluated millions of fingerprints, faces, and irises.
I did find this 2017 tongue identification pilot study, but it only included a whopping 20 participants. And the study authors (who are always seeking funding anyway) admitted that “large-scale studies are required to validate the results.”
Conclusion
So if a police officer tells you to stick out your tongue for identification purposes, think twice.
If you ask any one of us in the identity verification industry, we’ll tell you how identity verification proves that you know who is accessing your service.
During the identity verification/onboarding step, one common technique is to capture the live face of the person who is being onboarded, then compare that to the face captured from the person’s government identity document. As long as you have assurance that (a) the face is live and not a photo, and (b) the identity document has not been tampered with, you positively know who you are onboarding.
The authentication step usually captures a live face and compares it to the face that was captured during onboarding, thus positively showing that the right person is accessing the previously onboarded account.
Sounds like the perfect solution, especially in industries that rely on age verification to ensure that people are old enough to access the service.
Therefore, if you are employing robust identity verification and authentication that includes age verification, this should never happen.
Eduardo Montanari, who manages delivery logistics at a burger shop north of São Paulo, has noticed a pattern: Every time an order pickup is assigned to a female driver, there’s a good chance the worker is a minor.
On YouTube, a tutorial — one of many — explains “how to deliver as a minor.” It has over 31,000 views. “You have to create an account in the name of a person who’s the right age. I created mine in my mom’s name,” says a boy, who identifies himself as a minor in the video.
Once a cooperative parent or older sibling agrees to help, the account is created in the older person’s name, the older person’s face and identity document is used to create the account, and everything is valid.
Outsmarting authentication
Yes, but what about authentication?
That’s why it’s helpful to use a family member, or someone who lives in the minor’s home.
Let’s say little Maria is at home, doing her homework, when her gig economy app rings with a delivery request. Now Maria was smart enough to have her older sister Irene or her mama Cecile perform the onboarding with the delivery app. If she’s at home, she can go to Irene or Cecile, have them perform the authentication, and then she’s off on her bike to make money.
(Alternatively, if the app does not support liveness detection, Maria can just hold a picture of Irene or Cecile up to the camera and authenticate.)
The onboarding process was completed by the account holder.
The authentication was completed by the account holder.
But the account holder isn’t the one that’s actually using the service. Once authentication is complete, anyone can access the service.
So how do you stop underage gig economy use?
According to Rest of World, one possible solution is to tattle on underage delivery people. If you see something, say something.
But what’s the incentive for a restaurant owner or delivery recipient to report that their deliveries are being performed by a kid?
“The feeling we have is that, at least this poor boy is working. I know this is horrible, but here in Brazil we end up seeing it as an opportunity … It’s ridiculous,” (psychologist Regiane Couto) said.
A much better solution is to replace one-time authentication with continuous authentication, or at least to be smarter about authentication. For example, a gig delivery worker could be required to authenticate at multiple points in the process:
When the worker receives the delivery request.
When the worker arrives at the restaurant.
When the worker makes the delivery.
It’s too difficult to drag big sister Irene or mama Cecile to ALL of these points.
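For illustration, here’s a sketch of what such checkpoint authentication might look like on the back end. The two stub functions stand in for a real biometric SDK with liveness detection; I’m not describing any particular vendor’s API.

```python
# Hypothetical sketch of multi-checkpoint delivery authentication.
# The two stub functions stand in for a real biometric SDK with
# liveness detection; this is not any particular vendor's API.
from datetime import datetime, timezone

def capture_live_selfie() -> bytes:
    """Stub: a real SDK would capture a camera frame and check liveness."""
    return b"selfie-bytes"

def faces_match(enrolled_template: bytes, selfie: bytes) -> bool:
    """Stub: a real SDK would run a 1:1 biometric comparison."""
    return enrolled_template == selfie

CHECKPOINTS = ("request_accepted", "arrived_at_restaurant", "delivery_completed")

def authenticate_checkpoint(worker_id: str, checkpoint: str, template: bytes) -> dict:
    selfie = capture_live_selfie()
    return {
        "worker_id": worker_id,
        "checkpoint": checkpoint,
        "authenticated": faces_match(template, selfie),
        # The timestamp doubles as the analytics data mentioned below.
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

# One authentication event per checkpoint, not just one at account sign-in.
enrolled = b"selfie-bytes"  # stub template; matches the stub capture
events = [authenticate_checkpoint("worker-123", cp, enrolled) for cp in CHECKPOINTS]
```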
As an added bonus, these authentications provide timestamps of critical points in the delivery process, which the delivery company and/or restaurant can use for their analytics.
Problem solved.
Except that little Maria doesn’t have any excuse and has to complete her homework.
I tend to view presentation attack detection (PAD) through the lens of iBeta or, occasionally, BixeLab. But I need to remind myself that these are not the only entities examining PAD.
A recent paper authored by Koushik Srivatsan, Muzammal Naseer, and Karthik Nandakumar of the Mohamed Bin Zayed University of Artificial Intelligence (MBZUAI) addresses PAD from a research perspective. I honestly don’t understand the research, but perhaps you do.
Flip Wilson spoofing his natural appearance by portraying Geraldine. Some were unable to detect the attack. (Photo: NBC Television, public domain, via Wikimedia Commons: https://commons.wikimedia.org/w/index.php?curid=16476809)
Face anti-spoofing (FAS) or presentation attack detection is an essential component of face recognition systems deployed in security-critical applications. Existing FAS methods have poor generalizability to unseen spoof types, camera sensors, and environmental conditions. Recently, vision transformer (ViT) models have been shown to be effective for the FAS task due to their ability to capture long-range dependencies among image patches. However, adaptive modules or auxiliary loss functions are often required to adapt pre-trained ViT weights learned on large-scale datasets such as ImageNet. In this work, we first show that initializing ViTs with multimodal (e.g., CLIP) pre-trained weights improves generalizability for the FAS task, which is in line with the zero-shot transfer capabilities of vision-language pre-trained (VLP) models. We then propose a novel approach for robust cross-domain FAS by grounding visual representations with the help of natural language. Specifically, we show that aligning the image representation with an ensemble of class descriptions (based on natural language semantics) improves FAS generalizability in low-data regimes. Finally, we propose a multimodal contrastive learning strategy to boost feature generalization further and bridge the gap between source and target domains. Extensive experiments on three standard protocols demonstrate that our method significantly outperforms the state-of-the-art methods, achieving better zero-shot transfer performance than five-shot transfer of “adaptive ViTs”.
CLIP is the first multimodal (in this case, vision and text) model tackling computer vision and was recently released by OpenAI on January 5, 2021….CLIP is a bridge between computer vision and natural language processing.
Sadly, Brems didn’t address ViT, so I turned to Chinmay Bhalerao.
Vision Transformers work by first dividing the image into a sequence of patches. Each patch is then represented as a vector. The vectors for each patch are then fed into a Transformer encoder. The Transformer encoder is a stack of self-attention layers. Self-attention is a mechanism that allows the model to learn long-range dependencies between the patches. This is important for image classification, as it allows the model to learn how the different parts of an image contribute to its overall label.
The output of the Transformer encoder is a sequence of vectors. These vectors represent the features of the image. The features are then used to classify the image.
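For the code-minded, here’s a toy PyTorch sketch of the process Bhalerao describes: cut the image into patches, embed each patch as a vector, and run the sequence through a Transformer encoder. It illustrates the architecture only (a real ViT also adds positional embeddings and a class token), and it is not the MBZUAI researchers’ FAS model.

```python
# Toy PyTorch sketch of a Vision Transformer's front end: cut the image
# into patches, embed each patch as a vector, run a Transformer encoder.
# Architecture illustration only (a real ViT also adds positional
# embeddings and a class token); this is not the MBZUAI FAS model.
import torch
import torch.nn as nn

patch_size, dim = 16, 128
image = torch.randn(1, 3, 224, 224)  # one 224x224 RGB image

# A strided convolution is the standard way to "divide the image into
# patches and represent each patch as a vector": 14x14 = 196 patch tokens.
to_patches = nn.Conv2d(3, dim, kernel_size=patch_size, stride=patch_size)
tokens = to_patches(image).flatten(2).transpose(1, 2)  # shape (1, 196, 128)

# Stacked self-attention layers let every patch attend to every other
# patch, capturing the long-range dependencies described above.
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True),
    num_layers=2,
)
features = encoder(tokens)  # (1, 196, 128): per-patch feature vectors
print(features.shape)
```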
Well, the FATE side of the house has released its first two studies, including one entitled “Face Analysis Technology Evaluation (FATE) Part 10: Performance of Passive, Software-Based Presentation Attack Detection (PAD) Algorithms” (NIST Internal Report NIST IR 8491; PDF here).
Back in 2002, this news WAS really “scary,” since it suggested that you could access a fingerprint reader-protected site with something that wasn’t a finger. Gelatin. A piece of metal. A photograph.
Except that the fingerprint reader world didn’t stand still after 2002, and the industry developed ways to detect spoofed fingers.
I and countless others have spent the last several years referring to the National Institute of Standards and Technology’s Face Recognition Vendor Test, or FRVT. I guess some people have spent almost a quarter century referring to FRVT, because the term has been in use since 1999.
Starting now, you’re not supposed to use the FRVT acronym any more.
To bring clarity to our testing scope and goals, what was formerly known as FRVT has been rebranded and split into FRTE (Face Recognition Technology Evaluation) and FATE (Face Analysis Technology Evaluation). Tracks that involve the processing and analysis of images will run under the FATE activity, and tracks that pertain to identity verification will run under FRTE. All existing participation and submission procedures remain unchanged.
The change actually makes sense, since tasks such as age estimation and presentation attack detection (liveness detection) do not directly relate to the identification of individuals.
Us old folks just have to get used to the change.
I just hope that the new “FATE” acronym doesn’t mean that some algorithms are destined to perform better than others.
Does your firm fight crooks who try to fraudulently use synthetic identities? If so, how do you communicate your solution?
This post explains what synthetic identities are (with examples), tells four ways to detect synthetic identities, and closes by providing an answer to the communication question.
While this post is primarily intended for identity firms who can use Bredemarket’s marketing and writing services, anyone else who is interested in synthetic identities can read along.
What are synthetic identities?
To explain what synthetic identities are, let me start by telling you about Jason Brown.
Jason Brown wasn’t Jason Brown
You may not have heard of him unless you lived in Atlanta, Georgia in 2019, near the apartment he rented.
Jason Brown’s renting of an apartment isn’t all that unusual.
If you were to visit Brown’s apartment in February 2019, you would find credit cards and financial information for Adam M. Lopez and Carlos Rivera.
Now that’s a little unusual, especially since Lopez and Rivera never existed.
For that matter, Jason Brown never existed either.
A Georgia man was sentenced Sept. 1 (2022) to more than seven years in federal prison for participating in a nationwide fraud ring that used stolen social security numbers, including those belonging to children, to create synthetic identities used to open lines of credit, create shell companies, and steal nearly $2 million from financial institutions….
Cato joined conspiracies to defraud banks and illegally possess credit cards. Cato and his co-conspirators created “synthetic identities” by combining false personal information such as fake names and dates of birth with the information of real people, such as their social security numbers. Cato and others then used the synthetic identities and fake ID documents to open bank and credit card accounts at financial institutions. Cato and his co-conspirators used the unlawfully obtained credit cards to fund their lifestyles.
Talking about synthetic identity at Victoria Gardens
Here’s a video that I created on Saturday that describes, at a very high level, how synthetic identities can be used fraudulently. People who live near Rancho Cucamonga, California will recognize the Victoria Gardens shopping center, proof that synthetic identity theft can occur far away from Georgia.
Note that synthetic identity theft is different from stealing someone else’s existing identity. In this case, a new identity is created.
So how do you catch these fraudsters?
Catching the identity synthesizers
If you’re renting out an apartment, and Jason Brown shows you his driver’s license and provides his Social Security Number, how can you detect if Brown is a crook? There are four methods to verify that Jason Brown exists, and that he’s the person renting your apartment.
Method One: Private Databases
One way to check Jason Brown’s story is to perform credit checks and other data investigations using financial databases.
Did Jason Brown just spring into existence within the past year, with no earlier credit record? That seems suspicious.
Does Jason Brown’s credit record appear TOO clean? That seems suspicious.
Does Jason Brown share information such as a common social security number with other people? Are any of those other identities also fraudulent? That is DEFINITELY suspicious.
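To make that third signal concrete, here’s a toy sketch of a shared-SSN check. Real bureau-grade detection is far more sophisticated, and the records below are invented.

```python
# Toy sketch of the shared-SSN signal: flag any SSN that appears under
# more than one name. Real bureau-grade detection is far more
# sophisticated; the records below are invented.
from collections import defaultdict

records = [
    {"name": "Jason Brown",   "ssn": "123-45-6789"},
    {"name": "Adam M. Lopez", "ssn": "123-45-6789"},  # same SSN, new name
    {"name": "Carlos Rivera", "ssn": "987-65-4321"},
]

names_by_ssn = defaultdict(set)
for record in records:
    names_by_ssn[record["ssn"]].add(record["name"])

for ssn, names in names_by_ssn.items():
    if len(names) > 1:
        print(f"DEFINITELY suspicious: SSN {ssn} shared by {sorted(names)}")
```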
This is one way that many firms detect synthetic identities, and for some firms it is the ONLY way they detect synthetic identities. And these firms have to tell their story to their prospects.
If your firm offers a tool to verify identities via private databases, how do you let your prospects know the benefits of your tool, and why your solution is better than all other solutions?
Method Two: Check That Driver’s License (or other government document)
What about that driver’s license that Brown presented? There are a wide variety of software tools that can check the authenticity of driver’s licenses, passports, and other government-issued documents. Some of these tools existed back in 2019 when “Brown” was renting his apartment, and a number of them exist today.
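One small piece of what such tools do is read the PDF417 barcode on the back of a U.S. driver’s license, whose data elements follow the AAMVA card design standard. Here’s a simplified sketch that pulls a few fields out of an already-decoded barcode payload. The payload string is invented, the real format includes a header and more structure, and no barcode parse can substitute for checking the document’s physical security features.

```python
# Simplified sketch: pull a few AAMVA data elements (family name, first
# name, date of birth, expiration, license number) from an already-decoded
# PDF417 payload. The payload below is invented and omits the real format's
# header; physical security features still need their own checks.
AAMVA_FIELDS = {
    "DCS": "family name",
    "DAC": "first name",
    "DBB": "date of birth",  # MMDDCCYY in U.S. jurisdictions
    "DBA": "expiration date",
    "DAQ": "license number",
}

SAMPLE_PAYLOAD = "DCSBROWN\nDACJASON\nDBB01151985\nDBA08042025\nDAQD1234567"

def parse_aamva(payload: str) -> dict:
    parsed = {}
    for line in payload.splitlines():
        tag, value = line[:3], line[3:]  # three-letter element ID, then data
        if tag in AAMVA_FIELDS:
            parsed[AAMVA_FIELDS[tag]] = value
    return parsed

print(parse_aamva(SAMPLE_PAYLOAD))
# A real tool would then cross-check the barcode data against the printed
# text and confirm the expiration date is in the future.
```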
Maybe your firm has created such a tool, or uses a tool from a third party.
If your firm offers this capability, how can your prospects learn about its benefits, and why your solution excels?
Method Three: Check Government Databases
Checking the authenticity of a government-issued document may not be enough, since the document itself may be legitimate, but the implied credentials may no longer be legitimate. For example, if my California driver’s license expires in 2025, but I move to Minnesota in 2023 and get a new license, my California driver’s license is no longer valid, even though I have it in my possession.
Why not check the database of the Department of Motor Vehicles (or the equivalent in your state) to see if there is still an active driver’s license for that person?
The American Association of Motor Vehicle Administrators (AAMVA) maintains a Driver’s License Data Verification (DLDV) Service in which participating jurisdictions allow other entities to verify the license data for individuals. Your firm may be able to access the DLDV data for selected jurisdictions, providing an extra identity verification tool.
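Conceptually, a DLDV check is a match/no-match query: you submit the data printed on the license, and the jurisdiction answers whether it matches the record on file. The sketch below makes such a query with the requests library against an entirely hypothetical endpoint; AAMVA specifies the real DLDV interface and message formats to participating organizations.

```python
# Conceptual sketch of a DLDV-style match/no-match query. The endpoint,
# payload layout, and response fields are HYPOTHETICAL illustrations;
# AAMVA specifies the real DLDV interface to participating organizations.
import requests

def verify_license(license_number: str, last_name: str, dob: str) -> bool:
    response = requests.post(
        "https://dldv.example.com/v1/verify",  # hypothetical endpoint
        json={
            "licenseNumber": license_number,
            "lastName": last_name,
            "dateOfBirth": dob,
        },
        timeout=10,
    )
    response.raise_for_status()
    # DLDV-style services return match indicators, not the record itself.
    return response.json().get("match", False)
```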
If your firm offers this capability, how can your prospects learn where it is available, what its benefits are, and why it is an important part of your solution?
Method Four: Conduct the “Who You Are” Test
There is one more way to confirm that a person is real, and that is to check the person. Literally.
If someone on a smartphone or videoconference says that they are Jason Brown, how do you know that it’s the real Jason Brown and not Jim Smith, or a previous recording or simulation of Jason Brown?
This is where tools such as facial recognition and liveness detection come to play.
You can ensure that the live face matches any face on record.
You can also confirm that the face is truly a live face.
In addition to these two tests, you can compare the face against the face on the presented driver’s license or passport to offer additional confirmation of true identity.
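Putting the three checks together, the decision logic might look something like this sketch. The two stub scoring functions stand in for a vendor’s liveness and matching engines, and the thresholds are illustrative, not industry standards.

```python
# Hypothetical sketch of the "who you are" decision logic. The two stub
# scoring functions stand in for a vendor's liveness and face matching
# engines; the thresholds are illustrative, not industry standards.
def liveness_score(selfie: bytes) -> float:
    return 0.97  # stub: a PAD engine scores "is this a live face?"

def match_score(face_a: bytes, face_b: bytes) -> float:
    return 0.91  # stub: a matcher scores 1:1 face similarity

def verify_person(selfie: bytes, enrolled_face: bytes, id_portrait: bytes) -> bool:
    is_live = liveness_score(selfie) >= 0.90                   # truly live?
    matches_record = match_score(selfie, enrolled_face) >= 0.80
    matches_document = match_score(selfie, id_portrait) >= 0.80
    return is_live and matches_record and matches_document

print(verify_person(b"selfie", b"enrolled-face", b"id-portrait"))  # True
```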
Now some companies offer facial recognition, others offer liveness detection, others match the live face to a face on a government ID, and many companies offer two or three of these capabilities.
One more time: if your firm offers these capabilities—either your own or someone else’s—what are the benefits of your algorithms? (For example, are they more accurate than competing algorithms? And under what conditions?) And why is your solution better than the others?
This is for the firms who fight synthetic identities
While most of this post is of general interest to anyone dealing with synthetic identities, this part of this post is specifically addressed to identity and biometric firms who provide synthetic identity-fighting solutions.
When you communicate about your solutions, your communicator needs to have certain types of experience.
Industry experience. Perhaps you sell your identity solution to financial institutions, educational institutions, or a host of other industries (gambling/gaming, healthcare, hospitality, retail, sport/concert venues, and others). You need someone with this industry experience.
Solution experience. Perhaps your communications require someone with 29 years of experience in identity, biometrics, and technology marketing, including experience with all five factors of authentication (and verification).
Communication experience. Perhaps you need to effectively communicate with your prospects in a customer focused, benefits-oriented way. (Content that is all about you and your features won’t win business.)
If you haven’t read a Bredemarket blog post before, or even if you have, you may not realize that this post is jam-packed with additional information well beyond the post itself. This post alone links to the following Bredemarket posts and other content. You may want to follow one or more of the 13 links below if you need additional information on a particular topic:
Here’s my latest brochure for the Bredemarket 400 Short Writing Service, my standard package to create your 400 to 600 word blog posts and LinkedIn articles. Be sure to check the Bredemarket 400 Short Writing Service page for updates.
When you have tens of thousands of people dying, then the only conscionable response is to ban automobiles altogether. Any other action or inaction is completely irresponsible.
After all, you can ask the experts who want us to ban biometrics because they can be spoofed and are racist, and who therefore conclude that we shouldn’t use biometrics at all.
I disagree with the calls to ban biometrics, and I’ll go through three “biometrics are bad” examples and say why banning biometrics is NOT justified.
Even some identity professionals may not know about the old “gummy fingers” story from 20+ years ago.
And yes, I know that I’ve talked about Gender Shades ad nauseum, but it bears repeating again.
And voice deepfakes are always a good topic to discuss in our AI-obsessed world.
But the iris security was breached by a “dummy eye” just a month later, in the same way that gummy fingers and face masks have defeated other biometric technologies.
Back in 2002, this news WAS really “scary,” since it suggested that you could access a fingerprint reader-protected site with something that wasn’t a finger. Gelatin. A piece of metal. A photograph.
TECH5 participated in the 2023 LivDet Non-contact Fingerprint competition to evaluate its latest NN-based fingerprint liveness detection algorithm and has achieved first and second ranks in the “Systems” category for both single- and four-fingerprint liveness detection algorithms respectively. Both submissions achieved the lowest error rates on bonafide (live) fingerprints. TECH5 achieved 100% accuracy in detecting complex spoof types such as Ecoflex, Playdoh, wood glue, and latex with its groundbreaking Neural Network model that is only 1.5MB in size, setting a new industry benchmark for both accuracy and efficiency.
TECH5 excelled in detecting fake fingers for “non-contact” reading, where the fingers don’t even touch a surface such as an optical scanner’s platen. That’s appreciably harder than detecting fake fingers that touch contact devices.
I should note that LivDet is an independent assessment. As I’ve said before, independent technology assessments provide some guidance on the accuracy and performance of technologies.
So gummy fingers and future threats can be addressed as they arrive.
Let’s stop right there for a moment and address two items before we continue. Trust me; it’s important.
This study evaluated only three algorithms: one from IBM, one from Microsoft, and one from Face++. It did not evaluate the hundreds of other facial recognition algorithms that existed in 2018 when the study was released.
The study focused on gender classification and race classification. Back in those primitive, innocent days of 2018, the world assumed that you could look at a person and tell whether the person was male or female, or tell the race of a person. (The phrase “self-identity” had not yet become popular, despite the Rachel Dolezal episode, which happened before the Gender Shades study.) Most importantly, the study did not address identification of individuals at all.
However, the study did find something:
While the companies appear to have relatively high accuracy overall, there are notable differences in the error rates between different groups. Let’s explore.
All companies perform better on males than females with an 8.1% – 20.6% difference in error rates.
All companies perform better on lighter subjects as a whole than on darker subjects as a whole with an 11.8% – 19.2% difference in error rates.
When we analyze the results by intersectional subgroups – darker males, darker females, lighter males, lighter females – we see that all companies perform worst on darker females.
What does this mean? It means that if you are using one of these three algorithms solely for the purpose of determining a person’s gender and race, some results are more accurate than others.
And all the stories about people such as Robert Williams being wrongfully arrested based upon faulty facial recognition results have nothing to do with Gender Shades. I’ll address this briefly (for once):
In the United States, facial recognition identification results should only be used by the police as an investigative lead, and no one should be arrested solely on the basis of facial recognition. (The city of Detroit stated that Williams’ arrest resulted from “sloppy” detective work.)
If you are using facial recognition for criminal investigations, your people had better have forensic face training. (Then they would know, as Detroit investigators apparently didn’t know, that the quality of surveillance footage is important.)
If you’re going to ban computerized facial recognition (even when only used as an investigative lead, and even when only used by properly trained individuals), consider the alternative of human witness identification. Or witness misidentification. Roeling Adams, Reggie Cole, Jason Kindle, Adam Riojas, Timothy Atkins, Uriah Courtney, Jason Rivera, Vondell Lewis, Guy Miles, Luis Vargas, and Rafael Madrigal can tell you how inaccurate (and racist) human facial recognition can be. See my LinkedIn article “Don’t ban facial recognition.”
Obviously, facial recognition has been the subject of independent assessments, including continuous bias testing by the National Institute of Standards and Technology as part of its Face Recognition Vendor Test (FRVT), specifically within the 1:1 verification testing. And NIST has measured the identification bias of hundreds of algorithms, not just three.
Richard Nixon never spoke those words in public, although it’s possible that he may have rehearsed William Safire’s speech, composed in case Apollo 11 had not resulted in one giant leap for mankind. As noted in the video, Nixon’s voice and appearance were spoofed using artificial intelligence to create a “deepfake.”
In early 2020, a branch manager of a Japanese company in Hong Kong received a call from a man whose voice he recognized—the director of his parent business. The director had good news: the company was about to make an acquisition, so he needed to authorize some transfers to the tune of $35 million. A lawyer named Martin Zelner had been hired to coordinate the procedures and the branch manager could see in his inbox emails from the director and Zelner, confirming what money needed to move where. The manager, believing everything appeared legitimate, began making the transfers.
What he didn’t know was that he’d been duped as part of an elaborate swindle, one in which fraudsters had used “deep voice” technology to clone the director’s speech…
Now I’ll grant that this is an example of human voice verification, which can be as inaccurate as the previously referenced human witness misidentification. But are computerized systems any better, and can they detect spoofed voices?
IDVoice Verified combines ID R&D’s core voice verification biometric engine, IDVoice, with our passive voice liveness detection, IDLive Voice, to create a high-performance solution for strong authentication, fraud prevention, and anti-spoofing verification.
Anti-spoofing verification technology is a critical component in voice biometric authentication for fraud prevention services. Before determining a match, IDVoice Verified ensures that the voice presented is not a recording.
This is only the beginning of the war against voice spoofing. Other companies will pioneer new advances that will tell the real voices from the fake ones.
As for independent testing:
ID R&D has participated in multiple ASVspoof tests, and performed well in them.