Defeating Synthetic Identity Fraud

I’ve talked about synthetic identity fraud a lot in the Bredemarket blog over the past several years. I’ll summarize a few examples in this post, talk about how to fight synthetic identity fraud, and wrap up by suggesting how to get the word out about your anti-synthetic identity solution.

But first let’s look at a few examples of synthetic identity.

Synthetic identities pop up everywhere

As far back as December 2020, I discussed Kris Rides' encounter with a synthetic employee from a company with a number of synthetic employees (many of whom were young women).

More recently, I discussed attempts to create synthetic identities using gummy fingers and fake/fraudulent voices. The topic of deepfakes continues to be hot across all biometric modalities.

I shared a video I created about synthetic identities and their use to create fraudulent financial identities.

From https://www.youtube.com/watch?v=oDrSBlDJVCk.

I even discussed Kelly Shepherd, the fake vegan mom created by HBO executive Casey Bloys to respond to HBO critics.

And that’s just some of what Bredemarket has written about synthetic identity. You can find the complete list of my synthetic identity posts here.

So what? You must fight!

It isn’t enough to talk about the fact that synthetic identities exist, sometimes for innocent reasons and sometimes for outright fraudulent ones.

You need to communicate how to fight synthetic identities, especially if your firm offers an anti-fraud solution.

Here are four ways to fight synthetic identities:

  1. Checking the purported identity against private databases, such as credit records.
  2. Checking the person’s driver’s license or other government document to ensure it’s real and not a fake.
  3. Checking the purported identity against government databases, such as driver’s license databases. (What if the person presents a real driver’s license, but that license was subsequently revoked?)
  4. Performing a “who you are” biometric test against the purported identity.

If you conduct all four tests, then you have used multiple factors of authentication to confirm that the person is who they say they are. If the identity is synthetic, chances are the purported person will fail at least one of these tests.
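The four checks above can be sketched as a simple pipeline. This is a minimal illustration, not a real vendor API; every check function and every field of the `identity` record is a hypothetical stand-in for an actual data source.

```python
# Hypothetical sketch of combining the four anti-synthetic-identity checks.

def check_private_databases(identity):
    # e.g., look for a credit history consistent with the claimed identity
    return identity.get("credit_history_found", False)

def check_document_authenticity(identity):
    # e.g., inspect the security features of the presented driver's license
    return identity.get("document_authentic", False)

def check_government_databases(identity):
    # e.g., confirm the license exists and has not been revoked
    return identity.get("license_valid", False)

def check_biometric_match(identity):
    # e.g., compare a live face capture against the identity of record
    return identity.get("biometric_match", False)

CHECKS = [
    check_private_databases,
    check_document_authenticity,
    check_government_databases,
    check_biometric_match,
]

def is_likely_genuine(identity):
    """A synthetic identity will usually fail at least one of the checks."""
    return all(check(identity) for check in CHECKS)
```

A synthetic identity may have a carefully cultivated credit history, but it rarely survives the document, database, and biometric checks all at once, which is the whole point of layering the factors.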

Do you fight synthetic identity fraud?

If you fight synthetic identity fraud, you should let people know about your solution.

Perhaps you can use Bredemarket, the identity content marketing expert. I work with you (and I have worked with others) to ensure that your content meets your awareness, consideration, and/or conversion goals.

How can I work with you to communicate your firm’s anti-synthetic identity message? For example, I can apply my identity/biometric blog expert knowledge to create an identity blog post for your firm. Blog posts provide an immediate business impact to your firm, and are easy to reshare and repurpose. For B2B needs, LinkedIn articles provide similar benefits.

If Bredemarket can help your firm convey your message about synthetic identity, let’s talk.

Reasonable Minds Vehemently Disagree On Three Biometric Implementation Choices

(Part of the biometric product marketing expert series)

There are a LOT of biometric companies out there.

The Prism Project’s home page at https://www.the-prism-project.com/, illustrating the Biometric Digital Identity Prism as of March 2024. From Acuity Market Intelligence and FindBiometrics.

With over 100 firms in the biometric industry, their offerings are going to naturally differ—even if all the firms are TRYING to copy each other and offer “me too” solutions.

Will Ferrell and Chad Smith, or maybe vice versa. Fair use. From https://www.billboard.com/music/music-news/will-ferrell-chad-smith-red-hot-benefit-chili-peppers-6898348/, originally from NBC.

I’ve worked for over a dozen biometric firms as an employee or independent contractor, and I’ve analyzed over 80 biometric firms in competitive intelligence exercises, so I’m well aware of the vast implementation differences between the biometric offerings.

Some of the implementation differences provoke vehement disagreements between biometric firms regarding which choice is correct. Yes, we FIGHT.

MMA stands for Messy Multibiometric Authentication. Public Domain, https://commons.wikimedia.org/w/index.php?curid=607428

Let’s look at three (out of many) of these implementation differences and see how they affect YOUR company’s content marketing efforts—whether you’re engaging in identity blog post writing, or some other content marketing activity.

The three biometric implementation choices

Firms that develop biometric solutions make (or should make) the following choices when implementing their solutions.

  1. Presentation attack detection. Assuming the solution incorporates presentation attack detection (liveness detection), or a way of detecting whether the presented biometric is real or a spoof, the firm must decide whether to use active or passive liveness detection.
  2. Age assurance. When choosing age assurance solutions that determine whether a person is old enough to access a product or service, the firm must decide whether or not age estimation is acceptable.
  3. Biometric modality. Finally, the firm must choose which biometric modalities to support. While there are a number of modality wars involving all the biometric modalities, this post is going to limit itself to the question of whether or not voice biometrics are acceptable.

I will address each of these questions in turn, highlighting the pros and cons of each implementation choice. After that, we’ll see how this affects your firm’s content marketing.

Choice 1: Active or passive liveness detection?

Back in June 2023 I defined what a “presentation attack” is.

(I)nstead of capturing a true biometric from a person, the biometric sensor is fooled into capturing a fake biometric: an artificial finger, a face with a mask on it, or a face on a video screen (rather than a face of a live person).

This tomfoolery is called a “presentation attack” (because you’re attacking security with a fake presentation).

Then I talked about standards and testing.

But the standards folks have developed ISO/IEC 30107-3:2023, Information technology — Biometric presentation attack detection — Part 3: Testing and reporting.

And an organization called iBeta is one of the testing facilities authorized to test in accordance with the standard and to determine whether a biometric reader can detect the “liveness” of a biometric sample.

(Friends, I’m not going to get into passive liveness and active liveness. That’s best saved for another day.)

Well…that day is today.

A balanced assessment

Now I could cite a firm using active liveness detection to say why it’s great, or I could cite a firm using passive liveness detection to say why it’s great. But perhaps the most balanced assessment comes from facia, which offers both types of liveness detection. How does facia define the two types of liveness detection?

Active liveness detection, as the name suggests, requires some sort of activity from the user. If a system is unable to detect liveness, it will ask the user to perform some specific actions such as nodding, blinking or any other facial movement. This allows the system to detect natural movements and separate it from a system trying to mimic a human being….

Passive liveness detection operates discreetly in the background, requiring no explicit action from the user. The system’s artificial intelligence continuously analyses facial movements, depth, texture, and other biometric indicators to detect an individual’s liveness.
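The active flow facia describes can be sketched as a challenge-response loop. This is purely an illustrative sketch: the challenge names and the `detect_action` callback stand in for a real computer-vision pipeline.

```python
import random

# Hypothetical challenge set for an active liveness check.
CHALLENGES = ["blink", "nod", "smile", "turn_head_left"]

def run_active_liveness(detect_action, rounds=2, rng=random):
    """Ask the user to perform randomly chosen actions.

    detect_action(challenge) stands in for the vision model that reports
    whether the requested facial movement was actually observed.
    """
    for _ in range(rounds):
        challenge = rng.choice(CHALLENGES)
        if not detect_action(challenge):
            return False  # requested movement not seen: possible spoof
    return True
```

Passive liveness, by contrast, would analyze texture, depth, and movement in the incoming frames without ever issuing a challenge to the user.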

Pros and cons

Briefly, the pros and cons of the two methods are as follows:

  • While active liveness detection offers robust protection, requires clear consent, and acts as a deterrent, it is hard to use, complex, and slow.
  • Passive liveness detection offers an enhanced user experience via ease of use and speed and is easier to integrate with other solutions, but it incorporates privacy concerns (passive liveness detection can be implemented without the user’s knowledge) and may not be used in high-risk situations.

So in truth the choice is up to each firm. I’ve worked with firms that used both liveness detection methods, and while I’ve spent most of my time with passive implementations, the active ones can work also.

A perfect wishy-washy statement that will get BOTH sides angry at me. (Except perhaps for companies like facia that use both.)

Choice 2: Age estimation, or no age estimation?

Designed by Freepik.

There are a lot of applications for age assurance, or knowing how old a person is. These include smoking tobacco or marijuana, buying firearms, driving a car, drinking alcohol, gambling, viewing adult content, using social media, or buying garden implements.

If you need to know a person’s age, you can ask them. Because people never lie.

Well, maybe they do. There are two better age assurance methods:

  • Age verification, where you obtain a person’s government-issued identity document with a confirmed birthdate, confirm that the identity document truly belongs to the person, and then simply check the date of birth on the identity document and determine whether the person is old enough to access the product or service.
  • Age estimation, where you don’t use a government-issued identity document and instead examine the face and estimate the person’s age.

I changed my mind on age estimation

I’ve gone back and forth on this. As I previously mentioned, my employment history includes time with a firm that produces driver’s licenses for the majority of U.S. states. And back when that firm was providing my paycheck, I was financially incentivized to champion age verification based upon the driver’s licenses that my company (or occasionally some inferior company) produced.

But as age assurance applications moved into other areas such as social media use, a problem arose: 13-year-olds usually don’t have government IDs. A few of them may have passports or other government IDs, but none of them have driver’s licenses.

By Adrian Pingstone – Transferred from en.wikipedia, Public Domain, https://commons.wikimedia.org/w/index.php?curid=112727.

Pros and cons

But does age estimation work? I’m not sure if ANYONE has posted a non-biased view, so I’ll try to do so myself.

  • The pros of age estimation include its applicability to all ages including young people, its protection of privacy since it requires no information about the individual identity, and its ease of use since you don’t have to dig for your physical driver’s license or your mobile driver’s license—your face is already there.
  • The huge con of age estimation is that it is by definition an estimate. If I show a bartender my driver’s license before buying a beer, they will know whether I am 20 years and 364 days old and ineligible to purchase alcohol, or whether I am 21 years and 0 days old and eligible. Estimates aren’t that precise.

How precise is age estimation? We’ll find out soon, once NIST releases the results of its Face Analysis Technology Evaluation (FATE) Age Estimation & Verification test. The release of results is expected in early May.

Choice 3: Is voice an acceptable biometric modality?

From Sandeep Kumar, A. Sony, Rahul Hooda, Yashpal Singh, in Journal of Advances and Scholarly Researches in Allied Education | Multidisciplinary Academic Research, “Multimodal Biometric Authentication System for Automatic Certificate Generation.”

Fingerprints, palm prints, faces, irises, and everything up to gait. (And behavioral biometrics.) There are a lot of biometric modalities out there, and one that has been around for years is the voice biometric.

I’ve discussed this topic before, and the partial title of the post (“We’ll Survive Voice Spoofing”) gives away how I feel about the matter, but I’ll present both sides of the issue.

White House photo by Kimberlee Hewitt – whitehouse.gov, President George W. Bush and comedian Steve Bridges, Public Domain, https://commons.wikimedia.org/w/index.php?curid=3052515

No one can deny that voice spoofing exists and is effective, but many of the examples cited by the popular press are cases in which a HUMAN (rather than an ALGORITHM) was fooled by a deepfake voice. But voice recognition software can also be fooled.

(Incidentally, there is a difference between voice recognition and speech recognition. Voice recognition attempts to determine who a person is. Speech recognition attempts to determine what a person says.)

Finally facing my Waterloo

Take a study from the University of Waterloo, summarized here, that proclaims: “Computer scientists at the University of Waterloo have discovered a method of attack that can successfully bypass voice authentication security systems with up to a 99% success rate after only six tries.”

If you re-read that sentence, you will notice that it includes the words “up to.” Those words are significant if you actually read the article.

In a recent test against Amazon Connect’s voice authentication system, they achieved a 10 per cent success rate in one four-second attack, with this rate rising to over 40 per cent in less than thirty seconds. With some of the less sophisticated voice authentication systems they targeted, they achieved a 99 per cent success rate after six attempts.

Other voice spoofing studies

Similar to Gender Shades, the University of Waterloo study does not appear to have tested hundreds of voice recognition algorithms. But there are other studies.

  • The 2021 NIST Speaker Recognition Evaluation (PDF here) tested results from 15 teams, but this test was not specific to spoofing.
  • A test that was specific to spoofing was the ASVspoof 2021 test with 54 team participants, but the ASVspoof 2021 results are only accessible in abstract form, with no detailed results.
  • Another test, this one with results, is the SASV2022 challenge, with 23 valid submissions. Here are the top 10 performers and their error rates.

You’ll note that the top performers don’t have error rates anywhere near the University of Waterloo’s 99 percent.

So some firms will argue that voice recognition can be spoofed and thus cannot be trusted, while other firms will argue that the best voice recognition algorithms are rarely fooled.

What does this mean for your company?

Obviously, different firms are going to respond to the three questions above in different ways.

  • For example, a firm that offers face biometrics but not voice biometrics will convey how voice is not a secure modality due to the ease of spoofing. “Do you want to lose tens of millions of dollars?”
  • A firm that offers voice biometrics but not face biometrics will emphasize its spoof detection capabilities (and cast shade on face spoofing). “We tested our algorithm against that voice fake that was in the news, and we detected the voice as a deepfake!”

There is no universal truth here, and the message your firm conveys depends upon your firm’s unique characteristics.

And those characteristics can change.

  • Once when I was working for a client, this firm had made a particular choice with one of these three questions. Therefore, when I was writing for the client, I wrote in a way that argued the client’s position.
  • After I stopped working for this particular client, the client’s position changed and the firm adopted the opposite view of the question.
  • Therefore I had to message the client and say, “Hey, remember that piece I wrote for you that said this? Well, you’d better edit it, now that you’ve changed your mind on the question…”

Bear this in mind as you create your blog, white paper, case study, or other identity/biometric content, or have someone like the biometric content marketing expert Bredemarket work with you to create your content. There are people who sincerely hold the opposite belief of your firm…but your firm needs to argue that those people are, um, misinformed.

And as a postscript I’ll provide two videos that feature voices. The first is for those who detected my reference to the ABBA song “Waterloo.”

From https://www.youtube.com/watch?v=4XJBNJ2wq0Y.

The second features the late Steve Bridges as President George W. Bush at the White House Correspondents Dinner.

From https://www.youtube.com/watch?v=u5DpKjlgoP4.

Does Your Gardening Implement Company Require Age Assurance?

Age assurance shows that a customer meets the minimum age for buying a product or service.

I thought I knew every possible use case for age assurance—smoking tobacco or marijuana, buying firearms, driving a car, drinking alcohol, gambling, viewing adult content, or using social media.

But after investigating a product featured in Cultivated Cool, I realized that I had missed one use case. Turns out that there’s another type of company that needs age assurance…and a way to explain the age assurance method the company adopts.

Off on a tangent: what is Cultivated Cool?

Psst…don’t tell anyone what you’re about to read.

The so-called experts say that a piece of content should only have one topic and one call to action. Well, it’s Sunday so hopefully the so-called experts are taking a break and will never see the paragraphs below.

This is my endorsement for Cultivated Cool. Its URL is https://cultivated.cool/, which I hope you can remember.

Cultivated Cool self-identifies as “(y)our weekly guide to the newest, coolest products you didn’t know you needed.” Concentrating on the direct-to-consumer (DTC or D2C) space, Cultivated Cool works with companies to “transform (their) email marketing from a chore into a revenue generator.” And to prove the effectiveness of email, it offers its own weekly email that highlights various eye-catching products. But not trendy ones:

Trends come and go but cool never goes out of style.

From https://cultivated.cool/.

Bredemarket isn’t a prospect for Cultivated Cool’s first service—my written content creation is not continuously cool. (Although it’s definitely not trendy either). But I am a consumer of Cultivated Cool’s weekly emails, and you should subscribe to its weekly emails also. Enter your email and click the “Subscribe” button on Cultivated Cool’s webpage.

And Cultivated Cool’s weekly emails lead me to the point of this post.

The day that Stella sculpted air

Today’s weekly newsletter issue from Cultivated Cool is entitled “Dig It.” But this has nothing to do with the Beatles or with Abba. Instead it has to do with gardening, and the issue tells the story of Stella, in five parts. The first part is entitled “Snip it in the Bud,” and begins as follows.

Stella felt a shiver go down her spine the first time the pruner blades closed. She wasn’t just cutting branches; she was sculpting air.

From https://cultivated.cool/dig-it/.

The pruner blades featured in Cultivated Cool are sold by Niwaki, an English company that offers Japanese-inspired products. As I type this, Niwaki offers 18 different types of secateurs (pruning shears), including large hand, small hand, right-handed, and left-handed varieties. You won’t get these at your dollar store; prices (excluding VAT) range from US$45.50 to US$280.50 (Tobisho Hiryu Secateurs).

Stella, how old are you?

But regardless of price, all the secateurs sold by Niwaki have one thing in common: an age restriction on purchases. Not that Niwaki truly enforces this restriction.

Please note: By law, we are not permitted to sell a knife or blade to any person under the age of 18. By placing an order for one of these items you are declaring that you are 18 years of age or over. These items must be used responsibly and appropriately.

From https://www.niwaki.com/tobisho-hiryu-secateurs/#P00313-1.

That’s the functional equivalent of the so-called age verification scheme used on some alcohol websites.

I hope you’re sitting down as I reveal this to you: underage people can bypass the age assurance scheme on alcohol websites by inputting any year of birth that they wish. Just like anyone, even a small child, can make any declaration of age that they want, as long as their credit card is valid.

By Adrian Pingstone – Transferred from en.wikipedia, Public Domain, https://commons.wikimedia.org/w/index.php?curid=112727.

Now I have no idea whether Ofcom’s UK Online Safety Act consultations will eventually govern Niwaki’s sales of adult-controlled physical products. But if Niwaki finds itself under the UK Online Safety Act, or some other act in the United Kingdom or any country where Niwaki conducts business, then a simple assurance that the purchaser is old enough to buy “a knife or blade” will not be sufficient.

Niwaki’s website would then need to adopt some form of age assurance for purchasers, either by using a government-issued identification document (age verification) or examining the face to algorithmically surmise the customer’s age (age estimation).

  • Age verification. For example, the purchaser would need to provide their government-issued identity document so that the seller can verify the purchaser’s age. Ideally, this would be coupled with live face capture so that the seller can compare the live face to the face on the ID, ensuring that a kid didn’t steal mommy’s or daddy’s driver’s license (licence) or passport.
  • Age estimation. For example, the purchaser would need to provide their live face so that the seller can estimate the purchaser’s age. In this case (and in the age verification case if a live face is captured), the seller would need to use liveness detection to ensure that the face is truly a live face and is not a presentation attack or other deepfake.

And then the seller would need to explain why it was doing all of this.

How can a company explain its age assurance solution in a way that its prospects will understand…and how can the company reassure its prospects that its age assurance method protects their privacy?

Companies other than identity companies must explain their identity solutions

Which brings me to the TRUE call to action in this post. (Sorry Mark and Lindsey. You’re still cool.)

I’ve stated ad nauseum that identity companies need to explain their identity solutions: why they developed them, how they work, what they do, and several other things.

In the same way, firms that incorporate solutions from identity companies have some ’splainin’ to do.

This applies to a financial institution that requires customers to use an identity verification solution before opening an account, just like it applies to an online gardening implement website that uses an age assurance method to check the age of pruning shear purchasers.

So how can such companies explain their identity and biometrics features in a way their end customers can understand?

Bredemarket can help.

Why Knowledge-Based Authentication Fails at Authentication

In a recent project for a Bredemarket client, I researched how a particular group of organizations identified their online customers. Their authentication methods fell into two categories. One of these methods was much better than the other.

Multifactor authentication

Some of the organizations employed robust authentication procedures that included more than one of the five authentication factors—something you know, something you have, something you are, something you do, and/or somewhere you are.

For example, an organization may require you to authenticate with biometric data, a government-issued identification document, and sometimes some additional textual or location data.

Knowledge-based authentication

Other organizations employed only one of the factors, something you know.

  • Not something as easy to crack as a password.
  • Instead they used the supposedly robust authentication method of “knowledge-based authentication,” or KBA.

The theory behind KBA is that if you ask multiple questions of a person based upon data from various authoritative databases, the chance of a fraudster knowing ALL of this data is minimal.
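That theory can be put in numbers. Here is a minimal sketch, with illustrative probabilities and an independence assumption that rarely holds once the underlying data has been breached:

```python
def guess_pass_probability(num_questions: int, choices_per_question: int = 4) -> float:
    """Chance that a pure guesser answers every multiple-choice question correctly."""
    return (1.0 / choices_per_question) ** num_questions

def informed_pass_probability(num_questions: int, p_known: float) -> float:
    """Chance that a fraudster armed with breached or scraped data passes,
    where p_known is the probability they know any one answer."""
    return p_known ** num_questions
```

Three four-choice questions stop a pure guesser about 98.4% of the time (1 − 1/64), but a fraudster who knows each answer with 90% probability still passes roughly 73% of the time. KBA’s math only works while the “knowledge” stays secret, and data breaches ensure that it doesn’t.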

From Alloy, “Why knowledge-based authentication (KBA) is not effective,” https://www.alloy.com/blog/answering-my-own-authentication-questions-prove-that-theyre-useless.

Steve Craig found out the hard way that KBA is not infallible.

The hotel loyalty hack

Steve Craig is the Founder and CEO of PEAK IDV, a company dedicated to educating individuals on identity verification and fraud prevention.

From PEAK IDV, https://www.peakidv.com/.

Sadly, Craig himself was recently a victim of fraud, and it took him several hours to resolve the issue.

I’m not going to repeat all of Craig’s story, which you can read in his LinkedIn post. But I do want to highlight one detail.

  • When the fraudster took over Craig’s travel-related account, the hotel used KBA to confirm that the fraudster truly was Steve Craig, specifically asking “when and where was your last hotel stay?”
  • Only one problem: the “last hotel stay” was one from the fraudster, NOT from Craig. The scammer fraudulently associated their hotel stay with Craig’s account.
  • This spurious “last hotel stay” allowed the fraudster to not only answer the “last hotel stay” question correctly, but also to take over Craig’s entire account, including all of Craig’s loyalty points.

And with that one piece of knowledge, Craig’s account was breached.

The “knowledge” used by knowledge based authentication

Craig isn’t the only one who can confirm that KBA by itself doesn’t work. I’ve already shared an image from an Alloy article demonstrating the failures of KBA, and there are many similar articles out there.

The biggest drawback of KBA is that its core assumption is false: it is not true that ONLY the genuine person can answer all the knowledge questions correctly. All you have to do is participate in one of those never-ending Facebook memes that tell you something based on your birthday or your favorite pet. Don’t do it.

Why do organizations use KBA?

So why do organizations continue to use KBA as their preferred authentication method? Fraud.com lists several attractive, um, factors:

  • Ease of implementation. It’s easier to implement KBA than it is to implement biometric authentication and/or ID card-based authentication.
  • Ease of use. It’s easier to click on answers to multiple choice questions than it is to capture an ID card, fingerprint, or face. (Especially if active liveness detection is used.)
  • Ease of remembrance. As many of us can testify, it’s hard to remember which password is associated with a particular website. With KBA, you merely have to answer a multiple choice quiz, using information that you already know (at least in theory).

Let me add one more:

  • Presumed protection of personally identifiable information (PII). Uploading your face, fingerprint, or driver’s license to a mysterious system seems scary. It APPEARS to be a lot safer to just answer some questions.

But in my view, the risks that someone else can get all this information (or create spurious information) and use it to access your account outweigh the benefits listed above. Even Fraud.com, which lists the advantages of KBA, warns about the risks and recommends coupling KBA with some other authentication method.

But KBA isn’t the only risky authentication factor out there

We already know that passwords can be hacked. And by now we should realize that KBA could be hacked.

But frankly, ANY single authentication factor can be hacked.

  • After Steve Craig resolved his fraud issue, he asked the hotel how it would prevent fraud in the future. The hotel responded that it would use caller ID on phone calls made to the hotel. Wrong answer.
  • While the biometric vendors are improving their algorithms to detect deepfakes, no one can offer 100% assurance that even the best biometric algorithms can prevent all deepfake attempts. And people don’t even bother to use biometric algorithms if the people on the Zoom call LOOK real.
  • While the ID card analysis vendors (and the ID card manufacturers themselves) are constantly improving their ability to detect fraudulent documents, no one can offer 100% assurance that a presented driver’s license is truly a driver’s license.
  • Geolocation has been touted as a solution by some. But geolocation can be hacked also.

In my view, the best way to minimize (not eliminate) fraudulent authentication is to employ multiple factors. While someone could create a fake face, or a fake driver’s license, or a fake location, the chances of someone faking ALL these factors are much lower than the chances of someone faking a single factor.
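Under an independence assumption, and with made-up success rates used purely for illustration, the arithmetic looks like this:

```python
def multi_factor_spoof_probability(per_factor_success_rates):
    """Chance of faking EVERY factor, assuming the attacks are independent.

    per_factor_success_rates: probability that each individual spoof succeeds
    (hypothetical figures, not measured rates for any real product).
    """
    probability = 1.0
    for rate in per_factor_success_rates:
        probability *= rate
    return probability

# Hypothetical rates: face spoof 10%, fake document 5%, spoofed location 20%.
# Any single factor alone leaves a 5-20% hole; all three together: 0.1%.
```

Real attacks aren’t perfectly independent (a well-funded fraud ring may be good at several spoofs at once), but the multiplication still illustrates why stacking factors shrinks the attacker’s odds so dramatically.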

You knew the pitch was coming, didn’t you?

If your company has a story to tell about how your authentication processes beat all others, I can help.

Identification Perfection is Impossible

(Part of the biometric product marketing expert series)

There are many different types of perfection.

Jehan Cauvin (we don’t spell his name like he spelled it). By Titian – Bridgeman Art Library: Object 80411, Public Domain, https://commons.wikimedia.org/w/index.php?curid=6016067

This post concentrates on IDENTIFICATION perfection, or the ability to enjoy zero errors when identifying individuals.

The risk of claiming identification perfection (or any perfection) is that a SINGLE counter-example disproves the claim.

  • If you assert that your biometric solution offers 100% accuracy, a SINGLE false positive or false negative shatters the assertion.
  • If you claim that your presentation attack detection solution exposes deepfakes (face, voice, or other), then a SINGLE deepfake that gets past your solution disproves your claim.
  • And as for the pre-2009 claim that latent fingerprint examiners never make a mistake in an identification…well, ask Brandon Mayfield about that one.

In fact, I go so far as to avoid using the phrase “no two fingerprints are alike.” Many years ago (before 2009) in an International Association for Identification meeting, I heard someone justify the claim by saying, “We haven’t found a counter-example yet.” That doesn’t mean that we’ll NEVER find one.

You’ve probably heard me tell the story before about how I misspelled the word “quality.”

In a process improvement document.

While employed by Motorola (pre-split).

At first glance, it appears that Motorola would be the last place to make a boneheaded mistake like that. After all, Motorola is known for its focus on quality.

But in actuality, Motorola was the perfect place to make such a mistake, since it was one of the champions of the “Six Sigma” philosophy (which targets a maximum of 3.4 defects per million opportunities). Motorola realized that manufacturing perfection is impossible, so manufacturers (and the people in Motorola’s weird Biometric Business Unit) should instead concentrate on reducing the error rate as much as possible.

So one misspelling could be tolerated, but I shudder to think what would have happened if I had misspelled “quality” a second time.

ICYMI: Voice Spoofing

In case you missed it…

But are computerized systems any better, and can they detect spoofed voices?

Well, in the same way that fingerprint readers worked to overcome gummy bears, voice readers are working to overcome deepfake voices.

This is only the beginning of the war against voice spoofing. Other companies will pioneer new advances that will tell the real voices from the fake ones.

As for independent testing:

For the rest of the story, see “We Survived Gummy Fingers. We’re Surviving Facial Recognition Inaccuracy. We’ll Survive Voice Spoofing.”

(Bredemarket email, meeting, contact, subscribe)

We Survived Gummy Fingers. We’re Surviving Facial Recognition Inaccuracy. We’ll Survive Voice Spoofing.

(Part of the biometric product marketing expert series)

Some of you are probably going to get into an automobile today.

Are you insane?

The National Highway Traffic Safety Administration has released its latest projections for traffic fatalities in 2022, estimating that 42,795 people died in motor vehicle traffic crashes.

From https://www.nhtsa.gov/press-releases/traffic-crash-death-estimates-2022

When you have tens of thousands of people dying, then the only conscionable response is to ban automobiles altogether. Any other action or inaction is completely irresponsible.

After all, you can ask the experts who want us to ban biometrics because they can be spoofed and are racist, and who therefore conclude that we shouldn’t use biometrics at all.

I disagree with the calls to ban biometrics, and I’ll go through three “biometrics are bad” examples and say why banning biometrics is NOT justified.

  • Even some identity professionals may not know about the old “gummy fingers” story from 20+ years ago.
  • And yes, I know that I’ve talked about Gender Shades ad nauseam, but it bears repeating again.
  • And voice deepfakes are always a good topic to discuss in our AI-obsessed world.

Example 1: Gummy fingers

My recent post “Why Apple Vision Pro Is a Technological Biometric Advance, but Not a Revolutionary Biometric Event” included the following sentence:

But the iris security was breached by a “dummy eye” just a month later, in the same way that gummy fingers and face masks have defeated other biometric technologies.

From https://bredemarket.com/2023/06/12/vision-pro-not-revolutionary-biometrics-event/

A biometrics industry colleague noticed the rhyming words “dummy” and “gummy” and wondered if the latter was a typo. It turns out it wasn’t.

To my knowledge, these gummy fingers do NOT have ridges. From https://www.candynation.com/gummy-fingers

Back in 2002, researcher Tsutomu Matsumoto used “gummy bears” gelatin to create a fake finger that fooled a fingerprint reader.

At the time, this news WAS really “scary,” since it suggested that you could access a fingerprint-protected system with something that wasn’t a finger. Gelatin. A piece of metal. A photograph.

Except that the fingerprint reader world didn’t stand still after 2002, and the industry developed ways to detect spoofed fingers. Here’s a recent example of presentation attack detection (liveness detection) from TECH5:

TECH5 participated in the 2023 LivDet Non-contact Fingerprint competition to evaluate its latest NN-based fingerprint liveness detection algorithm and has achieved first and second ranks in the “Systems” category for both single- and four-fingerprint liveness detection algorithms respectively. Both submissions achieved the lowest error rates on bonafide (live) fingerprints. TECH5 achieved 100% accuracy in detecting complex spoof types such as Ecoflex, Playdoh, wood glue, and latex with its groundbreaking Neural Network model that is only 1.5MB in size, setting a new industry benchmark for both accuracy and efficiency.

From https://tech5.ai/tech5s-mobile-fingerprint-liveness-detection-technology-ranked-the-most-accurate-in-the-market/

TECH5 excelled in detecting fake fingers in “non-contact” reading, where the fingers never touch a surface such as an optical platen. That’s appreciably harder than detecting fake fingers presented to contact devices.
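Liveness results like these are conventionally reported using the ISO/IEC 30107-3 presentation attack detection error rates: APCER (the share of spoof presentations wrongly accepted as live) and BPCER (the share of bona fide presentations wrongly rejected). Here’s a minimal sketch of those two metrics, with made-up scores rather than any vendor’s actual data:

```python
def pad_error_rates(scores, labels, threshold):
    """Compute APCER and BPCER for a liveness detector.

    scores: liveness scores (higher = more likely a live finger)
    labels: True for bona fide (live) presentations, False for spoofs
    threshold: presentations scoring >= threshold are classified as live
    """
    spoofs = [s for s, live in zip(scores, labels) if not live]
    bonafides = [s for s, live in zip(scores, labels) if live]
    # APCER: attack presentations misclassified as bona fide
    apcer = sum(s >= threshold for s in spoofs) / len(spoofs)
    # BPCER: bona fide presentations misclassified as attacks
    bpcer = sum(s < threshold for s in bonafides) / len(bonafides)
    return apcer, bpcer

apcer, bpcer = pad_error_rates(
    scores=[0.95, 0.90, 0.20, 0.62, 0.88, 0.10],
    labels=[True, True, False, False, True, False],
    threshold=0.5,
)
print(f"APCER={apcer:.2f} BPCER={bpcer:.2f}")  # APCER=0.33 BPCER=0.00
```

“Lowest error rates on bonafide (live) fingerprints,” as in the TECH5 quote above, refers to the BPCER side of this tradeoff.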

I should note that LivDet is an independent assessment. As I’ve said before, independent technology assessments provide some guidance on the accuracy and performance of technologies.

So gummy fingers and future threats can be addressed as they arrive.

But at least gummy fingers aren’t racist.

Example 2: Gender shades

In 2017-2018, the Algorithmic Justice League set out to answer this question:

How well do IBM, Microsoft, and Face++ AI services guess the gender of a face?

From http://gendershades.org/. Yes, that’s “http,” not “https.” But I digress.

Let’s stop right there for a moment and address two items before we continue. Trust me; it’s important.

  1. This study evaluated only three algorithms: one from IBM, one from Microsoft, and one from Face++. It did not evaluate the hundreds of other facial recognition algorithms that existed in 2018 when the study was released.
  2. The study focused on gender classification and race classification. Back in those primitive innocent days of 2018, the world assumed that you could look at a person and tell whether the person was male or female, or tell the race of a person. (The phrase “self-identity” had not yet become popular, despite the Rachel Dolezal episode which happened before the Gender Shades study). Most importantly, the study did not address identification of individuals at all.

However, the study did find something:

While the companies appear to have relatively high accuracy overall, there are notable differences in the error rates between different groups. Let’s explore.

All companies perform better on males than females with an 8.1% – 20.6% difference in error rates.

All companies perform better on lighter subjects as a whole than on darker subjects as a whole with an 11.8% – 19.2% difference in error rates.

When we analyze the results by intersectional subgroups – darker males, darker females, lighter males, lighter females – we see that all companies perform worst on darker females.

From http://gendershades.org/overview.html

What does this mean? It means that if you are using one of these three algorithms solely for the purpose of determining a person’s gender and race, some results are more accurate than others.
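The Gender Shades comparison reduces to simple arithmetic: compute an error rate per subgroup, then look at the spread between the best and worst subgroups. A sketch with invented numbers (these are NOT the study’s published figures):

```python
def error_rate(errors: int, trials: int) -> float:
    return errors / trials

# Hypothetical gender-classification results per intersectional subgroup,
# invented for illustration only
results = {
    "lighter male":   (5, 1000),
    "lighter female": (70, 1000),
    "darker male":    (60, 1000),
    "darker female":  (210, 1000),
}
rates = {group: error_rate(e, n) for group, (e, n) in results.items()}
spread = max(rates.values()) - min(rates.values())
print(f"spread between best and worst subgroup: {spread:.1%}")  # 20.5%
```

Note that nothing in this arithmetic says anything about identification accuracy; it only measures how a classification error rate varies across subgroups.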

Results from three algorithms do not predict the behavior of hundreds of algorithms, and classification is not identification. If you’re interested in more information on the differences between classification and identification, see Bredemarket’s November 2021 submission to the Department of Homeland Security. (Excerpt here.)

And all the stories about people such as Robert Williams being wrongfully arrested based upon faulty facial recognition results have nothing to do with Gender Shades. I’ll address this briefly (for once):

  • In the United States, facial recognition identification results should only be used by the police as an investigative lead, and no one should be arrested solely on the basis of facial recognition. (The city of Detroit stated that Williams’ arrest resulted from “sloppy” detective work.)
  • If you are using facial recognition for criminal investigations, your people had better have forensic face training. (Then they would know, as Detroit investigators apparently didn’t know, that the quality of surveillance footage is important.)
  • If you’re going to ban computerized facial recognition (even when only used as an investigative lead, and even when only used by properly trained individuals), consider the alternative of human witness identification. Or witness misidentification. Roeling Adams, Reggie Cole, Jason Kindle, Adam Riojas, Timothy Atkins, Uriah Courtney, Jason Rivera, Vondell Lewis, Guy Miles, Luis Vargas, and Rafael Madrigal can tell you how inaccurate (and racist) human facial recognition can be. See my LinkedIn article “Don’t ban facial recognition.”

Obviously, facial recognition has been the subject of independent assessments, including continuous demographic (bias) testing by the National Institute of Standards and Technology as part of its Face Recognition Vendor Test (FRVT), specifically within the 1:1 verification testing. And NIST has measured the demographic differentials of hundreds of algorithms, not just three.

In fact, people who were calling for facial recognition to be banned just a few years ago are now questioning the wisdom of those decisions.

But those days were quaint. Men were men, women were women, and artificial intelligence was science fiction.

The latter has certainly changed.

Example 3: Voice spoofs

Perhaps it’s an exaggeration to say that recent artificial intelligence advances will change the world. Perhaps it isn’t. Personally I’ve been concentrating on whether AI writing can adopt the correct tone of voice, but what if we take the words “tone of voice” literally? Let’s listen to President Richard Nixon:

From https://www.youtube.com/watch?v=2rkQn-43ixs

Richard Nixon never spoke those words in public, although it’s possible that he may have rehearsed William Safire’s speech, composed in case Apollo 11 had not resulted in one giant leap for mankind. As noted in the video, Nixon’s voice and appearance were spoofed using artificial intelligence to create a “deepfake.”

It’s one thing to alter the historical record. It’s another thing altogether when a fraudster spoofs YOUR voice and takes money out of YOUR bank account. By definition, you will take that personally.

In early 2020, a branch manager of a Japanese company in Hong Kong received a call from a man whose voice he recognized—the director of his parent business. The director had good news: the company was about to make an acquisition, so he needed to authorize some transfers to the tune of $35 million. A lawyer named Martin Zelner had been hired to coordinate the procedures and the branch manager could see in his inbox emails from the director and Zelner, confirming what money needed to move where. The manager, believing everything appeared legitimate, began making the transfers.

What he didn’t know was that he’d been duped as part of an elaborate swindle, one in which fraudsters had used “deep voice” technology to clone the director’s speech…

From https://www.forbes.com/sites/thomasbrewster/2021/10/14/huge-bank-fraud-uses-deep-fake-voice-tech-to-steal-millions/?sh=8e8417775591

Now I’ll grant that this is an example of human voice verification, which can be as inaccurate as the previously referenced human witness misidentification. But are computerized systems any better, and can they detect spoofed voices?

Well, in the same way that fingerprint readers evolved to overcome gummy fingers, voice systems are working to overcome deepfake voices. Here’s what one company, ID R&D, is doing to combat voice spoofing:

IDVoice Verified combines ID R&D’s core voice verification biometric engine, IDVoice, with our passive voice liveness detection, IDLive Voice, to create a high-performance solution for strong authentication, fraud prevention, and anti-spoofing verification.

Anti-spoofing verification technology is a critical component in voice biometric authentication for fraud prevention services. Before determining a match, IDVoice Verified ensures that the voice presented is not a recording.

From https://www.idrnd.ai/idvoice-verified-voice-biometrics-and-anti-spoofing/
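The design ID R&D describes, a liveness check that gates the biometric match, can be sketched as a simple two-stage pipeline. The function names, thresholds, and data shapes below are my own illustration, not ID R&D’s actual API:

```python
from dataclasses import dataclass

@dataclass
class VoiceDecision:
    live: bool
    matched: bool

def verify_voice(sample, enrolled_template,
                 liveness_score_fn, match_score_fn,
                 liveness_threshold=0.5, match_threshold=0.7) -> VoiceDecision:
    """Two-stage check: reject recordings/deepfakes before running the matcher."""
    if liveness_score_fn(sample) < liveness_threshold:
        # Likely a replay or synthesized voice; never reaches the matcher
        return VoiceDecision(live=False, matched=False)
    is_match = match_score_fn(sample, enrolled_template) >= match_threshold
    return VoiceDecision(live=True, matched=is_match)

# Illustrative usage with dummy scoring functions
decision = verify_voice("audio-sample", "enrolled-template",
                        liveness_score_fn=lambda s: 0.1,
                        match_score_fn=lambda s, t: 0.99)
print(decision)  # VoiceDecision(live=False, matched=False)
```

The key design point, per the quote above: a perfect voiceprint match means nothing if the voice presented is a recording, so liveness runs first.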

This is only the beginning of the war against voice spoofing. Other companies will pioneer new advances that will tell the real voices from the fake ones.


A final thought

Yes, fraudsters can use advanced tools to do bad things.

But the people who battle fraudsters can also use advanced tools to defeat the fraudsters.

Take care of yourself, and each other.

Jerry Springer. By Justin Hoch, CC BY 2.0, https://commons.wikimedia.org/w/index.php?curid=16673259

I may be a fraudster!

I’ve previously contacted a journalist via Help a Reporter Out (HARO), and I occasionally pitch to journalists on the service. In fact, I submitted a new pitch earlier this month.

So I noted with interest this story of how fraudsters fool Help a Reporter Out pitch recipients with synthetic or otherwise fraudulent identities.

When a reporter is writing a story that requires a source that he or she does not have, that reporter will likely turn to HARO, a service that “connects journalists seeking expertise to include in their content with sources who have that expertise.”…

Now, shady SEOs hide behind fake photos and personalities. The latest black hat search-engine optimization trend is to respond to Help-a-Reporter-Out (HARO) inquiries pretending to be a person of whichever gender/ethnicity the journalist is seeking comment from.

From https://www.johnwdefeo.com/articles/deepfakes-are-ruining-the-internet

As it turns out, I have never responded to a pitch that specifically requested comments from white males. (Probably because if a pitch DOESN’T request gender/ethnicity information, chances are that the respondent will be a white male.) But it’s clear how a HARO pitch scammer could create a synthesized identity of a biometric proposal writing expert.

So if you’re asking your source for a picture, John W. Defeo suggests that you ask for TWO pictures. I think that the technical term for this is MPA, or Multi Photo Authentication.

There’s one other suggestion.

Take those photographs and plug them into a reverse image lookup service like Tineye (or even Google Images). Have they appeared on the web before? Does the context make sense?

From https://www.johnwdefeo.com/articles/deepfakes-are-ruining-the-internet
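A programmatic cousin of this check is to compare perceptual hashes of the two photos: near-identical images hash to values only a few bits apart, while unrelated images differ widely. This is my own sketch of the comparison step, not anything TinEye or Defeo publishes, and it assumes you already have equal-length hex hashes from a tool such as the `imagehash` library:

```python
def hamming_distance(hash_a: str, hash_b: str) -> int:
    """Count differing bits between two equal-length hex perceptual hashes."""
    if len(hash_a) != len(hash_b):
        raise ValueError("hashes must be the same length")
    return bin(int(hash_a, 16) ^ int(hash_b, 16)).count("1")

def probably_same_image(hash_a: str, hash_b: str, max_bits: int = 5) -> bool:
    # A small bit distance suggests the same photo, possibly resized
    # or recompressed; the threshold of 5 bits is an assumption.
    return hamming_distance(hash_a, hash_b) <= max_bits

# Hypothetical hashes of a profile photo and a slightly recompressed copy
print(probably_same_image("ffd8e0c0a0908880", "ffd8e0c0a0968880"))  # True
```

A check like this can flag a reused or lightly altered photo even when a reverse image search, as I found with TinEye below, comes up empty.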

I often use the picture that is found on my jebredcal Twitter profile.

So I plugged that into a Google reverse image search. As expected, it hit on Twitter, but it also hit on some other social media platforms such as LinkedIn.

I hadn’t heard of TinEye before, so I figured I’d give it a shot. Here’s what TinEye found:

Very odd, since, as I previously mentioned, this particular image is available on Twitter, LinkedIn, and other sources. But it turns out that TinEye honors requests from social media services NOT to crawl their sites. (No comment.) And TinEye apparently hasn’t crawled the relevant page on bredemarket.com yet.

Which leads to the scary thought – what if someone searched TinEye for me, and didn’t bother to search anywhere else after getting 0 results? Would the searcher conclude that I was a synthetically-generated biobot?

Wow, talk about identity concerns…

“Who Are You” by The Who. Fair use, https://en.wikipedia.org/w/index.php?curid=11316153