Positioning, Messaging, and Your Facial Recognition Product Marketing

(Part of the biometric product marketing expert series)

By Jack Ver at Dutch Wikipedia (original) and Ponor (vector), based on Plaatsvector.png by Jack Ver at Dutch Wikipedia. CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=95477901.

When marketing your facial recognition product (or any product), you need to pay attention to your positioning and messaging. This includes developing the answers to why, how, and what questions. But your positioning and your resulting messaging are deeply influenced by the characteristics of your product.

If facial recognition is your only modality

There are hundreds of facial recognition products on the market that are used for identity verification, authentication, crime solving (but ONLY as an investigative lead), and other purposes.

Some of these solutions ONLY use face as a biometric modality. Others use additional biometric modalities.

From Sandeep Kumar, A. Sony, Rahul Hooda, Yashpal Singh, in Journal of Advances and Scholarly Researches in Allied Education | Multidisciplinary Academic Research, “Multimodal Biometric Authentication System for Automatic Certificate Generation.”

Your positioning depends upon whether your solution only uses face, or uses other factors such as voice.

Of course, if you initially only offer a face solution and then offer a second biometric, you’ll have to rewrite all your material. “You know how we said that face is great? Well, face and gait are even greater!”

If biometrics is your only factor

It’s no secret that I am NOT a fan of the “passwords are dead” movement.

Too many of the tombstones are labeled “12345.” By GreatBernard – Own work, CC0, https://commons.wikimedia.org/w/index.php?curid=116933238.

It seems that many of the people awaiting the long-delayed death of the password think that biometrics is the magic solution that will completely replace passwords.

For this reason, your company might have decided to use biometrics as your sole factor of identity verification and authentication.

Or perhaps your company took a different approach, and believes that multiple factors—perhaps all five factors—are required to truly verify and/or authenticate an individual. Use some combination of biometrics, secure documents such as driver’s licenses, geolocation, “something you do” such as a particular swiping pattern, and even (horrors!) knowledge-based authentication such as passwords or PINs.

This naturally shapes your positioning and messaging.

  • The single factor companies will argue that their approach is very fast, very secure, and completely frictionless. (Sound familiar?) No need to drag out your passport or your key fob, or to turn off your VPN to accurately indicate your location. Biometrics does it all!
  • The multiple factor companies will argue that ANY single factor can be spoofed, but that it is much, much harder to spoof multiple factors at once. (Sound familiar?)

So position yourself however you need to position yourself. Again, be prepared to change if your single factor solution adopts a second factor.

A final thought

Every company has its own way of approaching a problem, and your company is no different. As you prepare to market your products, survey your product, your customers, and your prospects and choose the correct positioning (and messaging) for your own circumstances.

And if you need help with biometric positioning and messaging, feel free to contact the biometric product marketing expert, John E. Bredehoft. (Full-time employment opportunities via LinkedIn, consulting opportunities via Bredemarket.)

In the meantime, take care of yourself, and each other.

Jerry Springer. By Justin Hoch, CC BY 2.0, https://commons.wikimedia.org/w/index.php?curid=16673259.

Defeating Synthetic Identity Fraud

I’ve talked about synthetic identity fraud a lot in the Bredemarket blog over the past several years. I’ll summarize a few examples in this post, talk about how to fight synthetic identity fraud, and wrap up by suggesting how to get the word out about your anti-synthetic identity solution.

But first let’s look at a few examples of synthetic identity.

Synthetic identities pop up everywhere

As far back as December 2020, I discussed Kris Rides’ encounter with a synthetic employee from a company with a number of synthetic employees (many of whom were young women).

More recently, I discussed attempts to create synthetic identities using gummy fingers and fake/fraudulent voices. The topic of deepfakes continues to be hot across all biometric modalities.

I shared a video I created about synthetic identities and their use to create fraudulent financial identities.

From https://www.youtube.com/watch?v=oDrSBlDJVCk.

I even discussed Kelly Shepherd, the fake vegan mom created by HBO executive Casey Bloys to respond to HBO critics.

And that’s just some of what Bredemarket has written about synthetic identity. You can find the complete list of my synthetic identity posts here.

So what? You must fight!

It isn’t enough to talk about the fact that synthetic identities exist: sometimes for innocent reasons, sometimes for outright fraudulent reasons.

You need to communicate how to fight synthetic identities, especially if your firm offers an anti-fraud solution.

Here are four ways to fight synthetic identities:

  1. Checking the purported identity against private databases, such as credit records.
  2. Checking the person’s driver’s license or other government document to ensure it’s real and not a fake.
  3. Checking the purported identity against government databases, such as driver’s license databases. (What if the person presents a real driver’s license, but that license was subsequently revoked?)
  4. Perform a “who you are” biometric test against the purported identity.

If you conduct all four tests, then you have used multiple factors of authentication to confirm that the person is who they say they are. If the identity is synthetic, chances are the purported person will fail at least one of these tests.
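
To make the four checks concrete, here is a minimal Python sketch of such a screening pipeline. The check functions are placeholders for whatever credit bureau, document authentication, government record, and biometric matching services your firm actually uses; none of the names below refer to a real vendor API.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class IdentityClaim:
    full_name: str
    date_of_birth: str
    document_image: bytes
    selfie: bytes

def screen_identity(
    claim: IdentityClaim,
    credit_record_check: Callable[[IdentityClaim], bool],          # 1. private databases
    document_authenticity_check: Callable[[IdentityClaim], bool],  # 2. is the document real?
    government_record_check: Callable[[IdentityClaim], bool],      # 3. is the document still valid?
    biometric_match_check: Callable[[IdentityClaim], bool],        # 4. "who you are" test
) -> bool:
    """Return True only if the claimed identity passes all four tests."""
    checks = (
        credit_record_check(claim),
        document_authenticity_check(claim),
        government_record_check(claim),
        biometric_match_check(claim),
    )
    # A synthetic identity will usually fail at least one of these checks.
    return all(checks)
```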

Do you fight synthetic identity fraud?

If you fight synthetic identity fraud, you should let people know about your solution.

Perhaps you can use Bredemarket, the identity content marketing expert. I work with you (and I have worked with others) to ensure that your content meets your awareness, consideration, and/or conversion goals.

How can I work with you to communicate your firm’s anti-synthetic identity message? For example, I can apply my identity/biometric blog expert knowledge to create an identity blog post for your firm. Blog posts provide an immediate business impact to your firm, and are easy to reshare and repurpose. For B2B needs, LinkedIn articles provide similar benefits.

If Bredemarket can help your firm convey your message about synthetic identity, let’s talk.

Reasonable Minds Vehemently Disagree On Three Biometric Implementation Choices

(Part of the biometric product marketing expert series)

There are a LOT of biometric companies out there.

The Prism Project’s home page at https://www.the-prism-project.com/, illustrating the Biometric Digital Identity Prism as of March 2024. From Acuity Market Intelligence and FindBiometrics.

With over 100 firms in the biometric industry, offerings are naturally going to differ—even if all the firms are TRYING to copy each other and offer “me too” solutions.

Will Ferrell and Chad Smith, or maybe vice versa. Fair use. From https://www.billboard.com/music/music-news/will-ferrell-chad-smith-red-hot-benefit-chili-peppers-6898348/, originally from NBC.

I’ve worked for over a dozen biometric firms as an employee or independent contractor, and I’ve analyzed over 80 biometric firms in competitive intelligence exercises, so I’m well aware of the vast implementation differences between the biometric offerings.

Some of the implementation differences provoke vehement disagreements between biometric firms regarding which choice is correct. Yes, we FIGHT.

MMA stands for Messy Multibiometric Authentication. Public Domain, https://commons.wikimedia.org/w/index.php?curid=607428

Let’s look at three (out of many) of these implementation differences and see how they affect YOUR company’s content marketing efforts—whether you’re engaging in identity blog post writing, or some other content marketing activity.

The three biometric implementation choices

Firms that develop biometric solutions make (or should make) the following choices when implementing their solutions.

  1. Presentation attack detection. Assuming the solution incorporates presentation attack detection (liveness detection), or a way of detecting whether the presented biometric is real or a spoof, the firm must decide whether to use active or passive liveness detection.
  2. Age assurance. When choosing age assurance solutions that determine whether a person is old enough to access a product or service, the firm must decide whether or not age estimation is acceptable.
  3. Biometric modality. Finally, the firm must choose which biometric modalities to support. While there are a number of modality wars involving all the biometric modalities, this post is going to limit itself to the question of whether or not voice biometrics are acceptable.

I will address each of these questions in turn, highlighting the pros and cons of each implementation choice. After that, we’ll see how this affects your firm’s content marketing.

Choice 1: Active or passive liveness detection?

Back in June 2023 I defined what a “presentation attack” is.

(I)nstead of capturing a true biometric from a person, the biometric sensor is fooled into capturing a fake biometric: an artificial finger, a face with a mask on it, or a face on a video screen (rather than a face of a live person).

This tomfoolery is called a “presentation attack” (because you’re attacking security with a fake presentation).

Then I talked about standards and testing.

But the standards folks have developed ISO/IEC 30107-3:2023, Information technology — Biometric presentation attack detection — Part 3: Testing and reporting.

And an organization called iBeta is one of the testing facilities authorized to test in accordance with the standard and to determine whether a biometric reader can detect the “liveness” of a biometric sample.

(Friends, I’m not going to get into passive liveness and active liveness. That’s best saved for another day.)

Well…that day is today.

A balanced assessment

Now I could cite a firm using active liveness detection to say why it’s great, or I could cite a firm using passive liveness detection to say why it’s great. But perhaps the most balanced assessment comes from facia, which offers both types of liveness detection. How does facia define the two types of liveness detection?

Active liveness detection, as the name suggests, requires some sort of activity from the user. If a system is unable to detect liveness, it will ask the user to perform some specific actions such as nodding, blinking or any other facial movement. This allows the system to detect natural movements and separate it from a system trying to mimic a human being….

Passive liveness detection operates discreetly in the background, requiring no explicit action from the user. The system’s artificial intelligence continuously analyses facial movements, depth, texture, and other biometric indicators to detect an individual’s liveness.

Pros and cons

Briefly, the pros and cons of the two methods are as follows:

  • While active liveness detection offers robust protection, requires clear consent, and acts as a deterrent, it is hard to use, complex, and slow.
  • Passive liveness detection offers an enhanced user experience via ease of use and speed and is easier to integrate with other solutions, but it raises privacy concerns (passive liveness detection can be implemented without the user’s knowledge) and may not be acceptable in high-risk situations.

So in truth the choice is up to each firm. I’ve worked with firms that used both liveness detection methods, and while I’ve spent most of my time with passive implementations, the active ones can work also.

A perfect wishy-washy statement that will get BOTH sides angry at me. (Except perhaps for companies like facia that use both.)
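
For what it’s worth, a hybrid flow is easy to sketch: try passive detection first, and escalate to an active challenge only when the passive score is inconclusive. This is a minimal illustration, not any vendor’s SDK; the score functions and the 0.90 threshold are assumptions you would replace with your own.

```python
from typing import Callable, Sequence

def check_liveness(
    frames: Sequence[bytes],
    passive_score: Callable[[Sequence[bytes]], float],   # vendor-supplied passive PAD score
    active_challenge: Callable[[Sequence[bytes]], bool],  # blink/nod/turn-head prompt
    threshold: float = 0.90,                              # assumed operating point
) -> bool:
    """Frictionless when possible, active only when the passive result is weak."""
    if passive_score(frames) >= threshold:
        return True   # the user never notices the check
    return active_challenge(frames)   # fall back to an explicit user action
```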

Choice 2: Age estimation, or no age estimation?

Designed by Freepik.

There are a lot of applications for age assurance, or knowing how old a person is. These include smoking tobacco or marijuana, buying firearms, driving a car, drinking alcohol, gambling, viewing adult content, using social media, or buying garden implements.

If you need to know a person’s age, you can ask them. Because people never lie.

Well, maybe they do. There are two better age assurance methods:

  • Age verification, where you obtain a person’s government-issued identity document with a confirmed birthdate, confirm that the identity document truly belongs to the person, and then simply check the date of birth on the identity document and determine whether the person is old enough to access the product or service.
  • Age estimation, where you don’t use a government-issued identity document and instead examine the face and estimate the person’s age.

I changed my mind on age estimation

I’ve gone back and forth on this. As I previously mentioned, my employment history includes time with a firm that produces driver’s licenses for the majority of U.S. states. And back when that firm was providing my paycheck, I was financially incentivized to champion age verification based upon the driver’s licenses that my company (or occasionally some inferior company) produced.

But as age assurance applications moved into other areas such as social media use, a problem arose: 13-year-olds usually don’t have government IDs. A few of them may have passports or other government IDs, but none of them have driver’s licenses.

By Adrian Pingstone – Transferred from en.wikipedia, Public Domain, https://commons.wikimedia.org/w/index.php?curid=112727.

Pros and cons

But does age estimation work? I’m not sure that ANYONE has posted an unbiased view, so I’ll try to do so myself.

  • The pros of age estimation include its applicability to all ages including young people, its protection of privacy since it requires no information about the individual identity, and its ease of use since you don’t have to dig for your physical driver’s license or your mobile driver’s license—your face is already there.
  • The huge con of age estimation is that it is by definition an estimate. If I show a bartender my driver’s license before buying a beer, they will know whether I am 20 years and 364 days old and ineligible to purchase alcohol, or whether I am 21 years and 0 days old and eligible. Estimates aren’t that precise.

How precise is age estimation? We’ll find out soon, once NIST releases the results of its Face Analysis Technology Evaluation (FATE) Age Estimation & Verification test. The release of results is expected in early May.
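
In the meantime, the difference between the two approaches is easy to see in code. This is only a sketch; the three-year estimation buffer is an assumption I made up for illustration, not a regulatory or vendor recommendation.

```python
from datetime import date

MINIMUM_AGE = 21
ESTIMATION_MARGIN_YEARS = 3   # assumed buffer, for illustration only

def old_enough_by_verification(date_of_birth: date, today: date) -> bool:
    """Age verification: an exact answer from a trusted document's date of birth."""
    age = today.year - date_of_birth.year - (
        (today.month, today.day) < (date_of_birth.month, date_of_birth.day)
    )
    return age >= MINIMUM_AGE

def old_enough_by_estimation(estimated_age: float) -> str:
    """Age estimation: an estimate, so borderline cases need a fallback."""
    if estimated_age >= MINIMUM_AGE + ESTIMATION_MARGIN_YEARS:
        return "allow"
    if estimated_age < MINIMUM_AGE - ESTIMATION_MARGIN_YEARS:
        return "deny"
    return "escalate to a document check"   # the 20-years-and-364-days problem
```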

Choice 3: Is voice an acceptable biometric modality?

From Sandeep Kumar, A. Sony, Rahul Hooda, Yashpal Singh, in Journal of Advances and Scholarly Researches in Allied Education | Multidisciplinary Academic Research, “Multimodal Biometric Authentication System for Automatic Certificate Generation.”

Fingerprints, palm prints, faces, irises, and everything up to gait. (And behavioral biometrics.) There are a lot of biometric modalities out there, and one that has been around for years is the voice biometric.

I’ve discussed this topic before, and the partial title of the post (“We’ll Survive Voice Spoofing”) gives away how I feel about the matter, but I’ll present both sides of the issue.

White House photo by Kimberlee Hewitt – whitehouse.gov, President George W. Bush and comedian Steve Bridges, Public Domain, https://commons.wikimedia.org/w/index.php?curid=3052515

No one can deny that voice spoofing exists and is effective, but many of the examples cited by the popular press are cases in which a HUMAN (rather than an ALGORITHM) was fooled by a deepfake voice. But voice recognition software can also be fooled.

(Incidentally, there is a difference between voice recognition and speech recognition. Voice recognition attempts to determine who a person is. Speech recognition attempts to determine what a person says.)
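
If it helps, here are two hypothetical function signatures that capture the difference; neither corresponds to any real product’s API.

```python
def voice_recognition(audio: bytes, enrolled_voiceprints: dict[str, bytes]) -> str:
    """Voice (speaker) recognition answers WHO is talking: returns a speaker ID."""
    ...

def speech_recognition(audio: bytes) -> str:
    """Speech recognition answers WHAT was said: returns a transcript."""
    ...
```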

Finally facing my Waterloo

Take a study from the University of Waterloo, summarized here, that proclaims: “Computer scientists at the University of Waterloo have discovered a method of attack that can successfully bypass voice authentication security systems with up to a 99% success rate after only six tries.”

If you re-read that sentence, you will notice that it includes the words “up to.” Those words are significant if you actually read the article.

In a recent test against Amazon Connect’s voice authentication system, they achieved a 10 per cent success rate in one four-second attack, with this rate rising to over 40 per cent in less than thirty seconds. With some of the less sophisticated voice authentication systems they targeted, they achieved a 99 per cent success rate after six attempts.

Other voice spoofing studies

Similar to Gender Shades, the University of Waterloo study does not appear to have tested hundreds of voice recognition algorithms. But there are other studies.

  • The 2021 NIST Speaker Recognition Evaluation (PDF here) tested results from 15 teams, but this test was not specific to spoofing.
  • A test that was specific to spoofing was the ASVspoof 2021 test with 54 team participants, but the ASVspoof 2021 results are only accessible in abstract form, with no detailed results.
  • Another test, this one with results, is the SASV2022 challenge, with 23 valid submissions. Here are the top 10 performers and their error rates.

You’ll note that the top performers don’t have error rates anywhere near the University of Waterloo’s 99 percent.

So some firms will argue that voice recognition can be spoofed and thus cannot be trusted, while other firms will argue that the best voice recognition algorithms are rarely fooled.

What does this mean for your company?

Obviously, different firms are going to respond to the three questions above in different ways.

  • For example, a firm that offers face biometrics but not voice biometrics will convey how voice is not a secure modality due to the ease of spoofing. “Do you want to lose tens of millions of dollars?”
  • A firm that offers voice biometrics but not face biometrics will emphasize its spoof detection capabilities (and cast shade on face spoofing). “We tested our algorithm against that voice fake that was in the news, and we detected the voice as a deepfake!”

There is no universal truth here, and the message your firm conveys depends upon your firm’s unique characteristics.

And those characteristics can change.

  • Once when I was working for a client, this firm had made a particular choice with one of these three questions. Therefore, when I was writing for the client, I wrote in a way that argued the client’s position.
  • After I stopped working for this particular client, the client’s position changed and the firm adopted the opposite view of the question.
  • Therefore I had to message the client and say, “Hey, remember that piece I wrote for you that said this? Well, you’d better edit it, now that you’ve changed your mind on the question…”

Bear this in mind as you create your blog, white paper, case study, or other identity/biometric content, or have someone like the biometric content marketing expert Bredemarket work with you to create your content. There are people who sincerely hold the opposite belief of your firm…but your firm needs to argue that those people are, um, misinformed.

And as a postscript I’ll provide two videos that feature voices. The first is for those who detected my reference to the ABBA song “Waterloo.”

From https://www.youtube.com/watch?v=4XJBNJ2wq0Y.

The second features the late Steve Bridges as President George W. Bush at the White House Correspondents Dinner.

From https://www.youtube.com/watch?v=u5DpKjlgoP4.

Identification Perfection is Impossible

(Part of the biometric product marketing expert series)

There are many different types of perfection.

Jehan Cauvin (we don’t spell his name like he spelled it). By Titian – Bridgeman Art Library: Object 80411, Public Domain, https://commons.wikimedia.org/w/index.php?curid=6016067

This post concentrates on IDENTIFICATION perfection, or the ability to enjoy zero errors when identifying individuals.

The risk of claiming identification perfection (or any perfection) is that a SINGLE counter-example disproves the claim.

  • If you assert that your biometric solution offers 100% accuracy, a SINGLE false positive or false negative shatters the assertion.
  • If you claim that your presentation attack detection solution exposes deepfakes (face, voice, or other), then a SINGLE deepfake that gets past your solution disproves your claim.
  • And as for the pre-2009 claim that latent fingerprint examiners never make a mistake in an identification…well, ask Brandon Mayfield about that one.

In fact, I go so far as to avoid using the phrase “no two fingerprints are alike.” Many years ago (before 2009) in an International Association for Identification meeting, I heard someone justify the claim by saying, “We haven’t found a counter-example yet.” That doesn’t mean that we’ll NEVER find one.
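
A little arithmetic shows why perfection claims are so fragile. The rates below are purely illustrative, not any vendor’s published numbers.

```python
false_match_rate = 1e-5          # assume one false match per 100,000 comparisons
comparisons_per_day = 1_000_000  # assume a busy identification system

expected_false_matches = false_match_rate * comparisons_per_day
print(expected_false_matches)    # 10.0 -- ten potential counter-examples per day
```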

You’ve probably heard me tell the story before about how I misspelled the word “quality.”

In a process improvement document.

While employed by Motorola (pre-split).

At first glance, it appears that Motorola would be the last place to make a boneheaded mistake like that. After all, Motorola is known for its focus on quality.

But in actuality, Motorola was the perfect place to make such a mistake, since it was one of the champions of the “Six Sigma” philosophy (which targets a maximum of 3.4 defects per million opportunities). Motorola realized that manufacturing perfection is impossible, so manufacturers (and the people in Motorola’s weird Biometric Business Unit) should instead concentrate on reducing the error rate as much as possible.
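
(If you’re curious where the famous 3.4 comes from: it’s the normal-distribution tail at 4.5 standard deviations, which is six sigma minus the conventional 1.5-sigma long-term shift. A couple of lines of Python confirm it.)

```python
from scipy.stats import norm

# 6 sigma minus the assumed 1.5 sigma long-term shift = a 4.5 sigma one-sided tail
defects_per_million = norm.sf(4.5) * 1_000_000
print(round(defects_per_million, 1))   # 3.4
```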

So one misspelling could be tolerated, but I shudder to think what would have happened if I had misspelled “quality” a second time.

Converting Prospects For Your Firm’s “Something You Are” Solution

As identity/biometric professionals well know, there are five authentication factors that you can use to gain access to a person’s account. (You can also use these factors for identity verification to establish the person’s account in the first place.)

I described one of these factors, “something you are,” in a 2021 post on the five authentication factors.

Something You Are. I’ve spent…a long time with this factor, since this is the factor that includes biometrics modalities (finger, face, iris, DNA, voice, vein, etc.). It also includes behavioral biometrics, provided that they are truly behavioral and relatively static.

From https://bredemarket.com/2021/03/02/the-five-authentication-factors/

As I mentioned in August, there are a number of biometric modalities, including face, fingerprint, iris, hand geometry, palm print, signature, voice, gait, and many more.

From Sandeep Kumar, A. Sony, Rahul Hooda, Yashpal Singh, in Journal of Advances and Scholarly Researches in Allied Education | Multidisciplinary Academic Research, “Multimodal Biometric Authentication System for Automatic Certificate Generation.”

If your firm offers an identity solution that partially depends upon “something you are,” then you need to create content (blog, case study, social media, white paper, etc.) that converts prospects for your identity/biometric product/service and drives content results.

Bredemarket can help.

Click below for details.

ICYMI: Voice Spoofing

In case you missed it…

But are computerized systems any better, and can they detect spoofed voices?

Well, in the same way that fingerprint readers worked to overcome gummy bears, voice readers are working to overcome deepfake voices.

This is only the beginning of the war against voice spoofing. Other companies will pioneer new advances that will tell the real voices from the fake ones.

As for independent testing:

For the rest of the story, see “We Survived Gummy Fingers. We’re Surviving Facial Recognition Inaccuracy. We’ll Survive Voice Spoofing.”

(Bredemarket email, meeting, contact, subscribe)

Quick Tech Takes on Speech Neuroprosthesis, AEM Dynamic Media, and Graph Databases in IAM

Yes, I’m stealing the Biometric Update practice of combining multiple items into a single post, but this lets me take a brief break from identity (mostly) and examine three general technology stories:

  • Advances in speech neuroprosthesis (the Pat Bennett / Stanford University story).
  • The benefits of Dynamic Media for Adobe Experience Manager users, as described by KBWEB Consult.
  • The benefits of graph databases for Identity and Access Management (IAM) implementations, as described by IndyKite.

Speech Neuroprosthesis

First, let’s define “neuroprosthetics/neuroprosthesis”:

Neuroprosthetics “is a discipline related to neuroscience and biomedical engineering concerned with developing neural prostheses, artificial devices to replace or improve the function of an impaired nervous system.”

From: Neuromodulation (Second Edition), 2018

Various news sources highlighted the story of amyotrophic lateral sclerosis (ALS) patient Pat Bennett and her somewhat-enhanced ability to formulate words, resulting from research at Stanford University.

Diagram of a human highlighting the areas affected by amyotrophic lateral sclerosis (ALS). By PaulWicks – Own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=130714816

Because I was curious, I sought the Nature article that discussed the research in detail, “A high-performance speech neuroprosthesis.” The article describes a proof of concept of a speech brain-computer interface (BCI).

Here we demonstrate a speech-to-text BCI that records spiking activity from intracortical microelectrode arrays. Enabled by these high-resolution recordings, our study participant—who can no longer speak intelligibly owing to amyotrophic lateral sclerosis—achieved a 9.1% word error rate on a 50-word vocabulary (2.7 times fewer errors than the previous state-of-the-art speech BCI2) and a 23.8% word error rate on a 125,000-word vocabulary (the first successful demonstration, to our knowledge, of large-vocabulary decoding). Our participant’s attempted speech was decoded  at 62 words per minute, which is 3.4 times as fast as the previous record8 and begins to approach the speed of natural conversation (160 words per minute9).

From https://www.nature.com/articles/s41586-023-06377-x

While a 125,000 word vocabulary is impressive (most adult native English speakers have a vocabulary of 20,000-35,000 words), a 76.2% accuracy rate is so-so.
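
(Restating the quoted numbers: accuracy is simply one minus the word error rate, and 62 words per minute is still well short of conversational speed.)

```python
wer_large_vocabulary = 0.238                 # from the Nature article
print(f"{1 - wer_large_vocabulary:.1%}")     # 76.2% accuracy

decoded_wpm, conversational_wpm = 62, 160    # from the Nature article
print(f"{decoded_wpm / conversational_wpm:.0%} of natural speaking speed")  # ~39%
```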

Stanford Medicine published a more lay-oriented article and a video that described Bennett’s condition, and the results of the study.

For Bennett, the (ALS) deterioration began not in her spinal cord, as is typical, but in her brain stem. She can still move around, dress herself and use her fingers to type, albeit with increasing difficulty. But she can no longer use the muscles of her lips, tongue, larynx and jaws to enunciate clearly the phonemes — or units of sound, such as sh — that are the building blocks of speech….

After four months, Bennett’s attempted utterances were being converted into words on a computer screen at 62 words per minute — more than three times as fast as the previous record for BCI-assisted communication.

From https://med.stanford.edu/news/all-news/2023/08/brain-implant-speech-als.html
From https://www.youtube.com/watch?v=DaWb1ukmYHQ

The Benefits of AEM Dynamic Media

Now let’s shift to companies that need to produce marketing collateral. Bredemarket produces collateral, but not to the scale that big companies need to produce. A single company may have to produce millions of pieces of collateral, each of which is specific to a particular product, in a particular region, for a particular audience/persona. Even Bredemarket could potentially produce all sorts of content, if it weren’t so difficult to do so:

  • A YouTube description of the Bredemarket 400 Short Writing Service, targeted to fingerprint/face marketing executives in the identity industry.
  • An Instagram carousel post about the Bredemarket 400 Short Writing Service, targeted to voice sales executives in the identity industry.
  • A TikTok reel about the Bredemarket 400 Short Writing Service, targeted to marketing executives in the AI industry.

All of this specialized content, using all of these different image and video formats? I’m not gonna create all that.

But as KBWEB Consult (a boutique technology consulting firm specializing in the implementation and delivery of Adobe Experience Cloud technologies) points out in its article “Implementing Rapid Omnichannel Messaging with AEM Dynamic Media,” Adobe Experience Manager has tools to speed up this process and create correctly-messaged content in ALL the formats for ALL the audiences.

One of those tools is Dynamic Media.

AEM Dynamic Media accelerates omnichannel personalization, ensuring your business messages are presented quickly and in the proper formats. Starting with a master file, Dynamic Media quickly adjusts images and videos to satisfy varying asset specifications, contributing to increased content velocity.

From https://kbwebconsult.com/implementing-rapid-omnichannel-messaging-with-aem-dynamic-media/

For those who aren’t immersed in marketing talk, “content velocity” is simply a measure of how quickly you can produce and deliver content across your channels.

The article also discusses further implementation issues that are of interest to AEM users. If you are such a user, check the article out.
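
To give non-AEM readers a feel for the idea, here is a purely illustrative sketch: one master asset, many renditions requested on the fly by varying URL parameters. The host name is made up, and while the parameter names follow Dynamic Media’s commonly documented image-serving syntax (wid, hei, fmt, qlt), check Adobe’s documentation for the exact syntax your deployment supports.

```python
from urllib.parse import urlencode

MASTER = "https://example.scene7.com/is/image/YourCompany/campaign-hero"  # hypothetical asset URL

renditions = {
    "youtube_thumbnail": {"wid": 1280, "hei": 720, "fmt": "jpg", "qlt": 85},
    "instagram_square":  {"wid": 1080, "hei": 1080, "fmt": "jpg", "qlt": 85},
    "web_banner":        {"wid": 1600, "fmt": "webp", "qlt": 80},
}

for name, params in renditions.items():
    print(f"{name}: {MASTER}?{urlencode(params)}")
```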

Graph Databases in Identity and Access Management (IAM)

I previously said that I was MOSTLY taking a break from identity, but graph databases have applications well beyond identity.

So what is a graph database?

By Originally uploaded by Ahzf (Transferred by Obersachse) – Originally uploaded on en.wikipedia, CC0, https://commons.wikimedia.org/w/index.php?curid=19279472

A graph database, also referred to as a semantic database, is a software application designed to store, query and modify network graphs. A network graph is a visual construct that consists of nodes and edges. Each node represents an entity (such as a person) and each edge represents a connection or relationship between two nodes. 

Graph databases have been around in some variation for a long time. For example, a family tree is a very simple graph database…. 

Graph databases are well-suited for analyzing interconnections…

From https://www.techtarget.com/whatis/definition/graph-database

The claim is that the interconnection analysis capabilities of graph databases are much more flexible and comprehensive than the capabilities of traditional relational databases. While graph databases are not always better than relational databases, they are better for certain types of data.
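
To see why, here is a toy identity graph in plain Python. Real graph databases add query languages, indexing, and persistence, but the underlying idea of traversing relationships is the same.

```python
# Nodes are customers, devices, and accounts; edges are relationships between them.
edges = {
    "customer:alice": ["device:phone-1", "device:laptop-7", "account:checking-42"],
    "customer:bob":   ["device:phone-1"],            # shares a device with Alice
    "device:phone-1": ["customer:alice", "customer:bob"],
}

def neighbors(node: str) -> list[str]:
    return edges.get(node, [])

# Two-hop traversal: who else uses a device linked to Alice?
linked_customers = {
    other
    for device in neighbors("customer:alice") if device.startswith("device:")
    for other in neighbors(device) if other != "customer:alice"
}
print(linked_customers)   # {'customer:bob'}
```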

To see how this applies to identity and access management (IAM), I’ll turn to IndyKite, whose Lasse Andersen recently presented on graph database use in IAM (in a webinar sponsored by Strativ Group). IndyKite describes its solution as follows (in part):

A knowledge graph that holistically captures the identities of customers and IoT devices along with the rich relationships between them

A dynamic and real-time data model that unifies disconnected identity data and business metadata into one contextualized layer

From https://www.indykite.com/identity-knowledge-graph

So what?

For example, how does such a solution benefit banking and financial services providers who wish to support financial identity?

Identity-first security to enable trusted, seamless customer experiences

From https://www.indykite.com/banking

Yes, I know that every identity company (with one exception) uses the word “trust,” and they all use the word “seamless.”

But this particular technology benefits banking customers (at least the honest ones) by using the available interconnections to provide all the essential information about the customer and the customer’s devices, in a way that does not inconvenience the customer. IndyKite claims “greater privacy and security,” along with flexibility for future expansion.

In other words, it increases velocity.

What is your technology story?

I hope you enjoyed this quick overview of these three technology advances.

But do you have a technology story that YOU want to tell?

Perhaps Bredemarket, the technology content marketing expert, can help you select the words to tell your story. If you’re interested in talking, let me know.

Bredemarket logo

Pipe Down Before Panicking Over Voice Resonance Alteration

(Part of the biometric product marketing expert series)

By Steve Tan [steve.tan@pvc4pipes.com] – http://www.pvc4pipes.com, Attribution, https://commons.wikimedia.org/w/index.php?curid=22089684

On the surface, it sounds scary. Tricking automated speaker identification systems with PVC pipe?

(D)igital security engineers at the University of Wisconsin–Madison have found these systems are not quite as foolproof when it comes to a novel analog attack. They found that speaking through customized PVC pipes — the type found at most hardware stores — can trick machine learning algorithms that support automatic speaker identification systems.

From https://news.wisc.edu/down-the-tubes-common-pvc-pipes-can-hack-voice-identification-systems/

So how does the trick work?

The project began when the team began probing automatic speaker identification systems for weaknesses. When they spoke clearly, the models behaved as advertised. But when they spoke through their hands or talked into a box instead of speaking clearly, the models did not behave as expected.

(Shimaa) Ahmed investigated whether it was possible to alter the resonance, or specific frequency vibrations, of a voice to defeat the security system. Because her work began while she was stuck at home due to COVID-19, Ahmed began by speaking through paper towel tubes to test the idea. Later, after returning to the lab, the group hired Yash Wani, then an undergraduate and now a PhD student, to help modify PVC pipes at the UW Makerspace. Using various diameters of pipe purchased at a local hardware store, Ahmed, Wani and their team altered the length and diameter of the pipes until they could produce the same resonance as the voice they were attempting to imitate.

Eventually, the team developed an algorithm that can calculate the PVC pipe dimensions needed to transform the resonance of almost any voice to imitate another. In fact, the researchers successfully fooled the security systems with the PVC tube attack 60 percent of the time in a test set of 91 voices, while unaltered human impersonators were able to fool the systems only 6 percent of the time.

From https://news.wisc.edu/down-the-tubes-common-pvc-pipes-can-hack-voice-identification-systems/

Impressive results. But…

Who was fooled?

We’ve run across these kinds of sweeping biometric claims before, specifically in the first test that asserted that face categorization algorithms were racist and sexist. (Face categorization, not face recognition. That’s another story.) If you didn’t view the Gender Shades website, you’d immediately assume that the hundreds of existing face categorization algorithms had just been proven to be racist and sexist. But if you read the Gender Shades study, you’ll see that it only tested three algorithms (IBM, Microsoft, and Face++). Similarly, the Master Faces study only looked at three algorithms (Dlib, FaceNet, and SphereFace).

So let’s ask the question: which voice algorithms did UW-Madison test?

Here’s what the study (PDF) says.

We evaluate two state-of-the-art ASI models: (1) the x-vector network [51] implemented by Shamsabadi et al. [45], and (2) the emphasized channel attention, propagation and aggregation time delay neural network (ECAPA-TDNN) [17], implemented by SpeechBrain. Both models were trained on VoxCeleb dataset [15, 36, 37], a benchmark dataset for ASI. The x-vector network is trained on 250 speakers using 8 kHz sampling rate. ECAPA-TDNN is trained on 7205 speakers using 16 kHz sampling rate. Both models report a test accuracy within 98-99%.

From https://www.usenix.org/system/files/sec23fall-prepub-452-ahmed.pdf

So what we know is that this test, which used these two ASI models trained on a particular dataset, demonstrated an ability to fool systems 60 percent of the time.

But…

  • What does this mean for other ASI algorithms, including the commercial algorithms in use today?
  • And what does it mean when other datasets are used?

In other words (and I’m adapting my own text here), how do the results of this study affect “current automatic speaker identification products”?

The answer is “We don’t know.”

So pipe down…until we actually test commercial algorithms for this technique.
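
If you want to run that kind of test yourself, the ECAPA-TDNN model cited in the paper is available through SpeechBrain’s pretrained interface. The sketch below follows SpeechBrain’s documented API as I understand it, but class locations change between versions and the audio file names are placeholders, so treat this as a starting point rather than a recipe.

```python
from speechbrain.pretrained import SpeakerRecognition  # may live under speechbrain.inference in newer versions

verifier = SpeakerRecognition.from_hparams(
    source="speechbrain/spkrec-ecapa-voxceleb",
    savedir="pretrained_ecapa",
)

# Compare an enrolled sample against a suspected resonance-altered attempt.
score, accepted = verifier.verify_files("enrolled_speaker.wav", "pvc_pipe_attempt.wav")
print(f"similarity score: {float(score):.3f}, accepted: {bool(accepted)}")
```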

But I’m sure that the UW-Madison researchers and I agree on one thing: more research is needed.

The Difference Between Identity Factors and Identity Modalities

(Part of the biometric product marketing expert series)

I know that I’m the guy who likes to say that it’s all semantics. After all, I’m the person who has referred to five-page long documents as “battlecards.”

But sometimes the semantics are critically important. Take the terms “factors” and “modalities.” On the surface they sound similar, but in practice there is an extremely important difference between factors of authentication and modalities of authentication. Let’s discuss.

What is a factor?

To answer the question “what is a factor,” let me steal from something I wrote back in 2021 called “The five authentication factors.”

Something You Know. Think “password.” And no, passwords aren’t dead. But the use of your mother’s maiden name as an authentication factor is hopefully decreasing.

Something You Have. I’ve spent much of the last ten years working with this factor, primarily in the form of driver’s licenses. (Yes, MorphoTrak proposed driver’s license systems. No, they eventually stopped doing so. But obviously IDEMIA North America, the former MorphoTrust, has implemented a number of driver’s license systems.) But there are other examples, such as hardware or software tokens.

Something You Are. I’ve spent…a long time with this factor, since this is the factor that includes biometrics modalities (finger, face, iris, DNA, voice, vein, etc.). It also includes behavioral biometrics, provided that they are truly behavioral and relatively static.

Something You Do. The Cybersecurity Man chose to explain this in a non-behavioral fashion, such as using swiping patterns to unlock a device. This is different from something such as gait recognition, which supposedly remains constant and is thus classified as behavioral biometrics.

Somewhere You Are. This is an emerging factor, as smartphones become more and more prevalent and locations are therefore easier to capture. Even then, however, precision isn’t always as good as we want it to be. For example, when you and a few hundred of your closest friends have illegally entered the U.S. Capitol, you can’t use geolocation alone to determine who exactly is in Speaker Pelosi’s office.

From https://bredemarket.com/2021/03/02/the-five-authentication-factors/

(By the way, if you search the series of tubes for reading material on authentication factors, you’ll find a lot of references to only three authentication factors, including references from some very respectable sources. Those sources are only 60% right, since they leave off the final two factors I listed above. It’s five factors of authentication, folks. Maybe.)

The one striking thing about the five factors is that while they can all be used to authenticate (and verify) identities, they are inherently different from one another. The ridges of my fingerprint bear no relation to my 16 character password, nor do they bear any relation to my driver’s license. These differences are critical, as we shall see.

What is a modality?

In identity usage, a modality refers to different variations of the same factor. This is most commonly used with the “something you are” (biometric) factor, but it doesn’t have to be.

Biometric modalities

The identity company Aware, which offers multiple biometric solutions, spent some time discussing several different biometric modalities.

[M]any businesses and individuals (are adopting) biometric authentication as it [has] been established as the most secure authentication method surpassing passwords and pins. There are many modalities of biometric authentication to pick from, but which method is the best?

From https://www.aware.com/blog-which-biometric-authentication-method-is-the-best/

After looking at fingerprints, faces, voices, and irises, Aware basically answered its “best” question by concluding “it depends.” Different modalities have their own strengths and weaknesses, depending upon the use case. (If you wear thick gloves as part of your daily work, forget about fingerprints.)

ID R&D goes a step further and argues that it’s best to use multimodal biometrics, in which the two biometrics are face and voice. (By an amazing coincidence, ID R&D offers face and voice solutions.)

And there are many other biometric modalities.

From Sandeep Kumar, A. Sony, Rahul Hooda, Yashpal Singh, in Journal of Advances and Scholarly Researches in Allied Education | Multidisciplinary Academic Research, “Multimodal Biometric Authentication System for Automatic Certificate Generation.”

Non-biometric modalities

But the word “modalities” is not reserved for biometrics alone. The scientific paper “Multimodal User Authentication in Smart Environments: Survey of User Attitudes,” just released in May, includes this image that lists various modalities. As you can see, two of the modalities are not like the others.

From Aloba, Aishat & Morrison-Smith, Sarah & Richlen, Aaliyah & Suarez, Kimberly & Chen, Yu-Peng & Ruiz, Jaime & Anthony, Lisa. (2023). Multimodal User Authentication in Smart Environments: Survey of User Attitudes. Creative Commons Attribution 4.0 International

  • The three modalities in the middle—face, voice, and fingerprint—are all clearly biometric “something you are” modalities.
  • But the modality on the left, “Make a body movement in front of the camera,” is not a biometric modality (despite its reference to the body), but is an example of “something you do.”
  • Passwords, of course, are “something you know.”

In fact, each authentication factor has multiple modalities.

  • For example, a few of the modalities associated with “something you have” include driver’s licenses, passports, hardware tokens, and even smartphones.

Why multifactor is (usually) more robust than multimodal

Modalities within a single authentication factor are more closely related than modalities within multiple authentication factors. As I mentioned above when talking about factors, there is no relationship between my fingerprint, my password, and my driver’s license. However, there is SOME relationship between my driver’s license and my passport, since the two share some common information such as my legal name and my date of birth.

What does this mean?

  • If I’ve fraudulently created a fake driver’s license in your name, I already have some of the information that I need to create a fake passport in your name.
  • If I’ve fraudulently created a fake iris, there’s a chance that I might already have some of the information that I need to create a fake face.
  • However, if I’ve bought your Coinbase password on the dark web, that doesn’t necessarily mean that I was able to also buy your passport information on the dark web (although it is possible).

Therefore, while multimodal authentication is better than unimodal authentication, multifactor authentication is usually better still (unless, as Incode Technologies notes, one of the factors is really, really weak).
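
The distinction is easy to enforce in code. Here is a minimal sketch; the method-to-factor mapping is just an illustration of the five factors discussed above.

```python
FACTOR_BY_METHOD = {
    "password": "something you know",
    "pin": "something you know",
    "fingerprint": "something you are",
    "face": "something you are",
    "voice": "something you are",
    "drivers_license": "something you have",
    "passport": "something you have",
    "swipe_pattern": "something you do",
    "geolocation": "somewhere you are",
}

def is_multifactor(methods: list[str]) -> bool:
    """True multifactor means at least two DISTINCT factors, not just two modalities."""
    return len({FACTOR_BY_METHOD[m] for m in methods}) >= 2

print(is_multifactor(["face", "voice"]))             # False: multimodal, but a single factor
print(is_multifactor(["face", "drivers_license"]))   # True: two different factors
```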

Can an identity content marketing expert help you navigate these issues?

As you can see, you need to be very careful when writing about modalities and factors.

You need a biometric content marketing expert who has worked with many of these modalities.

Actually, you need an identity content marketing expert who has worked with many of these factors.

So if you are with an identity company and need to write a blog post, LinkedIn article, white paper, or other piece of content that touches on multifactor and multimodal issues, why not engage with Bredemarket to help you out?

If you’re interested in receiving my help with your identity written content, contact me.

We Survived Gummy Fingers. We’re Surviving Facial Recognition Inaccuracy. We’ll Survive Voice Spoofing.

(Part of the biometric product marketing expert series)

Some of you are probably going to get into an automobile today.

Are you insane?

The National Highway Traffic Safety Administration has released its latest projections for traffic fatalities in 2022, estimating that 42,795 people died in motor vehicle traffic crashes.

From https://www.nhtsa.gov/press-releases/traffic-crash-death-estimates-2022

When you have tens of thousands of people dying, then the only conscionable response is to ban automobiles altogether. Any other action or inaction is completely irresponsible.

After all, you can ask the experts who want us to ban biometrics because it can be spoofed and is racist, and therefore we shouldn’t use biometrics at all.

I disagree with the calls to ban biometrics, and I’ll go through three “biometrics are bad” examples and say why banning biometrics is NOT justified.

  • Even some identity professionals may not know about the old “gummy fingers” story from 20+ years ago.
  • And yes, I know that I’ve talked about Gender Shades ad nauseum, but it bears repeating again.
  • And voice deepfakes are always a good topic to discuss in our AI-obsessed world.

Example 1: Gummy fingers

My recent post “Why Apple Vision Pro Is a Technological Biometric Advance, but Not a Revolutionary Biometric Event” included the following sentence:

But the iris security was breached by a “dummy eye” just a month later, in the same way that gummy fingers and face masks have defeated other biometric technologies.

From https://bredemarket.com/2023/06/12/vision-pro-not-revolutionary-biometrics-event/

A biometrics industry colleague noticed the rhyming words “dummy” and “gummy” and wondered if the latter was a typo. It turns out it wasn’t.

To my knowledge, these gummy fingers do NOT have ridges. From https://www.candynation.com/gummy-fingers

Back in 2002, researcher Tsutomu Matsumoto used the gelatin found in gummy bears to create a fake finger that fooled a fingerprint reader.

At the time, this news WAS really “scary,” since it suggested that you could access a fingerprint reader-protected site with something that wasn’t a finger. Gelatin. A piece of metal. A photograph.

Except that the fingerprint reader world didn’t stand still after 2002, and the industry developed ways to detect spoofed fingers. Here’s a recent example of presentation attack detection (liveness detection) from TECH5:

TECH5 participated in the 2023 LivDet Non-contact Fingerprint competition to evaluate its latest NN-based fingerprint liveness detection algorithm and has achieved first and second ranks in the “Systems” category for both single- and four-fingerprint liveness detection algorithms respectively. Both submissions achieved the lowest error rates on bonafide (live) fingerprints. TECH5 achieved 100% accuracy in detecting complex spoof types such as Ecoflex, Playdoh, wood glue, and latex with its groundbreaking Neural Network model that is only 1.5MB in size, setting a new industry benchmark for both accuracy and efficiency.

From https://tech5.ai/tech5s-mobile-fingerprint-liveness-detection-technology-ranked-the-most-accurate-in-the-market/

TECH5 excelled in detecting fake fingers for “non-contact” reading, where the fingers don’t even touch a capture surface. That’s appreciably harder than detecting fake fingers that touch contact devices.

I should note that LivDet is an independent assessment. As I’ve said before, independent technology assessments provide some guidance on the accuracy and performance of technologies.

So gummy fingers and future threats can be addressed as they arrive.

But at least gummy fingers aren’t racist.

Example 2: Gender shades

In 2017-2018, the Algorithmic Justice League set out to answer this question:

How well do IBM, Microsoft, and Face++ AI services guess the gender of a face?

From http://gendershades.org/. Yes, that’s “http,” not “https.” But I digress.

Let’s stop right there for a moment and address two items before we continue. Trust me; it’s important.

  1. This study evaluated only three algorithms: one from IBM, one from Microsoft, and one from Face++. It did not evaluate the hundreds of other facial recognition algorithms that existed in 2018 when the study was released.
  2. The study focused on gender classification and race classification. Back in those primitive innocent days of 2018, the world assumed that you could look at a person and tell whether the person was male or female, or tell the race of a person. (The phrase “self-identity” had not yet become popular, despite the Rachel Dolezal episode which happened before the Gender Shades study). Most importantly, the study did not address identification of individuals at all.

However, the study did find something:

While the companies appear to have relatively high accuracy overall, there are notable differences in the error rates between different groups. Let’s explore.

All companies perform better on males than females with an 8.1% – 20.6% difference in error rates.

All companies perform better on lighter subjects as a whole than on darker subjects as a whole with an 11.8% – 19.2% difference in error rates.

When we analyze the results by intersectional subgroups – darker males, darker females, lighter males, lighter females – we see that all companies perform worst on darker females.

From http://gendershades.org/overview.html

What does this mean? It means that if you are using one of these three algorithms solely for the purpose of determining a person’s gender and race, some results are more accurate than others.

Three algorithms do not predict hundreds of algorithms, and classification is not identification. If you’re interested in more information on the differences between classification and identification, see Bredemarket’s November 2021 submission to the Department of Homeland Security. (Excerpt here.)
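
One more way to see the difference: the two tasks don’t even have the same function signature. These are hypothetical signatures for illustration, not any product’s actual API.

```python
from typing import NamedTuple

class Candidate(NamedTuple):
    subject_id: str
    similarity: float

def classify_attributes(face_image: bytes) -> dict[str, str]:
    """Gender Shades-style classification: estimates attributes, identifies no one."""
    ...

def identify(probe_image: bytes, gallery: dict[str, bytes], top_n: int = 10) -> list[Candidate]:
    """1:N identification: returns ranked candidates -- an investigative lead, not an arrest warrant."""
    ...
```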

And all the stories about people such as Robert Williams being wrongfully arrested based upon faulty facial recognition results have nothing to do with Gender Shades. I’ll address this briefly (for once):

  • In the United States, facial recognition identification results should only be used by the police as an investigative lead, and no one should be arrested solely on the basis of facial recognition. (The city of Detroit stated that Williams’ arrest resulted from “sloppy” detective work.)
  • If you are using facial recognition for criminal investigations, your people had better have forensic face training. (Then they would know, as Detroit investigators apparently didn’t know, that the quality of surveillance footage is important.)
  • If you’re going to ban computerized facial recognition (even when only used as an investigative lead, and even when only used by properly trained individuals), consider the alternative of human witness identification. Or witness misidentification. Roeling Adams, Reggie Cole, Jason Kindle, Adam Riojas, Timothy Atkins, Uriah Courtney, Jason Rivera, Vondell Lewis, Guy Miles, Luis Vargas, and Rafael Madrigal can tell you how inaccurate (and racist) human facial recognition can be. See my LinkedIn article “Don’t ban facial recognition.”

Obviously, facial recognition has been the subject of independent assessments, including continuous bias testing by the National Institute of Standards and Technology as part of its Face Recognition Vendor Test (FRVT), specifically within the 1:1 verification testing. And NIST has measured the identification bias of hundreds of algorithms, not just three.

In fact, people that were calling for facial recognition to be banned just a few years ago are now questioning the wisdom of those decisions.

But those days were quaint. Men were men, women were women, and artificial intelligence was science fiction.

The latter has certainly changed.

Example 3: Voice spoofs

Perhaps it’s an exaggeration to say that recent artificial intelligence advances will change the world. Perhaps it isn’t. Personally I’ve been concentrating on whether AI writing can adopt the correct tone of voice, but what if we take the words “tone of voice” literally? Let’s listen to President Richard Nixon:

From https://www.youtube.com/watch?v=2rkQn-43ixs

Richard Nixon never spoke those words in public, although it’s possible that he may have rehearsed William Safire’s speech, composed in case Apollo 11 had not resulted in one giant leap for mankind. As noted in the video, Nixon’s voice and appearance were spoofed using artificial intelligence to create a “deepfake.”

It’s one thing to alter the historical record. It’s another thing altogether when a fraudster spoofs YOUR voice and takes money out of YOUR bank account. By definition, you will take that personally.

In early 2020, a branch manager of a Japanese company in Hong Kong received a call from a man whose voice he recognized—the director of his parent business. The director had good news: the company was about to make an acquisition, so he needed to authorize some transfers to the tune of $35 million. A lawyer named Martin Zelner had been hired to coordinate the procedures and the branch manager could see in his inbox emails from the director and Zelner, confirming what money needed to move where. The manager, believing everything appeared legitimate, began making the transfers.

What he didn’t know was that he’d been duped as part of an elaborate swindle, one in which fraudsters had used “deep voice” technology to clone the director’s speech…

From https://www.forbes.com/sites/thomasbrewster/2021/10/14/huge-bank-fraud-uses-deep-fake-voice-tech-to-steal-millions/?sh=8e8417775591

Now I’ll grant that this is an example of human voice verification, which can be as inaccurate as the previously referenced human witness misidentification. But are computerized systems any better, and can they detect spoofed voices?

Well, in the same way that fingerprint readers worked to overcome gummy bears, voice readers are working to overcome deepfake voices. Here’s what one company, ID R&D, is doing to combat voice spoofing:

IDVoice Verified combines ID R&D’s core voice verification biometric engine, IDVoice, with our passive voice liveness detection, IDLive Voice, to create a high-performance solution for strong authentication, fraud prevention, and anti-spoofing verification.

Anti-spoofing verification technology is a critical component in voice biometric authentication for fraud prevention services. Before determining a match, IDVoice Verified ensures that the voice presented is not a recording.

From https://www.idrnd.ai/idvoice-verified-voice-biometrics-and-anti-spoofing/

This is only the beginning of the war against voice spoofing. Other companies will pioneer new advances that will tell the real voices from the fake ones.

As for independent testing:

A final thought

Yes, fraudsters can use advanced tools to do bad things.

But the people who battle fraudsters can also use advanced tools to defeat the fraudsters.

Take care of yourself, and each other.

Jerry Springer. By Justin Hoch, CC BY 2.0, https://commons.wikimedia.org/w/index.php?curid=16673259