Three Ways to Identify and Share Your Identity Firm’s Differentiators

(Part of the biometric product marketing expert series)

Are you an executive with a small or medium-sized identity/biometrics firm?

If so, you want to share the story of your identity firm. But what are you going to say?

How will you figure out what makes your firm better than all the inferior identity firms that compete with you?

How will you get the word out about why your identity firm beats all the others?

Are you getting tired of my repeated questions?

Are you ready for the answers?

Your identity firm differs from all others

Over the last 29 years, I (John E. Bredehoft of Bredemarket) have worked for and with over a dozen identity firms, either as an employee or as a consultant.

You’d think that, since I have worked for so many different identity firms, it would be easy to start working with a new firm by simply slapping down the messaging that I’ve created for all the other identity firms.

Nothing could be further from the truth.

Designed by Freepik.

Every identity firm needs different messaging.

  • The messaging that I created in my various roles at IDEMIA and its corporate predecessors was dramatically different from the messaging I created as a Senior Product Marketing Manager at Incode Technologies, which was also very different from the messaging that I created for my previous Bredemarket clients.
  • IDEMIA benefits such as “servicing your needs anywhere in the world” and “applying our decades of identity experience to solve your problems” are not going to help with a U.S.-only firm that’s only a decade old.
  • Similarly, messaging for a company that develops its own facial recognition algorithms will necessarily differ from messaging for a company that chooses the best third-party facial recognition algorithms on the market.

So which messaging is right?

It depends on who is paying me.

How your differences affect your firm’s messaging

When creating messaging for your identity firm, one size does not fit all, for the reasons listed above.

The content of your messaging will differ, based upon your differentiators.

  • For example, if you were the U.S.-only firm established less than ten years ago, your messaging would emphasize the newness of your solution and approach, as opposed to the stodgy legacy companies that never updated their ideas.
  • And if your firm has certain types of end users, such as law enforcement users, your messaging would probably feature an abundance of U.S. flags.

In addition, the channels that you use for your messaging will differ.

Identity firms will not want to market on every single channel. They will only market on the channels where their most motivated buyers are present.

  • That may be your own website.
  • Or LinkedIn.
  • Or Facebook.
  • Or Twitter.
  • Or Instagram.
  • Or YouTube.
  • Or TikTok.
  • Or a private system only accessible to people with a Top Secret Clearance.
  • Or display advertisements located in airports.
From https://www.youtube.com/watch?v=H02iwWCrXew

It may be more than one of these channels, but it probably won’t be all of them.

But before you work on your content or channels, you need to know what to say, and how to communicate it.

How to know and communicate your differentiators

As we’ve noted, your firm is different from all others.

  • How do you know the differences?
  • How do you know what you want to talk about?
  • How do you know what you DON’T want to talk about?

Here are three methods to get you started on knowing and communicating your differentiators in your content.

Method One: The time-tested SWOT analysis

If you talk to a marketer for more than two seconds about positioning a company, the marketer will probably throw the acronym “SWOT” back at you. I’ve mentioned the SWOT acronym before.

For those who don’t know the acronym, SWOT stands for

  • Strengths. These are internal attributes that benefit your firm. For example, your firm is winning a lot of business and growing in customer count and market share.
  • Weaknesses. These are also internal attributes, but in this case the attributes that detract from your firm. For example, you have very few customers.
  • Opportunities. These are external factors that enhance your firm. One example is a COVID or similar event that creates a surge in demand for contactless solutions.
  • Threats. The flip side is external factors that can harm your firm. One example is increasing privacy regulations that can slow or halt adoption of your product or service.

If you’re interested in more detail on the topic, there are a number of online sources that discuss SWOT analyses. Here’s TechTarget’s discussion of SWOT.

The most common way to present the output of a SWOT analysis is to draw four boxes and list each element (S, W, O, and T) within its own box.

By Syassine – Own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=31368987

Once this is done, you’ll know that your messaging should emphasize the strengths and opportunities, and downplay or avoid the weaknesses and threats.

Or alternatively argue that the weaknesses and threats are really strengths and opportunities. (I’ve done this before.)
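
If it helps to make the four-box output concrete, here is a minimal sketch in Python. The entries are placeholders I invented for illustration (not recommendations for any real firm); it simply shows how an identity firm might capture its SWOT findings before turning them into messaging:

    # A minimal SWOT capture for a hypothetical identity firm.
    # The entries are illustrative placeholders, not recommendations.
    swot = {
        "strengths":     ["Winning new business", "Growing customer count and market share"],
        "weaknesses":    ["Very few reference customers"],
        "opportunities": ["Surge in demand for contactless solutions"],
        "threats":       ["Increasing privacy regulations"],
    }

    # Messaging guidance: emphasize the internal strengths and external
    # opportunities; downplay (or reframe) the weaknesses and threats.
    for item in swot["strengths"] + swot["opportunities"]:
        print("Emphasize:", item)
    for item in swot["weaknesses"] + swot["threats"]:
        print("Downplay or reframe:", item)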

Method Two: Think before you create

Personally, I believe that a SWOT analysis is not enough. Before you use the SWOT findings to create content, there’s a little more work you have to do.

I recommend that before you create content, you should hold a kickoff of the content creation process and figure out what you want to do before you do it.

During that kickoff meeting, you should ask some questions to make sure you understand what needs to be done.

I’ve written about kickoffs and questions before, and I’m not going to repeat what I already said here. If you want to know more, see my earlier posts on those topics.

Method Three: Send in the reinforcements

Now that you’ve locked down the messaging, it’s time to actually create the content that differentiates your identity firm from all the inferior identity firms in the market. While some companies can proceed right to content creation, others may run into one of two problems.

  • The identity firm doesn’t have any knowledgeable writers on staff. To create the content, you need people who understand the identity industry, and who know how to write. Some firms lack people with this knowledge and capability.
  • The identity firm has knowledgeable writers on staff, but they’re busy. Some companies have too many things to do at once, and any knowledgeable writers that are on staff may be unavailable due to other priorities.
Your current staff may have too much to do. By Backlit – Own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=12225421

This is where you supplement your identity firm’s existing staff with one or more knowledgeable writers who can work with you to create the content that leaves your inferior competitors in the dust.

What is next?

So do you need a knowledgeable biometric content marketing expert to create your content?

One who has been in the biometric industry for 29 years?

One who has been writing short and long form content for more than 29 years?

Are you getting tired of my repeated questions again?

Well then I’ll just tell you that Bredemarket is the answer to your identity/biometric content marketing needs.

Are you ready to take your identity firm to the next level with a compelling message that increases awareness, consideration, conversion, and long-term revenue? Let’s talk today!

Why Apple Vision Pro Is a Technological Biometric Advance, but Not a Revolutionary Biometric Event

(Part of the biometric product marketing expert series)

(UPDATE JUNE 24: CORRECTED THE YEAR THAT COVID BEGAN.)

I haven’t said anything publicly about Apple Vision Pro, so it’s time for me to be “how do you do fellow kids” trendy and jump on the bandwagon.

Actually…

It ISN’T time for me to jump on the Apple Vision Pro bandwagon, because while Apple Vision Pro affects the biometric industry, it’s not a REVOLUTIONARY biometric event.

The four revolutionary biometric events in the 21st century

How do I define a “revolutionary biometric event”?

By Alberto Korda – Museo Che Guevara, Havana Cuba, Public Domain, https://commons.wikimedia.org/w/index.php?curid=6816940

I define it as something that completely transforms the biometric industry.

When I mention three of the four revolutionary biometric events in the 21st century, you will understand what I mean.

  • 9/11. After 9/11, orders of biometric devices skyrocketed, and biometrics were incorporated into identity documents such as passports and driver’s licenses. Who knows, maybe someday we’ll actually implement REAL ID in the United States. The latest extension of the REAL ID enforcement date moved it out to May 7, 2025. (Subject to change, of course.)
  • The Boston Marathon bombings, April 2013. After the bombings, the FBI was challenged to manage and analyze countless hours of video evidence. Companies such as IDEMIA National Security Solutions, MorphoTrak, Motorola, Paravision, Rank One Computing, and many others have worked tirelessly to address this challenge, ensuring that facial recognition results accurately identify perpetrators while protecting the privacy of others in the video feeds.
  • COVID-19, spring 2020 and beyond. COVID accelerated changes that were already taking place in the biometric industry. COVID prioritized mobile, remote, and contactless interactions and forced businesses to address issues that were not as critical previously, such as liveness detection.

These three are cataclysmic world events that had a profound impact on biometrics. The fourth one, which occurred after the Boston Marathon bombings but before COVID, was…an introduction of a product feature.

  • Touch ID, September 2013. When Apple introduced the iPhone 5s, it also introduced a new way to log in to the device. Rather than entering a passcode, iPhone 5s users could just use their finger to log in. The technical accomplishment was dwarfed by the legitimacy that this brought to using fingerprints for identification. Before 2013, attempts to implement fingerprint verification for benefits recipients were resisted because fingerprinting was something that criminals did. After September 2013, fingerprinting was something that the cool Apple kids did. The biometric industry changed overnight.

Of course, Apple followed Touch ID with Face ID, with adherents of the competing biometric modalities sparring over which was better. But Face ID wouldn’t have been accepted as widely if Touch ID hadn’t paved the way.

So why hasn’t iris verification taken off?

Iris verification has been around for decades (I remember Iridian before L-1; it’s now part of IDEMIA), but iris verification is nowhere near as popular in the general population as finger and face verification. There are two reasons for this:

  • Compared to other biometrics, irises are hard to capture. To capture a fingerprint, you can lay your finger on a capture device, or “slap” your four fingers on a capture device, or even “wave” your fingers across a capture device. Faces are even easier to capture; while older face capture systems required you to stand close to the camera, modern face devices can capture your face as you are walking by the camera, or even if you are some distance from the camera.
  • Compared to other biometrics, irises are expensive to capture. Many years ago, my then-employer developed a technological marvel, an iris capture device that could accurately capture irises for people of any height. Unfortunately, the technological marvel cost thousands upon thousands of dollars, and no customers were going to use it when they could acquire fingerprint and face capture devices that were much less costly.

So while people rushed to implement finger and face capture on phones and other devices, iris capture was reserved for narrow verticals that required iris accuracy.

With one exception. Samsung incorporated Princeton Identity technology into its Samsung Galaxy S8 in 2017. But the iris security was breached by a “dummy eye” just a month later, in the same way that gummy fingers and face masks have defeated other biometric technologies. (This is why liveness detection is so important.) While Samsung continues to sell iris verification today, it hadn’t been adopted by Apple and therefore wasn’t cool.

Until now.

About the Apple Vision Pro and Optic ID

The Apple Vision Pro is not the first headset that was ever created, but the iPhone wasn’t the first smartphone either. And coming late to the game doesn’t matter. Apple’s visibility among trendsetters ensures that when Apple releases something, people take notice.

And when all of us heard about Vision Pro, one of the things that Apple shared about it was its verification technique. Not Touch ID or Face ID, but Optic ID. (I like naming consistency.)

According to Apple, Optic ID works by analyzing a user’s iris through LED light exposure and then comparing it with an enrolled Optic ID stored on the device’s Secure Enclave….Optic ID will be used for everything from unlocking Vision Pro to using Apple Pay in your own headspace.

From The Verge, https://www.theverge.com/2023/6/5/23750147/apple-optic-id-vision-pro-iris-biometrics

So why did Apple incorporate Optic ID on this device and not the others?

There are multiple reasons, but one key reason is that the Vision Pro retails for US$3,499, which makes it easier for Apple to justify the cost of the iris components.

But the high price of the Vision Pro comes at…a price

However, that high price is also the reason why the Vision Pro is not going to revolutionize the biometric industry. CNET admitted that the Vision Pro is a niche item:

At $3,499, Apple’s Vision Pro costs more than three weeks worth of pay for the average American, according to Bureau of Labor Statistics data. It’s also significantly more expensive than rival devices like the upcoming $500 Meta Quest 3, $550 Sony PlayStation VR 2 and even the $1,000 Meta Quest Pro

From CNET, https://www.cnet.com/tech/computing/why-apple-vision-pros-3500-price-makes-more-sense-than-you-think/

Now CNET did go on to say the following:

With Vision Pro, Apple is trying to establish what it believes will be the next major evolution of the personal computer. That’s a bigger goal than selling millions of units on launch day, and a shift like that doesn’t happen overnight, no matter what the price is. The version of Vision Pro that Apple launches next year likely isn’t the one that most people will buy.

From CNET, https://www.cnet.com/tech/computing/why-apple-vision-pros-3500-price-makes-more-sense-than-you-think/

Certainly Vision Pro and Optic ID have the potential to revolutionize the computing industry…in the long term. And as that happens, the use of iris biometrics will become more popular with the general public…in the long term.

But not today. You’ll have to wait a little longer for the next biometric revolution. And hopefully it won’t be a catastrophic event like three of the previous revolutions.

Using “Multispectral” and “Liveness” in the Same Sentence

(Part of the biometric product marketing expert series)

Now that I’m plunging back into the fingerprint world, I’m thinking about all the different types of fingerprint readers.

  • The optical fingerprint and palm print readers are still around.
  • And the capacitive fingerprint readers still, um, persist.
  • And of course you have the contactless fingerprint readers such as MorphoWave, one that I know about.
  • And then you have the multispectral fingerprint readers.

What is multispectral?

Bayometric offers a web page that covers some of these fingerprint reader types, and points out the drawbacks of some of the readers they discuss.

Latent prints are usually produced by sweat, skin debris or other sebaceous excretions that cover up the palmar surface of the fingertips. If a latent print is on the glass platen of the optical sensor and light is directed on it, this print can fool the optical scanner….

Capacitive sensors can be spoofed by using gelatin based soft artificial fingers.

From https://www.bayometric.com/fingerprint-reader-technology-comparison/

There is another weakness of these types of readers. Some professions damage and wear away a person’s fingerprint ridges. Examples of professions whose practitioners exhibit worn ridges include construction workers and biometric content marketing experts (who, at least in the old days, handled a lot of paper).

The solution is to design a fingerprint reader that not only examines the surface of the finger, but goes deeper.

From HID Global, “A Guide to MSI Technology: How It Works,” https://blog.hidglobal.com/2022/10/guide-msi-technology-how-it-works

The specialty of multispectral sensors is that it can capture the features of the tissue that lie below the skin surface as well as the usual features on the finger surface. The features under the skin surface are able to provide a second representation of the pattern on the fingerprint surface.

From https://www.bayometric.com/fingerprint-reader-technology-comparison/

Multispectral sensors are nothing new. When I worked for Motorola, Motorola Ventures had invested in a company called Lumidigm that produced multispectral fingerprint sensors; they were much more expensive than your typical optical or capacitive sensor, but were much more effective in capturing true fingerprints to the subdermal level.

Lumidigm was eventually acquired in 2014: not by Motorola (which had sold off its biometric assets such as Printrak and Symbol), but by HID Global. HID Global continues to produce Lumidigm-branded multispectral fingerprint sensors to this day.

But let’s take a look at the other word I bandied about.

What is liveness?

KISS, Alive! By Obtained from allmusic.com., Fair use, https://en.wikipedia.org/w/index.php?curid=2194847

“Gelatin based soft artificial fingers” aren’t the only way to fool a biometric sensor, whether you’re talking about a fingerprint sensor or some other sensor such as a face sensor.

Regardless of the biometric modality, the intent is the same; instead of capturing a true biometric from a person, the biometric sensor is fooled into capturing a fake biometric: an artificial finger, a face with a mask on it, or a face on a video screen (rather than a face of a live person).

This tomfoolery is called a “presentation attack” (because you’re attacking security with a fake presentation).

But the standards folks have developed ISO/IEC 30107-3:2023, Information technology — Biometric presentation attack detection — Part 3: Testing and reporting.

And an organization called iBeta is one of the testing facilities authorized to test in accordance with the standard and to determine whether a biometric reader can detect the “liveness” of a biometric sample.

(Friends, I’m not going to get into passive liveness and active liveness. That’s best saved for another day.)

[UPDATE 4/24/2024: I FINALLY ADDRESSED THE DIFFERENCE BETWEEN ACTIVE AND PASSIVE LIVENESS HERE.]

Multispectral liveness

While multispectral fingerprint readers aren’t the only fingerprint readers, or the only biometric readers, that iBeta has tested for liveness, the HID Global Lumidigm readers conform to Level 2 (the higher level) of iBeta testing.

There’s a confirmation letter and everything.

From the iBeta website.

This letter was issued in 2021. For some odd reason, HID Global decided to publicize this in 2023.

Oh well. It’s good to occasionally remind people of stuff.

I’m still the biometric content marketing and proposal writing expert…but who benefits?

About a year ago, I began marketing myself as the biometric proposal writing expert and biometric content marketing expert. From a search engine optimization perspective, I have succeeded at this: Bredemarket tops the organic search results for these phrases.

Well, it seemed like a good idea at the time.

And maybe it still is.

Let’s look at why I declared myself the biometric proposal writing expert (BPWE) and biometric content marketing expert (BCME) in mid-2021, what happened over the last few months, why it happened, and who benefits.

Why am I the BPWE and BCME?

At the time that I launched this marketing effort, I wanted to establish Bredemarket’s biometric credentials. I was primarily providing my expertise to identity/biometric firms, so it made sense to emphasize my 25+ years of identity/biometric expertise, coupled with my proposal, marketing, and product experience. Some of my customers already knew this, but others did not.

So I coupled the appropriate identity words with the appropriate proposal and content words, and plunged full-on into the world of biometric proposal writing expert (BPWE within Bredemarket’s luxurious offices) and biometric content marketing expert (BCME here) marketing.

What happened?

There’s one more thing that’s been happening in Bredemarket’s luxurious offices over the last couple of months.

I’ve been uttering the word “pivot” a lot.

Since March 2022, I’ve made a number of changes at Bredemarket, including pricing changes and modifications to my office hours. But this post concentrates on a change that affects the availability of the BPWE and BCME.

Let’s say that it’s December 2022, and someone performs a Google, Bing, or DuckDuckGo search for a biometric content marketing expert. The person finds Bredemarket, and excitedly goes to Bredemarket’s biometric content marketing expert page, only to encounter this text at the top of the page:

Update 4/25/2022: Effective immediately, Bredemarket does NOT accept client work for solutions that identify individuals using (a) friction ridges (including fingerprints and palm prints), (b) faces, and/or (c) secure documents (including driver’s licenses and passports). 

“Thanks a lot,” thinks the searcher.

Granted, there are others such as Tandem Technical Writing and Applied Forensic Services who can provide biometric consulting services, but the searcher won’t get the chance to work with ME.

Should have contacted me before April 2022.

Sheila Sund from Salem, United States, CC BY 2.0 https://creativecommons.org/licenses/by/2.0, via Wikimedia Commons

Why did it happen?

I’ve already shared some (not all) details about why I’m pivoting with the Bredemarket community, but perhaps you didn’t get the memo.

I have accepted a full-time position as a Senior Product Marketing Manager with an identity company. (I’ll post the details later on my personal LinkedIn account, https://www.linkedin.com/in/jbredehoft/.) This dramatically decreases the amount of time I can spend on my Bredemarket consultancy, and also (for non-competition reasons) limits the companies with which I can do business. 

Those of you who have followed Bredemarket from the beginning will remember that Bredemarket was only one part of a two-pronged approach. After becoming a “free agent” (also known as “being laid off”) in July 2020, my initial emphasis was on finding full-time employment. Within a month, however, I found myself accepting independent contracting projects, and formally established Bredemarket to handle that work. Therefore, I was simultaneously (a) looking for full-time work, and (b) growing my consulting business. And I’ve been doing both simultaneously for over a year and a half. 

Now that I’ve found full-time employment again, I’m not going to give up the consulting business. But it’s definitely going to have to change, as outlined in my April 25, 2022 update.

So now all of this SEO traction will not benefit you, the potential Bredemarket finger/face client, but it obviously will benefit my new employer. I can see it now when people talk about my new employer: “Isn’t that the company where the biometric content marketing expert is the Senior Product Marketing Manager?”

At least somebody will benefit.

P.S. There’s a “change” Spotify playlist. Unlike Kevin Meredith, I don’t use my playlists to make sure my presentation is within the allotted time. Especially when I create my longer 100-plus song playlists; no one wants to hear me speak for that long. Thankfully for you, this playlist is only a little over an hour long, and includes various songs on change, moving, endings, beginnings, and time.

Who is THE #1 NIST facial recognition vendor?

(Part of the biometric product marketing expert series)

(When I wrote this in 2022 I used the then-current FRVT terminology. I’ve updated to FRTE as warranted.)

As I’ve noted before, there are a number of facial recognition companies that claim to be the #1 NIST facial recognition vendor. I’m here to help you cut through the clutter so you know who the #1 NIST facial recognition vendor truly is.

You can confirm this information yourself by visiting the NIST FRTE 1:1 Verification and FRTE 1:N Identification pages. The old FRVT, by the way, stood for “Face Recognition Vendor Test”—and has subsequently been replaced by FRTE, “Face Recognition Technology Evaluation.”


From https://www.nist.gov/programs-projects/face-technology-evaluations-frtefate.

So I can announce to you that as of February 23, 2022, the #1 NIST facial recognition vendor is Cloudwalk.

And Sensetime.

And Beihang University ERCACAT.

And Cubox.

And Adera.

And Chosun University.

And iSAP Solution Corporation.

And Bitmain.

And Visage Technologies.

And Expasoft LLC.

And Paravision.

And NEC.

And Ptakuratsatu.

And Ayonix.

And Rank One.

And Dermalog.

And Innovatrics.

Now how can ALL dozen-plus of these entities be number 1?

Easy.

The NIST 1:1 and 1:N tests include many different accuracy and performance measurements, and each of the entities listed above placed #1 in at least one of these measurements. And all of the databases, database sizes, and use cases measure very different things.

Transportation Security Administration Checkpoint at John Glenn Columbus International Airport. By Michael Ball – Own work, CC0, https://commons.wikimedia.org/w/index.php?curid=77279000

For example:

  • Visage Technologies was #1 in the 1:1 performance measurements for template generation time, in milliseconds, for 480×720 and 960×1440 data.
  • Meanwhile, NEC was #1 in the 1:N Identification (T>0) accuracy measurements for gallery border, probe border with a delta T greater than or equal to 10 years, N = 1.6 million.
  • Not to be confused with the 1:N Identification (T>0) accuracy measurements for gallery visa, probe border, N = 1.6 million, where the #1 algorithm was not from NEC.
  • And not to be confused with the 1:N Investigation (R = 1, T = 0) accuracy measurements for gallery border, probe border with a delta T greater than or equal to 10 years, N = 1.6 million, where the #1 algorithm was not from NEC.
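
To see how a dozen-plus entities can each be #1, here is a minimal sketch in Python. The vendors, metrics, and numbers are invented for illustration (they are not NIST data); the point is simply that when a test reports many separate measurements, a different algorithm can top each one:

    # Hypothetical results: metric -> {vendor: score}.
    # Lower is better for times; higher is better for accuracy rates.
    # All numbers are invented for illustration, not actual FRTE results.
    results = {
        "1:1 template generation time (ms)":         {"Vendor A": 210,   "Vendor B": 540,   "Vendor C": 330},
        "1:N accuracy, gallery border/probe border": {"Vendor A": 0.991, "Vendor B": 0.996, "Vendor C": 0.989},
        "1:N accuracy, gallery visa/probe border":   {"Vendor A": 0.984, "Vendor B": 0.981, "Vendor C": 0.993},
    }

    lower_is_better = {"1:1 template generation time (ms)"}

    for metric, scores in results.items():
        pick = min if metric in lower_is_better else max
        winner = pick(scores, key=scores.get)
        print(f"#1 for {metric}: {winner}")

    # Three measurements, three different #1 vendors -- and each one can
    # truthfully claim to be "the #1 NIST vendor" in its marketing.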

And can I add a few more caveats?

First caveat: Since all of these tests are ongoing tests, you can probably find a slightly different set of #1 algorithms if you look at the January data, and you will probably find a slightly different set of #1 algorithms when the March data is available.

Second caveat: These are the results for the unqualified #1 NIST categories. You can add qualifiers, such as “#1 non-Chinese vendor” or “#1 western vendor” or “#1 U.S. vendor” to vault a particular algorithm to the top of the list.

Third caveat: You can add even more qualifiers, such as “within the top five NIST vendors” and (one I admit to having used before) “a top tier NIST vendor in multiple categories.” This can mean whatever you want it to mean. (As can “dramatically improved” algorithm, which may mean that you vaulted from position #300 to position #200 in one of the categories.)

Fourth caveat: Even if a particular NIST test applies to your specific use case, #1 performance on a NIST test does not guarantee that a facial recognition system supplied by that entity will yield #1 performance with your database in your environment. The algorithm sent to NIST may or may not make it into a production system. And even if it does, performance against a particular NIST test database may not yield the same results as performance against a Rhode Island criminal database, a French driver’s license database, or a Nigerian passport database. For more information on this, see Mike French’s LinkedIn article “Why agencies should conduct their own AFIS benchmarks rather than relying on others.”

So now that you know who the #1 NIST facial recognition vendor is, do you feel more knowledgeable?

I’ll grant, though, that a NIST accuracy or performance claim is better than some other claims, such as self-test results.

DNA mixture interpretation outside of the forensic laboratory? Apparently not yet.

(Part of the biometric product marketing expert series)

The National Institute of Standards and Technology has published a draft report entitled DNA Mixture Interpretation: A Scientific Foundation Review.

As NIST explains:

This report, currently published in draft form, reviews the methods that forensic laboratories use to interpret evidence containing a mixture of DNA from two or more people.

From https://www.nist.gov/dna-mixture-interpretation-nist-scientific-foundation-review

The problem of mixtures is more pronounced in DNA analysis than in analysis of other biometrics. You aren’t going to encounter two overlapping irises or two overlapping faces in the real world. (Well, not normally.)

By Olli Niemitalo – Own work, CC0, https://commons.wikimedia.org/w/index.php?curid=18707318

You can certainly encounter overlapping voices (in a recorded conversation) or overlapping fingerprints (when two or more people touched the same item).

But there are methods to separate one biometric sample from another.

It’s a little more complicated when you’re dealing with DNA.

Distinguishing one person’s DNA from another in these mixtures, estimating how many individuals contributed DNA, determining whether the DNA is even relevant or is from contamination, or whether there is a trace amount of suspect or victim DNA make DNA mixture interpretation inherently more challenging than examining single-source samples. These issues, if not properly considered and communicated, can lead to misunderstandings regarding the strength and relevance of the DNA evidence in a case.

From the Abstract in https://doi.org/10.6028/NIST.IR.8351-draft

As some of you know, I have experience with “rapid DNA” instruments that provide a mostly-automated way to analyze DNA samples. Because these instruments are mostly automated and designed for use by non-scientific personnel, they are not able to analyze all of the types of DNA that would be analyzed by a forensic laboratory.

Therefore, this draft document is silent on the topic of rapid DNA, despite the fact that co-author Peter Vallone has years of experience in rapid DNA.

I am not a scientist, but in my view the absence of any reference to rapid DNA strongly suggests that it’s premature at this time to apply these instruments to DNA mixtures, such as rape cases in which both the assailant’s and the victim’s DNA are present in a sample.

Granted, there may be rape cases in which the DNA of the assailant may be present with no mixture.

You have to be REALLY careful before claiming that rapid DNA instruments can be used to wipe out the backlog of untested rape kits. However, rapid DNA can be used to clear less complicated DNA cases so that the laboratories can concentrate on the more complex cases.

Putting your finger on the distribution of latent prints (the 30% palm estimate)

(Part of the biometric product marketing expert series)

Back when automated fingerprint identification systems (AFIS) were originally expanded to become automated fingerprint/palmprint identification systems (AFPIS), a common rationale for the expansion was the large number of unsolved latent palmprints at crime scenes.

By Etan J. Tal – Own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=41152228

The statistic that everyone cited was that 30% of all latent friction ridge prints at crime scenes were from palmprints. Here’s a citation from the National Institute of Justice.

Anecdotally, it is estimated that approximately 30% of comparison cases involve palm impressions.

Note that the NIJ took care to include the word “anecdotally.” Others don’t.

It is estimated that 30 percent of latent prints found at crime scenes come from palms.

But who provided the initial “30% of latents are palms” estimate long ago? And what was the basis for this estimate? This critical information seems to have been lost.

By Apneet Jolly – originally posted to Flickr as Candy corn contest jar, CC BY 2.0, https://commons.wikimedia.org/w/index.php?curid=10317287

Now I don’t have a problem with imprecise estimates, provided that the assumptions behind the estimate are well-documented. I’ve done this many times myself.

But sadly, any assumptions for the “30% of latents are palms” figure have disappeared over the years, and only the percentage remains.

Is there any contemporary evidence that can be used to check the 30% estimate?

Yes.

The blind proficiency study wasn’t blind regarding the test data

Latent print quality in blind proficiency testing: Using quality metrics to examine laboratory performance. https://lib.dr.iastate.edu/csafe_pubs/84/

A Center for Statistics and Applications in Forensic Science study (downloadable here) was published earlier this year. Although the study was devoted to another purpose, it touched upon this particular issue.

The “Latent print quality in blind proficiency testing: Using quality metrics to examine laboratory performance” study obviously needed some data, so it analyzed a set of latent prints examined by the Houston Forensic Science Center (HFSC) over a multi-year period.

In the winter of 2017, HFSC implemented a blind quality control program in latent print comparison. Since its implementation, the Quality Division within the laboratory has developed and inserted 290 blind cases/requests for analysis into the latent print comparison unit as of August 4, 2020….

Of the 290 blind cases inserted into casework, we were able to obtain print images for 144 cases, with report dates spanning approximately two years (i.e., January 9, 2018 to January 8, 2020)….

In total, examiners reviewed 376 latent prints submitted as part of the 144 blind cases/requests for analysis.

So, out of those 376 latent prints, how many were from palms?

The majority of latent prints were fingerprints (94.3%; n = 350) or palm prints (4.9%; n = 18). Very few were joint impressions or unspecified impressions (0.8%; n = 3)….

The remaining 5 of 376 prints were not attributed to an anatomical source because examiners determined them to be of no comparative value and did not consider them to be latent prints.

For those who are math-challenged, 5 percent is not equal to 30 percent. In fact, 5 percent is much less than 30 percent. (And 4.9% is even less, if you want to get precise about it.)
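
If you want to check the arithmetic yourself, here is a quick sketch in Python using the counts reported in the study. (The study’s stated percentages appear to use the 371 prints attributed to an anatomical source as the denominator, i.e., 376 minus the 5 prints of no comparative value; that is my reading, not something the study spells out.)

    # Counts reported in the CSAFE/HFSC blind proficiency study.
    fingerprints = 350
    palms = 18
    joint_or_unspecified = 3
    no_value = 5          # not attributed to an anatomical source
    total = 376

    attributed = total - no_value   # 371

    # Percentages of the attributed prints (matches the quoted 94.3% / 4.9% / 0.8%).
    print(f"Fingerprints: {fingerprints / attributed:.1%}")
    print(f"Palm prints:  {palms / attributed:.1%}")
    print(f"Joint/unspec: {joint_or_unspecified / attributed:.1%}")

    # Even measured against all 376 prints, palms are only about 4.8% -- nowhere near 30%.
    print(f"Palms as a share of all prints: {palms / total:.1%}")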

Now I’ll grant that this is just one study, and other latent examinations may have wildly different percentages. At a minimum, though, this data should cause us to question the universally-accepted “30%” figure.

As any scientific institute that desires funding would proclaim, further research is needed.

And I’ll grant that. Well, I won’t grant it, but some government or private funding entity might.

How the “CSI effect” can obscure the limited role of DNA-based investigative leads

(Part of the biometric product marketing expert series)

People have been talking about the “CSI effect” for decades.

In short, the “CSI effect” is characterized as the common impression that forensic technologies can solve crimes (and must be used to solve crimes) in less than an hour, or within the time of a one-hour television show.

When taken to its extreme, juries may ask why the law enforcement agency didn’t use advanced technological tools to solve that jaywalking case.

Advanced technological tools like DNA, which has been commonly perceived to be the tool that can solve every single crime.

Well, that and video, because video is powerful enough to secure a conviction. But that’s another story.

Can DNA result in an arrest in a Denver homicide case?

A case in point is this story from KDVR entitled “DNA in murder case sits in Denver crime lab for 11 months.”

This is a simple statement of fact, and is not that surprising a statement of fact. Many crime labs are inundated with backlogs of DNA evidence and other forensic evidence that has yet to be tested. And these backlogs ARE creating difficulties in solving crimes such as rapes.

But when you read the article itself, the simple statement of fact is painted as an abrogation of responsibility on the part of law enforcement.

A father is making an emotional plea and putting up $25,000 of his own money to help find his son’s killer.

He is also asking the Problem Solvers to look into the time it has taken for DNA evidence to be tested in this case and others.

Tom O’Keefe said it’s taking too long to get answers and justice.

From this and other statements in the article, a picture emerges of an unsolved crime that can only be solved by the magical tool of DNA. If DNA is applied to this, just like they do on TV, arrests will be made and the killer will be convicted.

So why is it taking so long to do this?

Why is justice not being served?

KDVR is apparently not run by impassioned activists, but by journalists. And it is important from a journalistic perspective to get all sides of the story. Therefore, KDVR contacted the Denver Police Department for its side of the story.

The Denver Police Department has identified all parties involved, and the investigation shows multiple handguns were fired during this incident. While this complex case remains open, which limits details we can provide, we can verify that a significant amount of forensic work has been completed, but some remains. Investigators believe the pending forensic analysis can potentially support a weapon-related charge but will not further the ongoing homicide investigation.

OK, let’s grant that they’re not trying to identify an unknown assailant, since “all parties involved” are known.

But once that DNA is tested, isn’t that going to be the magic tool that provides the police with probable cause to arrest the killer?

Um, no.

Even IF the DNA evidence DOES happen to show a significant probability that an identifiable person committed the homicide, that in itself is not sufficient reason to arrest someone.

Why not?

Because you can’t arrest someone on DNA evidence alone.

DNA evidence can provide an investigative lead, but it has to be corroborated with other evidence in order to secure an arrest and a conviction. (Don’t forget that the evidence has to result in a conviction, and in most of the United States that requires that the evidence show beyond a reasonable doubt that the person committed the crime.)

Why was a serial killer in three European countries never brought to justice, despite overwhelming DNA evidence?

Reasonable schmeasonable.

If DNA ties someone to a crime, then the person committed the crime, right?

Let’s look at the story of a serial killer who terrorized Europe for over a decade, even though ample DNA evidence was found at each of the murder scenes, beginning with this one:

In 1993, a 62-year-old woman was found dead in her house in the town of Idar-Oberstein, strangled by wire taken from a bouquet of flowers discovered near her body.

Nobody had any information on what might have happened to Lieselotte Schlenger. No witnesses, no suspects, no signs of suspicious activity (except for the fact that she’d been strangled to death with a piece of wire, of course). But on a bright teacup near Schlenger, the police found DNA, the only clue to surface at all.

The case went cold, given that the only lead was the DNA of an unknown woman, and there was no match. Yet.

Eight years later, in 2001, there was a match when the same woman’s DNA was found at a murder scene of a strangulation victim in Freiburg, Germany. Police now knew that they were dealing with a serial killer.

But this time, the woman didn’t wait another eight years to strike again.

Five months after the second murder scene, her DNA showed up on a discarded heroin syringe, after a 7-year-old had stepped on it in a playground in Gerolstein. A few weeks later it showed up on an abandoned cookie in a burgled caravan near Bad Kreuznach, like she’d deliberately spat out a Jammy Dodger as a calling card. It was found in a break-in in an office in Dietzenbach, in an abandoned stolen car in Heilbronn, and on two beer bottles and a glass of wine in a burgled bar in Karlsruhe, like she’d robbed the place but stuck around for a few cheeky pints.

And her activities were not confined to Germany.

Over the apparent crime spree, her DNA was sprayed across an impressive 40 crime scenes in Austria, southern Germany, and France, including robberies, armed robberies, and murders.

In 2009, the case took an even more bizarre turn.

Police in France had discovered the burned body of a man, believed to be from an asylum seeker who went missing in 2002. During his application, the man had submitted fingerprints, which the police used to try and confirm his identity. Only, once again, they found the DNA of the phantom.

“Obviously that was impossible, as the asylum seeker was a man and the Phantom’s DNA belonged to a woman,” a spokesperson for the Saarbrücken public prosecutor’s office told Spiegel Online in 2009.

But how could this be?

DNA evidence had tied the woman, or man, or whatever, to six murders and numerous other crimes. There was plenty of evidence to identify the criminal.

What went wrong?

Well, in 2009 police finally figured out how DNA evidence had ended up at all of these crime scenes in three countries.

The man’s death led to an explanation of the case: there was no serial killer, and the DNA could be traced to a woman working in a packing center specializing in medical supplies. It was all down to DNA contamination.

Well, couldn’t that packing woman be convicted of the serial murders and other crimes, based upon the DNA evidence?

No, because there was no other evidence linking the woman to the crimes, and there was certainly “reasonable doubt” (or the European criminal justice equivalent), given that the same DNA had supposedly come from the dead male asylum seeker.

This is why DNA is only an investigative lead, and not evidence in and of itself.

But the Innocence Project always believes that DNA is authoritative evidence, right?

Even those who champion the use of DNA admit this.

If you look through the files of people exonerated by the Innocence Project, you find a common thread in many of them.

Much of the evidence gathered before the suspect’s original conviction indicated that the suspect was NOT the person who committed the crime. Maybe the family members testified that the suspect was at home the entire time and couldn’t have committed the crime in question. Or maybe the suspect was in another city.

However, some piece of evidence was so powerful that the person was convicted anyway. Perhaps it was eyewitness testimony, or perhaps something else, but in the end the suspect was convicted.

Eventually the Innocence Project got involved, and subsequent DNA testing indicated that the suspect was NOT the person who committed the crime.

This in and of itself didn’t PROVE that the person was innocent, but the DNA test aligned with much of the other evidence that had previously been collected. It was enough to cast a reasonable doubt on the conviction, allowing the improperly convicted suspect to go free.

But there are some cases in which the Innocence Project says that even DNA evidence is not to be trusted.

Negligence in the Baltimore Police Department’s crime lab tainted DNA analysis in an unknown number of criminal cases for seven years and raises serious questions about other forensic work in the lab, the Innocence Project said today in a formal allegation that the state is legally required to investigate.

DNA contamination, the same thing that caused the issues in Europe, also caused issues in Baltimore.

And there may be other explanations for how a person’s DNA ended up at a crime scene. Perhaps a police officer was careless and left his or her DNA at a crime scene. Perhaps someone was at a crime scene and left DNA evidence, even though that person had nothing to do with the crime.

In short, a high probability DNA match, in and of itself, proves nothing.

Investigative leads and reasonable doubt are very important considerations, even if they don’t fit into a one-hour TV show script.

Investigative leads and DNA booking stations

(Part of the biometric product marketing expert series)

A July Bredemarket post on Facebook has garnered some attention in September.

I wanted to answer some questions about rapid DNA use in a booking station, how (and when) DNA is used in booking (arrests), what an “investigative lead” is, and whether acquiring DNA at booking is Constitutional.

(TL;DR on the last question is “yes,” per Maryland v. King.)

Are rapid DNA booking stations a Big Brother plot?

The post in question was a Facebook post to the Bredemarket Identity Firm Services Facebook group. I posted this way back in July, when Thermo Fisher Scientific became the second rapid DNA vendor (of two rapid DNA vendors; ANDE is the other) whose system was approved by the U.S. Federal Bureau of Investigation (FBI) for use as a law enforcement booking station.

When I shared this on Facebook, I received some concerned comments:

“Big brother total control”

“Is this Constitutional??? Will the results of this test hold up in courtrooms???”

I’ll address the second question later: not just in regard to rapid DNA, but to DNA in general. At this point, however, I will go ahead and say that the use of rapid DNA in booking was authorized legislatively by the Rapid DNA Act of 2017. This was followed by over three years of procedural stuff until rapid DNA booking station use was authorized this year.

To accurately state what “rapid DNA booking station use” actually means, let me refer to the FBI’s language, starting with the purpose:

The FBI Laboratory Division has been working with the FBI Criminal Justice Information Services (CJIS) Division and the CJIS Advisory Policy Board (CJIS APB) Rapid DNA Task Force to plan the effective integration of Rapid DNA into the booking station process.

By way of definition, a “booking station” is a computer that processes individuals who are “booked,” or arrested. The FBI’s plan was that (when authorized by federal, state, or local law) when an arrested individual’s fingerprints were captured, the individual’s DNA would be captured at the same time. (Again, only when authorized.)

The use of the term “reference sample buccal (cheek) swab” is intentional. The FBI’s current development and validation efforts have been focused on the DNA samples obtained from known individuals (e.g., persons under arrest). Because known reference samples are taken directly from the individual, they contain sufficient amounts of DNA, and there are no mixed DNA profiles that would require a scientist to interpret them. For purposes of uploading or searching CODIS, Rapid DNA systems are not authorized for use on crime scene samples.

“CODIS,” by the way, is the Combined DNA Index System, a combination of federal, state, and local systems.

“Rapid DNA” is an accelerated, automated DNA method that can process DNA samples in less than two hours, as opposed to the more traditional DNA processes that can take a lot longer.

The FBI is NOT ready to use rapid DNA to solve crimes, although some local police agencies have chosen to do so. And until February of this year, the FBI was not ready to use rapid DNA in the booking process either.

So what has been authorized?

The Bureau recognizes that National DNA Index System (NDIS) approval of the Rapid DNA Booking Systems and training of law enforcement personnel using the approved systems are integral to ensuring that Rapid DNA is used in a manner that maintains the quality and integrity of CODIS and NDIS.

Rapid DNA Booking System(s) approved for use at NDIS by a law enforcement booking station are listed below.

ANDE 6C Series G (effective February 1, 2021)

RapidHIT™ ID DNA Booking System v1.0 (effective July 1, 2021) 

If you read the FBI rapid DNA page, you can find links to a number of forensic, security, and other standards that have to be followed when using rapid DNA in a booking environment.

But those aren’t the only restrictions on rapid DNA use.

Can ANY law enforcement agency use rapid DNA in booking?

Um, no.

According to the National Conference of State Legislatures (2013; see PDF), not all states authorize the taking of DNA after an arrest. As of 2013, 20 states did NOT allow the taking of DNA from individuals who had been arrested but not convicted. And of the 30 remaining states, some (such as Connecticut) only allowed taking of DNA for “serious felonies,” some (such as California) for all felonies, and various mixtures in between. Oklahoma, for example, only allowed taking of DNA for “aliens unlawfully present under federal immigration law.”

Now, of course, a rogue police officer could take your DNA when not legally authorized to do so. Then again, a rogue restaurant employee could put laxatives in your food; that doesn’t mean we outlaw laxatives.

An “investigative lead”

So let’s say that you’re arrested for a crime, and your state allows the taking of DNA for your crime at arrest, and your local law enforcement agency has a rapid DNA instrument.

Now let’s assume that your DNA is searched against a DNA database of unsolved crimes, and your DNA matches a sample from another crime. What happens next?

If there is a match, police will likely want to take a closer look.

Wait a minute. There’s a DNA match! Doesn’t that mean that the police can swoop in and arrest the individual, and the individual is immediately convicted?

Um, no. Stop trusting your TV.

It takes more than DNA to convict a person of a crime.

While DNA can provide an investigative lead, DNA in and of itself is not sufficient to convict an individual. The DNA evidence usually has to be supported by additional evidence.

Especially since there may be other explanations of how the DNA got there.

In 2011, Adam Scott’s DNA matched with a sperm sample taken from a rape victim in Manchester—a city Scott, who lived more than 200 miles away, had never visited. Non-DNA evidence subsequently cleared Scott. The mixup was due to a careless mistake in the lab, in which a plate used to analyze Scott’s DNA from a minor incident was accidentally reused in the rape case.

Then there’s the uncomfortable and inconvenient truth that any of us could have DNA present at a crime scene—even if we were never there. Moreover, DNA recovered at a crime scene could have been deposited there at a time other than when the crime took place. Someone could have visited beforehand or stumbled upon the scene afterward. Alternatively, their DNA could have arrived via a process called secondary transfer, where their DNA was transferred to someone else, who carried it to the scene.

But there is a DNA case that was (originally) puzzling. Actually, a whole bunch of DNA cases.

There is an interesting case, known as the Phantom of Heilbronn, that dates from 1993 in Austria, France and Germany. From that year the DNA of an unknown female was detected at crime scenes in those countries, including at six murder scenes, one of the victims being a female police officer from Heilbronn, Germany. Between 1993 and March 2009 the woman’s DNA was detected at 40 crime scenes which ranged from murder to burglaries and robberies. The DNA was found on items ranging from a biscuit to a heroin syringe to a stolen car.

Then it got really weird.

In March 2009 investigators discovered the same DNA on the burned body of a male asylum-seeker in France. Now this presented something of an anomaly: the corpse was male but the DNA was of a female.

You guessed it; it was the swabs themselves that were contaminated.

So a DNA match is just the start of an investigative process, but it could provide the investigative lead that eventually leads to the conviction of an individual.

Perhaps you’ve noticed that I use the phrase “investigative lead” a lot when talking about DNA and about facial recognition. Trust me, it’s important.

But is the taking of DNA at booking Constitutional?

Obviously this is a huge question, because technical ability to do something does not automatically mean that you are Constitutionally authorized to do so. There is, after all, Fourth Amendment language protecting us against “unreasonable searches and seizures.”

Is the taking of DNA from arrestees who have not been convicted (assuming state law allows it) reasonable, or unreasonable?

Alonzo Jay King, Jr. had a vested interest in this question.

Alonzo Jay King Jr…was arrested in 2009 on assault charges. Before he was convicted of that crime, police took a DNA sample pursuant to Maryland’s new law allowing for such collections at the time of arrest in certain offenses….

I want to pause right here to make sure that the key point is highlighted. King, an arrestee who had not been convicted at the time of any crime, was compelled to provide evidence. At the time of arrest, collection of certain types of evidence (such as fingerprints) is “reasonable.” But collection of certain other types of evidence (such as a forced confession) is “unreasonable.”

So King’s DNA was taken and was searched against a Maryland database of DNA from unsolved crimes. You won’t believe what happened next! (Actually, you will.)

The DNA matched a sample from an unsolved 2003 rape case, and Mr. King was convicted of that crime.

Sentenced to life in prison, actually.

Wicomico County Assistant State’s Attorney Elizabeth L. Ireland said she requested the court impose a life sentence on King, not only because of his past criminal convictions, but also because it turned out that he was a friend of the victim’s family. She said this proved King was a continuing danger to the community.

Before you say, “well, if he was the rapist, he should be imprisoned, legal niceties notwithstanding,” think of the implications of that statement. The entire U.S. legal system is based upon the premise that it is better for a guilty person to mistakenly go free than for an innocent person to mistakenly be punished.

And if that doesn’t sink in…what if YOU were arrested and convicted unlawfully? What if a plate analyzing YOUR DNA wasn’t cleaned properly, and you were unjustly convicted of rape? Or what if a confession were coerced from YOU, and used to convict you?

So King’s question was certainly important, regardless of whether or not he actually committed the rape for which he was convicted.

King therefore appealed on Fourth Amendment grounds, the Maryland Court of Appeals overturned his conviction (PDF), and the State of Maryland brought the case to the U.S. Supreme Court in 2013 (Maryland v. King). In a close 5-4 decision (PDF) in which both conservatives and liberals were on both sides of the argument, the Court ruled that the taking of DNA from arrestees WAS Constitutional.

But that wasn’t the end of the argument, because a new case arose in the state of California. But the California Supreme Court ruled in 2018 that the practice was allowed in that state.

So the taking of DNA at booking is not only authorized (in some states, for some charges), it’s also Constitutional. (Although the Supreme Court’s opinion is still widely debated.)

So anyone who gets arrested for a felony in my home state of California should be ready for a buccal (cheek) swab.

Faulty “journalism” conclusions: the Israeli “master faces” study DIDN’T test ANY commercial biometric algorithms

(Part of the biometric product marketing expert series)

Modern “journalism” often consists of reprinting a press release without subjecting it to critical analysis. Sadly, I see a lot of this in publications, including both biometric and technology publications.

This post looks at the recently announced master faces study results, the datasets used (and the datasets not used), the algorithms used (and the algorithms not used), and the (faulty) conclusions that have been derived from the study.

Oh, and it also informs you of a way to make sure that you don’t make the same mistakes when talking about biometrics.

Vulnerabilities from master faces

In facial recognition, there is a concept called “master faces” (similar concepts can be found for other biometric modalities). The idea behind master faces is that such data can potentially match against MULTIPLE faces, not just one. This is similar to a master key that can unlock many doors, not just one.

This can conceivably happen because facial recognition algorithms do not match faces to faces, but match derived features from faces to derived features from faces. So if you can create the right “master” feature set, it can potentially match more than one face.
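
To make the “derived features” point concrete, here is a minimal Python sketch of the matching mechanics (not any researcher’s or vendor’s actual code). It assumes face templates are fixed-length feature vectors, as dlib- and FaceNet-style models produce; the data, threshold, and function names are hypothetical placeholders. The sketch shows only how one candidate vector can be scored against many enrolled templates, not how a master face would actually be constructed.

```python
# Illustrative sketch only: how ONE candidate "master" feature vector can be
# scored against MANY enrolled templates. Assumes embeddings (e.g., 128-
# dimensional vectors from a dlib/FaceNet-style model) have already been
# extracted; the data and the 0.6 threshold are hypothetical placeholders.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face feature vectors (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def coverage(candidate: np.ndarray, gallery: np.ndarray, threshold: float = 0.6) -> float:
    """Fraction of gallery templates that the candidate vector matches."""
    scores = [cosine_similarity(candidate, template) for template in gallery]
    return sum(s >= threshold for s in scores) / len(scores)

# Hypothetical data: 1,000 enrolled 128-dimensional templates and one candidate.
rng = np.random.default_rng(42)
gallery = rng.normal(size=(1000, 128))
candidate_master = rng.normal(size=128)

print(f"Candidate matches {coverage(candidate_master, gallery):.1%} of the gallery")
```

The master faces research essentially searches for a candidate vector that drives a coverage number like this one as high as possible. Whether that search succeeds against a particular commercial algorithm’s templates is exactly the question addressed below.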

However, this is not just a concept. It’s been done, as Biometric Update informs us in an article entitled ‘Master faces’ make authentication ‘extremely vulnerable’ — researchers.

Ever thought you were being gaslighted by industry claims that facial recognition is trustworthy for authentication and identification? You have been.

The article goes on to discuss an Israeli research project that demonstrated some true “master faces” vulnerabilities. (Emphasis mine.)

One particular approach, which they write was based on Dlib, created nine master faces that unlocked 42 percent to 64 percent of a test dataset. The team also evaluated its work using the FaceNet and SphereFace, which like Dlib, are convolutional neural network-based face descriptors.

They say a single face passed for 20 percent of identities in Labeled Faces in the Wild, an open-source database developed by the University of Massachusetts. That might make many current facial recognition products and strategies obsolete.

Sounds frightening. After all, the study not only used dlib, FaceNet, and SphereFace, but also made reference to a test set from Labeled Faces in the Wild. So it’s obvious why master faces techniques might make many current facial recognition products obsolete.

Right?

Let’s look at the datasets

It’s always more impressive to cite an authority, and citations of the University of Massachusetts’ Labeled Faces in the Wild (LFW) are no exception. After all, this dataset has been used for some time to evaluate facial recognition algorithms.

But what does Labeled Faces in the Wild say about…itself? (I know this is a long excerpt, but it’s important.)

DISCLAIMER:

Labeled Faces in the Wild is a public benchmark for face verification, also known as pair matching. No matter what the performance of an algorithm on LFW, it should not be used to conclude that an algorithm is suitable for any commercial purpose. There are many reasons for this. Here is a non-exhaustive list:

Face verification and other forms of face recognition are very different problems. For example, it is very difficult to extrapolate from performance on verification to performance on 1:N recognition.

Many groups are not well represented in LFW. For example, there are very few children, no babies, very few people over the age of 80, and a relatively small proportion of women. In addition, many ethnicities have very minor representation or none at all.

While theoretically LFW could be used to assess performance for certain subgroups, the database was not designed to have enough data for strong statistical conclusions about subgroups. Simply put, LFW is not large enough to provide evidence that a particular piece of software has been thoroughly tested.

Additional conditions, such as poor lighting, extreme pose, strong occlusions, low resolution, and other important factors do not constitute a major part of LFW. These are important areas of evaluation, especially for algorithms designed to recognize images “in the wild”.

For all of these reasons, we would like to emphasize that LFW was published to help the research community make advances in face verification, not to provide a thorough vetting of commercial algorithms before deployment.

While there are many resources available for assessing face recognition algorithms, such as the Face Recognition Vendor Tests run by the USA National Institute of Standards and Technology (NIST), the understanding of how to best test face recognition algorithms for commercial use is a rapidly evolving area. Some of us are actively involved in developing these new standards, and will continue to make them publicly available when they are ready.

So there are a lot of disclaimers in that text.

  • LFW is a 1:1 test, not a 1:N test. Therefore, while it can test how one face compares to another face, it cannot test how one face compares to a database of faces. The usual law enforcement use case is to compare a single face (for example, one captured from a video camera) against an entire database of known criminals. That’s a computationally different exercise from comparing a crime scene face against a single criminal face, then against a second criminal face, and so forth. (See the sketch after this list for the difference.)
  • The people in the LFW database are not necessarily representative of the world population, the population of the United States, the population of Massachusetts, or any population at all. So you can’t conclude that a master face that matches against a bunch of LFW faces would match against a bunch of faces from your locality.
  • Captured faces exhibit a variety of quality levels. A face image captured by a camera three feet from you at eye level in good lighting will differ from a face image captured by an overhead camera in poor lighting. LFW doesn’t have a lot of these latter images.
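
To illustrate the 1:1 versus 1:N distinction drawn in the list above, here is a minimal Python sketch. It is not any vendor’s actual API; the scoring function, threshold, and data are hypothetical placeholders, and templates are again assumed to be fixed-length feature vectors.

```python
# Minimal sketch of the 1:1 vs. 1:N distinction; not any vendor's actual API.
# Assumes face templates are fixed-length feature vectors; the scoring
# function, threshold, and data below are all hypothetical placeholders.
import numpy as np

def score(a: np.ndarray, b: np.ndarray) -> float:
    """Comparison score between two face templates (higher = more similar)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(probe: np.ndarray, claimed: np.ndarray, threshold: float = 0.6) -> bool:
    """1:1 verification: does the probe match ONE claimed identity's template?"""
    return score(probe, claimed) >= threshold

def identify(probe: np.ndarray, gallery: np.ndarray, top_k: int = 5) -> list[int]:
    """1:N identification: rank the probe against EVERY template in the gallery."""
    scores = np.array([score(probe, template) for template in gallery])
    best_first = np.argsort(scores)[::-1][:top_k]
    return [int(i) for i in best_first]  # indices of the best candidates

# Hypothetical data: one probe template and 10,000 enrolled templates.
rng = np.random.default_rng(0)
gallery = rng.normal(size=(10_000, 128))
probe = rng.normal(size=128)

print("1:1 decision:", verify(probe, gallery[0]))
print("1:N candidate list:", identify(probe, gallery))
```

Note that production 1:N systems don’t simply loop over the gallery as this sketch does; they rely on indexing and candidate-list strategies that make large-scale search a genuinely different engineering problem, which is why strong 1:1 results on LFW say little about 1:N performance.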

I should mention one more thing about LFW. The researchers allow testers to access the database itself, essentially making LFW an “open book test.” And as any student knows, if a test is open book, it’s much easier to get an A on the test.

By MCPearson – Own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=25969927

Now let’s take a look at another test that the LFW folks themselves mentioned: NIST’s Face Recognition Vendor Test (FRVT).

This is actually a series of tests that has evolved over the years; NIST is now conducting ongoing tests for both 1:1 and 1:N (unlike LFW, which only conducts 1:1 testing). This is important because most of the large-scale facial recognition commercial applications that we think about are 1:N applications (see my example above, in which a facial image captured at a crime scene is compared against an entire database of criminals).

In addition, NIST uses multiple data sets that cover a number of use cases, including mugshots, visa photos, and faces “in the wild” (i.e. not under ideal conditions).

It’s also important to note that NIST’s tests are intended to benefit research, and do not necessarily indicate that a particular algorithm that performs well for NIST will perform well in a commercial implementation. (If the algorithm is even available in a commercial implementation: some of the algorithms submitted to NIST are research-only algorithms that never made it into a production system.) For the difference between testing an algorithm in a NIST test and testing an algorithm in a production system, please see Mike French’s LinkedIn article on the topic. (I’ve cited this article before.)

With those caveats, I will note that NIST’s FRVT tests are NOT open book tests. Vendors and other entities give their algorithms to NIST, NIST tests them, and then NIST tells YOU what the results were.

So perhaps it’s more robust than LFW, but it’s still a research project.

Let’s look at the algorithms

Now that we’ve looked at two test datasets, let’s look at the algorithms themselves and evaluate the claim that results for the three algorithms Dlib, FaceNet, and SphereFace can naturally be extrapolated to ALL facial recognition algorithms.

This isn’t the first time that we’ve seen such an attempt at extrapolation. After all, the MIT Media Lab’s Gender Shades study (which evaluated neither 1:1 nor 1:N use cases, but algorithmic gender classification, with error rates broken down by skin type) used only three algorithms. Yet the popular media conclusion from this study was that ALL facial recognition algorithms are racist.

Compare this with NIST’s subsequent study, which evaluated 189 algorithms specifically for 1:1 and 1:N use cases. While NIST did find some race/sex differences in algorithms, these were not universal: “Tests showed a wide range in accuracy across developers, with the most accurate algorithms producing many fewer errors.”

In other words, just because an earlier test of three algorithms demonstrated demographic differences in gender classification, that doesn’t mean that the current crop of hundreds of algorithms will necessarily demonstrate issues in identifying individuals.

So let’s circle back to the master faces study. How do the results of this study affect “current facial recognition products”?

The answer is “We don’t know.”

Has the master faces experiment been duplicated against the leading commercial algorithms tested on Labeled Faces in the Wild? Apparently not.

Has the master faces experiment been duplicated against the leading commercial algorithms tested by NIST? Well, let’s look at the various ways you can define the “leading” commercial algorithms.

For example, here’s the view of the test set that IDEMIA would want you to see: the 1:N test sorted by the “Visa Border” column (results as of August 6, 2021):

And here’s the view of the test set that Paravision would want you to see: the 1:1 test sorted by the “Mugshot” column (results as of August 6, 2021):

From https://pages.nist.gov/frvt/html/frvt11.html as of August 6, 2021.

Now you can play with the sort order in many different ways, but the question remains: have the Israeli researchers, or anyone else, performed a “master faces” test (preferably a 1:N test) on the IDEMIA, Paravision, SenseTime, NtechLab, AnyVision, or ANY other commercial algorithm?

Maybe a future study WILL conclude that even the leading commercial algorithms are vulnerable to master face attacks. However, until such studies are actually performed, we CANNOT conclude that commercial facial recognition algorithms are vulnerable to master face attacks.

So naturally journalists approach the results critically…not

But I’m sure that people are going to make those conclusions anyway.

From https://xkcd.com/386/. Attribution-NonCommercial 2.5 Generic (CC BY-NC 2.5).

Does anyone even UNDERSTAND these studies? (Or do they choose NOT to understand them?)

How can you avoid the same mistakes when communicating about biometrics?

As you can see, people often write about biometric topics without understanding them fully.

Even biometric companies sometimes have difficulty communicating about biometric topics in a way that laypeople can understand. (Perhaps that’s the reason why people misconstrue these studies and conclude that “all facial recognition is racist” and “any facial recognition system can be spoofed by a master face.”)

Are you about to publish something about biometrics that requires a sanity check? (Hopefully not literally, but you know what I mean.)

Well, why not turn to a biometric content marketing expert? Use the identity/biometric blog expert to write your blog post, the identity/biometric case study expert to write your case study, or the identity/biometric white paper expert to…well, you get the idea. (And all three experts are the same person!)

Bredemarket offers over 25 years of experience in biometrics that can be applied to your marketing and writing projects.

If you don’t have a content marketing project now, you can still subscribe to my Bredemarket Identity Firm Services LinkedIn page or my Bredemarket Identity Firm Services Facebook group to keep up with news about biometrics (or about other authentication factors; biometrics isn’t the only one). Or scroll down to the bottom of this blog post and subscribe to my Bredemarket blog.

If my content creation process can benefit your biometric (or other technology) marketing and writing projects, contact me.