Three Ways in Which My Identity/Biometric Experience Exhibits My “Bias”

Yeah, I’m still focused on that statement:

“I think too much knowledge is actually bad in tech: you’re biased.”

Why does this quote affect me so deeply? Because with my 30-plus years of identity/biometric experience, I obviously have too much knowledge of the industry, which is obviously bad. After all, all a biometric company needs is a salesperson, an engineer, an African data labeler, and someone to run the generative AI for everything else. The company doesn’t need someone who knows that Printrak isn’t spelled with a C.

Google Gemini.

In this post I will share three of the “biases” I have developed in my 30-plus years in identity and biometrics, and how to correct these biases by stripping away that 20th century experience and applying novel thinking.

And if that last paragraph made you throw up in your mouth…read to the end of the post.

But first, let’s briefly explore these three biases that I shamefully hold due to my status as a biometric product marketing expert:

  1. Independent algorithmic confirmation is valuable.
  2. Process is valuable.
  3. Artificial intelligence is merely a tool.
Biometric product marketing expert.

Bias 1: Independent Algorithmic Confirmation is Valuable

Biometric products need algorithms to encode and match the biometric samples, and ideally to detect presentation and injection attacks.

But how do prospects know that these algorithms work? How accurate are they? How fast are they? How secure are they?

My bias

My brain, embedded with over 30 years of bias, gravitates to the idea that vendors should submit their algorithms for independent testing and confirmation.

From a NIST facial recognition demographic bias test.

This could be an accuracy test such as the ones NIST and DHS administer, or confirmation of presentation attack detection capabilities (as BixeLab, iBeta, and other organizations perform), or confirmation of injection attack detection capabilities.

Novel thinking

But you’re smarter than that and refuse to support the testing-industrial complex. They have their explicit or implicit agendas and want to force the biometric vendors to do well on the tests. For example, the U.S. Federal Bureau of Investigation’s “Appendix F” fingerprint capture quality standard specifically EXCLUDES contactless solutions, forcing everyone down the same contact path.

But you and your novel thinking reject these unnecessary impediments. You’re not going to constrain yourself by the assertions of others. You are going to assert your own benefits. Develop and administer your own tests. Share with your prospects how wonderful you are without going through an intermediary. That will prove your superiority…right?

Bias 2: Process is Valuable

A biometric company has to perform a variety of tasks. Raise funding. Hire people. Develop, market, propose, sell, and implement products. Throw parties.

How will the company do all these things?

My bias

My brain, encumbered by my experience (including a decade at Motorola), persists in a belief that process is the answer. The process can be as simple as scribblings on a cocktail napkin, but you need some process if you want to cash out in a glorious exit—I mean, deliver superior products to your customers.

Perhaps you need a development process that defines, among other things, how long a sprint should be. A capture and proposal process (Shipley or simpler) that defines, among other things, who has the authority to approve a $10 million proposal. A go-to-market process that defines the deliverables for different tiers, and who is responsible, accountable, consulted, and informed. Or maybe just an onboarding process when starting a new project, dictating the questions you need to ask at the beginning.

Bredemarket’s seven questions. I ask, then I act.

Novel thinking

Sure, all that process is fine…if you don’t want to do anything. Do you really want to force your people to wait two weeks for the latest product iteration? Impose a multinational bureaucracy on your sales process? Go through an onerous checklist before marketing a product?

Google Gemini.

Just code it.

Just sell it.

Just write it.

Bias 3: Artificial Intelligence is Merely a Tool

The problem with experienced people is that they think that there is nothing new under the sun.

You talk about cloud computing, and they yawn, “Sounds like time sharing.” You talk about quantum computing, and they yawn, “Sounds like the Pentium.” You talk about blockchain, and they yawn, “Sounds like a notary public.”

My bias

As I sip my Pepperidge Farm, I can barely conceal my revulsion at those who think “we use AI” is a world-dominating marketing message. Artificial intelligence is not a way of life. It is a tool. A tool that in and of itself does not merit much of a mention.

Google Gemini.

How many automobile manufacturers proclaim “we use tires” as part of their marketing messaging? Tires are essential to an automobile’s performance, but since everyone has them, they’re not a differentiator and not worthy of mention.

In the same way, everyone has AI…so why talk about its mere presence? Talk about the benefits your implementation provides and how these benefits differentiate you from your competitors.

Novel thinking

Yep, the grandpas that declare “AI is only a tool” are missing the significance entirely. AI is not like a Pentium chip. It is a transformational technology that is already changing the way we create, sell, and market.

Therefore it is critically important to highlight your product’s AI use. AI isn’t a “so what” feature, but an indication of revolutionary transformative technology. You suppress mention of AI at your own peril.

How do I overcome my biases of experience?

OK, so I’ve identified the outmoded thinking that results from too much experience. But how do I overcome it?

I don’t.

Because if you haven’t already detected it, I believe that experience IS valuable, and that all three items above are essential and shouldn’t be jettisoned for the new, novel, and kewl.

  • Are you an identity/biometric marketing leader who needs to tell your prospects that your algorithms are validated by reputable independent bodies?
  • Or that you have a process (simple or not) that governs how your customers receive your products?
  • Or that your AI actually does unique things that your competitors don’t, providing true benefits to your customers?

Bredemarket can help with strategy, analysis, content, and/or proposals for your identity/biometric firm. Talk to me (for free).

By the way, here’s MY process (and my services and pricing).

Bredemarket: Services, Process, and Pricing.

NIST Implements Color Coding to Visually Identify High False Positive Facial Recognition Values

With the exception of colorblind people, the use of colors in dashboards makes information more accessible, particularly in populations where green means “good” and red means “bad.”

(Even if your name IS Bamber.)

The National Institute of Standards and Technology understands the importance of consistent colors, having worked on traffic light colors since the National Bureau of Standards days (PDF).

For more modern applications such as biometrics, NIST recently incorporated a color coding display change to one of its tabs for the “Face Recognition Technology Evaluation (FRTE) 1:N Identification” results. Specifically, the “Demographics: False Positive Dependence” tab.

The change, announced in an email, is as follows:

“The false positive identification error rate tables now include color-coding to highlight anomalously high values.”

In this context, “anomalously high” is bad, or red. (Actually dark pink, but close enough.)

But let’s explain WHY and HOW NIST made this change.

Why does NIST highlight demographic false positive dependence?

NIST has of course explored the demographic effects of face recognition for years, and the “Demographics: False Positive Dependence” tab provides additional tracking for this.

Why does NIST do this?

“False positives occur when searches return wrong identities. Such outcomes have application-dependent consequences, which can be serious.”

Serious as in arresting and jailing innocent people. I previously mentioned Robert Williams.

How does NIST highlight demographic false positive dependence?

Anyway, NIST created the “Demographics: False Positive Dependence” tab.

“The table shows false positive identification rates (FPIR), the fraction of searches that should not return gallery entries above a threshold, but do. The threshold is set for each algorithm to give a FPIR of 0.002 (1 in 500) or less on searches of women born in Eastern Europe.”

And for algorithms that have “anomalously high values” in other demographic populations, NIST has added the color coding.

“A cell is shaded by how much larger FPIR is than that: yellow if FPIR is 20 times larger; pink if FPIR is 40 times larger; and dark pink if FPIR is 80 times larger.”
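The shading rule above can be sketched in a few lines of code. This is my own illustrative sketch, not NIST’s implementation; the reference FPIR of 0.002 and the 20x/40x/80x multipliers come from the quotes above, and the function name and the treatment of the boundaries as “at least N times larger” are my assumptions.

```python
# Illustrative sketch (not NIST's code) of the color-coding rule:
# a cell's shade depends on how many times larger its FPIR is than
# the reference FPIR of 0.002 (1 in 500).

REFERENCE_FPIR = 0.002  # threshold calibrated on the reference demographic

def cell_shade(fpir: float) -> str:
    """Return the table-cell shading for a given FPIR value."""
    ratio = fpir / REFERENCE_FPIR
    if ratio >= 80:
        return "dark pink"
    if ratio >= 40:
        return "pink"
    if ratio >= 20:
        return "yellow"
    return "none"

print(cell_shade(0.002))  # at the reference rate: no shading
print(cell_shade(0.05))   # 25 times larger: yellow
print(cell_shade(0.2))    # 100 times larger: dark pink
```

A ratio just under 20 gets no shading at all, which is worth remembering when reading the tables: an unshaded cell is not necessarily a cell at the reference rate.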

What does the highlighting look like?

Let me illustrate this with the results from the three algorithms Omnigarde submitted.

Data captured April 8, 2026. Omnigarde.
  • Omnigarde’s first two algorithms, submitted in 2023 and 2024, exhibited high FPIR values for south Asian females, and the second algorithm also exhibited a high FPIR value for east Asian females. See the color coding.
  • The third algorithm, submitted in 2025, had lower FPIR values for these populations and thus no yellow color coding.

Even the less-stellar algorithms show improvement over time.

Data captured April 8, 2026. Anonymized (but you can figure it out if you’re curious).

Final thoughts

Both vendors and customers/prospects can rightfully question whether this is helpful or hurtful. I lean toward “helpful,” because if the facial recognition algorithm you use provides high false positives for certain populations, you need to know.

And as always, law enforcement in the United States should NEVER solely rely on facial recognition results as the basis for an arrest…even for Eastern European females. They should ONLY be an investigative lead.

In the meantime, take care of yourself, and each other.

Jerry Springer. By Justin Hoch, CC BY 2.0, https://commons.wikimedia.org/w/index.php?curid=16673259.

CITeR and Combating Facial Recognition Demographic Bias

The National Institute of Standards and Technology (NIST) isn’t the only entity that is seeking to combat facial recognition demographic bias. The Center for Identification Technology Research (CITeR) is doing its part.

The Problem

NIST and other entities have documented facial recognition accuracy differences related to skin tone. This is separate from the topic of facial analysis: this relates to facial recognition, or the identification of an individual. (As a note, “Gender Shades” had NOTHING to do with facial recognition.)

It’s fair to summarize that the accuracy of an algorithm depends upon the data used to train the algorithm. For example, if an algorithm is trained entirely on Japanese people, you would expect that it would be very accurate in identifying Japanese, but less accurate in identifying Native Americans or Kenyans.

Many of the most-used facial recognition algorithms are authored by North American/European or Asian companies, and while the good ones seek to employ a broad data set for algorithm training, NIST and other results document clear demographic differences in accuracy.

The Research

The Center for Identification Technology Research (CITeR) is a consortium of universities, government agencies, and private entities. The lead entity in CITeR, Clarkson University, has initiated research on “improving equity in face recognition systems.” Clarkson is using the following methods:

  • Establish a continuous skin color metric that retains accuracy across different image acquisition environments.
  • Develop a statistical approach to measure equity, ensuring FR results fall within a precise margin of error.
  • Employ new FR systems in combination with or instead of existing measures to minimize bias of results.

In this work, Clarkson is cooperating with other entities, such as the International Organization for Standardization (ISO) and the FIDO Alliance.

The final goal is to make facial recognition usable for everyone.

Your problem

Is your identity company and its product marketers also working to reduce demographic bias? How are you telling your story? Bredemarket (the biometric product marketing expert) can help with strategic and tactical solutions for your marketing and writing needs.

Bredemarket services, process, and pricing.

If I can help your firm with analysis, content, or even proposals in this area, talk to me.

Whether You Call It ANSI/NIST-ITL 1-2025 or NIST SP 500-290e4…It’s Out

Regardless of the concerns of Europeans and others about U.S. de facto governance of biometric standards, countries around the world still base their data interchange formats on a document written by the National Institute of Standards and Technology (NIST) and approved by the American National Standards Institute.

This document has been revised many times over the years. I first worked with the 1993 version of the document, which concentrated on binary and grayscale fingerprints with resolutions as high as 500 pixels per inch, but sometimes lower.

The new 2025 version (PDF), released in March 2026, covers a lot more. And sometimes a lot less.

  • 1 Transaction information
  • 2 User-defined descriptive text
  • 3 Deprecated
  • 4 Legacy
  • 5 Deprecated
  • 6 Deprecated
  • 7 User-defined image
  • 8 Signature image
  • 9 Friction ridge metadata
  • 10 Photographic body part imagery (including face and SMT)
  • 11 Voice data
  • 12 Forensic dental and oral data
  • 13 Variable-resolution latent friction ridge image
  • 14 Variable-resolution fingerprint image
  • 15 Variable-resolution palm print image
  • 16 User-defined variable-resolution testing image
  • 17 Iris image
  • 18 DNA data
  • 19 Variable-resolution plantar image
  • 20 Source representation
  • 21 Associated context
  • 22 Non-photographic imagery
  • 23-97 Reserved for future use
  • 98 Information assurance
  • 99 CBEFF biometric data record

Note the “deprecated” and “legacy” data types. In 1993, Type 4 was the gold standard for fingerprint images; now it’s just “legacy.” And forget about binary representations or anything less than 500 ppi.

Time marches on.

But some people have been around for much of the ride. I scanned the lists of working group members and found Kenneth Blue, Tom Buss, Roland Fournier, Patrick Grother, Mike McCabe, John Splain, Mark Walch, and many others who remember Type 4 and 250 ppi binary images.

And the canvassees included government and industry representatives from within and outside of the United States, including Canada, Germany, Japan, Latvia, Slovakia, Switzerland, other countries I probably missed, and INTERPOL.

If Europe or other countries do break away from NIST standards, it will be a painful rupture.

Multiple Identifiers: I Didn’t Know What NIST SP 500-290 Was

People memorize arcane text strings when they use them a lot. But until now I never memorized NIST SP 500-290…even though it refers to a document I reference fairly constantly.

Because NIST SP 500-290 is another identifier for the documents in the ANSI/NIST-ITL 1 data exchange series…a series that goes back to 1986, before NIST technically existed.

National Bureau of Standards. We don’t need no stinking technology.
  • The first version to be tagged as a NIST Special Publication (SP) was ANSI/NIST-ITL 1-2011, also known as SP 500-290.
  • This was followed by ANSI/NIST-ITL 1-2011 Update: 2013, a/k/a SP 500-290e2 (Edition 2).
  • Next was ANSI/NIST-ITL 1-2011 Update: 2015, or SP 500-290e3.

What about Edition 4?

Come back to the Bredemarket blog in an hour.

Europe is Looking At More Than Just Biometric Testing

A little more detail, courtesy EU Brussels, regarding the policy brief published by the EU Innovation Hub for Internal Security, coordinated by eu-LISA together with the European Commission, Europol and Frontex.

As I noted earlier today, one proposal is for Europe to perform its own independent biometric testing, reducing Europe’s dependence on the American National Institute of Standards and Technology (NIST).

“The second is a centralised evaluation and testing platform connected to that repository, allowing standardised, independent and continuous assessment of biometric technologies, including benchmarking across vendors.”

But if there is a second proposal (European testing) in the cited European biometric policy brief, there must also be a first proposal—one I failed to discuss this morning.

“The first is a common EU biometric data repository containing datasets that comply with European rules, reflect the demographics and use-cases relevant to EU authorities and are stored in a secure environment.”

Makes sense. If you are going to test you need test data. And NIST has no obligation to ensure its test data complies with the General Data Protection Regulation (GDPR). The subjects in NIST test databases rarely provided the “explicit consent” mentioned in GDPR, and the “right to erasure” from a NIST database is…laughable.

Yes, it’s extremely challenging to construct a testing database that complies with GDPR.

And NIST certainly ain’t gonna do it.

Will a European entity construct it?

And if the right to erasure is maintained, how will you maintain historical consistency of test results?

Why Would Europe Perform Its Own Biometric Testing?

I’ve seen two articles about a possible move by Europe to set up a Europe-wide biometric testing agency, bypassing the need for National Institute of Standards and Technology (NIST) biometric testing.

One reason is that a European-controlled testing methodology can incorporate European regulations, such as the General Data Protection Regulation (GDPR).

A second related reason for Europe to bypass NIST biometric testing is that U.S. government agencies, including NIST and the Federal Bureau of Investigation (FBI), naturally place prime importance on American interests.

Remember when the U.S. House of Representatives Select Committee on the Chinese Communist Party complained that the FBI Certified Products List contained Chinese biometric vendors (the Certified Communist Products List)?

  • Wait until they discover all the Chinese companies that participate in NIST testing.
  • And wait until someone in the legislative or executive branches decides that the FBI or NIST shouldn’t list products from other countries deemed unfriendly to the United States. Denmark? Germany? France?

For these reasons, Europe may be compelled to set up its own biometric testing organization.

And so may China.

On DOJ/DoD/DHS ABIS Interoperability

The image at the top of this post was taken from the NIST website and is from an interoperability slide in a 2016 FBI presentation, although the reference to “IAFIS” suggests that the image was created long before 2016. No NGI, and no HART either.

Because—while this may make some uncomfortable—biometric interoperability between the Departments of Defense, Homeland Security, and Justice is critically important.

For years after 9/11, the (then) systems from the three Departments were NOT interoperable.

Which made it difficult to identify if a military person or citizenship applicant was a criminal.

Today, while the three current systems use three different data interchange standards (based upon work by NIST), they CAN talk to each other.

We just have to ensure that the interoperability is legal and proper.

You CAN Modernize…But Should You?

In the past, I have said:

“[T]he technology is easy. The business part is the difficult part.”

But Chris Burt of Biometric Update phrased it more succinctly:

“[P]olicy chases modernization”

As Burt notes, examples of policy chasing modernization include:

  • Digital sovereignty, a topic of discussion with everyone from ID4Africa to an organization called the World Ethical Data Foundation. (As an aside, a Bredemarket client and I were recently discussing the pros and cons of managing digital identities in the cloud vs. peer-to-peer synchronization.)
  • Cybersecurity and digital identity, a topic of discussion in government (the White House, NIST) and industry (Jordan Burris of Socure).
  • Other topics, including police facial recognition policy. (Hmm…I recall that both government and vendor biometric policies were the topic of a Biometric Update guest article last year.)

All of you recall Pandora’s Box. I’ve used the story multiple times, including when discussing my creation of Bredebot and its nearly-instantaneous hallucinations. Yes, I do have “policies” regarding this “modernization,” including full disclosure.

But are policies enough?

Returning to Lattice Identity

The last time I delved into lattices, it was in connection with the NIST FIPS 204 Module-Lattice-Based Digital Signature Standard. To understand why the standard is lattice-based, I turned to NordVPN:

“A lattice is a hierarchical structure that consists of levels, each representing a set of access rights. The levels are ordered based on the level of access they grant, from more restrictive to more permissive.”

In essence, the lattice structure allows more elaborate access rights.
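The idea in the NordVPN quote can be sketched in a few lines. This is my own minimal example, not anything from the article: hypothetical level labels, ordered from more restrictive to more permissive, with a check that a subject’s level dominates a resource’s level. (Strictly speaking, a simple ordered chain like this is a degenerate lattice; a full lattice also permits incomparable levels, such as compartments.)

```python
# A minimal sketch (hypothetical labels, my own example) of
# lattice-style access levels ordered from more restrictive to
# more permissive, as described in the quote above.

LEVELS = ["public", "internal", "confidential", "secret"]

def can_access(subject_level: str, resource_level: str) -> bool:
    """A subject may read a resource at or below its own level."""
    return LEVELS.index(subject_level) >= LEVELS.index(resource_level)

print(can_access("confidential", "internal"))  # True
print(can_access("internal", "secret"))        # False
```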

This article (“Lattice-Based Identity and Access Management for AI Agents”) discusses lattices more. Well, not explicitly; the word “lattice” only appears in the title. But here is the article’s main point:

“We are finally moving away from those clunky, “if-this-then-that” systems. The shift to deep learning means agents can actually reason through a mess instead of just crashing when a customer uses a slang word or a shipping invoice is slightly blurry.”

It then says:

“Deep learning changes this because it uses neural networks to understand intent, not just keywords.”

Hmm…intent? Sounds a little like understanding the “why”…or maybe it’s just me.

But it appears that we sometimes don’t care about the intent of AI agents.

“If you gave a new employee the keys to your entire office and every filing cabinet on day one, you’d be sweating, right? Yet, that is exactly what many companies do with ai agents by just slapping an api key on them and hoping for the best.”

This is not recommended. See my prior post on attribute-based access control, which led me to focus more on non-person entities (non-human identities).

As should we all.