The Monk Skin Tone Scale

(Part of the biometric product marketing expert series)

Now that I’ve dispensed with the first paragraph of Google’s page on the Monk Skin Tone Scale, let’s look at the meat of the page.

I believe we all agree on the problem: the need to measure the accuracy of facial analysis and facial recognition algorithms for different populations. For purposes of this post, we will concentrate on one proxy for race: a person’s skin tone.

Why skin tone? Because we have hypothesized (I believe correctly) that the performance of facial algorithms is influenced by the skin tone of the person, not by whether or not they are Asian or Latino or whatever. Don’t forget that the designated races have a variety of skin tones within them.

But how many skin tones should one use?

40 point makeup skin tone scale

The beauty industry has identified over 40 different skin tones for makeup, but such a granular approach would overwhelm a machine learning evaluation:

[L]arger scales like these can be challenging for ML use cases, because of the difficulty of applying that many tones consistently across a wide variety of content, while maintaining statistical significance in evaluations. For example, it can become difficult for human annotators to differentiate subtle variation in skin tone in images captured in poor lighting conditions.

6 point Fitzpatrick skin tone scale

The first attempt at categorizing skin tones was the Fitzpatrick system.

To date, the de-facto tech industry standard for categorizing skin tone has been the 6-point Fitzpatrick Scale. Developed in 1975 by Harvard dermatologist Thomas Fitzpatrick, the Fitzpatrick Scale was originally designed to assess UV sensitivity of different skin types for dermatological purposes.

However, using this skin tone scale led to….(drumroll)…bias.

[T]he scale skews towards lighter tones, which tend to be more UV-sensitive. While this scale may work for dermatological use cases, relying on the Fitzpatrick Scale for ML development has resulted in unintended bias that excludes darker tones.

10 point Monk Skin Tone (MST) Scale

Enter Dr. Ellis Monk, whose biography could be ripped from today’s headlines.

Dr. Ellis Monk—an Associate Professor of Sociology at Harvard University whose research focuses on social inequalities with respect to race and ethnicity—set out to address these biases.

If you’re still reading this and haven’t collapsed in a rage of fury, here’s what Dr. Monk did.

Dr. Monk’s research resulted in the Monk Skin Tone (MST) Scale—a more inclusive 10-tone scale explicitly designed to represent a broader range of communities. The MST Scale is used by the National Institute of Health (NIH) and the University of Chicago’s National Opinion Research Center, and is now available to the entire ML community.

From https://skintone.google/the-scale.
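To make the machine learning evaluation use case concrete, here is a minimal sketch of how one might measure a facial analysis algorithm’s accuracy separately for each of the ten MST tones. The records, field names, and the smiling/not-smiling task are my own illustrative assumptions, not Google’s or Dr. Monk’s actual evaluation code.

```python
from collections import defaultdict

# Hypothetical evaluation records: each annotated with an MST tone (1-10),
# the algorithm's prediction, and the ground-truth label.
records = [
    {"mst_tone": 1, "predicted": "smiling", "actual": "smiling"},
    {"mst_tone": 7, "predicted": "not_smiling", "actual": "smiling"},
    {"mst_tone": 10, "predicted": "smiling", "actual": "smiling"},
    # ... many more records in a real evaluation
]

def accuracy_by_mst_tone(records):
    """Return per-tone accuracy so gaps across the 10-point scale are visible."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for r in records:
        total[r["mst_tone"]] += 1
        correct[r["mst_tone"]] += int(r["predicted"] == r["actual"])
    return {tone: correct[tone] / total[tone] for tone in sorted(total)}

print(accuracy_by_mst_tone(records))
```

The point of a 10-tone scale is that each bucket can still hold enough samples to remain statistically meaningful, something a 40-plus-tone scale makes difficult.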

Where is the MST Scale used?

According to Biometric Update, iBeta has developed a demographic bias test based upon ISO/IEC 19795-10, which itself incorporates the Monk Skin Tone Scale.

At least for now. Biometric Update notes that other skin tone measurements are under development, including the “Colorimetric Skin Tone (CST)” and INESC TEC/Fraunhofer Institute research that uses “ethnicity labels as a continuous variable instead of a discrete value.”

But will there be enough data for variable 8.675309?

What “Gender Shades” Was Not

Mr. Owl, how many licks does it take to get to the Tootsie Roll center of a Tootsie Pop?

A good question. Let’s find out. One, two, three…(bites) three.

From YouTube.

If you think Mr. Owl’s conclusion was flawed, let’s look at Google.

One, two, three…three

I was researching the Monk Skin Tone Scale for a future Bredemarket blog post, but before I share that post I have to respond to an inaccurate statement from Google.

Google began its page “Developing the Monk Skin Tone Scale” with the following statement:

In 2018, the pioneering Gender Shades study demonstrated that commercial, facial-analysis APIs perform substantially worse on images of people of color and women.

Um…no it didn’t.

I will give Google props for using the phrase “facial-analysis,” which clarifies that Gender Shades was an exercise in categorization, not individualization.

But to say that Gender Shades “demonstrated that commercial, facial-analysis APIs perform substantially worse” in certain situations is an ever-so-slight exaggeration.

Kind of like saying that a bad experience at a Mexican restaurant in Lusk, Wyoming, demonstrates that all Mexican restaurants are bad.

How? I’ve said this before:

The Gender Shades study evaluated only three algorithms: one from IBM, one from Microsoft, and one from Face++. It did not evaluate the hundreds of other facial recognition algorithms that existed in 2018 when the study was released.

So the conclusion that all facial classification algorithms perform substantially worse cannot be supported…because in 2018 the other algorithms weren’t tested.

One, two, three…one hundred and eighty nine

In 2019, NIST tested 189 software algorithms from 99 developers for demographic bias, and it has continued such testing ever since.

In these tests, vendors volunteer to have NIST test their algorithms for demographic bias.
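If you’re wondering what a demographic bias test actually measures, here is a minimal sketch of one common approach: comparing an algorithm’s false match rate across demographic groups. The group labels, scores, and threshold below are made-up illustrations of the general idea, not NIST’s actual methodology or code.

```python
# Hypothetical impostor comparison scores, grouped by a demographic label.
# A "false match" occurs when an impostor score clears the match threshold.
impostor_scores = {
    "group_a": [0.12, 0.45, 0.31, 0.78, 0.22],
    "group_b": [0.55, 0.61, 0.43, 0.39, 0.81],
}

THRESHOLD = 0.6  # illustrative operating threshold

def false_match_rate(scores, threshold):
    """Fraction of impostor comparisons that incorrectly clear the threshold."""
    return sum(s >= threshold for s in scores) / len(scores)

fmr = {g: false_match_rate(s, THRESHOLD) for g, s in impostor_scores.items()}
print(fmr)

# One way to summarize a demographic differential: the ratio of the worst
# group's FMR to the best group's FMR (closer to 1.0 means more uniform).
rates = [r for r in fmr.values() if r > 0]
if rates:
    print("max/min FMR ratio:", max(rates) / min(rates))
```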

Guess which three vendors have NOT submitted their algorithms to NIST for testing?

You guessed it: IBM, Microsoft, and Face++.

Anyway, more on the Monk Skin Tone Scale here, but I had to share this.

The Best Deepfake Defense is NOT Technological

I think about deepfakes a lot. As the identity/biometric product marketing consultant at Bredemarket, I find that it comes with the territory.

When I’m not researching how fraudsters perpetrate deepfake faces, deepfake voices, and other deepfake modalities to defeat presentation attack detection (liveness detection) and injection attack detection…

…I’m researching and describing how Bredemarket’s clients and prospects develop innovative technologies to expose these deepfake fraudsters.

You can spend good money on deepfake-fighting industry solutions, and you can often realize a positive return on investment when purchasing these technologies.

But the best defense against these deepfakes isn’t some whiz bang technology.

It’s common sense.

  • Would your CEO really call you at midnight to expedite an urgent financial transaction?
  • Would that Amazon recruiter want to schedule a Zoom call right now?

If you receive an out-of-the-ordinary request, the first and most important thing to do is to take a deep breath.

A real CEO or recruiter would understand.

And…

…if your company offers a fraud-fighting solution to detect and defeat deepfakes, Bredemarket can help you market your solution. My content, proposal, and analysis offerings are at your service. Let’s talk: https://bredemarket.com/cpa/

CPA

(Imagen 4)

Video Analytics is Nothing New or Special

There is nothing new under the sun, despite the MIT Technology Review’s trumpeting of the “new way” to track people. 

The underlying article is gated, but here is what the public summary says:

“Police and federal agencies have found a controversial new way to skirt the growing patchwork of laws that curb how they use facial recognition: an AI model that can track people based on attributes like body size, gender, hair color and style, clothing, and accessories.

“The tool, called Track and built by the video analytics company Veritone, is used by 400 customers….”

Video analytics is nothing new. A picture of a particular backpack provided a critical investigative lead after the Boston Marathon bombing. Two years later, I was adapting Morpho’s video analytics tool (now IDEMIA’s Augmented Vision) for U.S. use.

And it’s important to note that this is not strictly an IDENTIFICATION tool. The fact that a tool finds someone with a particular body size, gender, hair color and style, clothing, and accessories means nothing by itself. Hundreds of people may share those same attributes.

But when you combine them with an INDIVIDUALIZATION tool such as facial recognition…only then can you uniquely identify someone. (Augmented Vision can do this.)
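Here is a minimal sketch of that two-step logic: attribute filtering first, face comparison second. The detections, attribute names, and scores are hypothetical; this is not Veritone’s or IDEMIA’s actual API.

```python
# Hypothetical gallery of people detected in video, tagged with attributes.
detections = [
    {"id": 1, "backpack": True,  "hair": "brown", "face_score_vs_suspect": 0.31},
    {"id": 2, "backpack": True,  "hair": "brown", "face_score_vs_suspect": 0.92},
    {"id": 3, "backpack": False, "hair": "brown", "face_score_vs_suspect": 0.88},
    {"id": 4, "backpack": True,  "hair": "black", "face_score_vs_suspect": 0.40},
]

# Step 1: attribute filtering (video analytics). Many people can match.
candidates = [d for d in detections if d["backpack"] and d["hair"] == "brown"]
print("attribute candidates:", [d["id"] for d in candidates])  # [1, 2]

# Step 2: individualization (facial recognition) against the candidate list.
# Even then, treat the result as an investigative lead, not proof of identity.
FACE_THRESHOLD = 0.90
leads = [d["id"] for d in candidates if d["face_score_vs_suspect"] >= FACE_THRESHOLD]
print("investigative leads:", leads)  # [2]
```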

And if facial recognition itself is only useful as an investigative lead…then video analytics without facial recognition is also only useful as an investigative lead.

Yawn.

(Imagen 3)

Revisiting Amazon Rekognition, May 2025

(Part of the biometric product marketing expert series)

A recent story about Meta face licensing changes caused me to get reflective.

“This openness to facial recognition could signal a turning point that could affect the biometric industry. 

“The so-called “big” biometric players such as IDEMIA, NEC, and Thales are teeny tiny compared to companies like Meta, Alphabet, and Amazon. If the big tech players ever consented to enter the law enforcement and surveillance market in a big way, they could put IDEMIA, NEC, and Thales out of business. 

“However, wholesale entry into law enforcement/surveillance could damage their consumer business, so the big tech companies have intentionally refused to get involved – or if they have gotten involved, they have kept their involvement a deep dark secret.”

Then I thought about the “Really Big Bunch” product that offered the greatest threat to the “Big Three” (IDEMIA, NEC, and Thales)—Amazon Rekognition, which directly competed in Washington County, Oregon, until Amazon imposed a one-year moratorium on police use of facial recognition in June 2020. The moratorium was subsequently extended until further notice.

I last looked at Rekognition in June 2024, when Amazon teamed up with HID Global and may have teamed up with the FBI.

So what’s going on now?

Hard to say. I have been unable to find any newly announced Amazon Rekognition law enforcement customers.

That doesn’t mean that nothing is happening. Perhaps the government buyers are keeping their mouths shut.

Plus, there is this page, “Use cases that involve public safety.”

Nothing controversial on the page itself:

  • “Have appropriately trained humans review all decisions to take action that might impact a person’s civil liberties or equivalent human rights.”
  • “Train personnel on responsible use of facial recognition systems.”
  • “Provide public disclosures of your use of facial recognition systems.”
  • “In all cases, facial comparison matches should be viewed in the context of other compelling evidence, and shouldn’t be used as the sole determinant for taking action.” (In other words, INVESTIGATIVE LEAD only.)

Nothing controversial at all, and I am…um…99% certain (geddit?) that IDEMIA, NEC, and Thales would endorse all these points.

But why does Amazon even need such a page, if Rekognition is only used to find missing children?

Maybe this is a pre-June 2020 page that Amazon forgot to take down.

Or maybe not.

Couple this with the news about Meta, and there’s the possibility that the Really Big Bunch may enter the markets currently dominated by the Big Three.

Imagine if the DHS HART system, delayed for years, were resurrected…with Alphabet or Amazon or Meta technology.

We are still in the time of uncertainty…and may never go back.

(Large and small wildebeests via Imagen 3)

Frictionless Friction Ridges and Other Biometric Modalities

I wanted to write a list of the biometric modalities in which I have experience.

So I started my usual list from memory: fingerprint, face, iris, voice, and DNA.

Then I stopped myself.

My experience with skin goes way beyond fingerprints, since I’ve spent over two decades working with palm prints.

(Can you say “Cambridgeshire method”? I knew you could. It was a 1990s method to use the 10 standard rolled fingerprint boxes to input palm prints into an automated fingerprint identification system. Because Cambridgeshire had a bias to action and didn’t want to wait for the standards folks to figure out how to enter palm prints. But I digress.)

So instead of saying fingerprints, I thought about saying friction ridges.

But there are two problems with this.

First, many people don’t know what “friction ridges” are. They’re the ridges that form on a person’s fingers, palms, toes, and feet, all of which can conceivably identify individuals.

But there’s a second problem. The word “friction” has two meanings: the one mentioned above, and a meaning that describes how biometric data is captured.

No, there is not a friction method to capture faces.
From https://www.youtube.com/watch?v=4XhWFHKWCSE.

  • If you have to do something to provide your biometric data, such as press your fingers against a platen, that’s friction.
  • If you don’t have to do anything other than wave your fingers, hold your fingers in the air, or show your face as you stand near or walk by a camera, that’s frictionless.
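If it helps to see the distinction as data, here is a tiny sketch of that classification. The capture methods listed and the class itself are my own illustrations, not any vendor’s API.

```python
from dataclasses import dataclass

@dataclass
class CaptureMethod:
    name: str
    requires_contact: bool  # does the subject have to touch the sensor?

    @property
    def friction_type(self) -> str:
        return "friction" if self.requires_contact else "frictionless"

methods = [
    CaptureMethod("rolled fingerprints on a platen", requires_contact=True),
    CaptureMethod("contactless fingerprint wave capture", requires_contact=False),
    CaptureMethod("face capture while walking past a camera", requires_contact=False),
]

for m in methods:
    print(f"{m.name}: {m.friction_type}")
```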

More and more people capture friction ridges with frictionless methods. I did this years ago using MorphoWAVE at MorphoTrak facilities, and I did it today at Whole Foods Market.

So I could list my biometric modalities as friction ridge (fingerprint and palm print via both friction and frictionless capture methods), face, iris, voice, and DNA.

But I won’t.

Anyway, if you need content, proposal, or analysis assistance with any of these modalities, Bredemarket can help you. Book a meeting at https://bredemarket.com/cpa/

Are You Responding to the CBP RFI, “RFI Land Vehicle Primary Zone Traveler Photo Capture Device”?

Facial recognition firms, let’s talk about Requests for Information from the Department of Homeland Security. I wrote about one in 2021, so I figured I’d write about another one that was just published today.

But before I do, let me just say that…um…I’m experienced in responding to Requests for Information (RFIs) from the Department of Homeland Security…and that’s all I can say.

And this new RFI is intriguing.

The RFI, with Notice ID RFI-LVPZTPCD, was issued by U.S. Customs and Border Protection today (April 30), and responses are due in one month (May 30). The description includes the following:

“CBP is seeking a solution for capturing facial images of vehicle occupants in an officer-manned primary zone at an inbound vehicle point of entry (POE).”

Today’s CBP RFI-LVPZTPCD envisions the use case in which people are entering the U.S. in a car…and are NOT getting out of the car. But you still have to capture their faces at a sufficient quality level, which is easier said than done. Heck, in May 2022 it took me several tries to capture a passport facial image at CVS when I WASN’T in a car. Now add distance, odd camera angles, and possibly an intervening car windshield, and you’re in for big challenges.

I wonder how many facial recognition vendors are planning to respond to this RFI…and how many need the experienced proposal help that Bredemarket can provide.

  • I know one biometric firm that often responds to Department of Homeland Security RFIs, but this firm does not have a “Land Vehicle Primary Zone Traveler Photo Capture Device.” So while this firm has used Bredemarket’s proposal services in the past, it won’t respond to this particular RFI.
  • I know another biometric firm with a keen interest in land vehicle primary zone traveler photo capture devices, and perhaps this firm may respond to this RFI. But this is the firm that didn’t renew my consulting contract in the fall of 2024, and I haven’t heard from them since.

Of course, there are other facial recognition firms out there, some of which may have outstanding solutions to the CBP’s problem.

And in case you haven’t heard, Bredemarket has an opening for a facial recognition client, and can provide winning proposal development services.

So if I can help your facial recognition firm respond to this RFI, book a call: https://bredemarket.com/cpa/

Putting the P in CPA.

(San Ysidro Port of Entry picture by Philkon (Phil Konstantin) – Own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=15343509.)

TSA Photo Requests: “The Current U.S. Government” Can Already Obtain Your Facial Image

There have been many recent stories about Transportation Security Administration (TSA) capture of the facial images of travelers, an outgrowth of the same post-9/11 concerns that resulted in REAL IDs in 2008…I mean 2025. (Maybe.)

One story from HuffPost clearly states its view on the matter. The title of the story? “Why You Can (And Should) Opt Out Of TSA Facial Recognition Right Now.”

I guess we know where HuffPost stands.

As to the “why” of its stance, here’s a succinct statement:

“Do you really want to be submitting a face scan to the current U.S. government?”

And perhaps there are good reasons to distrust the Trump Administration, or any administration. 

After all, the TSA says it only retains the picture for a limited time: “Photos are not stored or saved after a positive ID match has been made, except in a limited testing environment for evaluation of the effectiveness of the technology.”

But maybe…something happens. Someone accidentally forgot to delete the files. Oops.

And if something happens, the federal government has just captured an image of your face!

Guess what? The federal government can probably already get an image of your face, even if you don’t allow TSA to take your photo.

After all, you had to show some sort of identification when you arrived at that TSA checkpoint. Maybe you showed a passport, with a picture that the U.S. State Department received at one point. No, they don’t retain them either. But maybe…something happens.

But who does retain an image of your face?

Your state driver’s license agency. And as of 2019:

“Twenty-one states currently allow federal agencies such as the FBI to run searches of driver’s license and identification photo databases.”

So if a federal agency wants your facial image, it can probably obtain it even if you decline the TSA photo request.

Unless you strictly follow Amish practices. But in that case you probably wouldn’t be going through a TSA checkpoint anyway.

But if you are with a facial recognition company, and you want your prospects and their prospects to understand how your solution protects their privacy…

Bredemarket can help:

  • compelling content creation
  • winning proposal development
  • actionable analysis

Book a call: https://bredemarket.com/cpa/ 

(Security checkpoint picture generated by Imagen 3)

Is Milwaukee Selling PII for Free Facial Recognition Software Access?

(Part of the biometric product marketing expert series)

Perhaps facial recognition product marketers have heard of stories like this. Or perhaps they haven’t.

Tight budgets. Demands that government agencies save money. Is this the solution?

“Milwaukee police are mulling a trade: 2.5 million mugshots for free use of facial recognition technology.

“Officials from the Milwaukee Police Department say swapping the photos with the software firm Biometrica will lead to quicker arrests and solving of crimes.”

Read the article at https://www.jsonline.com/story/news/crime/2025/04/25/milwaukee-police-considering-trading-mugshots-for-facial-recognition-tech/83084223007/

As expected, activists raised all sorts of other concerns about facial recognition in general. But there’s an outstanding question:

What will Biometrica do with the 2.5 million images?

  • Use them for algorithmic training? 
  • Allow other agencies to search them?
  • Something else?
  • And what happens to the images if another company acquires Biometrica and/or its data? (See 23andMe.)

Biometrica didn’t respond to a request for comment.

And other facial recognition vendors operate differently.

How does your company treat customer data?

And how do you tell your story?

Do you have the resources to market your product, or are your resources already stretched thin?

If you need help with your facial recognition product marketing, Bredemarket has an opening for a facial recognition client. I can offer

  • compelling content creation
  • winning proposal development
  • actionable analysis

If Bredemarket can help your stretched staff, book a free meeting with me: https://bredemarket.com/cpa/

(Wheelbarrows from Imagen 3)