The Monk Skin Tone Scale

(Part of the biometric product marketing expert series)

Now that I’ve dispensed with the first paragraph of Google’s page on the Monk Skin Tone Scale, let’s look at the meat of the page.

I believe we all agree on the problem: the need to measure the accuracy of facial analysis and facial recognition algorithms for different populations. For the purposes of this post, we will concentrate on one proxy for race: a person’s skin tone.

Why skin tone? Because we have hypothesized (I believe correctly) that the performance of facial algorithms is influenced by the skin tone of the person, not by whether or not they are Asian or Latino or whatever. Don’t forget that the designated races have a variety of skin tones within them.

But how many skin tones should one use?

40-point makeup skin tone scale

The beauty industry has identified over 40 different skin tones for makeup, but an approach this granular would overwhelm a machine learning evaluation:

[L]arger scales like these can be challenging for ML use cases, because of the difficulty of applying that many tones consistently across a wide variety of content, while maintaining statistical significance in evaluations. For example, it can become difficult for human annotators to differentiate subtle variation in skin tone in images captured in poor lighting conditions.
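To make the annotator-consistency problem concrete, here’s a minimal simulation of my own (not from Google’s page): two annotators each perceive the same underlying tone with a little noise, and we measure how often they agree using Cohen’s kappa. The noise level is an arbitrary assumption.

```python
# My own sketch: why a 40-point scale hurts annotator agreement.
# Two simulated annotators perceive the same latent tone with a
# little noise; agreement is measured with Cohen's kappa.
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(42)
true_tone = rng.uniform(0, 1, size=5_000)  # latent skin tone in [0, 1]

def annotate(tone, n_classes, noise=0.03):
    """Bucket a noisy perception of the tone into n_classes bins."""
    perceived = np.clip(tone + rng.normal(0, noise, size=tone.shape), 0, 1)
    return np.minimum((perceived * n_classes).astype(int), n_classes - 1)

for n_classes in (10, 40):
    a = annotate(true_tone, n_classes)
    b = annotate(true_tone, n_classes)
    print(f"{n_classes:>2} classes: Cohen's kappa = {cohen_kappa_score(a, b):.2f}")
```

With the same perceptual noise, the 40-class scale produces markedly lower inter-annotator agreement than the 10-class scale, which is exactly the consistency problem Google describes.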

6-point Fitzpatrick skin tone scale

The tech industry’s first attempt at categorizing skin tones borrowed the Fitzpatrick system from dermatology.

To date, the de-facto tech industry standard for categorizing skin tone has been the 6-point Fitzpatrick Scale. Developed in 1975 by Harvard dermatologist Thomas Fitzpatrick, the Fitzpatrick Scale was originally designed to assess UV sensitivity of different skin types for dermatological purposes.

However, using this skin tone scale led to…(drumroll)…bias.

[T]he scale skews towards lighter tones, which tend to be more UV-sensitive. While this scale may work for dermatological use cases, relying on the Fitzpatrick Scale for ML development has resulted in unintended bias that excludes darker tones.

10-point Monk Skin Tone (MST) Scale

Enter Dr. Ellis Monk, whose biography could be ripped from today’s headlines.

Dr. Ellis Monk—an Associate Professor of Sociology at Harvard University whose research focuses on social inequalities with respect to race and ethnicity—set out to address these biases.

If you’re still reading this and haven’t collapsed in a rage of fury, here’s what Dr. Monk did.

Dr. Monk’s research resulted in the Monk Skin Tone (MST) Scale—a more inclusive 10-tone scale explicitly designed to represent a broader range of communities. The MST Scale is used by the National Institutes of Health (NIH) and the University of Chicago’s National Opinion Research Center, and is now available to the entire ML community.

From https://skintone.google/the-scale.
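In practice, a 10-point scale like the MST gives evaluators a manageable set of buckets. Here’s a minimal sketch of my own (the annotations and results are made up) showing how per-tone accuracy reporting works:

```python
# My own sketch, with made-up data: bucket each test image by its
# annotated Monk tone (1-10), then report accuracy per bucket.
from collections import defaultdict

# (monk_tone, prediction_correct) pairs; hypothetical annotations
results = [(1, True), (1, True), (4, True), (4, False),
           (7, True), (7, False), (10, True), (10, False)]

totals, correct = defaultdict(int), defaultdict(int)
for tone, ok in results:
    totals[tone] += 1
    correct[tone] += ok

for tone in sorted(totals):
    print(f"MST tone {tone:>2}: accuracy {correct[tone] / totals[tone]:.0%} "
          f"(n={totals[tone]})")
```

A fair algorithm shows accuracy that stays flat across all ten tones rather than degrading toward either end of the scale.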

Where is the MST Scale used?

According to Biometric Update, iBeta has developed a demographic bias test based upon ISO/IEC 19795-10, which itself incorporates the Monk Skin Tone Scale.

At least for now. Biometric Update notes that other skin tone measurements are under development, including the “Colorimetric Skin Tone (CST)” and INESC TEC/Fraunhofer Institute research that uses “ethnicity labels as a continuous variable instead of a discrete value.”

But will there be enough data for variable 8.675309?

What “Gender Shades” Was Not

Mr. Owl, how many licks does it take to get to the Tootsie Roll center of a Tootsie Pop?

A good question. Let’s find out. One, two, three…(bites) three.

From YouTube.

If you think Mr. Owl’s conclusion was flawed, let’s look at Google.

One, two, three…three

I was researching the Monk Skin Tone Scale for a future Bredemarket blog post, but before I share that post I have to respond to an inaccurate statement from Google.

Google began its page “Developing the Monk Skin Tone Scale” with the following statement:

In 2018, the pioneering Gender Shades study demonstrated that commercial, facial-analysis APIs perform substantially worse on images of people of color and women.

Um…no it didn’t.

I will give Google props for using the phrase “facial-analysis,” which clarifies that Gender Shades was an exercise in categorization, not individualization.

But to say that Gender Shades “demonstrated that commercial, facial-analysis APIs perform substantially worse” in certain situations is an ever-so-slight exaggeration.

Kind of like saying that a bad experience at a Mexican restaurant in Lusk, Wyoming demonstrates that all Mexican restaurants are bad.

How? I’ve said this before:

The Gender Shades study evaluated only three algorithms: one from IBM, one from Microsoft, and one from Face++. It did not evaluate the hundreds of other facial recognition algorithms that existed in 2018 when the study was released.

So the conclusion that all facial classification algorithms perform substantially worse cannot be supported…because the other algorithms weren’t tested in 2018.

One, two, three…one hundred and eighty-nine

In 2019, NIST tested 189 software algorithms from 99 developers for demographic bias, and has continued to test for demographic bias since.

In these tests, vendors volunteer to have NIST test their algorithms for demographic bias.

Guess which three vendors have NOT submitted their algorithms to NIST for testing?

You guessed it: IBM, Microsoft, and Face++.

Anyway, more on the Monk Skin Tone Scale here, but I had to share this.

Today’s Large Multimodal Model (LMM) is FLUX.1 Kontext

Do you remember when I explained what a Large Multimodal Model (LMM) is, and why an LMM is crucial to correctly render text in generative AI-created images?

Well, Black Forest Labs (with an Impressum…in Delaware) announced a new LMM last Thursday:

“FLUX.1 Kontext marks a significant expansion of classic text-to-image models by unifying instant text-based image editing and text-to-image generation. As a multimodal flow model, it combines state-of-the-art character consistency, context understanding and local editing capabilities with strong text-to-image synthesis.”

FLUX.1 Kontext has also received TechCrunch coverage.

And yes, the company does have a German presence.

(And no, the picture is obviously not from FLUX.1 Kontext. It’s from Imagen 4.)

Presentation Attack Injection, Injection Attack Detection, and Deepfakes on LinkedIn and Substack

Just letting my Bredemarket blog readers know of two items I wrote on other platforms.

  • “Presentation Attack Injection, Injection Attack Detection, and Deepfakes.” This LinkedIn article, part of The Wildebeest Speaks newsletter series, is directed toward people who already have some familiarity with deepfake attacks.
  • “Presentation Attack Injection, Injection Attack Detection, and Deepfakes (version 2).” This Substack post does NOT assume any deepfake attack background.

Okta Talks About Evil Twins

Public wi-fi can be fun, especially when you don’t realize which networks were legitimately set up by the business.

And it’s really fun when someone pulls the “evil twin” trick, as described by Okta.

“A hacker looks for a location with free, popular WiFi. The hacker takes note of the Service Set Identifier (SSID) name. Then, the hacker uses a tool like a WiFi Pineapple to set up a new account with the same SSID. Connected devices can’t differentiate between legitimate connections and fake versions.”

The next steps are to trick users into providing the authentication details for the “good” network, lure people into logging in to the “evil” network, and then steal any unencrypted data.

Of course you don’t have to go to those extremes. If the business fails to publicize what the “good” network is called, just set up a network called “ReelOffishelWiFi” and see how many suckers you get.
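One warning sign you can check for yourself is the same SSID being advertised by multiple access points. Here’s a minimal sketch of my own (not from Okta) that assumes a Linux machine with NetworkManager’s nmcli; note that legitimate multi-AP networks trigger it too, so treat a hit as a prompt to verify, not proof of an attack.

```python
# My own sketch: flag SSIDs advertised by more than one access point,
# one possible symptom of an evil twin. Assumes nmcli is available.
import subprocess
from collections import defaultdict

def split_terse(line):
    """Split nmcli terse (-t) output on unescaped colons."""
    parts, buf, i = [], "", 0
    while i < len(line):
        if line[i] == "\\" and i + 1 < len(line):  # escaped character
            buf += line[i + 1]
            i += 2
        elif line[i] == ":":
            parts.append(buf)
            buf = ""
            i += 1
        else:
            buf += line[i]
            i += 1
    parts.append(buf)
    return parts

scan = subprocess.run(
    ["nmcli", "-t", "-f", "SSID,BSSID", "device", "wifi", "list"],
    capture_output=True, text=True, check=True,
)

access_points = defaultdict(set)
for line in scan.stdout.splitlines():
    fields = split_terse(line)
    if len(fields) >= 2 and fields[0]:  # skip hidden (empty) SSIDs
        access_points[fields[0]].add(fields[1])

for ssid, bssids in access_points.items():
    if len(bssids) > 1:
        print(f"'{ssid}' is advertised by {len(bssids)} access points")
```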

(Imagen 4)

Ubiquity Via Focus…On Where?

So Bredemarket’s talking about “ubiquity via focus”?

Focus on where?

On the Bredemarket blog, your source for the latest identity/biometric and technology news.

And your source for the most up-to-date information on Bredemarket’s content-proposal-analysis services.

Be sure to visit https://bredemarket.com/blog/

Or better yet, subscribe at https://bredemarket.com/subscribe-to-bredemarket/

The Best Deepfake Defense is NOT Technological

I think about deepfakes a lot. As the identity/biometric product marketing consultant at Bredemarket, it comes with the territory.

When I’m not researching how fraudsters perpetrate deepfake faces, deepfake voices, and other deepfake modalities to evade presentation attack detection (liveness detection) and injection attack detection

…I’m researching and describing how Bredemarket’s clients and prospects develop innovative technologies to expose these deepfake fraudsters.

You can spend good money on deepfake-fighting industry solutions, and you can often realize a positive return on investment when purchasing these technologies.

But the best defense against these deepfakes isn’t some whiz-bang technology.

It’s common sense.

  • Would your CEO really call you at midnight to expedite an urgent financial transaction?
  • Would that Amazon recruiter want to schedule a Zoom call right now?

If you receive an out-of-the-ordinary request, the first and most important thing to do is to take a deep breath, and then verify the request through a channel you already know and trust.

A real CEO or recruiter would understand.

And…

…if your company offers a fraud-fighting solution to detect and defeat deepfakes, Bredemarket can help you market your solution. My content, proposal, and analysis offerings are at your service. Let’s talk: https://bredemarket.com/cpa/

CPA

(Imagen 4)

Don’t Learn to Code

(Imagen 4)

Some of you may remember the 2010s, when learning to code would solve all your problems forever and ever. 

There was even an “Hour of Code” in 2014:

“The White House also announced Monday that seven of the nation’s largest school districts are joining more than 50 others to start offering introductory computer science courses.”

And people on the other side of the aisle pushed the same advice, albeit less charitably:

“On its own, telling a laid-off journalist to “learn to code” is a profoundly annoying bit of “advice,” a nugget of condescension and antipathy. It’s also a line many of us may have already heard from relatives who pretend to be well-meaning, and who question an idealistic, unstable, and impecunious career choice.”

But the sentiment was the same: get out of dying industries and do something meaningful that will set you up for life.

Well, that’s what they thought in the 2010s.

Where are the “learn to code” advocates in 2025?

They’re talking to non-person entities, not people:

“Microsoft CTO Kevin Scott expects the next half-decade to see more AI-generated code than ever — but that doesn’t mean human beings will be cut out of the programming process.

“”95% is going to be AI-generated,” Scott said when asked about code within the next five years on an episode of the 20VC podcast. “Very little is going to be — line by line — is going to be human-written code.””

So the 2010s “learn to code” movement has been replaced by the 2020s “let AI code” movement. While there are valid questions about whether AI can actually code, it’s clear that companies would prefer not to hire human coders, whom they perceive to be as useless as human journalists.

Simeio: Identity is the Perimeter of Cybersecurity

Simeio opened its monthly newsletter with a statement. Here is an excerpt:

“May spotlighted how even the most advanced enterprises are vulnerable when identity systems are fragmented, machine identities go unmanaged, and workflows rely too heavily on manual intervention—creating conditions ripe for risk. Enterprises need to get the message: identity is the perimeter of cybersecurity, and orchestration is the force multiplier. It’s time to learn how to effectively leverage it.”

Read the rest of Simeio’s newsletter on LinkedIn at https://www.linkedin.com/pulse/identity-matters-may-2025-identitywithsimeio-iby0e

Of course, there’s that interesting wrinkle of the identities of non-person entities, which may or may not be bound to human identities. Simeio, with its application onboarding solution, plays in the NPE space.

As for me, I need to start thinking about MY Bredemarket monthly LinkedIn newsletter (The Wildebeest Speaks) soon. June approaches. (Here’s the May edition if you missed it.)