Differentiating the DNA of Twins?

(Part of the biometric product marketing expert series)

There are certain assumptions that you make in biometrics.

Namely, that certain biometrics are unable to differentiate twins: facial recognition and DNA analysis.

Now as facial recognition algorithms get better and better, perhaps they will be able to tell twins apart, even identical twins.

But DNA is DNA, right?

Twins and somatic mutations

Mike Bowers (CSIDDS) links to an article in Forensic Magazine which suggests that twins’ DNA can be differentiated.

For the first time in the U.S., an identical twin has been convicted of a crime based on DNA analysis.

The breakthrough came from Parabon Nanolabs, whose scientists used deep whole genome sequencing to identify extremely rare “somatic mutations” that differentiated Russell Marubbio and his twin, John. The results were admitted as evidence in court, making last week’s conviction of Russell in the 1987 rape of a 50-year-old woman a landmark case.

Twin DNA.

Parabon Nanolabs (whom I briefly mentioned in 2024) described somatic mutations as follows:

Somatic mutations are DNA changes that happen after conception and can cause genetic differences between otherwise identical twins. These mutations can arise during the earliest stages of embryonic development, affecting the split of the zygote, and accumulate throughout life due to errors in cell division. Somatic mutations can be present in only one twin, a subset of cells, or both, potentially leading to differences in health and even developmental disorders—and in this case, DNA.

The science behind somatic mutations is not new, and is well-researched, understood and accepted. It’s just uncommon for DNA to lead to twins, and even more uncommon for somatic mutations to be able to distinguish between twins.

Note the “well-researched, understood and accepted” part (even though it lacks an Oxford comma). Because this isn’t the only recent story that touches upon whole genome sequencing.

Whole genome sequencing and legal admissibility

Bowers also links to a CNN article which references Daubert/Frye-like questions about whether evidence is admissible.

Evidence derived from cutting-edge DNA technology that prosecutors say points directly at Rex Heuermann being the Gilgo Beach serial killer will be admissible at his trial, a Suffolk County judge ruled Wednesday….

Heuermann’s defense attorney Michael Brown had argued the DNA technology, known as whole genome sequencing, has not yet been widely accepted by the scientific community and therefore shouldn’t be permitted. He said he plans to argue the validity of the technology before a jury.

Meanwhile, prosecutors have argued this type of DNA extraction has been used by local law enforcement, the FBI and even defense attorneys elsewhere in the country, according to court records.

Let me point out one important detail: the fact that police agencies are using a particular technology doesn’t mean that said technology is “widely accepted by the scientific community.” I suspect that this same question will be raised in other courts, and other judges may reach a different decision.

And after checking my blog, I realize that I have never written an article about Daubert/Frye. Another assignment for Bredebot, I guess…

Your identity/biometric product marketing needs to assert the facts rather than old lies.

Bredemarket can help.

Forget About Milwaukee’s Facial Recognition DATA: We All Want to See Milwaukee’s Facial Recognition POLICY

(Part of the biometric product marketing expert series)

I love how Biometric Update bundles a bunch of stories into a single post. Chris Burt outdid himself on Wednesday, covering a slew of stories regarding use and possible misuse of facial recognition by Texas bounty hunters, the NYPD, and cities ranging from Chicago, Illinois to Houlton, Maine.

But those stories aren’t the ones that I’m focusing on. Before I get to my focus, I want to go off on a tangent and address something else.

Read us any rule, we’ll break it

In a huddle space in an office, a smiling robot named Bredebot places his robotic arms on a wildebeest and a wombat, encouraging them to collaborate on a product marketing initiative.
Bredebot and his pals.

By the time you read this, the first full post by my counterpart “Bredebot” will have published on the Bredemarket blog. This is a completely AI-generated post in which a bot DID write the first draft. More posts are coming.

What I didn’t expect was that competition would arise between me and my bot. I’m writing these words on August 27, two days before the first Bredebot post appears, and I’m already feeling the heat.

What if Bredebot’s posts receive more traffic than the ones I write myself? What does that mean for my own posts…and for the whole premise of hiring Bredemarket to write for others?

I’m treating this as a challenge, vowing to outdo my fast bot counterpart.

And in that spirit, let’s revisit Milwaukee.

Give us any chance, we’ll take it

Access.

When Biometric Update initially visited Milwaukee in its April 28 post, the main concern was the possible agreement for the Milwaukee Police Department to provide “access” to facial data to the company Biometrica in exchange for facial recognition licenses. I subsequently explored the data issue in my own May 6 guest post for Biometric Update.

Vendors must disclose responsible uses of biometric data.

But today the questions addressed to Milwaukee don’t focus on the data, but on the use of facial recognition itself. The Biometric Update article links to a Wisconsin Watch article with more detail. The arguments are familiar to all of you: facial recognition is racist, facial recognition is sometimes relied upon as the sole piece of evidence, facial recognition data can be sent to ICE, and facial recognition can be misused.

However, before Milwaukee’s Common Council can approve facial recognition use, one requirement has to be met.

“Since the passage of Wisconsin Act 12, the only official way to amend or reject MPD policy is by a vote of at least two-thirds of the Common Council, or 10 members.

“However, council members cannot make any decision about it until MPD actually drafts its policy, often referred to as a “standard operating procedure.” 

“Ald. Peter Burgelis – one of four council members who did not sign onto the Common Council letter to Norman – said he is waiting to make a decision until he sees potential policy from MPD or an official piece of legislation considered by the city’s Public Safety and Health Committee.”

The Milwaukee Police Department agrees that such a policy is necessary.

“MPD has consistently stated that a carefully developed policy could help reduce risks associated with facial recognition.

“’Should MPD move forward with acquiring FRT, a policy will be drafted based upon best practices and public input,’ a department spokesperson said.”

An aside from my days at MorphoTrak, when I would load user conference documents into the CrowdCompass mobile app: one year the topic of law enforcement agency facial recognition policies was part of our conference agenda. One agency had such a policy, but the agency would not allow me to upload the policy into the CrowdCompass app. You see, the agency had a policy…but it wasn’t public.

Needless to say, the Milwaukee Police Department’s draft policy WILL be public…and a lot of people will be looking at it.

Although I don’t know if it will make everyone’s dreams come true.

Why retail needs biometrics – the cameras aren’t working, and the people aren’t working either

(Imagen 4)

In a recent post on Biometric Update, “Why retail needs biometrics – the cameras aren’t working,” Professor Fraser Sampson, former UK Biometrics & Surveillance Camera Commissioner, made several points about the applicability of biometrics to retail. Among the many points he addressed, he dealt with algorithmic inaccuracy and the proper use of facial recognition as an investigative lead:

“It’s true that some early police algorithms were poor, but the biometric matching algorithms offered by some providers is over 99.99% – that’s as close to perfect as anyone has ever got. That’s NASA-level accuracy, better than some medical or military procedures and light years away from people staring at CCTV monitors. What about errors and misidentification? Used properly, LFR is a decision support tool, it’s not making the identification itself. Ultimately, it’s helping shopkeepers make their decisions and that’s where the occasional misidentification happens – by human error, not technical.”

I offered an additional comment:

“One other point: for all those who complain about the lack of perfection of automated facial recognition, it’s much better than manual facial recognition. The U.S. Innocence Project recounts multiple cases of witness MISidentification, where people have been imprisoned due to faulty and inaccurate identification of suspects as perpetrators. I’d much rather have a top tier FR algorithm watching me than a person who knows nothing about facial recognition at all.”

In case you missed it, I’ve written several Bredemarket blog posts on witness MISidentification: two on Robert Williams’ misidentification alone.

Heck, I addressed the topic back in 2021 in “The dangers of removing facial recognition and artificial intelligence from DHS solutions (DHS ICR part four).” This post covers the misidentification of Archie Williams (no relation).

So don’t toss out the automated facial recognition solution unless you have something better. I’ll wait.

Worries About the Certified Communist Products List

(Imagen 4)

(Part of the biometric product marketing expert series)

How many of you have heard of the Certified Products List (CPL)?

The CPL’s vendor coverage

This list, part of the FBI’s Biometric Specifications website (FBI Biospecs), contains fingerprint card printers, fingerprint card scan systems, identification flats systems, live scan systems, mobile ID devices, and other products. Presence on the CPL indicates that the product complies with a relevant image quality specification such as Appendix F of the Electronic Biometric Transmission Specification.

The Certified Products List has existed since the 1990s and includes a number of products with which I am familiar. These products come from companies past and present, including 3M Cogent, Aware, Biometrics4All, Cross Match, DataWorks Plus, IDEMIA Identity & Security France, Identicator, Mentalix, Morpho, Motorola, NEC Technologies, Printrak, Sagem Defense Securite, Thales, and many others.

As of June 26, 2025, it also references companies such as Shenzhen Interface Cognition Technology Co., Ltd. and Shenzhen Zhi Ang Science and Technology Co., Ltd.

A strongly worded letter

Those and other listings caused heartburn for the bipartisan Members of the U.S. House of Representatives Select Committee on the Chinese Communist Party.

So they sent a strongly worded letter.

“We write to respectfully urge the FBI to put an end to its ongoing certification of products from Chinese military-linked and surveillance companies—including companies blacklisted or red-flagged by the U.S. government—that could be used to spy on Americans, strengthen the repressive surveillance state of the People’s Republic of China (PRC), and otherwise threaten U.S. national security.”

Interestingly enough, they make a big deal of Hikvision products on the list, but I searched the CPL multiple times and found no Hikvision products.

The CPL’s purpose

And it’s important to note the FBI’s own caveat about the CPL:

The Certified Product List (CPL) provides users with a list of products that have been tested and are in compliance with Next Generation Identification image quality specifications (IQS) regarding the capture of friction ridge images. Specifications and standards other than image quality may still need to be met. Appearance on the CPL is not, and should not be construed as, an FBI endorsement, nor should it be relied upon for any requirement beyond IQS. Users should contact their State CJIS Systems Officer (CSO) or Information Security Officer (ISO) to ensure compliance with the necessary policies and/or guidelines.

In other words, the ONLY purpose of the CPL is to indicate whether the products in question meet technology standards. It has nothing to do with export controls or any other criteria that any law enforcement agency needs to follow when buying a product.

What about the U.S. Department of Commerce?

But the FBI isn’t the only agency “promoting” Chinese biometrics.

Wait until the Select Committee discovers the Department of Commerce’s NIST FRTE lists, including the FRTE 1:1 and FRTE 1:N lists. The tops of these lists (previously known as FRVT) include many Chinese companies.

And actually, the FRTE testing includes facial recognition products that inspired U.S. export bans. Fingerprint devices are harder to use to repress people.

What next?

What happens if the concern extends beyond China, to products produced in France and products produced in Canada?

Regarding the strongly worded letter, Biometric Update added one detail:

“As of this writing, the FBI has not issued a public response. Whether the bureau will move to decertify the flagged companies or push back on the committee’s recommendations remains to be seen. But with multiple national security statutes already in place, and Congress signaling a willingness to legislate further, the days of quiet certification for foreign adversary-linked tech firms may be numbered.”

The Monk Skin Tone Scale

(Part of the biometric product marketing expert series)

Now that I’ve dispensed with the first paragraph of Google’s page on the Monk Skin Tone Scale, let’s look at the meat of the page.

I believe we all agree on the problem: the need to measure the accuracy of facial analysis and facial recognition algorithms for different populations. For purposes of this post we will concentrate on a proxy for race, a person’s skin tone.

Why skin tone? Because we have hypothesized (I believe correctly) that the performance of facial algorithms is influenced by the skin tone of the person, not by whether or not they are Asian or Latino or whatever. Don’t forget that the designated races have a variety of skin tones within them.
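To make the measurement goal concrete, here is a minimal sketch (my own illustration, not any vendor’s or NIST’s code) of tabulating a false non-match rate per skin-tone group, assuming each comparison trial has already been labeled with a group:

```python
from collections import defaultdict

def fnmr_by_group(trials, threshold=0.5):
    """Compute the false non-match rate (FNMR) per skin-tone group.

    trials: iterable of (group, score, is_mated) tuples, where is_mated
    is True when the pair is a genuine (same-person) comparison.
    """
    mated = defaultdict(int)   # genuine comparisons seen per group
    misses = defaultdict(int)  # genuine comparisons scored below threshold
    for group, score, is_mated in trials:
        if is_mated:
            mated[group] += 1
            if score < threshold:
                misses[group] += 1
    return {g: misses[g] / mated[g] for g in mated}

# Toy data: FNMR works out to 1/2 for "light" and 2/3 for "dark".
trials = [
    ("light", 0.91, True), ("light", 0.40, True), ("light", 0.12, False),
    ("dark",  0.88, True), ("dark",  0.35, True), ("dark",  0.30, True),
]
print(fnmr_by_group(trials))
```

If the per-group rates diverge substantially, the algorithm exhibits demographic differentials; a real evaluation would also track the false match rate and use far more trials per group.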

But how many skin tones should one use?

40 point makeup skin tone scale

The beauty industry has identified over 40 different skin tones for makeup, but an approach this granular would overwhelm a machine learning evaluation:

[L]arger scales like these can be challenging for ML use cases, because of the difficulty of applying that many tones consistently across a wide variety of content, while maintaining statistical significance in evaluations. For example, it can become difficult for human annotators to differentiate subtle variation in skin tone in images captured in poor lighting conditions.

6 point Fitzpatrick skin tone scale

The first attempt at categorizing skin tones was the Fitzpatrick system.

To date, the de-facto tech industry standard for categorizing skin tone has been the 6-point Fitzpatrick Scale. Developed in 1975 by Harvard dermatologist Thomas Fitzpatrick, the Fitzpatrick Scale was originally designed to assess UV sensitivity of different skin types for dermatological purposes.

However, using this skin tone scale led to….(drumroll)…bias.

[T]he scale skews towards lighter tones, which tend to be more UV-sensitive. While this scale may work for dermatological use cases, relying on the Fitzpatrick Scale for ML development has resulted in unintended bias that excludes darker tones.

10 point Monk Skin Tone (MST) Scale

Enter Dr. Ellis Monk, whose biography could be ripped from today’s headlines.

Dr. Ellis Monk—an Associate Professor of Sociology at Harvard University whose research focuses on social inequalities with respect to race and ethnicity—set out to address these biases.

If you’re still reading this and haven’t collapsed in a rage of fury, here’s what Dr. Monk did.

Dr. Monk’s research resulted in the Monk Skin Tone (MST) Scale—a more inclusive 10-tone scale explicitly designed to represent a broader range of communities. The MST Scale is used by the National Institutes of Health (NIH) and the University of Chicago’s National Opinion Research Center, and is now available to the entire ML community.
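In practice, a scale like this is applied by assigning a measured tone to the nearest reference swatch. A minimal sketch of that assignment follows; the RGB ramp below is a placeholder I made up for illustration, NOT the published MST swatches (those are at skintone.google), and squared RGB distance is a simplification (a perceptual space like CIELAB would be better):

```python
def nearest_tone(pixel_rgb, swatches):
    """Return the 1-based index of the reference swatch closest
    to pixel_rgb by squared Euclidean distance in RGB space."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    best = min(range(len(swatches)), key=lambda i: dist2(pixel_rgb, swatches[i]))
    return best + 1

# Placeholder 10-point ramp from light to dark (NOT the official MST values).
ramp = [(246 - 24 * i, 237 - 23 * i, 228 - 22 * i) for i in range(10)]

print(nearest_tone((130, 120, 110), ramp))  # → 6
```

The hard part in real annotation isn’t this arithmetic; it’s getting consistent tone estimates out of images with varying lighting, which is exactly why a 10-point scale beats a 40-point one.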

From https://skintone.google/the-scale.

Where is the MST Scale used?

According to Biometric Update, iBeta has developed a demographic bias test based upon ISO/IEC 19795-10, which itself incorporates the Monk Skin Tone Scale.

At least for now. Biometric Update notes that other skin tone measurements are under development, including the “Colorimetric Skin Tone (CST)” and INESC TEC/Fraunhofer Institute research that uses “ethnicity labels as a continuous variable instead of a discrete value.”

But will there be enough data for variable 8.675309?

What “Gender Shades” Was Not

Mr. Owl, how many licks does it take to get to the Tootsie Roll center of a Tootsie Pop?

A good question. Let’s find out. One, two, three…(bites) three.

From YouTube.

If you think Mr. Owl’s conclusion was flawed, let’s look at Google.

One, two, three…three

I was researching the Monk Skin Tone Scale for a future Bredemarket blog post, but before I share that post I have to respond to an inaccurate statement from Google.

Google began its page “Developing the Monk Skin Tone Scale” with the following statement:

In 2018, the pioneering Gender Shades study demonstrated that commercial, facial-analysis APIs perform substantially worse on images of people of color and women.

Um…no it didn’t.

I will give Google props for using the phrase “facial-analysis,” which clarifies that Gender Shades was an exercise in categorization, not individualization.

But to say that Gender Shades “demonstrated that commercial, facial-analysis APIs perform substantially worse” in certain situations is an ever-so-slight exaggeration.

Kind of like saying that a bad experience at a Mexican restaurant in Lusk, Wyoming demonstrates that all Mexican restaurants are bad.

How? I’ve said this before:

The Gender Shades study evaluated only three algorithms: one from IBM, one from Microsoft, and one from Face++. It did not evaluate the hundreds of other facial recognition algorithms that existed in 2018 when the study was released.

So to conclude that all facial classification algorithms perform substantially worse cannot be supported…because in 2018 the other algorithms weren’t tested.

One, two, three…one hundred and eighty-nine

In 2019, NIST tested 189 software algorithms from 99 developers for demographic bias, and has continued to test for demographic bias since.

In these tests, vendors volunteer to have NIST test their algorithms for demographic bias.

Guess which three vendors have NOT submitted their algorithms to NIST for testing?

You guessed it: IBM, Microsoft, and Face++.

Anyway, more on the Monk Skin Tone Scale here, but I had to share this.

The Best Deepfake Defense is NOT Technological

I think about deepfakes a lot. As the identity/biometric product marketing consultant at Bredemarket, it comes with the territory.

When I’m not researching how fraudsters perpetrate deepfake faces, deepfake voices, and other deepfake modalities, and how presentation attack detection (liveness detection) and injection attack detection fight back…

…I’m researching and describing how Bredemarket’s clients and prospects develop innovative technologies to expose these deepfake fraudsters.

You can spend good money on deepfake-fighting industry solutions, and you can often realize a positive return on investment when purchasing these technologies.

But the best defense against these deepfakes isn’t some whiz bang technology.

It’s common sense.

  • Would your CEO really call you at midnight to expedite an urgent financial transaction?
  • Would that Amazon recruiter want to schedule a Zoom call right now?

If you receive an out-of-the-ordinary request, the first and most important thing to do is to take a deep breath.

A real CEO or recruiter would understand.

And…

…if your company offers a fraud-fighting solution to detect and defeat deepfakes, Bredemarket can help you market your solution. My content, proposal, and analysis offerings are at your service. Let’s talk: https://bredemarket.com/cpa/

CPA

(Imagen 4)

Video Analytics is Nothing New or Special

There is nothing new under the sun, despite the MIT Technology Review’s trumpeting of the “new way” to track people. 

The underlying article is gated, but here is what the public summary says:

“Police and federal agencies have found a controversial new way to skirt the growing patchwork of laws that curb how they use facial recognition: an AI model that can track people based on attributes like body size, gender, hair color and style, clothing, and accessories.

“The tool, called Track and built by the video analytics company Veritone, is used by 400 customers….”

Video analytics is nothing new. Viewing a picture of a particular backpack was a critical investigative lead after the Boston Marathon bombing. Two years later, I was adapting Morpho’s video analytics tool (now IDEMIA’s Augmented Vision) to U.S. use.

And it’s important to note that this is not strictly an IDENTIFICATION tool. The fact that a tool finds someone with a particular body size, gender, hair color and style, clothing, and accessories means nothing by itself. Hundreds of people may share those same attributes.

But when you combine them with an INDIVIDUALIZATION tool such as facial recognition…only then can you uniquely identify someone. (Augmented Vision can do this.)
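The two-stage logic (attributes narrow the pool, facial recognition individualizes within it) can be sketched as follows. The class, attribute names, and scores are my own hypothetical illustration, not Veritone’s or IDEMIA’s actual API:

```python
from dataclasses import dataclass, field

@dataclass
class Detection:
    person_id: str
    attributes: dict = field(default_factory=dict)  # e.g. {"hair": "brown"}
    face_score: float = 0.0  # hypothetical similarity vs. the probe face

def investigate(detections, wanted_attrs, face_threshold=0.9):
    """Stage 1: attribute filtering narrows the pool (many people can
    share the same attributes). Stage 2: face matching individualizes
    within the survivors."""
    pool = [d for d in detections
            if all(d.attributes.get(k) == v for k, v in wanted_attrs.items())]
    return [d.person_id for d in pool if d.face_score >= face_threshold]

detections = [
    Detection("A", {"hair": "brown", "backpack": True}, 0.95),
    Detection("B", {"hair": "brown", "backpack": True}, 0.40),  # same look, different face
    Detection("C", {"hair": "blond", "backpack": False}, 0.97),
]
print(investigate(detections, {"hair": "brown", "backpack": True}))  # → ['A']
```

Note that without the second stage, both A and B survive; the attribute filter alone produces a candidate list, not an identification.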

And if facial recognition itself is only useful as an investigative lead…then video analytics without facial recognition is also only useful as an investigative lead.

Yawn.

(Imagen 3)

Revisiting Amazon Rekognition, May 2025

(Part of the biometric product marketing expert series)

A recent story about Meta face licensing changes caused me to get reflective.

“This openness to facial recognition could signal a turning point that could affect the biometric industry. 

“The so-called “big” biometric players such as IDEMIA, NEC, and Thales are teeny tiny compared to companies like Meta, Alphabet, and Amazon. If the big tech players ever consented to enter the law enforcement and surveillance market in a big way, they could put IDEMIA, NEC, and Thales out of business. 

“However, wholesale entry into law enforcement/surveillance could damage their consumer business, so the big tech companies have intentionally refused to get involved – or if they have gotten involved, they have kept their involvement a deep dark secret.”

Then I thought about the “Really Big Bunch” product that offered the greatest threat to the “Big 3” (IDEMIA, NEC, and Thales)—Amazon Rekognition, which directly competed in Washington County, Oregon until Amazon imposed a one-year moratorium on police use of facial recognition in June 2020. The moratorium was subsequently extended until further notice.

I last looked at Rekognition in June 2024, when Amazon teamed up with HID Global and may have teamed up with the FBI.

So what’s going on now?

Hard to say. I have been unable to find any newly announced Amazon Rekognition law enforcement customers.

That doesn’t mean that nothing is happening. Perhaps the government buyers are keeping their mouths shut.

Plus, there is this page, “Use cases that involve public safety.”

Nothing controversial on the page itself:

  • “Have appropriately trained humans review all decisions to take action that might impact a person’s civil liberties or equivalent human rights.”
  • “Train personnel on responsible use of facial recognition systems.”
  • “Provide public disclosures of your use of facial recognition systems.”
  • “In all cases, facial comparison matches should be viewed in the context of other compelling evidence, and shouldn’t be used as the sole determinant for taking action.” (In other words, INVESTIGATIVE LEAD only.)

Nothing controversial at all, and I am…um…99% certain (geddit?) that IDEMIA, NEC, and Thales would endorse all these points.

But why does Amazon even need such a page, if Rekognition is only used to find missing children?

Maybe this is a pre-June 2020 page that Amazon forgot to take down.

Or maybe not.

Couple this with the news about Meta, and there’s the possibility that the Really Big Bunch may enter the markets currently dominated by the Big Three.

Imagine if the DHS HART system, delayed for years, were resurrected…with Alphabet or Amazon or Meta technology.

We are still in the time of uncertainty…and may never go back.

(Large and small wildebeests via Imagen 3)