If We Don’t Train Facial Recognition Users, There Will Be No Facial Recognition

(Part of the biometric product marketing expert series)

We get all sorts of great tools, but do we know how to use them? And what are the consequences if we don’t know how to use them? Could we lose the use of those tools entirely due to bad publicity from misuse?

Hida Viloria. By Intersex77 – Own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=98625035

Do your federal facial recognition users know what they are doing?

I recently saw a WIRED article that primarily talked about submitting Parabon Nanolabs-generated images to a facial recognition program. But buried in the article was this alarming quote:

According to a report released in September by the US Government Accountability Office, only 5 percent of the 196 FBI agents who have access to facial recognition technology from outside vendors have completed any training on how to properly use the tools.

From https://www.wired.com/story/parabon-nanolabs-dna-face-models-police-facial-recognition/

Now I had some questions after reading that sentence: namely, what does “have access” mean? To answer those questions, I had to find the study itself, GAO-23-105607, Facial Recognition Services: Federal Law Enforcement Agencies Should Take Actions to Implement Training, and Policies for Civil Liberties.

It turns out that the study is NOT limited to FBI use of facial recognition services, but also addresses six other federal agencies: the Bureau of Alcohol, Tobacco, Firearms and Explosives (the guvmint doesn’t believe in the Oxford comma); U.S. Customs and Border Protection; the Drug Enforcement Administration; Homeland Security Investigations; the U.S. Marshals Service; and the U.S. Secret Service.

In addition, the study confines itself to four facial recognition services: Clearview AI, IntelCenter, Marinus Analytics, and Thorn. It does not address other uses of facial recognition by the agencies, such as the FBI’s use of IDEMIA in its Next Generation Identification system (IDEMIA facial recognition technology is also used by the Department of Defense).

Two of the GAO’s findings:

  • Initially, none of the seven agencies required users to complete facial recognition training. As of April 2023, two of the agencies (Homeland Security Investigations and the U.S. Marshals Service) required training, two (the FBI and Customs and Border Protection) did not, and the other three had quit using these four facial recognition services.
  • The FBI stated that facial recognition training was recommended as a “best practice,” but not mandatory. And when something isn’t mandatory, you can guess what happened:

GAO found that few of these staff completed the training, and across the FBI, only 10 staff completed facial recognition training of 196 staff that accessed the service. FBI said they intend to implement a training requirement for all staff, but have not yet done so. 

From https://www.gao.gov/products/gao-23-105607.

So if you use my three levels of importance (TLOI) model, facial recognition training is important, but not critically important. Therefore, it wasn’t done.

The detailed version of the report includes additional information on the FBI’s training requirements…I mean recommendations:

Although not a requirement, FBI officials said they recommend (as a best practice) that some staff complete FBI’s Face Comparison and Identification Training when using Clearview AI. The recommended training course, which is 24 hours in length, provides staff with information on how to interpret the output of facial recognition services, how to analyze different facial features (such as ears, eyes, and mouths), and how changes to facial features (such as aging) could affect results.

From https://www.gao.gov/assets/gao-23-105607.pdf.

However, this type of training was not recommended for all FBI users of Clearview AI, and was not recommended for any FBI users of Marinus Analytics or Thorn.

I should note that the report was issued in September 2023, based upon data gathered earlier in the year, and that for all I know the FBI now mandates such training.

Or maybe it doesn’t.

What about your state and local facial recognition users?

Of course, training for federal facial recognition users is only a small part of the story, since most of the law enforcement activity takes place at the state and local level. State and local users need training so that they can understand:

  • The anatomy of the face, and how it affects comparisons between two facial images.
  • How cameras work, and how this affects comparisons between two facial images.
  • How poor quality images can adversely affect facial recognition.
  • How facial recognition should ONLY be used as an investigative lead.

If state and local users received this training, none of the false arrests over the last few years would have taken place.

What are the consequences of no training?

May I repeat that?

If facial recognition users had been trained, none of the false arrests over the last few years would have taken place.

  • The users would have realized that the poor images were not of sufficient quality to determine a match.
  • The users would have realized that even if they had been of sufficient quality, facial recognition must only be used as an investigative lead, and once other data had been checked, the cases would have fallen apart.

But the false arrests gave the privacy advocates the ammunition they needed.

Not to insist upon proper training in the use of facial recognition.

But to ban the use of facial recognition entirely.

Like nuclear or biological weapons, facial recognition’s threat to human society and civil liberties far outweighs any potential benefits. Silicon Valley lobbyists are disingenuously calling for regulation of facial recognition so they can continue to profit by rapidly spreading this surveillance dragnet. They’re trying to avoid the real debate: whether technology this dangerous should even exist. Industry-friendly and government-friendly oversight will not fix the dangers inherent in law enforcement’s discriminatory use of facial recognition: we need an all-out ban.

From https://www.banfacialrecognition.com/

(And just wait until the anti-facial recognition forces discover that this is not only a plot of evil Silicon Valley, but also a plot of evil non-American foreign interests located in places like Paris and Tokyo.)

Because the anti-facial recognition forces want us to remove the use of technology and go back to the good old days…of eyewitness misidentification.

Eyewitness misidentification contributes to an overwhelming majority of wrongful convictions that have been overturned by post-conviction DNA testing.

Eyewitnesses are often expected to identify perpetrators of crimes based on memory, which is incredibly malleable. Under intense pressure, through suggestive police practices, or over time, an eyewitness is more likely to find it difficult to correctly recall details about what they saw. 

From https://innocenceproject.org/eyewitness-misidentification/.

And these people don’t stay in jail for a night or two. Some of them remain in prison for years until the eyewitness misidentification is reversed.

Archie Williams moments after his exoneration on March 21, 2019. Photo by Innocence Project New Orleans. From https://innocenceproject.org/fingerprint-database-match-establishes-archie-williams-innocence/

Eyewitnesses, unlike facial recognition algorithms, cannot be tested for accuracy or bias.

And if we don’t train facial recognition users in the technology, then we’re going to lose it.

The Double Loop Podcast Discusses Research From the Self-Styled “Inventor of Cross-Fingerprint Recognition”

(Part of the biometric product marketing expert series)

Apologies in advance, but if you’re NOT interested in fingerprints, you’ll want to skip over this Bredemarket identity/biometrics post, my THIRD one about fingerprint uniqueness and/or similarity or whatever because the difference between uniqueness and similarity really isn’t important, is it?

Yes, one more post about the study whose principal author was Gabe Guo, the self-styled “inventor of cross-fingerprint recognition.”

In case you missed it

In case you missed my previous writings on this topic:

But don’t miss this

Well, two other people have weighed in on the paper: Glenn Langenburg and Eric Ray, co-presenters on the Double Loop Podcast. (“Double loop” is a fingerprint thing.)

So who are Langenburg and Ray? You can read their full biographies here, but both of them are certified latent print examiners. This certification, administered by the International Association for Identification, is designed to ensure that the certified person is knowledgeable about both latent (crime scene) fingerprints and known fingerprints, and how to determine whether or not two prints come from the same person. If someone is going to testify in court about fingerprint comparison, this certification is recognized as a way to designate someone as an expert on the subject, as opposed to a college undergraduate. (The list of IAI certified latent print examiners, current as of December 2023, can be found here in PDF form.)

Podcast episode 264 dives into the Columbia study in detail, including what the study said, what it didn’t say, and what the publicity for the study said that doesn’t match the study.

Eric and Glenn respond to the recent allegations that a computer science undergraduate at Columbia University, using Artificial Intelligence, has “proven that fingerprints aren’t unique” or at least…that’s how the media is mischaracterizing a new published paper by Guo, et al. The guys dissect the actual publication (“Unveiling intra-person fingerprint similarity via deep contrastive learning” in Science Advances, 2024 by Gabe Guo, et al.). They state very clearly what the paper actually does show, which is a far cry from the headlines and even public dissemination originating from Columbia University and the author. The guys talk about some of the important limitations of the study and how limited the application is to real forensic investigations. They then explore some of the media and social media outlets that have clearly misunderstood this paper and seem to have little understanding of forensic science. Finally, Eric and Glenn look at some quotes and comments from knowledgeable sources who also have recognized the flaws in the paper, the authors’ exaggerations, and lack of understanding of the value of their findings.

From https://doublelooppodcast.com/2024/01/fingerprints-proven-by-ai-to-not-be-unique-episode-264/.

Yes, the episode is over an hour long, but if you want to hear a good discussion of the paper that goes beyond the headlines, I strongly recommend that you listen to it.

TL;DR

If you’re in a TL;DR frame of mind, I’ll just offer one tidbit: “uniqueness” and “similarity” are not identical. Frankly, they’re not even similar.

Will Ferrell and Chad Smith, or maybe vice versa. Fair use. From https://www.billboard.com/music/music-news/will-ferrell-chad-smith-red-hot-benefit-chili-peppers-6898348/, originally from NBC.

Identification Perfection is Impossible

(Part of the biometric product marketing expert series)

There are many different types of perfection.

Jehan Cauvin (we don’t spell his name like he spelled it). By Titian – Bridgeman Art Library: Object 80411, Public Domain, https://commons.wikimedia.org/w/index.php?curid=6016067

This post concentrates on IDENTIFICATION perfection, or the ability to enjoy zero errors when identifying individuals.

The risk of claiming identification perfection (or any perfection) is that a SINGLE counter-example disproves the claim.

  • If you assert that your biometric solution offers 100% accuracy, a SINGLE false positive or false negative shatters the assertion.
  • If you claim that your presentation attack detection solution exposes deepfakes (face, voice, or other), then a SINGLE deepfake that gets past your solution disproves your claim.
  • And as for the pre-2009 claim that latent fingerprint examiners never make a mistake in an identification…well, ask Brandon Mayfield about that one.
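The arithmetic behind the first bullet can be sketched in a few lines. This is illustrative only, with made-up similarity scores and a made-up threshold (none of these numbers come from any real system): even a single impostor score above the threshold yields a nonzero false positive rate, which is all it takes to shatter a "100% accuracy" claim.

```python
# Illustrative only: made-up comparison scores and threshold.
# One impostor match above threshold disproves a "zero errors" claim.

genuine_scores = [0.91, 0.88, 0.95, 0.90]    # same-person comparison scores
impostor_scores = [0.12, 0.20, 0.85, 0.15]   # different-person scores (one outlier)
threshold = 0.80

false_negatives = sum(s < threshold for s in genuine_scores)
false_positives = sum(s >= threshold for s in impostor_scores)

fnr = false_negatives / len(genuine_scores)   # false negative rate
fpr = false_positives / len(impostor_scores)  # false positive rate

print(fnr, fpr)  # 0.0 0.25 — a single false positive, and "perfection" is gone
```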

In fact, I go so far as to avoid using the phrase “no two fingerprints are alike.” Many years ago (before 2009) in an International Association for Identification meeting, I heard someone justify the claim by saying, “We haven’t found a counter-example yet.” That doesn’t mean that we’ll NEVER find one.

You’ve probably heard me tell the story before about how I misspelled the word “quality.”

In a process improvement document.

While employed by Motorola (pre-split).

At first glance, it appears that Motorola would be the last place to make a boneheaded mistake like that. After all, Motorola is known for its focus on quality.

But in actuality, Motorola was the perfect place to make such a mistake, since it was one of the champions of the “Six Sigma” philosophy (which targets a maximum of 3.4 defects per million opportunities). Motorola realized that manufacturing perfection is impossible, so manufacturers (and the people in Motorola’s weird Biometric Business Unit) should instead concentrate on reducing the error rate as much as possible.
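For the curious, the 3.4 figure comes from the standard DPMO (defects per million opportunities) formula. The document counts below are made up for illustration; only the formula and the 3.4 target are from the Six Sigma literature.

```python
# Hedged sketch: the standard DPMO formula, with made-up inputs.
# 3.4 DPMO is the commonly cited "Six Sigma" quality target.

def dpmo(defects, units, opportunities_per_unit):
    return defects / (units * opportunities_per_unit) * 1_000_000

# e.g. 17 defects across 1,000 documents of 5,000 words (opportunities) each
print(dpmo(17, 1_000, 5_000))  # 3.4 — right at the Six Sigma target
```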

So one misspelling could be tolerated, but I shudder to think what would have happened if I had misspelled “quality” a second time.

In Which I “Nyah Nyah” Tongue Identification

(Part of the biometric product marketing expert series)

If you listen closely, you can hear about all sorts of wonderful biometric identifiers. They range from the common (such as fingerprint ridges and detail) to the esoteric (my favorite was the 2013 story about Japanese car seats that captured butt prints).

The butt sensor at work in a Japanese lab. (Advanced Institute of Industrial Technology photo). From https://www.cartalk.com/content/bottom-line-japanese-butt-sensor-protect-your-car

A former coworker who left the biometric world for the world of Adobe Experience Manager (AEM) expert consulting brought one of the latter to my attention.

Tongue prints.

This article, shared with me by Krassimir Boyanov of KBWEB Consult, links to this article from Science ABC.

As is usual with such articles, the title is breathless: “How Tongue Prints Are Going To Revolutionize Identification Methods.”

Forget about fingerprints and faces and irises and DNA and gait recognition and butt prints. Tongue prints are the answer!

Benefits of tongue print biometrics

To its credit, the article does point out two benefits of using tongue prints as a biometric identifier.

  • Consent and privacy. Unlike fingerprints and irises (and faces) which are always exposed and can conceivably be captured without the person’s knowledge, the subject has to provide consent before a tongue image is captured. For the most part, tongues are privacy-perfect.
  • Liveness. The article claims that “sticking out one’s tongue is an undeniable ‘proof of life.'” Perhaps that’s an exaggeration, but it is admittedly much harder to fake a tongue than it is to fake a finger or a face.

Are tongues unique?

But the article also makes these claims.

Two main attributes are measured for a tongue print. First is the tongue shape, as the shape of the tongue is unique to everyone.

From https://www.scienceabc.com/innovation/how-tongue-prints-are-going-to-revolutionize-identification-methods.html

The other notable feature is the texture of the tongue. Tongues consist of a number of ridges, wrinkles, seams and marks that are unique to every individual.

From https://www.scienceabc.com/innovation/how-tongue-prints-are-going-to-revolutionize-identification-methods.html

So tongue shape and tongue texture are unique to every individual?

Prove it.

Even for the more common biometric identifiers, we do not have scientific proof that they are unique to every individual.

But at least these modalities are under study. Has anyone conducted a rigorous study to prove or disprove the uniqueness of tongues? By “rigorous,” I mean a study that has evaluated millions of tongues in the same way that NIST has evaluated millions of fingerprints, faces, and irises.

We know that NIST hasn’t studied tongues.

I did find this 2017 tongue identification pilot study, but it included only a whopping 20 participants. And the study authors (who are always seeking funding anyway) admitted that “large-scale studies are required to validate the results.”

Conclusion

So if a police officer tells you to stick out your tongue for identification purposes, think twice.

More on NIST’s FRTE-FATE Split

(Part of the biometric product marketing expert series)

I’ve talked about why NIST separated its FRVT efforts into FRTE and FATE.

But I haven’t talked about how NIST did this.

And as you all know, the second most important question after why is how.

Why the great renaming took place

As I noted back in August, NIST chose to split its Face Recognition Vendor Test (FRVT) into two parts—FRTE (Face Recognition Technology Evaluation) and FATE (Face Analysis Technology Evaluation).

At the time, NIST explained why it did this:

To bring clarity to our testing scope and goals

From https://www.nist.gov/programs-projects/face-technology-evaluations-frtefate

In essence, the Face Recognition Vendor Test had become a hodgepodge of different things. Some of the older tests were devoted to identification of individuals (face recognition), while some of the newer tests were looking at issues other than individual identification (face analysis).

Of course, this confusion between identification and non-identification is nothing new, which is why some of the people who read Gender Shades falsely concluded that if the three algorithms couldn’t classify people by sex or race, they couldn’t identify them as individuals.

But I digress. (I won’t do it again.)

NIST explained at the time:

Tracks that involve the processing and analysis of images will run under the FATE activity, and tracks that pertain to identity verification will run under FRTE.

From https://www.nist.gov/programs-projects/face-technology-evaluations-frtefate

How the great renaming happened in practice

What is in FRTE?

To date, most of my personal attention (and probably most of yours) has been paid to what was previously called FRVT 1:1 and FRVT 1:N.

These two tests are now part of FRTE, and were simply renamed to FRTE 1:1 and FRTE 1:N. They’ve even (for now) retained the same URLs, although that may change in the future.
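For readers new to the naming, the 1:1 versus 1:N distinction can be sketched with toy data. The template vectors, the cosine-similarity matcher, and the 0.95 threshold below are all my own illustrative assumptions, not anything from NIST's methodology.

```python
# Illustrative sketch (made-up face templates) of the 1:1 vs 1:N distinction
# behind the FRTE test names.
import math

def similarity(a, b):
    # cosine similarity between two face templates
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

probe = [0.9, 0.1, 0.3]
gallery = {"alice": [0.88, 0.12, 0.31], "bob": [0.1, 0.9, 0.2]}

# 1:1 verification: compare the probe against ONE claimed identity
verified = similarity(probe, gallery["alice"]) >= 0.95

# 1:N identification: search the probe against the WHOLE gallery
best = max(gallery, key=lambda name: similarity(probe, gallery[name]))

print(verified, best)  # True alice
```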

Other tests that are now part of the FRTE bucket include:

The “Still Face and Iris 1:N Identification” effort (PDF) has apparently also been reclassified as an FRTE effort.

What is in FATE?

Obviously, presentation attack detection (PAD) testing falls into the FATE category, since this does not measure the identification of an individual, but whether a person is truly there or not. The first results have been released; I previously wrote about this here.

The next obvious category is age estimation testing, which again does not try to identify an individual, but estimate how old the person is. This testing has not yet started, but I talked about the concept of age estimation previously.

Other parts of FATE include:

It is very possible that NIST will add additional FRTE and FATE tests in the future. These may be brand new tests, or variations of existing tests. For example, when all of us started wearing face masks a couple of years ago, NIST simulated face masks on its existing facial images and created the data for the face mask test described above.

What do you think NIST should test next, either in the FRTE or the FATE category?

More on morphing

And yes, I’m concluding this post with this video. By the way, this is the full version that (possibly intentionally) caused a ton of controversy and was immediately banned for nearly a quarter century. The morphing starts at 5:30. The crotch-grabbing starts right after the 7:00 mark.

From https://www.youtube.com/watch?v=pTFE8cirkdQ

But on a less controversial note, let’s give equal time to Godley & Creme.

From https://www.youtube.com/watch?v=ypMnBuvP5kA

Perhaps because of the lack of controversy with Godley & Creme’s earlier effort, Ashley Clark prefers it to the later Michael Jackson/John Landis effort.

Whereas Godley & Creme used editing technology to embrace and reflect the ambiguous murk of thwarted love, Jackson and Landis imposed an artificial sheen on the complexity of identity; a sheen that feels poignant if not outright tragic in the wake of Jackson’s ultimate appearance and fate. Really, it did matter if he was black or white.

From https://ashleyclark.substack.com/p/black-or-white-and-crying-all-over

But I digress. (I lied.)

Sadly, morphing escaped from the hands of music video directors and artists and entered the world of fraudsters, as Regula explains.

One of the main application areas of facial morphing for criminal purposes is forging identity documents. The attack targets face-based identity verification systems and procedures. Most often it involves passports; however, any ID document with a photo can be compromised.

One well-known case happened in 2018 when a group of activists merged together a photo of Federica Mogherini, the High Representative of the European Union for Foreign Affairs and Security Policy, and a member of their group. Using this morphed photo, they managed to obtain an authentic German passport.

From https://regulaforensics.com/blog/facial-morphing/

Which is why NIST didn’t just cry about the problem. They tested it to assist the vendors in solving the problem.
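For the curious, the crudest building block of a face morph is a pixel-wise cross-dissolve between two aligned images. Real morphing attacks also warp facial landmarks before blending, so this is only an illustrative fragment of the technique, with tiny stand-in arrays instead of real photographs.

```python
# Illustrative only: the blend step of a morph is a weighted average of two
# aligned images. Real attacks also warp landmarks first; this skips that.
import numpy as np

def cross_dissolve(img_a, img_b, alpha=0.5):
    # alpha = 0 returns img_a unchanged; alpha = 1 returns img_b unchanged
    return ((1 - alpha) * img_a + alpha * img_b).astype(np.uint8)

a = np.zeros((4, 4), dtype=np.uint8)       # stand-in for subject A's image
b = np.full((4, 4), 200, dtype=np.uint8)   # stand-in for subject B's image

morph = cross_dissolve(a, b, alpha=0.5)
print(morph[0, 0])  # 100 — each pixel is halfway between the two sources
```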

The 22 (or more) Types of Content That Product Marketers Create

(Part of the biometric product marketing expert series)

(Updated blog post count 10/23/2023)

I mentioned something in passing in Bredemarket’s recent go-to-market post that I think needs a little more highlighting. So here is a deeper dive into the 22 types of content that product marketers create. (Well, at least 22. I’m probably missing some.)

And by the way, I have created all 22 of these types of content, from blog posts and battlecards to smartphone application content and scientific book chapters. And I can create it for you.

Taylor Swift "22" single cover.
By “22” (Single by Taylor Swift) on 7digital, Fair use, https://en.wikipedia.org/w/index.php?curid=39857014

“But John,” you’re saying, “Don’t you know anything? Content is created by content marketers!”

Read on.

The NON difference between product marketing and content marketing

If you consult with the experts, they will tell you that there is a distinct division between product marketing and content marketing, and that they are two entirely separate disciplines.

Janus, two-headed.
By Loudon dodd – Own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=7404342

Why is it that so many business-to-business (B2B) marketers confuse product marketing with content marketing?

Because it requires a lot of discipline. That’s why.

B2B marketers who get it right understand the difference between these two fundamentally different types of marketing, what their purposes are and how to use them correctly.

From https://www.forbes.com/sites/forbescommunicationscouncil/2019/08/27/is-your-business-confusing-product-marketing-and-content-marketing/?sh=2edf86f51d88

There certainly is a difference—if you work in a firm that enforces strict definitions and separation between the two.

U.S. - Mexico border.
No dark sarcasm in the blog post. By US Border Patrol – Department of Homeland Security, United States Border Patrol http://www.dhs.gov/xlibrary/photos/sand-dune-fence.jpg, Public Domain, https://commons.wikimedia.org/w/index.php?curid=11951642

Some firms (especially startups) don’t have the luxury to enforce such definitions. They don’t have separate teams to create awareness content, consideration content, and conversion content. They have one team (or perhaps one person) to create all that content PLUS other stuff that I’ll discuss later.

One-man band.
sin, a one-man band in New York City. By slgckgc – https://www.flickr.com/photos/slgc/8037345945/, CC BY 2.0, https://commons.wikimedia.org/w/index.php?curid=47370848

For example, during my most recent stint as a product marketing employee at a startup, the firm had no official content marketers, so the product marketers had to create a lot of non-product related content. So we product marketers were the de facto content marketers for the company too. (Sadly, we didn’t get two salaries for filling two roles.)

Why did the product marketers end up as content marketers? It turns out that it makes sense—after all, people who write about your product in the lower funnel stages can also write about your product in the upper funnel stages, and can certainly write about OTHER things, such as company descriptions, speaker submissions, and speaker biographies.

Creating external content and internal content

Man holding a huge pencil.
Designed by Freepik.

And when you find a “you can pry my keyboard out of my cold dead hands” person, you’ll naturally want to get them to write other things.

As a result, I’ve written a ton of stuff over my last 29 years in identity/biometrics. It didn’t take a great leap for me to self-identify as the identity content marketing expert and the biometric content marketing expert (and other expert definitions; I’m an expert in creating expert titles).

I’ve compiled a summary of the types of content that I’ve created over the years, not only for Bredemarket’s clients, but also for my employers at Incode Technologies, IDEMIA, MorphoTrak, Motorola, and Printrak.

Not all of these were created when I was in a formal product marketing role, but depending upon your product or service, you may need any of these content types to support the marketing of your product/service.

It’s helpful to divide the list into two parts: the external (customer-facing) content, and the internal (company-only) content.

10 types of external content I have created

External content is what most people think of when they talk about product marketing or content marketing. After all, this is the visible stuff that the prospects see, and which can move them toward a purchase (conversion). The numbers after some content types indicate the quantities of pieces of collateral that I have created.

  • Articles
  • Blog Posts (500+, including this one)
  • Briefs/Data/Literature Sheets
  • Case Studies (12+)
  • Proposals (100+)
  • Scientific Book Chapters
  • Smartphone Application Content
  • Social Media (Facebook, Instagram, LinkedIn, Threads, TikTok, Twitter)
  • Web Page Content
  • White Papers and E-Books

Here’s a video showing some of the external content that I have created for Bredemarket.

Bredemarket Work Samples, August 2023. Previously posted at https://bredemarket.com/2023/08/14/bredemarket-work-samples-the-video-edition/

9 types of internal content I have created

While external content is sexy, internal content is extremely important, since it’s what equips the people inside a firm to promote your product or service. The numbers after some content types indicate the quantities of pieces of collateral that I have created.

  • Battlecards (80+)
  • Competitive Analyses
  • Event/Conference/Trade Show Demonstration Scripts
  • Plans
  • Playbooks
  • Proposal Templates
  • Quality Improvement Documents
  • Requirements
  • Strategic Analyses

And here are 3 more types

Some content can either be external or internal. Again, numbers indicate the quantities of pieces of collateral I have created.

  • Email Newsletters (200+)
  • FAQs
  • Presentations

Content I can create for you

Does your firm need help creating one of these types of content?

Maybe two?

Maybe 22?

I can create content full-time for you

If your firm needs to create a lot of content types for your products, then consider hiring me as your full-time Senior Product Marketing Manager. My LinkedIn profile is here, documenting my 29 years of experience in identity/biometric technology as a product marketer, a strategist, and in other roles.

Or I can consult for you

But if your firm needs a more limited amount of content and can’t employ me on a full-time basis, then you can contract with me through my consulting firm Bredemarket. For example, I could write a single 400-600 word blog post or short article for you.

Or 2 blog posts/articles.

Or 22 blog posts/articles. (The more the merrier.)

Do you need these services?

Authorize Bredemarket, Ontario, California’s content marketing expert, to help your firm produce words that return results.

Bredemarket logo

And yes, I know this post had two separate calls to action. What do you expect from a guy who thinks product marketers are content marketers?

And here’s one for the Swifties. No, it’s not “Taylor’s version.” But we all know that she is the only person who can reconcile differences between so-called standards bodies, since any standard Swift champions will become the de facto standard.

From https://www.youtube.com/watch?v=AgFeZr5ptV8

Pipe Down Before Panicking Over Voice Resonance Alteration

(Part of the biometric product marketing expert series)

By Steve Tan [steve.tan@pvc4pipes.com] – http://www.pvc4pipes.com, Attribution, https://commons.wikimedia.org/w/index.php?curid=22089684

On the surface, it sounds scary. Tricking automated speaker identification systems with PVC pipe?

(D)igital security engineers at the University of Wisconsin–Madison have found these systems are not quite as foolproof when it comes to a novel analog attack. They found that speaking through customized PVC pipes — the type found at most hardware stores — can trick machine learning algorithms that support automatic speaker identification systems.

From https://news.wisc.edu/down-the-tubes-common-pvc-pipes-can-hack-voice-identification-systems/

So how does the trick work?

The project began when the team began probing automatic speaker identification systems for weaknesses. When they spoke clearly, the models behaved as advertised. But when they spoke through their hands or talked into a box instead of speaking clearly, the models did not behave as expected.

(Shimaa) Ahmed investigated whether it was possible to alter the resonance, or specific frequency vibrations, of a voice to defeat the security system. Because her work began while she was stuck at home due to COVID-19, Ahmed began by speaking through paper towel tubes to test the idea. Later, after returning to the lab, the group hired Yash Wani, then an undergraduate and now a PhD student, to help modify PVC pipes at the UW Makerspace. Using various diameters of pipe purchased at a local hardware store, Ahmed, Wani and their team altered the length and diameter of the pipes until they could produce the same resonance as the voice they were attempting to imitate.

Eventually, the team developed an algorithm that can calculate the PVC pipe dimensions needed to transform the resonance of almost any voice to imitate another. In fact, the researchers successfully fooled the security systems with the PVC tube attack 60 percent of the time in a test set of 91 voices, while unaltered human impersonators were able to fool the systems only 6 percent of the time.

From https://news.wisc.edu/down-the-tubes-common-pvc-pipes-can-hack-voice-identification-systems/
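A back-of-the-envelope sketch of the physics being exploited: the fundamental resonance of an open-ended tube is f = v / (2L), so changing the pipe length L shifts which frequencies get emphasized. This simple open-tube model is my own assumption for illustration; the researchers' actual algorithm accounts for more than this.

```python
# Back-of-the-envelope sketch (my simplifying assumption, not the paper's
# model): an open-ended tube's fundamental resonance is f = v / (2L), so
# tuning the length L tunes which frequencies the tube emphasizes.

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 °C

def fundamental_hz(length_m):
    # fundamental resonant frequency of an open-open tube of given length
    return SPEED_OF_SOUND / (2 * length_m)

def length_for_hz(target_hz):
    # tube length needed to place the fundamental at a target frequency
    return SPEED_OF_SOUND / (2 * target_hz)

print(round(fundamental_hz(0.17), 1))  # 1008.8 — Hz for a 17 cm tube
print(length_for_hz(500.0))            # 0.343 — metres to emphasize 500 Hz
```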

Impressive results. But…

Who was fooled?

We’ve run across these sweeping biometric claims before, specifically in the first test that asserted that face categorization algorithms were racist and sexist. (Face categorization, not face recognition. That’s another story.) If you didn’t view the Gender Shades website, you’d immediately assume that the hundreds of existing face categorization algorithms had just been proven to be racist and sexist. But if you read the Gender Shades study, you’ll see that it only tested three algorithms (IBM, Microsoft, and Face++). Similarly, the Master Faces study only looked at three algorithms (Dlib, FaceNet, and SphereFace).

So let’s ask the question: which voice algorithms did UW-Madison test?

Here’s what the study (PDF) says.

We evaluate two state-of-the-art ASI models: (1) the x-vector network [51] implemented by Shamsabadi et al. [45], and (2) the emphasized channel attention, propagation and aggregation time delay neural network (ECAPA-TDNN) [17], implemented by SpeechBrain. Both models were trained on VoxCeleb dataset [15, 36, 37], a benchmark dataset for ASI. The x-vector network is trained on 250 speakers using 8 kHz sampling rate. ECAPA-TDNN is trained on 7205 speakers using 16 kHz sampling rate. Both models report a test accuracy within 98-99%.

From https://www.usenix.org/system/files/sec23fall-prepub-452-ahmed.pdf

So what we know is that this test, which used these two ASI models trained on a particular dataset, demonstrated an ability to fool systems 60 percent of the time.

But…

  • What does this mean for other ASI algorithms, including the commercial algorithms in use today?
  • And what does it mean when other datasets are used?

In other words (and I’m adapting my own text here), how do the results of this study affect “current automatic speaker identification products”?

The answer is “We don’t know.”

So pipe down…until we actually test commercial algorithms for this technique.

But I’m sure that the UW-Madison researchers and I agree on one thing: more research is needed.

The Difference Between Identity Factors and Identity Modalities

(Part of the biometric product marketing expert series)

I know that I’m the guy who likes to say that it’s all semantics. After all, I’m the person who has referred to five-page long documents as “battlecards.”

But sometimes the semantics are critically important. Take the terms “factors” and “modalities.” On the surface they sound similar, but in practice there is an extremely important difference between factors of authentication and modalities of authentication. Let’s discuss.

What is a factor?

To answer the question “what is a factor,” let me steal from something I wrote back in 2021 called “The five authentication factors.”

Something You Know. Think “password.” And no, passwords aren’t dead. But the use of your mother’s maiden name as an authentication factor is hopefully decreasing.

Something You Have. I’ve spent much of the last ten years working with this factor, primarily in the form of driver’s licenses. (Yes, MorphoTrak proposed driver’s license systems. No, they eventually stopped doing so. But obviously IDEMIA North America, the former MorphoTrust, has implemented a number of driver’s license systems.) But there are other examples, such as hardware or software tokens.

Something You Are. I’ve spent…a long time with this factor, since this is the factor that includes biometrics modalities (finger, face, iris, DNA, voice, vein, etc.). It also includes behavioral biometrics, provided that they are truly behavioral and relatively static.

Something You Do. The Cybersecurity Man chose to explain this in a non-behavioral fashion, such as using swiping patterns to unlock a device. This is different from something such as gait recognition, which supposedly remains constant and is thus classified as behavioral biometrics.

Somewhere You Are. This is an emerging factor, as smartphones become more and more prevalent and locations are therefore easier to capture. Even then, however, precision isn’t always as good as we want it to be. For example, when you and a few hundred of your closest friends have illegally entered the U.S. Capitol, you can’t use geolocation alone to determine who exactly is in Speaker Pelosi’s office.

From https://bredemarket.com/2021/03/02/the-five-authentication-factors/

(By the way, if you search the series of tubes for reading material on authentication factors, you’ll find a lot of references to only three authentication factors, including references from some very respectable sources. Those sources are only 60% right, since they leave off the final two factors I listed above. It’s five factors of authentication, folks. Maybe.)

The one striking thing about the five factors is that while they can all be used to authenticate (and verify) identities, they are inherently different from one another. The ridges of my fingerprint bear no relation to my 16-character password, nor do they bear any relation to my driver’s license. These differences are critical, as we shall see.

What is a modality?

In identity usage, a modality refers to different variations of the same factor. This is most commonly used with the “something you are” (biometric) factor, but it doesn’t have to be.

Biometric modalities

The identity company Aware, which offers multiple biometric solutions, spent some time discussing several different biometric modalities.

[M]any businesses and individuals (are adopting) biometric authentication as it been established as the most secure authentication method surpassing passwords and pins. There are many modalities of biometric authentication to pick from, but which method is the best?  

From https://www.aware.com/blog-which-biometric-authentication-method-is-the-best/

After looking at fingerprints, faces, voices, and irises, Aware basically answered its “best” question by concluding “it depends.” Different modalities have their own strengths and weaknesses, depending upon the use case. (If you wear thick gloves as part of your daily work, forget about fingerprints.)

ID R&D goes a step further and argues that it’s best to use multimodal biometrics, in which the two biometrics are face and voice. (By an amazing coincidence, ID R&D offers face and voice solutions.)

And there are many other biometric modalities.

From Sandeep Kumar, A. Sony, Rahul Hooda, Yashpal Singh, in Journal of Advances and Scholarly Researches in Allied Education | Multidisciplinary Academic Research, “Multimodal Biometric Authentication System for Automatic Certificate Generation.”

Non-biometric modalities

But the word “modalities” is not reserved for biometrics alone. The scientific paper “Multimodal User Authentication in Smart Environments: Survey of User Attitudes,” just released in May, includes this image that lists various modalities. As you can see, two of the modalities are not like the others.

From Aloba, Aishat & Morrison-Smith, Sarah & Richlen, Aaliyah & Suarez, Kimberly & Chen, Yu-Peng & Ruiz, Jaime & Anthony, Lisa. (2023). Multimodal User Authentication in Smart Environments: Survey of User Attitudes. Creative Commons Attribution 4.0 International
  • The three modalities in the middle—face, voice, and fingerprint—are all clearly biometric “something you are” modalities.
  • But the modality on the left, “Make a body movement in front of the camera,” is not a biometric modality (despite its reference to the body), but is an example of “something you do.”
  • Passwords, of course, are “something you know.”

In fact, each authentication factor has multiple modalities.

  • For example, a few of the modalities associated with “something you have” include driver’s licenses, passports, hardware tokens, and even smartphones.
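The factor-to-modality relationship boils down to a simple lookup. The mapping below is my own illustrative sketch; the groupings and labels are mine, not an official taxonomy.

```python
# Illustrative factor-to-modality mapping; the groupings are this
# sketch's own labels, not an official taxonomy.

FACTOR_MODALITIES = {
    "something you know": ["password", "PIN", "security question"],
    "something you have": ["driver's license", "passport",
                           "hardware token", "smartphone"],
    "something you are": ["fingerprint", "face", "iris", "voice", "DNA"],
    "something you do": ["swipe pattern", "typing rhythm"],
    "somewhere you are": ["GPS location", "IP geolocation"],
}

def shared_factor(modality_a, modality_b):
    """Return the factor name if both modalities belong to the same
    factor, else None."""
    for factor, modalities in FACTOR_MODALITIES.items():
        if modality_a in modalities and modality_b in modalities:
            return factor
    return None

# Face and iris are modalities of the same factor...
assert shared_factor("face", "iris") == "something you are"
# ...but a password and a fingerprint span two different factors.
assert shared_factor("password", "fingerprint") is None
```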

Why multifactor is (usually) more robust than multimodal

Modalities within a single authentication factor are more closely related than modalities within multiple authentication factors. As I mentioned above when talking about factors, there is no relationship between my fingerprint, my password, and my driver’s license. However, there is SOME relationship between my driver’s license and my passport, since the two share some common information such as my legal name and my date of birth.

What does this mean?

  • If I’ve fraudulently created a fake driver’s license in your name, I already have some of the information that I need to create a fake passport in your name.
  • If I’ve fraudulently created a fake iris, there’s a chance that I might already have some of the information that I need to create a fake face.
  • However, if I’ve bought your Coinbase password on the dark web, that doesn’t necessarily mean that I was able to also buy your passport information on the dark web (although it is possible).

Therefore, while multimodal authentication is better than unimodal authentication, multifactor authentication is usually better still (unless, as Incode Technologies notes, one of the factors is really, really weak).
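A toy probability model makes the point concrete. Both numbers below (a 1% base compromise rate and a large correlation penalty for same-factor credentials) are invented for illustration; real-world compromise rates vary wildly.

```python
# Toy model: why multifactor usually beats multimodal.
# Assume each credential has a 1% chance of compromise, but once one
# credential falls, a related credential in the SAME factor becomes far
# easier to forge (they share data, like legal name and date of birth).
# Both numbers below are invented for illustration.

P_COMPROMISE = 0.01        # invented base compromise rate
SAME_FACTOR_BONUS = 0.5    # invented: the related credential is much easier

def p_defeat_same_factor():
    """P(both credentials fall) when they share a factor."""
    return P_COMPROMISE * (P_COMPROMISE + SAME_FACTOR_BONUS)

def p_defeat_different_factors():
    """P(both credentials fall) when the factors are independent."""
    return P_COMPROMISE * P_COMPROMISE

print(f"same factor (license + passport):    {p_defeat_same_factor():.4%}")
print(f"different factors (password + iris): {p_defeat_different_factors():.4%}")
```

The independent-factor case multiplies two small probabilities; the same-factor case doesn't, because the second compromise is no longer a fresh coin flip. That asymmetry is the whole argument for multifactor.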

Can an identity content marketing expert help you navigate these issues?

As you can see, you need to be very careful when writing about modalities and factors.

You need a biometric content marketing expert who has worked with many of these modalities.

Actually, you need an identity content marketing expert who has worked with many of these factors.

So if you are with an identity company and need to write a blog post, LinkedIn article, white paper, or other piece of content that touches on multifactor and multimodal issues, why not engage with Bredemarket to help you out?

If you’re interested in receiving my help with your identity written content, contact me.

Iris Recognition, Apple, and Worldcoin

(Part of the biometric product marketing expert series)

Iris recognition continues to make the news. Let’s review what iris recognition is and its benefits (and drawbacks), why Apple made the news last month, and why Worldcoin is making the news this month.

What is iris recognition?

There are a number of biometric modalities that can identify individuals by “who they are” (one of the five factors of authentication). A few examples include fingerprints, faces, voices, and DNA. All of these modalities purport to uniquely (or nearly uniquely) identify an individual.

One other way to identify individuals is via the irises in their eyes. I’m not a doctor, but presumably the Cleveland Clinic employs medical professionals who are qualified to define what the iris is.

The iris is the colored part of your eye. Muscles in your iris control your pupil — the small black opening that lets light into your eye.

From https://my.clevelandclinic.org/health/body/22502-iris

And here’s what else the Cleveland Clinic says about irises.

The color of your iris is like your fingerprint. It’s unique to you, and nobody else in the world has the exact same colored eye.

From https://my.clevelandclinic.org/health/body/22502-iris

John Daugman and irises

But why use irises rather than, say, fingerprints and faces? The best person to answer this is John Daugman. (At this point several of you are intoning, “John Daugman.” With reason. He’s the inventor of iris recognition.)

Here’s an excerpt from John Daugman’s 2004 paper on iris recognition:

(I)ris patterns become interesting as an alternative approach to reliable visual recognition of persons when imaging can be done at distances of less than a meter, and especially when there is a need to search very large databases without incurring any false matches despite a huge number of possibilities. Although small (11 mm) and sometimes problematic to image, the iris has the great mathematical advantage that its pattern variability among different persons is enormous.

Daugman, John, “How Iris Recognition Works.” IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, VOL. 14, NO. 1, JANUARY 2004. Quoted from page 21. (PDF)

Or in non-scientific speak, one benefit of iris recognition is that you know it is accurate, even when submitting a pair of irises in a one-to-many search against a huge database. How huge? We’ll discuss later.

Brandon Mayfield and fingerprints

Remember that Daugman’s paper was released roughly two months before Brandon Mayfield was misidentified in a fingerprint comparison. (Everyone now intone “Brandon Mayfield.”)

If you want to know the details of that episode, the Department of Justice Office of the Inspector General issued a 330 page report (PDF) on it. If you don’t have time to read 330 pages, here’s Al Jazeera’s shorter version of Brandon Mayfield’s story.

While some of the issues associated with Mayfield’s misidentification had nothing to do with forensic science (Al Jazeera spends some time discussing bias, and Itiel Dror also looked at bias post-Mayfield), this still shows that fingerprints are remarkably similar and that it takes care to properly identify people.

Police agencies, witnesses, and faces

And of course there are recent examples of facial misidentifications (both by police agencies and witnesses), again not necessarily forensic science related, and again showing the similarity of faces from two different people.

Iris “data richness” and independent testing

Why are irises more accurate than fingerprints and faces? Here’s what one vendor, Iris ID, claims about irises vs. other modalities:

At the root of iris recognition’s accuracy is the data-richness of the iris itself. The IrisAccess system captures over 240 degrees of freedom or unique characteristics in formulating its algorithmic template. Fingerprints, facial recognition and hand geometry have far less detailed input in template construction.

Iris ID, “How It Compares.” (Link)

Enough about claims. What about real results? The IREX 10 test, independently administered by the U.S. National Institute of Standards and Technology, measures the identification (one-to-many) accuracy of submitted algorithms. At the time I am writing this, the ten most accurate algorithms provide false negative identification rates (FNIR) between 0.0022 ± 0.0004 and 0.0037 ± 0.0005 when two eyes are used. (Single eye accuracy is lower.) By the time you see this, the top ten algorithms may have changed, because the vendors are always improving.

IREX 10 two-eye accuracy, top ten algorithms as of July 28, 2023. (Link)
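To put those rates in concrete terms, here's my own arithmetic (not a NIST figure): an FNIR of 0.0022 means that out of every 10,000 searches where the person really is enrolled in the database, roughly 22 fail to return them.

```python
# Interpreting FNIR: expected misses among searches where the person
# actually IS enrolled. The 10,000-search scale is my own example,
# not a NIST figure.

def expected_misses(fnir, mated_searches):
    """Expected number of mated searches the system fails to return."""
    return fnir * mated_searches

best, worst = 0.0022, 0.0037  # top-ten two-eye FNIR range cited above
print(f"best of top ten:  ~{expected_misses(best, 10_000):.0f} misses per 10,000")
print(f"worst of top ten: ~{expected_misses(worst, 10_000):.0f} misses per 10,000")
```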

While the IREX 10 one-to-many tests are conducted against databases of less than a million records, it is estimated that iris one-to-many accuracy remains high even with databases of a billion people—something we will return to later in this post.

Iris drawbacks

OK, so if irises are so accurate, why aren’t we dumping our fingerprint readers and face readers and just using irises?

In short, because of the high friction in capturing irises. You can use high-resolution cameras to capture fingerprints and faces from far away, but as of now iris capture usually requires you to get very close to the capture device.

Iris image capture circa 2020 from the U.S. Federal Bureau of Investigation. (Link)

Which I guess is better than the old days when you had to put your eye right up against the capture device, but it’s still not as friendly (or as unobtrusive) as face capture, which can be achieved as you’re walking down a passageway in an airport or sports stadium.

Irises and Apple Vision Pro

So how are irises being used today? You may or may not have heard last month’s hoopla about the Apple Vision Pro, which uses irises for one-to-one authentication.

I’m not going to spend a ton of time delving into this, because I just discussed Apple Vision Pro in June. In fact, I’m just going to quote from what I already said.

And when all of us heard about Vision Pro, one of the things that Apple shared about it was its verification technique. Not Touch ID or Face ID, but Optic ID. (I like naming consistency.)

From https://bredemarket.com/2023/06/12/vision-pro-not-revolutionary-biometrics-event/
From Apple, https://www.apple.com/105/media/us/apple-vision-pro/2023/7e268c13-eb22-493d-a860-f0637bacb569/anim/drawer-privacy-optic-id/large.mp4

In short, as you wear the headset (which by definition is right on your head, not far away), the headset captures your iris images and uses them to authenticate you.

It’s a one-to-one comparison, not the one-to-many comparison that I discussed earlier in this post, but it is used to uniquely identify an individual.
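The two comparison modes differ only in how many enrolled templates you check. Here's a minimal sketch of the distinction; the string "templates," the scoring function, and the 0.8 threshold are all invented stand-ins for real (proprietary) iris matchers.

```python
# Minimal sketch of verification (1:1) vs. identification (1:N).
# Templates, scoring, and threshold are invented stand-ins for real
# matchers.

THRESHOLD = 0.8  # invented decision threshold

def match_score(template_a, template_b):
    """Toy similarity in [0, 1]: fraction of matching positions."""
    same = sum(1 for a, b in zip(template_a, template_b) if a == b)
    return same / max(len(template_a), len(template_b))

def verify(probe, enrolled):
    """One-to-one (Optic ID style): compare against a single template."""
    return match_score(probe, enrolled) >= THRESHOLD

def identify(probe, gallery):
    """One-to-many (IREX style): search every template in the gallery,
    return the best identity above threshold, else None."""
    best_id, best_score = None, 0.0
    for identity, template in gallery.items():
        score = match_score(probe, template)
        if score > best_score:
            best_id, best_score = identity, score
    return best_id if best_score >= THRESHOLD else None

gallery = {"alice": "ABCDE", "bob": "VWXYZ"}
assert verify("ABCDX", "ABCDE")            # 4 of 5 positions match: 0.8
assert identify("ABCDE", gallery) == "alice"
assert identify("QQQQQ", gallery) is None  # nobody scores high enough
```

Note that identification does N comparisons per search, which is why one-to-many accuracy at billion-record scale (discussed above) is the harder benchmark.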

But iris recognition doesn’t have to be used for identification.

Irises and Worldcoin

“But wait a minute, John,” you’re saying. “If you’re not using irises to determine if a person is who they say they are, then why would anyone use irises?”

Enter Worldcoin, which I mentioned in passing in my early July age estimation post.

Over the past several years, I’ve analyzed a variety of identity firms. Earlier this year I took a look at Worldcoin….Worldcoin’s World ID emphasizes privacy so much that it does not conclusively prove a person’s identity (it only proves a person’s uniqueness)…

From https://bredemarket.com/2023/07/03/age-estimation/

That’s the only thing that I’ve said about Worldcoin, at least publicly. (I looked at Worldcoin privately earlier in 2023, but that report is not publicly accessible and even I don’t have it any more.)

Worldcoin’s July 24 announcement

I guess it’s time for me to revisit Worldcoin, since the company made a super-big splashy announcement on Monday, July 24.

The Worldcoin Foundation today announced that Worldcoin, a project co-founded by Sam Altman, Alex Blania and Max Novendstern, is now live and in a production-grade state. 

The launch includes the release of the World ID SDK and plans to scale Orb operations to 35+ cities across 20+ countries around the world. In tandem, the Foundation’s subsidiary, World Assets Ltd., minted and released the Worldcoin token (WLD) to the millions of eligible people who participated in the beta; WLD is now transactable on the blockchain….

“In the age of AI, the need for proof of personhood is no longer a topic of serious debate; instead, the critical question is whether or not the proof of personhood solutions we have can be  privacy-first, decentralized and maximally inclusive,” said Worldcoin co-founder and Tools for Humanity CEO Alex Blania. “Through its unique technology, Worldcoin aims to provide anyone in the world, regardless of background, geography or income, access to the growing digital and global economy in a privacy preserving and decentralized way.”

From https://worldcoin.org/blog/announcements/worldcoin-project-launches

Worldcoin does NOT positively identify people…but it can still pay you

A very important note: Worldcoin’s purpose is not to determine identity (that a person is who they say they are). Worldcoin’s purpose is to determine uniqueness: namely, that a person (whoever they are) is unique among all the billions of people in the world. Once uniqueness is determined, the person can get money money money with an assurance that the same person won’t get money twice.

OK, so how are you going to determine the uniqueness of a person among all of the billions of people in the world?

Using the Orb to capture irises

As far as Worldcoin is concerned, irises are the best way to determine uniqueness, echoing what others have said.

Iris biometrics outperform other biometric modalities and already achieved false match rates beyond 1.2×10⁻¹⁴ (one false match in one trillion[9]) two decades ago[10]—even without recent advancements in AI. This is several orders of magnitude more accurate than the current state of the art in face recognition.

From https://worldcoin.org/blog/engineering/humanness-in-the-age-of-ai
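That 1.2×10⁻¹⁴ figure can be turned into a back-of-the-envelope estimate for deduplication at scale. The arithmetic below is mine, not Worldcoin's published analysis: a full dedup of N enrollees involves roughly N(N-1)/2 pairwise comparisons, and the expected number of false matches is the FMR times that comparison count.

```python
# Back-of-the-envelope: expected false matches when deduplicating N
# enrollees at a false match rate (FMR) of 1.2e-14. My own arithmetic,
# not Worldcoin's published analysis.

FMR = 1.2e-14  # the rate quoted above

def expected_false_matches(n_people):
    """FMR times the number of pairwise comparisons, N * (N - 1) / 2."""
    comparisons = n_people * (n_people - 1) // 2
    return FMR * comparisons

for n in (1_000_000, 100_000_000, 1_000_000_000):
    print(f"{n:>13,} people -> ~{expected_false_matches(n):,.2f} expected false matches")
```

Even at a vanishingly small FMR, the quadratic growth in comparisons means billion-scale deduplication still yields thousands of expected collisions, which is exactly why Worldcoin leans on the most accurate modality available.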

So how is Worldcoin going to capture millions, and eventually billions, of iris pairs?

By using the Orb. (You may intone “the Orb” now.)

To complete your Worldcoin registration, you need to find an Orb that will capture your irises and verify your uniqueness.

Now you probably won’t find an Orb at your nearby 7-Eleven; as I write this, there are only a little over 100 listed locations in the entire world where Orbs are deployed. I happen to live within 50 miles of Santa Monica, where an Orb was recently deployed (by appointment only, unavailable on weekends, and you know how I feel about driving on Southern California freeways on a weekday).

But now that you can get crypto for enrolling at an Orb, people are getting more excited about the process, and there will be wider adoption.

Whether this will make a difference in the world or just be a fad remains to be seen.

How Remote Work Preserves Your Brain

I remember the day that my car skidded down Monterey Pass Road in Monterey Park, California, upside down, my seatbelt saving my brain from…um…very bad things. (I promised myself that I’d make this post NON-gory.)

Monterey Pass Road and South Fremont Avenue, Monterey Park, California. https://www.google.com/maps/@34.0586679,-118.1445677,19z?entry=ttu

I was returning from lunch to my employer farther south on Monterey Pass Road when a car hit me from the side, flipping my car over so that it skidded down Monterey Pass Road, upside down. Only my seat belt saved me from certain death.

(Mini-call to action: wear seat belts.)

By The cover art can be obtained from Liberty Records., Fair use, https://en.wikipedia.org/w/index.php?curid=25328218

Now some of you who know me are asking, “John, you’ve lived in Ontario and Upland for the past several decades. Why were you 30 miles away, in Monterey Park?”

Well, back in 1991, after working for Rancho Cucamonga companies for several years, I ended up commuting to a company in Monterey Park, California, at least an hour’s drive one way from my home. Driving toward downtown Los Angeles in the morning, and away from downtown Los Angeles in the afternoon. If you know, you know.

After I left the Monterey Park company, I consulted or worked for companies in Pomona, Brea, Anaheim, Irvine, and other cities. But for most of the next three decades, I was still driving at least an hour one-way every day to get from home to work.

And it’s not just a problem in Southern California. By B137 – Own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=48998674

As I’ll note later in this post, some people are still commuting today. And for all I know I may commute again also.

I learn the acronym WFH

That all stopped in March 2020 when a worldwide pandemic sent all non-essential personnel at IDEMIA’s Anaheim office to work from home (WFH). Now there were some IDEMIA employees, such as salespeople, who had been working from home for years, but this was the first time that a whole bunch of us were doing it.

Some of us had to upgrade our home equipment: mesh networks, special face illumination lighting, and other things. And now, instead of having a couple of people participating in meetings remotely, ALL of us were doing so. (Before 2020, the two words “Zoom background” would be incomprehensible to me. After 2020, I understood those words intimately.)

This new work practice continued after I left IDEMIA, as I started Bredemarket, joined Incode Technologies for a little over a year, and returned (for now) to Bredemarket again.

The U.S. Marine Corps supported WFH (for certain positions) in 2010, long before COVID. This image was released by the United States Marine Corps with the ID 100324-M-6847A-001. Public Domain, https://commons.wikimedia.org/w/index.php?curid=23181833

WFH benefits

There are two benefits to working from home:

  • First, it preserves your brain. Not just from the horrible results of a commuting automobile accident. For the last three-plus years, I’ve gotten more rest and sleep since I’m not waking up before 6am and getting home after 6pm. And I’m not sitting in traffic on the 57, waiting for an accident to clear.
  • Second, it provides the best talent to your employer. Why? Because it can hire you. I just spent over a year working for a company headquartered in San Francisco, and I didn’t have to move to San Francisco to do it. In fact, when my product marketing team reached its apex, we had two people in Southern California, one in England, and one in Sweden. None of us had to move to San Francisco to work there, and my company was not restricted to hiring people who could get to San Francisco every day.

But that doesn’t stop some companies from insisting on office work

In-office presence controversy predates COVID (remember Marissa Mayer and Yahoo?), and now that COVID has receded, the “return to office” drumbeat has gotten louder.

Laith Masarweh shared the story of a woman who, like me, is tiring of the L.A. freeway grind.

So she asked her boss for help–

And he told her to change her mindset.

“That’s just life,” he said. “Everyone has to commute.”…

All she asked for was some flexibility, and he shut her down.

So he’s going to lose her.

Laith Masarweh, LinkedIn. (link)

Now I’m not saying I’ll never work on-site again. Maybe someday I’ll even accept an on-site position in Monterey Park.

But I’m not that thrilled about going down Monterey Pass Road again.

In the meantime…

…since I’m NOT full-time employed, and since my home office is well equipped (I have Nespresso!), I have the time to make YOUR company’s messaging better.

If you can use Bredemarket’s expertise for your biometric, identity, technology, or general blog posts, case studies, white papers, or other written content, contact me.

From https://open.spotify.com/track/2BPEPkeifa5LoOg2Cq9bkx