Iris Recognition, Apple, and Worldcoin

(Part of the biometric product marketing expert series)

Iris recognition continues to make the news. Let’s review what iris recognition is and its benefits (and drawbacks), why Apple made the news last month, and why Worldcoin is making the news this month.

What is iris recognition?

There are a number of biometric modalities that can identify individuals by “who they are” (one of the five factors of authentication). A few examples include fingerprints, faces, voices, and DNA. All of these modalities purport to uniquely (or nearly uniquely) identify an individual.

One other way to identify individuals is via the irises in their eyes. I’m not a doctor, but presumably the Cleveland Clinic employs medical professionals who are qualified to define what the iris is.

The iris is the colored part of your eye. Muscles in your iris control your pupil — the small black opening that lets light into your eye.

From https://my.clevelandclinic.org/health/body/22502-iris

And here’s what else the Cleveland Clinic says about irises.

The color of your iris is like your fingerprint. It’s unique to you, and nobody else in the world has the exact same colored eye.

From https://my.clevelandclinic.org/health/body/22502-iris

John Daugman and irises

But why use irises rather than, say, fingerprints and faces? The best person to answer this is John Daugman. (At this point several of you are intoning, “John Daugman.” With reason. He’s the inventor of iris recognition.)

Here’s an excerpt from John Daugman’s 2004 paper on iris recognition:

(I)ris patterns become interesting as an alternative approach to reliable visual recognition of persons when imaging can be done at distances of less than a meter, and especially when there is a need to search very large databases without incurring any false matches despite a huge number of possibilities. Although small (11 mm) and sometimes problematic to image, the iris has the great mathematical advantage that its pattern variability among different persons is enormous.

Daugman, John, “How Iris Recognition Works.” IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, VOL. 14, NO. 1, JANUARY 2004. Quoted from page 21. (PDF)

Or in non-scientific speak, one benefit of iris recognition is that you know it is accurate, even when submitting a pair of irises in a one-to-many search against a huge database. How huge? We’ll discuss later.

Brandon Mayfield and fingerprints

Remember that Daugman’s paper was released roughly two months before Brandon Mayfield was misidentified in a fingerprint comparison. (Everyone now intone “Brandon Mayfield.”)

If you want to know the details of that episode, the Department of Justice Office of the Inspector General issued a 330-page report (PDF) on it. If you don’t have time to read 330 pages, here’s Al Jazeera’s shorter version of Brandon Mayfield’s story.

While some of the issues associated with Mayfield’s misidentification had nothing to do with forensic science (Al Jazeera spends some time discussing bias, and Itiel Dror also looked at bias post-Mayfield), the episode still shows that fingerprints from two different people can be remarkably similar, and that it takes care to properly identify people.

Police agencies, witnesses, and faces

And of course there are recent examples of facial misidentifications (both by police agencies and witnesses), again not necessarily forensic science related, and again showing the similarity of faces from two different people.

Iris “data richness” and independent testing

Why are irises more accurate than fingerprints and faces? Here’s what one vendor, Iris ID, claims about irises vs. other modalities:

At the root of iris recognition’s accuracy is the data-richness of the iris itself. The IrisAccess system captures over 240 degrees of freedom or unique characteristics in formulating its algorithmic template. Fingerprints, facial recognition and hand geometry have far less detailed input in template construction.

Iris ID, “How It Compares.” (Link)

Enough about claims. What about real results? The IREX 10 test, independently administered by the U.S. National Institute of Standards and Technology, measures the identification (one-to-many) accuracy of submitted algorithms. At the time I am writing this, the ten most accurate algorithms provide false negative identification rates (FNIR) between 0.0022 ± 0.0004 and 0.0037 ± 0.0005 when two eyes are used. (Single eye accuracy is lower.) By the time you see this, the top ten algorithms may have changed, because the vendors are always improving.

IREX10 two-eye accuracy, top ten algorithms as of July 28, 2023. (Link)

While the IREX10 one-to-many tests are conducted against databases of less than a million records, it is estimated that iris one-to-many accuracy remains high even with databases of a billion people—something we will return to later in this post.

Iris drawbacks

OK, so if irises are so accurate, why aren’t we dumping our fingerprint readers and face readers and just using irises?

In short, because of the high friction in capturing irises. You can use high-resolution cameras to capture fingerprints and faces from far away, but as of now iris capture usually requires you to get very close to the capture device.

Iris image capture circa 2020 from the U.S. Federal Bureau of Investigation. (Link)

Which I guess is better than the old days when you had to put your eye right up against the capture device, but it’s still not as frictionless as face capture, which can be achieved as you’re walking down a passageway in an airport or sports stadium.

Irises and Apple Vision Pro

So how are irises being used today? You may or may not have heard last month’s hoopla about the Apple Vision Pro, which uses irises for one-to-one authentication.

I’m not going to spend a ton of time delving into this, because I just discussed Apple Vision Pro in June. In fact, I’m just going to quote from what I already said.

And when all of us heard about Vision Pro, one of the things that Apple shared about it was its verification technique. Not Touch ID or Face ID, but Optic ID. (I like naming consistency.)

From https://bredemarket.com/2023/06/12/vision-pro-not-revolutionary-biometrics-event/
From Apple, https://www.apple.com/105/media/us/apple-vision-pro/2023/7e268c13-eb22-493d-a860-f0637bacb569/anim/drawer-privacy-optic-id/large.mp4

In short, as you wear the headset (which by definition is right on your head, not far away), the headset captures your iris images and uses them to authenticate you.

It’s a one-to-one comparison, not the one-to-many comparison that I discussed earlier in this post, but it is used to uniquely identify an individual.
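To make the distinction concrete, here is a minimal sketch of the two comparison modes. The `compare()` scoring, the threshold, and the string “templates” are all illustrative stand-ins of my own, not Apple’s Optic ID internals or any vendor’s actual API:

```python
# Hypothetical sketch: 1:1 verification vs. 1:N identification.
# Templates and scoring are fake placeholders for illustration only.

def compare(template_a, template_b):
    # A real matcher computes a similarity score (for irises, often a
    # normalized Hamming distance over the iris code). Here we fake it.
    return 1.0 if template_a == template_b else 0.0

THRESHOLD = 0.9

def verify(probe, claimed_template):
    """1:1 (Optic ID-style): does the probe match ONE claimed identity?"""
    return compare(probe, claimed_template) >= THRESHOLD

def identify(probe, gallery):
    """1:N (IREX-style): search the probe against a whole database."""
    return [identity for identity, template in gallery.items()
            if compare(probe, template) >= THRESHOLD]

gallery = {"alice": "iris-A", "bob": "iris-B", "carol": "iris-C"}
print(verify("iris-B", gallery["bob"]))   # one claimed identity checked
print(identify("iris-B", gallery))        # whole gallery searched
```

In a real system the threshold would be tuned to the desired false match rate, and the 1:N search would be far more sophisticated than a linear scan.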

But iris recognition doesn’t have to be used for identification.

Irises and Worldcoin

“But wait a minute, John,” you’re saying. “If you’re not using irises to determine if a person is who they say they are, then why would anyone use irises?”

Enter Worldcoin, which I mentioned in passing in my early July age estimation post.

Over the past several years, I’ve analyzed a variety of identity firms. Earlier this year I took a look at Worldcoin….Worldcoin’s World ID emphasizes privacy so much that it does not conclusively prove a person’s identity (it only proves a person’s uniqueness)…

From https://bredemarket.com/2023/07/03/age-estimation/

That’s the only thing that I’ve said about Worldcoin, at least publicly. (I looked at Worldcoin privately earlier in 2023, but that report is not publicly accessible and even I don’t have it any more.)

Worldcoin’s July 24 announcement

I guess it’s time for me to revisit Worldcoin, since the company made a super-big splashy announcement on Monday, July 24.

The Worldcoin Foundation today announced that Worldcoin, a project co-founded by Sam Altman, Alex Blania and Max Novendstern, is now live and in a production-grade state. 

The launch includes the release of the World ID SDK and plans to scale Orb operations to 35+ cities across 20+ countries around the world. In tandem, the Foundation’s subsidiary, World Assets Ltd., minted and released the Worldcoin token (WLD) to the millions of eligible people who participated in the beta; WLD is now transactable on the blockchain….

“In the age of AI, the need for proof of personhood is no longer a topic of serious debate; instead, the critical question is whether or not the proof of personhood solutions we have can be privacy-first, decentralized and maximally inclusive,” said Worldcoin co-founder and Tools for Humanity CEO Alex Blania. “Through its unique technology, Worldcoin aims to provide anyone in the world, regardless of background, geography or income, access to the growing digital and global economy in a privacy preserving and decentralized way.”

From https://worldcoin.org/blog/announcements/worldcoin-project-launches

Worldcoin does NOT positively identify people…but it can still pay you

A very important note: Worldcoin’s purpose is not to determine identity (that a person is who they say they are). Worldcoin’s purpose is to determine uniqueness: namely, that a person (whoever they are) is unique among all the billions of people in the world. Once uniqueness is determined, the person can get money money money with an assurance that the same person won’t get money twice.
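The “pay each unique person only once” logic amounts to a deduplication check. The sketch below is hypothetical, not Worldcoin’s actual architecture: real iris matching is a fuzzy one-to-many search rather than exact set membership, and `issue_tokens()` and `enrolled_codes` are names I invented for illustration:

```python
# Hypothetical uniqueness (deduplication) check: the system never learns
# WHO the person is, only whether these irises have been seen before.

enrolled_codes = set()

def issue_tokens(iris_code):
    """Pay a person once; refuse a second payout for the same irises."""
    if iris_code in enrolled_codes:
        return "already enrolled: no payout"
    enrolled_codes.add(iris_code)
    return "unique person: payout issued"

print(issue_tokens("code-123"))  # first enrollment succeeds
print(issue_tokens("code-123"))  # same irises again: rejected
```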

OK, so how are you going to determine the uniqueness of a person among all of the billions of people in the world?

Using the Orb to capture irises

As far as Worldcoin is concerned, irises are the best way to determine uniqueness, echoing what others have said.

Iris biometrics outperform other biometric modalities and already achieved false match rates beyond 1.2×10⁻¹⁴ (one false match in one trillion[9]) two decades ago[10]—even without recent advancements in AI. This is several orders of magnitude more accurate than the current state of the art in face recognition.

From https://worldcoin.org/blog/engineering/humanness-in-the-age-of-ai
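Some back-of-the-envelope arithmetic shows why a false match rate (FMR) that small matters for planetary-scale uniqueness checks. Assuming independent comparisons (a big simplification), the expected number of false matches in a single one-to-many search is roughly the gallery size times the FMR. The face FMR below is an illustrative number of my own, not a measured figure:

```python
# Rough expected false matches per 1:N search: E ≈ N * FMR.
# Assumes independent comparisons; real systems are more complicated.

def expected_false_matches(gallery_size, fmr):
    return gallery_size * fmr

iris_fmr = 1.2e-14   # figure cited by Worldcoin for iris matching
face_fmr = 1e-6      # illustrative face FMR, NOT a measured figure

world = 8_000_000_000
print(expected_false_matches(world, iris_fmr))  # ~1e-4: essentially never
print(expected_false_matches(world, face_fmr))  # thousands of collisions
```

Under these assumptions, an iris search against everyone on Earth would almost never produce a false match, while a modality with a merely “good” FMR would collide constantly.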

So how is Worldcoin going to capture millions, and eventually billions, of iris pairs?

By using the Orb. (You may intone “the Orb” now.)

To complete your Worldcoin registration, you need to find an Orb that will capture your irises and verify your uniqueness.

Now you probably won’t find an Orb at your nearby 7 Eleven; as I write this, there are only a little over 100 listed locations in the entire world where Orbs are deployed. I happen to live within 50 miles of Santa Monica, where an Orb was recently deployed (by appointment only, unavailable on weekends, and you know how I feel about driving on Southern California freeways on a weekday).

But now that you can get crypto for enrolling at an Orb, people are getting more excited about the process, and there will be wider adoption.

Whether this will make a difference in the world or just be a fad remains to be seen.

How Remote Work Preserves Your Brain

I remember the day that my car skidded down Monterey Pass Road in Monterey Park, California, upside down, my seatbelt saving my brain from…um…very bad things. (I promised myself that I’d make this post NON-gory.)

Monterey Pass Road and South Fremont Avenue, Monterey Park, California. https://www.google.com/maps/@34.0586679,-118.1445677,19z?entry=ttu

I was returning from lunch to my employer farther south on Monterey Pass Road when a car hit me from the side, flipping my car over so that it skidded down Monterey Pass Road, upside down. Only my seat belt saved me from certain death.

(Mini-call to action: wear seat belts.)


Now some of you who know me are asking, “John, you’ve lived in Ontario and Upland for the past several decades. Why were you 30 miles away, in Monterey Park?”

Well, back in 1991, after working for Rancho Cucamonga companies for several years, I ended up commuting to a company in Monterey Park, California, at least an hour’s drive one way from my home. Driving toward downtown Los Angeles in the morning, and away from downtown Los Angeles in the afternoon. If you know, you know.

After I left the Monterey Park company, I consulted or worked for companies in Pomona, Brea, Anaheim, Irvine, and other cities. But for most of the next three decades, I was still driving at least an hour one-way every day to get from home to work.

And it’s not just a problem in Southern California.

As I’ll note later in this post, some people are still commuting today. And for all I know I may commute again also.

I learn the acronym WFH

That all stopped in March 2020 when a worldwide pandemic sent all non-essential personnel at IDEMIA’s Anaheim office to work from home (WFH). Now there were some IDEMIA employees, such as salespeople, who had been working from home for years, but this was the first time that a whole bunch of us were doing it.

Some of us had to upgrade our home equipment: mesh networks, special face illumination lighting, and other things. And now, instead of having a couple of people participating in meetings remotely, ALL of us were doing so. (Before 2020, the two words “Zoom background” would be incomprehensible to me. After 2020, I understood those words intimately.)

This new work practice continued after I left IDEMIA, as I started Bredemarket, joined Incode Technologies for a little over a year, and returned (for now) to Bredemarket again.

The U.S. Marine Corps supported WFH (for certain positions) in 2010, long before COVID. (Public domain image from the United States Marine Corps, ID 100324-M-6847A-001.)

WFH benefits

There are two benefits to working from home:

  • First, it preserves your brain. Not just from the horrible results of a commuting automobile accident. For the last three-plus years, I’ve gotten more rest and sleep since I’m not waking up before 6am and getting home after 6pm. And I’m not sitting in traffic on the 57, waiting for an accident to clear.
  • Second, it provides the best talent to your employer. Why? Because it can hire you. I just spent over a year working for a company headquartered in San Francisco, and I didn’t have to move to San Francisco to do it. In fact, when my product marketing team reached its apex, we had two people in Southern California, one in England, and one in Sweden. None of us had to move to San Francisco to work there, and my company was not restricted to hiring people who could get to San Francisco every day.

But that doesn’t stop some companies from insisting on office work

In-office presence controversy predates COVID (remember Marissa Mayer and Yahoo?), and now that COVID has receded, the “return to office” drumbeat has gotten louder.

Laith Masarweh shared the story of a woman who, like me, is tiring of the L.A. freeway grind.

So she asked her boss for help–

And he told her to change her mindset.

“That’s just life,” he said. “Everyone has to commute.”…

All she asked for was some flexibility, and he shut her down.

So he’s going to lose her.

Laith Masarweh, LinkedIn. (link)

Now I’m not saying I’ll never work on-site again. Maybe someday I’ll even accept an on-site position in Monterey Park.

But I’m not that thrilled about going down Monterey Pass Road again.

In the meantime…

…since I’m NOT full-time employed, and since my home office is well equipped (I have Nespresso!), I have the time to make YOUR company’s messaging better.

If you can use Bredemarket’s expertise for your biometric, identity, technology, or general blog posts, case studies, white papers, or other written content, contact me.


I Changed My Mind on Age Estimation

(Part of the biometric product marketing expert series)

I’ll admit that I previously thought that age estimation was worthless, but I’ve since changed my mind about the necessity for it. Which is a good thing, because the U.S. National Institute of Standards and Technology (NIST) is about to add age estimation to its Face Recognition Vendor Test suite.

What is age estimation?

Before continuing, I should note that age estimation is not a way to identify people, but a way to classify people. For once, I’m stepping out of my preferred identity environment and looking at a classification question. Not “gender shades,” but “get off my lawn” (or my tricycle).

Designed by Freepik.

Age estimation uses facial features to estimate how old a person is, in the absence of any other information such as a birth certificate. In a Yoti white paper that I’ll discuss in a minute, the Western world has two primary use cases for age estimation:

  1. First, to estimate whether a person is over or under the age of 18 years. In many Western countries, the age of 18 is a significant age that grants many privileges. In my own state of California, you have to be 18 years old to vote, join the military without parental consent, marry (and legally have sex), get a tattoo, play the lottery, enter into binding contracts, sue or be sued, or take on a number of other responsibilities. Therefore, there is a pressing interest to know whether the person at the U.S. Army Recruiting Center, a tattoo parlor, or the lottery window is entitled to use the service.
  2. Second, to estimate whether a person is over or under the age of 13 years. Although age 13 is not as great a milestone as age 18, this is usually the age at which social media companies allow people to open accounts. Thus the social media companies and other companies that cater to teens have a pressing interest to know the teen’s age.
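A common industry pattern for living with estimation error at these thresholds (think of the UK’s “Challenge 25” policy for a legal purchase age of 18) is to add a buffer above the legal age and fall back to a document check in the uncertainty band. This sketch is hypothetical; the function name and the numbers are mine, not Yoti’s:

```python
# Hypothetical threshold-plus-buffer age gate. A buffer absorbs estimation
# error: clearly-over estimates pass, clearly-under estimates fail, and
# borderline estimates trigger a document check. Numbers are illustrative.

def age_gate(estimated_age, legal_age=18, buffer=7):
    if estimated_age >= legal_age + buffer:
        return "allow"
    if estimated_age < legal_age:
        return "deny"
    return "ask for ID"  # within the uncertainty band

print(age_gate(30))  # clearly over: allow
print(age_gate(20))  # borderline: ask for ID
print(age_gate(15))  # clearly under: deny
```

The wider the buffer, the fewer underage people slip through, at the cost of inconveniencing more legitimate adults.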

Why was I against age estimation?

Because I felt it was better to know an age, rather than estimate it.

My opinion was obviously influenced by my professional background. When IDEMIA was formed in 2017, I became part of a company that produced government-issued driver’s licenses for the majority of states in the United States. (OK, MorphoTrak was previously contracted to produce driver’s licenses for North Carolina, but…that didn’t last.)

With a driver’s license, you know the age of the person and don’t have to estimate anything.

And estimation is not an exact science. Here’s what Yoti’s March 2023 white paper says about age estimation accuracy:

Our True Positive Rate (TPR) for 13-17 year olds being correctly estimated as under 25 is 99.93% and there is no discernible bias across gender or skin tone. The TPRs for female and male 13-17 year olds are 99.90% and 99.94% respectively. The TPRs for skin tone 1, 2 and 3 are 99.93%, 99.89% and 99.92% respectively. This gives regulators globally a very high level of confidence that children will not be able to access adult content.

Our TPR for 6-11 year olds being correctly estimated as under 13 is 98.35%. The TPRs for female and male 6-11 year olds are 98.00% and 98.71% respectively. The TPRs for skin tone 1, 2 and 3 are 97.88%, 99.24% and 98.18% respectively so there is no material bias in this age group either.

Yoti’s facial age estimation is performed by a ‘neural network’, trained to be able to estimate human age by analysing a person’s face. Our technology is accurate for 6 to 12 year olds, with a mean absolute error (MAE) of 1.3 years, and of 1.4 years for 13 to 17 year olds. These are the two age ranges regulators focus upon to ensure that under 13s and 18s do not have access to age restricted goods and services.

From https://www.yoti.com/wp-content/uploads/Yoti-Age-Estimation-White-Paper-March-2023.pdf

While this is admirable, is it precise enough to comply with government regulations? When the law draws a hard line, a mean absolute error of over a year matters. By the letter of the law, if you are 17 years and 364 days old and you try to vote, you are breaking the law.
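For reference, the mean absolute error (MAE) that Yoti cites is just the average absolute gap between estimated and true ages. A quick sketch, with sample ages I invented for illustration:

```python
# Mean absolute error (MAE) over (true_age, estimated_age) pairs.
# The sample ages below are made up for illustration.

def mean_absolute_error(pairs):
    return sum(abs(true - est) for true, est in pairs) / len(pairs)

samples = [(14, 15.2), (16, 14.9), (13, 13.4), (17, 18.6)]
print(mean_absolute_error(samples))  # average miss, in years
```

An MAE of 1.4 years means the typical estimate misses by well over a year, which is exactly the problem at a hard legal boundary.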

Why did I change my mind?

Over the last couple of months I’ve thought about this a bit more and have experienced a Jim Bakker “I was wrong” moment.

I was wrong for two reasons.

Kids don’t have government IDs

Designed by Freepik.

I asked myself some questions.

  • How many 13 year olds do you know that have driver’s licenses? Probably none.
  • How many 13 year olds do you know that have government-issued REAL IDs? Probably very few.
  • How many 13 year olds do you know that have passports? Maybe a few more (especially after 9/11), but not that many.

Even at age 18, there is no guarantee that a person will have a government-issued REAL ID.

So how are 18 year olds, or 13 year olds, supposed to prove that they are old enough for services? Carry their birth certificate around?

You’ll note that Yoti didn’t target a use case for 21 year olds. This is partially because Yoti is a UK firm and therefore may not focus on the strict U.S. laws regarding alcohol, tobacco, and casino gambling. But it’s also because it’s much, much more likely that a 21 year old will have a government-issued ID, eliminating the need for age estimation.

Sometimes.

In some parts of the world, no one has government IDs

Over the past several years, I’ve analyzed a variety of identity firms. Earlier this year I took a look at Worldcoin. While Worldcoin’s World ID emphasizes privacy so much that it does not conclusively prove a person’s identity (it only proves a person’s uniqueness), and makes no attempt to provide the age of the person with the World ID, Worldcoin does have something to say about government issued IDs.

Online services often request proof of ID (usually a passport or driver’s license) to comply with Know your Customer (KYC) regulations. In theory, this could be used to deduplicate individuals globally, but it fails in practice for several reasons.

KYC services are simply not inclusive on a global scale; more than 50% of the global population does not have an ID that can be verified digitally.

From https://worldcoin.org/blog/engineering/humanness-in-the-age-of-ai

But wait. There’s more:

IDs are issued by states and national governments, with no global system for verification or accountability. Many verification services (i.e. KYC providers) rely on data from credit bureaus that is accumulated over time, hence stale, without the means to verify its authenticity with the issuing authority (i.e. governments), as there are often no APIs available. Fake IDs, as well as real data to create them, are easily available on the black market. Additionally, due to their centralized nature, corruption at the level of the issuing and verification organizations cannot be eliminated.

Same source as above.

Now this (in my opinion) doesn’t make the case for Worldcoin, but it certainly casts some doubt on a universal way to document ages.

So we’d better start measuring the accuracy of age estimation.

If only there were an independent organization that could measure age estimation, in the same way that NIST measures the accuracy of fingerprint, face, and iris identification.

You know where this is going.

How will NIST test age estimation?

Yes, NIST is in the process of incorporating an age estimation test in its battery of Face Recognition Vendor Tests.

NIST’s FRVT Age Estimation page explains why.

Facial age verification has recently been mandated in legislation in a number of jurisdictions. These laws are typically intended to protect minors from various harms by verifying that the individual is above a certain age. Less commonly some applications extend benefits to groups below a certain age. Further use-cases seek only to determine actual age. The mechanism for estimating age is usually not specified in legislation. Face analysis using software is one approach, and is attractive when a photograph is available or can be captured.

In 2014, NIST published NISTIR 7995 on the Performance of Automated Age Estimation. Using a database with 6 million images, the report showed that the most accurate age estimation algorithm estimated a person’s age within five years of their actual age 67% of the time, with a mean absolute error (MAE) of 4.3 years. Since then, more research has been dedicated to further improving accuracy in facial age verification.

From https://pages.nist.gov/frvt/html/frvt_age_estimation.html

Note that this was in 2014. As we have seen above, Yoti asserts a dramatically lower error rate in 2023.

NIST is just ramping up the testing right now, but once it moves forward, it will be possible to compare age estimation accuracy of various algorithms, presumably in multiple scenarios.

Well, for those algorithm providers who choose to participate.

Does your firm need to promote its age estimation solution?

Does your company have an age estimation solution that is superior to all others?

Do you need an experienced identity professional to help you spread the word about your solution?

Why not consider Bredemarket? If your identity business needs a written content creator, look no further.

Applying the “Six Questions” to LinkedIn Self-promotion

(UPDATE OCTOBER 23, 2023: “SIX QUESTIONS YOUR CONTENT CREATOR SHOULD ASK YOU” IS SO 2022. DOWNLOAD THE NEWER “SEVEN QUESTIONS YOUR CONTENT CREATOR SHOULD ASK YOU” HERE.)

I’ve previously talked about the six questions your content creator should ask you. And I eat my own wildebeest food. I used the six questions to create a self-promotion blog post and LinkedIn post.

But since you care about YOUR self-promotion rather than mine, I’ll provide three tips for writing and promoting your own LinkedIn post.

How I promoted my content

Before I wrote the blog post or the LinkedIn post, I used my six questions to guide me. For my specific example, here are the questions and the answers.

  • Why? Primary: I want full-time employment. Secondary: I want consulting work.
  • How? Primary: state identity and marketing qualifications, ask employers to hire me. Secondary: state identity and marketing qualifications, ask consulting clients to contract with me.
  • What? Primary: blog post (jebredcal), promoted by a personal LinkedIn post. Secondary: blog post (jebredcal), promoted by a Bredemarket Identity Firm Services LinkedIn post.
  • Goal? Primary: employers contact me for full-time employment. Secondary: consulting prospects contact me for contract work.
  • Benefits? (1) No identity learning curve, (2) no content learning curve, (3) proven results. (Same for both.)
  • Target audience? Primary: identity companies hiring Senior Product Marketing Managers and Senior Content Marketing Managers. Secondary: identity companies contracting with content marketing consultants.
For more information on the six questions, see https://bredemarket.com/2022/12/18/six-questions-your-content-creator-should-ask-you-the-e-book-version/.

You’ll notice that I immediately broke a cardinal rule by having both a primary goal and a secondary goal. When you perform your own self-promotion, you will probably want to make things less messy by having only a single goal.

So based upon these responses, I created…

First, the blog post

The Bredemarket blog is primarily to promote my consulting work. I have a different blog (jebredcal) to promote my full-time employment (or attempts to secure full-time employment).

Because the primary goal was to secure full-time employment, I posted to jebredcal instead of Bredemarket.

After the introduction (pictured above) with its “If you need a full-time employee” call to action, I then shared three identity-related blog posts from the Bredemarket blog to establish my “biometric content marketing expert” (and “identity content marketing expert”) credentials. I then closed with a dual call to action for employers and potential consulting clients. (I told you it is messy to have two goals.)

If you want to see my jebredcal post “Top 3 Bredemarket Identity Posts in June 2023 (so far),” click here.

So how did I get the word out about this personal blog post? I chose LinkedIn. (In my case, hiring managers probably aren’t going to check my two Instagram accounts.)

Second, the LinkedIn post

I often reshare my Bredemarket blog posts on various Bredemarket social media accounts. In this instance I only reshared it on LinkedIn, since that’s where the hiring managers are. While I shared the blog post to my Bredemarket Identity Firm Services LinkedIn page (since the post talked about identity), my primary goal was to share it to my personal LinkedIn feed.

It was simple to write the LinkedIn text, since I repurposed the introduction of the blog post itself. I added four hashtags, and then the post went live. You can see it here.

And by the way, feel free to like the LinkedIn post, comment on it, or even reshare it. I’ll explain why below.

Third, the “LinkedIn Love” promotion

So how did I promote it? Via the “LinkedIn Love” concept. (Some of you know where I learned about LinkedIn Love.)

To get LinkedIn love, I asked a few trusted friends in the identity industry to like, comment, or reshare the post. This places the post on my friends’ feeds, where their identity contacts will see it.

A few comments:

  • I don’t do this for every post, or else I will have no friends. In fact, this is the first time that I’ve employed “LinkedIn Love” in months.
  • I only asked friends in the identity industry, since these friends have followers who are most likely to hire a Senior Product Marketing Manager or Senior Content Marketing Manager.
  • I only asked a few friends in the identity industry, although eventually some friends that I didn’t ask ended up engaging with the post anyway.

I have wonderful friends. After several of them gave “LinkedIn Love,” the post received significant engagement. As of Friday morning, the post had acquired over 1,700 impressions. That’s many, many more than my posts usually acquire.

I don’t know if this activity will directly result in full-time employment or increased consulting work. But it certainly won’t hurt.

Three steps to promote YOUR content

But the point of this post isn’t MY job search. It’s YOURS (or whatever it is you want to promote).

For example, one of my friends who is also seeking full-time employment wanted to know how to use a LinkedIn post to promote THEIR OWN job search.

Now you don’t need to use my six questions. You don’t need to create a blog post before creating the LinkedIn post. And you certainly don’t need to create two goals. (Please don’t…unless you want to.)

In fact, you can create and promote your own LinkedIn post in just THREE steps.

Step One: What do you want to say?

My six questions obviously aren’t the only method to collect your thoughts. There are many, many other tools that achieve the same purpose. The important thing is to figure out what you want to say.

  • Start at the end. What action do you want the reader to take after reading your LinkedIn post? Do you want them to read your LinkedIn profile, or download your resume, or watch your video, or join your mailing list, or email or call you? Whatever it is, make sure your LinkedIn post includes the appropriate “call to action.”
  • Work on the rest. Now that you know how your post will end, you can work on the rest of the post. Persuade your reader to follow your call to action. Explain how you will benefit them. Address the post to the reader, your customer (for example, a potential employer), and adopt a customer focus.

Step Two: Say it.

If you don’t want to write the post yourself, then ask a consultant, a friend, or even a generative AI tool to write something for you. (Just because I’m a “get off my lawn” guy regarding generative AI doesn’t mean that you have to be.)

(And before you ask, there are better consultants than Bredemarket for THIS writing job. My services are designed and priced for businesses, not individuals.)

After your post is written by you or someone (or something) else, have one of your trusted friends review it and see if the written words truly reflect how amazing and outstanding you are.

Once you’re ready, post it to LinkedIn. Don’t delay, even if it isn’t perfect. (Heaven knows this blog post isn’t perfect, but I posted it anyway.) Remember that if you don’t post your promotional LinkedIn post, you are guaranteed to get a 0% response to it.

Step Three: Promote it.

Your trusted friends will come in handy for the promotion part—if they have LinkedIn accounts. Privately ask them to apply “LinkedIn Love” to your post in the same way that my trusted friends did for me.

By the way—if I know you, and you’d like me to promote your LinkedIn post, contact me via LinkedIn (or one of the avenues on the Bredemarket contact page) and I’ll do what I can.

And even if I DON’T know you, I can promote it anyway.

I’ve never met Mary Smith in my life, but she says that she read my Bredemarket blog post “Applying the ‘Six Questions’ to LinkedIn Self-promotion.” Because she selects such high-quality reading material, I’m resharing Mary’s post about how she wants to be the first human to visit Venus. If you can help her realize her dream, scroll to the bottom of her post and donate to her GoFundMe.

Hey, whatever it takes to get the word out.

Let me know if you use my tips…or if you have better ways to achieve the same purpose.

We Survived Gummy Fingers. We’re Surviving Facial Recognition Inaccuracy. We’ll Survive Voice Spoofing.

(Part of the biometric product marketing expert series)

Some of you are probably going to get into an automobile today.

Are you insane?

The National Highway Traffic Safety Administration has released its latest projections for traffic fatalities in 2022, estimating that 42,795 people died in motor vehicle traffic crashes.

From https://www.nhtsa.gov/press-releases/traffic-crash-death-estimates-2022

When you have tens of thousands of people dying, then the only conscionable response is to ban automobiles altogether. Any other action or inaction is completely irresponsible.

After all, you can ask the experts who want us to ban biometrics because they can be spoofed and are racist, and who therefore insist that we shouldn’t use biometrics at all.

I disagree with the calls to ban biometrics, and I’ll go through three “biometrics are bad” examples and say why banning biometrics is NOT justified.

  • Even some identity professionals may not know about the old “gummy fingers” story from 20+ years ago.
  • And yes, I know that I’ve talked about Gender Shades ad nauseam, but it bears repeating.
  • And voice deepfakes are always a good topic to discuss in our AI-obsessed world.

Example 1: Gummy fingers

My recent post “Why Apple Vision Pro Is a Technological Biometric Advance, but Not a Revolutionary Biometric Event” included the following sentence:

But the iris security was breached by a “dummy eye” just a month later, in the same way that gummy fingers and face masks have defeated other biometric technologies.

From https://bredemarket.com/2023/06/12/vision-pro-not-revolutionary-biometrics-event/

A biometrics industry colleague noticed the rhyming words “dummy” and “gummy” and wondered if the latter was a typo. It turns out it wasn’t.

To my knowledge, these gummy fingers do NOT have ridges. From https://www.candynation.com/gummy-fingers

Back in 2002, researcher Tsutomu Matsumoto used the gelatin found in gummy bears to create a fake finger that fooled a fingerprint reader.

At the time, this news WAS really “scary,” since it suggested that you could access a fingerprint reader-protected site with something that wasn’t a finger. Gelatin. A piece of metal. A photograph.

Except that the fingerprint reader world didn’t stand still after 2002, and the industry developed ways to detect spoofed fingers. Here’s a recent example of presentation attack detection (liveness detection) from TECH5:

TECH5 participated in the 2023 LivDet Non-contact Fingerprint competition to evaluate its latest NN-based fingerprint liveness detection algorithm and has achieved first and second ranks in the “Systems” category for both single- and four-fingerprint liveness detection algorithms respectively. Both submissions achieved the lowest error rates on bonafide (live) fingerprints. TECH5 achieved 100% accuracy in detecting complex spoof types such as Ecoflex, Playdoh, wood glue, and latex with its groundbreaking Neural Network model that is only 1.5MB in size, setting a new industry benchmark for both accuracy and efficiency.

From https://tech5.ai/tech5s-mobile-fingerprint-liveness-detection-technology-ranked-the-most-accurate-in-the-market/

TECH5 excelled in detecting fake fingers in “non-contact” reading, where the fingers never touch a surface. That’s appreciably harder than detecting fake fingers presented to contact devices.

I should note that LivDet is an independent assessment. As I’ve said before, independent technology assessments provide some guidance on the accuracy and performance of technologies.

So gummy fingers and future threats can be addressed as they arrive.

But at least gummy fingers aren’t racist.

Example 2: Gender shades

In 2017-2018, the Algorithmic Justice League set out to answer this question:

How well do IBM, Microsoft, and Face++ AI services guess the gender of a face?

From http://gendershades.org/. Yes, that’s “http,” not “https.” But I digress.

Let’s stop right there for a moment and address two items before we continue. Trust me; it’s important.

  1. This study evaluated only three algorithms: one from IBM, one from Microsoft, and one from Face++. It did not evaluate the hundreds of other facial recognition algorithms that existed in 2018 when the study was released.
  2. The study focused on gender classification and race classification. Back in those primitive, innocent days of 2018, the world assumed that you could look at a person and tell whether the person was male or female, or tell the race of a person. (The phrase “self-identity” had not yet become popular, despite the Rachel Dolezal episode, which happened before the Gender Shades study.) Most importantly, the study did not address identification of individuals at all.

However, the study did find something:

While the companies appear to have relatively high accuracy overall, there are notable differences in the error rates between different groups. Let’s explore.

All companies perform better on males than females with an 8.1% – 20.6% difference in error rates.

All companies perform better on lighter subjects as a whole than on darker subjects as a whole with an 11.8% – 19.2% difference in error rates.

When we analyze the results by intersectional subgroups – darker males, darker females, lighter males, lighter females – we see that all companies perform worst on darker females.

From http://gendershades.org/overview.html

What does this mean? It means that if you are using one of these three algorithms solely for the purpose of determining a person’s gender and race, some results are more accurate than others.

Three algorithms do not predict hundreds of algorithms, and classification is not identification. If you’re interested in more information on the differences between classification and identification, see Bredemarket’s November 2021 submission to the Department of Homeland Security. (Excerpt here.)

And all the stories about people such as Robert Williams being wrongfully arrested based upon faulty facial recognition results have nothing to do with Gender Shades. I’ll address this briefly (for once):

  • In the United States, facial recognition identification results should only be used by the police as an investigative lead, and no one should be arrested solely on the basis of facial recognition. (The city of Detroit stated that Williams’ arrest resulted from “sloppy” detective work.)
  • If you are using facial recognition for criminal investigations, your people had better have forensic face training. (Then they would know, as Detroit investigators apparently didn’t know, that the quality of surveillance footage is important.)
  • If you’re going to ban computerized facial recognition (even when only used as an investigative lead, and even when only used by properly trained individuals), consider the alternative of human witness identification. Or witness misidentification. Roeling Adams, Reggie Cole, Jason Kindle, Adam Riojas, Timothy Atkins, Uriah Courtney, Jason Rivera, Vondell Lewis, Guy Miles, Luis Vargas, and Rafael Madrigal can tell you how inaccurate (and racist) human facial recognition can be. See my LinkedIn article “Don’t ban facial recognition.”

Obviously, facial recognition has been the subject of independent assessments, including continuous bias testing by the National Institute of Standards and Technology as part of its Face Recognition Vendor Test (FRVT), specifically within the 1:1 verification testing. And NIST has measured the identification bias of hundreds of algorithms, not just three.

In fact, people who were calling for facial recognition to be banned just a few years ago are now questioning the wisdom of those decisions.

But those days were quaint. Men were men, women were women, and artificial intelligence was science fiction.

The latter has certainly changed.

Example 3: Voice spoofs

Perhaps it’s an exaggeration to say that recent artificial intelligence advances will change the world. Perhaps it isn’t. Personally I’ve been concentrating on whether AI writing can adopt the correct tone of voice, but what if we take the words “tone of voice” literally? Let’s listen to President Richard Nixon:

From https://www.youtube.com/watch?v=2rkQn-43ixs

Richard Nixon never spoke those words in public, although it’s possible that he may have rehearsed William Safire’s speech, composed in case Apollo 11 had not resulted in one giant leap for mankind. As noted in the video, Nixon’s voice and appearance were spoofed using artificial intelligence to create a “deepfake.”

It’s one thing to alter the historical record. It’s another thing altogether when a fraudster spoofs YOUR voice and takes money out of YOUR bank account. By definition, you will take that personally.

In early 2020, a branch manager of a Japanese company in Hong Kong received a call from a man whose voice he recognized—the director of his parent business. The director had good news: the company was about to make an acquisition, so he needed to authorize some transfers to the tune of $35 million. A lawyer named Martin Zelner had been hired to coordinate the procedures and the branch manager could see in his inbox emails from the director and Zelner, confirming what money needed to move where. The manager, believing everything appeared legitimate, began making the transfers.

What he didn’t know was that he’d been duped as part of an elaborate swindle, one in which fraudsters had used “deep voice” technology to clone the director’s speech…

From https://www.forbes.com/sites/thomasbrewster/2021/10/14/huge-bank-fraud-uses-deep-fake-voice-tech-to-steal-millions/?sh=8e8417775591

Now I’ll grant that this is an example of human voice verification, which can be as inaccurate as the previously referenced human witness misidentification. But are computerized systems any better, and can they detect spoofed voices?

Well, in the same way that fingerprint readers worked to overcome gummy bears, voice readers are working to overcome deepfake voices. Here’s what one company, ID R&D, is doing to combat voice spoofing:

IDVoice Verified combines ID R&D’s core voice verification biometric engine, IDVoice, with our passive voice liveness detection, IDLive Voice, to create a high-performance solution for strong authentication, fraud prevention, and anti-spoofing verification.

Anti-spoofing verification technology is a critical component in voice biometric authentication for fraud prevention services. Before determining a match, IDVoice Verified ensures that the voice presented is not a recording.

From https://www.idrnd.ai/idvoice-verified-voice-biometrics-and-anti-spoofing/

This is only the beginning of the war against voice spoofing. Other companies will pioneer new advances that will tell the real voices from the fake ones.

As for independent testing:

A final thought

Yes, fraudsters can use advanced tools to do bad things.

But the people who battle fraudsters can also use advanced tools to defeat the fraudsters.

Take care of yourself, and each other.

Jerry Springer. By Justin Hoch, CC BY 2.0, https://commons.wikimedia.org/w/index.php?curid=16673259

Three Ways to Identify and Share Your Identity Firm’s Differentiators

(Part of the biometric product marketing expert series)

Are you an executive with a small or medium sized identity/biometrics firm?

If so, you want to share the story of your identity firm. But what are you going to say?

How will you figure out what makes your firm better than all the inferior identity firms that compete with you?

How will you get the word out about why your identity firm beats all the others?

Are you getting tired of my repeated questions?

Are you ready for the answers?

Your identity firm differs from all others

Over the last 29 years, I (John E. Bredehoft of Bredemarket) have worked for and with over a dozen identity firms, either as an employee or as a consultant.

You’d think that since I have worked with so many different identity firms, I could start working with a new firm by simply slapping down the messaging that I’ve created for all the other identity firms.

Nothing could be further from the truth.

Designed by Freepik.

Every identity firm needs different messaging.

  • The messaging that I created in my various roles at IDEMIA and its corporate predecessors was dramatically different than the messaging I created as a Senior Product Marketing Manager at Incode Technologies, which was also very different from the messaging that I created for my previous Bredemarket clients.
  • IDEMIA benefits such as “servicing your needs anywhere in the world” and “applying our decades of identity experience to solve your problems” are not going to help with a U.S.-only firm that’s only a decade old.
  • Similarly, messaging for a company that develops its own facial recognition algorithms will necessarily differ from messaging for a company that chooses the best third-party facial recognition algorithms on the market.

So which messaging is right?

It depends on who is paying me.

How your differences affect your firm’s messaging

When creating messaging for your identity firm, one size does not fit all, for the reasons listed above.

The content of your messaging will differ, based upon your differentiators.

  • For example, if you were the U.S.-only firm established less than ten years ago, your messaging would emphasize the newness of your solution and approach, as opposed to the stodgy legacy companies that never updated their ideas.
  • And if your firm has certain types of end users, such as law enforcement users, your messaging would probably feature an abundance of U.S. flags.

In addition, the channels that you use for your messaging will differ.

Identity firms will not want to market on every single social media channel. They will only market on the channels where their most motivated buyers are present.

  • That may be your own website.
  • Or LinkedIn.
  • Or Facebook.
  • Or Twitter.
  • Or Instagram.
  • Or YouTube.
  • Or TikTok.
  • Or a private system only accessible to people with a Top Secret Clearance.
  • Or display advertisements located in airports.
From https://www.youtube.com/watch?v=H02iwWCrXew

It may be more than one of these channels, but it probably won’t be all of them.

But before you work on your content or channels, you need to know what to say, and how to communicate it.

How to know and communicate your differentiators

As we’ve noted, your firm is different from all others.

  • How do you know the differences?
  • How do you know what you want to talk about?
  • How do you know what you DON’T want to talk about?

Here are three methods to get you started on knowing and communicating your differentiators in your content.

Method One: The time-tested SWOT analysis

If you talk to a marketer for more than two seconds about positioning a company, the marketer will probably throw the acronym “SWOT” back at you. I’ve mentioned the SWOT acronym before.

For those who don’t know the acronym, SWOT stands for

  • Strengths. These are internal attributes that benefit your firm. For example, your firm is winning a lot of business and growing in customer count and market share.
  • Weaknesses. These are also internal attributes, but in this case the attributes that detract from your firm. For example, you have very few customers.
  • Opportunities. These are external factors that enhance your firm. One example is a COVID or similar event that creates a surge in demand for contactless solutions.
  • Threats. The flip side is external factors that can harm your firm. One example is increasing privacy regulations that can slow or halt adoption of your product or service.

If you’re interested in more detail on the topic, there are a number of online sources that discuss SWOT analyses. Here’s TechTarget’s discussion of SWOT.

The common way to present the output of a SWOT analysis is to draw four boxes and list each element (S, W, O, and T) within a box.

By Syassine – Own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=31368987

Once this is done, you’ll know that your messaging should emphasize the strengths and opportunities, and downplay or avoid the weaknesses and threats.

Or alternatively argue that the weaknesses and threats are really strengths and opportunities. (I’ve done this before.)

Method Two: Think before you create

Personally, I believe that a SWOT analysis is not enough. Before you use the SWOT findings to create content, there’s a little more work you have to do.

I recommend that before you create content, you should hold a kickoff of the content creation process and figure out what you want to do before you do it.

During that kickoff meeting, you should ask some questions to make sure you understand what needs to be done.

I’ve written about kickoffs and questions before, and I’m not going to repeat what I already said. If you want to know more:

Method Three: Send in the reinforcements

Now that you’ve locked down the messaging, it’s time to actually create the content that differentiates your identity firm from all the inferior identity firms in the market. While some companies can proceed right to content creation, others may run into one of two problems.

  • The identity firm doesn’t have any knowledgeable writers on staff. To create the content, you need people who understand the identity industry, and who know how to write. Some firms lack people with this knowledge and capability.
  • The identity firm has knowledgeable writers on staff, but they’re busy. Some companies have too many things to do at once, and any knowledgeable writers that are on staff may be unavailable due to other priorities.
Your current staff may have too much to do. By Backlit – Own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=12225421

This is where you supplement your identity firm’s existing staff with one or more knowledgeable writers who can work with you to create the content that leaves your inferior competitors in the dust.

What is next?

So do you need a knowledgeable biometric content marketing expert to create your content?

One who has been in the biometric industry for 29 years?

One who has been writing short and long form content for more than 29 years?

Are you getting tired of my repeated questions again?

Well then I’ll just tell you that Bredemarket is the answer to your identity/biometric content marketing needs.

Are you ready to take your identity firm to the next level with a compelling message that increases awareness, consideration, conversion, and long-term revenue? Let’s talk today!

Why Apple Vision Pro Is a Technological Biometric Advance, but Not a Revolutionary Biometric Event

(Part of the biometric product marketing expert series)

(UPDATE JUNE 24: CORRECTED THE YEAR THAT COVID BEGAN.)

I haven’t said anything publicly about Apple Vision Pro, so it’s time for me to be “how do you do, fellow kids” trendy and jump on the bandwagon.

Actually…

It ISN’T time for me to jump on the Apple Vision Pro bandwagon, because while Apple Vision Pro affects the biometric industry, it’s not a REVOLUTIONARY biometric event.

The four revolutionary biometric events in the 21st century

How do I define a “revolutionary biometric event”?

By Alberto Korda – Museo Che Guevara, Havana Cuba, Public Domain, https://commons.wikimedia.org/w/index.php?curid=6816940

I define it as something that completely transforms the biometric industry.

When I mention three of the four revolutionary biometric events in the 21st century, you will understand what I mean.

  • 9/11. After 9/11, orders of biometric devices skyrocketed, and biometrics were incorporated into identity documents such as passports and driver’s licenses. Who knows, maybe someday we’ll actually implement REAL ID in the United States. The latest extension of the REAL ID enforcement date moved it out to May 7, 2025. (Subject to change, of course.)
  • The Boston Marathon bombings, April 2013. After the bombings, the FBI was challenged in managing and analyzing countless hours of video evidence. Companies such as IDEMIA National Security Solutions, MorphoTrak, Motorola, Paravision, Rank One Computing, and many others have tirelessly worked to address this challenge, ensuring that facial recognition results accurately identify perpetrators while protecting the privacy of others in the video feeds.
  • COVID-19, spring 2020 and beyond. COVID accelerated changes that were already taking place in the biometric industry. COVID prioritized mobile, remote, and contactless interactions and forced businesses to address issues that were not as critical previously, such as liveness detection.

These three are cataclysmic world events that had a profound impact on biometrics. The fourth one, which occurred after the Boston Marathon bombings but before COVID, was…an introduction of a product feature.

  • Touch ID, September 2013. When Apple introduced the iPhone 5s, it also introduced a new way to log in to the device. Rather than entering a passcode, iPhone 5s users could just use their finger to log in. The technical accomplishment was dwarfed by the legitimacy that this brought to using fingerprints for identification. Before 2013, attempts to implement fingerprint verification for benefits recipients were resisted because fingerprinting was something that criminals did. After September 2013, fingerprinting was something that the cool Apple kids did. The biometric industry changed overnight.

Of course, Apple followed Touch ID with Face ID, with adherents of the competing biometric modalities sparring over which was better. But Face ID wouldn’t have been accepted as widely if Touch ID hadn’t paved the way.

So why hasn’t iris verification taken off?

Iris verification has been around for decades (I remember Iridian before L-1; it’s now part of IDEMIA), but iris verification is nowhere near as popular in the general population as finger and face verification. There are two reasons for this:

  • Compared to other biometrics, irises are hard to capture. To capture a fingerprint, you can lay your finger on a capture device, or “slap” your four fingers on a capture device, or even “wave” your fingers across a capture device. Faces are even easier to capture; while older face capture systems required you to stand close to the camera, modern face devices can capture your face as you are walking by the camera, or even if you are some distance from the camera.
  • Compared to other biometrics, irises are expensive to capture. Many years ago, my then-employer developed a technological marvel, an iris capture device that could accurately capture irises for people of any height. Unfortunately, the technological marvel cost thousands upon thousands of dollars, and no customers were going to use it when they could acquire fingerprint and face capture devices that were much less costly.

So while people rushed to implement finger and face capture on phones and other devices, iris capture was reserved for narrow verticals that required iris accuracy.

With one exception. Samsung incorporated Princeton Identity technology into its Samsung Galaxy S8 in 2017. But the iris security was breached by a “dummy eye” just a month later, in the same way that gummy fingers and face masks have defeated other biometric technologies. (This is why liveness detection is so important.) And while Samsung offered iris verification, it hadn’t been adopted by Apple and therefore wasn’t cool.

Until now.

About the Apple Vision Pro and Optic ID

The Apple Vision Pro is not the first headset ever created, but then the iPhone wasn’t the first smartphone either. And coming late to the game doesn’t matter. Apple’s visibility among trendsetters ensures that when Apple releases something, people take notice.

And when all of us heard about Vision Pro, one of the things that Apple shared about it was its verification technique. Not Touch ID or Face ID, but Optic ID. (I like naming consistency.)

According to Apple, Optic ID works by analyzing a user’s iris through LED light exposure and then comparing it with an enrolled Optic ID stored on the device’s Secure Enclave….Optic ID will be used for everything from unlocking Vision Pro to using Apple Pay in your own headspace.

From The Verge, https://www.theverge.com/2023/6/5/23750147/apple-optic-id-vision-pro-iris-biometrics

So why did Apple incorporate Optic ID on this device and not the others?

There are multiple reasons, but one key reason is that the Vision Pro retails for US$3,499, which makes it easier for Apple to justify the cost of the iris components.

But the high price of the Vision Pro comes at…a price

However, that high price is also the reason why the Vision Pro is not going to revolutionize the biometric industry. CNET admitted that the Vision Pro is a niche item:

At $3,499, Apple’s Vision Pro costs more than three weeks worth of pay for the average American, according to Bureau of Labor Statistics data. It’s also significantly more expensive than rival devices like the upcoming $500 Meta Quest 3, $550 Sony PlayStation VR 2 and even the $1,000 Meta Quest Pro

From CNET, https://www.cnet.com/tech/computing/why-apple-vision-pros-3500-price-makes-more-sense-than-you-think/

Now CNET did go on to say the following:

With Vision Pro, Apple is trying to establish what it believes will be the next major evolution of the personal computer. That’s a bigger goal than selling millions of units on launch day, and a shift like that doesn’t happen overnight, no matter what the price is. The version of Vision Pro that Apple launches next year likely isn’t the one that most people will buy.

From CNET, https://www.cnet.com/tech/computing/why-apple-vision-pros-3500-price-makes-more-sense-than-you-think/

Certainly Vision Pro and Optic ID have the potential to revolutionize the computing industry…in the long term. And as that happens, the use of iris biometrics will become more popular with the general public…in the long term.

But not today. You’ll have to wait a little longer for the next biometric revolution. And hopefully it won’t be a catastrophic event like three of the previous revolutions.

Using “Multispectral” and “Liveness” in the Same Sentence

(Part of the biometric product marketing expert series)

Now that I’m plunging back into the fingerprint world, I’m thinking about all the different types of fingerprint readers.

  • The optical fingerprint and palm print readers are still around.
  • And the capacitive fingerprint readers still, um, persist.
  • And of course you have the contactless fingerprint readers such as MorphoWave, one that I know about.
  • And then you have the multispectral fingerprint readers.

What is multispectral?

Bayometric offers a web page that covers some of these fingerprint reader types, and points out the drawbacks of some of the readers they discuss.

Latent prints are usually produced by sweat, skin debris or other sebaceous excretions that cover up the palmar surface of the fingertips. If a latent print is on the glass platen of the optical sensor and light is directed on it, this print can fool the optical scanner….

Capacitive sensors can be spoofed by using gelatin based soft artificial fingers.

From https://www.bayometric.com/fingerprint-reader-technology-comparison/

There is another weakness of these types of readers. Some professions damage and wear away a person’s fingerprint ridges. Examples of professions whose practitioners exhibit worn ridges include construction workers and biometric content marketing experts (who, at least in the old days, handled a lot of paper).

The solution is to design a fingerprint reader that not only examines the surface of the finger, but goes deeper.

From HID Global, “A Guide to MSI Technology: How It Works,” https://blog.hidglobal.com/2022/10/guide-msi-technology-how-it-works

The specialty of multispectral sensors is that it can capture the features of the tissue that lie below the skin surface as well as the usual features on the finger surface. The features under the skin surface are able to provide a second representation of the pattern on the fingerprint surface.

From https://www.bayometric.com/fingerprint-reader-technology-comparison/

Multispectral sensors are nothing new. When I worked for Motorola, Motorola Ventures had invested in a company called Lumidigm that produced multispectral fingerprint sensors; they were much more expensive than your typical optical or capacitive sensor, but were much more effective in capturing true fingerprints to the subdermal level.

Lumidigm was eventually acquired in 2014: not by Motorola (which sold off its biometric assets such as Printrak and Symbol), but by HID Global. HID Global continues to produce Lumidigm-branded multispectral fingerprint sensors to this day.

But let’s take a look at the other word I bandied about.

What is liveness?

KISS, Alive! By Obtained from allmusic.com., Fair use, https://en.wikipedia.org/w/index.php?curid=2194847

“Gelatin based soft artificial fingers” aren’t the only way to fool a biometric sensor, whether you’re talking about a fingerprint sensor or some other sensor such as a face sensor.

Regardless of the biometric modality, the intent is the same. Instead of capturing a true biometric from a person, the biometric sensor is fooled into capturing a fake biometric: an artificial finger, a face with a mask on it, or a face on a video screen (rather than the face of a live person).

This tomfoolery is called a “presentation attack” (because you’re attacking security with a fake presentation).

But the standards folks have developed ISO/IEC 30107-3:2023, Information technology — Biometric presentation attack detection — Part 3: Testing and reporting.

And an organization called iBeta is one of the testing facilities authorized to test in accordance with the standard and to determine whether a biometric reader can detect the “liveness” of a biometric sample.

(Friends, I’m not going to get into passive liveness and active liveness. That’s best saved for another day.)

[UPDATE 4/24/2024: I FINALLY ADDRESSED THE DIFFERENCE BETWEEN ACTIVE AND PASSIVE LIVENESS HERE.]

Multispectral liveness

While multispectral fingerprint readers aren’t the only fingerprint readers, or the only biometric readers, that iBeta has tested for liveness, the HID Global Lumidigm readers conform to Level 2 (the higher level) of iBeta testing.

There’s a confirmation letter and everything.

From the iBeta website.

This letter was issued in 2021. For some odd reason, HID Global decided to publicize this in 2023.

Oh well. It’s good to occasionally remind people of stuff.

I’m still the biometric content marketing and proposal writing expert…but who benefits?

Beginning about a year ago, I began marketing myself as the biometric proposal writing expert and biometric content marketing expert. From a search engine optimization perspective, I have succeeded: Bredemarket tops the organic search results for these phrases.

Well, it seemed like a good idea at the time.

And maybe it still is.

Let’s look at why I declared myself the biometric proposal writing expert (BPWE) and biometric content marketing expert (BCME) in mid-2021, what happened over the last few months, why it happened, and who benefits.

Why am I the BPWE and BCME?

At the time that I launched this marketing effort, I wanted to establish Bredemarket’s biometric credentials. I was primarily providing my expertise to identity/biometric firms, so it made sense to emphasize my 25+ years of identity/biometric expertise, coupled with my proposal, marketing, and product experience. Some of my customers already knew this, but others did not.

So I coupled the appropriate identity words with the appropriate proposal and content words, and plunged full-on into the world of biometric proposal writing expert (BPWE within Bredemarket’s luxurious offices) and biometric content marketing expert (BCME here) marketing.

What happened?

There’s been one more thing that’s been happening in Bredemarket’s luxurious offices over the last couple of months.

I’ve been uttering the word “pivot” a lot.

Since March 2022, I’ve made a number of changes at Bredemarket, including pricing changes and modifications to my office hours. But this post concentrates on a change that affects the availability of the BPWE and BCME.

Let’s say that it’s December 2022, and someone performs a Google, Bing, or DuckDuckGo search for a biometric content marketing expert. The person finds Bredemarket, and excitedly goes to Bredemarket’s biometric content marketing expert page, only to encounter this text at the top of the page:

Update 4/25/2022: Effective immediately, Bredemarket does NOT accept client work for solutions that identify individuals using (a) friction ridges (including fingerprints and palm prints), (b) faces, and/or (c) secure documents (including driver’s licenses and passports). 

“Thanks a lot,” thinks the searcher.

Granted, there are others such as Tandem Technical Writing and Applied Forensic Services who can provide biometric consulting services, but the searcher won’t get the chance to work with ME.

Should have contacted me before April 2022.

Sheila Sund from Salem, United States, CC BY 2.0 https://creativecommons.org/licenses/by/2.0, via Wikimedia Commons

Why did it happen?

I’ve already shared some (not all) details about why I’m pivoting with the Bredemarket community, but perhaps you didn’t get the memo.

I have accepted a full-time position as a Senior Product Marketing Manager with an identity company. (I’ll post the details later on my personal LinkedIn account, https://www.linkedin.com/in/jbredehoft/.) This dramatically decreases the amount of time I can spend on my Bredemarket consultancy, and also (for non-competition reasons) limits the companies with which I can do business. 

Those of you who have followed Bredemarket from the beginning will remember that Bredemarket was only one part of a two-pronged approach. After becoming a “free agent” (also known as “being laid off”) in July 2020, my initial emphasis was on finding full-time employment. Within a month, however, I found myself accepting independent contracting projects, and formally established Bredemarket to handle that work. Therefore, I was simultaneously (a) looking for full-time work, and (b) growing my consulting business. And I’ve been doing both simultaneously for over a year and a half. 

Now that I’ve found full-time employment again, I’m not going to give up the consulting business. But it’s definitely going to have to change, as outlined in my April 25, 2022 update.

So now all of this SEO traction will not benefit you, the potential Bredemarket finger/face client, but it obviously will benefit my new employer. I can see it now when people talk about my new employer: “Isn’t that the company where the biometric content marketing expert is the Senior Product Marketing Manager?”

At least somebody will benefit.

P.S. There’s a “change” Spotify playlist. Unlike Kevin Meredith, I don’t use my playlists to make sure my presentation is within the allotted time. Especially when I create my longer 100-plus song playlists; no one wants to hear me speak for that long. Thankfully for you, this playlist is only a little over an hour long, and includes various songs on change, moving, endings, beginnings, and time.

Who is THE #1 NIST facial recognition vendor?

(Part of the biometric product marketing expert series)

As I’ve noted before, there are a number of facial recognition companies that claim to be the #1 NIST facial recognition vendor. I’m here to help you cut through the clutter so you know who the #1 NIST facial recognition vendor truly is.

You can confirm this information yourself by visiting the NIST FRVT 1:1 Verification and FRVT 1:N Identification pages. FRVT, by the way, stands for “Face Recognition Vendor Test.”

So I can announce to you that as of February 23, 2022, the #1 NIST facial recognition vendor is Cloudwalk.

And Sensetime.

And Beihang University ERCACAT.

And Cubox.

And Adera.

And Chosun University.

And iSAP Solution Corporation.

And Bitmain.

And Visage Technologies.

And Expasoft LLC.

And Paravision.

And NEC.

And Ptakuratsatu.

And Ayonix.

And Rank One.

And Dermalog.

And Innovatrics.

Now how can ALL dozen-plus of these entities be number 1?

Easy.

The NIST 1:1 and 1:N tests include many different accuracy and performance measurements, and each of the entities listed above placed #1 in at least one of these measurements. And all of the databases, database sizes, and use cases measure very different things.

Transportation Security Administration Checkpoint at John Glenn Columbus International Airport. By Michael Ball – Own work, CC0, https://commons.wikimedia.org/w/index.php?curid=77279000

For example:

  • Visage Technologies was #1 in the 1:1 performance measurements for template generation time, in milliseconds, for 480×720 and 960×1440 data.
  • Meanwhile, NEC was #1 in the 1:N Identification (T>0) accuracy measurements for gallery border, probe border with a delta T greater than or equal to 10 years, N = 1.6 million.
  • Not to be confused with the 1:N Identification (T>0) accuracy measurements for gallery visa, probe border, N = 1.6 million, where the #1 algorithm was not from NEC.
  • And not to be confused with the 1:N Investigation (R = 1, T = 0) accuracy measurements for gallery border, probe border with a delta T greater than or equal to 10 years, N = 1.6 million, where the #1 algorithm was not from NEC.
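The mechanics of “everyone is #1” are easy to see if you think of the NIST results as one big table of (vendor, measurement, score) rows: each measurement has its own winner, so with enough measurements, many vendors end up #1 somewhere. Here’s a minimal sketch using entirely hypothetical vendors and scores (lower is better for each measurement, as with error rates and template generation times):

```python
# Hypothetical results table: (vendor, measurement, score).
# Lower scores win, as with error rates and milliseconds.
results = [
    ("VendorA", "1:1 FNMR, visa photos",            0.003),
    ("VendorB", "1:1 FNMR, visa photos",            0.004),
    ("VendorA", "1:N FNIR, border, N=1.6M",         0.012),
    ("VendorB", "1:N FNIR, border, N=1.6M",         0.009),
    ("VendorC", "template generation time (ms)",    55.0),
    ("VendorA", "template generation time (ms)",    80.0),
]

def number_ones(rows):
    """Return the #1 (lowest-scoring) vendor for each measurement."""
    best = {}
    for vendor, measurement, score in rows:
        if measurement not in best or score < best[measurement][1]:
            best[measurement] = (vendor, score)
    return {measurement: vendor for measurement, (vendor, _) in best.items()}

winners = number_ones(results)
print(winners)
# Three measurements, three different #1 vendors --
# and every one of them can truthfully claim a #1 NIST result.
```

With dozens of accuracy and performance measurements across the real 1:1 and 1:N tests, the number of truthful “#1” claims multiplies accordingly.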

And can I add a few more caveats?

First caveat: Since all of these tests are ongoing tests, you can probably find a slightly different set of #1 algorithms if you look at the January data, and you will probably find a slightly different set of #1 algorithms when the March data is available.

Second caveat: These are the results for the unqualified #1 NIST categories. You can add qualifiers, such as “#1 non-Chinese vendor” or “#1 western vendor” or “#1 U.S. vendor” to vault a particular algorithm to the top of the list.

Third caveat: You can add even more qualifiers, such as “within the top five NIST vendors” and (one I admit to having used before) “a top tier NIST vendor in multiple categories.” This can mean whatever you want it to mean. (As can “dramatically improved” algorithm, which may mean that you vaulted from position #300 to position #200 in one of the categories.)

Fourth caveat: Even if a particular NIST test applies to your specific use case, #1 performance on a NIST test does not guarantee that a facial recognition system supplied by that entity will yield #1 performance with your database in your environment. The algorithm sent to NIST may or may not make it into a production system. And even if it does, performance against a particular NIST test database may not yield the same results as performance against a Rhode Island criminal database, a French driver’s license database, or a Nigerian passport database. For more information on this, see Mike French’s LinkedIn article “Why agencies should conduct their own AFIS benchmarks rather than relying on others.”

So now that you know who the #1 NIST facial recognition vendor is, do you feel more knowledgeable?

Although I’ll grant that a NIST accuracy or performance claim is better than some other claims, such as self-test results.