Updates, updates, updates, updates…

If I hired myself to update the Bredemarket website, I’d be employed full time.

Early June website updates

My “opportunity” that allowed me to service identity clients again necessitated several changes to the website, which I documented in a June 1 post entitled “Updates, updates, updates…”

Then I had to return to this website to make some hurried updates, since my April 2022 prohibition on taking certain types of work is no longer in effect as of June 2023. Hence, my home page, my “What I Do” page, and (obviously) my identity page are all corrected.

From https://bredemarket.com/2023/06/01/updates-updates-updates/

Basically, I had gone to great trouble to document that Bredemarket would NOT take identity work, so I had to revise a lot of pages to say that Bredemarket WOULD take identity work.

I found a few additional pages to fix after June 1, but eventually I reached the point where everything on the Bredemarket website was completely and totally updated, and I wouldn’t have to make any other changes.

You can predict where this is going.

Who I…was

Today it occurred to me that some of the readers of the LinkedIn Bredemarket page may not know the person behind Bredemarket, so I took the opportunity to share Bredemarket’s “Who I Am” web page on the LinkedIn page.

Only then did I read what the page actually said.

So THAT page was also updated (updates in red).

From https://bredemarket.com/who-i-am/ as of August 8, 1:35 pm PDT. Subject to change.

So yes, this biometric content marketing expert/identity content marketing expert IS available for your content marketing needs. If you’re interested in receiving my help with your identity written content, contact me.

To be continued, probably…

The Difference Between Identity Factors and Identity Modalities

(Part of the biometric product marketing expert series)

I know that I’m the guy who likes to say that it’s all semantics. After all, I’m the person who has referred to five-page long documents as “battlecards.”

But sometimes the semantics are critically important. Take the terms “factors” and “modalities.” On the surface they sound similar, but in practice there is an extremely important difference between factors of authentication and modalities of authentication. Let’s discuss.

What is a factor?

To answer the question “what is a factor,” let me steal from something I wrote back in 2021 called “The five authentication factors.”

Something You Know. Think “password.” And no, passwords aren’t dead. But the use of your mother’s maiden name as an authentication factor is hopefully decreasing.

Something You Have. I’ve spent much of the last ten years working with this factor, primarily in the form of driver’s licenses. (Yes, MorphoTrak proposed driver’s license systems. No, they eventually stopped doing so. But obviously IDEMIA North America, the former MorphoTrust, has implemented a number of driver’s license systems.) But there are other examples, such as hardware or software tokens.

Something You Are. I’ve spent…a long time with this factor, since this is the factor that includes biometrics modalities (finger, face, iris, DNA, voice, vein, etc.). It also includes behavioral biometrics, provided that they are truly behavioral and relatively static.

Something You Do. The Cybersecurity Man chose to explain this in a non-behavioral fashion, such as using swiping patterns to unlock a device. This is different from something such as gait recognition, which supposedly remains constant and is thus classified as behavioral biometrics.

Somewhere You Are. This is an emerging factor, as smartphones become more and more prevalent and locations are therefore easier to capture. Even then, however, precision isn’t always as good as we want it to be. For example, when you and a few hundred of your closest friends have illegally entered the U.S. Capitol, you can’t use geolocation alone to determine who exactly is in Speaker Pelosi’s office.

From https://bredemarket.com/2021/03/02/the-five-authentication-factors/
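For illustration, the five factors can be modeled as a simple enumeration, along with a hypothetical check (my naming, not any industry standard) that a credential set truly spans multiple factors:

```python
from enum import Enum

class Factor(Enum):
    """The five authentication factors listed above."""
    KNOW = "something you know"    # e.g., a password
    HAVE = "something you have"    # e.g., a driver's license or token
    ARE = "something you are"      # e.g., a fingerprint or iris
    DO = "something you do"        # e.g., a swiping pattern
    WHERE = "somewhere you are"    # e.g., geolocation

def is_multifactor(presented: list[Factor]) -> bool:
    """True only when the credentials span at least two DISTINCT factors."""
    return len(set(presented)) >= 2

# A password plus a fingerprint spans two factors...
print(is_multifactor([Factor.KNOW, Factor.ARE]))   # True
# ...but a password plus a PIN is still just one factor.
print(is_multifactor([Factor.KNOW, Factor.KNOW]))  # False
```

The point of the sketch: piling on more credentials of the same kind never makes a scheme multi-factor.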

(By the way, if you search the series of tubes for reading material on authentication factors, you’ll find a lot of references to only three authentication factors, including references from some very respectable sources. Those sources are only 60% right, since they leave off the final two factors I listed above. It’s five factors of authentication, folks. Maybe.)

One striking thing about the five factors is that while they can all be used to authenticate (and verify) identities, they are inherently different from one another. The ridges of my fingerprint bear no relation to my 16-character password, nor do they bear any relation to my driver’s license. These differences are critical, as we shall see.

What is a modality?

In identity usage, a modality refers to different variations of the same factor. This is most commonly used with the “something you are” (biometric) factor, but it doesn’t have to be.

Biometric modalities

The identity company Aware, which offers multiple biometric solutions, spent some time discussing several different biometric modalities.

[M]any businesses and individuals (are adopting) biometric authentication as it [has] been established as the most secure authentication method surpassing passwords and pins. There are many modalities of biometric authentication to pick from, but which method is the best?

From https://www.aware.com/blog-which-biometric-authentication-method-is-the-best/

After looking at fingerprints, faces, voices, and irises, Aware basically answered its “best” question by concluding “it depends.” Different modalities have their own strengths and weaknesses, depending upon the use case. (If you wear thick gloves as part of your daily work, forget about fingerprints.)

ID R&D goes a step further and argues that it’s best to use multimodal biometrics, with face and voice as the two modalities. (By an amazing coincidence, ID R&D offers face and voice solutions.)

And there are many other biometric modalities.

From Sandeep Kumar, A. Sony, Rahul Hooda, Yashpal Singh, in Journal of Advances and Scholarly Researches in Allied Education | Multidisciplinary Academic Research, “Multimodal Biometric Authentication System for Automatic Certificate Generation.”

Non-biometric modalities

But the word “modalities” is not reserved for biometrics alone. The scientific paper “Multimodal User Authentication in Smart Environments: Survey of User Attitudes,” just released in May, includes this image that lists various modalities. As you can see, two of the modalities are not like the others.

From Aloba, Aishat & Morrison-Smith, Sarah & Richlen, Aaliyah & Suarez, Kimberly & Chen, Yu-Peng & Ruiz, Jaime & Anthony, Lisa. (2023). Multimodal User Authentication in Smart Environments: Survey of User Attitudes. Creative Commons Attribution 4.0 International
  • The three modalities in the middle—face, voice, and fingerprint—are all clearly biometric “something you are” modalities.
  • But the modality on the left, “Make a body movement in front of the camera,” is not a biometric modality (despite its reference to the body), but is an example of “something you do.”
  • Passwords, of course, are “something you know.”

In fact, each authentication factor has multiple modalities.

  • For example, a few of the modalities associated with “something you have” include driver’s licenses, passports, hardware tokens, and even smartphones.
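As a hypothetical illustration (the mapping below is mine, and far from exhaustive), the factor/modality relationship is essentially a one-to-many lookup:

```python
# Hypothetical (and far from exhaustive) factor-to-modality mapping.
MODALITIES = {
    "something you know": ["password", "PIN", "security question"],
    "something you have": ["driver's license", "passport",
                           "hardware token", "smartphone"],
    "something you are":  ["fingerprint", "face", "iris",
                           "voice", "vein", "DNA"],
    "something you do":   ["swiping pattern", "typing rhythm"],
    "somewhere you are":  ["geolocation"],
}

def factor_of(modality: str) -> str:
    """Reverse lookup: find the factor to which a modality belongs."""
    for factor, modalities in MODALITIES.items():
        if modality in modalities:
            return factor
    raise KeyError(f"unknown modality: {modality}")

print(factor_of("passport"))  # something you have
print(factor_of("iris"))      # something you are
```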

Why multifactor is (usually) more robust than multimodal

Modalities within a single authentication factor are more closely related than modalities drawn from different authentication factors. As I mentioned above when talking about factors, there is no relationship between my fingerprint, my password, and my driver’s license. However, there is SOME relationship between my driver’s license and my passport, since the two share some common information such as my legal name and my date of birth.

What does this mean?

  • If I’ve fraudulently created a fake driver’s license in your name, I already have some of the information that I need to create a fake passport in your name.
  • If I’ve fraudulently created a fake iris, there’s a chance that I might already have some of the information that I need to create a fake face.
  • However, if I’ve bought your Coinbase password on the dark web, that doesn’t necessarily mean that I was able to also buy your passport information on the dark web (although it is possible).

Therefore, while multimodal authentication is better than unimodal authentication, multifactor authentication is usually better still (unless, as Incode Technologies notes, one of the factors is really, really weak).
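Putting the two concepts together, here is a hedged sketch (my terminology, not a standard taxonomy) of how one might label an authentication scheme, assuming credentials are expressed as (factor, modality) pairs:

```python
def describe_scheme(credentials: list[tuple[str, str]]) -> str:
    """Label an authentication scheme built from (factor, modality) pairs."""
    factors = {factor for factor, _ in credentials}
    modalities = {modality for _, modality in credentials}
    if len(factors) >= 2:
        return "multifactor"   # distinct factors: usually the strongest
    if len(modalities) >= 2:
        return "multimodal"    # one factor, multiple modalities
    return "unimodal"

# Face + voice: two modalities of the same "something you are" factor.
print(describe_scheme([("something you are", "face"),
                       ("something you are", "voice")]))      # multimodal
# Face + password: two different factors.
print(describe_scheme([("something you are", "face"),
                       ("something you know", "password")]))  # multifactor
```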

Can an identity content marketing expert help you navigate these issues?

As you can see, you need to be very careful when writing about modalities and factors.

You need a biometric content marketing expert who has worked with many of these modalities.

Actually, you need an identity content marketing expert who has worked with many of these factors.

So if you are with an identity company and need to write a blog post, LinkedIn article, white paper, or other piece of content that touches on multifactor and multimodal issues, why not engage with Bredemarket to help you out?

If you’re interested in receiving my help with your identity written content, contact me.

Iris Recognition, Apple, and Worldcoin

(Part of the biometric product marketing expert series)

Iris recognition continues to make the news. Let’s review what iris recognition is and its benefits (and drawbacks), why Apple made the news last month, and why Worldcoin is making the news this month.

What is iris recognition?

There are a number of biometric modalities that can identify individuals by “who they are” (one of the five factors of authentication). A few examples include fingerprints, faces, voices, and DNA. All of these modalities purport to uniquely (or nearly uniquely) identify an individual.

One other way to identify individuals is via the irises in their eyes. I’m not a doctor, but presumably the Cleveland Clinic employs medical professionals who are qualified to define what the iris is.

The iris is the colored part of your eye. Muscles in your iris control your pupil — the small black opening that lets light into your eye.

From https://my.clevelandclinic.org/health/body/22502-iris

And here’s what else the Cleveland Clinic says about irises.

The color of your iris is like your fingerprint. It’s unique to you, and nobody else in the world has the exact same colored eye.

From https://my.clevelandclinic.org/health/body/22502-iris

John Daugman and irises

But why use irises rather than, say, fingerprints and faces? The best person to answer this is John Daugman. (At this point several of you are intoning, “John Daugman.” With reason. He’s the inventor of iris recognition.)

Here’s an excerpt from John Daugman’s 2004 paper on iris recognition:

(I)ris patterns become interesting as an alternative approach to reliable visual recognition of persons when imaging can be done at distances of less than a meter, and especially when there is a need to search very large databases without incurring any false matches despite a huge number of possibilities. Although small (11 mm) and sometimes problematic to image, the iris has the great mathematical advantage that its pattern variability among different persons is enormous.

Daugman, John, “How Iris Recognition Works.” IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, VOL. 14, NO. 1, JANUARY 2004. Quoted from page 21. (PDF)

Or in non-scientific speak, one benefit of iris recognition is that you know it is accurate, even when submitting a pair of irises in a one-to-many search against a huge database. How huge? We’ll discuss later.

Brandon Mayfield and fingerprints

Remember that Daugman’s paper was released roughly two months before Brandon Mayfield was misidentified in a fingerprint comparison. (Everyone now intone “Brandon Mayfield.”)

If you want to know the details of that episode, the Department of Justice Office of the Inspector General issued a 330 page report (PDF) on it. If you don’t have time to read 330 pages, here’s Al Jazeera’s shorter version of Brandon Mayfield’s story.

While some of the issues associated with Mayfield’s misidentification had nothing to do with forensic science (Al Jazeera spends some time discussing bias, and Itiel Dror also looked at bias post-Mayfield), this still shows that fingerprints are remarkably similar and that it takes care to properly identify people.

Police agencies, witnesses, and faces

And of course there are recent examples of facial misidentifications (both by police agencies and witnesses), again not necessarily forensic science related, and again showing the similarity of faces from two different people.

Iris “data richness” and independent testing

Why are irises more accurate than fingerprints and faces? Here’s what one vendor, Iris ID, claims about irises vs. other modalities:

At the root of iris recognition’s accuracy is the data-richness of the iris itself. The IrisAccess system captures over 240 degrees of freedom or unique characteristics in formulating its algorithmic template. Fingerprints, facial recognition and hand geometry have far less detailed input in template construction.

Iris ID, “How It Compares.” (Link)

Enough about claims. What about real results? The IREX 10 test, independently administered by the U.S. National Institute of Standards and Technology, measures the identification (one-to-many) accuracy of submitted algorithms. At the time I am writing this, the ten most accurate algorithms provide false negative identification rates (FNIR) between 0.0022 ± 0.0004 and 0.0037 ± 0.0005 when two eyes are used. (Single eye accuracy is lower.) By the time you see this, the top ten algorithms may have changed, because the vendors are always improving.

IREX10 two-eye accuracy, top ten algorithms as of July 28, 2023. (Link)
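To make those FNIR figures concrete, here is a quick sketch (my arithmetic, not NIST’s) of what they imply for a large batch of mated searches, i.e., searches where the person really is in the database:

```python
def expected_misses(fnir: float, mated_searches: int) -> float:
    """Expected false negatives: searches where the person IS enrolled
    in the database, but the algorithm fails to return them."""
    return fnir * mated_searches

# Best and worst central FNIR values from the top-ten range quoted above.
for fnir in (0.0022, 0.0037):
    print(f"FNIR {fnir}: about {expected_misses(fnir, 100_000):.0f} "
          f"misses per 100,000 mated searches")
```

In other words, the top algorithms miss on the order of a few hundred enrolled people per 100,000 two-eye searches.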

While the IREX10 one-to-many tests are conducted against databases of less than a million records, it is estimated that iris one-to-many accuracy remains high even with databases of a billion people—something we will return to later in this post.

Iris drawbacks

OK, so if irises are so accurate, why aren’t we dumping our fingerprint readers and face readers and just using irises?

In short, because of the high friction in capturing irises. You can use high-resolution cameras to capture fingerprints and faces from far away, but as of now iris capture usually requires you to get very close to the capture device.

Iris image capture circa 2020 from the U.S. Federal Bureau of Investigation. (Link)

Which I guess is better than the old days when you had to put your eye right up against the capture device, but it’s still not as frictionless as face capture, which can be achieved (some would say intrusively) as you’re walking down a passageway in an airport or sports stadium.

Irises and Apple Vision Pro

So how are irises being used today? You may or may not have heard last month’s hoopla about the Apple Vision Pro, which uses irises for one-to-one authentication.

I’m not going to spend a ton of time delving into this, because I just discussed Apple Vision Pro in June. In fact, I’m just going to quote from what I already said.

And when all of us heard about Vision Pro, one of the things that Apple shared about it was its verification technique. Not Touch ID or Face ID, but Optic ID. (I like naming consistency.)

From https://bredemarket.com/2023/06/12/vision-pro-not-revolutionary-biometrics-event/
From Apple, https://www.apple.com/105/media/us/apple-vision-pro/2023/7e268c13-eb22-493d-a860-f0637bacb569/anim/drawer-privacy-optic-id/large.mp4

In short, as you wear the headset (which by definition is right on your head, not far away), the headset captures your iris images and uses them to authenticate you.

It’s a one-to-one comparison, not the one-to-many comparison that I discussed earlier in this post, but it is used to uniquely identify an individual.

But iris recognition doesn’t have to be used for identification.

Irises and Worldcoin

“But wait a minute, John,” you’re saying. “If you’re not using irises to determine if a person is who they say they are, then why would anyone use irises?”

Enter Worldcoin, which I mentioned in passing in my early July age estimation post.

Over the past several years, I’ve analyzed a variety of identity firms. Earlier this year I took a look at Worldcoin….Worldcoin’s World ID emphasizes privacy so much that it does not conclusively prove a person’s identity (it only proves a person’s uniqueness)…

From https://bredemarket.com/2023/07/03/age-estimation/

That’s the only thing that I’ve said about Worldcoin, at least publicly. (I looked at Worldcoin privately earlier in 2023, but that report is not publicly accessible and even I don’t have it any more.)

Worldcoin’s July 24 announcement

I guess it’s time for me to revisit Worldcoin, since the company made a super-big splashy announcement on Monday, July 24.

The Worldcoin Foundation today announced that Worldcoin, a project co-founded by Sam Altman, Alex Blania and Max Novendstern, is now live and in a production-grade state. 

The launch includes the release of the World ID SDK and plans to scale Orb operations to 35+ cities across 20+ countries around the world. In tandem, the Foundation’s subsidiary, World Assets Ltd., minted and released the Worldcoin token (WLD) to the millions of eligible people who participated in the beta; WLD is now transactable on the blockchain….

“In the age of AI, the need for proof of personhood is no longer a topic of serious debate; instead, the critical question is whether or not the proof of personhood solutions we have can be  privacy-first, decentralized and maximally inclusive,” said Worldcoin co-founder and Tools for Humanity CEO Alex Blania. “Through its unique technology, Worldcoin aims to provide anyone in the world, regardless of background, geography or income, access to the growing digital and global economy in a privacy preserving and decentralized way.”

From https://worldcoin.org/blog/announcements/worldcoin-project-launches

Worldcoin does NOT positively identify people…but it can still pay you

A very important note: Worldcoin’s purpose is not to determine identity (that a person is who they say they are). Worldcoin’s purpose is to determine uniqueness: namely, that a person (whoever they are) is unique among all the billions of people in the world. Once uniqueness is determined, the person can get money money money with an assurance that the same person won’t get money twice.

OK, so how are you going to determine the uniqueness of a person among all of the billions of people in the world?

Using the Orb to capture irises

As far as Worldcoin is concerned, irises are the best way to determine uniqueness, echoing what others have said.

Iris biometrics outperform other biometric modalities and already achieved false match rates beyond 1.2×10⁻¹⁴ (one false match in one trillion[9]) two decades ago[10]—even without recent advancements in AI. This is several orders of magnitude more accurate than the current state of the art in face recognition.

From https://worldcoin.org/blog/engineering/humanness-in-the-age-of-ai
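Here is a back-of-the-envelope sketch (my arithmetic, not Worldcoin’s) of what that per-comparison rate implies for billion-person deduplication, where each new enrollee must be compared against everyone already enrolled:

```python
FMR = 1.2e-14  # per-comparison false match rate quoted above

def dedup_false_matches(n_enrolled: int) -> float:
    """Expected false matches when every new enrollee is compared
    against everyone already enrolled: n*(n-1)/2 total comparisons."""
    comparisons = n_enrolled * (n_enrolled - 1) / 2
    return FMR * comparisons

# A billion-person deduplication performs roughly 5 x 10**17
# comparisons, so even this FMR yields some expected collisions:
print(f"{dedup_false_matches(1_000_000_000):.0f} expected false matches")
```

About 6,000 expected false matches sounds like a lot, until you notice it’s a vanishingly small fraction of the roughly 5×10¹⁷ comparisons performed, which is why iris is the modality of choice for uniqueness at this scale.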

So how is Worldcoin going to capture millions, and eventually billions, of iris pairs?

By using the Orb. (You may intone “the Orb” now.)

To complete your Worldcoin registration, you need to find an Orb that will capture your irises and verify your uniqueness.

Now you probably won’t find an Orb at your nearby 7-Eleven; as I write this, there are only a little over 100 listed locations in the entire world where Orbs are deployed. I happen to live within 50 miles of Santa Monica, where an Orb was recently deployed (by appointment only, unavailable on weekends, and you know how I feel about driving on Southern California freeways on a weekday).

But now that you can get crypto for enrolling at an Orb, people are getting more excited about the process, and there will be wider adoption.

Whether this will make a difference in the world or just be a fad remains to be seen.

We Survived Gummy Fingers. We’re Surviving Facial Recognition Inaccuracy. We’ll Survive Voice Spoofing.

(Part of the biometric product marketing expert series)

Some of you are probably going to get into an automobile today.

Are you insane?

The National Highway Traffic Safety Administration has released its latest projections for traffic fatalities in 2022, estimating that 42,795 people died in motor vehicle traffic crashes.

From https://www.nhtsa.gov/press-releases/traffic-crash-death-estimates-2022

When you have tens of thousands of people dying, then the only conscionable response is to ban automobiles altogether. Any other action or inaction is completely irresponsible.

After all, you can ask the experts who argue that because biometrics can be spoofed and are racist, we shouldn’t use biometrics at all.

I disagree with the calls to ban biometrics, and I’ll go through three “biometrics are bad” examples and say why banning biometrics is NOT justified.

  • Even some identity professionals may not know about the old “gummy fingers” story from 20+ years ago.
  • And yes, I know that I’ve talked about Gender Shades ad nauseum, but it bears repeating again.
  • And voice deepfakes are always a good topic to discuss in our AI-obsessed world.

Example 1: Gummy fingers

My recent post “Why Apple Vision Pro Is a Technological Biometric Advance, but Not a Revolutionary Biometric Event” included the following sentence:

But the iris security was breached by a “dummy eye” just a month later, in the same way that gummy fingers and face masks have defeated other biometric technologies.

From https://bredemarket.com/2023/06/12/vision-pro-not-revolutionary-biometrics-event/

A biometrics industry colleague noticed the rhyming words “dummy” and “gummy” and wondered if the latter was a typo. It turns out it wasn’t.

To my knowledge, these gummy fingers do NOT have ridges. From https://www.candynation.com/gummy-fingers

Back in 2002, researcher Tsutomu Matsumoto used the gelatin found in gummy bears to create a fake finger that fooled a fingerprint reader.

At the time, this news WAS really “scary,” since it suggested that you could access a fingerprint reader-protected site with something that wasn’t a finger. Gelatin. A piece of metal. A photograph.

Except that the fingerprint reader world didn’t stand still after 2002, and the industry developed ways to detect spoofed fingers. Here’s a recent example of presentation attack detection (liveness detection) from TECH5:

TECH5 participated in the 2023 LivDet Non-contact Fingerprint competition to evaluate its latest NN-based fingerprint liveness detection algorithm and has achieved first and second ranks in the “Systems” category for both single- and four-fingerprint liveness detection algorithms respectively. Both submissions achieved the lowest error rates on bonafide (live) fingerprints. TECH5 achieved 100% accuracy in detecting complex spoof types such as Ecoflex, Playdoh, wood glue, and latex with its groundbreaking Neural Network model that is only 1.5MB in size, setting a new industry benchmark for both accuracy and efficiency.

From https://tech5.ai/tech5s-mobile-fingerprint-liveness-detection-technology-ranked-the-most-accurate-in-the-market/

TECH5 excelled in detecting fake fingers for “non-contact” reading, where the fingers don’t even touch a capture surface such as an optical platen. That’s appreciably harder than detecting fake fingers that touch contact devices.

I should note that LivDet is an independent assessment. As I’ve said before, independent technology assessments provide some guidance on the accuracy and performance of technologies.

So gummy fingers and future threats can be addressed as they arrive.

But at least gummy fingers aren’t racist.

Example 2: Gender shades

In 2017-2018, the Algorithmic Justice League set out to answer this question:

How well do IBM, Microsoft, and Face++ AI services guess the gender of a face?

From http://gendershades.org/. Yes, that’s “http,” not “https.” But I digress.

Let’s stop right there for a moment and address two items before we continue. Trust me; it’s important.

  1. This study evaluated only three algorithms: one from IBM, one from Microsoft, and one from Face++. It did not evaluate the hundreds of other facial recognition algorithms that existed in 2018 when the study was released.
  2. The study focused on gender classification and race classification. Back in those primitive innocent days of 2018, the world assumed that you could look at a person and tell whether the person was male or female, or tell the race of a person. (The phrase “self-identity” had not yet become popular, despite the Rachel Dolezal episode which happened before the Gender Shades study). Most importantly, the study did not address identification of individuals at all.

However, the study did find something:

While the companies appear to have relatively high accuracy overall, there are notable differences in the error rates between different groups. Let’s explore.

All companies perform better on males than females with an 8.1% – 20.6% difference in error rates.

All companies perform better on lighter subjects as a whole than on darker subjects as a whole with an 11.8% – 19.2% difference in error rates.

When we analyze the results by intersectional subgroups – darker males, darker females, lighter males, lighter females – we see that all companies perform worst on darker females.

From http://gendershades.org/overview.html

What does this mean? It means that if you are using one of these three algorithms solely for the purpose of determining a person’s gender and race, some results are more accurate than others.

Three algorithms do not predict hundreds of algorithms, and classification is not identification. If you’re interested in more information on the differences between classification and identification, see Bredemarket’s November 2021 submission to the Department of Homeland Security. (Excerpt here.)

And all the stories about people such as Robert Williams being wrongfully arrested based upon faulty facial recognition results have nothing to do with Gender Shades. I’ll address this briefly (for once):

  • In the United States, facial recognition identification results should only be used by the police as an investigative lead, and no one should be arrested solely on the basis of facial recognition. (The city of Detroit stated that Williams’ arrest resulted from “sloppy” detective work.)
  • If you are using facial recognition for criminal investigations, your people had better have forensic face training. (Then they would know, as Detroit investigators apparently didn’t know, that the quality of surveillance footage is important.)
  • If you’re going to ban computerized facial recognition (even when only used as an investigative lead, and even when only used by properly trained individuals), consider the alternative of human witness identification. Or witness misidentification. Roeling Adams, Reggie Cole, Jason Kindle, Adam Riojas, Timothy Atkins, Uriah Courtney, Jason Rivera, Vondell Lewis, Guy Miles, Luis Vargas, and Rafael Madrigal can tell you how inaccurate (and racist) human facial recognition can be. See my LinkedIn article “Don’t ban facial recognition.”

Obviously, facial recognition has been the subject of independent assessments, including continuous bias testing by the National Institute of Standards and Technology as part of its Face Recognition Vendor Test (FRVT), specifically within the 1:1 verification testing. And NIST has measured the bias of hundreds of algorithms, not just three.

In fact, people that were calling for facial recognition to be banned just a few years ago are now questioning the wisdom of those decisions.

But those days were quaint. Men were men, women were women, and artificial intelligence was science fiction.

The latter has certainly changed.

Example 3: Voice spoofs

Perhaps it’s an exaggeration to say that recent artificial intelligence advances will change the world. Perhaps it isn’t. Personally I’ve been concentrating on whether AI writing can adopt the correct tone of voice, but what if we take the words “tone of voice” literally? Let’s listen to President Richard Nixon:

From https://www.youtube.com/watch?v=2rkQn-43ixs

Richard Nixon never spoke those words in public, although it’s possible that he may have rehearsed William Safire’s speech, composed in case Apollo 11 had not resulted in one giant leap for mankind. As noted in the video, Nixon’s voice and appearance were spoofed using artificial intelligence to create a “deepfake.”

It’s one thing to alter the historical record. It’s another thing altogether when a fraudster spoofs YOUR voice and takes money out of YOUR bank account. By definition, you will take that personally.

In early 2020, a branch manager of a Japanese company in Hong Kong received a call from a man whose voice he recognized—the director of his parent business. The director had good news: the company was about to make an acquisition, so he needed to authorize some transfers to the tune of $35 million. A lawyer named Martin Zelner had been hired to coordinate the procedures and the branch manager could see in his inbox emails from the director and Zelner, confirming what money needed to move where. The manager, believing everything appeared legitimate, began making the transfers.

What he didn’t know was that he’d been duped as part of an elaborate swindle, one in which fraudsters had used “deep voice” technology to clone the director’s speech…

From https://www.forbes.com/sites/thomasbrewster/2021/10/14/huge-bank-fraud-uses-deep-fake-voice-tech-to-steal-millions/?sh=8e8417775591

Now I’ll grant that this is an example of human voice verification, which can be as inaccurate as the previously referenced human witness misidentification. But are computerized systems any better, and can they detect spoofed voices?

Well, in the same way that fingerprint readers worked to overcome gummy bears, voice readers are working to overcome deepfake voices. Here’s what one company, ID R&D, is doing to combat voice spoofing:

IDVoice Verified combines ID R&D’s core voice verification biometric engine, IDVoice, with our passive voice liveness detection, IDLive Voice, to create a high-performance solution for strong authentication, fraud prevention, and anti-spoofing verification.

Anti-spoofing verification technology is a critical component in voice biometric authentication for fraud prevention services. Before determining a match, IDVoice Verified ensures that the voice presented is not a recording.

From https://www.idrnd.ai/idvoice-verified-voice-biometrics-and-anti-spoofing/
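The liveness-before-match pattern described in the quote can be sketched generically. This is an illustrative mock-up, not ID R&D’s actual API; every name below is invented, and the “checks” are stand-ins for real signal analysis:

```python
from dataclasses import dataclass

@dataclass
class VoiceSample:
    audio: bytes

def is_live(sample: VoiceSample) -> bool:
    """Placeholder liveness check: a real engine analyzes replay and
    synthesis artifacts. Here we just simulate a decision."""
    return not sample.audio.startswith(b"REPLAY")

def matches_enrolled(sample: VoiceSample, enrolled: VoiceSample) -> bool:
    """Placeholder matcher: a real engine compares voiceprint templates."""
    return sample.audio == enrolled.audio

def authenticate(sample: VoiceSample, enrolled: VoiceSample) -> bool:
    # Liveness gate FIRST: even a perfect recording of the right
    # voice must still fail authentication.
    if not is_live(sample):
        return False
    return matches_enrolled(sample, enrolled)

enrolled = VoiceSample(b"alice-voiceprint")
print(authenticate(VoiceSample(b"alice-voiceprint"), enrolled))            # True
print(authenticate(VoiceSample(b"REPLAY" + b"alice-voiceprint"), enrolled))  # False
```

The ordering is the design point: checking liveness before matching means a stolen recording never even reaches the matcher.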

This is only the beginning of the war against voice spoofing. Other companies will pioneer new advances that will tell the real voices from the fake ones.

As for independent testing, expect voice anti-spoofing to receive the same kind of scrutiny that LivDet applies to fingerprint liveness detection.

A final thought

Yes, fraudsters can use advanced tools to do bad things.

But the people who battle fraudsters can also use advanced tools to defeat the fraudsters.

Take care of yourself, and each other.

Jerry Springer. By Justin Hoch, CC BY 2.0, https://commons.wikimedia.org/w/index.php?curid=16673259

Fill Your Company Gap With A Biometric Content Marketing Expert

Companies often have a lot of things they want to do, but don’t have the people to do them. It takes a long time to hire someone, and it even takes time to find a consultant that knows your industry and can do the work.

This affects identity/biometric companies just like it affects other companies. When an identity/biometric company needs a specific type of expertise and needs it NOW, it’s often hard to find the person they need.

If your company needs a biometric content marketing expert (or an identity content marketing expert) NOW, you’ve come to the right place—Bredemarket. Bredemarket has no identity learning curve, no content learning curve, and offers proven results.

Identity/biometric consulting in the 1990s

I remember when I first started working as an identity/biometric consultant, long before Bredemarket was a thing.

OK, not quite THAT long ago. I started working in biometrics in the 1990s—NOT the 1940s.

In 1994, the proposals department at Printrak International needed additional writers due to the manager’s maternity leave, and she was so valuable that Printrak needed to bring in TWO consultants to take her place.

At least initially, the other consultant and I couldn’t fill the manager’s shoes.

Designed by Freepik.
  • Both of us could write.
  • Both of us could spell “AFIS.”
  • Both of us could spell “RAID.” Not the bug spray, but the storage mechanism that stored all those “huge” fingerprint images.
  • But on that first night that I was cranking out proposal letters for something called a “Latent Station 2000,” I didn’t really know WHAT I was writing about.

As time went on, the other consultant and I learned much more—so much that the company brought both of us on as full-time employees.

After we were hired full-time, we spent a combined 45+ years at Printrak and its corporate successors in proposals, marketing, and product management positions, contributing to industry knowledge.

Which shows that learning how to spell “AFIS” can have long-term benefits.

Printrak’s problem

When Printrak needed biometric proposal writing experts quickly, it found two people who filled the bill. Sort of.

But neither of us knew biometrics before we started consulting at Printrak.

And I had never written a proposal before I started consulting at Printrak. (I had written an RFP. Sort of.)

But frankly, there weren’t a lot of identity/biometric consultants out in the field in the 1990s. There were the 20th century equivalents of Applied Forensic Services LLC, but at the time I don’t think there were any 20th century equivalents of Tandem Technical Writing LLC.

The 21st century solution

Unlike the 1990s, identity/biometric firms that need consulting help have many options. In addition to Applied Forensic Services and Tandem Technical Writing you have…me.

Mike and Laurel can tell you what they can do, and I heartily endorse both of them.

Let me share with you why I call myself a biometric content marketing expert who can help your identity/biometric company get marketing content out now:

  • No identity learning curve
  • No content learning curve
  • Proven results

No identity learning curve

I have worked with finger, face, iris, DNA, and other biometrics, as well as government-issued identity documents and geolocation. If you are interested, you can read my Bredemarket blog posts that mention the following topics:

No content learning curve

Because I’ve produced both external and internal content on identity/biometric topics, I offer the experience to produce your content in a number of formats.

  • External content: account-based marketing content, articles, blog posts (I am the identity/biometric blog expert), case studies, data sheets, partner comarketing content, presentations, proposals, sales literature sheets, scientific book chapters, smartphone application content (events), social media posts, web page content, and white papers.
  • Internal content: battlecards, competitive analyses, demonstration scripts (events), email internal newsletters, FAQs, multi-year plans, playbooks, project plans, proposal templates, quality improvement documents, requirements documents, strategic analyses, and website/social media analyses.

Proven results

Read about them here.

So how can you take advantage of my identity/biometric expertise?

If you need day-one help for an identity/biometric content marketing or proposal writing project, consider Bredemarket.

Why do print people capture 14 fingerprint impression blocks? Why not 10? Or 20?

In the course of writing something on another blog, I mentioned the following:

You see, my fingerprint experience was primarily rooted in the traditional 14 (yes, 14) fingerprint impression block livescan capture technology used by law enforcement agencies to submit full sets of tenprints to the U.S. Federal Bureau of Investigation (FBI), and state and local agencies that submit to the FBI.

From https://jebredcal.wordpress.com/2023/06/12/when-one-type-of-experience-is-not-enough/

I’d be willing to bet that the vast majority of you have ten fingers.

So why do tenprint livescan devices capture 14 fingerprint impression blocks?

Why 14 fingerprint impression blocks are as good as 20 fingers

It’s important to understand that tenprint livescan devices, which only began to emerge in the 1980s, were originally designed as an electronic way to duplicate the traditional inking process, in which ink was placed on an arrestee’s fingers and then transferred to a tenprint fingerprint card.

The criminal fingerprint card (and, with some changes, the applicant fingerprint card) looks something like this:

If you look at the lower half of the front of a fingerprint card, you will see 14 fingerprint impression blocks arranged in 3 rows.

  • The first row is where you place five “rolled” (nail to nail) fingerprints taken from the right hand, starting with the right thumb and ending with the right little finger.
  • The second row is where you place five rolled fingerprints from the left hand, again starting with the thumb and ending with the little finger.

So now you’ve captured ten fingerprints. But you’re not done. You still have to fill four more impression blocks. Here’s how:

Identification flat impressions are taken simultaneously without rolling. These are referred to as plain, slap, or flat impressions. The individual’s right and left four fingers should be captured first, followed by the two thumbs (4-4-2 method).

From https://le.fbi.gov/science-and-lab/biometrics-and-fingerprints/biometrics/recording-legible-fingerprints

To clarify, on the third row, for the large box in the lower left corner of the card, you “slap” all four fingers of the left hand down at the same time. Then you skip over to the large box in the lower right corner of the card and slap all four fingers of the right hand down at the same time. Finally you slap the two thumbs down at the same time, capturing the left thumb in the small middle left box, and the right thumb in the small middle right box.

Well, at least that’s how you do it on a traditional inked card. On a tenprint livescan device, you roll and slap your fingers on the large platen, without worrying (that much) about staying within the lines.
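The capture sequence just described can be tallied in a few lines. (This is a toy model; finger numbering follows the FBI convention of 1–5 for the right thumb through right little finger and 6–10 for the left hand.)

```python
# A toy tally of the tenprint card layout: ten rolled prints in rows 1
# and 2, plus the 4-4-2 slap captures in row 3. Finger numbering follows
# the FBI convention (1-5 right hand, 6-10 left hand).

rolled_blocks = list(range(1, 11))   # rows 1 and 2: ten rolled prints

# Row 3: three slap captures filling four blocks (the thumbs share one action).
slap_captures = [
    ("left four-finger slap",  [7, 8, 9, 10], 1),  # one large box
    ("right four-finger slap", [2, 3, 4, 5],  1),  # one large box
    ("both thumbs",            [6, 1],        2),  # two small boxes, one action
]

total_blocks = len(rolled_blocks) + sum(boxes for _, _, boxes in slap_captures)
capture_actions = len(rolled_blocks) + len(slap_captures)
prints_captured = len(rolled_blocks) + sum(len(f) for _, f, _ in slap_captures)

print(total_blocks, capture_actions, prints_captured)  # 14 13 20
```

So 14 blocks, 13 capture actions, and 20 captured prints all fall out of the same layout.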

Why 14 fingerprint impression blocks are better than 20 fingers

So by the time you’re done, you’ve filled 14 fingerprint impression blocks by 13 distinct actions (the two slap thumbs are captured simultaneously), and you’ve effectively captured 20 fingerprints.

Why?

Quality control.

Since every finger is theoretically captured twice, the slaps can be compared against the rolls to ensure that the fingerprints were captured in the correct order.

Locations of finger 2 (green) and finger 3 (blue) for rolled and slap prints.

If you capture the rolled and slap prints in the correct order, then the right index finger (finger 2) should appear in the green area on the first row as a rolled print, and in the green area on the third row as a slap print. Similarly, the middle finger (finger 3) should appear in the blue areas.

If the green rolled print is NOT the same as the green slap print, or if the blue rolled print is NOT the same as the blue slap print, then you captured the fingerprints in the wrong order.
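Here's a toy sketch of such a sequence check. (The match_score function is a stand-in for a real fingerprint matcher, which would compare minutiae; here it just compares labels so the logic is easy to follow.)

```python
# Toy sketch of a roll-to-slap sequence check. match_score() is a
# stand-in for a real minutiae-based fingerprint matcher; here it
# simply compares labels.

def sequence_check(rolled, slap_segments,
                   match_score=lambda a, b: 1.0 if a == b else 0.0,
                   threshold=0.5):
    """Return the finger positions whose rolled print does not match
    the slap print segmented from the same position."""
    errors = []
    for position, (roll, slap) in enumerate(zip(rolled, slap_segments), start=1):
        if match_score(roll, slap) < threshold:
            errors.append(position)
    return errors

slaps = ["R-thumb", "R-index", "R-middle", "R-ring", "R-little"]

# Correct capture: rolls agree with slaps at every position.
print(sequence_check(slaps, slaps))          # []

# Index and middle fingers rolled in the wrong order:
bad_rolls = ["R-thumb", "R-middle", "R-index", "R-ring", "R-little"]
print(sequence_check(bad_rolls, slaps))      # [2, 3]
```

A nonempty result flags exactly the positions (like the green and blue areas above) where the roll and the slap disagree.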

In the old pre-livescan days of inking, a trained tenprint fingerprint examiner (or someone who pretended to be one) had to look at the prints to ensure that the fingers were captured properly. Now the roll to slap comparisons are all done in software, either at the tenprint livescan device itself, or at the automated fingerprint identification system (AFIS) or the automated biometric identification system (ABIS) that receives the prints.

For a mention of techniques that complement roll-to-slap comparison, as well as a number of other issues regarding fingerprint capture quality, see this 2006 presentation given by Behnam Bavarian, then a Vice President at Motorola.

In the 4-4-2 method, groups of prints are captured together, rather than individually. While it is possible to completely mess things up by capturing the left slaps when you are supposed to capture the right slaps, or by twisting your hands in a bizarre manner to capture the thumbs in reverse order, 4-4-2 gives you a reasonable assurance that the slap prints are captured in the correct order, ensuring a proper roll-to-slap comparison.

Well, unless the fingerprints are captured in an unattended fashion, or the police officer capturing the fingerprints is crooked.

But today’s ABIS systems are powerful enough to compare all ten submitted fingers against all ten fingers of every record in an ABIS database, so even if the submitted fingerprints falsely record finger 2 as finger 3, the ABIS will still find the matching print anyway.
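A toy illustration of that "all ten against all ten" idea follows. (The toy_score function stands in for a real matcher; the point is that each submitted print is scored against every stored print, not just the one with the same label.)

```python
# Toy illustration of position-independent ("all against all") matching.
# toy_score() is a stand-in for a real minutiae matcher.

def record_score(submitted, stored):
    """Score a candidate record by letting every submitted print match
    its best counterpart in the record, regardless of labeled position."""
    toy_score = lambda a, b: 1.0 if a == b else 0.0
    return sum(max(toy_score(s, t) for t in stored) for s in submitted)

stored_record = ["finger1", "finger2", "finger3"]

# Fingers 2 and 3 were submitted with their labels swapped:
mislabeled = ["finger1", "finger3", "finger2"]

print(record_score(mislabeled, stored_record))  # 3.0 -- still a perfect hit
```

Even with the labels swapped, every submitted print still finds its mate somewhere in the record, so the hit isn't lost.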

Book ’em, Danno.

Book ’em, Danno! By CBS Television – eBay item photo front photo back, Public Domain, https://commons.wikimedia.org/w/index.php?curid=19674714

Why Apple Vision Pro Is a Technological Biometric Advance, but Not a Revolutionary Biometric Event

(Part of the biometric product marketing expert series)

(UPDATE JUNE 24: CORRECTED THE YEAR THAT COVID BEGAN.)

I haven’t said anything publicly about Apple Vision Pro, so it’s time for me to be “how do you do fellow kids” trendy and jump on the bandwagon.

Actually…

It ISN’T time for me to jump on the Apple Vision Pro bandwagon, because while Apple Vision Pro affects the biometric industry, it’s not a REVOLUTIONARY biometric event.

The four revolutionary biometric events in the 21st century

How do I define a “revolutionary biometric event”?

By Alberto Korda – Museo Che Guevara, Havana Cuba, Public Domain, https://commons.wikimedia.org/w/index.php?curid=6816940

I define it as something that completely transforms the biometric industry.

When I mention three of the four revolutionary biometric events in the 21st century, you will understand what I mean.

  • 9/11. After 9/11, orders of biometric devices skyrocketed, and biometrics were incorporated into identity documents such as passports and driver’s licenses. Who knows, maybe someday we’ll actually implement REAL ID in the United States. The latest extension of the REAL ID enforcement date moved it out to May 7, 2025. (Subject to change, of course.)
  • The Boston Marathon bombings, April 2013. After the bombings, the FBI was challenged in managing and analyzing countless hours of video evidence. Companies such as IDEMIA National Security Solutions, MorphoTrak, Motorola, Paravision, Rank One Computing, and many others have tirelessly worked to address this challenge, while ensuring that facial recognition results accurately identify perpetrators while protecting the privacy of others in the video feeds.
  • COVID-19, spring 2020 and beyond. COVID accelerated changes that were already taking place in the biometric industry. COVID prioritized mobile, remote, and contactless interactions and forced businesses to address issues that were not as critical previously, such as liveness detection.

These three are cataclysmic world events that had a profound impact on biometrics. The fourth one, which occurred after the Boston Marathon bombings but before COVID, was…an introduction of a product feature.

  • Touch ID, September 2013. When Apple introduced the iPhone 5s, it also introduced a new way to log in to the device. Rather than entering a passcode, iPhone 5s users could just use their finger to log in. The technical accomplishment was dwarfed by the legitimacy that this brought to using fingerprints for identification. Before 2013, attempts to implement fingerprint verification for benefits recipients were resisted because fingerprinting was something that criminals did. After September 2013, fingerprinting was something that the cool Apple kids did. The biometric industry changed overnight.

Of course, Apple followed Touch ID with Face ID, with adherents of the competing biometric modalities sparring over which was better. But Face ID wouldn’t have been accepted as widely if Touch ID hadn’t paved the way.

So why hasn’t iris verification taken off?

Iris verification has been around for decades (I remember Iridian before L-1; it’s now part of IDEMIA), but iris verification is nowhere near as popular in the general population as finger and face verification. There are two reasons for this:

  • Compared to other biometrics, irises are hard to capture. To capture a fingerprint, you can lay your finger on a capture device, or “slap” your four fingers on a capture device, or even “wave” your fingers across a capture device. Faces are even easier to capture; while older face capture systems required you to stand close to the camera, modern face devices can capture your face as you are walking by the camera, or even if you are some distance from the camera.
  • Compared to other biometrics, irises are expensive to capture. Many years ago, my then-employer developed a technological marvel, an iris capture device that could accurately capture irises for people of any height. Unfortunately, the technological marvel cost thousands upon thousands of dollars, and no customers were going to use it when they could acquire fingerprint and face capture devices that were much less costly.

So while people rushed to implement finger and face capture on phones and other devices, iris capture was reserved for narrow verticals that required iris accuracy.

With one exception. Samsung incorporated Princeton Identity technology into its Samsung Galaxy S8 in 2017. But the iris security was breached by a “dummy eye” just a month later, in the same way that gummy fingers and face masks have defeated other biometric technologies. (This is why liveness detection is so important.) And while Samsung offered iris verification on its flagships for several years, it hadn’t been adopted by Apple and therefore wasn’t cool.

Until now.

About the Apple Vision Pro and Optic ID

The Apple Vision Pro is not the first headset ever created, but then the iPhone wasn’t the first smartphone either, and coming late to the game doesn’t matter. Apple’s visibility among trendsetters ensures that when Apple releases something, people take notice.

And when all of us heard about Vision Pro, one of the things that Apple shared about it was its verification technique. Not Touch ID or Face ID, but Optic ID. (I like naming consistency.)

According to Apple, Optic ID works by analyzing a user’s iris through LED light exposure and then comparing it with an enrolled Optic ID stored on the device’s Secure Enclave….Optic ID will be used for everything from unlocking Vision Pro to using Apple Pay in your own headspace.

From The Verge, https://www.theverge.com/2023/6/5/23750147/apple-optic-id-vision-pro-iris-biometrics

So why did Apple incorporate Optic ID on this device and not the others?

There are multiple reasons, but one key reason is that the Vision Pro retails for US$3,499, which makes it easier for Apple to justify the cost of the iris components.

But the high price of the Vision Pro comes at…a price

However, that high price is also the reason why the Vision Pro is not going to revolutionize the biometric industry. CNET admitted that the Vision Pro is a niche item:

At $3,499, Apple’s Vision Pro costs more than three weeks worth of pay for the average American, according to Bureau of Labor Statistics data. It’s also significantly more expensive than rival devices like the upcoming $500 Meta Quest 3, $550 Sony PlayStation VR 2 and even the $1,000 Meta Quest Pro

From CNET, https://www.cnet.com/tech/computing/why-apple-vision-pros-3500-price-makes-more-sense-than-you-think/

Now CNET did go on to say the following:

With Vision Pro, Apple is trying to establish what it believes will be the next major evolution of the personal computer. That’s a bigger goal than selling millions of units on launch day, and a shift like that doesn’t happen overnight, no matter what the price is. The version of Vision Pro that Apple launches next year likely isn’t the one that most people will buy.

From CNET, https://www.cnet.com/tech/computing/why-apple-vision-pros-3500-price-makes-more-sense-than-you-think/

Certainly Vision Pro and Optic ID have the potential to revolutionize the computing industry…in the long term. And as that happens, the use of iris biometrics will become more popular with the general public…in the long term.

But not today. You’ll have to wait a little longer for the next biometric revolution. And hopefully it won’t be a catastrophic event like three of the previous revolutions.

How Can Your Identity Business Create the RIGHT Written Content?

Does your identity business provide biometric or non-biometric products and services that use finger, face, iris, DNA, voice, government documents, geolocation, or other factors or modalities?

Does your identity business need written content, such as blog posts (from the identity/biometric blog expert), case studies, data sheets, proposal text, social media posts, or white papers?

How can your identity business (with the help of an identity content marketing expert) create the right written content?

For the answer, click here.

Using “Multispectral” and “Liveness” in the Same Sentence

(Part of the biometric product marketing expert series)

Now that I’m plunging back into the fingerprint world, I’m thinking about all the different types of fingerprint readers.

  • The optical fingerprint and palm print readers are still around.
  • And the capacitive fingerprint readers still, um, persist.
  • And of course you have the contactless fingerprint readers such as MorphoWave, one that I know about.
  • And then you have the multispectral fingerprint readers.

What is multispectral?

Bayometric offers a web page that covers some of these fingerprint reader types, and points out the drawbacks of some of the readers they discuss.

Latent prints are usually produced by sweat, skin debris or other sebaceous excretions that cover up the palmar surface of the fingertips. If a latent print is on the glass platen of the optical sensor and light is directed on it, this print can fool the optical scanner….

Capacitive sensors can be spoofed by using gelatin based soft artificial fingers.

From https://www.bayometric.com/fingerprint-reader-technology-comparison/

There is another weakness of these types of readers. Some professions damage and wear away a person’s fingerprint ridges. Examples of professions whose practitioners exhibit worn ridges include construction workers and biometric content marketing experts (who, at least in the old days, handled a lot of paper).

The solution is to design a fingerprint reader that not only examines the surface of the finger, but goes deeper.

From HID Global, “A Guide to MSI Technology: How It Works,” https://blog.hidglobal.com/2022/10/guide-msi-technology-how-it-works

The specialty of multispectral sensors is that it can capture the features of the tissue that lie below the skin surface as well as the usual features on the finger surface. The features under the skin surface are able to provide a second representation of the pattern on the fingerprint surface.

From https://www.bayometric.com/fingerprint-reader-technology-comparison/

Multispectral sensors are nothing new. When I worked for Motorola, Motorola Ventures had invested in a company called Lumidigm that produced multispectral fingerprint sensors; they were much more expensive than your typical optical or capacitive sensor, but were much more effective in capturing true fingerprints to the subdermal level.

Lumidigm was eventually acquired in 2014: not by Motorola (who sold off its biometric assets such as Printrak and Symbol), but by HID Global. This company continues to produce Lumidigm-branded multispectral fingerprint sensors to this day.
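To see why that second, subdermal representation matters for people with worn ridges, here's a trivial sketch. (The quality scores and the simple max() fusion rule are my own illustration, not Lumidigm's actual algorithm.)

```python
# Toy sketch: a multispectral sensor gets two representations of the
# same ridge pattern (surface and subdermal) and can lean on whichever
# is stronger. The scores and the max() fusion rule are illustrative.

def multispectral_quality(surface_q: float, subdermal_q: float) -> float:
    # A conventional optical sensor only ever sees surface_q; a
    # multispectral sensor can fall back on the subdermal pattern.
    return max(surface_q, subdermal_q)

# A construction worker's finger: worn surface ridges, intact subdermal
# pattern.
surface_only = 0.2                        # all an optical sensor can see
fused = multispectral_quality(0.2, 0.8)   # the multispectral fallback

print(fused)                 # 0.8
print(fused > surface_only)  # True -- worn ridges are still usable
```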

But let’s take a look at the other word I bandied about.

What is liveness?

KISS, Alive! By Obtained from allmusic.com., Fair use, https://en.wikipedia.org/w/index.php?curid=2194847

“Gelatin based soft artificial fingers” aren’t the only way to fool a biometric sensor, whether you’re talking about a fingerprint sensor or some other sensor such as a face sensor.

Regardless of the biometric modality, the intent is the same; instead of capturing a true biometric from a person, the biometric sensor is fooled into capturing a fake biometric: an artificial finger, a face with a mask on it, or a face on a video screen (rather than a face of a live person).

This tomfoolery is called a “presentation attack” (because you’re attacking security with a fake presentation).

But the standards folks have developed ISO/IEC 30107-3:2023, Information technology — Biometric presentation attack detection — Part 3: Testing and reporting.

And an organization called iBeta is one of the testing facilities authorized to test in accordance with the standard and to determine whether a biometric reader can detect the “liveness” of a biometric sample.
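ISO/IEC 30107-3 reports presentation attack detection (PAD) performance with two headline error rates, APCER and BPCER. Here's a minimal sketch of those definitions (the numbers in the example are made up for illustration):

```python
# Minimal sketch of the two ISO/IEC 30107-3 error rates:
#   APCER: attack presentations wrongly classified as bona fide
#   BPCER: bona fide presentations wrongly classified as attacks
# The sample counts below are made up for illustration.

def apcer(attack_results):
    """attack_results: booleans, True = attack classified as bona fide."""
    return sum(attack_results) / len(attack_results)

def bpcer(bona_fide_results):
    """bona_fide_results: booleans, True = bona fide classified as attack."""
    return sum(bona_fide_results) / len(bona_fide_results)

# 100 gelatin-finger attacks, 2 slipped through; 200 real fingers, 1 rejected.
print(apcer([True] * 2 + [False] * 98))    # 0.02
print(bpcer([True] * 1 + [False] * 199))   # 0.005
```

The tension between the two is the whole game: tightening the detector drives APCER down but pushes BPCER (rejected real users) up.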

(Friends, I’m not going to get into passive liveness and active liveness. That’s best saved for another day.)

[UPDATE 4/24/2024: I FINALLY ADDRESSED THE DIFFERENCE BETWEEN ACTIVE AND PASSIVE LIVENESS HERE.]

Multispectral liveness

While multispectral fingerprint readers aren’t the only fingerprint readers, or the only biometric readers, that iBeta has tested for liveness, the HID Global Lumidigm readers conform to Level 2 (the higher level) of iBeta testing.

There’s a confirmation letter and everything.

From the iBeta website.

This letter was issued in 2021. For some odd reason, HID Global decided to publicize this in 2023.

Oh well. It’s good to occasionally remind people of stuff.

Updates, updates, updates…

When keeping your websites updated, I advise you to do as I say, not as I do. Two of my websites were significantly out of date and needed hurried corrections.

Designed by Freepik.

I realized this morning that the “My Experience” page on my jebredcal website was roughly a year out of date, so I hurriedly added content to it. Now the page will turn up in searches for the acronym “ABM” (OK, maybe not on the first page of the search results).

Then I had to return to this website to make some hurried updates, since my April 2022 prohibition on taking certain types of work is no longer in effect as of June 2023. Hence, my home page, my “What I Do” page, and (obviously) my identity page are all corrected.

Oh yeah, I updated my Calendly availability hours also. Which is good, because I already have two meetings booked this week.

Which reminds me…if you need Bredemarket’s services: