Basically, I had gone through great trouble to document that Bredemarket would NOT take identity work, so I had to reverse a lot of pages to say that Bredemarket WOULD take identity work.
I may have found a few additional pages after June 1, but eventually I reached the point where everything on the Bredemarket website was completely and totally updated, and I wouldn’t have to make any more changes.
You can predict where this is going.
Who I…was
Today it occurred to me that some of the readers of the LinkedIn Bredemarket page may not know the person behind Bredemarket, so I took the opportunity to share Bredemarket’s “Who I Am” web page on the LinkedIn page.
So yes, this biometric content marketing expert/identity content marketing expert IS available for your content marketing needs. If you’re interested in receiving my help with your identity written content, contact me.
I know that I’m the guy who likes to say that it’s all semantics. After all, I’m the person who has referred to five-page-long documents as “battlecards.”
But sometimes the semantics are critically important. Take the terms “factors” and “modalities.” On the surface they sound similar, but in practice there is an extremely important difference between factors of authentication and modalities of authentication. Let’s discuss.
What is a factor?
To answer the question “what is a factor,” let me steal from something I wrote back in 2021 called “The five authentication factors.”
Something You Know. Think “password.” And no, passwords aren’t dead. But the use of your mother’s maiden name as an authentication factor is hopefully decreasing.
Something You Have. I’ve spent much of the last ten years working with this factor, primarily in the form of driver’s licenses. (Yes, MorphoTrak proposed driver’s license systems. No, they eventually stopped doing so. But obviously IDEMIA North America, the former MorphoTrust, has implemented a number of driver’s license systems.) But there are other examples, such as hardware or software tokens.
Something You Are. I’ve spent…a long time with this factor, since this is the factor that includes biometric modalities (finger, face, iris, DNA, voice, vein, etc.). It also includes behavioral biometrics, provided that they are truly behavioral and relatively static.
Something You Do. The Cybersecurity Man chose to explain this in a non-behavioral fashion, such as using swiping patterns to unlock a device. This is different from something such as gait recognition, which supposedly remains constant and is thus classified as behavioral biometrics.
Somewhere You Are. This is an emerging factor, as smartphones become more and more prevalent and locations are therefore easier to capture. Even then, however, precision isn’t always as good as we want it to be. For example, when you and a few hundred of your closest friends have illegally entered the U.S. Capitol, you can’t use geolocation alone to determine who exactly is in Speaker Pelosi’s office.
(By the way, if you search the series of tubes for reading material on authentication factors, you’ll find a lot of references to only three authentication factors, including references from some very respectable sources. Those sources are only 60% right, since they leave off the final two factors I listed above. It’s five factors of authentication, folks. Maybe.)
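For readers who think in code, the five factors can be sketched as a simple enumeration. (This is purely illustrative; the example authenticators mapped below are my own assumptions, not an official taxonomy.)

```python
from enum import Enum

class Factor(Enum):
    """The five authentication factors discussed above."""
    SOMETHING_YOU_KNOW = "something you know"
    SOMETHING_YOU_HAVE = "something you have"
    SOMETHING_YOU_ARE = "something you are"
    SOMETHING_YOU_DO = "something you do"
    SOMEWHERE_YOU_ARE = "somewhere you are"

# Illustrative authenticators mapped to their factors.
EXAMPLES = {
    "password": Factor.SOMETHING_YOU_KNOW,
    "driver's license": Factor.SOMETHING_YOU_HAVE,
    "fingerprint": Factor.SOMETHING_YOU_ARE,
    "swipe pattern": Factor.SOMETHING_YOU_DO,
    "geolocation": Factor.SOMEWHERE_YOU_ARE,
}
```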
The one striking thing about the five factors is that while they can all be used to authenticate (and verify) identities, they are inherently different from one another. The ridges of my fingerprint bear no relation to my 16 character password, nor do they bear any relation to my driver’s license. These differences are critical, as we shall see.
What is a modality?
In identity usage, a modality refers to different variations of the same factor. This is most commonly used with the “something you are” (biometric) factor, but it doesn’t have to be.
[M]any businesses and individuals (are adopting) biometric authentication as it [has] been established as the most secure authentication method surpassing passwords and pins. There are many modalities of biometric authentication to pick from, but which method is the best?
After looking at fingerprints, faces, voices, and irises, Aware basically answered its “best” question by concluding “it depends.” Different modalities have their own strengths and weaknesses, depending upon the use case. (If you wear thick gloves as part of your daily work, forget about fingerprints.)
ID R&D goes a step further and argues that it’s best to use multimodal biometrics, in which the two biometrics are face and voice. (By an amazing coincidence, ID R&D offers face and voice solutions.)
The three modalities in the middle—face, voice, and fingerprint—are all clearly biometric “something you are” modalities.
But the modality on the left, “Make a body movement in front of the camera,” is not a biometric modality (despite its reference to the body), but is an example of “something you do.”
Passwords, of course, are “something you know.”
In fact, each authentication factor has multiple modalities.
For example, a few of the modalities associated with “something you have” include driver’s licenses, passports, hardware tokens, and even smartphones.
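Extending that point into a sketch, each factor maps to several modalities. (The groupings below simply restate examples from this post; they are not exhaustive, and the function is my own illustration.)

```python
# Modalities are variations within a single factor (examples from this post).
MODALITIES = {
    "something you know": ["password", "PIN"],
    "something you have": ["driver's license", "passport", "hardware token", "smartphone"],
    "something you are": ["finger", "face", "iris", "DNA", "voice", "vein"],
    "something you do": ["swipe pattern", "body movement"],
    "somewhere you are": ["geolocation"],
}

def factor_of(modality: str) -> str:
    """Return the factor that a given modality belongs to."""
    for factor, modalities in MODALITIES.items():
        if modality in modalities:
            return factor
    raise KeyError(f"unknown modality: {modality}")
```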
Why multifactor is (usually) more robust than multimodal
Modalities within a single authentication factor are more closely related to each other than modalities from different authentication factors are. As I mentioned above when talking about factors, there is no relationship between my fingerprint, my password, and my driver’s license. However, there is SOME relationship between my driver’s license and my passport, since the two share some common information such as my legal name and my date of birth.
What does this mean?
If I’ve fraudulently created a fake driver’s license in your name, I already have some of the information that I need to create a fake passport in your name.
If I’ve fraudulently created a fake iris, there’s a chance that I might already have some of the information that I need to create a fake face.
However, if I’ve bought your Coinbase password on the dark web, that doesn’t necessarily mean that I was able to also buy your passport information on the dark web (although it is possible).
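A minimal sketch of the policy consequence: a rule requiring two distinct factors treats face-plus-voice (multimodal, but single-factor) differently from face-plus-password (multifactor). The function names and the rule itself are my own illustrative assumptions, not any standard's definition.

```python
def distinct_factors(credentials):
    """Count distinct factors among presented credentials.

    `credentials` maps each modality name to its factor, e.g.
    {"face": "something you are", "password": "something you know"}.
    """
    return len(set(credentials.values()))

def satisfies_mfa(credentials, required=2):
    """True if the credentials span at least `required` distinct factors."""
    return distinct_factors(credentials) >= required

# Face + voice: two modalities, but only ONE factor.
multimodal = {"face": "something you are", "voice": "something you are"}

# Face + password: two modalities AND two factors.
multifactor = {"face": "something you are", "password": "something you know"}
```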
Can an identity content marketing expert help you navigate these issues?
As you can see, you need to be very careful when writing about modalities and factors.
You need a biometric content marketing expert who has worked with many of these modalities.
Actually, you need an identity content marketing expert who has worked with many of these factors.
So if you are with an identity company and need to write a blog post, LinkedIn article, white paper, or other piece of content that touches on multifactor and multimodal issues, why not engage with Bredemarket to help you out?
If you’re interested in receiving my help with your identity written content, contact me.
Iris recognition continues to make the news. Let’s review what iris recognition is and its benefits (and drawbacks), why Apple made the news last month, and why Worldcoin is making the news this month.
What is iris recognition?
There are a number of biometric modalities that can identify individuals by “who they are” (one of the five factors of authentication). A few examples include fingerprints, faces, voices, and DNA. All of these modalities purport to uniquely (or nearly uniquely) identify an individual.
One other way to identify individuals is via the irises in their eyes. I’m not a doctor, but presumably the Cleveland Clinic employs medical professionals who are qualified to define what the iris is.
The iris is the colored part of your eye. Muscles in your iris control your pupil — the small black opening that lets light into your eye.
But why use irises rather than, say, fingerprints and faces? The best person to answer this is John Daugman. (At this point several of you are intoning, “John Daugman.” With reason. He’s the inventor of iris recognition.)
(I)ris patterns become interesting as an alternative approach to reliable visual recognition of persons when imaging can be done at distances of less than a meter, and especially when there is a need to search very large databases without incurring any false matches despite a huge number of possibilities. Although small (11 mm) and sometimes problematic to image, the iris has the great mathematical advantage that its pattern variability among different persons is enormous.
Daugman, John. “How Iris Recognition Works.” IEEE Transactions on Circuits and Systems for Video Technology, vol. 14, no. 1, January 2004. Quoted from page 21. (PDF)
Or in non-scientific speak, one benefit of iris recognition is that you know it is accurate, even when submitting a pair of irises in a one-to-many search against a huge database. How huge? We’ll discuss later.
Brandon Mayfield and fingerprints
Remember that Daugman’s paper was released roughly two months before Brandon Mayfield was misidentified in a fingerprint comparison. (Everyone now intone “Brandon Mayfield.”)
While some of the issues associated with Mayfield’s misidentification had nothing to do with forensic science (Al Jazeera spends some time discussing bias, and Itiel Dror also looked at bias post-Mayfield), this still shows that fingerprints from two different people can be remarkably similar, and that it takes care to properly identify people.
Police agencies, witnesses, and faces
And of course there are recent examples of facial misidentifications (both by police agencies and witnesses), again not necessarily forensic science related, and again showing the similarity of faces from two different people.
At the root of iris recognition’s accuracy is the data-richness of the iris itself. The IrisAccess system captures over 240 degrees of freedom or unique characteristics in formulating its algorithmic template. Fingerprints, facial recognition and hand geometry have far less detailed input in template construction.
Enough about claims. What about real results? The IREX 10 test, independently administered by the U.S. National Institute of Standards and Technology, measures the identification (one-to-many) accuracy of submitted algorithms. At the time I am writing this, the ten most accurate algorithms provide false negative identification rates (FNIR) between 0.0022 ± 0.0004 and 0.0037 ± 0.0005 when two eyes are used. (Single eye accuracy is lower.) By the time you see this, the top ten algorithms may have changed, because the vendors are always improving.
IREX10 two-eye accuracy, top ten algorithms as of July 28, 2023. (Link)
While the IREX10 one-to-many tests are conducted against databases of less than a million records, it is estimated that iris one-to-many accuracy remains high even with databases of a billion people—something we will return to later in this post.
Iris drawbacks
OK, so if irises are so accurate, why aren’t we dumping our fingerprint readers and face readers and just using irises?
In short, because of the high friction in capturing irises. You can use high-resolution cameras to capture fingerprints and faces from far away, but as of now iris capture usually requires you to get very close to the capture device.
Iris image capture circa 2020 from the U.S. Federal Bureau of Investigation. (Link)
Which I guess is better than the old days when you had to put your eye right up against the capture device, but it’s still not as friendly (or as unintrusive) as face capture, which can be achieved as you’re walking down a passageway in an airport or sports stadium.
Irises and Apple Vision Pro
So how are irises being used today? You may or may not have heard last month’s hoopla about the Apple Vision Pro, which uses irises for one-to-one authentication.
I’m not going to spend a ton of time delving into this, because I just discussed Apple Vision Pro in June. In fact, I’m just going to quote from what I already said.
In short, as you wear the headset (which by definition is right on your head, not far away), the headset captures your iris images and uses them to authenticate you.
It’s a one-to-one comparison, not the one-to-many comparison that I discussed earlier in this post, but it is used to uniquely identify an individual.
But iris recognition doesn’t have to be used for identification.
Irises and Worldcoin
“But wait a minute, John,” you’re saying. “If you’re not using irises to determine if a person is who they say they are, then why would anyone use irises?”
Over the past several years, I’ve analyzed a variety of identity firms. Earlier this year I took a look at Worldcoin….Worldcoin’s World ID emphasizes privacy so much that it does not conclusively prove a person’s identity (it only proves a person’s uniqueness)…
That’s the only thing that I’ve said about Worldcoin, at least publicly. (I looked at Worldcoin privately earlier in 2023, but that report is not publicly accessible and even I don’t have it any more.)
The Worldcoin Foundation today announced that Worldcoin, a project co-founded by Sam Altman, Alex Blania and Max Novendstern, is now live and in a production-grade state.
The launch includes the release of the World ID SDK and plans to scale Orb operations to 35+ cities across 20+ countries around the world. In tandem, the Foundation’s subsidiary, World Assets Ltd., minted and released the Worldcoin token (WLD) to the millions of eligible people who participated in the beta; WLD is now transactable on the blockchain….
“In the age of AI, the need for proof of personhood is no longer a topic of serious debate; instead, the critical question is whether or not the proof of personhood solutions we have can be privacy-first, decentralized and maximally inclusive,” said Worldcoin co-founder and Tools for Humanity CEO Alex Blania. “Through its unique technology, Worldcoin aims to provide anyone in the world, regardless of background, geography or income, access to the growing digital and global economy in a privacy preserving and decentralized way.”
Worldcoin does NOT positively identify people…but it can still pay you
A very important note: Worldcoin’s purpose is not to determine identity (that a person is who they say they are). Worldcoin’s purpose is to determine uniqueness: namely, that a person (whoever they are) is unique among all the billions of people in the world. Once uniqueness is determined, the person can get money money money with an assurance that the same person won’t get money twice.
Iris biometrics outperform other biometric modalities and already achieved false match rates beyond 1.2 × 10⁻¹⁴ (one false match in one trillion[9]) two decades ago[10]—even without recent advancements in AI. This is several orders of magnitude more accurate than the current state of the art in face recognition.
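To put a 1.2 × 10⁻¹⁴ false match rate into back-of-the-envelope perspective (the billion-record database size below is my own illustrative assumption, not a Worldcoin figure): searching one new enrollee against N existing records performs N comparisons, so the expected number of false matches per enrollment is roughly N × FMR.

```python
fmr = 1.2e-14                # two-eye iris false match rate cited above
n_enrolled = 1_000_000_000   # hypothetical billion-record database

# Expected false matches when one new enrollee is searched against everyone.
expected_per_enrollment = n_enrolled * fmr
print(expected_per_enrollment)  # ~1.2e-05, about one false match per 83,000 enrollments
```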
When you have tens of thousands of people dying, then the only conscionable response is to ban automobiles altogether. Any other action or inaction is completely irresponsible.
After all, you can ask the experts who want us to ban biometrics because it can be spoofed and is racist, so therefore we shouldn’t use biometrics at all.
I disagree with the calls to ban biometrics, and I’ll go through three “biometrics are bad” examples and say why banning biometrics is NOT justified.
Even some identity professionals may not know about the old “gummy fingers” story from 20+ years ago.
And yes, I know that I’ve talked about Gender Shades ad nauseam, but it bears repeating again.
And voice deepfakes are always a good topic to discuss in our AI-obsessed world.
But the iris security was breached by a “dummy eye” just a month later, in the same way that gummy fingers and face masks have defeated other biometric technologies.
Back in 2002, this news WAS really “scary,” since it suggested that you could access a fingerprint reader-protected site with something that wasn’t a finger. Gelatin. A piece of metal. A photograph.
TECH5 participated in the 2023 LivDet Non-contact Fingerprint competition to evaluate its latest NN-based fingerprint liveness detection algorithm and has achieved first and second ranks in the “Systems” category for both single- and four-fingerprint liveness detection algorithms respectively. Both submissions achieved the lowest error rates on bonafide (live) fingerprints. TECH5 achieved 100% accuracy in detecting complex spoof types such as Ecoflex, Playdoh, wood glue, and latex with its groundbreaking Neural Network model that is only 1.5MB in size, setting a new industry benchmark for both accuracy and efficiency.
TECH5 excelled in detecting fake fingers for “non-contact” reading where the fingers don’t even touch a surface such as an optical surface. That’s appreciably harder than detecting fake fingers that touch contact devices.
I should note that LivDet is an independent assessment. As I’ve said before, independent technology assessments provide some guidance on the accuracy and performance of technologies.
So gummy fingers and future threats can be addressed as they arrive.
Let’s stop right there for a moment and address two items before we continue. Trust me; it’s important.
This study evaluated only three algorithms: one from IBM, one from Microsoft, and one from Face++. It did not evaluate the hundreds of other facial recognition algorithms that existed in 2018 when the study was released.
The study focused on gender classification and race classification. Back in those primitive innocent days of 2018, the world assumed that you could look at a person and tell whether the person was male or female, or tell the race of a person. (The phrase “self-identity” had not yet become popular, despite the Rachel Dolezal episode which happened before the Gender Shades study). Most importantly, the study did not address identification of individuals at all.
However, the study did find something:
While the companies appear to have relatively high accuracy overall, there are notable differences in the error rates between different groups. Let’s explore.
All companies perform better on males than females with an 8.1% – 20.6% difference in error rates.
All companies perform better on lighter subjects as a whole than on darker subjects as a whole with an 11.8% – 19.2% difference in error rates.
When we analyze the results by intersectional subgroups – darker males, darker females, lighter males, lighter females – we see that all companies perform worst on darker females.
What does this mean? It means that if you are using one of these three algorithms solely for the purpose of determining a person’s gender and race, some results are more accurate than others.
And all the stories about people such as Robert Williams being wrongfully arrested based upon faulty facial recognition results have nothing to do with Gender Shades. I’ll address this briefly (for once):
In the United States, facial recognition identification results should only be used by the police as an investigative lead, and no one should be arrested solely on the basis of facial recognition. (The city of Detroit stated that Williams’ arrest resulted from “sloppy” detective work.)
If you are using facial recognition for criminal investigations, your people had better have forensic face training. (Then they would know, as Detroit investigators apparently didn’t know, that the quality of surveillance footage is important.)
If you’re going to ban computerized facial recognition (even when only used as an investigative lead, and even when only used by properly trained individuals), consider the alternative of human witness identification. Or witness misidentification. Roeling Adams, Reggie Cole, Jason Kindle, Adam Riojas, Timothy Atkins, Uriah Courtney, Jason Rivera, Vondell Lewis, Guy Miles, Luis Vargas, and Rafael Madrigal can tell you how inaccurate (and racist) human facial recognition can be. See my LinkedIn article “Don’t ban facial recognition.”
Obviously, facial recognition has been the subject of independent assessments, including continuous bias testing by the National Institute of Standards and Technology as part of its Face Recognition Vendor Test (FRVT), specifically within the 1:1 verification testing. And NIST has measured the demographic differentials of hundreds of algorithms, not just three.
Richard Nixon never spoke those words in public, although it’s possible that he may have rehearsed William Safire’s speech, composed in case Apollo 11 had not resulted in one giant leap for mankind. As noted in the video, Nixon’s voice and appearance were spoofed using artificial intelligence to create a “deepfake.”
In early 2020, a branch manager of a Japanese company in Hong Kong received a call from a man whose voice he recognized—the director of his parent business. The director had good news: the company was about to make an acquisition, so he needed to authorize some transfers to the tune of $35 million. A lawyer named Martin Zelner had been hired to coordinate the procedures and the branch manager could see in his inbox emails from the director and Zelner, confirming what money needed to move where. The manager, believing everything appeared legitimate, began making the transfers.
What he didn’t know was that he’d been duped as part of an elaborate swindle, one in which fraudsters had used “deep voice” technology to clone the director’s speech…
Now I’ll grant that this is an example of human voice verification, which can be as inaccurate as the previously referenced human witness misidentification. But are computerized systems any better, and can they detect spoofed voices?
IDVoice Verified combines ID R&D’s core voice verification biometric engine, IDVoice, with our passive voice liveness detection, IDLive Voice, to create a high-performance solution for strong authentication, fraud prevention, and anti-spoofing verification.
Anti-spoofing verification technology is a critical component in voice biometric authentication for fraud prevention services. Before determining a match, IDVoice Verified ensures that the voice presented is not a recording.
This is only the beginning of the war against voice spoofing. Other companies will pioneer new advances that will tell the real voices from the fake ones.
As for independent testing:
ID R&D has participated in multiple ASVspoof tests, and performed well in them.
Companies often have a lot of things they want to do, but don’t have the people to do them. It takes a long time to hire someone, and it even takes time to find a consultant that knows your industry and can do the work.
This affects identity/biometric companies just like it affects other companies. When an identity/biometric company needs a specific type of expertise and needs it NOW, it’s often hard to find the person they need.
If your company needs a biometric content marketing expert (or an identity content marketing expert) NOW, you’ve come to the right place—Bredemarket. Bredemarket has no identity learning curve, no content learning curve, and offers proven results.
Identity/biometric consulting in the 1990s
I remember when I first started working as an identity/biometric consultant, long before Bredemarket was a thing.
OK, not quite THAT long ago. I started working in biometrics in the 1990s—NOT the 1940s.
In 1994, the proposals department at Printrak International needed additional writers due to the manager’s maternity leave, and she was so valuable that Printrak needed to bring in TWO consultants to take her place.
At least initially, the other consultant and I couldn’t fill the manager’s shoes.
Both of us could spell “RAID.” Not the bug spray, but the storage mechanism that stored all those “huge” fingerprint images.
But on that first night that I was cranking out proposal letters for something called a “Latent Station 2000,” I didn’t really know WHAT I was writing about.
As time went on, the other consultant and I learned much more—so much that the company brought both of us on as full-time employees.
After we were hired full-time, we spent a combined 45+ years at Printrak and its corporate successors in proposals, marketing, and product management positions, contributing to industry knowledge.
But neither of us knew biometrics before we started consulting at Printrak.
And I had never written a proposal before I started consulting at Printrak. (I had written an RFP. Sort of.)
But frankly, there weren’t a lot of identity/biometric consultants out in the field in the 1990s. There were the 20th century equivalents of Applied Forensic Services LLC, but at the time I don’t think there were any 20th century equivalents of Tandem Technical Writing LLC.
Unlike the 1990s, identity/biometric firms that need consulting help have many options. In addition to Applied Forensic Services and Tandem Technical Writing you have…me.
Mike and Laurel can tell you what they can do, and I heartily endorse both of them.
Let me share with you why I call myself a biometric content marketing expert who can help your identity/biometric company get marketing content out now:
No identity learning curve
No content learning curve
Proven results
No identity learning curve
I have worked with finger, face, iris, DNA, and other biometrics, as well as government-issued identity documents and geolocation. If you are interested, you can read my Bredemarket blog posts that mention the following topics:
Because I’ve produced both external and internal content on identity/biometric topics, I offer the experience to produce your content in a number of formats.
External content: account-based marketing content, articles, blog posts (I am the identity/biometric blog expert), case studies, data sheets, partner comarketing content, presentations, proposals, sales literature sheets, scientific book chapters, smartphone application content (events), social media posts, web page content, and white papers.
You see, my fingerprint experience was primarily rooted in the traditional 14 (yes, 14) fingerprint impression block livescan capture technology used by law enforcement agencies to submit full sets of tenprints to the U.S. Federal Bureau of Investigation (FBI), and state and local agencies that submit to the FBI.
I’d be willing to bet that the vast majority of you have ten fingers.
So why do tenprint livescan devices capture 14 fingerprint impression blocks?
Why 14 fingerprint impression blocks are as good as 20 fingers
It’s important to understand that tenprint livescan devices, which only began to emerge in the 1980s, were originally designed as an electronic way to duplicate the traditional inking process in which ink was placed on arrestees’ fingers, and the ink was transferred to a tenprint fingerprint card.
The criminal fingerprint card (and, with some changes, the applicant fingerprint card) looks something like this:
If you look at the lower half of the front of a fingerprint card, you will see 14 fingerprint impression blocks arranged in 3 rows.
The first row is where you place five “rolled” (nail to nail) fingerprints taken from the right hand, starting with the right thumb and ending with the right little finger.
The second row is where you place five rolled fingerprints from the left hand, again starting with the thumb and ending with the little finger.
So now you’ve captured ten fingerprints. But you’re not done. You still have to fill four more impression blocks. Here’s how:
Identification flat impressions are taken simultaneously without rolling. These are referred to as plain, slap, or flat impressions. The individual’s right and left four fingers should be captured first, followed by the two thumbs (4-4-2 method).
To clarify, on the third row, for the large box in the lower left corner of the card, you “slap” all four fingers of the left hand down at the same time. Then you skip over to the large box on the lower right corner of the card and slap all four fingers of the right hand down at the same time. Finally you slap the two thumbs down at the same time, capturing the left thumb in the small middle left box, and the right thumb in the small middle right box.
Well, at least that’s how you do it on a traditional inked card. On a tenprint livescan device, you roll and slap your fingers on the large platen, without worrying (that much) about staying within the lines.
Why 14 fingerprint impression blocks are better than 20 fingers
So by the time you’re done, you’ve filled 14 fingerprint impression blocks by 13 distinct actions (the two slap thumbs are captured simultaneously), and you’ve effectively captured 20 fingerprints.
Why?
Quality control.
Because every finger should theoretically be captured twice, the slaps can be compared against the rolls to ensure that the fingerprints were captured in the correct order.
Locations of finger 2 (green) and finger 3 (blue) for rolled and slap prints.
If you capture the rolled and slap prints in the correct order, then the right index finger (finger 2) should appear in the green area on the first row as a rolled print, and in the green area on the third row as a slap print. Similarly, the middle finger (finger 3) should appear in the blue areas.
If the green rolled print is NOT the same as the green slap print, or if the blue rolled print is NOT the same as the blue slap print, then you captured the fingerprints in the wrong order.
In the old pre-livescan days of inking, a trained tenprint fingerprint examiner (or someone who pretended to be one) had to look at the prints to ensure that the fingers were captured properly. Now the roll to slap comparisons are all done in software, either at the tenprint livescan device itself, or at the automated fingerprint identification system (AFIS) or the automated biometric identification system (ABIS) that receives the prints.
In the 4-4-2 method, groups of prints are captured together, rather than individually. While it is possible to completely mess things up by capturing the left slaps when you are supposed to capture the right slaps, or by twisting your hands in a bizarre manner to capture the thumbs in reverse order, 4-4-2 gives you a reasonable assurance that the slap prints are captured in the correct order, ensuring a proper roll-to-slap comparison.
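The roll-to-slap sequence check described above can be sketched in code. (A real AFIS/ABIS compares fingerprint templates with a biometric matcher; the `matches` function below is a toy stand-in using label equality, and all names are my own illustrative assumptions.)

```python
# FBI finger numbering: 1-5 = right thumb through right little finger,
# 6-10 = left thumb through left little finger.
rolls = {i: f"finger-{i}" for i in range(1, 11)}  # rolled impressions (rows 1-2)

# 4-4-2 slap capture: right four fingers, left four fingers, then both thumbs.
slaps = {
    "right_slap": [rolls[i] for i in (2, 3, 4, 5)],
    "left_slap": [rolls[i] for i in (7, 8, 9, 10)],
    "thumbs": [rolls[1], rolls[6]],
}

def matches(a, b):
    """Toy stand-in for a biometric matcher (real systems compare templates)."""
    return a == b

def sequence_check(rolls, slaps):
    """Verify each segmented slap finger matches the roll captured for it."""
    expected = {
        "right_slap": [2, 3, 4, 5],
        "left_slap": [7, 8, 9, 10],
        "thumbs": [1, 6],
    }
    return all(
        matches(segment, rolls[finger])
        for block, fingers in expected.items()
        for segment, finger in zip(slaps[block], fingers)
    )
```

If the right and left slaps are swapped at capture time, the check fails, flagging an out-of-order capture.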
Well, unless the fingerprints are captured in an unattended fashion, or the police officer capturing the fingerprints is crooked.
But today’s ABIS systems are powerful enough to compare all ten submitted fingers against all ten fingers of every record in an ABIS database, so even if the submitted fingerprints falsely record finger 2 as finger 3, the ABIS will still find the matching print anyway.
It ISN’T time for me to jump on the Apple Vision Pro bandwagon, because while Apple Vision Pro affects the biometric industry, it’s not a REVOLUTIONARY biometric event.
The four revolutionary biometric events in the 21st century
How do I define a “revolutionary biometric event”?
I define it as something that completely transforms the biometric industry.
When I mention three of the four revolutionary biometric events in the 21st century, you will understand what I mean.
9/11. After 9/11, orders of biometric devices skyrocketed, and biometrics were incorporated into identity documents such as passports and driver’s licenses. Who knows, maybe someday we’ll actually implement REAL ID in the United States. The latest extension of the REAL ID enforcement date moved it out to May 7, 2025. (Subject to change, of course.)
The Boston Marathon bombings, April 2013. After the bombings, the FBI was challenged in managing and analyzing countless hours of video evidence. Companies such as IDEMIA National Security Solutions, MorphoTrak, Motorola, Paravision, Rank One Computing, and many others have tirelessly worked to address this challenge, while ensuring that facial recognition results accurately identify perpetrators while protecting the privacy of others in the video feeds.
COVID-19, spring 2020 and beyond. COVID accelerated changes that were already taking place in the biometric industry. COVID prioritized mobile, remote, and contactless interactions and forced businesses to address issues that were not as critical previously, such as liveness detection.
These three are cataclysmic world events that had a profound impact on biometrics. The fourth one, which occurred after the Boston Marathon bombings but before COVID, was…an introduction of a product feature.
Touch ID, September 2013. When Apple introduced the iPhone 5s, it also introduced a new way to log in to the device. Rather than entering a passcode, iPhone 5s users could just use their finger to log in. The technical accomplishment was dwarfed by the legitimacy that this brought to using fingerprints for identification. Before 2013, attempts to implement fingerprint verification for benefits recipients were resisted because fingerprinting was something that criminals did. After September 2013, fingerprinting was something that the cool Apple kids did. The biometric industry changed overnight.
Of course, Apple followed Touch ID with Face ID, with adherents of the competing biometric modalities sparring over which was better. But Face ID wouldn’t have been accepted as widely if Touch ID hadn’t paved the way.
So why hasn’t iris verification taken off?
Iris verification has been around for decades (I remember Iridian before L-1; it’s now part of IDEMIA), but iris verification is nowhere near as popular in the general population as finger and face verification. There are two reasons for this:
Compared to other biometrics, irises are hard to capture. To capture a fingerprint, you can lay your finger on a capture device, or “slap” your four fingers on a capture device, or even “wave” your fingers across a capture device. Faces are even easier to capture; while older face capture systems required you to stand close to the camera, modern face devices can capture your face as you are walking by the camera, or even if you are some distance from the camera.
Compared to other biometrics, irises are expensive to capture. Many years ago, my then-employer developed a technological marvel, an iris capture device that could accurately capture irises for people of any height. Unfortunately, the technological marvel cost thousands upon thousands of dollars, and no customers were going to use it when they could acquire fingerprint and face capture devices that were much less costly.
So while people rushed to implement finger and face capture on phones and other devices, iris capture was reserved for narrow verticals that required iris accuracy.
The Apple Vision Pro is not the first headset that was ever created, but the iPhone wasn’t the first smartphone either. And coming late to the game doesn’t matter. Apple’s visibility among trendsetters ensures that when Apple releases something, people take notice.
According to Apple, Optic ID works by analyzing a user’s iris through LED light exposure and then comparing it with an enrolled Optic ID stored on the device’s Secure Enclave….Optic ID will be used for everything from unlocking Vision Pro to using Apple Pay in your own headspace.
So why did Apple incorporate Optic ID on this device and not the others?
There are multiple reasons, but one key reason is that the Vision Pro retails for US$3,499, which makes it easier for Apple to justify the cost of the iris components.
But the high price of the Vision Pro comes at…a price
However, that high price is also the reason why the Vision Pro is not going to revolutionize the biometric industry. CNET admitted that the Vision Pro is a niche item:
With Vision Pro, Apple is trying to establish what it believes will be the next major evolution of the personal computer. That’s a bigger goal than selling millions of units on launch day, and a shift like that doesn’t happen overnight, no matter what the price is. The version of Vision Pro that Apple launches next year likely isn’t the one that most people will buy.
Certainly Vision Pro and Optic ID have the potential to revolutionize the computing industry…in the long term. And as that happens, the use of iris biometrics will become more popular with the general public…in the long term.
But not today. You’ll have to wait a little longer for the next biometric revolution. And hopefully it won’t be a catastrophic event like three of the previous revolutions.
Does your identity business provide biometric or non-biometric products and services that use finger, face, iris, DNA, voice, government documents, geolocation, or other factors or modalities?
Does your identity business need written content, such as blog posts (from the identity/biometric blog expert), case studies, data sheets, proposal text, social media posts, or white papers?
How can your identity business (with the help of an identity content marketing expert) create the right written content?
Latent prints are usually produced by sweat, skin debris or other sebaceous excretions that cover up the palmar surface of the fingertips. If a latent print is on the glass platen of the optical sensor and light is directed on it, this print can fool the optical scanner….
Capacitive sensors can be spoofed by using gelatin based soft artificial fingers.
There is another weakness of these types of readers. Some professions damage and wear away a person’s fingerprint ridges. Examples of professions whose practitioners exhibit worn ridges include construction workers and biometric content marketing experts (who, at least in the old days, handled a lot of paper).
The solution is to design a fingerprint reader that not only examines the surface of the finger, but goes deeper.
The specialty of multispectral sensors is that they can capture the features of the tissue that lies below the skin surface as well as the usual features on the finger surface. The features under the skin surface provide a second representation of the pattern on the fingerprint surface.
Multispectral sensors are nothing new. When I worked for Motorola, Motorola Ventures had invested in a company called Lumidigm that produced multispectral fingerprint sensors; they were much more expensive than your typical optical or capacitive sensor, but were much more effective in capturing true fingerprints to the subdermal level.
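The value of having two representations of the same pattern can be sketched as a simple score fusion. Everything here is a made-up illustration (the weights, scores, and function names are assumptions, not Lumidigm's actual algorithm): a worn surface pattern that would fail on its own can be rescued by a strong subdermal match.

```python
# Hypothetical sketch of multispectral score fusion: a surface match
# score and a subsurface match score are combined, so worn surface
# ridges (low surface score) can be rescued by the subdermal pattern.

def fused_score(surface: float, subsurface: float,
                w_surface: float = 0.4) -> float:
    """Weighted fusion of the two representations (weights are assumptions)."""
    return w_surface * surface + (1.0 - w_surface) * subsurface

# Worn ridges: the surface match is poor, but the subdermal pattern
# still matches strongly.
print(round(fused_score(0.30, 0.90), 2))  # 0.66
```

A surface-only reader would see only the 0.30 score and reject the construction worker (or paper-handling content marketer); the fused score clears a typical acceptance threshold.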
“Gelatin based soft artificial fingers” aren’t the only way to fool a biometric sensor, whether you’re talking about a fingerprint sensor or some other sensor such as a face sensor.
Regardless of the biometric modality, the intent is the same: instead of capturing a true biometric from a person, the biometric sensor is fooled into capturing a fake biometric, such as an artificial finger, a face with a mask on it, or a face on a video screen (rather than the face of a live person).
This tomfoolery is called a “presentation attack” (because you’re attacking security with a fake presentation).
And an organization called iBeta is one of the testing facilities authorized to test in accordance with the standard and to determine whether a biometric reader can detect the “liveness” of a biometric sample.
(Friends, I’m not going to get into passive liveness and active liveness. That’s best saved for another day.)
[UPDATE 4/24/2024: I FINALLY ADDRESSED THE DIFFERENCE BETWEEN ACTIVE AND PASSIVE LIVENESS HERE.]
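Conceptually, a liveness-aware system gates the match decision on the liveness decision. The sketch below is a hypothetical illustration only (the scores, thresholds, and function are my assumptions, not iBeta's test criteria): a perfect match score is worthless if the sample fails the liveness check.

```python
# Hypothetical sketch: a biometric accept decision gated on liveness.
# Scores and thresholds are made-up illustrations, not iBeta criteria.

def accept(liveness_score: float, match_score: float,
           liveness_threshold: float = 0.8,
           match_threshold: float = 0.7) -> bool:
    """Reject a suspected presentation attack first; only then
    consider whether the biometric sample matches the enrollment."""
    if liveness_score < liveness_threshold:
        return False  # spoof suspected: artificial finger, mask, replayed video
    return match_score >= match_threshold

print(accept(liveness_score=0.2, match_score=0.99))  # False: fails liveness
print(accept(liveness_score=0.9, match_score=0.95))  # True
```

Note the first call: the fake presentation matches the enrolled biometric almost perfectly (0.99), which is exactly what a good spoof does, and exactly why the liveness gate comes first.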
Multispectral liveness
While multispectral fingerprint readers aren’t the only fingerprint readers, or the only biometric readers, that iBeta has tested for liveness, the HID Global Lumidigm readers conform to Level 2 (the higher level) of iBeta testing.
When keeping your websites updated, I advise you to do as I say, not as I do. Two of my websites were significantly out of date and needed hurried corrections.
I realized this morning that the “My Experience” page on my jebredcal website was roughly a year out of date, so I hurriedly added content to it. Now the page will turn up in searches for the acronym “ABM” (OK, maybe not on the first page of the search results).