One advantage of an open source project is that there are far fewer secrets to hide. If a commercial firm develops biometric products, it has a responsibility to its investors to not release sensitive information.
Although findings…describe potential attack surfaces and are of high or medium severity, (Trail of Bits’) analysis did not uncover vulnerabilities in the Orb’s code…
My belief that everything on the Internet is true has been irrevocably shattered, all because of what an entertainment executive ordered in his spare time. But the Casey Bloys / “Kelly Shepherd” story is just a tiny bit of what is going on with synthetic identities. And X isn’t the only platform plagued by them, as my LinkedIn experience attests.
By the way, this blog post contains pictures of a lot of people. Casey Bloys is real. Some of the others, not so much.
Blame COVID
Casey Bloys is the Chairman and CEO of HBO and Max Content. Bloys had to start a recent 2024 schedule presentation with an apology, according to Variety. After explaining how passionate he is about his programming, he went back in time a couple of years to a period that we all remember.
So when you think of that mindset, and then think of 2020 and 2021, I’m home, working from home and spending an unhealthy amount of scrolling through Twitter. And I come up with a very, very dumb idea to vent my frustration.
So why did Bloys have to apologize on Thursday? Because of an article that Rolling Stone published on Wednesday. The article led off with this juicy showbiz tidbit about Bloys’ idea for responding to a critic.
“Maybe a Twitter user should tweet that that’s a pretty blithe response to what soldiers legitimately go through on [the] battlefield,” he texted. “Do you have a secret handle? Couldn’t we say especially given that it’s D-Day to dismiss a soldier’s experience like that seems pretty disrespectful … this must be answered!”
(A note to my younger readers: Twitter used to be a popular social media service that no longer exists. It was replaced by X.)
Eventually Bloys found someone to create the “secret handle.” Sully Temori is now alleging wrongful termination by HBO (which is why we’re learning about these juicy tidbits, via court filings). But in 2021 he was an executive assistant who wanted to get ahead by pleasing his bosses.
This is where Kelly Shepherd enters the story.
Kelly Shepherd, fake vegan mom
Ms. Shepherd seems like a nice woman. A mom, a Texan, an herbalist and aromatherapist, and a vegan. (The cows love that last part.)
Most critically, Shepherd is a normal person, not one of those Hollywood showbiz folks. Although Shepherd, who never posted anything on her own, seems to have a distinct motivation to respond to critics of HBO shows. Take her first reply to a critic from (checks notes) Rolling Stone. (Two years later, Rolling Stone would gleefully report on this story. Watch out who you anger.)
alan is always predictably safe and scared in his opinions
Kelly’s other three replies were along the same lines.
All were short one-sentence blurbs.
Most were completely in lower case, because that’s how regular non-Hollywood folk tweet.
All were critical of those who were critical of HBO, accusing them of “shitting on a show about women,” getting their “panties in a bunch,” and being “busy virtue signaling.”
Hey, if I couldn’t eat hamburgers and my home was filled with weird herbs and aromas, I’d be a little mad too.
And then, a little over a week later, it was over, and Kelly Shepherd never tweeted again. Although Temori apparently performed other activities against HBO critics via other methods. Well, until he was terminated.
Did Kelly Shepherd open a LinkedIn account?
But as part of the plan to satisfy Casey Bloys’ angry whims, Kelly Shepherd acquired a social media account, which she could use as a possible proof of identity.
Even though we now know she doesn’t exist.
But X isn’t the only platform plagued with synthetic identities, and some synthetic identities can do much more than anger an entertainment reviewer.
Many of us on LinkedIn regularly receive InMails and connection requests (in my case, from profiles with pictures of beautiful women) from people who say that LinkedIn constantly recommends us, who tell us how impressive our profiles are, and who want to contact us outside of the LinkedIn platform via text message or WhatsApp.
Now perhaps some of these messages are from real people, but I seriously doubt that so many of the employees at John Q Wine & Liquor Winery in New York happen to have the last name “Walter.” And the exact same job title.
Let’s take a close look at what Karina has been doing for the last 4+ years. Other than posing in front of her car, of course.
Ms. Walter is a pretty busy freelance general manager / director / content partnerships manager.
As for her colleague Ms. Alice Walter, she has more experience (having started in 2018) but also has an extensive biography that begins:
The United States is a country with innovative challenges, and there is more room for development in the wine industry at John Q Wine & Liquor Winery. I am motivated and love to learn, and like to be exposed to more different cultures, and hope to develop more careers in my future life.
And you can check out Maria Walter’s profile if you’re so inclined. Or at least check out “her” picture.
Now none of the Walters women tried to contact me, but another “employee” (or maybe it was a “freelancer,” I forget) of this company tried to do so, which led my curious nature to discover yet another hive of fake LinkedIn profiles.
Sadly, one person from this company is a second-degree connection, which means that one of my connections accepted “her” connection request.
Synthetic identities are harmless…right?
Who knows what Karina, Alice, and Maria will do with their LinkedIn profiles?
Will they connect with other professionals?
Will they ask said professionals to move the conversation to SMS or WhatsApp, for whatever reason?
Will they apply for new jobs, using their impressive work history? A 98.8% customer satisfaction rate while managing 1,800 sub-partnerships is remarkable.
Will they apply for bank accounts…or loans?
The fraud possibilities from fake LinkedIn accounts are endless, and could be very costly for any company that falls for a synthetic identity. In fact, FiVerity reports that “in 2020, an estimated $20 billion was lost to SIF” (synthetic identity fraud). Which means that LinkedIn account holders and Partnerships Managers Karina, Alice, and Maria Walter could make a LOT of money.
Now banks and other financial institutions have safeguards to verify financial identities of people who open accounts and apply for loans, because fraud reduction is critically important to financial institutions.
Social media companies? Identity is only “important” to them.
They don’t even care about uniqueness (the way Worldcoin does), as evidenced by the fact that I have more than two X accounts (but none in which I portray a female Texas mom and vegan).
So if someone comes up to you on X or LinkedIn, remember that all may not be as it seems.
Can someone pretend to be you if they have no idea who you are?
It’s been a couple of weeks since I last addressed Worldcoin’s activities, but a lot has happened in Kenya, and now in Argentina also. Here’s a succinct (I hope) update that looks beyond the blaring headlines to see what is REALLY happening.
And, at the end of this post, I address what COULD happen if a fraudster “cut off someone’s face, including gouging out their eyes, and then you draped it all over your own face.” Hey, you have to consider ALL the use cases.
According to the AAIP, an entity like Worldcoin must register with the AAIP, provide information about its data processing policy, and indicate the purpose for collecting sensitive data and the retention period for such data. Additionally, the agency requires details of the security and confidentiality measures applied to safeguard personal information. The AAIP did not confirm whether Worldcoin complies with the standards.
Worldcoin told CoinDesk in an emailed statement that “the project complies with all laws and regulations governing the processing of personal data in the markets where Worldcoin is available, including but not limited to Argentina’s Personal Data Protection Act 25.326.”
But what is this “personal data” that concerns Argentina so much?
The data that Worldcoin collects
Now a number of companies need to comply with local privacy regulations in numerous countries, and Worldcoin obviously must obey the law in the countries where it conducts business, including laws about personally identifiable information (PII). For illustration, here is an incomplete list of examples of PII, compiled by the University of Pittsburgh:
Name: full name, maiden name, mother’s maiden name, or alias
Personal identification numbers: social security number (SSN), passport number, driver’s license number, taxpayer identification number, patient identification number, financial account number, or credit card number
Personal address information: street address, or email address
Personal telephone numbers
Personal characteristics: photographic images (particularly of face or other identifying characteristics), fingerprints, or handwriting
Biometric data: retina scans, voice signatures, or facial geometry
Information identifying personally owned property: VIN number or title number
Asset information: Internet Protocol (IP) or Media Access Control (MAC) addresses that consistently link to a particular person
To my knowledge, Worldcoin acquires PII in two separate instances: when downloading the World App, and when registering at an Orb.
Data collected by the World App
First, Worldcoin collects data when you download the World App. The data that is collected by the iOS version of the World App includes a user ID, the user’s coarse location, a name, contacts, and a phone number. I’ll admit that the collection of contacts is a little odd, but let’s see what happens to that data later in the process.
Data collected by the Orb
Second, Worldcoin collects data when you enroll at an Orb.
Obviously the Orb collects iris images, and also collects face images. But what else is collected at the Orb?
Your biometric data is first processed locally on the Orb and then permanently deleted. The only data that remains is your iris code. This iris code is a set of numbers generated by the Orb and is not linked to your wallet or any of your personal information. As a result, it really tells us — and everyone else — nothing about you. All it does is stop you from being able to sign up again.
But what about the second use case, in which the user consents to have Worldcoin retain information (so that the user does not have to re-enroll if they get a new phone)?
Your biometric data is first processed locally on the Orb and then sent, via encrypted communication channels, to our distributed secure data stores, where it is encrypted at rest. Once it arrives, your biometric data is permanently deleted from the Orb.
Regardless of whether biometric data is retained or not, other PII isn’t even collected at the Orb:
Since you are not required to provide personal information like your name, email address, physical address or phone number, this means that you can easily sign up without us ever knowing anything about you.
“But John,” you’re saying, “names and phone numbers are not collected at the Orb, but names and phone numbers ARE collected by the World App. So how are the name, phone number, user ID, and ‘iris code’ linked together?” Let me reprint what Worldcoin says about the app:
Your Worldcoin App is your self-custodial wallet. That means, just like a physical wallet, that no banks, governments or corporations can do anything to it — like lose or freeze your money — you’re in complete control.
You also don’t need to enter any personal information to get or use the App. But even if you do, you can rest assured that, unlike others, we will never sell or try to profit from your personal information.
So apparently, while the World App asks for your name, it is not a mandatory field. I just confirmed this on my World App (which I enabled on May 16, without orb verification); the only identifying information that I could find was my phone number and my user ID.
And I’m assuming that if I were to enroll at an Orb, the iris code would be linked to my user ID.
Depending upon Worldcoin’s internal architecture:
It’s possible that the iris code could be linked to my phone number, either intentionally or unintentionally. But even if it is, an iris code in and of itself is useless outside of the Worldcoin ecosystem. In the same way that an Aware, IDEMIA, NEC, or Thales fingerprint template (not the fingerprint image) can’t be used to generate a full fingerprint image, a Worldcoin iris code can’t be used to generate a full iris image.
If I choose the “with data custody” option, my biometric images could be linked to my phone number. Again, they could be linked either intentionally or unintentionally. If such a linkage exists, then that IS a problem. If a user chooses to back up both their World App data and their Orb biometric image data with Worldcoin (and again, the user must CHOOSE to back up both sets of data), how does Worldcoin ensure that the two sets of data can’t be linked?
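To make the iris-code point above concrete, here is a toy sketch of my own. It is a simplification in the spirit of Daugman-style matching, not Worldcoin’s actual (unpublished) implementation: an iris code is just a fixed-length bit vector, and matching is a Hamming distance comparison. The bits are useful for matching inside the system, but on their own they reveal nothing about the person or the underlying image.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy model: an "iris code" is just a fixed-length bit vector.
# (Daugman-style codes are 2048 bits; Worldcoin's actual format is not public.)
CODE_BITS = 2048

def capture_iris_code(true_pattern, noise_rate=0.10):
    """Simulate re-capturing the same iris: flip a small fraction of bits."""
    noise = rng.random(CODE_BITS) < noise_rate
    return np.logical_xor(true_pattern, noise)

def hamming_distance(code_a, code_b):
    """Fraction of disagreeing bits between two iris codes."""
    return np.mean(code_a != code_b)

alice = rng.random(CODE_BITS) < 0.5   # Alice's underlying pattern
bob = rng.random(CODE_BITS) < 0.5     # Bob's underlying pattern

same = hamming_distance(capture_iris_code(alice), capture_iris_code(alice))
diff = hamming_distance(capture_iris_code(alice), capture_iris_code(bob))

THRESHOLD = 0.32  # a typical decision threshold in the iris literature
print(f"same person: {same:.3f} -> match={same < THRESHOLD}")
print(f"different people: {diff:.3f} -> match={diff < THRESHOLD}")
```

Two captures of the same iris disagree on a small fraction of bits; two different irises disagree on about half. That gap is what the matcher exploits, and nothing in the bit vector points back to a name, a phone number, or an eyeball.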
Presumably Argentina’s AAIP will investigate Worldcoin’s architecture to ensure that there are no financial identity threats.
Which leads us to Kenya.
Kenya and data protection laws
When we last visited Kenya and Worldcoin on August 2, the government had announced that “(r)elevant security, financial services and data protection agencies have commenced inquiries and investigations to establish the authenticity and legality of the aforesaid activities, the safety and protection of the data being harvested, and how the harvesters intend to use the data.”
Those investigations continue, Worldcoin’s Kenya offices have been raided, and Parliament is angry at the regulatory authorities…for not doing enough. The article that reports this states that the Data Protection Unit feels it is not responsible for investigating the “core business” of the registered companies, but Parliament feels otherwise.
The article also makes another interesting statement:
…the office failed to conduct background checks on the company, whose operations have been banned in both the United States of America (USA) and Germany.
Now what I CAN’T do is obtain some Worldcoin when I register my irises.
In addition, Worldcoin tokens (“WLD”) are not intended to be available for use, purchase, or access by US persons, including US citizens, residents, or persons in the United States, or companies incorporated, located, or resident in the United States, or who have a registered agent in the United States. We do not make WLD available to such US persons. Furthermore, you agree that you will not sell, transfer or make available WLD to US persons.
I continued on a darker vein: What if a criminal mastermind decided to cut out someone’s eyes, and use them to steal their identity?
The Orb engineer told me that it wouldn’t work. The Orb needs to see alive, blinking eyes, with a real human face attached to them. A picture of someone’s eyes won’t scan, robot eyes won’t scan, canine eyes won’t scan.
But then I got him.
If you cut off someone’s face, including gouging out their eyes, and then you draped it all over your own face, could you register as them with a Worldcoin scanner and steal their identity?
Yes.
Although he promised that the Worldcoin R&D team has not tested this particular edge case.
“Relevant security, financial services and data protection agencies have commenced inquiries and investigations to establish the authenticity and legality of the aforesaid activities, the safety and protection of the data being harvested, and how the harvesters intend to use the data,” read part of the statement.
“Further, it will be critical that assurances of public safety and the integrity of the financial transactions involving such a large number of citizens be satisfactorily provided upfront.”
And even the iris image data that Worldcoin DOES collect isn’t retained unless people request it.
Since no two people have the same iris pattern and these patterns are very hard to fake, the Orb can accurately tell you apart from everyone else without having to collect any other information about you — not even your name.
Importantly, the images of you and your iris pattern are permanently deleted as soon as you have signed up, unless you opt in to Data Custody to reduce the number of times you may need to go back to an Orb. Either way, the images are not connected to your Worldcoin tokens, transactions, or World ID.
Ah, but Worldcoin does retain…an iris code. A lot of good THAT’S gonna do a scammer.
Your biometric data is first processed locally on the Orb and then permanently deleted. The only data that remains is your iris code. This iris code is a set of numbers generated by the Orb and is not linked to your wallet or any of your personal information. As a result, it really tells us — and everyone else — nothing about you. All it does is stop you from being able to sign up again.
Since you are not required to provide personal information like your name, email address, physical address or phone number, this means that you can easily sign up without us ever knowing anything about you.
And no, you cannot reverse engineer an iris image from the iris code. In fact, you can’t reverse engineer any biometric image from its biometric template.
And even if you could reverse engineer an iris image, what are you going to do with it? You don’t know who owns it. It probably doesn’t belong to Bill Gates. It probably belongs to an impoverished Kenyan. (Good luck getting that person’s US$2.00. Which they probably already sold.)
Because—and here’s the thing that people forget about Worldcoin—”Worldcoin’s World ID emphasizes privacy so much that it does not conclusively prove a person’s identity (it only proves a person’s uniqueness).” (Link)
Companies could pay Worldcoin to use its digital identity system, for example if a coffee shop wants to give everyone one free coffee, then Worldcoin’s technology could be used to ensure that people do not claim more than one coffee without the shop needing to gather personal data, Macieira said.
Yup, that’s the use case. To allow 8 billion people to each claim one cup of coffee.
Not just the people who are members of the coffee company’s rewards club.
Not just the people who have purchased a certain amount of coffee.
Not just the people in the United States and Colombia.
Worldcoin can’t do those things, because even Worldcoin doesn’t know anything about its users.
Which means, by the way, that the World ID can’t be used in elections or national/state government welfare benefits distribution.
Sure it can be used to prove that someone hasn’t voted twice, or received benefits under two different names.
But it has no way of knowing whether the individual is qualified to vote or receive benefits. Maybe the person doesn’t live in the local jurisdiction. For voting, maybe the person lives there but is not a citizen. For benefits, maybe the person has too much income to qualify. Worldcoin doesn’t have a clue if any of these things are true.
So apparently the Kenyan authorities are worried that Worldcoin is gathering too much data.
I’m worried that Worldcoin is gathering not enough data for most practical use cases.
Iris recognition continues to make the news. Let’s review what iris recognition is and its benefits (and drawbacks), why Apple made the news last month, and why Worldcoin is making the news this month.
What is iris recognition?
There are a number of biometric modalities that can identify individuals by “who they are” (one of the five factors of authentication). A few examples include fingerprints, faces, voices, and DNA. All of these modalities purport to uniquely (or nearly uniquely) identify an individual.
One other way to identify individuals is via the irises in their eyes. I’m not a doctor, but presumably the Cleveland Clinic employs medical professionals who are qualified to define what the iris is.
The iris is the colored part of your eye. Muscles in your iris control your pupil — the small black opening that lets light into your eye.
But why use irises rather than, say, fingerprints and faces? The best person to answer this is John Daugman. (At this point several of you are intoning, “John Daugman.” With reason. He’s the inventor of iris recognition.)
(I)ris patterns become interesting as an alternative approach to reliable visual recognition of persons when imaging can be done at distances of less than a meter, and especially when there is a need to search very large databases without incurring any false matches despite a huge number of possibilities. Although small (11 mm) and sometimes problematic to image, the iris has the great mathematical advantage that its pattern variability among different persons is enormous.
Daugman, John, “How Iris Recognition Works.” IEEE Transactions on Circuits and Systems for Video Technology, Vol. 14, No. 1, January 2004. Quoted from page 21. (PDF)
Or in non-scientific speak, one benefit of iris recognition is that you know it is accurate, even when submitting a pair of irises in a one-to-many search against a huge database. How huge? We’ll discuss later.
Brandon Mayfield and fingerprints
Remember that Daugman’s paper was released roughly two months before Brandon Mayfield was misidentified in a fingerprint comparison. (Everyone now intone “Brandon Mayfield.”)
While some of the issues associated with Mayfield’s misidentification had nothing to do with forensic science (Al Jazeera spends some time discussing bias, and Itiel Dror also looked at bias post-Mayfield), this still shows that fingerprints from two different people can be remarkably similar, and that it takes care to properly identify people.
Police agencies, witnesses, and faces
And of course there are recent examples of facial misidentifications (both by police agencies and witnesses), again not necessarily forensic science related, and again showing the similarity of faces from two different people.
At the root of iris recognition’s accuracy is the data-richness of the iris itself. The IrisAccess system captures over 240 degrees of freedom or unique characteristics in formulating its algorithmic template. Fingerprints, facial recognition and hand geometry have far less detailed input in template construction.
Enough about claims. What about real results? The IREX 10 test, independently administered by the U.S. National Institute of Standards and Technology, measures the identification (one-to-many) accuracy of submitted algorithms. At the time I am writing this, the ten most accurate algorithms provide false negative identification rates (FNIR) between 0.0022 ± 0.0004 and 0.0037 ± 0.0005 when two eyes are used. (Single eye accuracy is lower.) By the time you see this, the top ten algorithms may have changed, because the vendors are always improving.
While the IREX 10 one-to-many tests are conducted against databases of less than a million records, it is estimated that iris one-to-many accuracy remains high even with databases of a billion people—something we will return to later in this post.
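A back-of-envelope calculation (my own, with illustrative numbers rather than vendor-measured ones) shows why database size matters so much: the chance that a single one-to-many search returns at least one false match grows with the size of the gallery being searched.

```python
# Back-of-envelope: probability that a single one-to-many search returns
# at least one false match, given a per-comparison false match rate (FMR).
# The FMR values here are illustrative, not measured results for any vendor.

def false_positive_identification_rate(fmr: float, gallery_size: int) -> float:
    # P(at least one false match) = 1 - (1 - FMR)^G
    return 1.0 - (1.0 - fmr) ** gallery_size

for fmr in (1e-6, 1e-10, 1e-14):
    for gallery in (10**6, 10**9):
        fpir = false_positive_identification_rate(fmr, gallery)
        print(f"FMR={fmr:.0e}, gallery={gallery:.0e}: FPIR ~ {fpir:.2%}")
```

A “one in a million” per-comparison rate sounds impressive, but against a million-record gallery it means a false match on most searches. This is why billion-scale identification demands per-comparison error rates that look absurdly small on paper.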
Iris drawbacks
OK, so if irises are so accurate, why aren’t we dumping our fingerprint readers and face readers and just using irises?
In short, because of the high friction in capturing irises. You can use high-resolution cameras to capture fingerprints and faces from far away, but as of now iris capture usually requires you to get very close to the capture device.
Which I guess is better than the old days when you had to put your eye right up against the capture device, but it’s still not as friendly (or intrusive) as face capture, which can be achieved as you’re walking down a passageway in an airport or sports stadium.
Irises and Apple Vision Pro
So how are irises being used today? You may or may not have heard last month’s hoopla about the Apple Vision Pro, which uses irises for one-to-one authentication.
I’m not going to spend a ton of time delving into this, because I just discussed Apple Vision Pro in June. In fact, I’m just going to quote from what I already said.
In short, as you wear the headset (which by definition is right on your head, not far away), the headset captures your iris images and uses them to authenticate you.
It’s a one-to-one comparison, not the one-to-many comparison that I discussed earlier in this post, but it is used to uniquely identify an individual.
But iris recognition doesn’t have to be used for identification.
Irises and Worldcoin
“But wait a minute, John,” you’re saying. “If you’re not using irises to determine if a person is who they say they are, then why would anyone use irises?”
Over the past several years, I’ve analyzed a variety of identity firms. Earlier this year I took a look at Worldcoin….Worldcoin’s World ID emphasizes privacy so much that it does not conclusively prove a person’s identity (it only proves a person’s uniqueness)…
That’s the only thing that I’ve said about Worldcoin, at least publicly. (I looked at Worldcoin privately earlier in 2023, but that report is not publicly accessible and even I don’t have it any more.)
The Worldcoin Foundation today announced that Worldcoin, a project co-founded by Sam Altman, Alex Blania and Max Novendstern, is now live and in a production-grade state.
The launch includes the release of the World ID SDK and plans to scale Orb operations to 35+ cities across 20+ countries around the world. In tandem, the Foundation’s subsidiary, World Assets Ltd., minted and released the Worldcoin token (WLD) to the millions of eligible people who participated in the beta; WLD is now transactable on the blockchain….
“In the age of AI, the need for proof of personhood is no longer a topic of serious debate; instead, the critical question is whether or not the proof of personhood solutions we have can be privacy-first, decentralized and maximally inclusive,” said Worldcoin co-founder and Tools for Humanity CEO Alex Blania. “Through its unique technology, Worldcoin aims to provide anyone in the world, regardless of background, geography or income, access to the growing digital and global economy in a privacy preserving and decentralized way.”
Worldcoin does NOT positively identify people…but it can still pay you
A very important note: Worldcoin’s purpose is not to determine identity (that a person is who they say they are). Worldcoin’s purpose is to determine uniqueness: namely, that a person (whoever they are) is unique among all the billions of people in the world. Once uniqueness is determined, the person can get money money money with an assurance that the same person won’t get money twice.
OK, so how are you going to determine the uniqueness of a person among all of the billions of people in the world?
Iris biometrics outperform other biometric modalities and already achieved false match rates beyond 1.2 × 10⁻¹⁴ (one false match in one trillion[9]) two decades ago[10]—even without recent advancements in AI. This is several orders of magnitude more accurate than the current state of the art in face recognition.
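To answer the question above with rough arithmetic (mine, not Worldcoin’s): deduplicating N enrollees implies on the order of N²/2 cross-comparisons, and the expected number of false matches is the per-comparison FMR times that comparison count. Plugging in the quoted rate and a world-scale population:

```python
# Rough arithmetic on deduplication at global scale (illustrative only).
# Deduplicating N enrollees implies about N*(N-1)/2 cross-comparisons;
# the expected number of false matches is FMR times that comparison count.

FMR = 1.2e-14          # per-comparison false match rate quoted above
N = 8_000_000_000      # world population, roughly

comparisons = N * (N - 1) // 2
expected_false_matches = FMR * comparisons

print(f"comparisons: {comparisons:.2e}")
print(f"expected false matches: {expected_false_matches:,.0f}")
```

Tens of quintillions of cross-comparisons add up fast even at a 10⁻¹⁴ rate, which is one reason deduplication at world scale is such a demanding requirement for any biometric modality.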
I’ll admit that I previously thought that age estimation was worthless, but I’ve since changed my mind about the necessity for it. Which is a good thing, because the U.S. National Institute of Standards and Technology (NIST) is about to add age estimation to its Face Recognition Vendor Test suite.
What is age estimation?
Before continuing, I should note that age estimation is not a way to identify people, but a way to classify people. For once, I’m stepping out of my preferred identity environment and looking at a classification question. Not “gender shades,” but “get off my lawn” (or my tricycle).
Age estimation uses facial features to estimate how old a person is, in the absence of any other information such as a birth certificate. According to a Yoti white paper that I’ll discuss in a minute, the Western world has two primary use cases for age estimation:
First, to estimate whether a person is over or under the age of 18 years. In many Western countries, the age of 18 is a significant age that grants many privileges. In my own state of California, you have to be 18 years old to vote, join the military without parental consent, marry (and legally have sex), get a tattoo, play the lottery, enter into binding contracts, sue or be sued, or take on a number of other responsibilities. Therefore, there is a pressing interest to know whether the person at the U.S. Army Recruiting Center, a tattoo parlor, or the lottery window is entitled to use the service.
Second, to estimate whether a person is over or under the age of 13 years. Although age 13 is not as great a milestone as age 18, this is usually the age at which social media companies allow people to open accounts. Thus the social media companies and other companies that cater to teens have a pressing interest to know the teen’s age.
Why was I against age estimation?
Because I felt it was better to know an age, rather than estimate it.
My opinion was obviously influenced by my professional background. When IDEMIA was formed in 2017, I became part of a company that produced government-issued driver’s licenses for the majority of states in the United States. (OK, MorphoTrak was previously contracted to produce driver’s licenses for North Carolina, but…that didn’t last.)
With a driver’s license, you know the age of the person and don’t have to estimate anything.
And estimation is not an exact science. Here’s what Yoti’s March 2023 white paper says about age estimation accuracy:
Our True Positive Rate (TPR) for 13-17 year olds being correctly estimated as under 25 is 99.93% and there is no discernible bias across gender or skin tone. The TPRs for female and male 13-17 year olds are 99.90% and 99.94% respectively. The TPRs for skin tone 1, 2 and 3 are 99.93%, 99.89% and 99.92% respectively. This gives regulators globally a very high level of confidence that children will not be able to access adult content.
Our TPR for 6-11 year olds being correctly estimated as under 13 is 98.35%. The TPRs for female and male 6-11 year olds are 98.00% and 98.71% respectively. The TPRs for skin tone 1, 2 and 3 are 97.88%, 99.24% and 98.18% respectively so there is no material bias in this age group either.
Yoti’s facial age estimation is performed by a ‘neural network’, trained to be able to estimate human age by analysing a person’s face. Our technology is accurate for 6 to 12 year olds, with a mean absolute error (MAE) of 1.3 years, and of 1.4 years for 13 to 17 year olds. These are the two age ranges regulators focus upon to ensure that under 13s and 18s do not have access to age restricted goods and services.
While this is admirable, is it precise enough to comply with government regulations? To a regulator, a mean absolute error of over a year doesn’t mean a hill of beans. By the letter of the law, if you are 17 years and 364 days old and you try to vote, you are breaking the law.
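It’s worth noting that the quoted TPRs rely on a buffer: a 13-17 year old only needs to be estimated as under 25, not under 18, so the estimator’s error is absorbed by a seven-year margin. A minimal simulation of my own (assuming Gaussian error, an assumption on my part, since Yoti does not publish its error distribution) illustrates the effect:

```python
import random

random.seed(0)

# Sketch of why a buffer threshold absorbs estimation error.
# Assumption (mine, not Yoti's): estimator error is roughly Gaussian,
# with sigma loosely derived from the published MAE of ~1.4 years.
SIGMA = 1.4 * 1.25   # for a zero-mean Gaussian, sigma = MAE * sqrt(pi/2) ~ MAE * 1.25

def estimate_age(true_age: float) -> float:
    return true_age + random.gauss(0.0, SIGMA)

def tpr_minors_flagged(threshold: float, trials: int = 100_000) -> float:
    """Fraction of 13-17 year olds whose estimate falls under `threshold`."""
    flagged = 0
    for _ in range(trials):
        true_age = random.uniform(13.0, 18.0)
        if estimate_age(true_age) < threshold:
            flagged += 1
    return flagged / trials

print(f"threshold 18: TPR = {tpr_minors_flagged(18.0):.4f}")
print(f"threshold 25: TPR = {tpr_minors_flagged(25.0):.4f}")
```

With the threshold at 18, a 17-and-a-half year old is a coin flip; with the threshold at 25, essentially every minor is caught. The buffer is what turns a 1.4-year MAE into a 99.9%+ TPR, at the cost of also challenging plenty of 18-24 year olds.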
Why did I change my mind?
Over the last couple of months I’ve thought about this a bit more and have experienced a Jim Bakker “I was wrong” moment.
I was wrong for two reasons.
Kids don’t have government IDs
I asked myself some questions.
How many 13 year olds do you know that have driver’s licenses? Probably none.
How many 13 year olds do you know that have government-issued REAL IDs? Probably very few.
How many 13 year olds do you know that have passports? Maybe a few more (especially after 9/11), but not that many.
Even at age 18, there is no guarantee that a person will have a government-issued REAL ID.
So how are 18 year olds, or 13 year olds, supposed to prove that they are old enough for services? Carry their birth certificate around?
You’ll note that Yoti didn’t target a use case for 21 year olds. This is partially because Yoti is a UK firm and therefore may not focus on the strict U.S. laws regarding alcohol, tobacco, and casino gambling. But it’s also because it’s much, much more likely that a 21 year old will have a government-issued ID, eliminating the need for age estimation.
Sometimes.
In some parts of the world, no one has government IDs
Over the past several years, I’ve analyzed a variety of identity firms. Earlier this year I took a look at Worldcoin. While Worldcoin’s World ID emphasizes privacy so much that it does not conclusively prove a person’s identity (it only proves a person’s uniqueness), and makes no attempt to provide the age of the person with the World ID, Worldcoin does have something to say about government issued IDs.
Online services often request proof of ID (usually a passport or driver’s license) to comply with Know your Customer (KYC) regulations. In theory, this could be used to deduplicate individuals globally, but it fails in practice for several reasons.
KYC services are simply not inclusive on a global scale; more than 50% of the global population does not have an ID that can be verified digitally.
IDs are issued by states and national governments, with no global system for verification or accountability. Many verification services (i.e. KYC providers) rely on data from credit bureaus that is accumulated over time, hence stale, without the means to verify its authenticity with the issuing authority (i.e. governments), as there are often no APIs available. Fake IDs, as well as real data to create them, are easily available on the black market. Additionally, due to their centralized nature, corruption at the level of the issuing and verification organizations cannot be eliminated.
Same source as above.
Now this (in my opinion) doesn’t make the case for Worldcoin, but it certainly casts some doubt on a universal way to document ages.
So we’d better start measuring the accuracy of age estimation.
If only there were an independent organization that could measure age estimation, in the same way that NIST measures the accuracy of fingerprint, face, and iris identification.
You know where this is going.
How will NIST test age estimation?
Yes, NIST is in the process of incorporating an age estimation test in its battery of Face Recognition Vendor Tests.
Facial age verification has recently been mandated in legislation in a number of jurisdictions. These laws are typically intended to protect minors from various harms by verifying that the individual is above a certain age. Less commonly some applications extend benefits to groups below a certain age. Further use-cases seek only to determine actual age. The mechanism for estimating age is usually not specified in legislation. Face analysis using software is one approach, and is attractive when a photograph is available or can be captured.
In 2014, NIST published NISTIR 7995 on the performance of automated age estimation. Using a database of 6 million images, the most accurate age estimation algorithm estimated 67% of subjects’ ages to within five years of their actual age, with a mean absolute error (MAE) of 4.3 years. Since then, more research has been dedicated to further improving the accuracy of facial age verification.
Note that this was in 2014. As we have seen above, Yoti asserts a dramatically lower error rate in 2023.
NIST is just ramping up the testing right now, but once it moves forward, it will be possible to compare age estimation accuracy of various algorithms, presumably in multiple scenarios.
Well, for those algorithm providers who choose to participate.
Does your firm need to promote its age estimation solution?
Does your company have an age estimation solution that is superior to all others?
Do you need an experienced identity professional to help you spread the word about your solution?