New York was the state with the largest share of the nation’s sports betting tax revenue in the third quarter of 2023: $188.53 million, or more than 37% of total tax revenue and gross receipts from sports betting in the United States. Indiana ($38.6 million) and Ohio ($32.9 million) followed.
Are you wondering why populous states such as California and Texas don’t appear on the list? That’s because sports betting is only legal in 38 states and the District of Columbia.
Sports betting in any form is currently illegal in California, Texas, Idaho, Utah, Minnesota, Missouri, Alabama, Georgia, South Carolina, Oklahoma, Alaska, and Hawaii.
But the remaining states that allow sports betting need to ensure that gamblers meet age verification requirements. (Even though those states have a powerful incentive to let underage people gamble, since more gamblers mean more tax revenue.)
So far the best alternative to “target audience” that I’ve found is “hungry people,” which not only focuses on people rather than an abstraction, but also on those who are ready to purchase your product or service.
But I just found an instance in which “thirsty people” may be better than “hungry people.” Specifically, for the Colorado spirits company Friday Deployment, which engages in product marketing in a very…um…targeted way. Including the use of a micro-influencer who is well-known to Friday Deployment’s thirsty people.
Heads up for regular Bredemarket blog readers: the “why” and “how” questions are coming.
Why are Friday Deployment’s “thirsty people” technologists?
Why does Friday Deployment aim its product marketing at technologists?
Presumably because of this background, Friday Deployment’s product marketing is filled with tech references. Here’s a sample from Friday Deployment’s web page (as of Friday, February 2, 2024).
It was inevitable. The tree is out of date, the history is a mess, and you just want to start your weekend. Maybe you just do a quick little git push --force? Maybe someone already did, and you now get to figure out the correct commit history?
But that isn’t the only way that Friday Deployment markets to its “thirsty people.”
How does Friday Deployment’s marketing resonate with its thirsty people?
How else does Friday Deployment address a technologist audience?
Those of you who are familiar with LinkedIn’s tempests in a teapot realize that LinkedIn users don’t spend all of their time talking about green banners or vaping during remote interviews.
Well, Brittany Pietsch was gainfully employed at Cloudflare, until the day when she and about 40 others were terminated.
Pietsch was terminated by two people whom she didn’t know and who could not tell her why she was being terminated.
This story might have been swept under the rug…except that Pietsch knew that people were losing their jobs, so when she was invited to a meeting she video-recorded the first part of the termination and shared it on the tubes.
The video went viral and launched a ton of discussion both for and against what Pietsch did. I lean toward the “for,” if you’re wondering.
Since Friday Deployment’s “thirsty people” were probably familiar with the Brittany Pietsch story, the company worked with her to re-create her termination video…with a twist. (Not literally, since Pietsch drank the gin straight.)
Well, the product marketing ploy worked, since I clicked on the website of a spirits company that was new to me, and now I’m on their mailing list.
But let’s talk alcohol age verification
The Friday Deployment product marketing partnership with Brittany Pietsch worked…mostly. Except that I have one word of advice for company owner Rishi Malik.
With your Varo Bank engineering experience, you of all people should realize that Friday Deployment’s age verification system is hopelessly inadequate. A robust age verification system, an age estimation system, or even a simple question asking for your date of birth would all be better.
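For the record, even the weakest of those three options takes only a few lines to implement. Here’s a minimal sketch in Python; the function names are my own invention, and a real gate would of course have to contend with visitors who simply lie about their birthday:

```python
from datetime import date

def age_in_years(dob: date, today: date) -> int:
    """Return a person's age in whole years as of `today`."""
    # Subtract one year if this year's birthday hasn't happened yet.
    had_birthday = (today.month, today.day) >= (dob.month, dob.day)
    return today.year - dob.year - (0 if had_birthday else 1)

def may_enter_spirits_site(dob: date, today: date) -> bool:
    """U.S. alcohol sites must gate at age 21."""
    return age_in_years(dob, today) >= 21

# A visitor born February 2, 2006 turned 18 on February 2, 2024:
# old enough to vote, but not old enough to browse gin.
print(may_enter_spirits_site(date(2006, 2, 2), date(2024, 2, 2)))  # False
```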
Bredemarket can’t create a viral video for your tech firm, but…
But enough about Friday Deployment. Let’s talk about YOUR technology firm.
How can your company market to your thirsty (or hungry) people? Bredemarket can’t create funny videos with micro-influencers, but Bredemarket can craft the words that speak to your audience.
To learn more about Bredemarket’s marketing and writing services for technology firms, click on the image below.
Why did I mention the “future implementation” of the UK Online Safety Act? Because the passage of the UK Online Safety Act is just the FIRST step in a long process. Ofcom still has to figure out how to implement the Act.
Ofcom started to work on this on November 9, but it’s going to take many months to finalize—I mean finalise things. This is the UK Online Safety Act, after all.
This is the first of four major consultations that Ofcom, as regulator of the new Online Safety Act, will publish as part of our work to establish the new regulations over the next 18 months.
It focuses on our proposals for how internet services that enable the sharing of user-generated content (‘user-to-user services’) and search services should approach their new duties relating to illegal content.
On November 9 Ofcom published a slew of summary and detailed documents. Here’s a brief excerpt from the overview.
Mae’r ddogfen hon yn rhoi crynodeb lefel uchel o bob pennod o’n hymgynghoriad ar niwed anghyfreithlon i helpu rhanddeiliaid i ddarllen a defnyddio ein dogfen ymgynghori. Mae manylion llawn ein cynigion a’r sail resymegol sylfaenol, yn ogystal â chwestiynau ymgynghori manwl, wedi’u nodi yn y ddogfen lawn. Dyma’r cyntaf o nifer o ymgyngoriadau y byddwn yn eu cyhoeddi o dan y Ddeddf Diogelwch Ar-lein. Mae ein strategaeth a’n map rheoleiddio llawn ar gael ar ein gwefan.
Oops, I seem to have quoted from the Welsh version. Maybe you’ll have better luck reading the English version.
This document sets out a high-level summary of each chapter of our illegal harms consultation to help stakeholders navigate and engage with our consultation document. The full detail of our proposals and the underlying rationale, as well as detailed consultation questions, are set out in the full document. This is the first of several consultations we will be publishing under the Online Safety Act. Our full regulatory roadmap and strategy is available on our website.
And if you need help telling your firm’s UK Online Safety Act story, Bredemarket can help. (Unless the final content needs to be in Welsh.) Click below!
Having passed, eventually, through the UK’s two houses of Parliament, the bill received royal assent (October 26)….
[A]dded in (to the Act) is a highly divisive requirement for messaging platforms to scan users’ messages for illegal material, such as child sexual abuse material, which tech companies and privacy campaigners say is an unwarranted attack on encryption.
This not only opens up issues regarding encryption and privacy, but also specific identity technologies such as age verification and age estimation.
This post looks at three types of firms that are affected by the UK Online Safety Act, the stories they are telling, and the stories they may need to tell in the future. What is YOUR firm’s Online Safety Act-related story?
What three types of firms are affected by the UK Online Safety Act?
As of now I have been unable to locate a full version of the final Act, but presumably the provisions from this July 2023 version (PDF) have only undergone minor tweaks.
Among other things, this version discusses “User identity verification” in section 65, “Category 1 service” in section 96(10)(a), “United Kingdom user” in section 228(1), and a multitude of other terms that affect how companies will conduct business under the Act.
I am focusing on three different types of companies:
Technology services (such as Yoti) that provide identity verification, including but not limited to age verification and age estimation.
User-to-user services (such as WhatsApp) that provide encrypted messages.
User-to-user services (such as Wikipedia) that allow users (including United Kingdom users) to contribute content.
What types of stories will these firms have to tell, now that the Act is law?
For ALL services, the story will vary as Ofcom decides how to implement the Act, but we are already seeing the stories from identity verification services. Here is what Yoti stated after the Act became law:
We have a range of age assurance solutions which allow platforms to know the age of users, without collecting vast amounts of personal information. These include:
Age estimation: a user’s age is estimated from a live facial image. They do not need to use identity documents or share any personal information. As soon as their age is estimated, their image is deleted – protecting their privacy at all times. Facial age estimation is 99% accurate and works fairly across all skin tones and ages.
Digital ID app: a free app which allows users to verify their age and identity using a government-issued identity document. Once verified, users can use the app to share specific information – they could just share their age or an ‘over 18’ proof of age.
MailOnline has approached WhatsApp’s parent company Meta for comment now that the Bill has received Royal Assent, but the firm has so far refused to comment.
[T]o comply with the new law, the platform says it would be forced to weaken its security, which would not only undermine the privacy of WhatsApp messages in the UK but also for every user worldwide.
‘Ninety-eight per cent of our users are outside the UK. They do not want us to lower the security of the product, and just as a straightforward matter, it would be an odd choice for us to choose to lower the security of the product in a way that would affect those 98 per cent of users,’ Mr Cathcart has previously said.
Companies, from Big Tech down to smaller platforms and messaging apps, will need to comply with a long list of new requirements, starting with age verification for their users. (Wikipedia, the eighth-most-visited website in the UK, has said it won’t be able to comply with the rule because it violates the Wikimedia Foundation’s principles on collecting data about its users.)
All of these firms have shared their stories either before or after the Act became law, and those stories will change depending upon what Ofcom decides.
For example, when biometric companies want to justify the use of their technology, they have found that it is very effective to position biometrics as a way to combat sex trafficking.
Similarly, moves to rein in social media are positioned as a way to preserve mental health.
Now that’s a not-so-pretty picture, but it effectively speaks to emotions.
“If poor vulnerable children are exposed to addictive, uncontrolled social media, YOUR child may end up in a straitjacket!”
In New York state, four government officials have declared that the ONLY way to preserve the mental health of underage social media users is via two bills, one of which is the “New York Child Data Protection Act.”
But there is a challenge to enforce ALL of the bill’s provisions…and only one way to solve it. An imperfect way—age estimation.
Because they want to protect the poor vulnerable children.
And because the major U.S. social media companies are headquartered in California. But I digress.
So why do they say that children need protection?
Recent research has shown devastating mental health effects associated with children and young adults’ social media use, including increased rates of depression, anxiety, suicidal ideation, and self-harm. The advent of dangerous, viral ‘challenges’ being promoted through social media has further endangered children and young adults.
Of course one can also argue that social media is harmful to adults, but the New Yorkers aren’t going to go that far.
So they are just going to protect the poor vulnerable children.
This post isn’t going to deeply analyze one of the two bills the quartet has championed, but I will briefly mention that bill now.
The “Stop Addictive Feeds Exploitation (SAFE) for Kids Act” (S7694/A8148) defines “addictive feeds” as those that are arranged by a social media platform’s algorithm to maximize the platform’s use.
Those of us who are flat-out elderly vaguely recall that the algorithmic feed replaced the former “chronological feed,” in which the most recent content appeared first and you had to scroll down to see that really cool post from two days ago. New York wants the chronological feed to be the default for social media users under 18.
The bill also proposes to limit social media access by users under 18 without parental consent, especially between midnight and 6:00 am.
And those who love Illinois BIPA will be pleased to know that the bill allows parents (and their lawyers) to sue for damages.
Previous efforts to control underage use of social media have faced legal scrutiny, but since Attorney General James has sworn to uphold the U.S. Constitution, presumably she has thought about all this.
Enough about SAFE for Kids. Let’s look at the other bill.
The New York Child Data Protection Act
The second bill, and the one that concerns me, is the “New York Child Data Protection Act” (S7695/A8149). Here is how the quartet describes how this bill will protect the poor vulnerable children.
With few privacy protections in place for minors online, children are vulnerable to having their location and other personal data tracked and shared with third parties. To protect children’s privacy, the New York Child Data Protection Act will prohibit all online sites from collecting, using, sharing, or selling personal data of anyone under the age of 18 for the purposes of advertising, unless they receive informed consent or unless doing so is strictly necessary for the purpose of the website. For users under 13, this informed consent must come from a parent.
And again, this bill provides a BIPA-like mechanism for parents or guardians (and their lawyers) to sue for damages.
But let’s dig into the details. With apologies to the New York State Assembly, I’m going to dig into the Senate version of the bill (S7695). Bear in mind that this bill could be amended after I post this, and some of the portions that I cite could change.
The bill only applies to natural persons. So the bots are safe, regardless of age.
Speaking of age, the age of 18 isn’t the only age referenced in the bill. Here’s a part of the “privacy protection by default” section:
§ 899-FF. PRIVACY PROTECTION BY DEFAULT.
1. EXCEPT AS PROVIDED FOR IN SUBDIVISION SIX OF THIS SECTION AND SECTION EIGHT HUNDRED NINETY-NINE-JJ OF THIS ARTICLE, AN OPERATOR SHALL NOT PROCESS, OR ALLOW A THIRD PARTY TO PROCESS, THE PERSONAL DATA OF A COVERED USER COLLECTED THROUGH THE USE OF A WEBSITE, ONLINE SERVICE, ONLINE APPLICATION, MOBILE APPLICATION, OR CONNECTED DEVICE UNLESS AND TO THE EXTENT:
(A) THE COVERED USER IS TWELVE YEARS OF AGE OR YOUNGER AND PROCESSING IS PERMITTED UNDER 15 U.S.C. § 6502 AND ITS IMPLEMENTING REGULATIONS; OR
(B) THE COVERED USER IS THIRTEEN YEARS OF AGE OR OLDER AND PROCESSING IS STRICTLY NECESSARY FOR AN ACTIVITY SET FORTH IN SUBDIVISION TWO OF THIS SECTION, OR INFORMED CONSENT HAS BEEN OBTAINED AS SET FORTH IN SUBDIVISION THREE OF THIS SECTION.
So a lot of this bill depends upon whether a person is over or under the age of eighteen, or over or under the age of thirteen.
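To make those thresholds concrete, here is a minimal sketch of the routing an operator would have to perform under § 899-FF. This is my own paraphrase in Python, not the bill’s language (and certainly not legal advice), and it assumes the operator somehow already knows the user’s age:

```python
from enum import Enum

class ConsentPath(Enum):
    COPPA_PARENTAL = "process only as permitted under 15 U.S.C. § 6502 (COPPA)"
    NECESSITY_OR_INFORMED_CONSENT = "strictly necessary activity, or informed consent"
    NOT_COVERED = "18 or older; not a covered user under the bill"

def consent_path(age: int) -> ConsentPath:
    """Route a user by the two age thresholds in S7695 (a sketch, not legal advice)."""
    if age <= 12:
        return ConsentPath.COPPA_PARENTAL
    if age < 18:
        return ConsentPath.NECESSITY_OR_INFORMED_CONSENT
    return ConsentPath.NOT_COVERED

for age in (11, 14, 18):
    print(age, "->", consent_path(age).value)
```

The catch, of course, is the input to that function: the operator has to know the user’s age in the first place.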
And that’s a problem.
How old are you?
To enforce the bill, a service needs to know whether a person is under 18 (and, for minors, whether they are under 13). And I don’t think the quartet will be satisfied with the way that alcohol websites determine whether someone is 21 years old.
Attorney General James and the others would presumably prefer that the social media companies verify ages with a government-issued ID such as a state driver’s license, a state identification card, or a national passport. This is how most entities verify ages when they have to satisfy legal requirements.
For some people, even some minors, this is not that much of a problem. Anyone who wants to drive in New York State must have a driver’s license, and you have to be at least 16 years old to get one. Admittedly some people in the city never bother to get a driver’s license, but at some point these people will probably get a state ID card.
However, there are going to be some 17 year olds who don’t have a driver’s license, government ID or passport.
And some 16 year olds.
And once you look at younger people—15 year olds, 14 year olds, 13 year olds, 12 year olds—the chances of them having a government-issued identification document are much lower.
What are these people supposed to do? Provide a birth certificate? And how will the social media companies know if the birth certificate is legitimate?
But there’s another way to determine ages—age estimation.
How old are you, part 2
As long-time readers of the Bredemarket blog know, I have struggled with the issue of age verification, especially for people who do not have driver’s licenses or other government identification. Age estimation in the absence of a government ID is still an inexact science, as even Yoti has stated.
Our technology is accurate for 6 to 12 year olds, with a mean absolute error (MAE) of 1.3 years, and of 1.4 years for 13 to 17 year olds. These are the two age ranges regulators focus upon to ensure that under 13s and 18s do not have access to age restricted goods and services.
So if a minor does not have a government ID, and the social media firm has to use age estimation to determine the minor’s age for purposes of the New York Child Data Protection Act, the following two scenarios are possible (see the sketch after this list):
An 11 year old may be incorrectly allowed to give informed consent for purposes of the Act.
A 14 year old may be incorrectly denied the ability to give informed consent for purposes of the Act.
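How often could this happen? Here is a minimal back-of-the-envelope simulation. It is mine, not Yoti’s: it assumes, purely for illustration, that estimation errors are roughly normally distributed with a spread comparable to the reported MAE of 1.3 to 1.4 years. Run it and you’ll see that, under those assumptions, a non-trivial share of estimates for 11 and 14 year olds land on the wrong side of the age-13 line.

```python
import random

random.seed(42)

def misclassification_rate(true_age: float, threshold: float = 13.0,
                           spread: float = 1.4, trials: int = 100_000) -> float:
    """Fraction of simulated age estimates that land on the wrong side of the threshold.

    Assumes (purely for illustration) normally distributed estimation errors
    with a spread comparable to Yoti's reported MAE of 1.3-1.4 years.
    """
    wrong = 0
    for _ in range(trials):
        estimate = random.gauss(true_age, spread)
        if (estimate >= threshold) != (true_age >= threshold):
            wrong += 1
    return wrong / trials

print(f"11 year old estimated as 13 or older: {misclassification_rate(11):.1%}")
print(f"14 year old estimated as under 13: {misclassification_rate(14):.1%}")
```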
Is age estimation “good enough for government work”?
If you ask any one of us in the identity verification industry, we’ll tell you how identity verification proves that you know who is accessing your service.
During the identity verification/onboarding step, one common technique is to capture the live face of the person who is being onboarded, then compare that to the face captured from the person’s government identity document. As long as you have assurance that (a) the face is live and not a photo, and (b) the identity document has not been tampered with, you positively know who you are onboarding.
The authentication step usually captures a live face and compares it to the face that was captured during onboarding, thus positively showing that the right person is accessing the previously onboarded account.
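Here’s a rough sketch of that two-step flow. The templates, threshold, and function names below are placeholders of my own; a real deployment would derive face templates from a biometric SDK and layer on the liveness and document-tamper checks described above.

```python
import math

# Placeholder: a real system derives these vectors from a face matcher.
FaceTemplate = list[float]

MATCH_THRESHOLD = 0.9  # illustrative only, not a vendor-recommended value

def similarity(a: FaceTemplate, b: FaceTemplate) -> float:
    """Cosine similarity between two face templates."""
    dot = sum(x * y for x, y in zip(a, b))
    norms = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norms

def onboard(live_face: FaceTemplate, document_face: FaceTemplate):
    """Enroll the user if their live face matches the face on their ID document."""
    if similarity(live_face, document_face) >= MATCH_THRESHOLD:
        return live_face  # store as the enrolled template
    return None

def authenticate(live_face: FaceTemplate, enrolled: FaceTemplate) -> bool:
    """Compare a fresh live capture against the enrolled template."""
    return similarity(live_face, enrolled) >= MATCH_THRESHOLD

enrolled = onboard(live_face=[0.9, 0.1, 0.4], document_face=[0.88, 0.12, 0.41])
print(enrolled is not None)  # True: the live face matches the document face
```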
Sounds like the perfect solution, especially in industries that rely on age verification to ensure that people are old enough to access the service.
Therefore, if you are employing robust identity verification and authentication that includes age verification, what follows should never happen.
Eduardo Montanari, who manages delivery logistics at a burger shop north of São Paulo, has noticed a pattern: Every time an order pickup is assigned to a female driver, there’s a good chance the worker is a minor.
On YouTube, a tutorial — one of many — explains “how to deliver as a minor.” It has over 31,000 views. “You have to create an account in the name of a person who’s the right age. I created mine in my mom’s name,” says a boy, who identifies himself as a minor in the video.
Once a cooperative parent or older sibling agrees to help, the account is created in the older person’s name, the older person’s face and identity document are used to create the account, and everything is valid.
Outsmarting authentication
Yes, but what about authentication?
That’s why it’s helpful to use a family member, or someone who lives in the minor’s home.
Let’s say little Maria is at home, doing her homework, when her gig economy app rings with a delivery request. Now Maria was smart enough to have her older sister Irene or her mama Cecile perform the onboarding with the delivery app. If she’s at home, she can go to Irene or Cecile, have them perform the authentication, and then she’s off on her bike to make money.
(Alternatively, if the app does not support liveness detection, Maria can just hold a picture of Irene or Cecile up to the camera and authenticate.)
The onboarding process was completed by the account holder.
The authentication was completed by the account holder.
But the account holder isn’t the one that’s actually using the service. Once authentication is complete, anyone can access the service.
So how do you stop underage gig economy use?
According to Rest of World, one possible solution is to tattle on underage delivery people. If you see something, say something.
But what’s the incentive for a restaurant owner or delivery recipient to report that their deliveries are being performed by a kid?
“The feeling we have is that, at least this poor boy is working. I know this is horrible, but here in Brazil we end up seeing it as an opportunity … It’s ridiculous,” (psychologist Regiane Couto) said.
A much better solution is to replace one-time authentication with continuous authentication, or at least to be smarter about authentication. For example, a gig delivery worker could be required to authenticate at multiple points in the process (see the sketch after this list):
When the worker receives the delivery request.
When the worker arrives at the restaurant.
When the worker makes the delivery.
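Here is a minimal sketch of what that checkpoint-based approach might look like. The authenticate callable is a stand-in for whatever live biometric check the app actually uses:

```python
from datetime import datetime, timezone
from typing import Callable, Optional

CHECKPOINTS = ("request_accepted", "arrived_at_restaurant", "delivery_completed")

def run_delivery(authenticate: Callable[[], bool]) -> Optional[dict]:
    """Require a fresh authentication at each checkpoint.

    `authenticate` is a placeholder that should return True only when the
    enrolled account holder passes a live biometric check. Returns a
    timestamp per checkpoint, or None if any check fails.
    """
    timestamps = {}
    for checkpoint in CHECKPOINTS:
        if not authenticate():
            return None  # the account holder isn't present; halt the job
        timestamps[checkpoint] = datetime.now(timezone.utc)
    return timestamps

# Example: a stand-in that always passes, as if Irene followed Maria around all day.
print(run_delivery(lambda: True))
```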
It’s too difficult to drag big sister Irene or mama Cecile to ALL of these points.
As an added bonus, these authentications provide timestamps of critical points in the delivery process, which the delivery company and/or restaurant can use for their analytics.
Problem solved.
Except that little Maria doesn’t have any excuse and has to complete her homework.
Approximately 2,700 years ago, the Greek poet Hesiod is recorded as saying “moderation is best in all things.” This applies to government regulations, including encryption and age verification regulations. As the United Kingdom’s House of Lords works through drafts of its Online Safety Bill, interested parties are seeking to influence the level of regulation.
In Allan’s assessment, he wondered whether the mandated encryption and age verification regulations would apply to all services, or just critical services.
Allan considered a number of services, but I’m just going to home in on two of them: WhatsApp and Wikipedia.
The Online Safety Bill and WhatsApp
WhatsApp is owned by a large American company called Meta, which causes two problems for regulators in the United Kingdom (and in Europe):
Meta is a large company.
Meta is an American company.
WhatsApp itself causes another problem for UK regulators:
WhatsApp encrypts messages.
Because of these three truths, UK regulators are not necessarily inclined to play nice with WhatsApp, which may affect whether WhatsApp will be required to comply with the Online Safety Bill’s regulations.
Allan explains the issue:
One of the powers the Bill gives to OFCOM (the UK Office of Communications) is the ability to order services to deploy specific technologies to detect terrorist and child sexual exploitation and abuse content….
But there may be cases where a provider believes that the technology it is being ordered to deploy would break essential functionality of its service and so would prefer to leave the UK rather than accept compliance with the order as a condition of remaining….
If OFCOM does issue this kind of order then we should expect to see some encrypted services leave the UK market, potentially including very popular ones like WhatsApp and iMessage.
Speaking during a UK visit in which he will meet legislators to discuss the government’s flagship internet regulation, Will Cathcart, Meta’s head of WhatsApp, described the bill as the most concerning piece of legislation currently being discussed in the western world.
He said: “It’s a remarkable thing to think about. There isn’t a way to change it in just one part of the world. Some countries have chosen to block it: that’s the reality of shipping a secure product. We’ve recently been blocked in Iran, for example. But we’ve never seen a liberal democracy do that.
“The reality is, our users all around the world want security,” said Cathcart. “Ninety-eight per cent of our users are outside the UK. They do not want us to lower the security of the product, and just as a straightforward matter, it would be an odd choice for us to choose to lower the security of the product in a way that would affect those 98% of users.”
In passing, the March Guardian article noted that WhatsApp requires UK users to be 16 years old. This doesn’t appear to be an issue for Meta, but could be an issue for another very popular online service.
The Online Safety Bill and Wikipedia
So how does the Online Safety Bill affect Wikipedia?
It depends on how the Online Safety Bill is implemented via the rulemaking process.
As in other countries, the true effects of legislation aren’t apparent until the government writes the rules that implement the legislation. It’s possible that the rulemaking will carve out an exemption allowing Wikipedia to NOT enforce age verification. Or it’s possible that Wikipedia will be mandated to enforce age verification for its writers.
If they do not (carve out exemptions) then there could be real challenges for the continued operation of some valuable services in the UK given what we know about the requirements in the Bill and the operating principles of services like Wikipedia.
For example, it would be entirely inconsistent with Wikipedia’s privacy principles to start collecting additional data about the age of their users and yet this is what will be expected from regulated services more generally.
Left unsaid is the same issue that affects encryption: age verification for Wikipedia may be required in the United Kingdom, but may not be required for other countries.
(Wales) used the example of Wikipedia, in which none of its 700 staff or contractors plays a role in content or in moderation.
Instead, the organisation relies on its global community to make democratic decisions on content moderation, and have contentious discussions in public.
By contrast, the “feudal” approach sees major platforms make decisions centrally, erratically, inconsistently, often using automation, and in secret.
By regulating all social media under the assumption that it’s all exactly like Facebook and Twitter, Wales said that authorities would impose rules on upstart competitors that force them into that same model.
One common thread between these two cases is that implementation of the regulations results in a privacy threat to the affected individuals.
For WhatsApp users, the privacy threat is obvious. If WhatsApp is forced to fully or partially disable encryption, or is forced to use an encryption scheme that the UK Government could break, then the privacy of every message (including messages between people outside the UK) would be threatened.
For Wikipedia users, anyone contributing to the site would need to undergo substantial identity verification so that the UK Government would know the ages of Wikipedia contributors.
This is yet another example of different government agencies working at cross purposes with each other, as the “catch the pornographers” bureaucrats battle with the “preserve privacy” advocates.
Meta, Wikipedia, and other firms would like the legislation to explicitly carve out exemptions for their firms and services. Opponents say that legislative carve outs aren’t necessary, because no one would ever want to regulate Wikipedia.
Yeah, and the U.S. Social Security Number isn’t an identification number either. (Not true.)
I’ll admit that I previously thought that age estimation was worthless, but I’ve since changed my mind about the necessity for it. Which is a good thing, because the U.S. National Institute of Standards and Technology (NIST) is about to add age estimation to its Face Recognition Vendor Test suite.
What is age estimation?
Before continuing, I should note that age estimation is not a way to identify people, but a way to classify people. For once, I’m stepping out of my preferred identity environment and looking at a classification question. Not “gender shades,” but “get off my lawn” (or my tricycle).
Age estimation uses facial features to estimate how old a person is, in the absence of any other information such as a birth certificate. According to a Yoti white paper that I’ll discuss in a minute, the Western world has two primary use cases for age estimation:
First, to estimate whether a person is over or under the age of 18 years. In many Western countries, the age of 18 is a significant age that grants many privileges. In my own state of California, you have to be 18 years old to vote, join the military without parental consent, marry (and legally have sex), get a tattoo, play the lottery, enter into binding contracts, sue or be sued, or take on a number of other responsibilities. Therefore, there is a pressing interest to know whether the person at the U.S. Army Recruiting Center, a tattoo parlor, or the lottery window is entitled to use the service.
Second, to estimate whether a person is over or under the age of 13 years. Although age 13 is not as great a milestone as age 18, this is usually the age at which social media companies allow people to open accounts. Thus the social media companies and other companies that cater to teens have a pressing interest to know the teen’s age.
Why was I against age estimation?
Because I felt it was better to know an age, rather than estimate it.
My opinion was obviously influenced by my professional background. When IDEMIA was formed in 2017, I became part of a company that produced government-issued driver’s licenses for the majority of states in the United States. (OK, MorphoTrak was previously contracted to produce driver’s licenses for North Carolina, but…that didn’t last.)
With a driver’s license, you know the age of the person and don’t have to estimate anything.
And estimation is not an exact science. Here’s what Yoti’s March 2023 white paper says about age estimation accuracy:
Our True Positive Rate (TPR) for 13-17 year olds being correctly estimated as under 25 is 99.93% and there is no discernible bias across gender or skin tone. The TPRs for female and male 13-17 year olds are 99.90% and 99.94% respectively. The TPRs for skin tone 1, 2 and 3 are 99.93%, 99.89% and 99.92% respectively. This gives regulators globally a very high level of confidence that children will not be able to access adult content.
Our TPR for 6-11 year olds being correctly estimated as under 13 is 98.35%. The TPRs for female and male 6-11 year olds are 98.00% and 98.71% respectively. The TPRs for skin tone 1, 2 and 3 are 97.88%, 99.24% and 98.18% respectively so there is no material bias in this age group either.
Yoti’s facial age estimation is performed by a ‘neural network’, trained to be able to estimate human age by analysing a person’s face. Our technology is accurate for 6 to 12 year olds, with a mean absolute error (MAE) of 1.3 years, and of 1.4 years for 13 to 17 year olds. These are the two age ranges regulators focus upon to ensure that under 13s and 18s do not have access to age restricted goods and services.
While this is admirable, is it precise enough to comply with government regulations? A mean absolute error of more than a year is cold comfort to a regulator. By the letter of the law, if you are 17 years and 364 days old and you try to vote, you are breaking the law.
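For reference, mean absolute error is just the average size of the misses across a set of n estimates:

$$\mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n}\left|\hat{a}_i - a_i\right|$$

where \(\hat{a}_i\) is the estimated age and \(a_i\) is the true age. An MAE of 1.3 years means the typical estimate misses the true age by roughly 16 months, in one direction or the other.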
Why did I change my mind?
Over the last couple of months I’ve thought about this a bit more and have experienced a Jim Bakker “I was wrong” moment.
How many 13 year olds do you know that have driver’s licenses? Probably none.
How many 13 year olds do you know that have government-issued REAL IDs? Probably very few.
How many 13 year olds do you know that have passports? Maybe a few more (especially after 9/11), but not that many.
Even at age 18, there is no guarantee that a person will have a government-issued REAL ID.
So how are 18 year olds, or 13 year olds, supposed to prove that they are old enough for services? Carry their birth certificate around?
You’ll note that Yoti didn’t target a use case for 21 year olds. This is partially because Yoti is a UK firm and therefore may not focus on the strict U.S. laws regarding alcohol, tobacco, and casino gambling. But it’s also because it’s much, much more likely that a 21 year old will have a government-issued ID, eliminating the need for age estimation.
Sometimes.
In some parts of the world, no one has government IDs
Over the past several years, I’ve analyzed a variety of identity firms. Earlier this year I took a look at Worldcoin. While Worldcoin’s World ID emphasizes privacy so much that it does not conclusively prove a person’s identity (it only proves a person’s uniqueness), and it makes no attempt to record the age of the person holding the World ID, Worldcoin does have something to say about government-issued IDs.
Online services often request proof of ID (usually a passport or driver’s license) to comply with Know your Customer (KYC) regulations. In theory, this could be used to deduplicate individuals globally, but it fails in practice for several reasons.
KYC services are simply not inclusive on a global scale; more than 50% of the global population does not have an ID that can be verified digitally.
IDs are issued by states and national governments, with no global system for verification or accountability. Many verification services (i.e. KYC providers) rely on data from credit bureaus that is accumulated over time, hence stale, without the means to verify its authenticity with the issuing authority (i.e. governments), as there are often no APIs available. Fake IDs, as well as real data to create them, are easily available on the black market. Additionally, due to their centralized nature, corruption at the level of the issuing and verification organizations cannot be eliminated.
Same source as above.
Now this (in my opinion) doesn’t make the case for Worldcoin, but it certainly casts some doubt on a universal way to document ages.
So we’d better start measuring the accuracy of age estimation.
If only there were an independent organization that could measure age estimation, in the same way that NIST measures the accuracy of fingerprint, face, and iris identification.
You know where this is going.
How will NIST test age estimation?
Yes, NIST is in the process of incorporating an age estimation test in its battery of Face Recognition Vendor Tests.
Facial age verification has recently been mandated in legislation in a number of jurisdictions. These laws are typically intended to protect minors from various harms by verifying that the individual is above a certain age. Less commonly some applications extend benefits to groups below a certain age. Further use-cases seek only to determine actual age. The mechanism for estimating age is usually not specified in legislation. Face analysis using software is one approach, and is attractive when a photograph is available or can be captured.
In 2014, NIST published NISTIR 7995 on the Performance of Automated Age Estimation. The report showed that, using a database with 6 million images, the most accurate age estimation algorithm estimated the age of the person in an image to within five years of their actual age 67% of the time, with a mean absolute error (MAE) of 4.3 years. Since then, more research has been dedicated to further improving the accuracy of facial age verification.
Note that this was in 2014. As we have seen above, Yoti asserts a dramatically lower error rate in 2023.
NIST is just ramping up the testing right now, but once it moves forward, it will be possible to compare age estimation accuracy of various algorithms, presumably in multiple scenarios.
Well, for those algorithm providers who choose to participate.
Does your firm need to promote its age estimation solution?
Does your company have an age estimation solution that is superior to all others?
Do you need an experienced identity professional to help you spread the word about your solution?