Federal Trade Commission Age Verification (and estimation?) Workshop January 28

A dizzying array of federal government agencies is interested in biometric verification and biometric classification, for example by age (either age verification or age estimation). As Biometric Update announced, we can add the Federal Trade Commission (FTC) to the list with an upcoming age verification workshop.

Rejecting age estimation in 2024

The FTC has a history with this, having rejected a proposed age estimation scheme in 2024.

“Re: Request from Entertainment Software Rating Board, Yoti Ltd., Yoti (USA) Inc., and Kids Web Services Ltd. for Commission Approval of Children’s Online Privacy Protection Rule Parental Consent Method (FTC Matter No. P235402)

“This letter is to inform you that the Federal Trade Commission has reviewed your group’s (“the ESRB group”) application for approval of a proposed verifiable parental consent (“VPC”) method under the Children’s Online Privacy Protection Rule (“COPPA” or “the Rule”). At this time, the Commission declines to approve the method, without prejudice to your refiling the application in the future….

“The ESRB group submitted a proposed VPC method for approval on June 2, 2023. The method involves the use of “Privacy-Protective Facial Age Estimation” technology, which analyzes the geometry of a user’s face to confirm that the user is an adult….The Commission received 354 comments regarding the application. Commenters opposed to the application raised concerns about privacy protections, accuracy, and deepfakes. Those in support of the application wrote that the VPC method is similar to those approved previously and that it had sufficient privacy guardrails….

“The Commission is aware that Yoti submitted a facial age estimation model to the National Institute of Standards and Technology (“NIST”) in September 2023, and Yoti has stated that it anticipates that a report reflecting NIST’s evaluation of the model is forthcoming. The Commission expects that this report will materially assist the Commission, and the public, in better understanding age verification technologies and the ESRB group’s application.”

You can see the current NIST age estimation results on NIST’s “Face Analysis Technology Evaluation (FATE) Age Estimation & Verification” page, not only for Yoti, but for many other vendors including my former employers IDEMIA and Incode.

But the FTC rejection was in 2024. Things may be different now.

Grok.

Revisiting age verification and age estimation in 2026?

The FTC has scheduled an in-person and online age verification workshop on January 28.

  • The in-person event will be at the Constitution Center at 400 7th St SW in Washington DC.
  • Details regarding online attendance will be published on this page in the coming weeks.

“The Age Verification Workshop will bring together a diverse group of stakeholders, including researchers, academics, industry representatives, consumer advocates, and government regulators, to discuss topics including: why age verification matters, age verification and estimation tools, navigating the regulatory contours of age verification, how to deploy age verification more widely, and interplay between age verification technologies and the Children’s Online Privacy Protection Act (COPPA Rule).”

Will the participants reconsider age estimation in light of recent test results?

We Know All About You, Music Lover

This is the week that we celebrate how much companies in Sweden and elsewhere know about us.

Including estimated ages.

Which may or may not (I’m not telling) be as accurate as software that analyzes your face for age estimation.

And the companies gathering the data can then sell it to advertisers and others who use it in all sorts of ways.

It will be interesting to see the corporate messaging that I and other Spotify users will receive over the next few days.

“If you listen to Depeche Mode, perhaps our Medicare plans may interest you.”

If Only Job Applicant Deepfake Detection Were This Easy

In reality, job applicant deepfake detection is (so far) unable to determine who the fraudster really is, but it can determine who the fraudster is NOT.

Something to remember when hiring people for sensitive positions. You don’t want to unknowingly hire a North Korean spy.

Face Product Marketing Expert (27 posts)

To ensure that my social media followers don’t have all the fun with my “biometric product marketing expert” shares, here are links to some Bredemarket blog posts on facial recognition (identification) and facial analysis (classification).

Facial recognition:

Facial analysis:

Using Grok For Evil: Deepfake Celebrity Endorsement

Using Grok for evil: a deepfake celebrity endorsement of Bredemarket?

Although in the video the fake Taylor Swift ends up looking a little like a fake Drew Barrymore.

Needless to say, I’m taking great care to fully disclose that this is a deepfake.

But some people don’t.

Why is Morph Detection Important?

We’re all familiar with the morphing of faces from subject 1 to subject 2, in which there is an intermediate subject 1.5 that combines the features of both of them. But did you know that this simple trick can form the basis for fraudulent activity?

Back in the 20th century, morphing was primarily used for entertainment purposes. Nothing that would make you cry, even though there were shades of gray in the black or white representations of the morphed people.

  • Godley and Creme, “Cry.”
  • Michael Jackson, “Black or White.” (The full version with the grabbing.) The morphing begins about 5 1/2 minutes into the video.

But Godley, Creme, and Jackson weren’t trying to commit fraud. As I’ve previously noted, a morphed picture can be used for fraudulent activity. Let me illustrate this with a visual example. Take a look at the guy below.

From NISTIR 8584.

Does this guy look familiar to you? Some of you may think he kinda sorta looks like one person, while others may think he kinda sorta looks like a different person.

The truth is, the person above does not exist. This is actually a face morph of two different people.

From NISTIR 8584.

Now imagine a scenario in which a security camera monitors the entrance to the Bush ranch in Crawford, Texas. But instead of having Bush’s facial image in the database, someone has tampered with the database and inserted the “Obushama” image instead…and that image is similar enough to Barack Obama to allow Obama to fraudulently enter Bush’s ranch.

Or alternatively, the “Obushama” image is used to create a new synthetic identity, unconnected to either of the two.
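To see why this matters technically, here is a minimal sketch (in Python, with entirely invented similarity scores, threshold, and function names, since no real matcher is referenced here): the defining property of a usable morph is that one enrolled image clears the match threshold against both contributing subjects.

```python
# Invented illustration of the morph threat. The scores and the threshold
# are made up, and the score lookup stands in for a real face matcher.
THRESHOLD = 0.60

# Hypothetical similarity scores of two live probes against each enrolled image.
scores_against_enrolled = {
    "genuine_photo_of_bush": {"bush_probe": 0.92, "obama_probe": 0.31},
    "obushama_morph": {"bush_probe": 0.71, "obama_probe": 0.68},
}

def who_passes(enrolled: str) -> list[str]:
    """Return every probe the system would accept against `enrolled`."""
    return [
        probe
        for probe, score in scores_against_enrolled[enrolled].items()
        if score >= THRESHOLD
    ]

print(who_passes("genuine_photo_of_bush"))  # only the genuine subject passes
print(who_passes("obushama_morph"))         # BOTH subjects pass
```

A morph detector’s job is to flag the enrolled image itself as suspect before this double match can ever happen.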

But what if you could detect that a particular facial image is not a true image of a person, but some type of morph attempt? NIST has a report on this:

“To address this issue, the National Institute of Standards and Technology (NIST) has released guidelines that can help organizations deploy and use modern detection methods designed to catch morph attacks before they succeed.”

The report, “NIST Interagency Report NISTIR 8584, Face Analysis Technology Evaluation (FATE) MORPH Part 4B: Considerations for Implementing Morph Detection in Operations,” is available in PDF form at https://doi.org/10.6028/NIST.IR.8584.

And a personal aside to anyone who worked for Safran in the early 2010s: we’re talking about MORPH detection, not MORPHO detection. I kept on mistyping the name as I wrote this.

Age Assurance Moves to Fast Food at a Chick-fil-A in Kettering, Ohio

(Imagen 4)

How old are you? The question that’s been asked at bars, pornography sites, and social media sites is now being asked at…a fast food restaurant in Kettering, Ohio.

I’ve talked about age assurance, age verification, and age estimation in a variety of use cases, including:

  • alcohol
  • tobacco
  • firearms
  • cannabis
  • driver’s licenses
  • gambling
  • “mature” adult content
  • car rentals
  • social media access

But what about fast food?

Anti-teen dining policies are nothing new, but this particular one is getting national attention.

The Kettering Chick-fil-A Teen Chaperone Policy

The Chick-fil-A in Kettering, Ohio (which apparently is a franchise and not company owned) posted the following last week:

“With school starting, we wanted to make sure that everyone is aware of our Teen Chaperone Policy. We are grateful for your support and want to make sure Chick-fil-A Kettering is a safe and enjoyable place for everyone! Thank you so much!”

From the Chick-fil-A Kettering Facebook page. (LINK)

Chick-fil-A Kettering Teen Chaperone Policy

To ensure a safe and respectful environment for all guests:

Guests 17 and under must be accompanied by a parent, guardian, or adult chaperone (age 21+) to dine in.

Unaccompanied minors may be asked to leave.

Thank you for helping us keep Chick-fil-A family-friendly!

Chick-fil-A Kettering

    For the moment let’s acknowledge that the Chick-fil-A worker (who may or may not be 17 years old themselves) tasked with enforcing the rule will probably just eyeball the person and decide if they’re old enough.

    And let’s also ignore the business ramifications of this franchise’s actions, not only for the franchise location itself, but for all Chick-fil-A restaurants, including those who welcome people of all ages at all times.

    Brick-and-mortar, underage

    But there are some ramifications I want to address now.

    This is definitely a brand new use case unlike the others, both because

    • it affects a brick-and-mortar establishment (not a virtual one), and
    • it affects people under the age of 18 whose ages are difficult to authenticate.

    The last point is a big one I’ve addressed before. People under the age of 18 may not have a driver’s license or any valid government ID that proves their age. And if I’m a kid and walking to the Chick-fil-A, I’m not taking my passport with me.

    In a way that’s precisely the point, and the lack of a government ID may be enough to keep the kids out…except that people over the age of 18 may not have a driver’s license either, and thus may be thrown out unjustly.

    Enforcing a business-only rule without government backing

    In addition, unlike alcohol or cannabis laws, there are very few laws that can be used to enforce this. Yes, there are curfew laws at night, and laws that affect kids during school hours, but this franchise’s regulation affects the establishment 24 hours a day (Sundays excluded, of course).

    So Chick-fil-A Kettering is on its own regarding the enforcement of its new rule.

    Unless Kettering modifies its municipal code to put the rule of law behind this rule and force ALL fast food establishments to enforce it.

    And then what’s next? Enforcement at the Kettering equivalent of James Games?

    The Monk Skin Tone Scale

    (Part of the biometric product marketing expert series)

    Now that I’ve dispensed with the first paragraph of Google’s page on the Monk Skin Tone Scale, let’s look at the meat of the page.

    I believe we all agree on the problem: the need to measure the accuracy of facial analysis and facial recognition algorithms for different populations. For purposes of this post we will concentrate on a proxy for race, a person’s skin tone.

    Why skin tone? Because we have hypothesized (I believe correctly) that the performance of facial algorithms is influenced by the skin tone of the person, not by whether or not they are Asian or Latino or whatever. Don’t forget that the designated races have a variety of skin tones within them.

    But how many skin tones should one use?

    40 point makeup skin tone scale

    The beauty industry has identified over 40 different skin tones for makeup, but this granular of an approach would overwhelm a machine learning evaluation:

    [L]arger scales like these can be challenging for ML use cases, because of the difficulty of applying that many tones consistently across a wide variety of content, while maintaining statistical significance in evaluations. For example, it can become difficult for human annotators to differentiate subtle variation in skin tone in images captured in poor lighting conditions.

    6 point Fitzpatrick skin tone scale

    The first attempt at categorizing skin tones was the Fitzpatrick system.

    To date, the de-facto tech industry standard for categorizing skin tone has been the 6-point Fitzpatrick Scale. Developed in 1975 by Harvard dermatologist Thomas Fitzpatrick, the Fitzpatrick Scale was originally designed to assess UV sensitivity of different skin types for dermatological purposes.

    However, using this skin tone scale led to….(drumroll)…bias.

    [T]he scale skews towards lighter tones, which tend to be more UV-sensitive. While this scale may work for dermatological use cases, relying on the Fitzpatrick Scale for ML development has resulted in unintended bias that excludes darker tones.

    10 point Monk Skin Tone (MST) Scale

    Enter Dr. Ellis Monk, whose biography could be ripped from today’s headlines.

    Dr. Ellis Monk—an Associate Professor of Sociology at Harvard University whose research focuses on social inequalities with respect to race and ethnicity—set out to address these biases.

    If you’re still reading this and haven’t collapsed in a rage of fury, here’s what Dr. Monk did.

    Dr. Monk’s research resulted in the Monk Skin Tone (MST) Scale—a more inclusive 10-tone scale explicitly designed to represent a broader range of communities. The MST Scale is used by the National Institutes of Health (NIH) and the University of Chicago’s National Opinion Research Center, and is now available to the entire ML community.

    From https://skintone.google/the-scale.

    Where is the MST Scale used?

    According to Biometric Update, iBeta has developed a demographic bias test based upon ISO/IEC 19795-10, which itself incorporates the Monk Skin Tone Scale.

    At least for now. Biometric Update notes that other skin tone measurements are under development, including the “Colorimetric Skin Tone (CST)” and INESC TEC/Fraunhofer Institute research that uses “ethnicity labels as a continuous variable instead of a discrete value.”

    But will there be enough data for variable 8.675309?

    What “Gender Shades” Was Not

    Mr. Owl, how many licks does it take to get to the Tootsie Roll center of a Tootsie Pop?

    A good question. Let’s find out. One, two, three…(bites) three.

    From YouTube.

    If you think Mr. Owl’s conclusion was flawed, let’s look at Google.

    One, two, three…three

    I was researching the Monk Skin Tone Scale for a future Bredemarket blog post, but before I share that post I have to respond to an inaccurate statement from Google.

    Google began its page “Developing the Monk Skin Tone Scale” with the following statement:

    In 2018, the pioneering Gender Shades study demonstrated that commercial, facial-analysis APIs perform substantially worse on images of people of color and women.

    Um…no it didn’t.

    I will give Google props for using the phrase “facial-analysis,” which clarifies that Gender Shades was an exercise in categorization, not individualization.

    But to say that Gender Shades “demonstrated that commercial, facial-analysis APIs perform substantially worse” in certain situations is an ever-so-slight exaggeration.

    Kind of like saying that a bad experience at a Mexican restaurant in Lusk, Wyoming demonstrates that all Mexican restaurants are bad.

    How? I’ve said this before:

    The Gender Shades study evaluated only three algorithms: one from IBM, one from Microsoft, and one from Face++. It did not evaluate the hundreds of other facial recognition algorithms that existed in 2018 when the study was released.

    So to conclude that all facial classification algorithms perform substantially worse cannot be supported…because in 2018 the other algorithms weren’t tested.

    One, two, three…one hundred and eighty nine

    In 2019, NIST tested 189 software algorithms from 99 developers for demographic bias, and has continued to test for demographic bias since.

    In these tests, vendors volunteer to have NIST test their algorithms for demographic bias.

    Guess which three vendors have NOT submitted their algorithms to NIST for testing?

    You guessed it: IBM, Microsoft, and Face++.

    Anyway, more on the Monk Skin Tone Scale here, but I had to share this.

    Age Estimation is Challenging

    (Part of the biometric product marketing expert series)

    Two Biometric Update stories that were published on March 27, 2025 reminded me of something I wrote before.

    One involved Paravision.

    An announcement from Paravision says its biometric age estimation technology has achieved Level 3 certification from the Age Check Certification Scheme (ACCS), the leading independent certification body for age estimation. The results make it one of only six companies globally to receive ACCS’s highest-level designation for compliance.

    San Francisco-based Paravision’s age estimation tech posted 100 percent precision in Challenge 25 compliance, with 0 subjects falsely identified as over 25 years old. It also scored a 0 percent Failure to Acquire Rate, meaning that every image submitted for analysis returned a result. Mean Absolute Error (MAE) was 1.37 years, with Standard Deviation of 1.17.
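    As an aside on the metrics: Mean Absolute Error is simply the average of |true age − estimated age| across the test subjects. A toy illustration in Python, with invented sample ages (not Paravision’s or ACCS’s data):

```python
# Toy illustration of Mean Absolute Error (MAE) for age estimation.
# The (true age, estimated age) pairs below are invented for illustration.
import statistics

pairs = [(21, 22.4), (35, 33.9), (17, 19.1), (50, 50.6)]
errors = [abs(true - est) for true, est in pairs]

mae = statistics.mean(errors)   # average absolute error, in years
sd = statistics.stdev(errors)   # spread of the individual errors
print(f"MAE = {mae:.2f} years, SD = {sd:.2f} years")
```

A lower MAE means the estimates cluster closer to the true ages; the standard deviation tells you how consistent the errors are from subject to subject.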

    Now this is an impressive achievement, and Paravision is a quality company, and Joey Pritikin is a quality biometric executive, but…well, let me share the other story first, involving a Yoti customer (not Yoti).

    Fenix responded that it set a challenge threshold at 23 years of age. Any user estimated to be that age or younger based on their face biometrics is required to use a secondary method for age verification.

    Fenix had set OnlyFans challenge age, it turns out, at 20 years old. A correction to 23 years old was carried out on January 16, and then Fenix changed it again three days later, to 21 years old, Ofcom says.

    Now Biometric Update was very clear that “Yoti provides the tech, but does not set the threshold.”

    Challenge ages and legal ages

    But do challenge thresholds have any meaning? I addressed that issue back in May 2024.

    Many of the tests used a “Challenge-T” policy, such as “Challenge 25.” In other words, the test doesn’t estimate whether a person IS a particular age, but whether a person is WELL ABOVE a particular age….

    So if you have to be 21 to access a good or service, the algorithm doesn’t estimate if you are over 21. Instead, it estimates whether you are over 25. If the algorithm thinks you’re over 25, you’re good to go. If it thinks you’re 24, pull out your ID card.

    And if you want to be more accurate, raise the challenge age from 25 to 28.

    NIST admits that this procedure results in a “tradeoff between protecting young people and inconveniencing older subjects” (where “older” is someone who is above the legal age but below the challenge age).

    You may be asking why the algorithms have to set a challenge age above the lawful age, thus inconveniencing people above the lawful age but below the challenge age.

    The reason is simple.

    Age estimation is not all that accurate.

    I mean, it’s accurate enough if I (a person well above the age of 21 years) must indicate whether I’m old enough to drink, but it’s not sufficiently accurate for a drinker on their 21st birthday (in the U.S.), or a 13 year old getting their first social media account (where lawful).
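    The Challenge-T policy described above is easy to sketch. Here is a minimal hypothetical gate (the function name and return values are my own, not any vendor’s API), with a challenge age of 25 guarding a legal age of 21:

```python
# Hypothetical "Challenge-T" age gate. The legal age (e.g. 21) never
# appears in the check: the estimator is only trusted to say that someone
# is WELL ABOVE the legal age, i.e. at or above the challenge age.
def challenge_gate(estimated_age: float, challenge_age: int = 25) -> str:
    """Decide how to handle one access attempt from an age estimate."""
    if estimated_age >= challenge_age:
        return "pass"        # estimated well above the legal age: no ID needed
    return "request_id"      # fall back to document-based age verification

# A 24-year-old is inconvenienced even though they are over the legal age.
print(challenge_gate(24))
print(challenge_gate(27))
# Raising the challenge age to 28 inconveniences even more legal adults.
print(challenge_gate(27, challenge_age=28))
```

Raising the challenge age reduces the chance of letting a minor through, at the cost of sending more legal adults to a document check—exactly the NIST tradeoff quoted above.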

    Not an official document.

    If you have a government issued ID, age verification based upon that ID is a much better (albeit less convenient) solution.

    (Kid computer picture by Adrian Pingstone – Transferred from en.wikipedia, Public Domain, https://commons.wikimedia.org/w/index.php?curid=112727.)

    (Fake driver license picture from https://www.etsy.com/listing/1511398513/editable-little-drivers-license.)