Federal Trade Commission Age Verification (and estimation?) Workshop January 28

A dizzying array of federal government agencies is interested in biometric verification and biometric classification, for example by age (either age verification or age estimation). As Biometric Update reported, we can add the Federal Trade Commission (FTC) to the list, thanks to an upcoming age verification workshop.

Rejecting age estimation in 2024

The FTC has a history with this, having rejected a proposed age estimation scheme in 2024.

“Re: Request from Entertainment Software Rating Board, Yoti Ltd., Yoti (USA) Inc., and Kids Web Services Ltd. for Commission Approval of Children’s Online Privacy Protection Rule Parental Consent Method (FTC Matter No. P235402)

“This letter is to inform you that the Federal Trade Commission has reviewed your group’s (“the ESRB group”) application for approval of a proposed verifiable parental consent (“VPC”) method under the Children’s Online Privacy Protection Rule (“COPPA” or “the Rule”). At this time, the Commission declines to approve the method, without prejudice to your refiling the application in the future….

“The ESRB group submitted a proposed VPC method for approval on June 2, 2023. The method involves the use of “Privacy-Protective Facial Age Estimation” technology, which analyzes the geometry of a user’s face to confirm that the user is an adult…. The Commission received 354 comments regarding the application. Commenters opposed to the application raised concerns about privacy protections, accuracy, and deepfakes. Those in support of the application wrote that the VPC method is similar to those approved previously and that it had sufficient privacy guardrails….

“The Commission is aware that Yoti submitted a facial age estimation model to the National Institute of Standards and Technology (“NIST”) in September 2023, and Yoti has stated that it anticipates that a report reflecting NIST’s evaluation of the model is forthcoming. The Commission expects that this report will materially assist the Commission, and the public, in better understanding age verification technologies and the ESRB group’s application.”

You can see the current age estimation results on NIST’s “Face Analysis Technology Evaluation (FATE) Age Estimation & Verification” page, not only for Yoti but for many other vendors, including my former employers IDEMIA and Incode.

But the FTC rejection was in 2024. Things may be different now.

(Image from Grok)

Revisiting age verification and age estimation in 2026?

The FTC has scheduled an in-person and online age verification workshop on January 28.

  • The in-person event will be at the Constitution Center at 400 7th St SW in Washington DC.
  • Details regarding online attendance will be published on the FTC’s event page in the coming weeks.

“The Age Verification Workshop will bring together a diverse group of stakeholders, including researchers, academics, industry representatives, consumer advocates, and government regulators, to discuss topics including: why age verification matters, age verification and estimation tools, navigating the regulatory contours of age verification, how to deploy age verification more widely, and interplay between age verification technologies and the Children’s Online Privacy Protection Act (COPPA Rule).”

Will the participants reconsider age estimation in light of recent test results?

You’re Fired, This Week’s Version

This week, well-known privacy advocate Alvaro Bedoya is not happy.

““The president just illegally fired me. This is corruption plain and simple,” Bedoya, who was appointed [to the Federal Trade Commission] in 2021 by President Joe Biden and confirmed in May 2022, posted on X. 

“He added, “The FTC is an independent agency founded 111 years ago to fight fraudsters and monopolists” but now “the president wants the FTC to be a lapdog for his golfing buddies.””

The other ousted FTC Commissioner, Rebecca Kelly Slaughter, had been appointed by…Donald Trump.

Nearly $3 Billion Lost to Imposter Scams in the U.S. in 2024

(Imposter scam wildebeest image from Imagen 3)

According to the Federal Trade Commission, fraud was reported at about the same rate in 2024 as in 2023, but a larger share of the people who reported it said they lost money.

In 2023, 27% of people who reported a fraud said they lost money, while in 2024, that figure jumped to 38%.

In a way this is odd, since you would think that we would be better at detecting fraud attempts by now. Apparently we aren’t. (I’ll say why in a minute.)

Imposter scams

The fraud category with the second-highest reported losses, after investment scams, was imposter scams.

The second highest reported loss amount came from imposter scams, with $2.95 billion reported lost. In 2024, consumers reported losing more money to scams where they paid with bank transfers or cryptocurrency than all other payment methods combined.

Deepfakes

I’ve spent…a long time in the business of determining who people are, and who people aren’t. While the FTC summary didn’t detail the methods of imposter scams, at least some of them may have used deepfakes to perpetrate the scam.

The FTC addressed deepfakes two years ago, speaking of

…technology that simulates human activity, such as software that creates deepfake videos and voice clones…. They can use deepfakes and voice clones to facilitate imposter scams, extortion, and financial fraud. And that’s very much a non-exhaustive list.

Creating deepfakes

And the need for advanced skills to create deepfakes has disappeared. ZDNET reported on a Consumer Reports study that analyzed six voice cloning software packages:

The results found that four of the six products — from ElevenLabs, Speechify, PlayHT, and Lovo — did not have the technical mechanisms necessary to prevent cloning someone’s voice without their knowledge or to limit the AI cloning to only the user’s voice. 

Instead, the protection was limited to a box users had to check off, confirming they had the legal right to clone the voice.

Which is just as effective as verifying someone’s identity by asking for their name and date of birth.

(Not) detecting deepfakes

And of course the identity/biometric vendor community is addressing deepfakes too. Research from iProov suggests one reason why 38% of the people who reported fraud to the FTC lost money:

[M]ost people can’t identify deepfakes – those incredibly realistic AI-generated videos and images often designed to impersonate people. The study tested 2,000 UK and US consumers, exposing them to a series of real and deepfake content. The results are alarming: only 0.1% of participants could accurately distinguish real from fake content across all stimuli which included images and video… in a study where participants were primed to look for deepfakes. In real-world scenarios, where people are less aware, the vulnerability to deepfakes is likely even higher.
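As a back-of-the-envelope illustration of why a clean-sweep rate can be that tiny (this is my assumed model, not iProov’s methodology): if each of n stimuli is judged independently with per-item accuracy p, the probability of getting all of them right is p^n.

```python
# Assumed model, not iProov's methodology: independent judgments,
# per-item accuracy p, across a battery of n stimuli.
p, n = 0.85, 20  # illustrative values; the study's actual parameters aren't stated here
print(f"P(all {n} correct) = {p**n:.4f}")  # ~0.0388, i.e. under 4%
```

Even a respectable per-item accuracy collapses over a full battery, which makes iProov’s 0.1% figure less surprising than it first sounds.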

So what’s the solution? Throw more technology at the problem? Multi-factor authentication (requiring the fraudster to deepfake multiple items)? Something else?
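On the multi-factor point, here’s a minimal sketch (my illustration, not any vendor’s product) of why requiring two independent factors raises the bar. The totp function follows the standard RFC 6238 algorithm; face_match_with_liveness is a hypothetical stand-in for a face matching service with presentation attack detection. A fraudster with a convincing deepfake attacks only the second check, not the first.

```python
# Minimal sketch of two-factor verification against deepfake-driven imposters.
# Assumption: face_match_with_liveness is a hypothetical stand-in for a real
# biometric service; the TOTP possession factor follows RFC 6238.
import hashlib
import hmac
import struct
import time


def totp(secret: bytes, step: int = 30, digits: int = 6) -> str:
    """Time-based one-time password (possession factor, RFC 6238)."""
    counter = int(time.time() // step)
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10**digits
    return str(code).zfill(digits)


def face_match_with_liveness(selfie: bytes) -> bool:
    """Hypothetical inherence factor: face match plus presentation attack
    detection. A good deepfake might defeat this check on its own."""
    return bool(selfie)  # placeholder; a real biometric service call goes here


def authenticate(selfie: bytes, submitted_code: str, secret: bytes) -> bool:
    """Both factors must pass, so a cloned face or voice alone is not enough."""
    possession_ok = hmac.compare_digest(submitted_code, totp(secret))
    inherence_ok = face_match_with_liveness(selfie)
    return possession_ok and inherence_ok


if __name__ == "__main__":
    secret = b"shared-enrollment-secret"  # provisioned at account setup
    print(authenticate(b"selfie-bytes", totp(secret), secret))  # True
```

The design point is independence: the deepfake attacks only the inherence factor, so the fraudster still needs the victim’s phone (or enrollment secret) to clear the possession check.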