Government Anti-Fraud Efforts: They’re Still Siloed

When the United States was attacked on September 11, 2001—an attack that caused NATO to invoke Article 5, but I digress—Congress and the President decided that the proper response was to reorganize the government and place homeland security efforts under a single Cabinet secretary. While we may question the practical wisdom of that move, the intent was to ensure that the U.S. Government mounted a coordinated response to that specific threat.

Today Americans face the threat of fraud. Granted, it isn’t as showy as burning buildings, but fraud clearly impacts many if not most of us. My financial identity has been compromised multiple times in the last several years, and yours probably has been too.

But don’t expect Congress and the President to create a single Department of Anti-Fraud any time soon.

Stop Identity Fraud and Identity Theft Bill

As Biometric Update reported, Congresspeople Bill Foster (D-IL) and Pete Sessions (R-TX) recently introduced H.R. 7270, “To establish a government-wide approach to stopping identity fraud and theft in the financial services industry, and for other purposes.”

Because this is government-wide and necessarily complex, the bill will be referred to at least THREE House Committees:

“Referred to the Committee on Oversight and Government Reform, and in addition to the Committees on Financial Services, and Energy and Commerce, for a period to be subsequently determined by the Speaker, in each case for consideration of such provisions as fall within the jurisdiction of the committee concerned.”

Why? As I type this, the bill text is not available at congress.gov, but Foster’s press release links to a preliminary (un-numbered) copy of the bill. Here are some excerpts:

“(9) The National Institute of Standards and Technology (NIST) was directed in the CHIPS and Science Act of 2022 to launch new work to develop a framework of common definitions and voluntary guidance for digital identity management systems, including identity and attribute validation services provided by Federal, State, and local governments, and work is underway at NIST to create this guidance. However, State and local agencies lack resources to implement this new guidance, and if this does not change, it will take decades to harden deficiencies in identity infrastructure.”

Even in the preamble, the bill mentions NIST (part of the U.S. Department of Commerce) and the individual states, after mentioning the U.S. Department of the Treasury (FinCEN) earlier in the text.

But let’s get to the meat of the bill:

“SEC. 3. IDENTITY FRAUD PREVENTION INNOVATION GRANTS.

(a) IN GENERAL.—The Secretary of the Treasury shall, not later than 1 year after the date of the enactment of this section, establish a grant program to provide identity fraud prevention innovation grants to States.”

The specifics:

  • The states can use the grants to develop mobile driver’s licenses “and other identity credentials.”
  • They can also use the grants to protect individuals from deepfake attacks.
  • Another purpose is to develop “interoperable solutions.”
  • A fourth is to replace vulnerable legacy systems.
  • The final uses are to make sure the federal government gets its money, because that’s the important thing to Congress.

But there are some limitations on how the funds may be spent.

  • They can’t be used to require mDLs or eliminate physical driver’s licenses.
  • They can’t be used to “support the issuance of drivers licenses or identity credentials to unauthorized immigrants.” (I could go off on a complete tangent here, but for now I’ll just say that this prevents a STATE from using these grant funds to issue such an identity credential.)

The bill is completely silent on REAL ID, so it does not mandate that everyone HAS to get a REAL ID.

And everything else

So although the bill claims to implement a government-wide solution, the only legislative changes to the federal government involve a single department, Treasury.

But Treasury (FinCEN plus the IRS) and the tangentially mentioned Commerce (NIST) aren’t the only Cabinet departments and independent agencies involved in anti-fraud efforts. Others include:

  • The Department of Justice, through the Federal Bureau of Investigation and the new Division for National Fraud Enforcement.
  • The Department of Homeland Security, through the Secret Service and every enforcement agency that checks identities at U.S. borders and other locations.
  • The Federal Trade Commission (FTC).
  • The Social Security Administration. Not that SSNs are a national ID…but they de facto are.
  • The U.S. Postal Inspection Service.
  • The Consumer Financial Protection Bureau.

These agencies are not ignored, but are funded under mandates separate from H.R. 7270. Or maybe not; there’s an effort to move Consumer Financial Protection Bureau work to the Department of Justice so that the CFPB can be shut down.

And that’s just one example of how anti-fraud efforts are siloed. Much of this is unavoidable in our governmental system (regardless of political parties), in which states and federal government agencies constantly war against each other.

  • What happens, for example, if the Secret Service decides that the states (funded by Treasury) or the FBI (part of Justice) are impeding its anti-fraud efforts?
  • Or if someone complains about NIST listing evil Commie Chinese facial recognition algorithms that COULD fight fraud?

Despite what Biometric Update and the Congresspeople say, we do NOT have a government-wide anti-fraud solution.

(And yes, I know that the Capitol is not north of the Washington Monument…yet.)

(Image from Google Gemini. Results may not be accurate.)

Federal Trade Commission Age Verification (and estimation?) Workshop January 28

A dizzying array of federal government agencies is interested in biometric verification and biometric classification, for example by age (either age verification or age estimation). As Biometric Update announced, we can add the Federal Trade Commission (FTC) to the list with an upcoming age verification workshop.

Rejecting age estimation in 2024

The FTC has a history with this, having rejected a proposed age estimation scheme in 2024.

“Re: Request from Entertainment Software Rating Board, Yoti Ltd., Yoti (USA) Inc., and Kids Web Services Ltd. for Commission Approval of Children’s Online Privacy Protection Rule Parental Consent Method (FTC Matter No. P235402)

“This letter is to inform you that the Federal Trade Commission has reviewed your group’s (“the ESRB group”) application for approval of a proposed verifiable parental consent (“VPC”) method under the Children’s Online Privacy Protection Rule (“COPPA” or “the Rule”). At this time, the Commission declines to approve the method, without prejudice to your refiling the application in the future….

“The ESRB group submitted a proposed VPC method for approval on June 2, 2023. The method involves the use of “Privacy-Protective Facial Age Estimation” technology, which analyzes the geometry of a user’s face to confirm that the user is an adult…. The Commission received 354 comments regarding the application. Commenters opposed to the application raised concerns about privacy protections, accuracy, and deepfakes. Those in support of the application wrote that the VPC method is similar to those approved previously and that it had sufficient privacy guardrails….

“The Commission is aware that Yoti submitted a facial age estimation model to the National Institute of Standards and Technology (“NIST”) in September 2023, and Yoti has stated that it anticipates that a report reflecting NIST’s evaluation of the model is forthcoming. The Commission expects that this report will materially assist the Commission, and the public, in better understanding age verification technologies and the ESRB group’s application.”

You can see the current NIST age estimation results on NIST’s “Face Analysis Technology Evaluation (FATE) Age Estimation & Verification” page, not only for Yoti, but for many other vendors including my former employers IDEMIA and Incode.

But the FTC rejection was in 2024. Things may be different now.

(Image from Grok.)

Revisiting age verification and age estimation in 2026?

The FTC has scheduled an in-person and online age verification workshop on January 28.

  • The in-person event will be at the Constitution Center at 400 7th St SW in Washington DC.
  • Details regarding online attendance will be published on the FTC’s event page in the coming weeks.

“The Age Verification Workshop will bring together a diverse group of stakeholders, including researchers, academics, industry representatives, consumer advocates, and government regulators, to discuss topics including: why age verification matters, age verification and estimation tools, navigating the regulatory contours of age verification, how to deploy age verification more widely, and interplay between age verification technologies and the Children’s Online Privacy Protection Act (COPPA Rule).”

Will the participants reconsider age estimation in light of recent test results?

You’re Fired, This Week’s Version

This week, well-known privacy advocate Alvaro Bedoya is not happy.

““The president just illegally fired me. This is corruption plain and simple,” Bedoya, who was appointed [to the Federal Trade Commission] in 2021 by President Joe Biden and confirmed in May 2022, posted on X. 

“He added, “The FTC is an independent agency founded 111 years ago to fight fraudsters and monopolists” but now “the president wants the FTC to be a lapdog for his golfing buddies.””

The other ousted FTC Commissioner, Rebecca Kelly Slaughter, had been appointed by…Donald Trump.

Nearly $3 Billion Lost to Imposter Scams in the U.S. in 2024

(Imposter scam wildebeest image from Imagen 3)

According to the Federal Trade Commission, fraud is being reported at the same rate, but more people are saying they are losing money from it.

In 2023, 27% of people who reported a fraud said they lost money, while in 2024, that figure jumped to 38%.

In a way this is odd, since you would think that we would be better at detecting fraud attempts by now. But I guess we aren’t. (I’ll say why in a minute.)
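For the numerically inclined, the jump from 27% to 38% works out like this (a quick sketch using only the two percentages quoted above; the FTC doesn’t publish a single “fraud detection” rate):

```python
# Share of fraud reports that included a monetary loss,
# per the FTC figures quoted above.
loss_share_2023 = 0.27
loss_share_2024 = 0.38

abs_change = loss_share_2024 - loss_share_2023  # 11 percentage points
rel_change = abs_change / loss_share_2023       # roughly a 41% relative increase

print(f"Absolute change: {abs_change:.0%} points")  # Absolute change: 11% points
print(f"Relative change: {rel_change:.0%}")         # Relative change: 41%
```

In other words, the share of fraud reporters who actually lost money grew by more than 40% in a single year.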

Imposter scams

After investment scams, the second-highest fraud category by reported losses was imposter scams.

The second highest reported loss amount came from imposter scams, with $2.95 billion reported lost. In 2024, consumers reported losing more money to scams where they paid with bank transfers or cryptocurrency than all other payment methods combined.

Deepfakes

I’ve spent…a long time in the business of determining who people are, and who people aren’t. While the FTC summary didn’t detail the methods of imposter scams, at least some of these may have used deepfakes to perpetrate the scam.

The FTC addressed deepfakes two years ago, speaking of

…technology that simulates human activity, such as software that creates deepfake videos and voice clones….They can use deepfakes and voice clones to facilitate imposter scams, extortion, and financial fraud. And that’s very much a non-exhaustive list.

Creating deepfakes

And the need for advanced skills to create deepfakes has disappeared. ZDNET reported on a Consumer Reports study that analyzed six voice cloning software packages:

The results found that four of the six products — from ElevenLabs, Speechify, PlayHT, and Lovo — did not have the technical mechanisms necessary to prevent cloning someone’s voice without their knowledge or to limit the AI cloning to only the user’s voice. 

Instead, the protection was limited to a box users had to check off, confirming they had the legal right to clone the voice.

Which is just as effective as verifying someone’s identity by asking for their name and date of birth.

(Not) detecting deepfakes

And of course the identity/biometric vendor community is addressing deepfakes also. Research from iProov indicates one reason why 38% of the FTC reporters lost money to fraud:

[M]ost people can’t identify deepfakes – those incredibly realistic AI-generated videos and images often designed to impersonate people. The study tested 2,000 UK and US consumers, exposing them to a series of real and deepfake content. The results are alarming: only 0.1% of participants could accurately distinguish real from fake content across all stimuli which included images and video… in a study where participants were primed to look for deepfakes. In real-world scenarios, where people are less aware, the vulnerability to deepfakes is likely even higher.
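To put that 0.1% figure in perspective, it’s worth doing the arithmetic on the study’s own numbers:

```python
# iProov study figures quoted above: 2,000 participants,
# of whom 0.1% correctly classified every real-vs-fake stimulus.
participants = 2000
perfect_rate = 0.001  # 0.1%

perfect_scorers = participants * perfect_rate
print(f"{perfect_scorers:.0f} of {participants} participants got everything right")
```

Just two people out of two thousand, and those two were WARNED that deepfakes were coming.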

So what’s the solution? Throw more technology at the problem? Multi-factor authentication (requiring the fraudster to deepfake multiple items)? Injection attack detection? Something else?
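The multi-factor idea can be sketched abstractly: an authentication attempt succeeds only if several independent checks ALL pass, so a fraudster has to defeat every factor rather than spoof a single face or voice sample. A minimal illustration in Python — the factor names, thresholds, and evidence fields below are hypothetical, not any vendor’s actual API:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Factor:
    """One independent authentication factor and its pass/fail check."""
    name: str
    check: Callable[[dict], bool]

def authenticate(evidence: dict, factors: list) -> bool:
    # ALL factors must pass: spoofing one face or voice sample
    # is no longer enough to get through.
    return all(f.check(evidence) for f in factors)

# Hypothetical factors for illustration only.
factors = [
    Factor("knowledge",  lambda e: e.get("password") == e.get("expected_password")),
    Factor("possession", lambda e: e.get("otp") == e.get("expected_otp")),
    Factor("inherence",  lambda e: e.get("face_match_score", 0.0) >= 0.9),
]

evidence = {
    "password": "hunter2", "expected_password": "hunter2",
    "otp": "483920", "expected_otp": "483920",
    "face_match_score": 0.95,  # e.g., from a face-matching service
}
print(authenticate(evidence, factors))  # True only if all three checks pass
```

The point isn’t this toy code, of course; it’s that each additional independent factor multiplies the fraudster’s work, deepfakes or no deepfakes.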