Data Labelers Gonna Label, and Class Action Lawyers Gonna Lawyer

On Wednesday, I described how Meta’s Kenyan data labelers ended up watching explicit videos from people who presumably didn’t know that smart glasses were recording their activity.

To no one’s surprise, class action lawyers are now involved.

“In the newly filed complaint, plaintiffs Gina Bartone of New Jersey and Mateo Canu of California, represented by the public interest-focused Clarkson Law Firm, allege that Meta violated privacy laws and engaged in false advertising.

“The complaint alleges that the Meta AI smart glasses are advertised using promises like “designed for privacy, controlled by you,” and “built for your privacy,” which might not lead customers to assume their glasses’ footage, including intimate moments, was being watched by overseas workers. The plaintiffs believed Meta’s marketing and said they saw no disclaimer or information that contradicted the advertised privacy protections.”

So what does Meta say?

“Clear, easy device and app settings help you manage your information, giving you control over what content you choose to share with others, and when.”

Except that according to Clarkson, people can’t opt out of the data labeling process.

This could get very revealing.

“We Use AI.” And We Use YOUR (Non-copyrighted) AI.

A private social media comment got me thinking. I will gladly credit the author, with their permission.

“If a U.S. federal court says that you can’t copyright AI generated content, an appellate court upholds that ruling, and the SCOTUS refuses to hear the case, what are the implications for software generated by LLMs?”

Think about that the next time Company X publishes its marketing message “we use AI.”

What if Company X’s code and prompts were themselves written with AI?

Couldn’t Company Y take Company X’s non-copyrightable code and run it without penalty, like open source code?

Now Company X would be forced to prove that it does NOT use AI. For its code, anyway.

Fingerprint Evidence in Court

For…a long time I’ve been talking about whether fingerprint evidence is accepted in court. But until now I never had access to an easy-to-use database of court cases.

Mike Bowers shared a release from the Wilson Center for Science and Justice at Duke Law, “New Database Documents a Century of Court Decisions on Forensic Expert Evidence Testimony.”

The Fingerprint Expert Evidence database can be accessed at https://forensic-case-databases.law.duke.edu/data/fingerprints/.

Here’s an example of the case details for the (currently) most recent record:

Case: Commonwealth v. Honsch, 22 N.E.3d 287 (Mass. 2024)

Year: 2024

Jurisdiction: Massachusetts

Type of Proceeding: Appellate

Other fields

Court: Supreme Judicial Court of Massachusetts, Hampden

Expert Evidence Ruling Reversing or Affirming on Appeal: Admitted

Ruling: Correct to admit

Type of Evidence: Fingerprint

Defense or Prosecution Expert: Prosecution

Summary of Reasons for Ruling

The Commonwealth here presented two latent print analysts as experts. One, Dolan, testified multiple times that it was his “scientific opinion” that there were three latent prints that were “identified to” the palms of the defendant. The term “scientific” to describe his opinion “arguably verged on suggesting that the ACE-V process is more scientific than warranted,” and there was one instance in which Dolan testified without using the term “opinion.” The court concluded that there was no error because, “viewed as a whole,” his testimony was largely expressed in terms of an “opinion” and his testimony did not claim that the ACE-V process was infallible or absolutely certain.

On the other hand, Pivovar testified that she (i) “identified [a palm print from one of the garbage bags and the print of the defendant’s left palm] as originating from the same source”; (ii) “identif[ied] [another latent print] and the right palm print of [the defendant] as being the same, they originated from the same source”; and (iii) “identif[ied] the [third latent print] as originating from the same source as the right palm of [the defendant] that [she] compared it to.” Pivovar did not frame her testimony in terms of an “opinion” and expressed the identification of the defendant with certainty. This was error. However, the court concluded that Pivovar’s testimony did not likely influence the jury’s conclusion. Defense counsel countered the notion that individualization under the ACE-V methodology is infallible by cross-examining Pivovar on the subjectivity of latent print analysis, the fact that two prints are never identical, and a recent incident in which the Federal Bureau of Investigation erroneously identified a suspect based on an incorrect latent print analysis. The defendant also presented an expert detailing the risks of cognitive bias in latent print analysis. Additionally, the Commonwealth’s other latent print examiner, Dolan, testified as to the same findings as Pivovar. If Pivovar’s testimony had been properly framed as an opinion, there still would have been strong evidence that the prints found at Elizabeth’s crime scene originated from the defendant. Thus, even though we determine that Pivovar’s testimony was erroneously presented as fact, the error did not create a substantial likelihood of a miscarriage of justice.

Admissibility Standard: Lanigan-Daubert

Lower Court Hearing: N

Discussion of 2009 NAS Report: Y

Discussion of Error Rates or Reliability: N

Frye Ruling: N

Limiting Testimony Ruling: N

Language Imposed by Court to Limit Testimony: N

Ruling Based in Prior Precedent: Y

Daubert Factors: N

Ruling on Qualifications of Expert: N

Ruling on 702(a): N

Ruling on 702(b): N

Ruling on 702(c): N

Ruling on 702(d): N

Notes: —

Good resource to keep in mind.

Digital Identity: Endorsed, Or Bestowed?

Joel R. McConvey’s recent article in Biometric Update made my head spin.

“Utah’s state legislature has voted unanimously to pass SB 275, the State-Endorsed Digital Identity Program Amendments bill. The law makes Utah unique among states, in that it defines identity as something that is inherent to a person and endorsed by the state rather than bestowed by the state.

“The distinction has implications for discussions about data sovereignty – who gets to control a person’s personal information – as well as for other states pursuing digital identity programs.”

Endorsed? Bestowed? What’s up? An earlier McConvey article quotes from Utah’s Chief Privacy Officer Christopher Bramwell:

“Part of Utah’s history,” Bramwell says – “why we care so much about privacy, and this does translate directly to digital identity – is when pioneers came to Utah, it was literally for autonomy, and it was to be left alone to live their life according to the dictates of their heart. That’s why many people came to America, whether as pilgrims or pioneers or immigrants: because you want something better and you want to do it according to your conscience.”

For those whose history is rusty, Bramwell is referring to the migration of the Mormons out west. As he points out, the Mormons are not the only ones in U.S. history who came to a new land to enjoy freedom from the perceived oppressive state. The original inhabitants of Massachusetts, Rhode Island, Maryland, and Pennsylvania also fall within this tradition.

Bramwell continues:

““And that’s a lot of what we’re talking about with digital ID. You need to engage in the free market, but do it according to your choice without being tracked, without being surveilled, without undue influence on how you’re operating. So you can live your life in the digital realm according to the dictates of your heart and how you and your family see fit.”

“Our approach is to separate identity from any privileges or licenses that are given by government. Identity should be separate, so that it is not something that there’s any reason to ever take away.”

But this is not just a religious issue, as the American Civil Liberties Union points out.

“The philosophical underpinning of the state’s SEDI concept is that “identity” is not something bestowed by the state, but that inherently belongs to the individual; the state merely “endorses” a person’s ID.”

Of the six major underpinnings of SEDI, the third is of interest here:

“Individual control,” in which the state throws its weight behind a movement known as “user-centric” or “self-sovereign” identity, that strives to ensure that government identification systems are used to empower individuals, not to control them.

So what does self-sovereign, endorsed identity mean from a legal standpoint? Let’s look at the opening section of the most recent bill, Utah’s SB 275:

63A-20-101. Digital identity bill of rights.

The following rights constitute the digital identity bill of rights in this state:

(1) An individual possesses an individual identity innate to the individual’s existence and independent of the state, which identity is fundamental and inalienable.

(2) An individual has a right to the management and control of the individual’s digital identity to protect individual privacy.

(3) An individual has a right to choose, receive, and use a physical form of identity assertion that is endorsed by the state.

(4) An individual has a right to not be compelled by the state to possess, use, or rely upon a digital form of identity assertion in place of a physical form of identity assertion that is endorsed by the state.

(5) An individual has a right to state endorsement of the individual’s digital identity upon meeting objective, uniform standards for eligibility and verification established by law, and a right to not have such endorsement arbitrarily or discriminatorily withheld or revoked.

(6) An individual has a right to have the state’s operation of digital identity systems governed by clear standards established by the Legislature, including for eligibility, issuance, endorsement, acceptance, revocation, or interoperability of digital identity assertions.

(7) An individual has a right to transparency in the design and operation of a state digital identity, including the right to access, read, and review the standards and technical specifications upon which the state digital identity is built and operates.

(8) An individual has the right to choose what identity attributes are disclosed by the individual’s state digital identity in accordance with standards established by the Legislature.

(9) An individual has the right to any service or benefit to which the individual is otherwise lawfully entitled based on the individual’s choice of a lawful format or means of identity assertion without denial, diminishment, or condition.

(10) An individual has a right to be free from surveillance, profiling, tracking, or persistent monitoring of the individual’s assertions of digital identity by the state, except as authorized by law.

(11) An individual has a right to not be required by the state to surrender the individual’s device in order to present the individual’s digital identity.

Of course, once you leave the state of Utah and reside in another state, that state will BESTOW an identity upon you.

And while this controls what the state of Utah can do, it does not apply to a FEDERAL digital identity, such as a future digital U.S. passport.

If the City Fails, Try the County (Milwaukee and Biometrica)

The facial recognition brouhaha in southeastern Wisconsin has taken an interesting turn.

According to Urban Milwaukee, the Milwaukee County Sheriff’s Office is pursuing an agreement with Biometrica for facial recognition services.

The, um, benefit? No cost to the county.

“However, the contract would not need to be approved by the Milwaukee County Board of Supervisors, because there would be no cost to the county associated with the contract. Biometrica offers its services to law enforcement agencies in exchange for millions of mugshots.”

Sound familiar? Chris Burt thinks so.

“Milwaukee Police Department has also attempted to contract Biometrica’s services, prompting pushback, at least some of which reflected confusion about how the system works….

“The mooted agreement between Biometrica and MPD would have added 2.5 million images to the database.

“In theory, if MCSO signs a contract with Biometrica, it could perform facial recognition searches at the request of MPD.”

See Bredemarket’s previous posts on the city efforts that are now on hold, and on the county efforts as well.

No guarantee that the County will approve what the City didn’t. And considering the bad press from the City’s efforts, including using software BEFORE adopting a policy on its use, it’s going to be an uphill struggle.

CCAASS™

“Commercials, Concerts, And a Sports Show”™ is a trademark of Bredemarket. CCAASS may be freely used by any entity to refer to the sporting event taking place in Santa Clara, California on Sunday, February 8, 2026. This saves you from having to refer to The Big Game or The Bowl That Will Not Be Named. See FindLaw for the legalities: https://www.findlaw.com/legalblogs/small-business/legal-to-use-super-bowl-in-ads-for-your-biz/

So for those of us not on Kalshi or other futures or betting markets, who will win the CCAASS? (The sporting part, not the commercial competition.)

As a Commanders fan, I have no wildebeest in the hunt.

Bredemarket has no current clients in the states of Massachusetts or Washington.

There are former IDEMIA employees in both states.

Ex Incode employee (and ex employee of a former Bredemarket client) Gene Volfe lives in an NFC West city, but the team in that city is a bitter rival of the Seahawks.

With no clear preference, I lean toward the NFC rather than the AFC in the CCAASS.

Go Saltwater Birds!

Fact: Cities Must Disclose Responsible Uses of Biometric Data

“Fact: Cities must disclose responsible uses of biometric data” is a parody of the title of my May 2025 guest post for Biometric Update, “Opinion: Vendors must disclose responsible uses of biometric data.”

From Biometric Update.

But I could have chosen another title: “Fact: lack of deadlines sinks behavior.” That’s a mixture of two quotes from Tracy “Trace” Wilkins and Chris Burt, as we will see.

Whether Vanilla Ice and Gordon Lightfoot would agree with the sentiment is not known.

But back to my Biometric Update guest post (expect my next appearance in Biometric Update in 2035).

That guest post touched on Milwaukee, Wisconsin, but had nothing to do with ICE.

Vanilla Ice.

One of the “responsible uses” questions was one that Biometric Update had raised in the previous month: whether it was proper for the Milwaukee Police Department (MPD) to share information with facial recognition vendor Biometrica.

Milwaukee needed a policy

But the conversation subsequently redirected to another topic, as I noted in August. Before Milwaukee’s “Common Council” could approve any use of facial recognition, with or without Biometrica data sharing, MPD needed to develop a facial recognition policy.

According to a quote from MPD, it agreed.

“Should MPD move forward with acquiring FRT, a policy will be drafted based upon best practices and public input.”

It was clear that the policy would come first, facial recognition use afterward.

Google Gemini.

Well, until last night, when a fact was revealed that caused Chris Burt to write an article entitled “Milwaukee police sink efforts to contract facial recognition with unsanctioned use.”

Sounds like the biggest wreck since the one Gordon Lightfoot sang about. (A different lake, but bear with me here.)

Gordon Lightfoot.

Milwaukee didn’t get a policy

The details are in an article by WUWM, Milwaukee’s NPR station, which took a break from ICE coverage to report on a Thursday night Fire and Police Commission meeting.

“Commissioner Krissie Fung pressed MPD inspector Paul Lao on the department’s past use of facial recognition.

““Just to clarify,” asked Fung, “Is the practice still continuing?”

““As needed right now, we are still using [FRT],” Lao responded.”

It was after 10:00 pm Central time, but the commissioner pressed the issue.

Fung asked Lao if the department was currently still using FRT without an SOP in place.

“As we said that’s correct and we’re trying to work on getting an SOP,” Lao said.

That brought the wolves out, because SOP or no SOP, there are people who hate facial recognition, especially because of other things going on in the city that have nothing to do with MPD. Add the “facial recognition is racist” claims, and MPD was (in Burt’s words) sunk.

Yes, a follow-up meeting will be held, but Burt notes (via WISN) that MPD has imposed its own moratorium on facial recognition technology use.

“Despite our belief that this is useful technology to assist in generating leads for apprehending violent criminals, we recognize that the public trust is far more valuable.”

Milwaukee should have asked, then acted

From Bredemarket’s self-interested perspective this is a content problem.

  • Back in August 2025, Milwaukee knew that it needed a facial recognition policy.
  • Several months later, in February 2026, it didn’t have one, and didn’t have a timeframe regarding when a policy would be ready for review.

Now I appreciate that a facial recognition policy is not a short writing job. I’ve worked on policies, and you can’t complete one in a couple of days.

But couldn’t you at least come up with a DRAFT in six months?

To create a policy, you need a process.

Bredemarket asks, then it acts.

Deadlines drive behavior

Coincidentally, I live-blogged a Never Search Alone webinar this morning at which Tracy “Trace” Wilkins made this statement.

“Deadlines drive behavior.”

Frankly, I see this a lot. Companies (or governments) require content, but don’t set a deadline for finalizing that content.

And when you don’t set a deadline, then it never gets done.

And no, “as soon as possible” is not a deadline, because “as soon as possible” REALLY means “within a year, if we feel like it.”

Lack of deadlines sinks behavior.

To All the People Who Wanted to Defund the Police

I discussed the whole “defund the police” movement years ago, and now in 2026 we are still depending upon the police to protect us.

According to KARE, here is what happened when the police investigated the death of Alex Pretti…or tried to do so.

“Despite having a signed warrant from a judge, the Minnesota Bureau of Criminal Apprehension (BCA) was denied access to the scene where a man was fatally shot by federal agents Saturday morning in south Minneapolis, according to the BCA.

“Minnesota BCA Superintendent Drew Evans said the department was initially turned away at the scene by the Department of Homeland Security (DHS), so the BCA obtained a warrant from an independent judge. Evans said the judge agreed that the BCA had probable cause to investigate the scene, but DHS officials wouldn’t allow the BCA access to the scene.”

And I might as well say this also…I don’t believe in abolishing ICE either.

Avoiding Bot Medical Malpractice Via…Standards!

Back in the good old days, Dr. Welby’s word was law and was unquestioned.

Then we started to buy medical advice books and researched things ourselves.

Later we started to access peer-reviewed consumer medical websites and researched things ourselves.

Then we obtained our medical advice via late night TV commercials and Internet advertisements.

OK, this one’s a parody, but you know the real ones I’m talking about. Silver Solution?

Finally, we turned to generative AI to answer our medical questions.

With potentially catastrophic results.

So how do we fix this?

The U.S. National Institute of Standards and Technology (NIST) says that we should…drumroll…adopt standards.

Which is what you’d expect a standards-based government agency to say.

But since I happen to like NIST, I’ll listen to its argument.

“One way AI can prove its trustworthiness is by demonstrating its correctness. If you’ve ever had a generative AI tool confidently give you the wrong answer to a question, you probably appreciate why this is important. If an AI tool says a patient has cancer, the doctor and patient need to know the odds that the AI is right or wrong.

“Another issue is reliability, particularly of the datasets AI tools rely on for information. Just as a hacker can inject a virus into a computer network, someone could intentionally infect an AI dataset to make it work nefariously.”

So we know the risks, but how do we mitigate them?

“Like all technology, AI comes with risks that should be considered and managed. Learn about how NIST is helping to manage those risks with our AI Risk Management Framework. This free tool is recommended for use by AI users, including doctors and hospitals, to help them reap the benefits of AI while also managing the risks.”