If the City Fails, Try the County (Milwaukee and Biometrica)

The facial recognition brouhaha in southeastern Wisconsin has taken an interesting turn.

According to Urban Milwaukee, the Milwaukee County Sheriff’s Office is pursuing an agreement with Biometrica for facial recognition services.

The, um, benefit? No cost to the county.

“However, the contract would not need to be approved by the Milwaukee County Board of Supervisors, because there would be no cost to the county associated with the contract. Biometrica offers its services to law enforcement agencies in exchange for millions of mugshots.”

Sound familiar? Chris Burt thinks so.

“Milwaukee Police Department has also attempted to contract Biometrica’s services, prompting pushback, at least some of which reflected confusion about how the system works….

“The mooted agreement between Biometrica and MPD would have added 2.5 million images to the database.

“In theory, if MCSO signs a contract with Biometrica, it could perform facial recognition searches at the request of MPD.”

See Bredemarket’s previous posts on the city efforts that are now on hold.

And counties also.

No guarantee that the County will approve what the City didn’t. And considering the bad press from the City’s efforts, including using software BEFORE adopting a policy on its use, it’s going to be an uphill struggle.

CCAASS(tm)

“Commercials, Concerts, And a Sports Show”(tm) is a trademark of Bredemarket. CCAASS may be freely used by any entity to refer to the sporting event taking place in Santa Clara, California on Sunday, February 8, 2026. This saves you from having to refer to The Big Game or The Bowl That Will Not Be Named. See FindLaw for the legalities: https://www.findlaw.com/legalblogs/small-business/legal-to-use-super-bowl-in-ads-for-your-biz/ 

So for those of us not on Kalshi or other futures or betting markets, who will win the CCAASS? (The sporting part, not the commercial competition.)

As a Commanders fan, I have no wildebeest in the hunt.

Bredemarket has no current clients in the states of Massachusetts or Washington.

There are former IDEMIA employees in both states.

Ex-Incode employee (and ex-employee of a former Bredemarket client) Gene Volfe lives in an NFC West city, but the team in that city is a bitter rival of the Seahawks.

With no clear preference, I lean toward the NFC rather than the AFC in the CCAASS.

Go Saltwater Birds!

Fact: Cities Must Disclose Responsible Uses of Biometric Data

“Fact: Cities must disclose responsible uses of biometric data” is a parody of the title of my May 2025 guest post for Biometric Update, “Opinion: Vendors must disclose responsible uses of biometric data.”

From Biometric Update.

But I could have chosen another title: “Fact: lack of deadlines sinks behavior.” That’s a mixture of two quotes from Tracy “Trace” Wilkins and Chris Burt, as we will see.

Whether Vanilla Ice and Gordon Lightfoot would agree with the sentiment is not known.

But back to my Biometric Update guest post (expect my next appearance in Biometric Update in 2035).

That guest post touched on Milwaukee, Wisconsin, but had nothing to do with ICE.

Vanilla Ice.

One of the “responsible uses” questions had been raised by Biometric Update the previous month: whether it was proper for the Milwaukee Police Department (MPD) to share information with facial recognition vendor Biometrica.

Milwaukee needed a policy

But the conversation subsequently shifted to another topic, as I noted in August. Before Milwaukee’s Common Council could approve any use of facial recognition, with or without Biometrica data sharing, MPD needed to develop a facial recognition policy.

MPD agreed, according to this quote:

“Should MPD move forward with acquiring FRT, a policy will be drafted based upon best practices and public input.”

It was clear that the policy would come first, facial recognition use afterward.

Google Gemini.

Well, that held until last night, when a revelation prompted Chris Burt to write an article titled “Milwaukee police sink efforts to contract facial recognition with unsanctioned use.”

Sounds like the biggest wreck since the one Gordon Lightfoot sang about. (A different lake, but bear with me here.)

Gordon Lightfoot.

Milwaukee didn’t get a policy

The details are in an article by WUWM, Milwaukee’s NPR station, which took a break from ICE coverage to report on a Thursday night Fire and Police Commission meeting.

“Commissioner Krissie Fung pressed MPD inspector Paul Lao on the department’s past use of facial recognition.

““Just to clarify,” asked Fung, “Is the practice still continuing?”

““As needed right now, we are still using [FRT],” Lao responded.”

It was after 10:00 pm Central time, but the commissioner pressed the issue.

Fung asked Lao if the department was currently still using FRT without an SOP in place.

“As we said that’s correct and we’re trying to work on getting an SOP,” Lao said.

That brought the wolves out, because SOP or no SOP, there are people who hate facial recognition, especially because of other things going on in the city that have nothing to do with MPD. Add the “facial recognition is racist” claims, and MPD was (in Burt’s words) sunk.

Yes, a follow-up meeting will be held, but Burt notes (via WISN) that MPD has imposed its own moratorium on facial recognition technology use.

“Despite our belief that this is useful technology to assist in generating leads for apprehending violent criminals, we recognize that the public trust is far more valuable.”

Milwaukee should have asked, then acted

From Bredemarket’s self-interested perspective, this is a content problem.

  • Back in August 2025, Milwaukee knew that it needed a facial recognition policy.
  • Several months later, in February 2026, it didn’t have one, and didn’t have a timeframe regarding when a policy would be ready for review.

Now I appreciate that a facial recognition policy is not a short writing job. I’ve worked on policies, and you can’t complete one in a couple of days.

But couldn’t you at least come up with a DRAFT in six months?

To create a policy, you need a process.

Bredemarket asks, then it acts.

Deadlines drive behavior

Coincidentally, I live-blogged a Never Search Alone webinar this morning at which Tracy “Trace” Wilkins made this statement.

“Deadlines drive behavior.”

Frankly, I see this a lot. Companies (or governments) require content, but don’t set a deadline for finalizing that content.

And when you don’t set a deadline, it never gets done.

And no, “as soon as possible” is not a deadline, because “as soon as possible” REALLY means “within a year, if we feel like it.”

Lack of deadlines sinks behavior.

To All the People Who Wanted to Defund the Police

I discussed the whole “defund the police” movement years ago, and now in 2026 we are still depending upon the police to protect us.

According to KARE, here is what happened when the police investigated the death of Alex Pretti…or tried to do so.

“Despite having a signed warrant from a judge, the Minnesota Bureau of Criminal Apprehension (BCA) was denied access to the scene where a man was fatally shot by federal agents Saturday morning in south Minneapolis, according to the BCA.

“Minnesota BCA Superintendent Drew Evans said the department was initially turned away at the scene by the Department of Homeland Security (DHS), so the BCA obtained a warrant from an independent judge. Evans said the judge agreed that the BCA had probable cause to investigate the scene, but DHS officials wouldn’t allow the BCA access to the scene.”

And I might as well say this also…I don’t believe in abolishing ICE either.

Avoiding Bot Medical Malpractice Via…Standards!

Back in the good old days, Dr. Welby’s word was law and went unquestioned.

Then we started to buy medical advice books and researched things ourselves.

Later we started to access peer-reviewed consumer medical websites and researched things ourselves.

Then we obtained our medical advice via late night TV commercials and Internet advertisements.

OK, this one’s a parody, but you know the real ones I’m talking about. Silver Solution?

Finally, we turned to generative AI to answer our medical questions.

With potentially catastrophic results.

So how do we fix this?

The U.S. National Institute of Standards and Technology (NIST) says that we should…drumroll…adopt standards.

Which is what you’d expect a standards-based government agency to say.

But since I happen to like NIST, I’ll listen to its argument.

“One way AI can prove its trustworthiness is by demonstrating its correctness. If you’ve ever had a generative AI tool confidently give you the wrong answer to a question, you probably appreciate why this is important. If an AI tool says a patient has cancer, the doctor and patient need to know the odds that the AI is right or wrong.

“Another issue is reliability, particularly of the datasets AI tools rely on for information. Just as a hacker can inject a virus into a computer network, someone could intentionally infect an AI dataset to make it work nefariously.”
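
NIST’s correctness point is measurable. Here is a minimal sketch, assuming Python and scikit-learn (my own illustration, not anything NIST prescribes), of checking whether a model’s stated confidence matches how often it is actually right, a property known as calibration:

```python
# Hedged sketch: compare a model's stated confidence against its actual
# hit rate using scikit-learn's calibration_curve. The labels and
# probabilities below are made-up illustrative values, not clinical data.
import numpy as np
from sklearn.calibration import calibration_curve

y_true = np.array([0, 0, 1, 0, 1, 1, 0, 1, 0, 1])  # ground truth (1 = condition present)
y_prob = np.array([0.1, 0.3, 0.8, 0.2, 0.9, 0.7, 0.4, 0.6, 0.2, 0.85])  # model's stated odds

# Bin the predictions by confidence; measure how often each bin was truly positive.
frac_positive, mean_predicted = calibration_curve(y_true, y_prob, n_bins=5)

for predicted, observed in zip(mean_predicted, frac_positive):
    print(f"Model claimed ~{predicted:.0%} odds; condition was present {observed:.0%} of the time")
```

If the claimed odds and the observed rates diverge, the doctor and patient know not to take the model’s confidence at face value.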

So we know the risks, but how do we mitigate them?

“Like all technology, AI comes with risks that should be considered and managed. Learn about how NIST is helping to manage those risks with our AI Risk Management Framework. This free tool is recommended for use by AI users, including doctors and hospitals, to help them reap the benefits of AI while also managing the risks.”

Order in the Court: California AI Policies

Technology is one thing. But policy must govern technology.

For example, is your court using artificial intelligence?

If your court is in California, it must abide by this rule by next week:

“Any court that does not prohibit the use of generative AI by court staff or judicial officers must adopt a generative AI use policy by December 15, 2025. This rule applies to the superior courts, the Courts of Appeal, and the Supreme Court.”

According to Procopio, such a policy may cover items such as a prohibition on entering private data into public systems, the need to verify and correct AI-generated results, and disclosures on AI use.
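
The first of those items can even be partially automated. As a minimal sketch (my own illustration in Python, not anything the rule or Procopio prescribes), a court could screen prompts for obvious identifiers before they ever reach a public generative AI system:

```python
# Hedged sketch: screen a prompt for obvious personal identifiers before
# it is submitted to a public generative AI system. The patterns are
# deliberately simple; a real court policy would demand far more.
import re

PII_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the identifier types found in the prompt, if any."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(prompt)]

# Hypothetical example text, not a real case record.
hits = screen_prompt("Case notes: John Doe, SSN 123-45-6789, phone 555-867-5309")
print(f"Blocked: {', '.join(hits)}" if hits else "No obvious identifiers found")
```

A regex screen is a backstop, not a policy; the verification and disclosure items still require a human in the loop.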

Good ideas outside the courtroom also.

For example, the picture illustrating this post was created by Google Gemini, which as of this week uses Nano Banana.

Which is not a baseball team.

Google Gemini.

Detecting Deceptively Authoritative Deepfakes

I referenced this on one of my LinkedIn showcase pages earlier this week, but I need to say more on it.

We all agree that deepfakes can (sometimes) result in bad things, but some deepfakes present particular dangers that may not be detected. Let’s look at how deepfakes can harm the healthcare and legal professions.

Arielle Waldman of Dark Reading pointed out these dangers in her post “Sora 2 Makes Videos So Believable, Reality Checks Are Required.”

But I don’t want to talk about the general issues with believable AI (whether it’s Sora 2, Nano Banana Pro, or something else). I want to home in on this:

“Sora 2 security risks will affect an array of industries, primarily the legal and healthcare sectors. AI generated evidence continues to pose challenges for lawyers and judges because it’s difficult to distinguish between reality and illusion. And deepfakes could affect healthcare, where many benefits are doled out virtually, including appointments and consultations.”

Actually, these are two separate issues, and I’ll deal with them both.

Health Deepfakes

It’s bad enough that people can access your health records just by knowing your name and birthdate. But what happens when your medical practitioner sends you a telehealth appointment link…except your medical practitioner didn’t send it?

Grok.

So here you are, sharing your protected health information with…who exactly?

And once you realize you’ve been duped, you turn to a lawyer.

This one is not a deepfake. From YouTube.

Or you think you turn to a lawyer.

Legal Deepfakes

First off, is that lawyer truly a lawyer? And are you speaking to the lawyer to whom you think you’re speaking?

Not Johnnie Cochran.

And even if you are, when the lawyer gathers information for the case, who knows if it’s real. And I’m not talking about the lawyers who cited hallucinated legal decisions. I’m talking about the lawyers whose eDiscovery platforms gather faked evidence.

Liquor store owner.

The detection of deepfakes is currently concentrated in particular industries, such as financial services. But many more industries require this detection.

When Technology Catches Up: 1980 Murderer of Michelle “Missy” Jones Just Convicted

The Las Vegas Review-Journal reported this in September 2020:

“The Fontana Police Department in San Bernardino County, California, said it arrested Leonard Nash, 66, of Las Vegas on a warrant charging him with murder in the July 5, 1980, slaying of Michelle “Missy” Jones. The young woman was found slain in a grapefruit grove in Fontana.”

So why did it take 40 years to arrest Nash?

“Police said forensic evidence was collected during Jones’ autopsy, but technology at the time did not allow it to be connected to an offender.”

In 2020, the Riverside/San Bernardino CAL-DNA Laboratory successfully obtained a profile, but it did not match the DNA profile of any known offender. Nash, a person of interest, was matched to the profile via “discarded DNA.”

Anyway, Nash was convicted this month, but the news stories that described his conviction are inaccessible to you and me.

Skagit County: No Data to Scrape (For Now)

My Friday post about Sedro-Woolley, Stanwood, and Flock Safety is already out of date.

Original post: Flock Safety data is public record

That post, “Privacy: What Happens When You Data Scrape FROM the Identity Vendors?”, discussed the case involving the two cities above and a private resident, Jose Rodriguez. The resident requested all Flock Safety camera output during particular short time periods. The cities argued, on semantic grounds, that they didn’t have the data; Flock Safety did. Meanwhile the requested data was auto-deleted, technically making the request moot.

But not legally.

“IT IS HEREBY ORDERED, ADJUDGED AND DECREED that Plaintiff’s motion for Declaratory Judgment that the Flock camera records are not public records is DENIED.”

So the police attempt to keep the records private failed. Since it’s government material, it’s public record and accessible by anyone.

Update: the cameras are turned off

Now here’s the part I missed in my original post, according to CarScoops:

“[I]t turns out that those pictures are public data, according to a judge’s recent ruling. And almost as soon as the decision landed, local officials scrambled to shut the cameras down….

“Attorneys for the cities said they will review the decision before determining whether to appeal. For now, their Flock cameras aren’t coming back online.”

Because CarScoops didn’t link to the specific decisions by the cities, I investigated further.

Update 2: the cameras were turned off a long time ago

I sought other sources regarding Stanwood, Sedro-Woolley, and Flock Safety, and discovered that CarScoops got it wrong when it said “almost as soon as the decision landed, local officials scrambled to shut the cameras down.”

Turns out Stanwood shut its cameras off in May, awaiting the judge’s eventual ruling.

“Fourteen Flock cameras were installed in Stanwood this February. Since May, they have been turned off.

“In November 2024, the Stanwood City Council approved a $92,000 contract with Flock Safety to install the cameras….

“The city is seeking a court judgment on whether Flock footage is public record or if it is exempt from the state Public Records Act.”

Moving on to Sedro-Woolley:

“The city of Sedro-Woolley is no longer using cameras that read license plates while it seeks a court ruling on whether images recorded by the cameras are considered public records.

“Police Chief Dan McIlraith said the seven Flock Safety cameras that went live in Sedro-Woolley in March were disabled in June.”

How to turn the cameras on again

From my perspective, the Flock Safety cameras will be turned on again only if the cities of Stanwood and Sedro-Woolley take the following two actions.

  • First, the cities need to establish or beef up their license plate recognition policies. Specifically, they need to set the rules for how to reply to public records requests. (And no, “stall for 30 days until the records are auto-deleted” doesn’t count.)
  • Second, and only after a policy is established, implement some form of redaction software. Something that protects the privacy of license plates, faces, and other personally identifying information of people who are NOT part of a criminal investigation.

And yes, such software exists. Flock Safety itself does not offer it; apparently it never, um, envisioned that a city would be forced to release all its data. But companies such as Veritone and CaseGuard do offer automatic redaction software.
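
For the curious, here is roughly what automatic redaction looks like under the hood. A minimal sketch, assuming Python and OpenCV with its bundled face detector; this is my own illustration, not Veritone’s or CaseGuard’s actual product, and it only blurs faces in a single image:

```python
# Hedged sketch of automatic face redaction with OpenCV. Production
# products also redact license plates and work frame-by-frame on video.
import cv2

# OpenCV ships a pretrained Haar cascade face detector.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

image = cv2.imread("camera_frame.jpg")  # hypothetical input file
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

# Blur each detected face region beyond recognition.
for (x, y, w, h) in faces:
    image[y:y + h, x:x + w] = cv2.GaussianBlur(
        image[y:y + h, x:x + w], (51, 51), 0
    )

cv2.imwrite("camera_frame_redacted.jpg", image)
```

License plate regions would get the same treatment, with a plate detector swapped in for the face cascade.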

If you are a police agency capturing video feeds, plan now.