Tag Archives: legal
Fact: Cities Must Disclose Responsible Uses of Biometric Data
“Fact: Cities must disclose responsible uses of biometric data” is a parody of the title of my May 2025 guest post for Biometric Update, “Opinion: Vendors must disclose responsible uses of biometric data.”

But I could have chosen another title: “Fact: lack of deadlines sinks behavior.” That’s a mixture of two quotes from Tracy “Trace” Wilkins and Chris Burt, as we will see.
Whether Vanilla Ice and Gordon Lightfoot would agree with the sentiment is not known.
But back to my Biometric Update guest post (expect my next appearance in Biometric Update in 2035).
That guest post touched on Milwaukee, Wisconsin, but had nothing to do with ICE.
One of the “responsible uses” questions was one that Biometric Update had raised in the previous month: whether it was proper for the Milwaukee Police Department (MPD) to share information with facial recognition vendor Biometrica.
Milwaukee needed a policy
But the conversation subsequently redirected to another topic, as I noted in August. Before Milwaukee’s “Common Council” could approve any use of facial recognition, with or without Biometrica data sharing, MPD needed to develop a facial recognition policy.
MPD agreed, as it stated at the time:
“Should MPD move forward with acquiring FRT, a policy will be drafted based upon best practices and public input.”
It was clear that the policy would come first, facial recognition use afterward.

Well, until last night, when a fact was revealed that caused Chris Burt to write an article entitled “Milwaukee police sink efforts to contract facial recognition with unsanctioned use.”
Sounds like the biggest wreck since the one Gordon Lightfoot sang about. (A different lake, but bear with me here.)
Milwaukee didn’t get a policy
The details are in an article by WUWM, Milwaukee’s NPR station, which took a break from ICE coverage to report on a Thursday night Fire and Police Commission meeting.
“Commissioner Krissie Fung pressed MPD inspector Paul Lao on the department’s past use of facial recognition.
““Just to clarify,” asked Fung, “Is the practice still continuing?”
““As needed right now, we are still using [FRT],” Lao responded.”
It was after 10:00 pm Central time, but the commissioner pressed the issue.
Fung asked Lao if the department was currently still using FRT without an SOP in place.
“As we said that’s correct and we’re trying to work on getting an SOP,” Lao said.
That brought the wolves out, because SOP or no SOP, there are people who hate facial recognition, especially because of other things going on in the city that have nothing to do with MPD. Add the “facial recognition is racist” claims, and MPD was (in Burt’s words) sunk.
Yes, a follow-up meeting will be held, but Burt notes (via WISN) that MPD has imposed its own moratorium on facial recognition technology use.
“Despite our belief that this is useful technology to assist in generating leads for apprehending violent criminals, we recognize that the public trust is far more valuable.”
Milwaukee should have asked, then acted
From Bredemarket’s self-interested perspective this is a content problem.
- Back in August 2025, Milwaukee knew that it needed a facial recognition policy.
- Several months later, in February 2026, it didn’t have one, and didn’t have a timeframe regarding when a policy would be ready for review.
Now I appreciate that a facial recognition policy is not a short writing job. I’ve worked on policies, and you can’t complete one in a couple of days.
But couldn’t you at least come up with a DRAFT in six months?
To create a policy, you need a process.
Deadlines drive behavior
Coincidentally, I live-blogged a Never Search Alone webinar this morning at which Tracy “Trace” Wilkins made this statement.
“Deadlines drive behavior.”
Frankly, I see this a lot. Companies (or governments) require content, but don’t set a deadline for finalizing that content.
And when you don’t set a deadline, then it never gets done.
And no, “as soon as possible” is not a deadline, because “as soon as possible” REALLY means “within a year, if we feel like it.”
Lack of deadlines sinks behavior.
To All the People Who Wanted to Defund the Police
I discussed the whole “defund the police” movement years ago, and now in 2026 we are still depending upon the police to protect us.
According to KARE, here is what happened when the police investigated the death of Alex Pretti…or tried to do so.
“Despite having a signed warrant from a judge, the Minnesota Bureau of Criminal Apprehension (BCA) was denied access to the scene where a man was fatally shot by federal agents Saturday morning in south Minneapolis, according to the BCA.
“Minnesota BCA Superintendent Drew Evans said the department was initially turned away at the scene by the Department of Homeland Security (DHS), so the BCA obtained a warrant from an independent judge. Evans said the judge agreed that the BCA had probable cause to investigate the scene, but DHS officials wouldn’t allow the BCA access to the scene.”
And I might as well say this also…I don’t believe in abolishing ICE either.
Avoiding Bot Medical Malpractice Via…Standards!
Back in the good old days, Dr. Welby’s word was law and went unquestioned.
Then we started to buy medical advice books and researched things ourselves.
Later we started to access peer-reviewed consumer medical websites and researched things ourselves.
Then we obtained our medical advice via late night TV commercials and Internet advertisements.
Finally, we turned to generative AI to answer our medical questions.
With potentially catastrophic results.
So how do we fix this?
The U.S. National Institute of Standards and Technology (NIST) says that we should…drumroll…adopt standards.
Which is what you’d expect a standards-based government agency to say.
But since I happen to like NIST, I’ll listen to its argument.
“One way AI can prove its trustworthiness is by demonstrating its correctness. If you’ve ever had a generative AI tool confidently give you the wrong answer to a question, you probably appreciate why this is important. If an AI tool says a patient has cancer, the doctor and patient need to know the odds that the AI is right or wrong.
“Another issue is reliability, particularly of the datasets AI tools rely on for information. Just as a hacker can inject a virus into a computer network, someone could intentionally infect an AI dataset to make it work nefariously.”
So we know the risks, but how do we mitigate them?
“Like all technology, AI comes with risks that should be considered and managed. Learn about how NIST is helping to manage those risks with our AI Risk Management Framework. This free tool is recommended for use by AI users, including doctors and hospitals, to help them reap the benefits of AI while also managing the risks.”
Order in the Court: California AI Policies
Technology is one thing. But policy must govern technology.
For example, is your court using artificial intelligence?
If your court is in California, it must abide by this rule by next week:
“Any court that does not prohibit the use of generative AI by court staff or judicial officers must adopt a generative AI use policy by December 15, 2025. This rule applies to the superior courts, the Courts of Appeal, and the Supreme Court.”
According to Procopio, such a policy may cover items such as a prohibition on entering private data into public systems, the need to verify and correct AI-generated results, and disclosures on AI use.
These are good ideas outside the courtroom also.
For example, the picture illustrating this post was created by Google Gemini—as of this week using Nano Banana.
Which is not a baseball team.

Detecting Deceptively Authoritative Deepfakes
I referenced this on one of my LinkedIn showcase pages earlier this week, but I need to say more on it.
We all agree that deepfakes can (sometimes) result in bad things, but some deepfakes present particular dangers that may not be detected. Let’s look at how deepfakes can harm the healthcare and legal professions.
Arielle Waldman of Dark Reading pointed out these dangers in her post “Sora 2 Makes Videos So Believable, Reality Checks Are Required.”
But I don’t want to talk about the general issues with believable AI (whether it’s Sora 2, Nano Banana Pro, or something else). I want to home in on this:
“Sora 2 security risks will affect an array of industries, primarily the legal and healthcare sectors. AI generated evidence continues to pose challenges for lawyers and judges because it’s difficult to distinguish between reality and illusion. And deepfakes could affect healthcare, where many benefits are doled out virtually, including appointments and consultations.”
Actually these are two separate issues, and I’ll deal with them both.
Health Deepfakes
It’s bad enough that people can access your health records just by knowing your name and birthdate. But what happens when your medical practitioner sends you a telehealth appointment link…except your medical practitioner didn’t send it?
So here you are, sharing your protected health information with…who exactly?
And once you realize you’ve been duped, you turn to a lawyer.
Or you think you turn to a lawyer.
Legal Deepfakes
First off, is that lawyer truly a lawyer? And are you speaking to the lawyer to whom you think you’re speaking?
And even if you are, when the lawyer gathers information for the case, who knows if it’s real? And I’m not talking about the lawyers who cited hallucinated legal decisions. I’m talking about the lawyers whose eDiscovery platforms gather faked evidence.
The detection of deepfakes is currently concentrated in particular industries, such as financial services. But many more industries require this detection.
When Technology Catches Up: 1980 Murderer of Michelle “Missy” Jones Just Convicted
The Las Vegas Review-Journal reported this in September 2020:
“The Fontana Police Department in San Bernardino County, California, said it arrested Leonard Nash, 66, of Las Vegas on a warrant charging him with murder in the July 5, 1980, slaying of Michelle “Missy” Jones. The young woman was found slain in a grapefruit grove in Fontana.”
So why did it take 40 years to arrest Nash?
“Police said forensic evidence was collected during Jones’ autopsy, but technology at the time did not allow it to be connected to an offender.”
In 2020, the Riverside/San Bernardino CAL-DNA Laboratory successfully obtained a profile, but it did not match the DNA profile of any known offender. Nash, a person of interest, was matched to the profile via “discarded DNA.”
Anyway, Nash was convicted this month, but the news stories that described his conviction are inaccessible to you and me.
Skagit County: No Data to Scrape (For Now)
My Friday post about Sedro-Woolley, Stanwood, and Flock Safety is already out of date.
Original post: Flock Safety data is public record
That post, “Privacy: What Happens When You Data Scrape FROM the Identity Vendors?”, discussed the case involving the two cities above and a private resident, Jose Rodriguez. The resident requested all Flock Safety camera output during particular short time periods. The cities argued, semantically, that they didn’t have the data; Flock Safety did. Meanwhile the requested data was auto-deleted, technically making the request moot.
But not legally.
“IT IS HEREBY ORDERED, ADJUDGED AND DECREED that Plaintiff’s motion for Declaratory Judgment that the Flock camera records are not public records is DENIED.”
So the police attempt to keep the records private failed. Since it’s government material, it’s public record and accessible by anyone.
Update: the cameras are turned off
Now here’s the part I missed in my original post, according to CarScoops:
“[I]t turns out that those pictures are public data, according to a judge’s recent ruling. And almost as soon as the decision landed, local officials scrambled to shut the cameras down….
“Attorneys for the cities said they will review the decision before determining whether to appeal. For now, their Flock cameras aren’t coming back online.”
Because CarScoops didn’t link to the specific decisions by the cities, I investigated further.
Update 2: the cameras were turned off a long time ago
I sought other sources regarding Stanwood, Sedro-Woolley, and Flock Safety, and discovered that CarScoops did not state the truth when it said “almost as soon as the decision landed, local officials scrambled to shut the cameras down.”
Turns out Stanwood shut its cameras off in May, awaiting the judge’s eventual ruling.
“Fourteen Flock cameras were installed in Stanwood this February. Since May, they have been turned off.
“In November 2024, the Stanwood City Council approved a $92,000 contract with Flock Safety to install the cameras….
“The city is seeking a court judgment on whether Flock footage is public record or if it is exempt from the state Public Records Act.”
Moving on to Sedro-Woolley:
“The city of Sedro-Woolley is no longer using cameras that read license plates while it seeks a court ruling on whether images recorded by the cameras are considered public records.
“Police Chief Dan McIlraith said the seven Flock Safety cameras that went live in Sedro-Woolley in March were disabled in June.”
How to turn the cameras on again
From my perspective, the only way I see the Flock Safety cameras being turned on again is if the cities of Stanwood and Sedro-Woolley take the following two actions.
- First, the cities need to establish or beef up their license plate recognition policies. Specifically, they need to set the rules for how to reply to public records requests. (And no, “stall for 30 days until the records are auto-deleted” doesn’t count.)
- Second, and only after a policy is established, implement some form of redaction software. Something that protects the privacy of license plates, faces, and other personally identifying information of people who are NOT part of a criminal investigation.
And yes, such software exists. Flock Safety itself does not offer it—apparently it never, um, envisioned that a city would be forced to release all its data. But companies such as Veritone and CaseGuard do offer software that performs automatic redaction.
If you are a police agency capturing video feeds, plan now.
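To illustrate the kind of operation such redaction tools perform, here is a minimal, hypothetical Python sketch. It assumes an upstream detector (face or license plate recognition, not shown here) has already supplied bounding boxes, and simply blacks out those regions of a frame. This is not Veritone’s or CaseGuard’s actual implementation; it is only a conceptual sketch, with the frame represented as a nested list of grayscale pixel values.

```python
# Sketch: redact (black out) regions of a video frame that contain
# personally identifying information, given bounding boxes from an
# assumed upstream face/plate detector.

def redact_frame(frame, boxes, fill=0):
    """Return a copy of the frame with each (x, y, w, h) box filled.

    frame: list of rows, each a list of pixel values.
    boxes: iterable of (x, y, w, h) rectangles to redact.
    fill:  pixel value used to overwrite redacted regions.
    """
    redacted = [row[:] for row in frame]  # copy; leave the original intact
    for x, y, w, h in boxes:
        for row in redacted[y:y + h]:
            # Overwrite only the horizontal span covered by the box.
            row[x:x + w] = [fill] * len(row[x:x + w])
    return redacted

# A 4x6 all-bright "frame"; redact a 2x2 box whose top-left corner is (1, 1).
frame = [[9] * 6 for _ in range(4)]
out = redact_frame(frame, [(1, 1, 2, 2)])
```

A real product would operate on image arrays and run this per frame of video, but the policy point is the same: the redaction step only protects people who are not part of a criminal investigation if the bounding boxes (and the rules for applying them) come from an established policy.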
Privacy: What Happens When You Data Scrape FROM the Identity Vendors?
There is a lot of discussion about data scraping, an activity in which Company 1 takes the information publicly posted by Company 2 and incorporates it into its own records.
In the identity world, this takes the form of a company “scraping” the facial images that were publicly posted by a second company, such as a social media company.
I think that we all know of one identity company that is well-known (a euphemism for “notorious”) for scraping facial images from multiple sources. These not only include government-posted mugshots, but also content posted by private social media firms.
Needless to say, the social media companies think that data scraping is completely evil and terrible and that identity vendors that do this should be fined and put out of business. The identity vendor in question has a different view, even stating at one point that it had a (U.S.) First Amendment right to scrape data.
But what happens when someone wants to scrape data FROM an identity company?
A Skagit County court case
404 Media links to a Skagit County, Washington court case that addresses this very issue: in this case, data captured by Flock Safety.
The case is CITY OF SEDRO-WOOLLEY and CITY OF STANWOOD, Washington Municipal Corporations vs. JOSE RODRIGUEZ. The following are findings of fact:
“On April 10, 2025, Defendant, Jose Rodriguez made a Public Records Request to the Snohomish Police Department. He requested all of the city’s Flock cameras pictures and data logs between 5 pm and 6 pm on March 30, 2025.”
This particular record does not indicate WHY Rodriguez made this request, but 404 Media provided a clarification from Rodriguez himself.
“I wanted the records to see if they would release them to me, in hopes that if they were public records it would raise awareness to all the communities that have the Flock cameras that they may be public record and could be used by stalkers, or burglars scoping out a house, or other ways someone with bad intentions may use them. My goal was to try getting these cameras taken down by the cities that put them up.”
The City of Stanwood (I don’t know its relation to Snohomish) answered Rodriguez in part:
“Stanwood PD is not the holder of the records you’re seeking; you may be able to request the records at FlockSafety.com.”
Incidentally, this is a common issue with identity databases using vendor software: who owns the data? I’ve addressed this before regarding the Milwaukee Police Department.
Now some legal talent may be able to parse what the word “holder” means, especially in regard to data hosted in the cloud. Perhaps Stanwood PD was trying to claim that since the records weren’t on site, it wasn’t the “holder.”
Anyway, the defendant subsequently made a similar request to the City of Sedro-Woolley, but for a different date. Sedro-Woolley didn’t provide the images either.
Then it gets weird.
What happened to the data?
“The Flock records sought by Defendant from Stanwood and Sedro-Woolley have been auto-deleted.”
Well how convenient.
And the listed statements of fact also contain the following:
“The contract between Flock and Stanwood states that all Flock images generated off Flock cameras located in Stanwood are the property of Stanwood.
“The contract between Flock and Sedro-Woolley states that all Flock images generated off Flock cameras located in Sedro-Woolley are the property of Sedro-Woolley.”
The judge’s ruling
Fast forward to November 6, when Judge Elizabeth Neidzwski ruled on the cities’ claim that the Flock camera data was not a public record.
“IT IS HEREBY ORDERED, ADJUDGED AND DECREED that Plaintiff’s motion for Declaratory Judgment that the Flock camera records are not public records is DENIED.”
404 Media noted that the cities argued that they resisted the request to…protect privacy.
“In affidavits filed with the court, police argued that ‘if the public could access the Flock Safety System by making Public Records Act requests, it would allow nefarious actors the ability to track private persons and undermine the effectiveness of the system.’ The judge rejected every single one of these arguments.”
Of course, there are those who argue that the police themselves are the “nefarious actors,” and that they shouldn’t be allowed to track private persons either.
But the parties may take the opposite argument
This is not the only example of conflicting claims over WHO has the right to privacy. In fact, if the police were filming protestors and agitators and wanted the public’s help in identifying them, the police and the protestors would take the opposite arguments in the privacy issue: the police saying the footage SHOULD be released, and the protestors who were filmed saying it SHOULD NOT.
Privacy is in the eye of the beholder.
A Fingerprint Identification Story: Bobby Driscoll
In early 1968, two boys found a dead body in New York’s East Village. There was no identification on the man, and no one in the neighborhood knew him. He was fingerprinted and buried in a mass grave; the NYPD identified him nearly two years later.

In the 1960s, fingerprint identification of deceased persons—a laborious process in those days—often happened because the deceased had a criminal record.
His first arrest was in 1956, but he was not convicted of any crime until 1961.
“On May 1, 1961, he was arrested for attempting to cash a check that had been stolen from a liquor store the previous January, and at the same time was also charged with driving under the influence of drugs. He pled guilty to both charges and was sentenced to six months of treatment for drug addiction at the California Institute for Men at Chino.”
Driscoll reportedly cleaned up (his drug of choice was heroin), went east to New York City, and even achieved some fame.
“[H]e purportedly settled into Andy Warhol’s Greenwich Village art community known as “The Factory.” During this time, he also participated in an underground film entitled Dirt, directed by avant-garde filmmaker Piero Heliczer.”
But this was not Driscoll’s first film. He had been in a few films earlier in life.

Here he is (in the upper right corner) playing Johnny in the Disney movie Song of the South.

And he provided the voice for the lead character in the later Disney movie Peter Pan.
Yes, Bobby Driscoll was a child star for Disney and other studios before appearing in Dirt.
But right after Driscoll’s voice became famous in Peter Pan, Disney declined to renew his contract. The reason? Acne…and the fact that he wasn’t a cute kid any more.
This led to his tailspin, which eventually led to his fingerprinting.
And his positive identification after his death.
