Caught!

I was having fun creating videos based upon the controversial third verse of The Star Spangled Banner, but I decided to get back to business.

And the business is that, as the Innocence Project knows all too well, algorithms can be better than humans at identifying faces.

(Grok)

But the silly videos are only what I do for fun.

What I do for business is help identity, biometrics, and technology companies explain how their solutions benefit society.

Can Bredemarket help YOUR firm come up with the right words, via compelling content creation?

  • Blog posts. Among other projects, I’ve authored a multi-month blog series to attract business to a client. 
  • Case studies and testimonials. Among other projects, I’ve written a dozen case studies to justify a firm’s capabilities to its prospects. 
  • LinkedIn articles and posts. The multi-month blog series was designed for repurposing as LinkedIn articles. 
  • White papers. My white papers have made the case for the superiority of my clients’ products and services.

Set up a free meeting to talk to Bredemarket about your marketing and writing needs.

Why retail needs biometrics – the cameras aren’t working, and the people aren’t working either

(Imagen 4)

In a recent post on Biometric Update, “Why retail needs biometrics – the cameras aren’t working,” Professor Fraser Sampson, former UK Biometrics & Surveillance Camera Commissioner, made several points about the applicability of biometrics to retail. Among them, he addressed algorithmic inaccuracy and the proper use of facial recognition as an investigative lead:

“It’s true that some early police algorithms were poor, but the biometric matching algorithms offered by some providers is over 99.99% – that’s as close to perfect as anyone has ever got. That’s NASA-level accuracy, better than some medical or military procedures and light years away from people staring at CCTV monitors. What about errors and misidentification? Used properly, LFR is a decision support tool, it’s not making the identification itself. Ultimately, it’s helping shopkeepers make their decisions and that’s where the occasional misidentification happens – by human error, not technical.”

I offered an additional comment:

“One other point: for all those who complain about the lack of perfection of automated facial recognition, it’s much better than manual facial recognition. The U.S. Innocence Project recounts multiple cases of witness MISidentification, where people have been imprisoned due to faulty and inaccurate identification of suspects as perpetrators. I’d much rather have a top tier FR algorithm watching me than a person who knows nothing about facial recognition at all.”

In case you missed it, I’ve written several Bredemarket blog posts on witness MISidentification: two on Robert Williams’ misidentification alone.

Heck, I addressed the topic back in 2021 in “The dangers of removing facial recognition and artificial intelligence from DHS solutions (DHS ICR part four).” This post covers the misidentification of Archie Williams (no relation).

So don’t toss out the automated facial recognition solution unless you have something better. I’ll wait.

Examples of Biometric Technology Misuse

If I become known for anything in biometrics, I want to be known for my extremely frequent use of the words “investigative lead.” 

Whether you are talking about DNA or facial recognition, biometric evidence alone should not be the basis for arresting a person.

For an example of why DNA shouldn’t be your only evidence, see my recent post about Amanda Knox.

Facial recognition misuse in law enforcement

Regarding facial recognition, I wrote this in a social media conversation earlier today:

“Facial recognition CAN be used as a crowd checking tool…with proper governance, including strict adherence to a policy of only using FR as an investigative lead, and requiring review of potential criminal matches by a forensic face investigator. Even then, investigative lead ONLY. Same with DNA.”

I received this reply:

“It’s true but in my experience cops rarely follow any rules.”

Now I could have claimed that this view was exaggerated, but there are enough examples of cops who DON’T follow the rules to tarnish all of them. 

Revisiting Robert Williams’ Detroit arrest

I’ve already addressed the sad story of Robert Williams, who was “wrongfully arrested based upon faulty facial recognition results.”

At the time, I did not explicitly share the circumstances behind Williams’ arrest:

“The complaint alleges that the surveillance footage is poorly lit, the shoplifter never looks directly into the camera and still a Detroit Police Department detective ran a grainy photo made from the footage through the facial recognition technology.”

There’s so much that isn’t said here, such as whether a forensic face examiner made a definitive conclusion, or if the detective just took the first candidate from the list and ran with it.

But I am willing to bet that there was no independent evidence placing Williams at the shop location.

Why this matters

The thing that concerns me about all this? It just provides ammo to the people who want to ban facial recognition entirely.

Not realizing that the alternative—manual witness (mis)identification—is far more inaccurate and far more racist.

But the controversy would pretty much go away if criminal investigators only used facial recognition and DNA as investigative leads.

If We Don’t Train Facial Recognition Users, There Will Be No Facial Recognition

(Part of the biometric product marketing expert series)

We get all sorts of great tools, but do we know how to use them? And what are the consequences if we don’t know how to use them? Could we lose the use of those tools entirely due to bad publicity from misuse?

Hida Viloria. By Intersex77 – Own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=98625035

Do your federal facial recognition users know what they are doing?

I recently saw a WIRED article that primarily talked about submitting Parabon Nanolabs-generated images to a facial recognition program. But buried in the article was this alarming quote:

According to a report released in September by the US Government Accountability Office, only 5 percent of the 196 FBI agents who have access to facial recognition technology from outside vendors have completed any training on how to properly use the tools.

From https://www.wired.com/story/parabon-nanolabs-dna-face-models-police-facial-recognition/

Now I had a question after reading that sentence: what does “have access” mean? To answer it, I had to find the study itself, GAO-23-105607, Facial Recognition Services: Federal Law Enforcement Agencies Should Take Actions to Implement Training, and Policies for Civil Liberties.

It turns out that the study is NOT limited to FBI use of facial recognition services, but also addresses six other federal agencies: the Bureau of Alcohol, Tobacco, Firearms and Explosives (the guvmint doesn’t believe in the Oxford comma); U.S. Customs and Border Protection; the Drug Enforcement Administration; Homeland Security Investigations; the U.S. Marshals Service; and the U.S. Secret Service.

In addition, the study confines itself to four facial recognition services: Clearview AI, IntelCenter, Marinus Analytics, and Thorn. It does not address other uses of facial recognition by the agencies, such as the FBI’s use of IDEMIA in its Next Generation Identification system (IDEMIA facial recognition technology is also used by the Department of Defense).

Two of the GAO’s findings:

  • Initially, none of the seven agencies required users to complete facial recognition training. As of April 2023, two of the agencies (Homeland Security Investigations and the U.S. Marshals Service) required training, two (the FBI and Customs and Border Protection) did not, and the other three had quit using these four facial recognition services.
  • The FBI stated that facial recognition training was recommended as a “best practice,” but not mandatory. And when something isn’t mandatory, you can guess what happened:

GAO found that few of these staff completed the training, and across the FBI, only 10 staff completed facial recognition training of 196 staff that accessed the service. FBI said they intend to implement a training requirement for all staff, but have not yet done so. 

From https://www.gao.gov/products/gao-23-105607.
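The GAO’s raw counts line up with the WIRED article’s “5 percent” figure. A quick sanity check (the numbers come straight from the quotes above; the rounding is mine):

```python
# Reconciling WIRED's "5 percent" with the GAO's raw counts:
# per the GAO, 10 of the 196 FBI staff who accessed the service
# completed facial recognition training.
trained = 10
with_access = 196
pct = 100 * trained / with_access
print(f"{pct:.1f}% of staff with access completed training")  # → 5.1%
```

So WIRED’s “only 5 percent” is simply the GAO’s 10-of-196 figure, rounded.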

So if you use my three levels of importance (TLOI) model, facial recognition training is important, but not critically important. Therefore, it wasn’t done.

The detailed version of the report includes additional information on the FBI’s training requirements…I mean recommendations:

Although not a requirement, FBI officials said they recommend (as a best practice) that some staff complete FBI’s Face Comparison and Identification Training when using Clearview AI. The recommended training course, which is 24 hours in length, provides staff with information on how to interpret the output of facial recognition services, how to analyze different facial features (such as ears, eyes, and mouths), and how changes to facial features (such as aging) could affect results.

From https://www.gao.gov/assets/gao-23-105607.pdf.

However, this type of training was not recommended for all FBI users of Clearview AI, and was not recommended for any FBI users of Marinus Analytics or Thorn.

I should note that the report was issued in September 2023, based upon data gathered earlier in the year, and that for all I know the FBI now mandates such training.

Or maybe it doesn’t.

What about your state and local facial recognition users?

Of course, training for federal facial recognition users is only a small part of the story, since most of the law enforcement activity takes place at the state and local level. State and local users need training so that they can understand:

  • The anatomy of the face, and how it affects comparisons between two facial images.
  • How cameras work, and how this affects comparisons between two facial images.
  • How poor quality images can adversely affect facial recognition.
  • How facial recognition should ONLY be used as an investigative lead.

If state and local users received this training, none of the false arrests over the last few years would have taken place.

What are the consequences of no training?

Let me repeat that.

If facial recognition users had been trained, none of the false arrests over the last few years would have taken place.

  • The users would have realized that the poor images were not of sufficient quality to determine a match.
  • The users would have realized that even if they had been of sufficient quality, facial recognition must only be used as an investigative lead, and once other data had been checked, the cases would have fallen apart.

But the false arrests gave the privacy advocates the ammunition they needed.

Not to insist upon proper training in the use of facial recognition.

But to ban the use of facial recognition entirely.

Like nuclear or biological weapons, facial recognition’s threat to human society and civil liberties far outweighs any potential benefits. Silicon Valley lobbyists are disingenuously calling for regulation of facial recognition so they can continue to profit by rapidly spreading this surveillance dragnet. They’re trying to avoid the real debate: whether technology this dangerous should even exist. Industry-friendly and government-friendly oversight will not fix the dangers inherent in law enforcement’s discriminatory use of facial recognition: we need an all-out ban.

From https://www.banfacialrecognition.com/

(And just wait until the anti-facial recognition forces discover that this is not only a plot of evil Silicon Valley, but also a plot of evil non-American foreign interests located in places like Paris and Tokyo.)

Because the anti-facial recognition forces want us to remove the use of technology and go back to the good old days…of eyewitness misidentification.

Eyewitness misidentification contributes to an overwhelming majority of wrongful convictions that have been overturned by post-conviction DNA testing.

Eyewitnesses are often expected to identify perpetrators of crimes based on memory, which is incredibly malleable. Under intense pressure, through suggestive police practices, or over time, an eyewitness is more likely to find it difficult to correctly recall details about what they saw. 

From https://innocenceproject.org/eyewitness-misidentification/.

And these people don’t stay in jail for a night or two. Some of them remain in prison for years until the eyewitness misidentification is reversed.

Archie Williams moments after his exoneration on March 21, 2019. Photo by Innocence Project New Orleans. From https://innocenceproject.org/fingerprint-database-match-establishes-archie-williams-innocence/

Eyewitnesses, unlike facial recognition algorithms, cannot be tested for accuracy or bias.

And if we don’t train facial recognition users in the technology, then we’re going to lose it.

The dangers of removing facial recognition and artificial intelligence from DHS solutions (DHS ICR part four)

And here’s the fourth and final part of my repurposing exercise. See parts one, two, and three if you missed them.

This post is adapted from Bredemarket’s November 10, 2021 submitted comments on DHS-2021-0015-0005, Information Collection Request, Public Perceptions of Emerging Technology. As I concluded my comments, I stated the following.

Of course, even the best efforts of the Department of Homeland Security (DHS) will not satisfy some members of the public. I anticipate that many of the respondents to this ICR will question the need to use biometrics to identify individuals, or even the need to identify individuals at all, believing that the societal costs outweigh the benefits.

By Banksy – One Nation Under CCTV, CC BY-SA 2.0, https://commons.wikimedia.org/w/index.php?curid=3890275

But before undertaking such drastic action, the consequences of following these alternative paths must be considered.

Taking an example outside of the non-criminal travel interests of DHS, some people prefer to use human eyewitness identification rather than computerized facial recognition.

By Zhe Wang, Paul C. Quinn, James W. Tanaka, Xiaoyang Yu, Yu-Hao P. Sun, Jiangang Liu, Olivier Pascalis, Liezhong Ge and Kang Lee – https://www.frontiersin.org/articles/10.3389/fpsyg.2015.00559/full, CC BY 4.0, https://commons.wikimedia.org/w/index.php?curid=96233011

However, eyewitness identification itself has clear issues of bias. The Innocence Project has documented many cases in which eyewitness (mis)identification has resulted in wrongful criminal convictions which were later overturned by biometric evidence.

Archie Williams moments after his exoneration on March 21, 2019. Photo by Innocence Project New Orleans. From https://innocenceproject.org/fingerprint-database-match-establishes-archie-williams-innocence/

Mistaken eyewitness identifications contributed to approximately 69% of the more than 375 wrongful convictions in the United States overturned by post-conviction DNA evidence.

Inaccurate eyewitness identifications can confound investigations from the earliest stages. Critical time is lost while police are distracted from the real perpetrator, focusing instead on building the case against an innocent person.

Despite solid and growing proof of the inaccuracy of traditional eyewitness ID procedures – and the availability of simple measures to reform them – traditional eyewitness identifications remain among the most commonly used and compelling evidence brought against criminal defendants.

Innocence Project, Eyewitness Identification Reform, https://innocenceproject.org/eyewitness-identification-reform/

For more information on eyewitness misidentification, see my November 24, 2020 post on Archie Williams (pictured above) and Uriah Courtney.

Do we really want to dump computerized artificial intelligence and facial recognition, only to end up with manual identification processes that are proven to be even worse?

The tone of voice to use when talking about forensic mistakes

Remember my post that discussed the tone of voice that a company chooses to use when talking about the benefits of the company and its offerings?

Or perhaps you saw the repurposed version of the post, a page section entitled “Don’t use that tone of voice with me!”

The tone of voice that a firm uses extends not only to benefit statements, but to all communications from the company. Sometimes the tone of voice attracts potential clients. Sometimes it repels them.

For example, a book was published a couple of months ago. Check the tone of voice in these excerpts from the book advertisement.

“That’s not my fingerprint, your honor,” said the defendant, after FBI experts reported a “100-percent identification.” They were wrong. It is shocking how often they are. Autopsy of a Crime Lab is the first book to catalog the sources of error and the faulty science behind a range of well-known forensic evidence, from fingerprints and firearms to forensic algorithms. In this devastating forensic takedown, noted legal expert Brandon L. Garrett poses the questions that should be asked in courtrooms every day: Where are the studies that validate the basic premises of widely accepted techniques such as fingerprinting? How can experts testify with 100 percent certainty about a fingerprint, when there is no such thing as a 100 percent match? Where is the quality control in the laboratories and at the crime scenes? Should we so readily adopt powerful new technologies like facial recognition software and rapid DNA machines? And why have judges been so reluctant to consider the weaknesses of so many long-accepted methods?

Note that author Brandon Garrett is NOT making this stuff up. People in the identity industry are well aware of the Brandon Mayfield case and others that started a series of reforms beginning in 2009, including changes in courtroom testimony and increased testing of forensic techniques by the National Institute of Standards and Technology and others.

It’s obvious that I, with my biases resulting from over 25 years in the identity industry, am not going to enjoy phrases such as “devastating forensic takedown,” especially when I know that some sectors of the forensics profession have been working on correcting these mistakes for 12 years now, and have cooperated with the Innocence Project to rectify some of these mistakes.

So from my perspective, here are my two concerns about language that could be considered inflammatory:

  • Inflammatory language focusing on anecdotal incidents leads to improper conclusions. Yes, there are anecdotal instances in which fingerprint examiners made incorrect decisions. Yes, there are anecdotal instances in which police agencies did not use facial recognition computer results solely as investigative leads, resulting in false arrests. But anecdotal incidents are not in my view substantive enough to ban fingerprint recognition or facial recognition entirely, as some (not all) who read Garrett’s book are going to want to do (and have done, in certain jurisdictions).
  • Inflammatory language prompts inflammatory language from “the other side.” Some forensic practitioners and criminal justice stakeholders may not be pleased to learn that they’ve been targeted by a “devastating forensic takedown.” And sometimes the responses can get nasty: “enemies” of forensic techniques “love criminals.”

Of course, it may be nearly impossible to have a reasoned discussion of forensic and police techniques these days. And I’ll confess that it’s hard to sell books by taking a nuanced tone in the book blurb. But it would be nice if we could all just get along.

P.S. Garrett was interviewed on TV in connection to the Derek Chauvin trial, and did not (IMHO) come off as a wild-eyed “defund the police” hack. His major point was that Chauvin’s actions were not made in a split second, but in a course of several minutes.

Quantifying the costs of wrongful incarcerations

As many of you already know, the Innocence Project is dedicated to freeing people who have been wrongfully incarcerated. At times, the people are freed after examining or re-examining biometric evidence, such as fingerprint evidence or DNA evidence.

The latter evidence was relevant in the case of Uriah Courtney, who was convicted and sentenced to life in prison for kidnapping and rape based upon eyewitness testimony. At the time of Courtney’s arrest, DNA testing did not return any meaningful results. Eight years later, however, DNA technology had advanced to the point where the perpetrator could be identified—and, as the California Innocence Project noted, the perpetrator wasn’t Uriah Courtney.

I’ve read Innocence Project stories before, and the one that sticks most in my mind was the case of Archie Williams, who was released (based upon fingerprint evidence) after being imprisoned for a quarter century. At the time that Williams’ wrongful conviction was vacated, Vanessa Potkin, director of post-conviction litigation at the Innocence Project, stated, “There is no way to quantify the loss and pain he has endured.”

But that doesn’t mean that people haven’t tried to (somewhat) quantify the loss.

In the Uriah Courtney case, while it’s impossible to quantify the loss to Courtney himself, it is possible to quantify the loss to the state of California. Using data from the California Legislative Analyst’s Office 2018-19 annual costs per California inmate, the California Innocence Project calculated a “cost of wrongful incarceration” of $649,624.
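As a back-of-the-envelope sketch of the arithmetic (my reconstruction, not the California Innocence Project’s own worksheet): the LAO’s 2018-19 annual cost per California inmate was roughly $81,203, and Courtney served roughly eight years, which reproduces the published total exactly.

```python
# Back-of-the-envelope "cost of wrongful incarceration," in the style of
# the California Innocence Project's calculation for Uriah Courtney.
# Assumptions (mine): LAO 2018-19 annual cost per inmate of $81,203,
# applied to the roughly eight years Courtney served.
annual_cost_per_inmate = 81_203  # California LAO, 2018-19
years_served = 8
total = annual_cost_per_inmate * years_served
print(f"${total:,}")  # → $649,624
```

That the product lands on the quoted figure to the dollar suggests this is essentially how the calculation was done.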

One can quibble with the methodology—after all, the 2018-19 costs presumably overestimate the costs of incarcerating someone who was released from custody on May 9, 2013—but at least it illustrates that a cost of wrongful incarceration CAN be calculated. Add to that the costs of prosecuting the wrong person (including jury duty daily fees), and the costs can be quantified.

To a certain extent.