My entry for the Spilled Coffee Story Challenge

All the cool kids are doing online social media challenges. Some of these challenges, such as the Ice Bucket Challenge, are very beneficial to society. Others, such as the Tide Pod Challenge, are not.

I believe that this challenge, the Spilled Coffee Story Challenge, falls somewhere between the two. It won’t cure any debilitating diseases, but it won’t kill you either.

Before continuing, I want to emphasize that this is the Spilled Coffee STORY Challenge, not the Spilled Coffee Challenge. The Spilled Coffee Challenge could be very dangerous, because coffee is hot. So DON’T do that.

By Julius Schorzman – Own work, CC BY-SA 2.0, https://commons.wikimedia.org/w/index.php?curid=107645

Now most of you have never heard of the Spilled Coffee Story Challenge. That’s because I just made it up based upon an online conversation. So I’ll start by explaining how the Spilled Coffee Story Challenge came to be, and then I’ll tell my spilled coffee story.

How the Spilled Coffee Story Challenge came to be

Not too long ago, Sumair Abro and Rhonda Salvestrini were on a podcast together, talking about storytelling. To illustrate the importance of storytelling, Abro proceeded to…tell a story. It’s a story that he overheard about a woman who spilled coffee. By the end of the story, we all knew that…well, I’ll let Abro tell his story. The video can be found here.

After telling the story, Abro mentioned three points:

  1. “When you tell a story from your personal experience – people are genuinely interested.”
  2. “Don’t show all your cards immediately – have an element of surprise.” (Abro’s story DEFINITELY had a surprise at the end, revealing how spilling coffee could be a wonderful event for a particular person.)
  3. “Tell your story to the right audience.”

Salvestrini then chimed in, noting how stories need to be engaging and relevant.

Before going on, I should note that the brief clip I linked above is actually part of a longer conversation between Abro and Salvestrini, which I mentioned before in this blog post.

But in this case, we’re only talking about the short excerpt on storytelling. I shared this excerpt myself on my Bredemarket LinkedIn page, making the following comment as I did so:

But my coffee-spilling story, in which I almost spilled coffee on a customer (but thankfully didn’t), would be hard to spin into a wonderful business truth.

This prompted a response from Rhonda Salvestrini:

Coffee-spilling stories are authentic and let our audience know that we are human. I’m sure you can spin it into a wonderful business truth. Let’s try!

Sumair Abro also chimed in:

hahaha..you dont need to spin it. It’s authentic as mentioned by Rhonda

Well, Rhonda and Sumair…CHALLENGE ACCEPTED.

My Spilled Coffee Story

My spilled coffee story took place a few years ago, when I was working for MorphoTrak. MorphoTrak was formed by the merger of two former competitors that combined their operations, including their previously separate user conferences. I had been involved with the old Motorola User Conferences, so I knew the customers from that side of the company. And as time went on, I got to meet the customers from the non-Motorola side of the company (the Sagem Morpho side).

Me at a User Conference, several years after the coffee incident.

One of the ex-Sagem Morpho customers was from Hawaii. Specifically, the Hawaii Criminal Justice Data Center. This customer not only used MorphoTrak’s fingerprint identification technology, but also used its facial recognition technology, providing Hawaii law enforcement with the ability to use faces as an investigative lead when solving crimes.

Several years ago, the Hawaii Criminal Justice Data Center was represented on the Users Conference Executive Board by Liane Moriyama. Moriyama is a key figure in Hawaii criminal justice, since she was present when Hawaii established its first automated fingerprint identification system in 1990, and was also present for the establishment of Hawaii's facial recognition system in 2013. But she is proudest of her accomplishments on behalf of vulnerable populations:

“We realized that we needed to help the non-criminal justice communities by using the technology and the biometrics (to protect) our vulnerable populations, our children, our disabled and our elderly through licensing and background checks. That really does protect the common citizen, and the culmination of all of that is when I was elected chair of the National Crime Prevention and Privacy Compact Council. I served two terms as the chair nationally and we have made tremendous strides in keeping the vulnerable populations safe.”

Liane Moriyama, Women in Biometrics 2017 Award recipient, quoted in Secure ID News

So Moriyama was a key customer for MorphoTrak, and a nationally recognized public security figure. Oh, and she’s a wonderful woman also (she gave away more macadamia nuts than the guy from Magnum P.I.).

All of this was very true when I was walking down the hall one fateful day. The Users Conference Executive Board was in town planning the next Users Conference. I was not involved in Users Conference planning at the time, but I would usually see Liane and the other customers when they were in the facility.

USUALLY I’d see them.

I didn’t see her one day when I went to the lunchroom to get some coffee, then exited the lunchroom and turned the corner.

Only THEN did I see her, as I turned the corner and found her right in front of me.

And disaster struck, and I spilled my coffee.

Luckily, I spilled it on MYSELF, and DIDN’T spill it on Liane.

She was extremely concerned about the fact that I had spilled coffee on myself, and I was incredibly relieved that I hadn’t spilled coffee on her.

Because if you have the choice, it’s better for you to suffer a mishap than for the client to suffer one.

So all ended well. Liane didn’t have to incur a dry cleaning bill while traveling, I took care of my own clothes, and she still gave me macadamia nuts in the future.

So now I’ll ask you: is “if you have the choice, it’s better for you to suffer a mishap than for the client to suffer one” a wonderful business truth?

Facial recognition and the U.S. Capitol attack

This post examines a number of issues regarding the use of facial recognition. Specifically, it looks at various ways to use facial recognition to identify people who participated in the U.S. Capitol attack.

By TapTheForwardAssist – Own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=98670006

Let’s start with the technological issues before we look at the legal ones. Specifically, we’ll look at three possible ways to construct databases (galleries) to use for facial recognition, and the benefits and drawbacks of each method.

What a facial recognition system does, and what it doesn’t do

The purpose of a one-to-many facial recognition system is to take a facial image (a “probe” image), process it, and compare it to a “gallery” of already-processed facial images. The system then calculates some sort of mathematical likelihood that the probe matches some of the images in the gallery.

That’s it. That’s all the system does, from a technological point of view.

Although it's outside the scope of this particular post, I do want to say that a facial recognition system does NOT determine a match. Now the people USING the system could make the decision that one or more of the images in the gallery should be TREATED as a match, based upon mathematical considerations. However, when a facial recognition system is used in the United States for criminal purposes, the general procedure is for a trained facial examiner to use his/her expertise to compare the probe image with selected gallery images. This trained examiner will then make a determination, regardless of what the technology says.
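To make the probe-versus-gallery idea concrete, here is a minimal sketch in Python of what a one-to-many search might return. Everything in it is hypothetical: the tiny feature vectors, the record IDs, the cosine-similarity scoring, and the 0.80 threshold are all invented for illustration, and real systems use far richer proprietary templates and matchers. The point is simply that the output is a ranked list of scored candidates, not a declaration of a match.

```python
# A minimal, hypothetical sketch of a one-to-many facial recognition search.
# The feature vectors, record IDs, and threshold below are invented for
# illustration; real systems use proprietary templates and matchers.

from math import sqrt

def cosine_similarity(a, b):
    """Crude similarity score between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def search_gallery(probe, gallery, threshold=0.80):
    """Return candidates ranked by score; the system does NOT declare a match."""
    candidates = []
    for record_id, template in gallery.items():
        score = cosine_similarity(probe, template)
        if score >= threshold:
            candidates.append((record_id, round(score, 3)))
    return sorted(candidates, key=lambda c: c[1], reverse=True)

# Hypothetical templates (real ones have hundreds of dimensions).
gallery = {
    "record-001": [0.90, 0.10, 0.30],
    "record-002": [0.20, 0.80, 0.50],
    "record-003": [0.85, 0.15, 0.35],
}
probe = [0.88, 0.12, 0.32]

# The output is a ranked candidate list for a human examiner to review.
print(search_gallery(probe, gallery))
```

A human examiner (or agency policy) then decides what, if anything, to do with that candidate list.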

But forget about that for now. I want to concentrate on another issue—adding data to the gallery.

Options for creating a facial recognition “gallery”

As I mentioned earlier, the “gallery” is the database against which the submitted facial image (the “probe”) is compared. In a one-to-many comparison, the probe image is compared against all or some of the images in the gallery. (I’m skipping over the “all or some” issue for now.)

So where do you get facial images to put in the gallery?

For purposes of this post, I’m going to describe three sources for gallery images.

  • Government facial images of people who have been convicted of crimes.
  • Government facial images of people who have not necessarily been convicted of crimes, such as people who have been granted driver’s licenses or passports.
  • Publicly available facial images.

Before delving into these three sources of gallery images, I’m going to present a use case. A few of you may recognize it.

Let’s say that there is an important government building located somewhere, and that access to the building is restricted for security reasons. Now let’s say that some people breach that access and illegally enter the building. Things happen, and the people leave. (Again, why they left and weren’t detained immediately is outside the scope of this post.)

Now that a crime has been committed, the question arises—how do you use facial recognition to solve the crime?

A gallery of government criminal facial images

Let’s look at a case in which the images of people who trespassed at the U.S. Capitol…

Whoops, I gave it away! Yes, for those of you who didn’t already figure it out, I’m specifically talking about the people who entered the U.S. Capitol on Wednesday, January 6. (This will NOT be the only appearance of Captain Obvious in this post.)

Anyway, let’s see how the images of people who trespassed at the U.S. Capitol can be compared against a gallery of images of criminals.

From here on in, we need to look not only at technological issues, but also at legal issues. Technology does not exist in a vacuum; it can (or at least should) only be used in accordance with the law.

So we have a legal question: can criminal facial images be lawfully used to identify people who have committed crimes?

In most cases, the answer is yes. The primary reason that criminal databases are maintained in the first place is to identify repeat offenders. If someone habitually trespasses into government buildings, the government would obviously like to know when the person trespasses into another government building.

But why did I say “in most cases”? Because there are cases in which a previously-created criminal record can no longer be used.

  • The record is sealed or expunged. This could happen, for example, if a person committed a crime as a juvenile. After some time, the record could be sealed (prohibiting most access) or expunged (removed entirely). If a record is sealed or expunged, then data in the record (including facial images) shouldn’t be available in the gallery.
  • The criminal is pardoned. If someone is pardoned of a crime, then it’s legally the same as if the crime were never committed at all. In that case, the pardoned person’s criminal record may (or may not) be removed from the criminal database. If it is removed, then again the facial image shouldn’t be in the gallery.
  • The crime happened a long time ago. Decades ago, it cost a lot of money to store criminal records, and due to budgetary constraints it wasn’t worthwhile to keep on storing everything. In my corporate career, I’ve encountered a lot of biometric requests for proposal (RFPs) that required conversion of old data to the new biometric system…with the exception of the old stuff. It stands to reason that if the old arrest record from 1960 is never converted to the new system, then that facial image won’t be in the gallery.

So, barring those exceptions, a search of our probe image from the U.S. Capitol could potentially hit against records in the gallery of criminal facial images.

Great, right?

Well, there are a couple of issues to consider.

First, there are a lot of criminal databases out there. For those who imagine that the FBI, and the CIA, and the BBC, BB King, and Doris Day (yes) have a single massive database with every single criminal record out there…well, they don’t.

  • There are multiple federal criminal databases out there, and it took many years to get two of the major ones (from the FBI and the Department of Homeland Security) to talk to each other.
  • And every state has its own criminal database; some records are submitted to the FBI, and some aren’t.
  • Oh, and there are also local databases. For many years, one of my former employers was the automated fingerprint identification system provider for Bullhead City, Arizona. And there are a lot of Bullhead City-sized databases; one software package, AFIX Tracker (now owned by Aware), has over 500 installations.

So if you want to search criminal databases, you're going to have to search a bunch of them. Between the multiple federal databases, the state and territory databases, and the local databases, there are hundreds upon hundreds of databases to search. That could take a while.

Which brings us to the second issue, in which we put on our Captain Obvious hat. If a person has never committed a crime, the person’s facial image is NOT in a criminal database. While biometric databases are great at identifying repeat offenders, they’re not so good at identifying first offenders. (They’re great at identifying second offenders, when someone is arrested for a crime and matches against an unidentified biometric record from a previous crime.)

So even if you search all the criminal databases, you’re only going to find the people with previous records. Those who were trespassing at the U.S. Capitol for the first time are completely invisible to a criminal database.

So something else is needed.

A gallery of government non-criminal facial images

Faced with this problem, you may ask yourself (yes), “What if the government had a database of people who hadn’t committed crimes? Could that database be used to identify the people who stormed the U.S. Capitol?”

Well, various governments DO have non-criminal facial databases. The two most obvious examples are the state databases of people who have driver’s licenses or state ID cards, and the federal database of people who have passports.

(This is an opportune time to remind my non-U.S. readers that the United States does not have national ID cards, and any attempt to create a national ID card is fought fiercely.)

I'll point out the Captain Obvious issue right now: if someone never gets a passport or driver's license, they're not going to be in these non-criminal databases. This is of course a small subset of the population, but it's a potential issue.

There’s a much bigger issue regarding the legal ability to use driver’s license photos in criminal investigation. As of 2018, 31 states allowed the practice…which means that 19 didn’t.

So while searches of driver's license databases offer a good way to identify Capitol trespassers, they're not perfect either.

A gallery of publicly available facial images

Which brings us to our third way to populate a gallery of facial images to identify Capitol trespassers.

It turns out that governments are not the only ones that store facial images. You can find facial images everywhere. My own facial image can be found in countless places, including a page on the Bredemarket website itself.

There are all sorts of sites that post facial images that are accessible to the public. A few of these sites include Facebook, Google (including YouTube), LinkedIn (part of Microsoft), Twitter, and Venmo. (We'll return to those companies later.)

In many cases, these images are tied to (non-verified) identities. For example, if you go to my LinkedIn page, you will see an image that purports to be the image of John Bredehoft. But LinkedIn doesn't know with 100% certainty that this is really an image of John Bredehoft. Perhaps "John Bredehoft" exists, but the posted picture is not that of John Bredehoft. Or perhaps "John Bredehoft" doesn't exist and is a synthetic identity.

But regardless, there are billions of images out there, tied to billions of purported identities.

What if you could compare the probe images from the U.S. Capitol against a gallery of those billions of images—many more images than held by any government?

It turns out that you CAN perform that comparison, and that law enforcement did perform that comparison.

Clearview AI’s…facial-recognition app has seen a spike in use as police track down the pro-Trump insurgents who descended on the Capitol on Wednesday….

Clearview AI CEO Hoan Ton-That confirmed to Gizmodo that the app saw a 26% jump in search volume on Jan. 7 compared to its usual weekday averages….

Detectives at the Miami Police Department are using Clearview’s tech to identify rioters in images and videos of the attack and forwarding suspect leads to the FBI, per the Times. Earlier this week, the Wall Street Journal reported that an Alabama police department was also employing Clearview’s tech to ID faces in footage and sending potential matches along to federal investigators.

But now we need to return to the legal question: is “publicly available” equivalent to “publicly usable”?

Certain companies, including the aforementioned Facebook, Google (including YouTube), LinkedIn (part of Microsoft), Twitter, and Venmo, maintain that Clearview AI does NOT have permission to use their publicly available data. Not because of government laws, but because of the companies’ own policies. Here’s what two of the companies said about a year ago:

“Scraping people’s information violates our policies, which is why we’ve demanded that Clearview stop accessing or using information from Facebook or Instagram,” Facebook’s spokesperson told Business Insider….

“YouTube’s Terms of Service explicitly forbid collecting data that can be used to identify a person. Clearview has publicly admitted to doing exactly that, and in response, we sent them a cease-and-desist letter.”

For its part, Clearview AI maintains that its First Amendment rights supersede the companies' terms of service.

But other things come into play in addition to terms of service. Lawsuits filed in 2020 allege that Clearview AI's practices violate the California Consumer Privacy Act of 2018, and the even more stringent Illinois Biometric Information Privacy Act of 2008. BIPA is so stringent that even Google is affected by it; as I've previously noted, Google's Nest Hello Video Doorbell's "familiar face" alerts are not available in Illinois.

Between corporate complaints and aggrieved citizens, the jury is literally still out on Clearview AI’s business model. So while it may work technologically, it may not work legally.

And one more thing

Of course, people are asking themselves, why do we even need to use facial recognition at all? After all, some of the trespassers actually filmed themselves trespassing. And when people see the widely-distributed pictures of the trespassers, they can be identified without using facial recognition.

Yes, to a point.

While it seems intuitive that eyewitnesses can easily identify people in photos, it turns out that such identifications can be unreliable. As the California Innocence Project reminds us:

One of the main causes of wrongful convictions is eyewitness misidentifications. Despite a high rate of error (as many as 1 in 4 stranger eyewitness identifications are wrong), eyewitness identifications are considered some of the most powerful evidence against a suspect.

The California Innocence Project then provides an example of a case in which someone was inaccurately identified due to an eyewitness misidentification. Correction: it provides 11 examples, including ones in which the images were presented to the witnesses in a controlled environment (six-pack lineups, similar backgrounds).

The FBI project, in which people look at images captured from the U.S. Capitol itself, is NOT a controlled environment.

Biometric writing, and four ways to substantiate a claim of high biometric accuracy

I wanted to illustrate the difference between biometric writing and SUBSTANTIVE biometric writing.

A particular company recently promoted its release of a facial recognition application. The application was touted as “state-of-the-art,” and the press release mentioned “high accuracy.” However, the press release never supported the state-of-the-art or high accuracy claims.

By Cicero Moraes – Own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=66803013

Let's concentrate on the high accuracy claim. There are four methods by which a biometric vendor (facial recognition, fingerprint identification, iris recognition, whatever) can substantiate a high accuracy claim. This particular company did not employ ANY of these methods.

  • The first method is to publicize the accuracy results of a test that you designed and conducted yourself. This method has its drawbacks, since if you’re administering your own test, you have control over the reported results. But it’s better than nothing.
  • The second method is for you to conduct a test that was designed by someone else. An example of such a test is Labeled Faces in the Wild (LFW). There used to be a test called Megaface, but this project has concluded. A test like this is good for research, but there are still issues; for example, if you don’t like the results, you just don’t submit them.
  • The third method is to have an independent third party design AND conduct the test, using test data. A notable example of this method is the Face Recognition Vendor Test (FRVT) series sponsored by the U.S. National Institute of Standards and Technology (NIST). Yet even this test has drawbacks for some people, since the data used to conduct the test is…test data.
  • The fourth method, which could be employed by an entity (such as a government agency) that is looking to purchase a biometric system, is to have the entity design and conduct the test using its own data. Of course, the results of an accuracy test conducted using the biometric data of a local police agency in North America cannot be applied to determine the accuracy of a national passport system in Asia.

So, these are four methods to substantiate a "high accuracy" claim. Each method has its advantages and disadvantages, and it is possible for a vendor to explain WHY it chose one method over the others. (For example, one facial recognition vendor explained that it couldn't submit its application for NIST FRVT testing because the NIST testing design was not compatible with the way that this vendor's application worked. For this particular vendor, methods 1 and 4 were better ways to substantiate its accuracy claims.)
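Whichever of the four methods is chosen, the underlying arithmetic of an accuracy test is the same: how often do genuine (same-person) comparisons score below the decision threshold, and how often do impostor (different-person) comparisons score above it? Here is a minimal sketch in Python of those two error rates, the false non-match rate (FNMR) and the false match rate (FMR). The score lists and the 0.80 threshold are invented for illustration; a real evaluation would involve thousands or millions of comparisons.

```python
# A minimal, hypothetical sketch of the arithmetic behind an accuracy test.
# The scores and threshold are invented for illustration only.

def false_non_match_rate(genuine_scores, threshold):
    """Fraction of same-person comparisons that score below the threshold."""
    misses = sum(1 for s in genuine_scores if s < threshold)
    return misses / len(genuine_scores)

def false_match_rate(impostor_scores, threshold):
    """Fraction of different-person comparisons that score at or above the threshold."""
    hits = sum(1 for s in impostor_scores if s >= threshold)
    return hits / len(impostor_scores)

# Invented scores: same-person comparisons should score high,
# different-person comparisons should score low.
genuine = [0.91, 0.88, 0.95, 0.79, 0.90, 0.85, 0.93, 0.87]
impostor = [0.30, 0.42, 0.51, 0.83, 0.22, 0.38, 0.45, 0.29]

threshold = 0.80
print("FNMR:", false_non_match_rate(genuine, threshold))  # 1 of 8 genuine comparisons missed
print("FMR:", false_match_rate(impostor, threshold))      # 1 of 8 impostor comparisons falsely matched
```

A vendor that claims "high accuracy" should be able to report numbers like these, along with an explanation of where the test data came from and which of the four methods produced them.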

But if a company claims “high accuracy” without justifying the claim with ANY of these four methods, then the claim is meaningless. Or, it’s “biometric writing” without substantiation.