Canada’s IRCC ITQ B7059-180321/B and the biometric proposals chess match

In a competitive bid process, one unshakable truth is that everything you do will be seen by your competitors. This affects what you as a bidder do…and don’t do.

My trip to Hartford for a 30-minute meeting

I saw this in action many years ago when I was the product manager for Motorola’s Omnitrak product (subsequently Printrak BIS, subsequently part of MorphoBIS, subsequently part of MBIS). Connecticut and Rhode Island went out to bid for a two-state automated fingerprint identification system (AFIS). As part of the request for proposal process, the state of Connecticut scheduled a bidders’ conference. This was well before online videoconferencing became popular, so if you wanted to attend this bidders’ conference, you had to physically go to Hartford, Connecticut.

The Mark Twain House in Hartford. For reasons explained in this post, I spent more time here than I did at the bidders’ conference itself. By Makemake, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=751488

So I flew from California to Connecticut to attend the conference, and other people from other companies made the trip. That morning I drove from my hotel to the site of the conference (encountering a traffic jam much worse than the usual traffic jams back home), and my competitors and I assembled and waited for the bidders’ conference to begin.

The state representative opened the floor up to questions from bidders.

Silence.

No one asked a question.

We were all eyeing each other, seeing what the other people were going to ask, and none of us were willing to tip our hands by asking a question ourselves.

Eventually one or two minor questions were asked, but the bidders’ conference ended relatively quickly.

There are a number of chess-like tactics related to what bidders do and don’t do during proposals. Perhaps some day I’ll write a Bredemarket Premium post on the topic and spill my secrets.

But for now, let’s just say that all of the bidders successfully kept their thoughts to themselves during that conference. And I got to visit a historical site, so the trip wasn’t a total waste.

And today, it’s refreshing to know that things don’t change.

When the list of interested suppliers appears to be null

On September 24, the Government of Canada issued an Invitation to Qualify (B7059-180321/B) for a future facial recognition system for immigration purposes. Although the invitation has been out for a few weeks, I didn’t hear about it until Biometric Update mentioned it this morning.

Now Bredemarket isn’t going to submit a response (even though section 2.3a says that I can), but Bredemarket can obviously help those companies that ARE submitting a response. I have a good idea who the possible players are, but to check my assumptions I visited the List of Interested Suppliers page to see if there were any interested suppliers that I had missed. The facial recognition market is changing rapidly, so I wondered if some new names were popping up.

So what did I see when I visited the List of Interested Suppliers?

An invitation for me to become the FIRST listed interested supplier.

That’s right, NO ONE has publicly expressed interest in this bid.

A screenshot of https://buyandsell.gc.ca/procurement-data/tender-notice/PW-XS-002-39912/list-of-interested-suppliers as of the late morning (Pacific time) on Monday, October 11.

And yes, I also checked the French list; no names there either.

There are three possible reasons for this:

  1. Potential bidders don’t know about the Invitation to Qualify. This is theoretically possible; after all, Biometric Update didn’t learn about the invitation until two weeks after it was issued.
  2. No one is interested in bidding on a major facial recognition program. Yeah, right.
  3. Multiple companies ARE interested in this bid, but none wants to tip its hand and let competitors know of its interest.

My money is on reason three.

Hey, bidders. I can keep your secret.

As you may have gathered, as of Monday, October 11, I am not part of any team responding to this Invitation to Qualify.

If you are a biometric vendor who needs help in composing your response to IRCC ITQ B7059-180321/B before the November 3 due date, or in framing questions (yes, there are chess moves on that also), let me know.

I won’t tell anybody.

Under the lens…and under many other things: Ambarella

You’ll notice that while I do style myself as an expert on some things, I never claim that I know everything…because I obviously don’t.

This became clear to me when I was watching the Paravision Converge 2021 video and noticed its emphasis on optimizing Paravision’s recognition algorithms for Ambarella.

Ambarella-related announcements from https://www.paravision.ai/converge2021/.

I had never heard of Ambarella.

I should have heard of it.

Even in my own little corner of the technology world, Ambarella has made an impact:

We power a majority of the world’s police body cams.

We were the first to enable HD and UHD security with low power; we revolutionized image processing for low-light and high-contrast scenes; and we are an industry leader in next-generation AI video security solutions.

Video has been a key component of face detection, person detection, and face recognition for years. (Not really of iris recognition…yet.) In certain use cases, it’s extremely desirable to move the processing out from a centralized server system to edge devices such as body cams, smart city cameras, and road safety cameras, and Ambarella (and its software partners) optimizes this processing.
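To make the edge-processing pattern concrete, here is a minimal Python sketch (my own illustration, not Ambarella’s pipeline): the device detects faces locally and ships only the cropped regions upstream, instead of streaming full video to a central server. The OpenCV Haar cascade stands in for the optimized on-chip inference that a vision SoC would run, and send_to_server is a hypothetical uplink call.

```python
import cv2

# Stand-in for an optimized on-device detector; a real edge deployment
# would run a neural network on the camera's vision processor instead.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def process_frame_on_edge(frame):
    """Detect faces locally and return only the cropped face regions."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    # Only these small crops need to leave the device, not the full stream.
    return [frame[y:y + h, x:x + w] for (x, y, w, h) in faces]

capture = cv2.VideoCapture(0)  # e.g., a body cam or doorbell sensor
ok, frame = capture.read()
if ok:
    for crop in process_frame_on_edge(frame):
        pass  # send_to_server(crop) -- hypothetical uplink call
capture.release()
```

The bandwidth and privacy win comes from what the code doesn’t do: no raw video ever has to leave the camera.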

In addition to professional (and consumer) security, Ambarella is also a player in automotive solutions including autonomous vehicles, non-security consumer applications, and a variety of IoT/industrial/robotics applications.

All of these markets are supported via Ambarella’s specialized chip architecture:

Our CVflow® chip architecture is based on a deep understanding of core computer vision algorithms. Unlike general-purpose CPUs and GPUs, CVflow includes a dedicated vision processing engine programmed with a high-level algorithm description, allowing our architecture to scale performance to trillions of operations per second with extremely low power consumption.

I’ve always been of the opinion that technology is moving away from specialized hardware to commercial off-the-shelf (COTS) hardware. For example, the fingerprint processing and matching that used to require high-end UNIX computers with custom processor boards in the 1990s can now be accomplished on consumer-grade smartphones.

However, these consumer-grade devices can now perform these operations because specialized technologies have been miniaturized and optimized for incorporation into consumer-grade devices, such as Yi home video cameras.

Ambarella itself is US-based (in Santa Clara, California), was founded in 2004, is traded on NASDAQ, and is a $200+ million/year company (although revenues and profits have declined over the last few years). While much smaller than more famous semiconductor companies, Ambarella obviously fills a critical niche for (among others) professional security product firms.

So if you, like me, had never heard of Ambarella…now you have.

A tool is not a way of…bad things

For years I’ve uttered the phrase “a tool is not a way of life,” and a recent statement from Rank One Computing reminded me of this fact. In a piece on the ethical use of facial recognition, Rank One Computing stated the following in passing:

[Rank One Computing] is taking a proactive stand to communicate that public concerns should focus on applications and policies rather than the technology itself.

I emphatically believe that all technologies are neutral. They can be used for good, or they can be used for…bad things.

And yes, facial recognition has been misused.

It is an undeniable fact that a police jurisdiction used a computerized facial recognition result as the sole justification for an arrest, rather than as an investigative lead that would need to be supported by additional evidence.

But that incident, or ten incidents, or one hundred incidents, does NOT mean that ALL uses of facial recognition should be demonized, or even that SELECTED uses of facial recognition should be demonized (Amazon bad; Apple good).

Policies are not foolproof

Now I will grant that establishment of a policy or procedure does NOT necessarily mean that people will always act in compliance with that policy/procedure.

As an example, one accepted practice is double-blind lineup administration, in which different people handle different parts of the lineup creation and witness viewing process. These two roles can be distinct:

  • A person who knows who the arrested individual is creates the lineup (with additional safeguards to ensure that the created lineup isn’t biased).
  • A second person who DOESN’T know who the arrested individual is shows the lineup to the witness and records what the witness says and doesn’t say when viewing the lineup. The reason for the presence of a separate person is to ensure that the person administering the lineup doesn’t provide subconscious (or conscious) hints as to who the “right” person would be.

Now you can set up your police department’s procedures to require this, and your software vendor could design its software to support this. But that doesn’t prevent a corrupt Chief of Police from saying, “Jane, I want you to create the lineup AND show it to the witness. And make sure the witness chooses the RIGHT guy!”
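As a toy illustration of how a vendor’s software might support that separation of roles, here is a minimal Python sketch (the class and method names are my own invention, not any vendor’s API) in which the system simply refuses to let the person who built the lineup also administer it:

```python
class LineupSession:
    """Toy enforcement of double-blind lineup roles."""

    def __init__(self, creator_id):
        self.creator_id = creator_id
        self.photos = []

    def add_photo(self, photo_id):
        self.photos.append(photo_id)

    def administer(self, administrator_id):
        # The administrator must differ from the creator, so the person
        # showing the lineup cannot hint at the "right" answer.
        if administrator_id == self.creator_id:
            raise PermissionError(
                "Double-blind rule: the creator may not administer the lineup")
        return self.photos

session = LineupSession(creator_id="det_jones")
session.add_photo("suspect_042")
session.add_photo("filler_117")
session.administer(administrator_id="officer_smith")  # permitted
# session.administer(administrator_id="det_jones")    # would raise
```

Of course, a check like this only works if the user identities are honest; if the Chief orders Jane to perform both jobs under two different logins, no code will save you.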

But policy-based facial recognition is better than no facial recognition at all

But…if I may temporarily allow myself to run a tired cliché into the ground, that doesn’t mean you throw out the baby with the bathwater.

From 1512. Old clichés are old. Public Domain, https://commons.wikimedia.org/w/index.php?curid=689179

Rather than banning facial recognition, we should concentrate on defining ethical uses.

And there’s one more thing to consider. If you ban computerized facial recognition, how are you going to identify people? As I’ve noted elsewhere, witness (mis)identification is riddled with biases that make even the bottom-tier facial recognition algorithms seem accurate.

More on the Israeli master faces study

Eric Weiss of FindBiometrics has opined on the Tel Aviv master faces study that I previously discussed.

Oops, wrong “Faces.” Oh well. By Warner Bros. Records – Billboard, page 18, 14 November 1970, Public Domain, https://commons.wikimedia.org/w/index.php?curid=27031391

While he does not explicitly talk about the myriad of facial recognition algorithms that were NOT addressed in the study, he does have some additional details about the test dataset.

The three algorithms that were tested

Here’s what FindBiometrics says about the three algorithms that were tested in the Israeli study.

The researchers described (the master faces) as master keys that could unlock the three facial recognition systems that were used to test the theory. In that regard, they challenged the Dlib, FaceNet, and SphereFace systems, and their nine master faces were able to impersonate more than 40 percent of the 5,749 people in the LFW set.

While it initially sounds impressive to say that three facial recognition algorithms were fooled by the master faces, bear in mind that there are hundreds of facial recognition algorithms tested by NIST alone, and (as I said earlier) the test has NOT been duplicated against any algorithms other than the three open-source algorithms mentioned.

…let’s look at the algorithms themselves and evaluate the claim that results for the three algorithms Dlib, FaceNet, and SphereFace can naturally be extrapolated to ALL facial recognition algorithms….NIST’s subsequent study…evaluated 189 algorithms specifically for 1:1 and 1:N use cases….“Tests showed a wide range in accuracy across developers, with the most accurate algorithms producing many fewer errors.”

In short, just because the three open-source algorithms were fooled by master faces doesn’t mean that commercial-grade algorithms would also be fooled by master faces. Maybe they would be fooled…or maybe they wouldn’t.

What about the dataset?

The three open source algorithms were tested against the dataset from Labeled Faces in the Wild. As I noted in my prior post, the LFW people emphasize some important caveats about their dataset, including the following:

Many groups are not well represented in LFW. For example, there are very few children, no babies, very few people over the age of 80, and a relatively small proportion of women. In addition, many ethnicities have very minor representation or none at all.

In the FindBiometrics article, Weiss provides some additional detail about dataset representation.

…there is good reason to question the researchers’ conclusion. Only two of the nine master faces belong to women, and most depicted white men over the age of 60. In plain terms, that means that the master faces are not representative of the global public, and they are not nearly as effective when applied to anyone that falls outside one particular demographic.

That discrepancy can largely be attributed to the limitations of the LFW dataset. Women make up only 22 percent of the dataset, and the numbers are even lower for children, the elderly (those over the age of 80), and for many ethnic groups.

Valid points to be sure, although the definition of a “representative” dataset varies depending upon the use case. For example, a representative dataset for a law enforcement database in the city of El Paso, Texas, will differ from a representative dataset for an airport database catering to Air France customers.

So what conclusion can be drawn?

Perhaps it’s just me, but scientific entities that conduct studies are always motivated by the need for additional funding. After a study is completed, it seems that the entities always conclude that “more research is needed”…which can be self-serving, because as long as more research is needed, the scientific entities can continue to receive necessary funding. Imagine the scientific entity that would dare to say “Well, all necessary research has been conducted. We’re closing down our research center.”

But in this case, there IS a need to perform additional research, to test the master faces against different algorithms and against different datasets. Then we’ll know whether this statement from the FindBiometrics article (emphasis mine) is actually true:

Any face-based identification system would be extremely vulnerable to spoofing…

Faulty “journalism” conclusions: the Israeli “master faces” study DIDN’T test ANY commercial biometric algorithms

Modern “journalism” often consists of reprinting a press release without subjecting it to critical analysis. Sadly, I see a lot of this in publications, including both biometric and technology publications.

This post looks at the recently announced master faces study results, the datasets used (and the datasets not used), the algorithms used (and the algorithms not used), and the (faulty) conclusions that have been derived from the study.

Oh, and it also informs you of a way to make sure that you don’t make the same mistakes when talking about biometrics.

Vulnerabilities from master faces

In facial recognition, there is a concept called “master faces” (similar concepts can be found for other biometric modalities). The idea behind master faces is that a single face can potentially match against MULTIPLE enrolled faces, not just one. This is similar to a master key that can unlock many doors, not just one.

This can conceivably happen because facial recognition algorithms do not compare face images directly; they compare feature sets derived from those images. So if you can create the right “master” feature set, it can potentially match more than one face.
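To see why a single feature set can match many people, here is a minimal Python sketch, under the simplifying assumption that an algorithm reduces each face to an embedding vector and declares a match whenever the cosine similarity of two embeddings exceeds a threshold. The vectors and threshold are invented for illustration; a “master” feature set is simply one that lands close to several enrolled templates at once.

```python
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

THRESHOLD = 0.80  # illustrative match threshold, not any vendor's setting

# Toy "derived feature" vectors for three enrolled people.
enrolled = {
    "alice":   np.array([0.9, 0.1, 0.3]),
    "bob":     np.array([0.8, 0.3, 0.2]),
    "charlie": np.array([0.1, 0.9, 0.8]),
}

# A candidate "master" feature set: it equals no single template, but it
# sits close enough to more than one of them at the same time.
master = np.array([0.85, 0.2, 0.25])

matches = [name for name, template in enrolled.items()
           if cosine_similarity(master, template) >= THRESHOLD]
print(matches)  # ['alice', 'bob'] -- one vector "unlocks" two identities
```

A real master-face attack searches for such a vector systematically rather than guessing it, but the geometric intuition is the same.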

However, this is not just a concept. It’s been done, as Biometric Update informs us in an article entitled ‘Master faces’ make authentication ‘extremely vulnerable’ — researchers.

Ever thought you were being gaslighted by industry claims that facial recognition is trustworthy for authentication and identification? You have been.

The article goes on to discuss an Israeli research project that demonstrated some true “master faces” vulnerabilities. (Emphasis mine.)

One particular approach, which they write was based on Dlib, created nine master faces that unlocked 42 percent to 64 percent of a test dataset. The team also evaluated its work using the FaceNet and SphereFace, which like Dlib, are convolutional neural network-based face descriptors.

They say a single face passed for 20 percent of identities in Labeled Faces in the Wild, an open-source database developed by the University of Massachusetts. That might make many current facial recognition products and strategies obsolete.

Sounds frightening. After all, the study not only used Dlib, FaceNet, and SphereFace, but also made reference to a test set from Labeled Faces in the Wild. So it’s obvious why master faces techniques might make many current facial recognition products obsolete.

Right?

Let’s look at the datasets

It’s always more impressive to cite an authority, and citations of the University of Massachusetts’ Labeled Faces in the Wild (LFW) are no exception. After all, this dataset has been used for some time to evaluate facial recognition algorithms.

But what does Labeled Faces in the Wild say about…itself? (I know this is a long excerpt, but it’s important.)

DISCLAIMER:

Labeled Faces in the Wild is a public benchmark for face verification, also known as pair matching. No matter what the performance of an algorithm on LFW, it should not be used to conclude that an algorithm is suitable for any commercial purpose. There are many reasons for this. Here is a non-exhaustive list:

Face verification and other forms of face recognition are very different problems. For example, it is very difficult to extrapolate from performance on verification to performance on 1:N recognition.

Many groups are not well represented in LFW. For example, there are very few children, no babies, very few people over the age of 80, and a relatively small proportion of women. In addition, many ethnicities have very minor representation or none at all.

While theoretically LFW could be used to assess performance for certain subgroups, the database was not designed to have enough data for strong statistical conclusions about subgroups. Simply put, LFW is not large enough to provide evidence that a particular piece of software has been thoroughly tested.

Additional conditions, such as poor lighting, extreme pose, strong occlusions, low resolution, and other important factors do not constitute a major part of LFW. These are important areas of evaluation, especially for algorithms designed to recognize images “in the wild”.

For all of these reasons, we would like to emphasize that LFW was published to help the research community make advances in face verification, not to provide a thorough vetting of commercial algorithms before deployment.

While there are many resources available for assessing face recognition algorithms, such as the Face Recognition Vendor Tests run by the USA National Institute of Standards and Technology (NIST), the understanding of how to best test face recognition algorithms for commercial use is a rapidly evolving area. Some of us are actively involved in developing these new standards, and will continue to make them publicly available when they are ready.

So there are a lot of disclaimers in that text.

  • LFW is a 1:1 test, not a 1:N test. Therefore, while it can test how one face compares to another face, it cannot test how one face compares to a database of faces. The usual law enforcement use case is to compare a single face (for example, one captured from a video camera) against an entire database of known criminals. That’s a computationally different exercise from the act of comparing a crime scene face against a single criminal face, then comparing it against a second criminal face, and so forth. (See the sketch after this list.)
  • The people in the LFW database are not necessarily representative of the world population, the population of the United States, the population of Massachusetts, or any population at all. So you can’t conclude that a master face that matches against a bunch of LFW faces would match against a bunch of faces from your locality.
  • Captured faces exhibit a variety of quality levels. A face image captured by a camera three feet from you at eye level in good lighting will differ from a face image captured by an overhead camera in poor lighting. LFW doesn’t have a lot of these latter images.
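To make the computational difference between 1:1 and 1:N concrete, here is a minimal Python sketch (toy cosine-similarity matching with an illustrative threshold, not any production system): verification performs one comparison against a claimed identity, while identification must search the probe against every template in the gallery.

```python
import numpy as np

def similarity(a, b):
    """Toy cosine similarity between two face embeddings."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

THRESHOLD = 0.80  # illustrative, not a vendor setting

def verify(probe, claimed_template):
    """1:1 verification: a single comparison."""
    return similarity(probe, claimed_template) >= THRESHOLD

def identify(probe, gallery):
    """1:N identification: the probe is compared against EVERY template."""
    return [name for name, template in gallery.items()
            if similarity(probe, template) >= THRESHOLD]
```

Against a million-record gallery, identify performs a million comparisons for a single probe, so even a tiny per-comparison false match rate yields many spurious candidates. That is why strong 1:1 results on LFW say little about 1:N performance.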

I should mention one more thing about LFW. The LFW researchers allow testers to access the database itself, essentially making LFW an “open book test.” And as any student knows, if a test is open book, it’s much easier to get an A on the test.

By MCPearson – Own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=25969927

Now let’s take a look at another test that was mentioned by the LFW folks themselves: namely, NIST’s Face Recognition Vendor Test.

This is actually a series of tests that has evolved over the years; NIST is now conducting ongoing tests for both 1:1 and 1:N (unlike LFW, which only conducts 1:1 testing). This is important because most of the large-scale facial recognition commercial applications that we think about are 1:N applications (see my example above, in which a facial image captured at a crime scene is compared against an entire database of criminals).

In addition, NIST uses multiple data sets that cover a number of use cases, including mugshots, visa photos, and faces “in the wild” (i.e. not under ideal conditions).

It’s also important to note that NIST’s tests are also intended to benefit research, and do not necessarily indicate that a particular algorithm that performs well for NIST will perform well in a commercial implementation. (If the algorithm is even available in a commercial implementation: some of the algorithms submitted to NIST are research-only algorithms that never made it into a production system.) For the difference between testing an algorithm in a NIST test and testing an algorithm in a production system, please see Mike French’s LinkedIn article on the topic. (I’ve cited this article before.)

With those caveats, I will note that NIST’s FRVT tests are NOT open book tests. Vendors and other entities give their algorithms to NIST, NIST tests them, and then NIST tells YOU what the results were.

So perhaps it’s more robust than LFW, but it’s still a research project.

Let’s look at the algorithms

Now that we’ve looked at two test datasets, let’s look at the algorithms themselves and evaluate the claim that results for the three algorithms Dlib, FaceNet, and SphereFace can naturally be extrapolated to ALL facial recognition algorithms.

This isn’t the first time that we’ve seen such an attempt at extrapolation. After all, the MIT Media Lab’s Gender Shades study (which evaluated neither 1:1 nor 1:N use cases, but algorithmic attempts to identify gender and race) itself only used three algorithms. Yet the popular media conclusion from this study was that ALL facial recognition algorithms are racist.

Compare this with NIST’s subsequent study, which evaluated 189 algorithms specifically for 1:1 and 1:N use cases. While NIST did find some race/sex differences in algorithms, these were not universal: “Tests showed a wide range in accuracy across developers, with the most accurate algorithms producing many fewer errors.”

In other words, just because an earlier test of three algorithms demonstrated issues in determining race or gender, that doesn’t mean that the current crop of hundreds of algorithms will necessarily demonstrate issues in identifying individuals.

So let’s circle back to the master faces study. How do the results of this study affect “current facial recognition products”?

The answer is “We don’t know.”

Has the master faces experiment been duplicated against the leading commercial algorithms tested by Labeled Faces in the Wild? Apparently not.

Has the master faces experiment been duplicated against the leading commercial algorithms tested by NIST? Well, let’s look at the various ways you can define the “leading” commercial algorithms.

For example, here’s the view of the test set that IDEMIA would want you to see: the 1:N test sorted by the “Visa Border” column (results as of August 6, 2021):

And here’s the view of the test set that Paravision would want you to see: the 1:1 test sorted by the “Mugshot” column (results as of August 6, 2021):

From https://pages.nist.gov/frvt/html/frvt11.html as of August 6, 2021.

Now you can play with the sort order in many different ways, but the question remains: have the Israeli researchers, or anyone else, performed a “master faces” test (preferably a 1:N test) on the IDEMIA, Paravision, SenseTime, NtechLab, AnyVision, or ANY other commercial algorithm?

Maybe a future study WILL conclude that even the leading commercial algorithms are vulnerable to master face attacks. However, until such studies are actually performed, we CANNOT conclude that commercial facial recognition algorithms are vulnerable to master face attacks.

So naturally journalists approach the results critically…not

But I’m sure that people are going to make those conclusions anyway.

From https://xkcd.com/386/. Attribution-NonCommercial 2.5 Generic (CC BY-NC 2.5).

Does anyone even UNDERSTAND these studies? (Or do they choose NOT to understand them?)

How can you avoid the same mistakes when communicating about biometrics?

As you can see, people often write about biometric topics without understanding them fully.

Even biometric companies sometimes have difficulty communicating about biometric topics in a way that laypeople can understand. (Perhaps that’s the reason why people misconstrue these studies and conclude that “all facial recognition is racist” and “any facial recognition system can be spoofed by a master face.”)

Are you about to publish something about biometrics that requires a sanity check? (Hopefully not literally, but you know what I mean.)

Well, why not turn to a biometric content marketing expert?

Bredemarket offers over 25 years of experience in biometrics that can be applied to your marketing and writing projects.

If you don’t have a content marketing project now, you can still subscribe to my Bredemarket Identity Firm Services LinkedIn page or my Bredemarket Identity Firm Services Facebook group to keep up with news about biometrics (or about other authentication factors; biometrics isn’t the only one). Or scroll down to the bottom of this blog post and subscribe to my Bredemarket blog.

If my content creation process can benefit your biometric (or other technology) marketing and writing projects, contact me.

Maryland will soon deal with privacy stakeholders (and they CAN’T care about the GYRO method)

Just last week, I mentioned that the state of Utah appointed the Department of Government Operations’ first privacy officer. Now Maryland is getting into the act, and it’s worth taking a semi-deep dive into what Maryland is doing, and how it affects (or doesn’t affect) public safety.

By François Jouffroy – Christophe MOUSTIER (1994), Attribution, https://commons.wikimedia.org/w/index.php?curid=727606

According to Government Technology, the state of Maryland has created two new state information technology positions, one of which is the State Chief Privacy Officer. Because government, I will refer to this as the SCPO throughout the remainder of this post. If you are referring to this new position in verbal conversation, you can refer to the “Maryland skip-oh.” Or the “crab skip-oh.”

From https://teeherivar.com/product/maryland-is-for-crabs/. Fair use. Buy it if you like it. Virginians understand the origins of the phrase.

Governor Hogan announced the creation of the SCPO position via an Executive Order, a PDF of which can be found here.

Let me call out a few provisions in this executive order.

  • A.2 defines “personally identifiable information,” consisting of a person’s name in conjunction with other information, including but not limited to “[b]iometric information including an individual’s physiological or biological characteristics, including an individual’s deoxyribonucleic acid.” (Yes, that’s DNA.) Oh, and driver’s license numbers also.
  • At the same time, A.2 excludes “information collected, processed, or shared for the purposes of…public safety.”
  • But on the other hand, A.5 lists specific “state units” covered by certain provisions of the law, including both the Department of Public Safety and Correctional Services and the Department of State Police.
  • The state units are listed because every one of them will need to appoint “an agency privacy official” (C.2) who works with the SCPO.

There are other provisions, including the need for agency justification for the collection of personally identifiable information (PII), and the need to provide individuals with access to their collected PII along with the ability to correct or amend it.

But for law enforcement agencies in Maryland, the “public safety” exemption pretty much limits the applicability of THIS executive order (although other laws to correct public safety data would still apply).

Therefore, if some Maryland sheriff’s department releases an automated fingerprint identification system Request for Proposal (RFP) next month, you probably WON’T see a privacy advocate on the evaluation committee.

But what about an RFP released in 2022? Or an RFP released in a different state?

Be sure to keep up with relevant privacy legislation BEFORE it affects you.

Pangiam, CLEAR, and others make a “sporting” effort to deny (or allow) stadium access

Back when I initially entered the automated fingerprint identification systems industry in the last millennium, I primarily dealt with two markets: the law enforcement market that seeks to solve crimes and identify criminals, and the welfare benefits market that seeks to make sure that the right people receive benefits (and the wrong people don’t).

Other markets simply didn’t exist. If I pulled out my 1994-era mobile telephone and looked at it, nothing would happen. Today, I merely need to look at my 2020-era mobile telephone to obtain access to its features.

And there are other biometric markets also.

Pangiam and stadium bans

Back in 1994 I couldn’t envision a biometrics story in Sports Illustrated magazine. But SI just ran a story on how facial recognition can be used to keep fans who shouldn’t be there out of stadiums.

Some fans (“fanatics”) perform acts in stadiums that cause the sports teams and/or stadium authorities to officially ban them from the stadium, sometimes for life.

John Green is the man in the blue shirt and white baseball cap to Artest’s left. By Copyright 2004 National Basketball Association. – Television broadcast of the Pacers-Pistons brawl on ESPN., Fair use, https://en.wikipedia.org/w/index.php?curid=6824157

But in the past, these measures were ineffective.

For a long time, those “measures” were limited at best. Fans do not have to show ID upon entering arenas. Teams could run checks on all the credit cards to purchase tickets to see whether any belonged to banned fans, but those fans could easily have a friend buy the tickets. 

But there are other ways to enforce stadium bans, and Sports Illustrated quoted an expert on the matter.

“They’ve kicked the fan out; they’ve taken a picture—that fan they know,” says Shaun Moore, CEO of a facial-recognition company called Trueface. “The old way of doing things was, you give that picture to the security staff and say, ‘Don’t let this person back in.’ It’s not really realistic. So the new way of doing it is, if we do have entry-level cameras, we can run that person against everyone that’s coming in. And if there’s a hit, you know then; then there’s a notification to engage with that person.”

This, incidentally, is an example of a “deny list,” or the use of a security system to deny a person access. We’ll get to that later.
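Here is a minimal Python sketch of the deny-list pattern Moore describes (my own toy code, not Trueface’s API): each entrant is compared against every banned person’s enrolled template, and a hit triggers a notification.

```python
BAN_THRESHOLD = 0.90  # illustrative score threshold

def match_score(probe_face, template):
    """Stand-in for a real face comparison returning a 0.0-1.0 score."""
    return 1.0 if probe_face == template else 0.0

def screen_entrant(probe_face, deny_list):
    """Deny list: a MATCH triggers the alert."""
    for banned_id, template in deny_list.items():
        if match_score(probe_face, template) >= BAN_THRESHOLD:
            return f"Hit: notify security to engage with {banned_id}"
    return None  # no hit; entry proceeds normally

deny_list = {"banned_fan_1": "face_template_A"}
print(screen_entrant("face_template_A", deny_list))  # alert fires
print(screen_entrant("face_template_B", deny_list))  # None
```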

But did you notice the company that was mentioned in the last quote? I’ve mentioned that company before, because Trueface was the most recent acquisition of Pangiam, a company that has also acquired airport security technology.

But Pangiam/Trueface isn’t the only company serving stadium (and entertainment) venues.

CLEAR and stadium entry

Most of the time, sports stadiums aren’t concentrating on the practice of DENYING people entry to a stadium. They make a lot more money by ALLOWING people entry to a stadium…and allowing them to enter as quickly as possible so they can spend money on concessions.

One company that supports this is CLEAR, which was recently in the news because of its Initial Public Offering. Coincidentally, CLEAR also provides airport security technology, but it has branched out from that core market and is also active in other areas.

For example, let’s say you’re a die-hard New York Mets fan, and you head to Citi Field to watch a game.

By Chris6d – Own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=101751795

The Mets don’t just let anyone into the stadium; you have to purchase a ticket. So you need to take your ticket out of your pocket and show it to the gate staff, or you need to take your smartphone out of your pocket and show your digital ticket to the gate staff.

What if you could get into the stadium without taking ANYTHING out of your pocket? Well, you can.

In the CLEAR Lane, your fingerprint is all you need to use tickets in your MLB Ballpark app – no need to pull out your phone or printed ticket as you enter the game.

Now that is really easy.
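Under the hood, the pattern is presumably simple account linkage: a fingerprint match resolves the CLEAR member, and the member record points to the digital ticket. Here is a toy Python sketch of that data flow (entirely my guess at the mechanics, not CLEAR’s or MLB’s actual system):

```python
# Toy account-linkage flow: biometric match -> member -> ticket.
members = {"fp_template_123": "member_42"}         # fingerprint -> member
tickets = {"member_42": "CitiField_Gate3_SeatA1"}  # member -> ticket

def enter_stadium(fingerprint_template):
    member = members.get(fingerprint_template)
    if member is None:
        return "No CLEAR record; use the regular lane"
    ticket = tickets.get(member)
    return f"Admit: {ticket}" if ticket else "No ticket on file"

print(enter_stadium("fp_template_123"))  # Admit: CitiField_Gate3_SeatA1
```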

Pangiam and CLEAR aren’t the only companies in this space, as I well know. But there’s the possibility that biometrics will be used more often for access to sports games, concerts, and similar events.

Two articles on facial recognition

Within the last hour I’ve run across two articles that discuss various aspects of facial recognition, dispelling popular notions about the science in the process.

Ban facial recognition? Ain’t gonna happen

The first article was originally shared by my former IDEMIA colleague Peter Kirkwood, who certainly understood the significance of it from his many years in the identity industry.

The article, published by the Security Industry Association (SIA), is entitled “Most State Legislatures Have Rejected Bans and Severe Restrictions on Facial Recognition.”

Admittedly the SIA is by explicit definition an industry association, but in this case it is simply noting a fact.

With most 2021 legislative sessions concluded or winding down for the year, proposals to ban or heavily restrict the technology have had very limited overall success despite recent headlines. It turns out that such bills failed to advance or were rejected by legislatures in no fewer than 17 states during the 2020 and 2021 sessions: California, Colorado, Hawaii, Kentucky, Maine, Maryland, Massachusetts, Michigan, Minnesota, Montana, Nebraska, New Hampshire, New Jersey, New York, Oregon, South Carolina and Washington.

And the article even cited one instance in which public safety and civil libertarians worked together, proving such cooperation is actually possible.

In March, Utah enacted the nation’s most comprehensive and precise policy safeguards for government applications. The measure, supported both by the Utah Department of Public Safety as well as the American Civil Liberties Union, establishes requirements for public-sector and law enforcement use, including conditions for access to identity records held by the state, and transparency requirements for new public sector applications of facial recognition technology.

This reminds me of Kirkwood’s statement when he originally shared the article on LinkedIn: “Targeted use with appropriate governance and transparency is an incredibly powerful and beneficial tool.”

NIST’s biometric exit tests reveal an inconvenient truth

Meanwhile, the National Institute of Standards and Technology, which is clearly NOT an industry association, continues to enhance its ongoing Face Recognition Vendor Test (FRVT). As I noted myself on Facebook and LinkedIn:

With its latest rounds of biometric testing over the last few years, the National Institute of Standards and Technology has shown its ability to adapt its testing to meet current situations.

In this case, NIST announced that it has applied its testing to the not-so-new use case of using facial recognition as a “biometric exit” tool, or as a way to verify that someone who was supposed to leave the country has actually left the country. The biometric exit use case emerged after 9/11 in response to visa overstays, and while the vast, vast majority of people who overstay visas do not fly planes into buildings and kill thousands of people, visa overstays are clearly a concern and thus merit NIST testing.

Transportation Security Administration Checkpoint at John Glenn Columbus International Airport. By Michael Ball – Own work, CC0, https://commons.wikimedia.org/w/index.php?curid=77279000

But buried at the end of the NIST report (accessible from the link in NIST’s news release) was a little quote that should cause discomfort to all of those who reflexively believe that all biometrics is racist, and thus needs to be banned entirely (see SIA story above). Here’s what NIST said after having looked at the data from the latest test:

“The team explored differences in performance on male versus female subjects and also across national origin, which were the two identifiers the photos included. National origin can, but does not always, reflect racial background. Algorithms performed with high accuracy across all these variations. False negatives, though slightly more common for women, were rare in all cases.”

And as Peter Kirkwood and many other industry professionals would say, you need to use the technology responsibly. This includes things such as:

  • In criminal cases, having all computerized biometric search results reviewed by a trained forensic face examiner.
  • ONLY using facial recognition results as an investigative lead, and not relying on facial recognition alone to issue an arrest warrant.

So facial recognition providers and users had a good day. How was yours?

Is your home your castle when you use consumer doorbell facial recognition?

For purposes of this post, I will define three entities that can employ facial recognition:

  • Public organizations such as governments.
  • Private organizations such as businesses.
  • Individuals.

Some people are very concerned about facial recognition use by the first two categories of entities.

But what about the third category, individuals?

Can individuals assert a Constitutional right to use facial recognition in their own homes? And what if said individuals live in Peoria?

Concerns about ANY use of facial recognition

Let’s start with an ACLU article from 2018 regarding “Amazon’s Disturbing Plan to Add Face Surveillance to Your Front Door.”

Let me go out on a limb and guess that the ACLU opposes the practice.

The article was prompted by a 2018 Amazon patent application that involved both its Rekognition facial recognition service and its Ring cameras.

One of the figures in Amazon’s patent application, courtesy the ACLU. https://www.aclunc.org/docs/Amazon_Patent.pdf

While the main thrust of the ACLU article concerns acquisition of front door face surveillance (and other biometric) information by the government, it also briefly addresses the entity that is initially performing the face surveillance: namely, the individual.

Likewise, homeowners can also add photos of “suspicious” people into the system and then the doorbell’s facial recognition program will scan anyone passing their home.

I should note in passing that ACLU author Jacob Snow is describing a “deny list,” which flags people who should NOT be granted access, such as that pesky solar power salesperson. In most cases, consumer products tout the use of an “allow list,” which flags people who SHOULD be granted access, such as family members.

Regardless of whether you’re discussing a deny list or an allow list, the thrust of the ACLU article isn’t that governments shouldn’t use facial recognition. The thrust of the article is that facial recognition shouldn’t be used at all.

The ACLU and other civil rights groups have repeatedly warned that face surveillance poses an unprecedented threat to civil liberties and civil rights that must be stopped before it becomes widespread.

Again, not face surveillance by governments, but face surveillance period. People should not have the, um, “civil liberties” to use the technology.

But how does the tech world approach this?

The reason that I cited that particular ACLU article was that it was subsequently referenced in a CNET article from May 2021. This article bore the title “The best facial recognition security cameras of 2021.”

Let me go out on a limb and guess that CNET supports the practice.

The last part of author Megan Wollerton’s article delves into some of the issues regarding facial recognition use, including those raised by the ACLU. But the bulk of the article talks about really cool tech.

As I stated above, Wollerton notes that the intended use case for home facial recognition security systems involves the creation of an “allow list”:

Some home security cameras have facial recognition, an advanced option that lets you make a database of people who visit your house regularly. Then, when the camera sees a face, it determines whether or not it belongs to someone in your list of known faces. If the recognition system does not know who is at the door, it can alert you to an unknown person on your property.
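Note that this is the mirror image of the deny list discussed earlier: here the alert fires on a NON-match. A minimal Python sketch, with made-up names and a toy comparison function rather than any vendor’s API:

```python
ALLOW_THRESHOLD = 0.90  # illustrative score threshold

def match_score(probe_face, template):
    """Stand-in for a real face comparison returning a 0.0-1.0 score."""
    return 1.0 if probe_face == template else 0.0

def doorbell_event(probe_face, known_faces):
    """Allow list: a NON-match triggers the alert."""
    for name, template in known_faces.items():
        if match_score(probe_face, template) >= ALLOW_THRESHOLD:
            return f"{name} is at the door"
    return "Alert: unknown person on your property"

known_faces = {"Harriet": "face_template_H"}
print(doorbell_event("face_template_H", known_faces))  # known person
print(doorbell_event("face_template_X", known_faces))  # alert fires
```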

Obviously you could repurpose such a system for anything you want, provided that you can obtain a clear picture of the face of the pesky solar power salesperson.

Before posting her reviews of various security systems, and after a brief mention (expanded later in the article) about possible governmental misuse of facial recognition, Wollerton redirects the conversation.

But let’s step back a bit to the consumer realm. Your home is your castle, and the option of having surveillance cameras with facial recognition software is still compelling for those who want to be on the cutting edge of smart home innovation.

“Your home is your castle” may be a distinctly American concept, but it certainly applies here as organizations such as, um, the ACLU defend a person’s right against unreasonable actions by governments.

Obviously, there are limits to ANY Constitutional right. I cannot exercise my Fourth Amendment right to be secure in my house, couple that with my First Amendment right to freely exercise my religion, and conclude that I have the unrestricted right to perform ritual child sacrifices in my home. (Although I guess if I have a home theater and only my family members are present, I can probably yell “Fire!” all I want.)

So perhaps I could mount an argument that I can use facial recognition at my house any time I want, if the government agrees that this right is “reasonable.”

But it turns out that other people are involved.

You knew I was going to mention Illinois in this post

OK, it’s BIPA time.

As I previously explained in a January 2021 post about the Kami Doorbell Camera, “BIPA” is Illinois’ Biometric Information Privacy Act. This act imposes constraints on a private entity’s use of biometrics. (Governments are excluded from BIPA’s coverage.) And here’s how BIPA defines the term “private entity”:

“Private entity” means any individual, partnership, corporation, limited liability company, association, or other group, however organized. A private entity does not include a State or local government agency. A private entity does not include any court of Illinois, a clerk of the court, or a judge or justice thereof.

Did you see the term “individual” in that definition?

So BIPA not only affects company use of biometrics, such as use of biometrics by Google or by a theme park or by a fitness center. It also affects the use of biometrics by an individual such as Harry or Harriet Homeowner.

As I previously noted, Google does not sell its Nest Cam “familiar face alert” feature in Illinois. But I guess it’s possible (via location spoofing if necessary) for someone to buy Nest Cam familiar face alerts in Indiana, and then sneak the feature across the border and implement it in the Land of Lincoln. But while this may (or may not) get Google off the hook, the individual is in a heap of trouble (should a trial lawyer decide to sue the individual).

Let’s face it. The average user of Nest Cam’s familiar face alerts, or the Kami Doorbell Camera, or any other home security camera with facial recognition (note that Amazon currently is not using facial recognition in its consumer products), is probably NOT complying with BIPA.

A private entity in possession of biometric identifiers or biometric information must develop a written policy, made available to the public, establishing a retention schedule and guidelines for permanently destroying biometric identifiers and biometric information when the initial purpose for collecting or obtaining such identifiers or information has been satisfied or within 3 years of the individual’s last interaction with the private entity, whichever occurs first.

I mean it’s hard enough for Harry and Harriet to get their teenage son to acknowledge receipt of the Homeowner family’s written policy for the use of the family doorbell camera. And you can forget about getting the pesky solar power salesperson to acknowledge receipt.

So from a legal perspective, it appears that any individual homeowner who installs a facial recognition security system can be hauled into civil court under BIPA.

But will these court cases be filed from a practical perspective?

Probably not.

When a social media company violates BIPA, the violation conceivably affects millions of individuals and can result in millions or billions of dollars in civil damages.

When the pesky solar power salesperson discovers that Harry and Harriet Homeowner have violated BIPA, the damages would be limited to $1,000 or $5,000 plus relevant legal fees.

It’s not worth pursuing, any more than it’s worth pursuing the Illinois driver who is speeding down the expressway at 66 miles per hour.

My LinkedIn article “Don’t ban facial recognition”

By TapTheForwardAssist – Own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=98670006

This post serves as a pointer to an article that I just published on LinkedIn, “Don’t ban facial recognition.”

If you’re going to prohibit use of a particular tool, you may want to check the alternatives to that tool to see if the alternatives are better…or worse.

To read the article, go here.