Technology often advances more quickly than society’s ability to deal with its ramifications.
For example, President Eisenhower’s effort to improve our national defense via construction of a high-speed interstate highway system led to a number of unintended consequences, including the devastation of city downtown areas that were now being bypassed by travelers.
There are numerous other examples.
The previously unknown consequences of biometric technology
One way in which technology has outpaced society is in the development of tools that unintentionally threaten individual privacy. For Bredemarket clients and potential clients, one relevant example is the ability to apply biometric technologies to previously recorded photographic, video, and audio content. (I won’t deal with real-time applications here.)
Hey, remember that time in 1969 when you were walking around in a Ku Klux Klan costume and one of your fraternity buddies took a picture of you? Back then, you and your buddy had no idea that in future decades someone could capture a digital copy of that picture and share it with millions of people, and that one of those millions could use facial recognition software to compare the face in the picture with a known image of your face, positively determining that you were the person parading around like a Grand Wizard.
Of course, there are also positive applications of biometric technology on older material. Perhaps biometrics could be used to identify an adoptee’s birth mother from an old picture. Or biometrics could be used to determine that a missing person was present in a train station on September 8, 2021 in the company of another (identified) person.
But regardless of the positive or negative use case, biometric identification provides us with unequalled capability to identify people who were previously recorded. Something that couldn’t have been imagined years and years ago.
Well, it couldn’t have been imagined by most of us, anyway.
Enter Carl Sagan (courtesy Elena’s Short Wisdom)
As a WordPress user (this blog and the Bredemarket website are hosted on WordPress), I subscribe to a number of other WordPress blogs. One of these blogs is Short Wisdom, authored by Elena. Her purpose is to collect short quotes from others that succinctly encapsulate essential truths.
Normally these quotes are of the inspirational variety, but Elena posted something today that applies to those of us concerned with technology and privacy.
“Might it be possible at some future time, when neurophysiology has advanced substantially, to reconstruct the memories or insight of someone long dead?…It would be the ultimate breach of privacy.”
The quote is taken from Broca’s Brain: Reflections on the Romance of Science, originally published in 1979.
The future is not now…yet
Obviously such technology did not exist in 1979, and doesn’t exist in 2021 either.
Even biometric identification of living people via “brain wave” biometrics hasn’t been verified at any meaningful scale; last month’s study included only 15 people. Big whoop.
But it’s certainly possible that this ability to reconstruct the memories and insights of the deceased could exist at some future date. Some preliminary work has already been done in this area.
If this technology ever becomes viable and the memories of the dead can be accessed, then the privacy advocates will REALLY howl.
And the already-deceased privacy advocates will be able to contribute to the conversation. Perhaps Carl Sagan himself will posthumously share some thoughts on the ongoing NIST FRVT results.
He can even use technology to sing about the results.
Delta Airlines, the Transportation Security Administration (TSA), and a travel tech company called Pangiam have partnered up to bring facial recognition technology to the Hartsfield–Jackson Atlanta International Airport (ATL).
As of next month, Delta SkyMiles members who use the Fly Delta app and have a TSA PreCheck membership will be able to simply look at a camera to present their “digital ID” and navigate the airport with greater ease. In this program, a customer’s identity is made up of a SkyMiles member number, passport number and Known Traveler Number.
Of course, TSA PreCheck enrollment is provided by three other companies…but I digress. (I’ll digress again in a minute.)
Forbes goes on to say that this navigation will be available at pre-airport check-in (on the Fly Delta app), bag drop (via TSA PreCheck), security (again via TSA PreCheck), and the gate.
Incidentally, this illustrates how security systems from different providers build upon each other. Since I was an IDEMIA employee at the time that IDEMIA was the only company that performed TSA PreCheck enrollment, I was well aware (in my super-secret competitive intelligence role) how CLEAR touted the complementary features of TSA PreCheck in its own marketing.
For those who have never looked at FRVT before, it does not merely report the accuracy results of searches against one database, but reports accuracy results for searches against eight different databases of different types and of different sizes (N).
Mugshot vs. Mugshot, N = 12,000,000
Mugshot vs. Mugshot, N = 1,600,000
Mugshot vs. Webcam, N = 1,600,000
Mugshot vs. Profile, N = 1,600,000
Visa vs. Border, N = 1,600,000
Visa vs. Kiosk, N = 1,600,000
Border vs. Border 10+ years, N = 1,600,000
Mugshot vs. Mugshot 12+ years, N = 3,000,000
This is actually good for the vendors who submit their biometric algorithms, because even if the algorithm performs poorly on one of the databases, it may perform wonderfully on one of the other seven. That’s how so many vendors can trumpet that their algorithm is the best. When you throw in other qualifiers such as “top five,” “best non-Chinese vendor,” and even “vastly improved,” you can see how dozens of vendors can issue “NIST says we’re the best” press releases.
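To see how the math works out, here’s a minimal sketch. The vendor names and error rates below are made up, and drawn at random; the real numbers come from the published FRVT report cards.

```python
import numpy as np

rng = np.random.default_rng(0)
vendors = [f"vendor_{i:02d}" for i in range(30)]
datasets = ["Mugshot 12M", "Mugshot 1.6M", "Webcam", "Profile",
            "Visa Border", "Visa Kiosk", "Border 10+YRS", "Mugshot 12+YRS"]

# Hypothetical error rates (lower is better), drawn at random;
# the real numbers come from the published FRVT report cards.
error = {v: {d: rng.uniform(0.001, 0.05) for d in datasets} for v in vendors}

# How many different vendors can truthfully claim a #1 ranking somewhere?
winners = {min(vendors, key=lambda v: error[v][d]) for d in datasets}
print(f"{len(winners)} of {len(vendors)} vendors are #1 on at least one dataset")

# Allow "top five" claims and the pool of press releases grows further.
top5 = {v for d in datasets
        for v in sorted(vendors, key=lambda w: error[w][d])[:5]}
print(f"{len(top5)} vendors can claim a top-five finish somewhere")
```

The real FRVT results are far from random, of course, but the counting logic is the same: more databases and more qualifiers mean more vendors with a defensible “we’re the best” claim.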
Not that I knock the practice; after all, I myself have done this for years. But you need to know how to interpret these press releases, and what they’re really saying. Remember this when you read the vendor announcement toward the end of this post.
Anyway, I went to check the current results, which on your first visit to the page are sorted by the fifth database, the Visa Border database. And this is what I saw this morning (October 27):
For the most part, the top five for the Visa Border test contain the usual players. North Americans will be most familiar with IDEMIA and NEC, and Cloudwalk and Sensetime have been around for a while.
A new algorithm from a not-so-new provider
But I had never noticed Cubox in the NIST testing before. And the number attached to the Cubox algorithm, “000,” indicates that this is Cubox’s first submission.
And Cubox did exceptionally well, especially for a first submission.
As you can see by the superscripts attached to each numeric value, Cubox had the second most accurate algorithm for the Visa Border test, the most accurate algorithm for the Visa Kiosk test, and placed no lower than 12th in the six (of eight) tests in which it participated. Considering that 302 algorithms have been submitted over the years, that’s pretty remarkable for a first-time submission.
Well, I’m an ex-IDEMIA employee, so my curious nature kicked in.
The Cubox that submitted an algorithm to NIST is a South Korean firm with the website cubox.aero, self-described as “The Leading Provider in Biometrics” (aren’t they all?) with fingerprint and face solutions. Cubox competes in the access control and border control markets.
Cubox’s ten-year history and “overseas” page detail its growth in its markets and the solutions it has provided in South Korea, Mongolia, and Vietnam.
And although Cubox hasn’t trumpeted its performance on its own website (at least in the English version; I don’t know about the Korean version), Cubox has publicized its accomplishment on a LinkedIn post.
But before you get excited about the NIST results from Cubox, Sensetime, or any of the algorithm providers, remember that the NIST test is just a test. NIST cautions people about this, I have cautioned people about this (see the fourth point in this post), and Mike French has also discussed this.
However, it is also important to remember that NIST does not test operational systems, but rather technology submitted as software development kits (SDKs). Sometimes these submissions are labeled as research (or just not labeled), but in reality it cannot be known whether these algorithms are included in the product that an agency will ultimately receive when it purchases a biometric system. And even if they are “the same,” the operational architecture could produce different results with the same core algorithms optimized for use in a NIST study.
The very fact that test results vary between the NIST databases explicitly tells you that a number one ranking on one database does not mean that you’ll get a number one ranking on every database. And as French reminds us, when you take an operational algorithm in an operational system with a customer database, the results may be quite different.
Which is why French recommends that any government agency purchasing a biometric system should conduct its own test, with vendor operational systems (rather than test systems) loaded with the agency’s own data.
Incidentally, if your agency needs a forensic expert to help with a biometric procurement or implementation, check out the consulting services offered by French’s company, Applied Forensic Services.
In a competitive bid process, one unshakable truth is that everything you do will be seen by your competitors. This affects what you as a bidder do…and don’t do.
My trip to Hartford for a 30-minute meeting
I saw this in action many years ago when I was the product manager for Motorola’s Omnitrak product (subsequently Printrak BIS, subsequently part of MorphoBIS, subsequently part of MBIS). Connecticut and Rhode Island went out to bid for a two-state automated fingerprint identification system (AFIS). As part of the request for proposal process, the state of Connecticut scheduled a bidders’ conference. This was well before online videoconferencing became popular, so if you wanted to attend this bidders’ conference, you had to physically go to Hartford, Connecticut.
The Mark Twain House in Hartford. For reasons explained in this post, I spent more time here than I did at the bidders’ conference itself. By Makemake, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=751488
So I flew from California to Connecticut to attend the conference, and other people from other companies made the trip. That morning I drove from my hotel to the site of the conference (encountering a traffic jam much worse than the usual traffic jams back home), and I and the competitors assembled and waited for the bidders’ conference to begin.
The state representative opened the floor up to questions from bidders.
Silence.
No one asked a question.
We were all eyeing each other, seeing what the other people were going to ask, and none of us were willing to tip our hands by asking a question ourselves.
Eventually one or two minor questions were asked, but the bidders’ conference ended relatively quickly.
There are a number of chess-like tactics related to what bidders do and don’t do during proposals. Perhaps some day I’ll write a Bredemarket Premium post on the topic and spill my secrets.
But for now, let’s just say that all of the bidders successfully kept their thoughts to themselves during that conference. And I got to visit a historical site, so the trip wasn’t a total waste.
And today, it’s refreshing to know that things don’t change.
When the list of interested suppliers appears to be null
Back on September 24, the Government of Canada issued an Invitation to Qualify (B7059-180321/B) for a future facial recognition system for immigration purposes. I didn’t hear about it until Biometric Update mentioned it this morning.
Now Bredemarket isn’t going to submit a response (even though section 2.3a says that I can), but Bredemarket can obviously help those companies that ARE submitting a response. I have a good idea who the possible players are, but to check things I went to the page of the List of Interested Suppliers to see if there were any interested suppliers that I missed. The facial recognition market is changing rapidly, so I wondered if some new names were popping up.
So what did I see when I visited the List of Interested Suppliers?
An invitation for me to become the FIRST listed interested supplier.
That’s right, NO ONE has publicly expressed interest in this bid.
And yes, I also checked the French list; no names there either.
There could be one of three reasons for this:
Potential bidders don’t know about the Invitation to Qualify. This is theoretically possible; after all, Biometric Update didn’t learn about the invitation until two weeks after it was issued.
No one is interested in bidding on a major facial recognition program. Yeah, right.
Multiple companies ARE interested in this bid, but none wants to tip its hand and let competitors know of its interest.
My money is on reason three.
Hey, bidders. I can keep your secret.
As you may have gathered, as of Monday October 11 I am not part of any team responding to this Invitation to Qualify.
If you are a biometric vendor who needs help in composing your response to IRCC ITQ B7059-180321/B before the November 3 due date, or in framing questions (yes, there are chess moves on that also), let me know.
You’ll notice that while I do style myself as an expert on some things, I never claim that I know everything…because I obviously don’t.
This became clear to me when I was watching the Paravision Converge 2021 video and noticed its emphasis on optimizing Paravision’s recognition algorithms for Ambarella.
We power a majority of the world’s police body cams.
We were the first to enable HD and UHD security with low power; we revolutionized image processing for low-light and high-contrast scenes; and we are an industry leader in next-generation AI video security solutions.
Video has been a key component of face detection, person detection, and face recognition for years. (Not really of iris recognition…yet.) In certain use cases, it’s extremely desirable to move the processing out from a centralized server system to edge devices such as body cams, smart city cameras, and road safety cameras, and Ambarella (and its software partners) optimize this processing.
In addition to professional (and consumer) security, Ambarella is also a player in automotive solutions including autonomous vehicles, non-security consumer applications, and a variety of IoT/industrial/robotics applications.
Our CVflow® chip architecture is based on a deep understanding of core computer vision algorithms. Unlike general-purpose CPUs and GPUs, CVflow includes a dedicated vision processing engine programmed with a high-level algorithm description, allowing our architecture to scale performance to trillions of operations per second with extremely low power consumption.
I’ve always been of the opinion that technology is moving away from specialized hardware to COTS hardware. For example, the fingerprint processing and matching that used to require high-end UNIX computers with custom processor boards in the 1990s can now be accomplished on consumer-grade smartphones.
However, the reason these consumer-grade devices can now perform these operations is that specialized technologies have been miniaturized and optimized for incorporation into consumer-grade devices, such as Yi home video cameras.
For years I’ve uttered the phrase “a tool is not a way of life,” and a recent statement from Rank One Computing reminded me of this fact. In a piece on the ethical use of facial recognition, Rank One Computing stated the following in passing:
[Rank One Computing] is taking a proactive stand to communicate that public concerns should focus on applications and policies rather than the technology itself.
I emphatically believe that all technologies are neutral. They can be used for good, or they can be used for…bad things.
And yes, facial recognition has been misused.
It is an undeniable fact that a police jurisdiction used a computerized facial recognition result as the justification for an arrest, rather than as an investigative lead that would need to be supported by additional evidence.
But that incident, or ten incidents, or one hundred incidents, does NOT mean that ALL uses of facial recognition should be demonized, or even that SELECTED uses of facial recognition should be demonized (Amazon bad; Apple good).
Policies are not foolproof
Now I will grant that establishment of a policy or procedure does NOT necessarily mean that people will always act in compliance with that policy/procedure.
As an example, one accepted practice is double-blind lineup administration, in which different people handle different parts of the lineup creation and witness viewing process. These two roles can be distinct (I’ll sketch this in code after the list):
A person who knows who the arrested individual is creates the lineup (with additional safeguards to ensure that the created lineup isn’t biased).
A second person who DOESN’T know who the arrested individual is shows the lineup to the witness and records what the witness says and doesn’t say when viewing the lineup. The reason for the presence of a separate person is to ensure that the person administering the lineup doesn’t provide subconscious (or conscious) hints as to who the “right” person would be.
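To make the role separation concrete, here’s a minimal sketch of how lineup software might enforce the double-blind rule. (The class and function names are my own invention for illustration, not any vendor’s actual product.)

```python
from dataclasses import dataclass

@dataclass
class Officer:
    name: str
    knows_suspect: bool

@dataclass
class Lineup:
    created_by: Officer
    photos: list  # the suspect's photo plus unbiased fillers

def administer_lineup(lineup: Lineup, administrator: Officer) -> None:
    """Enforce the double-blind rule: the officer who shows the lineup
    must not know who the suspect is, and must not be its creator."""
    if administrator.knows_suspect or administrator is lineup.created_by:
        raise PermissionError("administrator must be blind to the suspect")
    # ...display the photos to the witness and record everything the
    # witness says (and doesn't say) while viewing them...

detective = Officer("Det. Jones", knows_suspect=True)
patrol = Officer("Ofc. Smith", knows_suspect=False)
lineup = Lineup(created_by=detective, photos=["suspect.jpg", "filler1.jpg"])

administer_lineup(lineup, patrol)      # permitted
# administer_lineup(lineup, detective) # raises PermissionError
```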
Now you can set up your police department’s procedures to require this, and your software vendor could design its software to support this. But that doesn’t prevent a corrupt Chief of Police from saying, “Jane, I want you to create the lineup AND show it to the witness. And make sure the witness chooses the RIGHT guy!”
But policy-based facial recognition is better than no facial recognition at all
But…if I may temporarily allow myself to run a tired cliché into the ground, that doesn’t mean you throw out the baby with the bathwater.
Rather than banning facial recognition, we should concentrate on defining ethical uses.
And there’s one more thing to consider. If you ban computerized facial recognition, how are you going to identify people? As I’ve noted elsewhere, witness (mis)identification is rife with biases that make even the bottom-tier facial recognition algorithms seem accurate.
While the FindBiometrics article does not explicitly talk about the myriad facial recognition algorithms that were NOT addressed in the study, it does provide some additional details about the test dataset.
The three algorithms that were tested
Here’s what FindBiometrics says about the three algorithms that were tested in the Israeli study.
The researchers described (the master faces) as master keys that could unlock the three facial recognition systems that were used to test the theory. In that regard, they challenged the Dlib, FaceNet, and SphereFace systems, and their nine master faces were able to impersonate more than 40 percent of the 5,749 people in the LFW set.
While it initially sounds impressive to say that three facial recognition algorithms were fooled by the master faces, bear in mind that there are hundreds of facial recognition algorithms tested by NIST alone, and (as I said earlier) the test has NOT been duplicated against any algorithms other than the three open source algorithms mentioned.
…let’s look at the algorithms themselves and evaluate the claim that results for the three algorithms Dlib, FaceNet, and SphereFace can naturally be extrapolated to ALL facial recognition algorithms….NIST’s subsequent study…evaluated 189 algorithms specifically for 1:1 and 1:N use cases….“Tests showed a wide range in accuracy across developers, with the most accurate algorithms producing many fewer errors.”
In short, just because the three open source algorithms were fooled by master faces doesn’t mean that commercial grade algorithms would also be fooled by master faces. Maybe they would be fooled…or maybe they wouldn’t.
What about the dataset?
The three open source algorithms were tested against the dataset from Labeled Faces in the Wild. As I noted in my prior post, the LFW people emphasize some important caveats about their dataset, including the following:
Many groups are not well represented in LFW. For example, there are very few children, no babies, very few people over the age of 80, and a relatively small proportion of women. In addition, many ethnicities have very minor representation or none at all.
In the FindBiometrics article, Weiss provides some additional detail about dataset representation.
…there is good reason to question the researchers’ conclusion. Only two of the nine master faces belong to women, and most depicted white men over the age of 60. In plain terms, that means that the master faces are not representative of the global public, and they are not nearly as effective when applied to anyone that falls outside one particular demographic.
That discrepancy can largely be attributed to the limitations of the LFW dataset. Women make up only 22 percent of the dataset, and the numbers are even lower for children, the elderly (those over the age of 80), and for many ethnic groups.
Valid points to be sure, although the definition of a “representative” dataset varies depending upon the use case. For example, a representative dataset for a law enforcement database in the city of El Paso, Texas will differ from a representative dataset for an airport database catering to Air France customers.
So what conclusion can be drawn?
Perhaps it’s just me, but scientific entities that conduct studies are always motivated by the need for additional funding. After a study is concluded, it seems that the entities always conclude that “more research is needed”…which can be self-serving, because as long as more research is needed, the scientific entities can continue to receive necessary funding. Imagine the scientific entity that would dare to say “Well, all necessary research has been conducted. We’re closing down our research center.”
But in this case, there IS a need to perform additional research, to test the master faces against different algorithms and against different datasets. Then we’ll know whether this statement from the FindBiometrics article (emphasis mine) is actually true:
Any face-based identification system would be extremely vulnerable to spoofing…
Modern “journalism” often consists of reprinting a press release without subjecting it to critical analysis. Sadly, I see a lot of this in publications, including both biometric and technology publications.
This post looks at the recently announced master faces study results, the datasets used (and the datasets not used), the algorithms used (and the algorithms not used), and the (faulty) conclusions that have been derived from the study.
Oh, and it also informs you of a way to make sure that you don’t make the same mistakes when talking about biometrics.
In facial recognition, there is a concept called “master faces” (similar concepts can be found for other biometric modalities). The idea behind master faces is that a single such face can potentially match against MULTIPLE faces, not just one. This is similar to a master key that can unlock many doors, not just one.
This can conceivably happen because facial recognition algorithms do not match face images to face images; they match feature sets derived from one face to feature sets derived from another. So if you can create the right “master” feature set, it can potentially match more than one face.
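Here’s a toy simulation of the idea, using random vectors in place of real face features. (This is purely illustrative; it is not the Israeli researchers’ method or any vendor’s algorithm. The dimensions, thresholds, and cluster sizes are all invented.)

```python
import numpy as np

rng = np.random.default_rng(42)
dim = 128  # face feature vectors ("templates") are often 128-512 numbers

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Simulated gallery of enrolled templates. Faces from an over-represented
# demographic tend to cluster together in feature space.
prototype = rng.normal(size=dim)
cluster = [prototype + rng.normal(scale=0.8, size=dim) for _ in range(400)]
others = [rng.normal(size=dim) for _ in range(600)]
gallery = cluster + others

# A "master" template tries to sit near as many enrolled templates as
# possible; here the cluster centroid serves as a crude stand-in.
master = np.mean(cluster, axis=0)

threshold = 0.5  # real vendors tune this to a target false match rate
hits = sum(cosine(master, t) >= threshold for t in gallery)
print(f"master template matched {hits} of {len(gallery)} enrolled templates")
```

In this simulation the master template matches the over-represented cluster (about 40 percent of the gallery) and almost nobody else, which foreshadows a point about dataset demographics that we’ll get to below.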
Ever thought you were being gaslighted by industry claims that facial recognition is trustworthy for authentication and identification? You have been.
The article goes on to discuss an Israeli research project that demonstrated some true “master faces” vulnerabilities. (Emphasis mine.)
One particular approach, which they write was based on Dlib, created nine master faces that unlocked 42 percent to 64 percent of a test dataset. The team also evaluated its work using the FaceNet and SphereFace, which like Dlib, are convolutional neural network-based face descriptors.
They say a single face passed for 20 percent of identities in Labeled Faces in the Wild, an open-source database developed by the University of Massachusetts. That might make many current facial recognition products and strategies obsolete.
Sounds frightening. After all, the study not only used dlib, FaceNet, and SphereFace, but also made reference to a test set from Labeled Faces in the Wild. So it’s obvious why master faces techniques might make many current facial recognition products obsolete.
Right?
Let’s look at the datasets
It’s always more impressive to cite an authority, and citations of the University of Massachusetts’ Labeled Faces in the Wild (LFW) are no exception. After all, this dataset has been used for some time to evaluate facial recognition algorithms.
But what does Labeled Faces in the Wild say about…itself? (I know this is a long excerpt, but it’s important.)
DISCLAIMER:
Labeled Faces in the Wild is a public benchmark for face verification, also known as pair matching. No matter what the performance of an algorithm on LFW, it should not be used to conclude that an algorithm is suitable for any commercial purpose. There are many reasons for this. Here is a non-exhaustive list:
Face verification and other forms of face recognition are very different problems. For example, it is very difficult to extrapolate from performance on verification to performance on 1:N recognition.
Many groups are not well represented in LFW. For example, there are very few children, no babies, very few people over the age of 80, and a relatively small proportion of women. In addition, many ethnicities have very minor representation or none at all.
While theoretically LFW could be used to assess performance for certain subgroups, the database was not designed to have enough data for strong statistical conclusions about subgroups. Simply put, LFW is not large enough to provide evidence that a particular piece of software has been thoroughly tested.
Additional conditions, such as poor lighting, extreme pose, strong occlusions, low resolution, and other important factors do not constitute a major part of LFW. These are important areas of evaluation, especially for algorithms designed to recognize images “in the wild”.
For all of these reasons, we would like to emphasize that LFW was published to help the research community make advances in face verification, not to provide a thorough vetting of commercial algorithms before deployment.
While there are many resources available for assessing face recognition algorithms, such as the Face Recognition Vendor Tests run by the USA National Institute of Standards and Technology (NIST), the understanding of how to best test face recognition algorithms for commercial use is a rapidly evolving area. Some of us are actively involved in developing these new standards, and will continue to make them publicly available when they are ready.
So there are a lot of disclaimers in that text.
LFW is a 1:1 test, not a 1:N test. Therefore, while it can test how one face compares to another face, it cannot test how one face compares to a database of faces. The usual law enforcement use case is to compare a single face (for example, one captured from a video camera) against an entire database of known criminals. That’s a computationally different exercise from comparing a crime scene face against a single criminal face, then comparing it against a second criminal face, and so forth. (I’ll sketch the difference in code after this list.)
The people in the LFW database are not necessarily representative of the world population, the population of the United States, the population of Massachusetts, or any population at all. So you can’t conclude that a master face that matches against a bunch of LFW faces would match against a bunch of faces from your locality.
Captured faces exhibit a variety of quality levels. A face image captured by a camera three feet from you at eye level in good lighting will differ from a face image captured by an overhead camera in poor lighting. LFW doesn’t have a lot of these latter images.
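On the first point, here’s a minimal sketch of why 1:1 and 1:N are computationally different animals. (This assumes cosine similarity over feature vectors; the thresholds and gallery sizes are invented for illustration.)

```python
import numpy as np

def similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# 1:1 verification (what LFW measures): one probe against ONE claimed
# identity. A single comparison, a single chance of a false match.
def verify(probe, claimed_template, threshold=0.5):
    return similarity(probe, claimed_template) >= threshold

# 1:N identification (the law enforcement use case): one probe against an
# ENTIRE gallery. Every enrollee is another chance of a false match, so
# error behavior depends on N in ways 1:1 results cannot predict.
def identify(probe, gallery, threshold=0.5):
    candidates = [(name, similarity(probe, t)) for name, t in gallery.items()]
    return sorted((c for c in candidates if c[1] >= threshold),
                  key=lambda c: c[1], reverse=True)

# A probe who is enrolled nowhere still picks up false matches once the
# gallery is big enough and the threshold is lenient enough.
rng = np.random.default_rng(7)
gallery = {f"person_{i}": rng.normal(size=128) for i in range(100_000)}
probe = rng.normal(size=128)
print(len(identify(probe, gallery, threshold=0.3)), "false matches")
```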
I should mention one more thing about LFW. The researchers allow testers to access the database itself, essentially making LFW an “open book test.” And as any student knows, if a test is open book, it’s much easier to get an A on the test.
Now let’s take a look at another test that was mentioned by the LFW folks itself: namely, NIST’s Face Recognition Vendor Test.
This is actually a series of tests that has evolved over the years; NIST is now conducting ongoing tests for both 1:1 and 1:N (unlike LFW, which only conducts 1:1 testing). This is important because most of the large-scale facial recognition commercial applications that we think about are 1:N applications (see my example above, in which a facial image captured at a crime scene is compared against an entire database of criminals).
In addition, NIST uses multiple data sets that cover a number of use cases, including mugshots, visa photos, and faces “in the wild” (i.e. not under ideal conditions).
It’s also important to note that NIST’s tests are also intended to benefit research, and do not necessarily indicate that a particular algorithm that performs well for NIST will perform well in a commercial implementation. (If the algorithm is even available in a commercial implementation: some of the algorithms submitted to NIST are research algorithms only that never made it to a production system.) For the difference between testing an algorithm in a NIST test and testing an algorithm in a production system, please see Mike French’s LinkedIn article on the topic. (I’ve cited this article before.)
With those caveats, I will note that NIST’s FRVT tests are NOT open book tests. Vendors and other entities give their algorithms to NIST, NIST tests them, and then NIST tells YOU what the results were.
So perhaps it’s more robust than LFW, but it’s still a research project.
Let’s look at the algorithms
Now that we’ve looked at two test datasets, let’s look at the algorithms themselves and evaluate the claim that results for the three algorithms Dlib, FaceNet, and SphereFace can naturally be extrapolated to ALL facial recognition algorithms.
This isn’t the first time that we’ve seen such an attempt at extrapolation. After all, the MIT Media Lab’s Gender Shades study (which evaluated neither 1:1 nor 1:N use cases, but algorithmic attempts to identify gender and race) itself only used three algorithms. Yet the popular media conclusion from this study was that ALL facial recognition algorithms are racist.
Compare this with NIST’s subsequent study, which evaluated 189 algorithms specifically for 1:1 and 1:N use cases. While NIST did find some race/sex differences in algorithms, these were not universal: “Tests showed a wide range in accuracy across developers, with the most accurate algorithms producing many fewer errors.”
In other words, just because an earlier test of three algorithms demonstrated issues in determining race or gender, that doesn’t mean that the current crop of hundreds of algorithms will necessarily demonstrate issues in identifying individuals.
So let’s circle back to the master faces study. How do the results of this study affect “current facial recognition products”?
The answer is “We don’t know.”
Has the master faces experiment been duplicated against the leading commercial algorithms tested by Labeled Faces in the Wild? Apparently not.
Has the master faces experiment been duplicated against the leading commercial algorithms tested by NIST? Well, let’s look at the various ways you can define the “leading” commercial algorithms.
For example, here’s the view of the test set that IDEMIA would want you to see: the 1:N test sorted by the “Visa Border” column (results as of August 6, 2021):
Now you can play with the sort order in many different ways, but the question remains: have the Israeli researchers, or anyone else, performed a “master faces” test (preferably a 1:N test) on the IDEMIA, Paravision, Sensetime, NtechLab, Anyvision, or ANY other commercial algorithm?
Maybe a future study WILL conclude that even the leading commercial algorithms are vulnerable to master face attacks. However, until such studies are actually performed, we CANNOT conclude that commercial facial recognition algorithms are vulnerable to master face attacks.
So naturally journalists approach the results critically…not
But I’m sure that people are going to make those conclusions anyway.
While Bruce Schneier doesn’t go to the extreme of saying that all facial recognition algorithms are now defunct, he does classify the research as “fascinating” WITHOUT commenting on its limitations or applicability. Schneier knows security, but he didn’t vet this one.
Gizmodo, on the other hand, breathlessly declares (in “‘Master Face’: Researchers Say They’ve Found a Wildly Successful Bypass for Face Recognition Tech”) that “you’d be safe to add (the study) to the growing body of literature that suggests facial recognition is bad news for everybody except cops and large corporations.” Apparently Gizmodo never read the NIST gender/race test results that I cited earlier.
Does anyone even UNDERSTAND these studies? (Or do they choose NOT to understand them?)
How can you avoid the same mistakes when communicating about biometrics?
As you can see, people often write about biometric topics without understanding them fully.
Even biometric companies sometimes have difficulty communicating about biometric topics in a way that laypeople can understand. (Perhaps that’s the reason why people misconstrue these studies and conclude that “all facial recognition is racist” and “any facial recognition system can be spoofed by a master face.”)
Are you about to publish something about biometrics that requires a sanity check? (Hopefully not literally, but you know what I mean.)
Well, why not turn to a biometric content marketing expert? Use the identity/biometric blog expert to write your blog post, the identity/biometric case study expert to write your case study, or the identity/biometric white paper expert to…well, you get the idea. (And all three experts are the same person!)
Just last week, I mentioned that the state of Utah appointed the Department of Government Operations’ first privacy officer. Now Maryland is getting into the act, and it’s worth taking a semi-deep dive into what Maryland is doing, and how it affects (or doesn’t affect) public safety.
According to Government Technology, the state of Maryland has created two new state information technology positions, one of which is the State Chief Privacy Officer. Because government, I will refer to this as the SCPO throughout the remainder of this post. If you are referring to this new position in verbal conversation, you can refer to the “Maryland skip-oh.” Or the “crab skip-oh.”
Governor Hogan announced the creation of the SCPO position via an Executive Order, a PDF of which can be found here.
Let me call out a few provisions in this executive order.
A.2. defines “personally identifiable information,” consisting of a person’s name in conjunction with other information, including but not limited to “[b]iometric information including an individual’s physiological or biological characteristics, including an individual’s deoxyribonucleic acid.” (Yes, that’s DNA.) Oh, and driver’s license numbers also.
At the same time, A.2 excludes “information collected, processed, or shared for the purposes of…public safety.”
But on the other hand, A.5 lists specific “state units” covered by certain provisions of the order, including both the Department of Public Safety and Correctional Services and the Department of State Police.
The reason for listing the state units is that every one of them will need to appoint “an agency privacy official” (C.2) who works with the SCPO.
There are other provisions, including the need for agency justification for the collection of personally identifiable information (PII), and the need to provide individuals with access to their collected PII along with the ability to correct or amend it.
But for law enforcement agencies in Maryland, the “public safety” exemption pretty much limits the applicability of THIS executive order (although other laws to correct public safety data would still apply).
Therefore, if some Maryland sheriff’s department releases an automated fingerprint identification system Request for Proposal (RFP) next month, you probably WON’T see a privacy advocate on the evaluation committee.
But what about an RFP released in 2022? Or an RFP released in a different state?
Be sure to keep up with relevant privacy legislation BEFORE it affects you.
Back when I initially entered the automated fingerprint identification systems industry in the last millennium, I primarily dealt with two markets: the law enforcement market that seeks to solve crimes and identify criminals, and the welfare benefits market that seeks to make sure that the right people receive benefits (and the wrong people don’t).
Other markets simply didn’t exist. If I pulled out my 1994-era mobile telephone and looked at it, nothing would happen. Today, I need to look at my 2020-era mobile telephone to obtain access to its features.
And there are other biometric markets also.
Pangiam and stadium bans
Back in 1994 I couldn’t envision a biometrics story in Sports Illustrated magazine. But SI just ran a story on how facial recognition can be used to keep fans who shouldn’t be there out of stadiums.
Some fans (“fanatics”) perform acts in stadiums that cause the sports teams and/or stadium authorities to officially ban them from the stadium, sometimes for life.
John Green is the man in the blue shirt and white baseball cap to Artest’s left. By Copyright 2004 National Basketball Association. – Television broadcast of the Pacers-Pistons brawl on ESPN., Fair use, https://en.wikipedia.org/w/index.php?curid=6824157
But in the past, these measures were ineffective.
For a long time, those “measures” were limited at best. Fans do not have to show ID upon entering arenas. Teams could run checks on all the credit cards used to purchase tickets to see whether any belonged to banned fans, but those fans could easily have a friend buy the tickets.
But there are other ways to enforce stadium bans, and Sports Illustrated quoted an expert on the matter.
“They’ve kicked the fan out; they’ve taken a picture—that fan they know,” says Shaun Moore, CEO of a facial-recognition company called Trueface. “The old way of doing things was, you give that picture to the security staff and say, ‘Don’t let this person back in.’ It’s not really realistic. So the new way of doing it is, if we do have entry-level cameras, we can run that person against everyone that’s coming in. And if there’s a hit, you know then; then there’s a notification to engage with that person.”
This, incidentally, is an example of a “deny list,” or the use of a security system to deny a person access. We’ll get to that later.
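If you’re curious what a deny list looks like in software terms, here’s a minimal sketch. (This assumes cosine similarity over face templates; the names and threshold are invented, and this is not Trueface’s actual implementation.)

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical deny list: face templates derived from the photos taken
# when each fan was banned.
rng = np.random.default_rng(3)
deny_list = {f"banned_fan_{i}": rng.normal(size=128) for i in range(25)}

def screen_at_gate(probe_template, threshold=0.6):
    """Compare an entry-camera capture against every deny-list template.
    A hit is a notification for staff to engage and verify the person
    manually; it is not treated as proof by itself."""
    for name, template in deny_list.items():
        if cosine(probe_template, template) >= threshold:
            return name
    return None

# Everyone entering the stadium is screened against the (short) list.
fan_at_gate = rng.normal(size=128)
print(screen_at_gate(fan_at_gate) or "no deny-list hit; let the fan in")
```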
But Pangiam/Trueface isn’t the only company serving stadium (and entertainment) venues.
CLEAR and stadium entry
Most of the time, sports stadiums aren’t concentrating on the practice of DENYING people entry to a stadium. They make a lot more money by ALLOWING people entry to a stadium…and allowing them to enter as quickly as possible so they can spend money on concessions.
One such company that supports this is CLEAR, which was recently in the news because of its Initial Public Offering. Coincidentally, CLEAR also provides airport security technology, but it has branched out from that core market and is also active in other areas.
For example, the Mets don’t just let anyone into the stadium; you have to purchase a ticket. So you need to take your ticket out of your pocket and show it to the gate staff, or you need to take your smartphone out of your pocket and show your digital ticket to the gate staff.
What if you could get into the stadium without taking ANYTHING out of your pocket? Well, you can.
In the CLEAR Lane, your fingerprint is all you need to use tickets in your MLB Ballpark app – no need to pull out your phone or printed ticket as you enter the game.
Now that is really easy.
Pangiam and CLEAR aren’t the only companies in this space, as I well know. But there’s the possibility that biometrics will be used more often for access to sports games, concerts, and similar events.