I recently announced a change in business scope for my DBA Bredemarket. Specifically, Bredemarket will no longer accept client work for solutions that identify individuals using (a) friction ridges (including fingerprints and palm prints) and/or (b) faces.
This impacts some companies that previously did business with me, and may impact other companies that want to do business with me. If you are one of these companies, I am no longer available for that work.
Since Bredemarket will no longer help you with your friction ridge/face marketing and writing needs, who will? Who has the expertise to help you? I have two suggestions.
Tandem Technical Writing
Do you need someone who is not only an excellent communicator, but also knows the ins and outs of AFIS and ABIS systems? Turn to Tandem Technical Writing LLC.
I first met Laurel Jew back in 1995 when I started consulting with, and then working for, Printrak. In fact, I joined Printrak when Laurel went on maternity leave. (I was one of two people who joined Printrak at that time. As I’ve previously noted, Laurel needed two people to replace her.)
Laurel worked for Printrak and its predecessor De La Rue Printrak for several years in its proposals organization.
Why does this matter to you? Because Laurel not only understands your biometric business, but also understands how to communicate to your biometric clients. Not many people can do both, so Laurel is a rarity in this industry.
Applied Forensic Services
Perhaps your needs are more technical. Maybe you need someone who is a certified forensics professional, and who has also implemented many biometric systems. If that is your need, then you will want to consider Applied Forensic Services LLC.
I met Mike French in 2009 when Safran acquired Motorola’s biometric business and merged it into its U.S. subsidiary Sagem Morpho, creating MorphoTrak (“Morpho” + “Printrak”). I worked with him at MorphoTrak and IDEMIA until 2020.
Unlike me, Mike is a true forensic professional. (See his LinkedIn profile.) Back in 1994, when I was still learning to spell AFIS, Mike joined the latent print unit at the King County (Washington) Sheriff’s Office, where he spent over a decade before joining Sagem Morpho. He is an IAI-certified Latent Print Examiner, an IEEE-certified Biometric Professional, and an active participant in IAI and other forensic activities. I’ve previously referenced his advice on why agencies should conduct their own AFIS benchmarks.
Why does this matter to you? Because Mike’s consultancy, Applied Forensic Services, can provide expert advice on biometric procurements and implementation, ensuring that you get the biometric system that addresses your needs.
There are other companies that can help you with friction ridge and face marketing, writing, and consultation services.
I specifically mention these two because I have worked with their principals both as an employee during my Printrak-to-IDEMIA years, and as a sole proprietor during my Bredemarket years. Laurel and Mike are both knowledgeable, dedicated, and can add value to your firm or agency.
And, unlike some experienced friction ridge and face experts, Laurel and Mike are still working and have not retired. (“Where have you gone, Peter Higgins…”)
After an absence from the Bredemarket blog (no mention since November), Pangiam is making an appearance again, based on announcements by Biometric Update and by Trueface itself about a new revision of the Trueface facial recognition SDK.
The new revision includes several features, among them a new model for masked faces and some technical improvements.
So what is this revision called?
“Wait,” you’re asking yourself. “Version 1.0 is the NEW version? It sounds like the ORIGINAL version. Shouldn’t the new version be 2.0?”
Well, no. The original version was V0. Trueface is now ready to release V1.
When I viewed the page on the afternoon of March 28, the latest stable release was 0.33.14634.
If you want to use the version 1.0 that is being “introduced” (Pangiam’s word), you have to go to the latest beta release, which was 1.0.16286.
And if you want to go bleeding edge alpha, you can get release 1.1.16419.
(Again, this was on the afternoon of March 28, and may change by the time you read this.)
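The stable/beta/alpha progression above falls out of ordinary dotted release numbering. As a minimal sketch (the release numbers are the ones quoted above; the parsing logic is my own illustration, not Pangiam's tooling), here is how those three release strings sort:

```python
# Ordering MAJOR.MINOR.BUILD release strings like Trueface's.
# Release numbers are from the blog post; the code is illustrative only.

def parse(version: str) -> tuple[int, ...]:
    """Split a dotted release string into a tuple of integers for comparison."""
    return tuple(int(part) for part in version.split("."))

releases = {
    "stable": "0.33.14634",
    "beta": "1.0.16286",
    "alpha": "1.1.16419",
}

# Tuples compare element by element, so the alpha release outranks the
# beta, which outranks the stable release.
ordered = sorted(releases, key=lambda channel: parse(releases[channel]))
print(ordered)  # ['stable', 'beta', 'alpha']
```

Note that a naive string comparison would get this wrong ("0.33…" sorts after "1.0…" alphabetically in some schemes), which is why the components are compared as integers.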
Most biometric vendors don’t expose this much detail about their software. Some don’t provide any release information at all, especially for products with long delivery times, where the version that a customer will eventually receive doesn’t even have locked-down requirements yet. But Pangiam has chosen to provide this level of detail.
Oh, and Pangiam/Trueface also actively participates in the ongoing NIST FRVT testing. Information on the 1:1 performance of the trueface-003 algorithm can be found here. Information on the 1:N performance of the trueface-000 algorithm can be found here.
Marketing materials that state that a particular product is the best “among Western vendors” (which may or may not explain why this is important – see the second caveat here for examples).
European Union regulations that serve to diminish American influence.
The policies of certain countries (China, Iran, North Korea, Russia) that serve to eliminate American influence entirely.
Clearview AI, Ukraine, and Russia
Clearview AI is a U.S. company, but its relationship with the U.S. government is, in Facebook terms, “complicated.”
It’s complicated primarily because “the U.S. government” consists of a number of governments at the federal, state, and local level, and a number of agencies within these governments that sometimes work at cross-purposes with one another. Some U.S. government agencies love Clearview AI, while others hate it.
However, according to Reuters, the Ukrainian government can be counted in the list of governments that love Clearview AI.
Ukraine is receiving free access to Clearview AI’s powerful search engine for faces, letting authorities potentially vet people of interest at checkpoints, among other uses, added Lee Wolosky, an adviser to Clearview and former diplomat under U.S. presidents Barack Obama and Joe Biden.
Here is an example of a company that is supporting certain foreign policies of its home government. Depending upon your own national origin, you may love this example, or you may hate it.
Of course, even some who support U.S. actions in Ukraine may not support Clearview AI’s actions in Ukraine. But that’s another story.
For the last twenty-five plus years, I have been involved in the identification of individuals.
Who is the person who is going through the arrest/booking process?
Who is the person who claims to be entitled to welfare benefits?
Who is the person who wants to enter the country?
Who is the person who is exiting the country? (Yes, I remember the visa overstay issue.)
Who is the person who wants to enter the computer room in the office building?
Who is the person who is applying for a driver’s license or passport?
Who is the person who wants to enter the sports stadium or concert arena?
These are just a few of the problems that I have worked on solving over the last twenty-five plus years, all of which are tied to individual identity.
From that perspective, I really don’t care if the person entering the stadium/computer room/country/whatever is female, mixed race, Muslim, left-handed, or whatever. I just want to know if this is the individual that he/she/they claims to be.
If you’ve never seen the list of potential candidates generated by a top-tier facial recognition program, you may be shocked when you see it. That list of candidates may include white men, Asian women, and everything in between. “Well, that’s wrong,” you may say to yourself. “How can the results include people of multiple races and genders?” It’s because the algorithm doesn’t care about race and gender. Think about it – what if a victim THINKS that he was attacked by a white male, but the attacker was really an Asian female? Identify the individual, not the race or gender.
So when Gender Shades came out, stating that IBM, Microsoft, and Face++ AI services had problems recognizing the gender of people, especially those with darker skin, my reaction was “So what?”
(Note that this is a different question than the question of how an algorithm identifies individuals of different genders, races, and ages, which has been addressed by NIST.)
But some people persist in addressing biometrics’ “failure” to properly identify genders and races, ignoring the fact that both gender and race have become social rather than biological constructs. Is the Olympian Jenner male, female, or something else? What are your personal pronouns? What happens when a mixed race person identifies with one race rather than another? And aren’t we all mixed race anyway?
The latest study from AlBdairi et al. on computational methods for ethnicity identification
But there’s still a great interest in “race recognition.”
As Jim Nash of Biometric Update notes, a team of scientists has published an open access paper entitled “Face Recognition Based on Deep Learning and FPGA for Ethnicity Identification.”
The authors claim that their study is “the first image collection gathered specifically to address the ethnicity identification problem.”
“But what of the NIST demographic study cited above?” you may ask. The NIST study did NOT have the races of the individuals, but used the individuals’ country of origin as a proxy for race. Then again, it is possible that this study may have done the same thing.
Despite the fact that there are several large-scale face image databases accessible online, none of these databases are acceptable for the purpose of the conducted study in our research. Furthermore, 3141 photographs were gathered from a variety of sources. Specifically, 1081, 1021, and 1039 Chinese, Pakistani, and Russian face photos were gathered, respectively.
There was no mention of whether any of the Chinese face photos were Caucasian…or how the researchers could tell that they were Caucasian.
Anyway, if you’re interested in the science behind using Deep Convolutional Neural Network (DCNN) models and field-programmable gate arrays (FPGAs) to identify ethnicity, read the paper. Or skip to the results.
The experimental results reported that our model outperformed all the methods of state-of-the-art, achieving an accuracy and F1 score value of 96.9 percent and 94.6 percent, respectively.
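For readers unfamiliar with these two metrics, here is a brief sketch of how accuracy and F1 score are computed from a classifier's confusion-matrix counts. The counts below are invented for illustration; they are NOT taken from the AlBdairi et al. paper.

```python
# Illustrative only: computing accuracy and F1 score from confusion-matrix
# counts (true positives, true negatives, false positives, false negatives).
# The example numbers are made up, not the paper's data.

def accuracy(tp: int, tn: int, fp: int, fn: int) -> float:
    """Fraction of all predictions that were correct."""
    return (tp + tn) / (tp + tn + fp + fn)

def f1(tp: int, fp: int, fn: int) -> float:
    """Harmonic mean of precision and recall; ignores true negatives."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

tp, tn, fp, fn = 90, 85, 10, 15
print(round(accuracy(tp, tn, fp, fn), 3))  # 0.875
print(round(f1(tp, fp, fn), 3))            # 0.878
```

The two numbers can diverge noticeably on imbalanced data, since F1 ignores true negatives, which is why papers commonly report both.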
But this doesn’t answer the question I raised earlier.
Three possible use cases for race recognition, two of which are problematic
Why would anyone want to identify ethnicity or engage in race recognition? Jim Nash of Biometric Update summarizes three possible use cases for doing this, which I will address one by one. TL;DR two of the use cases are problematic.
The code…could find a role in the growing field of race-targeted medical treatments and pharmacogenomics, where accurately ascertaining race could provide better care.
Note that in this case race IS a biological construct, so perhaps its use is valid here. Regardless of how Nkechi Amare Diallo (formerly Rachel Dolezal) self-identifies, she’s not a targeted candidate for sickle cell treatment.
It could be helpful to some employers. Such a system could “use racial information to offer employers ethnically convenient services, then preventing the offending risk present in many cultural taboos.”
This is where things start to get problematic. Using Diallo as an example, race recognition software based upon her biological race would see no problem in offering her fried chicken and watermelon at a corporate function, but Diallo might have some different feelings about this. And it’s not guaranteed that ALL members of a particular race are affected by particular cultural taboos. (The text below, from 1965, was slightly edited.)
People used to think of (blacks) as going around with fried chicken in a paper bag, (Godfrey) Cambridge says. But things have changed. “Now,” he says, “we carry an attache case—with fried chicken in it. We ain’t going to give up everything just to get along with you people.”
I thought we had settled this over 20 years ago. Although we really didn’t.
While President Bush was primarily speaking about religious affiliation, he also made the point that we should not judge individuals based upon the color of their skin.
Yet we do.
If I may again return to our current sad reality, there have been allegations that Africans encountered segregation and substandard treatment when trying to flee Ukraine. (When speaking of “African,” note that concerns were raised by officials from Gabon, Ghana, and Kenya – not from Egypt, Libya, or Tunisia. Then again, Indian students also complained of substandard treatment.)
Many people in the United States and western Europe would find it totally unacceptable to treat people at borders and public areas differently by race.
Do we want to encourage this use case?
And if you feel that we should, please provide your picture. I want to see if your concerns are worthy of consideration.
As I’ve noted before, there are a number of facial recognition companies that claim to be the #1 NIST facial recognition vendor. I’m here to help you cut through the clutter so you know who the #1 NIST facial recognition vendor truly is.
Now how can ALL dozen-plus of these entities be number 1?
The NIST 1:1 and 1:N tests include many different accuracy and performance measurements, and each of the entities listed above placed #1 in at least one of those measurements. And the various databases, database sizes, and use cases measure very different things.
Visage Technologies was #1 in the 1:1 performance measurements for template generation time, in milliseconds, for 480×720 and 960×1440 data.
Meanwhile, NEC was #1 in the 1:N Identification (T>0) accuracy measurements for gallery border, probe border with a delta T greater than or equal to 10 years, N = 1.6 million.
Not to be confused with the 1:N Identification (T>0) accuracy measurements for gallery visa, probe border, N = 1.6 million, where the #1 algorithm was not from NEC.
And not to be confused with the 1:N Investigation (R = 1, T = 0) accuracy measurements for gallery border, probe border with a delta T greater than or equal to 10 years, N = 1.6 million, where the #1 algorithm was not from NEC.
And can I add a few more caveats?
First caveat: Since all of these tests are ongoing tests, you can probably find a slightly different set of #1 algorithms if you look at the January data, and you will probably find a slightly different set of #1 algorithms when the March data is available.
Second caveat: These are the results for the unqualified #1 NIST categories. You can add qualifiers, such as “#1 non-Chinese vendor” or “#1 western vendor” or “#1 U.S. vendor” to vault a particular algorithm to the top of the list.
Third caveat: You can add even more qualifiers, such as “within the top five NIST vendors” and (one I admit to having used before) “a top tier NIST vendor in multiple categories.” This can mean whatever you want it to mean. (As can “dramatically improved” algorithm, which may mean that you vaulted from position #300 to position #200 in one of the categories.)
Fourth caveat: Even if a particular NIST test applies to your specific use case, #1 performance on a NIST test does not guarantee that a facial recognition system supplied by that entity will yield #1 performance with your database in your environment. The algorithm sent to NIST may or may not make it into a production system. And even if it does, performance against a particular NIST test database may not yield the same results as performance against a Rhode Island criminal database, a French driver’s license database, or a Nigerian passport database. For more information on this, see Mike French’s LinkedIn article “Why agencies should conduct their own AFIS benchmarks rather than relying on others.”
So now that you know who the #1 NIST facial recognition vendor is, do you feel more knowledgeable?
As many of you know, there have been many claims about bias in facial recognition, which have even led to the formation of an Algorithmic Justice League.
Whoops, wrong Justice League. But you get the idea. “Gender Shades” and stuff like that, which I’ve written about before.
Back to Baker’s article, which makes a number of excellent points about bias in facial recognition, including the studies performed by NIST (referenced later in this post). But I loved one comparison in particular.
So technical improvements may narrow but not entirely eliminate disparities in face recognition. Even if that’s true, however, treating those disparities as a moral issue still leads us astray. To see how, consider pharmaceuticals. The world is full of drugs that work a bit better or worse in men than in women. Those drugs aren’t banned as the evil sexist work of pharma bros. If the gender differential is modest, doctors may simply ignore the difference, or they may recommend a different dose for women. And even when the differential impact is devastating—such as a drug that helps men but causes birth defects when taken by pregnant women—no one wastes time condemning those drugs for their bias. Instead, they’re treated like any other flawed tool, minimizing their risks by using a variety of protocols from prescription requirements to black box warnings.
As a (tangential) example of this, I recently read an article entitled “To begin addressing racial bias in medicine, start with the skin.” This article does not argue that we should ban dermatology because conditions are more often misdiagnosed in people with darker skin. Instead, the article argues that we should improve dermatology to reduce these biases.
In the same manner, the biometric industry and its stakeholders should strive to minimize bias in facial recognition and other biometrics, not ban them. See NIST’s study (NISTIR 8280, PDF) in this regard, referenced in Baker’s article.
In addition to what Baker said, let me again note that when judging the use of facial recognition, it should be compared against the alternatives. While I believe that alternatives (even passwords) should be offered, consider that automated facial recognition supported by trained examiner review is much more accurate than eyewitness (mis)identification. I don’t think we want to rely solely on the latter.
Because falsely imprisoning someone due to non-algorithmic witness misidentification is as bad as kryptonite.
Now this month, Oakland, California has also decided to increase police funding after similarly defunding the police in the past. The vote was not unanimous, but the City Council was very much in favor of the measure.
Not that Oakland has returned to the former status quo.
[Mayor Libby] Schaaf applauded the vote in a statement, saying that residents “spoke up for a comprehensive approach to public safety — one that includes prevention, intervention, and addressing crime’s root causes, as well as an adequately staffed police department.”
So while Oakland doesn’t believe that police are the solution to EVERY problem, it feels that police are necessary as part of a comprehensive approach. The city had 78 homicides in 2019, 109 in 2020, and 129 so far in 2021. Granted, it’s difficult to compare year-over-year statistics in the COVID age, but clearly defunding the police hasn’t been a major success.
But if crime is to be addressed by a comprehensive approach including “prevention, intervention, … addressing crime’s root causes, … (and) an adequately staffed police department”…
…what about police technology?
What about police technology?
Portland and Oakland have a lot in common. Not only have they defunded and re-funded the police, but both have participated in the “facial recognition is evil” movement.
Oakland was the third U.S. city to limit the use of facial recognition, back in July 2019.
A city ordinance … prohibits the city of Oakland from “acquiring, obtaining, retaining, requesting, or accessing” facial recognition technology….
Portland joined the movement later, in September 2020. But when it did, it made Oakland and other cities look like havens of right-wing totalitarianism.
The Portland City Council has passed the toughest facial recognition ban in the US, blocking both public and private use of the technology. Other cities such as Boston, San Francisco, and Oakland have passed laws barring public institutions from using facial recognition, but Portland is the first to prohibit private use.
Mayor Ted Wheeler noted, “Portlanders should never be in fear of having their right of privacy be exploited by either their government or by a private institution.”
Coincidentally, I was talking to someone this afternoon about some of the marketing work that I performed in 2015 for then-MorphoTrak’s video analytics offering. The market analysis included both government customers (some with acronyms, some without) and potential private customers such as large retail chains.
In 2015, we hadn’t yet seen the movements that would result in dampening both market segments in cities like Portland. (Perpetual Lineup didn’t appear until 2016, while Gender Shades didn’t appear until 2018.)
Flash – ah ah, robber of the universe
But there’s something else that I didn’t imagine in 2015, and that’s the new rage that’s sweeping the nation.
Specifically, flash mobs. And not the fun kind, but the “flash rob” kind.
District Attorney Chesa Boudin, who is facing a recall election in June, called this weekend’s brazen robberies “absolutely unacceptable” and was preparing tough charges against those arrested during the criminal bedlam in Union Square….
Boudin said his office was eagerly awaiting more arrests and plans to announce felony charges on Tuesday. He said 25 individuals are still at large in connection with the Union Square burglaries on Friday night….
“We know that when it comes to property crime in particular, sadly San Francisco police are spread thin,” said Boudin. “They’re not able to respond to every single 911 call, they’re only making arrests at about 3% of reported thefts.”
And here’s the fourth and final part of my repurposing exercise. See parts one, two, and three if you missed them.
This post is adapted from Bredemarket’s November 10, 2021 submitted comments on DHS-2021-0015-0005, Information Collection Request, Public Perceptions of Emerging Technology. As I concluded my comments, I stated the following.
Of course, even the best efforts of the Department of Homeland Security (DHS) will not satisfy some members of the public. I anticipate that many of the respondents to this ICR will question the need to use biometrics to identify individuals, or even the need to identify individuals at all, believing that the societal costs outweigh the benefits.
But before undertaking such drastic action, the consequences of following these alternative paths must be considered.
Taking an example outside of the non-criminal travel interests of DHS, some people prefer to use human eyewitness identification rather than computerized facial recognition.
However, eyewitness identification itself has clear issues of bias. The Innocence Project has documented many cases in which eyewitness (mis)identification has resulted in wrongful criminal convictions which were later overturned by biometric evidence.
Mistaken eyewitness identifications contributed to approximately 69% of the more than 375 wrongful convictions in the United States overturned by post-conviction DNA evidence.
Inaccurate eyewitness identifications can confound investigations from the earliest stages. Critical time is lost while police are distracted from the real perpetrator, focusing instead on building the case against an innocent person.
Despite solid and growing proof of the inaccuracy of traditional eyewitness ID procedures – and the availability of simple measures to reform them – traditional eyewitness identifications remain among the most commonly used and compelling evidence brought against criminal defendants.
This post is adapted from Bredemarket’s November 10, 2021 submitted comments on DHS-2021-0015-0005, Information Collection Request, Public Perceptions of Emerging Technology. See my first and second posts on the topic.
DHS asked respondents to address five questions, including this one:
(2) will this information be processed and used in a timely manner;
Here is part of my response.
I am answering this question from the perspective of a person crossing the border or boarding a plane.
From this perspective, you can ask whether the use of biometric technologies makes the entire process faster, or slower.
Before biometric technologies became available, a person would cross a border or board a plane either by conducting no security check at all, or by having a human conduct a manual security check using the document(s) provided by an individual.
Unless a person was diverted to a secondary inspection process, identification of the person (excluding questions such as “What is your purpose for entering the United States?”) could be accomplished in a few seconds.
However, manual security checks are much less accurate than technological solutions, as will be illustrated in a future post.
With biometric technologies, it is necessary to measure both the time to acquire the biometric data (in this case a facial image) and the time to compare the acquired data against the known data for the person (from a passport, passenger manifest, or database).
The time to acquire biometric data continues to improve. In some cases, the biometric data can be acquired “on the move” as the person is walking toward a gate or other entry area, thus requiring no additional time from the person’s perspective.
The time to compare biometric data can vary. If the source of the known data (such as the passport) is with the person, then comparison can be instantaneous from the person’s perspective. If the source of the known data is a database in a remote location, then the speed of comparison depends upon many factors, including network connections and server computation times. Naturally, DHS designs its systems to minimize this time, ensuring minimal or no delay from the person’s perspective. Of course, a network or system failure can adversely affect this.
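The two components described above (acquisition time plus comparison time, with on-the-move capture effectively free from the traveler's perspective) can be expressed as a toy latency model. All numbers below are invented for illustration; they are not DHS figures.

```python
# A toy latency model for the verification flow described above:
# total perceived time = acquisition + comparison, where "on the move"
# capture overlaps with walking and so contributes nothing from the
# traveler's perspective. Example values are hypothetical.

def perceived_delay_ms(acquire_ms: float, compare_ms: float,
                       on_the_move: bool) -> float:
    """Delay the traveler actually experiences, in milliseconds."""
    # On-the-move capture happens while the traveler walks, so only the
    # comparison step is felt; otherwise both steps add up.
    return compare_ms if on_the_move else acquire_ms + compare_ms

# On-the-move capture with a local passport-chip comparison...
print(perceived_delay_ms(acquire_ms=800, compare_ms=50, on_the_move=True))    # 50.0 ms felt
# ...versus a stop-and-pose capture with a remote database lookup.
print(perceived_delay_ms(acquire_ms=800, compare_ms=400, on_the_move=False))  # 1200 ms felt
```

The model also makes the failure mode explicit: when the comparison requires a remote database, network and server delays inflate `compare_ms`, which is the one term the traveler always experiences.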
In short, biometric evaluation is as fast as, if not faster than, manual processes (provided no network or system failure occurs), and is more accurate than human processes.