Two companies that can provide friction ridge/face marketing and writing services, now that Bredemarket won’t

I recently announced a change in business scope for my DBA Bredemarket. Specifically, Bredemarket will no longer accept client work for solutions that identify individuals using (a) friction ridges (including fingerprints and palm prints) and/or (b) faces.

This impacts some companies that previously did business with me, and it can potentially impact other companies that want to do business with me. If you are one of these companies, I am no longer available for that work.

Fingerprint evidence
From https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.500-290e3.pdf (a/k/a “leisure reading for biometric system professionals”).

Since Bredemarket will no longer help you with your friction ridge/face marketing and writing needs, who will? Who has the expertise to help you? I have two suggestions.

Tandem Technical Writing

Do you need someone who is not only an excellent communicator, but who also knows the ins and outs of AFIS and ABIS systems? Turn to Tandem Technical Writing LLC.

I first met Laurel Jew back in 1995 when I started consulting with, and then working for, Printrak. In fact, I joined Printrak when Laurel went on maternity leave. (I was one of two people who joined Printrak at that time. As I’ve previously noted, Laurel needed two people to replace her.)

Laurel worked for Printrak and its predecessor De La Rue Printrak for several years in its proposals organization.

Today, her biometric and communication experience is available to you. Tandem Technical Writing provides its clients with “15 years of proposal writing and biometrics technology background with high win %.”

Why does this matter to you? Because Laurel not only understands your biometric business, but also understands how to communicate to your biometric clients. Not many people can do both, so Laurel is a rarity in this industry.

The Tandem Technical Writing website is here.

To schedule a consultation, click here.

Applied Forensic Services

Perhaps your needs are more technical. Maybe you need someone who is a certified forensics professional, and who has also implemented many biometric systems. If that is your need, then you will want to consider Applied Forensic Services LLC.

I met Mike French in 2009 when Safran acquired Motorola’s biometric business and merged it into its U.S. subsidiary Sagem Morpho, creating MorphoTrak (“Morpho” + “Printrak”). I worked with him at MorphoTrak and IDEMIA until 2020.

Unlike me, Mike is a true forensic professional. (See his LinkedIn profile.) Back in 1994, when I was still learning to spell AFIS, Mike joined the latent print unit at the King County (Washington) Sheriff’s Office, where he spent over a decade before joining Sagem Morpho. He is an IAI-certified Latent Print Examiner, an IEEE-certified Biometric Professional, and an active participant in IAI and other forensic activities. I’ve previously referenced his advice on why agencies should conduct their own AFIS benchmarks.

Why does this matter to you? Because Mike’s consultancy, Applied Forensic Services, can provide expert advice on biometric procurements and implementation, ensuring that you get the biometric system that addresses your needs.

Applied Forensic Services offers the following consulting services:

The Applied Forensic Services website is here.

To schedule a consultation, click here.

Yes, there are others

There are other companies that can help you with friction ridge and face marketing, writing, and consultation services.

I specifically mention these two because I have worked with their principals both as an employee during my Printrak-to-IDEMIA years, and as a sole proprietor during my Bredemarket years. Laurel and Mike are both knowledgeable, dedicated, and can add value to your firm or agency.

And, unlike some experienced friction ridge and face experts, Laurel and Mike are still working and have not retired. (“Where have you gone, Peter Higgins…”)

Bredemarket announcement: change in business scope

Effective immediately:

  1. Bredemarket does not accept client work for solutions that identify individuals using (a) friction ridges (including fingerprints and palm prints) and/or (b) faces.
  2. Bredemarket does not accept client work for solutions that identify individuals using secure documents, such as driver’s licenses or passports. 

Pangiam/Trueface: when version 1.0 of the SDK is the REVISED version

After an absence from the Bredemarket blog (no appearances since November), Pangiam is making an appearance again, based on announcements by Biometric Update and by Trueface itself about a new revision of the Trueface facial recognition SDK.

The new revision includes a number of features, including a new model for masked faces and some technical improvements.

So what is this revision called?

Version 1.0.

“Wait,” you’re asking yourself. “Version 1.0 is the NEW version? It sounds like the ORIGINAL version. Shouldn’t the new version be 2.0?”

Well, no. The original version was V0. Trueface is now ready to release V1.

Well, almost ready.

If you go to the Trueface SDK reference page, you’ll see that Trueface releases are categorized as “alpha,” “beta,” and “stable.”

  • When I viewed the page on the afternoon of March 28, the latest stable release was 0.33.14634.
  • If you want to use the version 1.0 that is being “introduced” (Pangiam’s word), you have to go to the latest beta release, which was 1.0.16286.
  • And if you want to go bleeding edge alpha, you can get release 1.1.16419.

(Again, this was on the afternoon of March 28, and may change by the time you read this.)
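Since these releases are plain dotted version strings, the V0-to-V1 jump is easy to see programmatically. Here is a minimal sketch; only the version numbers come from the Trueface page cited above, and the comparison logic is my own illustration.

```python
# Sketch: comparing Trueface-style "major.minor.build" version strings.
# The version numbers are the ones cited in this post as of March 28;
# the channel-comparison logic is an illustration, not Trueface's code.

def parse_version(v: str) -> tuple:
    """Split a dotted version string into comparable integer parts."""
    return tuple(int(part) for part in v.split("."))

releases = {
    "stable": "0.33.14634",
    "beta": "1.0.16286",
    "alpha": "1.1.16419",
}

# Tuples compare element by element, so 1.0.x sorts after 0.33.x --
# which is why the "introduced" V1 beta outranks the latest stable V0.
newest_channel = max(releases, key=lambda ch: parse_version(releases[ch]))
```

Note that a naive string comparison would get this wrong ("0.33" sorts after "1.0" alphabetically), which is why the parts are converted to integers first.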

Now most biometric vendors don’t expose this much detail about their software. Some don’t even provide any release information, especially for products with long delivery times where the version that a customer will eventually get doesn’t even have locked-down requirements yet. But Pangiam has chosen to provide this level of detail.

Oh, and Pangiam/Trueface also actively participates in the ongoing NIST FRVT testing. Information on the 1:1 performance of the trueface-003 algorithm can be found here. Information on the 1:N performance of the trueface-000 algorithm can be found here.

Clearview AI and Ukraine: when a company pursues the interests of its home country

In the security world (biometrics, access control, cybersecurity, and other areas), there has been a lot of discussion about the national origins and/or ownership of various security products.

If a particular product originates in country X, then will the government of country X require the product to serve the national interests of country X?

You see the effects of this everywhere:

  • FOCI mitigation at U.S. subsidiaries of foreign companies.
  • Marketing materials that state that a particular product is the best “among Western vendors” (which may or may not explain why this is important – see the second caveat here for examples).
  • European Union regulations that serve to diminish American influence.
  • The policies of certain countries (China, Iran, North Korea, Russia) that serve to eliminate American influence entirely.

Clearview AI, Ukraine, and Russia

Clearview AI is a U.S. company, but its relationship with the U.S. government is, in Facebook terms, “complicated.”

It’s complicated primarily because “the U.S. government” consists of a number of governments at the federal, state, and local level, and a number of agencies within these governments that sometimes work at cross-purposes with one another. Some U.S. government agencies love Clearview AI, while others hate it.

However, according to Reuters, the Ukrainian government can be counted in the list of governments that love Clearview AI.

Ukraine is receiving free access to Clearview AI’s powerful search engine for faces, letting authorities potentially vet people of interest at checkpoints, among other uses, added Lee Wolosky, an adviser to Clearview and former diplomat under U.S. presidents Barack Obama and Joe Biden.

From https://www.reuters.com/technology/exclusive-ukraine-has-started-using-clearview-ais-facial-recognition-during-war-2022-03-13/

But before you assume that Clearview is just helping anybody, Reuters also pointed this out.

Clearview said it had not offered the technology to Russia…

From https://www.reuters.com/technology/exclusive-ukraine-has-started-using-clearview-ais-facial-recognition-during-war-2022-03-13/

Here is an example of a company that is supporting certain foreign policies of its home country’s government. Depending upon your own national origin, you may love this example, or you may hate it.

Of course, even some who support U.S. actions in Ukraine may not support Clearview AI’s actions in Ukraine. But that’s another story.

Why I find “race recognition” problematic

Let me start by admitting to my, um, bias.

For the last twenty-five plus years, I have been involved in the identification of individuals.

  • Who is the person who is going through the arrest/booking process?
  • Who is the person who claims to be entitled to welfare benefits?
  • Who is the person who wants to enter the country?
  • Who is the person who is exiting the country? (Yes, I remember the visa overstay issue.)
  • Who is the person who wants to enter the computer room in the office building?
  • Who is the person who is applying for a driver’s license or passport?
  • Who is the person who wants to enter the sports stadium or concert arena?

These are just a few of the problems that I have worked on solving over the last twenty-five plus years, all of which are tied to individual identity.

From that perspective, I really don’t care whether the person entering the stadium, computer room, or country is female, mixed race, Muslim, left-handed, or anything else. I just want to know whether this is the individual that he/she/they claims to be.

If you’ve never seen the list of potential candidates generated by a top-tier facial recognition program, you may be shocked when you see it. That list of candidates may include white men, Asian women, and everything in between. “Well, that’s wrong,” you may say to yourself. “How can the results include people of multiple races and genders?” It’s because the algorithm doesn’t care about race and gender. Think about it – what if a victim THINKS that he was attacked by a white male, but the attacker was really an Asian female? Identify the individual, not the race or gender.

From http://gendershades.org/. Yes, http.

So when Gender Shades came out, stating that IBM, Microsoft, and Face++ AI services had problems recognizing the gender of people, especially those with darker skin, my reaction was “So what?”

(Note that this is a different question than the question of how an algorithm identifies individuals of different genders, races, and ages, which has been addressed by NIST.)

But some people persist in addressing biometrics’ “failure” to properly identify genders and races, ignoring the fact that both gender and race have become social rather than biological constructs. Is the Olympian Jenner male, female, or something else? What are your personal pronouns? What happens when a mixed race person identifies with one race rather than another? And aren’t we all mixed race anyway?

The latest study from AlBdairi et al on computational methods for ethnicity identification

But there’s still a great interest in “race recognition.”

As Jim Nash of Biometric Update notes, a team of scientists has published an open access paper entitled “Face Recognition Based on Deep Learning and FPGA for Ethnicity Identification.”

The authors claim that their study is “the first image collection gathered specifically to address the ethnicity identification problem.”

“But what of the NIST demographic study cited above?” you may ask. The NIST study did NOT have the races of the individuals, but used the individuals’ countries of origin as a proxy for race. Then again, it is possible that this study did the same thing.

Despite the fact that there are several large-scale face image databases accessible online, none of these databases are acceptable for the purpose of the conducted study in our research. Furthermore, 3141 photographs were gathered from a variety of sources. Specifically, 1081, 1021, and 1039 Chinese, Pakistani, and Russian face photos were gathered, respectively. 

From https://www.mdpi.com/2076-3417/12/5/2605/htm

There was no mention of whether any of the Chinese face photos were Caucasian…or how the researchers could tell that they were Caucasian.

Anyway, if you’re interested in the science behind using Deep Convolutional Neural Network (DCNN) models and field-programmable gate arrays (FPGAs) to identify ethnicity, read the paper. Or skip to the results.

The experimental results reported that our model outperformed all the methods of state-of-the-art, achieving an accuracy and F1 score value of 96.9 percent and 94.6 percent, respectively.

From https://www.mdpi.com/2076-3417/12/5/2605/htm
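For readers unfamiliar with the two figures quoted above, here is how accuracy and F1 score are computed from a binary confusion matrix. The counts below are hypothetical, chosen purely to illustrate the formulas; they are not taken from the AlBdairi et al. paper.

```python
# Sketch: computing "accuracy" and "F1 score" from a binary confusion
# matrix. The counts used below are hypothetical illustrations.

def accuracy(tp: int, fp: int, fn: int, tn: int) -> float:
    """Fraction of all predictions that were correct."""
    return (tp + tn) / (tp + fp + fn + tn)

def f1_score(tp: int, fp: int, fn: int) -> float:
    """Harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical counts: 90 true positives, 5 false positives,
# 5 false negatives, 100 true negatives.
acc = accuracy(90, 5, 5, 100)   # 190 correct out of 200
f1 = f1_score(90, 5, 5)
```

Note that accuracy and F1 can diverge substantially when the classes are imbalanced, which is one reason papers report both.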

But this doesn’t answer the question I raised earlier.

Three possible use cases for race recognition, two of which are problematic

Why would anyone want to identify ethnicity or engage in race recognition? Jim Nash of Biometric Update summarizes three possible use cases for doing this, which I will address one by one. TL;DR two of the use cases are problematic.

The code…could find a role in the growing field of race-targeted medical treatments and pharmacogenomics, where accurately ascertaining race could provide better care.

From https://www.biometricupdate.com/202203/identifying-ethnicity-problematic-so-scientists-write-race-recognition-code

Note that in this case race IS a biological construct, so perhaps its use is valid here. Regardless of how Nkechi Amare Diallo (formerly Rachel Dolezal) self-identifies, she’s not a targeted candidate for sickle cell treatment.

It could be helpful to some employers. Such a system could “use racial information to offer employers ethnically convenient services, then preventing the offending risk present in many cultural taboos.”

From https://www.biometricupdate.com/202203/identifying-ethnicity-problematic-so-scientists-write-race-recognition-code

This is where things start to get problematic. Using Diallo as an example, race recognition software based upon her biological race would see no problem in offering her fried chicken and watermelon at a corporate function, but Diallo might have some different feelings about this. And it’s not guaranteed that ALL members of a particular race are affected by particular cultural taboos. (The text below, from 1965, was slightly edited.)

Godfrey Cambridge. Retrieved from https://www.imdb.com/name/nm0131387/

People used to think of (blacks) as going around with fried chicken in a paper bag, (Godfrey) Cambridge says. But things have changed. “Now,” he says, “we carry an attache case—with fried chicken in it. We ain’t going to give up everything just to get along with you people.”

From http://content.time.com/time/subscriber/article/0,33009,839260,00.html. Yes, http.

While some employees may be pleased that they receive a particular type of treatment because of their biological race, others may not be pleased at all.

So let’s move on to Nash’s third use case for race recognition. Hold on to your seats.

Ultimately, however, the broadest potential mission for race recognition would be in security — at border stations and deployed in public-access areas, according to the report.

From https://www.biometricupdate.com/202203/identifying-ethnicity-problematic-so-scientists-write-race-recognition-code

I thought we had settled this over 20 years ago. Although we really didn’t.

From https://www.youtube.com/watch?v=_rkmIAnfDVY

While President Bush was primarily speaking about religious affiliation, he also made the point that we should not judge individuals based upon the color of their skin.

Yet we do.

If I may again return to our current sad reality, there have been allegations that Africans encountered segregation and substandard treatment when trying to flee Ukraine. (When speaking of “African,” note that concerns were raised by officials from Gabon, Ghana, and Kenya – not from Egypt, Libya, or Tunisia. Then again, Indian students also complained of substandard treatment.)

Many people in the United States and western Europe would find it totally unacceptable to treat people at borders and public areas differently by race.

Do we want to encourage this use case?

And if you feel that we should, please provide your picture. I want to see if your concerns are worthy of consideration.

Who is THE #1 NIST facial recognition vendor?

As I’ve noted before, there are a number of facial recognition companies that claim to be the #1 NIST facial recognition vendor. I’m here to help you cut through the clutter so you know who the #1 NIST facial recognition vendor truly is.

You can confirm this information yourself by visiting the NIST FRVT 1:1 Verification and FRVT 1:N Identification pages. FRVT, by the way, stands for “Face Recognition Vendor Test.”

So I can announce to you that as of February 23, 2022, the #1 NIST facial recognition vendor is Cloudwalk.

And Sensetime.

And Beihang University ERCACAT.

And Cubox.

And Adera.

And Chosun University.

And iSAP Solution Corporation.

And Bitmain.

And Visage Technologies.

And Expasoft LLC.

And Paravision.

And NEC.

And Ptakuratsatu.

And Ayonix.

And Rank One.

And Dermalog.

And Innovatrics.

Now how can ALL dozen-plus of these entities be number 1?

Easy.

The NIST 1:1 and 1:N tests include many different accuracy and performance measurements, and each of the entities listed above placed #1 in at least one of these measurements. And all of the databases, database sizes, and use cases measure very different things.

Transportation Security Administration Checkpoint at John Glenn Columbus International Airport. By Michael Ball – Own work, CC0, https://commons.wikimedia.org/w/index.php?curid=77279000

For example:

  • Visage Technologies was #1 in the 1:1 performance measurements for template generation time, in milliseconds, for 480×720 and 960×1440 data.
  • Meanwhile, NEC was #1 in the 1:N Identification (T>0) accuracy measurements for gallery border, probe border with a delta T greater than or equal to 10 years, N = 1.6 million.
  • Not to be confused with the 1:N Identification (T>0) accuracy measurements for gallery visa, probe border, N = 1.6 million, where the #1 algorithm was not from NEC.
  • And not to be confused with the 1:N Investigation (R = 1, T = 0) accuracy measurements for gallery border, probe border with a delta T greater than or equal to 10 years, N = 1.6 million, where the #1 algorithm was not from NEC.
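To see how a dozen-plus entities can all legitimately claim the top spot, consider this minimal sketch. The vendor names and scores are hypothetical, not actual FRVT results; the point is simply that each metric crowns its own leader.

```python
# Sketch: why many vendors can each be "#1." Every metric produces its
# own leader. Vendor names and scores are hypothetical, NOT FRVT data.

results = {
    "VendorA": {"template_ms": 50.0, "fnir_border_10yr": 0.020},
    "VendorB": {"template_ms": 120.0, "fnir_border_10yr": 0.008},
    "VendorC": {"template_ms": 80.0, "fnir_border_10yr": 0.015},
}

def leader(metric: str) -> str:
    # Lower is better for both timing and error-rate metrics.
    return min(results, key=lambda vendor: results[vendor][metric])

fastest = leader("template_ms")          # wins on template generation
most_accurate = leader("fnir_border_10yr")  # wins on identification error
```

With only two metrics and three hypothetical vendors, two different "#1" vendors already emerge; multiply that across dozens of FRVT measurement categories and the marketing claims write themselves.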

And can I add a few more caveats?

First caveat: Since all of these tests are ongoing tests, you can probably find a slightly different set of #1 algorithms if you look at the January data, and you will probably find a slightly different set of #1 algorithms when the March data is available.

Second caveat: These are the results for the unqualified #1 NIST categories. You can add qualifiers, such as “#1 non-Chinese vendor” or “#1 western vendor” or “#1 U.S. vendor” to vault a particular algorithm to the top of the list.

Third caveat: You can add even more qualifiers, such as “within the top five NIST vendors” and (one I admit to having used before) “a top tier NIST vendor in multiple categories.” This can mean whatever you want it to mean. (As can “dramatically improved” algorithm, which may mean that you vaulted from position #300 to position #200 in one of the categories.)

Fourth caveat: Even if a particular NIST test applies to your specific use case, #1 performance on a NIST test does not guarantee that a facial recognition system supplied by that entity will yield #1 performance with your database in your environment. The algorithm sent to NIST may or may not make it into a production system. And even if it does, performance against a particular NIST test database may not yield the same results as performance against a Rhode Island criminal database, a French driver’s license database, or a Nigerian passport database. For more information on this, see Mike French’s LinkedIn article “Why agencies should conduct their own AFIS benchmarks rather than relying on others.”

So now that you know who the #1 NIST facial recognition vendor is, do you feel more knowledgeable?

Although I’ll grant that a NIST accuracy or performance claim is better than some other claims, such as self-test results.

Why isn’t there a Pharmaceutical Justice League?

In case you missed it, Blake Hall of ID.me recently shared an article by Stewart Baker about “The Flawed Claims About Bias in Facial Recognition.”

As many of you know, there have been many claims about bias in facial recognition, which have even led to the formation of an Algorithmic Justice League.

By Jason Fabok and Alex Sinclair / DC Comics – [1], Fair use, https://en.wikipedia.org/w/index.php?curid=54168863

Whoops, wrong Justice League. But you get the idea. “Gender Shades” and stuff like that, which I’ve written about before.

Back to Baker’s article, which makes a number of excellent points about bias in facial recognition, including the studies performed by NIST (referenced later in this post). I particularly loved one comparison that Baker made.

So technical improvements may narrow but not entirely eliminate disparities in face recognition. Even if that’s true, however, treating those disparities as a moral issue still leads us astray. To see how, consider pharmaceuticals. The world is full of drugs that work a bit better or worse in men than in women. Those drugs aren’t banned as the evil sexist work of pharma bros. If the gender differential is modest, doctors may simply ignore the difference, or they may recommend a different dose for women. And even when the differential impact is devastating—such as a drug that helps men but causes birth defects when taken by pregnant women—no one wastes time condemning those drugs for their bias. Instead, they’re treated like any other flawed tool, minimizing their risks by using a variety of protocols from prescription requirements to black box warnings. 

From https://www.lawfareblog.com/flawed-claims-about-bias-facial-recognition

As a (tangential) example of this, I recently read an article entitled “To begin addressing racial bias in medicine, start with the skin.” This article does not argue that we should ban dermatology because conditions are more often misdiagnosed in people with darker skin. Instead, the article argues that we should improve dermatology to reduce these biases.

In the same manner, the biometric industry and its stakeholders should strive to minimize bias in facial recognition and other biometrics, not ban them. See NIST’s study (NISTIR 8280, PDF) in this regard, referenced in Baker’s article.

In addition to what Baker said, let me again note that when judging the use of facial recognition, it should be compared against the alternatives. While I believe that alternatives, even passwords, should be offered, consider that automated facial recognition supported by trained examiner review is much more accurate than eyewitness (mis)identification. I don’t think we want to rely solely on eyewitnesses.

Because falsely imprisoning someone due to non-algorithmic witness misidentification is as bad as kryptonite.

By Apparent scan made by the original uploader User:Kryptoman., Fair use, https://en.wikipedia.org/w/index.php?curid=11736865

From defund the police to fund the police. But what about technology?

There’s been a tactical reversal by some cities.

Defund the police, then re-fund the police

In November, the Portland, Oregon City Council unanimously voted to increase police funding, a little over a year after the city reduced police funding in the wake of the Black Lives Matter movement.

Now this month, Oakland, California has also decided to increase police funding after similarly defunding the police in the past. This vote was not unanimous, but the City Council was very much in favor of the measure.

By Taymaz Valley – https://www.flickr.com/photos/taymazvalley/49974424258, CC BY 2.0, https://commons.wikimedia.org/w/index.php?curid=91013003

Not that Oakland has returned to the former status quo.

[Mayor Libby] Schaaf applauded the vote in a statement, saying that residents “spoke up for a comprehensive approach to public safety — one that includes prevention, intervention, and addressing crime’s root causes, as well as an adequately staffed police department.”

From https://www.police1.com/patrol-issues/articles/oakland-backtracks-votes-to-add-police-as-crimes-surge-MDirxJZAHV41wyxg/

So while Oakland doesn’t believe that police are the solution to EVERY problem, it feels that police are necessary as part of a comprehensive approach. The city had 78 homicides in 2019, 109 in 2020, and 129 so far in 2021. Granted that it’s difficult to compare year-over-year statistics in the COVID age, but clearly defunding the police hasn’t been a major success.

But if crime is to be addressed by a comprehensive approach including “prevention, intervention, … addressing crime’s root causes, … (and) an adequately staffed police department”…

…what about police technology?

What about police technology?

Portland and Oakland have a lot in common. Not only have they defunded and re-funded the police, but both have participated in the “facial recognition is evil” movement.

Oakland was the third U.S. city to limit the use of facial recognition, back in July 2019.

A city ordinance … prohibits the city of Oakland from “acquiring, obtaining, retaining, requesting, or accessing” facial recognition technology….

From https://www.vice.com/en/article/zmpaex/oakland-becomes-third-us-city-to-ban-facial-recognition-xz

Portland joined the movement later, in September 2020. But when it did, it made Oakland and other cities look like havens of right-wing totalitarianism.

The Portland City Council has passed the toughest facial recognition ban in the US, blocking both public and private use of the technology. Other cities such as Boston, San Francisco, and Oakland have passed laws barring public institutions from using facial recognition, but Portland is the first to prohibit private use.

From https://www.theverge.com/2020/9/9/21429960/portland-passes-strongest-facial-recognition-ban-us-public-private-technology

The Mayor of Portland, Ore., Ted Wheeler. By Naval Surface Warriors – 180421-N-UK248-023, Public Domain, https://commons.wikimedia.org/w/index.php?curid=91766933

Mayor Ted Wheeler noted, “Portlanders should never be in fear of having their right of privacy be exploited by either their government or by a private institution.”

Coincidentally, I was talking to someone this afternoon about some of the marketing work that I performed in 2015 for then-MorphoTrak’s video analytics offering. The market analysis included both government customers (some with acronyms, some without) and potential private customers such as large retail chains.

In 2015, we hadn’t yet seen the movements that would result in dampening both market segments in cities like Portland. (Perpetual Lineup didn’t appear until 2016, while Gender Shades didn’t appear until 2018.)

Flash – ah ah, robber of the universe

But there’s something else that I didn’t imagine in 2015, and that’s the new rage that’s sweeping the nation.

Flash!

By Dynamite Entertainment, Fair use, https://en.wikipedia.org/w/index.php?curid=57669050
Normally I add the music to the end of the post, but I stuck it in the middle this time as a camp break before this post suddenly gets really serious. From https://www.youtube.com/watch?v=LfmrHTdXgK4

Specifically, flash mobs. And not the fun kind, but the “flash rob” kind.

District Attorney Chesa Boudin, who is facing a recall election in June, called this weekend’s brazen robberies “absolutely unacceptable” and was preparing tough charges against those arrested during the criminal bedlam in Union Square….

Boudin said his office was eagerly awaiting more arrests and plans to announce felony charges on Tuesday. He said 25 individuals are still at large in connection with the Union Square burglaries on Friday night….

“We know that when it comes to property crime in particular, sadly San Francisco police are spread thin,” said Boudin. “They’re not able to respond to every single 911 call, they’re only making arrests at about 3% of reported thefts.”

From https://sanfrancisco.cbslocal.com/2021/11/23/smash-and-grab-embattled-san-francisco-district-attorney-chesa-boudin-prosecution/

So there are no arrests in 97% of reported thefts in San Francisco.

To be honest, this is not a “new” rage that is sweeping the nation.

In fact, “flash robs” were occurring as early as 2012 in places like…Portland, Oregon.

If only there were a technology that could recognize flash rob participants and other thieves even when the police WEREN’T present.

A technology that is continuously tested by the U.S. government for accuracy, demographic effects (see this PDF and the individual “report cards” from the 1:1 tests), and other factors.

Does anyone know of any technology that would fill this need?

Perhaps Oakland and Portland could adopt it.

The dangers of removing facial recognition and artificial intelligence from DHS solutions (DHS ICR part four)

And here’s the fourth and final part of my repurposing exercise. See parts one, two, and three if you missed them.

This post is adapted from Bredemarket’s November 10, 2021 submitted comments on DHS-2021-0015-0005, Information Collection Request, Public Perceptions of Emerging Technology. As I concluded my request, I stated the following.

Of course, even the best efforts of the Department of Homeland Security (DHS) will not satisfy some members of the public. I anticipate that many of the respondents to this ICR will question the need to use biometrics to identify individuals, or even the need to identify individuals at all, believing that the societal costs outweigh the benefits.

By Banksy – One Nation Under CCTV, CC BY-SA 2.0, https://commons.wikimedia.org/w/index.php?curid=3890275

But before undertaking such drastic action, the consequences of following these alternative paths must be considered.

Taking an example outside of the non-criminal travel interests of DHS, some people prefer to use human eyewitness identification rather than computerized facial recognition.

By Zhe Wang, Paul C. Quinn, James W. Tanaka, Xiaoyang Yu, Yu-Hao P. Sun, Jiangang Liu, Olivier Pascalis, Liezhong Ge and Kang Lee – https://www.frontiersin.org/articles/10.3389/fpsyg.2015.00559/full, CC BY 4.0, https://commons.wikimedia.org/w/index.php?curid=96233011

However, eyewitness identification itself has clear issues of bias. The Innocence Project has documented many cases in which eyewitness (mis)identification has resulted in wrongful criminal convictions which were later overturned by biometric evidence.

Archie Williams moments after his exoneration on March 21, 2019. Photo by Innocence Project New Orleans. From https://innocenceproject.org/fingerprint-database-match-establishes-archie-williams-innocence/

Mistaken eyewitness identifications contributed to approximately 69% of the more than 375 wrongful convictions in the United States overturned by post-conviction DNA evidence.

Inaccurate eyewitness identifications can confound investigations from the earliest stages. Critical time is lost while police are distracted from the real perpetrator, focusing instead on building the case against an innocent person.

Despite solid and growing proof of the inaccuracy of traditional eyewitness ID procedures – and the availability of simple measures to reform them – traditional eyewitness identifications remain among the most commonly used and compelling evidence brought against criminal defendants.

Innocence Project, Eyewitness Identification Reform, https://innocenceproject.org/eyewitness-identification-reform/

For more information on eyewitness misidentification, see my November 24, 2020 post on Archie Williams (pictured above) and Uriah Courtney.

Do we really want to dump computerized artificial intelligence and facial recognition, only to end up with manual identification processes that are proven to be even worse?

Biometrics enhances accuracy without adversely impacting timeliness (DHS ICR part three)

This post is adapted from Bredemarket’s November 10, 2021 submitted comments on DHS-2021-0015-0005, Information Collection Request, Public Perceptions of Emerging Technology. See my first and second posts on the topic.

DHS asked respondents to address five questions, including this one:

(2) will this information be processed and used in a timely manner;

Here is part of my response.

I am answering this question from the perspective of a person crossing the border or boarding a plane.

During the summer of 2017, CBP conducted biometric exit facial recognition technical demonstrations with various airlines and airports throughout the country. Here, CBP Officer Michael Shamma answers a London-bound American Airlines passenger’s questions at Chicago O’Hare International Airport. Photo by Brian Bell. From https://www.cbp.gov/frontline/cbp-biometric-testing

From this perspective, you can ask whether the use of biometric technologies makes the entire process faster, or slower.

Before biometric technologies became available, a person would cross a border or board a plane either by conducting no security check at all, or by having a human conduct a manual security check using the document(s) provided by an individual.

  • Unless a person was diverted to a secondary inspection process, manual identification of the person (excluding questions such as “What is your purpose for entering the United States?”) could be accomplished in a few seconds.
  • However, manual security checks are much less accurate than technological solutions, as will be illustrated in a future post.

With biometric technologies, it is necessary to measure both the time to acquire the biometric data (in this case a facial image) and the time to compare the acquired data against the known data for the person (from a passport, passenger manifest, or database).

  • The time to acquire biometric data continues to improve. In some cases, the biometric data can be acquired “on the move” as the person is walking toward a gate or other entry area, thus requiring no additional time from the person’s perspective.
  • The time to compare biometric data can vary. If the source of the known data (such as the passport) is with the person, then comparison can be instantaneous from the person’s perspective. If the source of the known data is a database in a remote location, then the speed of comparison depends upon many factors, including network connections and server computation times. Naturally, DHS designs its systems to minimize this time, ensuring minimal or no delay from the person’s perspective. Of course, a network or system failure can adversely affect this.

In short, biometric evaluation is as fast if not faster than manual processes (provided no network or system failure occurs), and is more accurate than human processes.
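The two-component timing model described above can be sketched as follows. The function and the example timings are hypothetical illustrations, not DHS measurements.

```python
# Sketch of the timing model above: the extra time a traveler
# experiences is acquisition time plus comparison time, and "on the
# move" acquisition costs the traveler nothing extra. All numbers are
# hypothetical illustrations.

def added_seconds(acquire_s: float, compare_s: float,
                  on_the_move: bool) -> float:
    """Extra seconds the traveler spends on the biometric check."""
    # Acquisition while walking adds no time from the traveler's view.
    effective_acquire = 0.0 if on_the_move else acquire_s
    return effective_acquire + compare_s

# On-the-move capture with a local (e.g., passport-chip) comparison:
walkthrough = added_seconds(2.0, 0.5, on_the_move=True)
# Stop-and-pose capture with a remote database lookup:
kiosk = added_seconds(3.0, 1.5, on_the_move=False)
```

The sketch also shows where a network or system failure hurts: the comparison term is the one that balloons when a remote database is unreachable.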

Automated Passport Control kiosks located at international airports across the nation streamline the passenger’s entry into the United States. Photo Credit: James Tourtellotte. From https://www.cbp.gov/travel/us-citizens/apc