Contactless fingerprint scanning (almost) software at #connectID

Let me kick off this post by quoting from another post that I wrote:

I’ve always been of the opinion that technology is moving away from specialized hardware to COTS hardware. For example, the fingerprint processing and matching that used to require high-end UNIX computers with custom processor boards in the 1990s can now be accomplished on consumer-grade smartphones.

Further evidence of this was promoted in advance of #connectID by Integrated Biometrics.

And yes, for those following Integrated Biometrics’ naming conventions, there IS a 1970s movie called “Slap Shot,” but I don’t think it has anything to do with crime solving. Unless you count hockey “enforcers” as law enforcement. And the product apparently wasn’t named by Integrated Biometrics anyway.

But back to the product:

SlapShot supports the collection of Fingerprint and facial images suitable for use with state of the art matching algorithms. Fingerprints can now be captured by advanced software that enables the camera in your existing smart phones to generate images with a quality capable of precise identification. Facial recognition and metadata supplement the identification process for any potential suspect or person of interest.

This groundbreaking approach turns almost any smart phone into a biometric capture device, and with minimal integration, your entire force can leverage their existing smart phones to capture fingerprints for identification and verification, receiving matching results in seconds from a centralized repository.

Great, you say! But there’s one more thing. Two more things, actually:

SlapShot functions on Android devices that support Lollipop or later operating systems and relies on the device’s rear high-resolution camera. Images captured from the camera are automatically processed on the device in the background and converted into EBTS files. Once the fingerprint image is taken, the fingerprint matcher in the cloud returns results instantly.

The SlapShot SDK allows developers to capture contactless fingerprints and other biometrics within their own apps via calls to the SlapShot APIs.

Note that SlapShot is NOT intended for end users, but for developers to incorporate into existing applications. Also note that it is (currently) ONLY supported on Android, not iOS.
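
To make the described flow concrete, here is a rough Python sketch of the pipeline the announcement describes: capture an image, convert it on the device into an EBTS file, and submit it to a cloud matcher. None of the function names, data formats, or the endpoint below come from the actual SlapShot SDK (which, again, is Android-only); they are placeholders for illustration only.

```python
# Hypothetical sketch of the workflow described above: capture -> EBTS -> cloud match.
# Nothing here is the real SlapShot API; all names and the endpoint are placeholders.

import requests  # generic HTTP client used only for illustration

MATCHER_URL = "https://matcher.example.invalid/search"  # placeholder endpoint

def capture_fingerprint_image() -> bytes:
    """Stand-in for the SDK call that grabs a frame from the phone's rear camera."""
    return b"raw-image-bytes"  # placeholder data

def convert_to_ebts(image: bytes) -> bytes:
    """Stand-in for the on-device processing that packages the image as an EBTS file."""
    return b"ebts-file-bytes"  # placeholder data

def submit_for_matching(ebts_file: bytes) -> dict:
    """Send the EBTS file to the centralized repository and return the match results."""
    response = requests.post(MATCHER_URL, files={"ebts": ebts_file}, timeout=30)
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    results = submit_for_matching(convert_to_ebts(capture_fingerprint_image()))
    print(results)
```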

But this does illustrate the continuing move away from dedicated devices, including Integrated Biometrics’ own line of dedicated devices, toward multi-use devices that can also capture forensic data and either perform forensic matching themselves or receive forensic matching results.

And no, Integrated Biometrics is not cannibalizing its own market. I say this for two reasons.

  1. First, there are still going to be customers who will want dedicated devices, for a variety of reasons.
  2. Second, if Integrated Biometrics doesn’t compete in the smartphone contactless fingerprint capture market, it will lose sales to the companies that DO compete in this market.

Contactless fingerprint capture has been pursued by multiple companies for years, ever since the NIST CRADA was issued a few years ago. (Integrated Biometrics’ partner Sciometrics was one of those early CRADA participants, along with others.) Actually this effort launched before that, as there were efforts in 2004 and following years to capture a complete set of fingerprints within 15 seconds; those efforts led, among other things, to the smartphone software we are seeing today. Not only from Integrated Biometrics/Sciometrics, but also from other CRADA participants. (Don’t forget this one.)

Of the CRADA partners, MorphoTrak is now IDEMIA, Diamond Fortress is now Telos ID, Hoyos Labs is now Veridium, AOS is no longer in operation, and 3M’s biometric holdings are now part of Thales. Slide 10 from the NIST presentation posted at https://www.nist.gov/system/files/documents/2016/12/14/iai_2016-nist_contactless_fingerprints-distro-20160811.pdf

Of course these smartphone capture software packages aren’t Electronic Biometric Transmission Specification (EBTS) Appendix F certified, but that’s another story entirely.

More on the Israeli master faces study

Eric Weiss of FindBiometrics has opined on the Tel Aviv master faces study that I previously discussed.

Oops, wrong “Faces.” Oh well. By Warner Bros. Records – Billboard, page 18, 14 November 1970, Public Domain, https://commons.wikimedia.org/w/index.php?curid=27031391

While he does not explicitly talk about the myriad of facial recognition algorithms that were NOT addressed in the study, he does have some additional details about the test dataset.

The three algorithms that were tested

Here’s what FindBiometrics says about the three algorithms that were tested in the Israeli study.

The researchers described (the master faces) as master keys that could unlock the three facial recognition systems that were used to test the theory. In that regard, they challenged the Dlib, FaceNet, and SphereFace systems, and their nine master faces were able to impersonate more than 40 percent of the 5,749 people in the LFW set.

While it initially sounds impressive to say that three facial recognition algorithms were fooled by the master faces, bear in mind that there are hundreds of facial recognition algorithms tested by NIST alone, and (as I said earlier) the test has NOT been duplicated against any algorithms other than the three open source algorithms mentioned.

…let’s look at the algorithms themselves and evaluate the claim that results for the three algorithms Dlib, FaceNet, and SphereFace can naturally be extrapolated to ALL facial recognition algorithms….NIST’s subsequent study…evaluated 189 algorithms specifically for 1:1 and 1:N use cases….“Tests showed a wide range in accuracy across developers, with the most accurate algorithms producing many fewer errors.”

In short, just because the three open source algorithms were fooled by master faces doesn’t mean that commercial grade algorithms would also be fooled by master faces. Maybe they would be fooled…or maybe they wouldn’t.

What about the dataset?

The three open source algorithms were tested against the dataset from Labeled Faces in the Wild. As I noted in my prior post, the LFW people emphasize some important caveats about their dataset, including the following:

Many groups are not well represented in LFW. For example, there are very few children, no babies, very few people over the age of 80, and a relatively small proportion of women. In addition, many ethnicities have very minor representation or none at all.

In the FindBiometrics article, Weiss provides some additional detail about dataset representation.

…there is good reason to question the researchers’ conclusion. Only two of the nine master faces belong to women, and most depicted white men over the age of 60. In plain terms, that means that the master faces are not representative of the global public, and they are not nearly as effective when applied to anyone that falls outside one particular demographic.

That discrepancy can largely be attributed to the limitations of the LFW dataset. Women make up only 22 percent of the dataset, and the numbers are even lower for children, the elderly (those over the age of 80), and for many ethnic groups.

Valid points to be sure, although the definition of a “representative” dataset varies depending upon the use case. For example, a representative dataset for a law enforcement database in the city of El Paso, Texas will differ from a representative dataset for an airport database catering to Air France customers.

So what conclusion can be drawn?

Perhaps it’s just me, but scientific entities that conduct studies are always motivated by the need for additional funding. After a study is concluded, it seems that the entities always conclude that “more research is needed”…which can be self-serving, because as long as more research is needed, the scientific entities can continue to receive necessary funding. Imagine the scientific entity that would dare to say “Well, all necessary research has been conducted. We’re closing down our research center.”

But in this case, there IS a need to perform additional research, to test the master faces against different algorithms and against different datasets. Then we’ll know whether this statement from the FindBiometrics article (emphasis mine) is actually true:

Any face-based identification system would be extremely vulnerable to spoofing…

Faulty “journalism” conclusions: the Israeli “master faces” study DIDN’T test ANY commercial biometric algorithms

Modern “journalism” often consists of reprinting a press release without subjecting it to critical analysis. Sadly, I see a lot of this in publications, including both biometric and technology publications.

This post looks at the recently announced master faces study results, the datasets used (and the datasets not used), the algorithms used (and the algorithms not used), and the (faulty) conclusions that have been derived from the study.

Oh, and it also informs you of a way to make sure that you don’t make the same mistakes when talking about biometrics.

Vulnerabilities from master faces

In facial recognition, there is a concept called “master faces” (similar concepts can be found for other biometric modalities). The idea behind master faces is that such data can potentially match against MULTIPLE faces, not just one. This is similar to a master key that can unlock many doors, not just one.

This can conceivably happen because facial recognition algorithms do not match faces to faces, but match derived features from faces to derived features from faces. So if you can create the right “master” feature set, it can potentially match more than one face.
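
To illustrate the "derived features" point, here is a toy Python sketch (not the researchers' method): if templates were simply vectors compared by cosine similarity against a threshold, a single "master" template would succeed whenever it clears the threshold for many enrolled templates rather than just one. The gallery, the centroid trick, and the threshold are all invented for illustration.

```python
# Toy illustration of the "master template" idea, not the researchers' method.
# Templates are vectors, compared by cosine similarity against a threshold;
# a master template "matches" every enrollee whose score clears the threshold.
# The gallery, the centroid trick, and the threshold are all made up.

import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
enrolled = rng.normal(size=(1000, 128))                      # pretend gallery of 1,000 templates
enrolled /= np.linalg.norm(enrolled, axis=1, keepdims=True)

master = enrolled.mean(axis=0)                               # crude "master": the gallery centroid
master /= np.linalg.norm(master)

THRESHOLD = 0.05  # arbitrary; chosen only so the toy example produces hits
hits = sum(cosine_similarity(master, t) >= THRESHOLD for t in enrolled)
print(f"Master template accepted against {hits} of {len(enrolled)} enrollees")
```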

However, this is not just a concept. It’s been done, as Biometric Update informs us in an article entitled ‘Master faces’ make authentication ‘extremely vulnerable’ — researchers.

Ever thought you were being gaslighted by industry claims that facial recognition is trustworthy for authentication and identification? You have been.

The article goes on to discuss an Israeli research project that demonstrated some true “master faces” vulnerabilities. (Emphasis mine.)

One particular approach, which they write was based on Dlib, created nine master faces that unlocked 42 percent to 64 percent of a test dataset. The team also evaluated its work using the FaceNet and SphereFace, which like Dlib, are convolutional neural network-based face descriptors.

They say a single face passed for 20 percent of identities in Labeled Faces in the Wild, an open-source database developed by the University of Massachusetts. That might make many current facial recognition products and strategies obsolete.

Sounds frightening. After all, the study not only used Dlib, FaceNet, and SphereFace, but also made reference to a test set from Labeled Faces in the Wild. So it’s obvious why master faces techniques might make many current facial recognition products obsolete.

Right?

Let’s look at the datasets

It’s always more impressive to cite an authority, and citations of the University of Massachusetts’ Labeled Faces in the Wild (LFW) are no exception. After all, this dataset has been used for some time to evaluate facial recognition algorithms.

But what does Labeled Faces in the Wild say about…itself? (I know this is a long excerpt, but it’s important.)

DISCLAIMER:

Labeled Faces in the Wild is a public benchmark for face verification, also known as pair matching. No matter what the performance of an algorithm on LFW, it should not be used to conclude that an algorithm is suitable for any commercial purpose. There are many reasons for this. Here is a non-exhaustive list:

Face verification and other forms of face recognition are very different problems. For example, it is very difficult to extrapolate from performance on verification to performance on 1:N recognition.

Many groups are not well represented in LFW. For example, there are very few children, no babies, very few people over the age of 80, and a relatively small proportion of women. In addition, many ethnicities have very minor representation or none at all.

While theoretically LFW could be used to assess performance for certain subgroups, the database was not designed to have enough data for strong statistical conclusions about subgroups. Simply put, LFW is not large enough to provide evidence that a particular piece of software has been thoroughly tested.

Additional conditions, such as poor lighting, extreme pose, strong occlusions, low resolution, and other important factors do not constitute a major part of LFW. These are important areas of evaluation, especially for algorithms designed to recognize images “in the wild”.

For all of these reasons, we would like to emphasize that LFW was published to help the research community make advances in face verification, not to provide a thorough vetting of commercial algorithms before deployment.

While there are many resources available for assessing face recognition algorithms, such as the Face Recognition Vendor Tests run by the USA National Institute of Standards and Technology (NIST), the understanding of how to best test face recognition algorithms for commercial use is a rapidly evolving area. Some of us are actively involved in developing these new standards, and will continue to make them publicly available when they are ready.

So there are a lot of disclaimers in that text.

  • LFW is a 1:1 test, not a 1:N test. Therefore, while it can test how one face compares to another face, it cannot test how one face compares to a database of faces. The usual law enforcement use case is to compare a single face (for example, one captured from a video camera) against an entire database of known criminals. That’s a computationally different exercise from the act of comparing a crime scene face against a single criminal face, then comparing it against a second criminal face, and so forth. (See the sketch after this list.)
  • The people in the LFW database are not necessarily representative of the world population, the population of the United States, the population of Massachusetts, or any population at all. So you can’t conclude that a master face that matches against a bunch of LFW faces would match against a bunch of faces from your locality.
  • Captured faces exhibit a variety of quality levels. A face image captured by a camera three feet from you at eye level in good lighting will differ from a face image captured by an overhead camera in poor lighting. LFW doesn’t have a lot of these latter images.
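
Here is a minimal sketch of that 1:1 versus 1:N distinction, assuming templates are simply vectors compared with a similarity score. The names and numbers are illustrative only; real systems use far more sophisticated representations and search.

```python
# Toy contrast between 1:1 verification (one comparison, one decision) and
# 1:N identification (search an entire gallery, return ranked candidates).
# Illustrative only; not how any production matcher actually works.

import numpy as np

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(probe: np.ndarray, claimed_template: np.ndarray, threshold: float = 0.6) -> bool:
    """1:1 -- is the probe the single person it claims to be?"""
    return similarity(probe, claimed_template) >= threshold

def identify(probe: np.ndarray, gallery: dict, top_k: int = 5) -> list:
    """1:N -- who in the whole gallery most resembles the probe?"""
    scores = [(name, similarity(probe, tmpl)) for name, tmpl in gallery.items()]
    return sorted(scores, key=lambda s: s[1], reverse=True)[:top_k]

rng = np.random.default_rng(1)
gallery = {f"subject_{i}": rng.normal(size=128) for i in range(10_000)}
probe = gallery["subject_42"] + rng.normal(scale=0.1, size=128)   # noisy re-capture

print(verify(probe, gallery["subject_42"]))   # one comparison
print(identify(probe, gallery))               # 10,000 comparisons, ranked candidate list
```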

I should mention one more thing about LFW. The researchers allow testers to access the database itself, essentially making LFW an “open book test.” And as any student knows, if a test is open book, it’s much easier to get an A on the test.

By MCPearson – Own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=25969927

Now let’s take a look at another test that was mentioned by the LFW folks themselves: namely, NIST’s Face Recognition Vendor Test.

This is actually a series of tests that has evolved over the years; NIST is now conducting ongoing tests for both 1:1 and 1:N (unlike LFW, which only conducts 1:1 testing). This is important because most of the large-scale facial recognition commercial applications that we think about are 1:N applications (see my example above, in which a facial image captured at a crime scene is compared against an entire database of criminals).

In addition, NIST uses multiple data sets that cover a number of use cases, including mugshots, visa photos, and faces “in the wild” (i.e. not under ideal conditions).

It’s also important to note that NIST’s tests are also intended to benefit research, and do not necessarily indicate that a particular algorithm that performs well for NIST will perform well in a commercial implementation. (If the algorithm is even available in a commercial implementation: some of the algorithms submitted to NIST are research algorithms only that never made it to a production system.) For the difference between testing an algorithm in a NIST test and testing an algorithm in a production system, please see Mike French’s LinkedIn article on the topic. (I’ve cited this article before.)

With those caveats, I will note that NIST’s FRVT tests are NOT open book tests. Vendors and other entities give their algorithms to NIST, NIST tests them, and then NIST tells YOU what the results were.

So perhaps it’s more robust than LFW, but it’s still a research project.

Let’s look at the algorithms

Now that we’ve looked at two test datasets, let’s look at the algorithms themselves and evaluate the claim that results for the three algorithms Dlib, FaceNet, and SphereFace can naturally be extrapolated to ALL facial recognition algorithms.

This isn’t the first time that we’ve seen such an attempt at extrapolation. After all, the MIT Media Lab’s Gender Shades study (which evaluated neither 1:1 nor 1:N use cases, but algorithmic attempts to identify gender and race) itself only used three algorithms. Yet the popular media conclusion from this study was that ALL facial recognition algorithms are racist.

Compare this with NIST’s subsequent study, which evaluated 189 algorithms specifically for 1:1 and 1:N use cases. While NIST did find some race/sex differences in algorithms, these were not universal: “Tests showed a wide range in accuracy across developers, with the most accurate algorithms producing many fewer errors.”

In other words, just because an earlier test of three algorithms demonstrated issues in determining race or gender, that doesn’t mean that the current crop of hundreds of algorithms will necessarily demonstrate issues in identifying individuals.

So let’s circle back to the master faces study. How do the results of this study affect “current facial recognition products”?

The answer is “We don’t know.”

Has the master faces experiment been duplicated against the leading commercial algorithms tested by Labeled Faces in the Wild? Apparently not.

Has the master faces experiment been duplicated against the leading commercial algorithms tested by NIST? Well, let’s look at the various ways you can define the “leading” commercial algorithms.

For example, here’s the view of the test set that IDEMIA would want you to see: the 1:N test sorted by the “Visa Border” column (results as of August 6, 2021):

And here’s the view of the test set that Paravision would want you to see: the 1:1 test sorted by the “Mugshot” column (results as of August 6, 2021):

From https://pages.nist.gov/frvt/html/frvt11.html as of August 6, 2021.

Now you can play with the sort order in many different ways, but the question remains: have the Israeli researchers, or anyone else, performed a “master faces” test (preferably a 1:N test) on the IDEMIA, Paravision, SenseTime, NtechLab, AnyVision, or ANY other commercial algorithm?

Maybe a future study WILL conclude that even the leading commercial algorithms are vulnerable to master face attacks. However, until such studies are actually performed, we CANNOT conclude that commercial facial recognition algorithms are vulnerable to master face attacks.

So naturally journalists approach the results critically…not

But I’m sure that people are going to make those conclusions anyway.

From https://xkcd.com/386/. Attribution-NonCommercial 2.5 Generic (CC BY-NC 2.5).

Does anyone even UNDERSTAND these studies? (Or do they choose NOT to understand them?)

How can you avoid the same mistakes when communicating about biometrics?

As you can see, people often write about biometric topics without understanding them fully.

Even biometric companies sometimes have difficulty communicating about biometric topics in a way that laypeople can understand. (Perhaps that’s the reason why people misconstrue these studies and conclude that “all facial recognition is racist” and “any facial recognition system can be spoofed by a master face.”)

Are you about to publish something about biometrics that requires a sanity check? (Hopefully not literally, but you know what I mean.)

Well, why not turn to a biometric content marketing expert?

Bredemarket offers over 25 years of experience in biometrics that can be applied to your marketing and writing projects.

If you don’t have a content marketing project now, you can still subscribe to my Bredemarket Identity Firm Services LinkedIn page or my Bredemarket Identity Firm Services Facebook group to keep up with news about biometrics (or about other authentication factors; biometrics isn’t the only one). Or scroll down to the bottom of this blog post and subscribe to my Bredemarket blog.

If my content creation process can benefit your biometric (or other technology) marketing and writing projects, contact me.

Build your own automated fingerprint identification system…for FREE!

At Bredemarket, I work with a number of companies that provide biometric systems. And I’ve seen a lot of other systems over the years, including fingerprint, face, DNA, and other systems.

The components of a biometric system

While biometric systems may seem complex, the concept is simple. Years ago, I knew a guy who asserted that a biometric system only needs to contain two elements:

  • An algorithm that takes a biometric sample, such as a fingerprint image, and converts it into a biometric template.
  • An algorithm that can take these biometric templates and match them against each other.

If you have these two algorithms, my friend stated, you have everything you need for a biometric system.

Well, maybe not everything.

Today, I can think of a few other things that might be essential, or at least highly recommended. Here they are:

  • An algorithm that can measure the quality of a biometric sample. In some cases, the quality of the sample may be important in determining how reliable matching results may be.
  • For fingerprints, an algorithm that can classify the prints. Forensic examiners routinely classify prints as arches, whorls, loops, or variants of these three, and classifications can sometimes be helpful in the matching process.
  • For some biometric samples, utilities to manage the compression and decompression of the biometric images. Such images can be huge, and if they can be compressed by a reliable compression methodology, then processing and transmission speeds can be improved.
  • A utility to manage the way in which the biometric data is accessed. To ensure that biometric systems can talk to each other, there are a number of related interchange standards that govern how the biometric information can be read, written, edited, and manipulated.
  • For fingerprints, a utility to segment the fingerprints, in cases where multiple fingerprints can be found in the same image.

So based upon the two lists above, there are seven different algorithms/utilities that could be combined to form an automated fingerprint identification system, and I could probably come up with an eighth one if I really felt like it.
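
For what it’s worth, here is a bare-bones Python sketch of my friend’s two-element view: one function that turns a sample into a template, and one that scores two templates against each other, with the other five utilities indicated as optional hooks. It is purely illustrative and not modeled on any vendor’s design.

```python
# Bare-bones sketch of the "two algorithms are enough" view of a biometric system.
# Purely illustrative; not modeled on any particular product.

from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Template:
    minutiae: List[Tuple[int, int, float]]   # e.g. (x, y, angle) points from the print

def extract_template(image: bytes) -> Template:
    """Algorithm 1: convert a fingerprint image into a template."""
    # A real extractor would locate ridge endings and bifurcations here.
    return Template(minutiae=[])

def match_templates(a: Template, b: Template) -> float:
    """Algorithm 2: return a similarity score between two templates."""
    # A real matcher would align and compare minutiae constellations.
    common = set(a.minutiae) & set(b.minutiae)
    return len(common) / max(len(a.minutiae), len(b.minutiae), 1)

# The other five utilities could slot in as optional hooks, for example:
# assess_quality(image), classify_pattern(image), compress(image) / decompress(data),
# read_record(path) / write_record(path), and segment_slap(image).
```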

My friend knew about this stuff, because he had worked for several different firms that produced fingerprint identification systems. These firms spent a lot of money hiring many engineers and researchers to create all of these algorithms/utilities and sell them to customers.

How to get these biometric system components for free

But what if I told you that all of these firms were wasting their time?

And if I told you that since 2007, you could get source code for ALL of these algorithms and utilities for FREE?

Well, it’s true.

To further its testing work, the National Institute of Standards and Technology (NIST) created the NIST Biometric Image Software (NBIS), which currently has eight algorithms/utilities. (The eighth one, not mentioned above, is a spectral validation/verification metric for fingerprint images.) Some of these algorithms and utilities are available separately or in other utilities: anyone can (and is encouraged to) use the quality algorithm, called NFIQ, and the minutiae detector MINDTCT is used within the FBI’s Universal Latent Workstation (ULW).

If the FBI had just waited until 2007, it could have obtained the IAFIS software for free. FBI image taken from Chapter 6 of the Fingerprint Sourcebook, https://www.ojp.gov/pdffiles1/nij/225326.pdf.

As I write this, NBIS has not been updated since Release 5.0.0 came out six years ago.

Is anyone using this in a production system?

And no, I am unaware of any law enforcement agency or any other entity that has actually USED NBIS in a production system, outside of the testing realm, with the exception of limited use of selected utilities as noted above. Dev Technology Group has, however, compiled NBIS on the Android platform as an exercise. (Would you like an AFIS on your Samsung phone?)

But it’s interesting to note that the capability is there, so the next time someone says, “Hey, let’s build our own AFIS!” you can direct them to https://www.nist.gov/itl/iad/image-group/products-and-services/image-group-open-source-server-nigos#Releases and let the person download the source code and build it.
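
If someone does download and build NBIS, driving a couple of its utilities from a script looks roughly like the sketch below. The tool names (nfiq, mindtct, and the BOZORTH3 matcher) are real NBIS components, but treat the exact arguments and output handling shown here as assumptions to verify against the documentation for the release you download.

```python
# Rough sketch of calling a few NBIS command-line utilities from Python.
# nfiq, mindtct, and bozorth3 are real NBIS tools, but the exact arguments
# and output formats below are assumptions; check them against the NBIS docs.

import subprocess

def nfiq_quality(image_path: str) -> str:
    """NFIQ prints a quality score (1 = best, 5 = worst) for a fingerprint image."""
    result = subprocess.run(["nfiq", image_path], capture_output=True, text=True, check=True)
    return result.stdout.strip()

def extract_minutiae(image_path: str, output_root: str) -> str:
    """MINDTCT detects minutiae; among its outputs is an .xyt file usable for matching."""
    subprocess.run(["mindtct", image_path, output_root], check=True)
    return output_root + ".xyt"

def match_score(probe_xyt: str, gallery_xyt: str) -> str:
    """BOZORTH3 compares two minutiae files and prints a similarity score."""
    result = subprocess.run(["bozorth3", probe_xyt, gallery_xyt],
                            capture_output=True, text=True, check=True)
    return result.stdout.strip()
```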

Three recent #DNA stories

By Zephyris – Own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=15027555

Over the last few days, I’ve run across three stories that deal with two aspects of DNA collection: familial DNA, and DNA mixtures.

Familial DNA

(This case was mentioned on Forensics and Law in Focus, a recommended read for all sorts of forensic techniques.)

Of all of the biometrics, DNA has a property that the others don’t: the similarity of DNA between family members. Someone finding my child’s fingerprints won’t necessarily be able to find me, and even someone who finds my child’s face won’t necessarily be able to find me.

But 84-year-old Raymand Vannieuwenhoven is on trial for a 1976 murder because of DNA similarities in families.

Vannieuwenhoven is accused in the July 9, 1976, murders of a Green Bay couple who was camping at McClintock Park in the Town of Silver Cliff. David Schuldes, 25, and Ellen Matheys, 24, were shot and killed at the campground….

A DNA profile obtained through evidence was already on file with the State Crime Lab, according to previous testimony….

Baldwin explained how a breakthrough came in 2018 when Parabon Nanolabs of Virginia developed new technology to examine DNA evidence, which could provide certain genetic characteristics of possible suspects through DNA….

On Dec. 21, 2018, Parabon contacted Baldwin and informed him that a possible suspect was found through the DNA testing. He said they gave him a Green Bay-area family—the Vannieuwenhovens—that had four sons and four grandsons who possibly could be a match.

The detectives then had to test the relatives and compare their DNA to the crime scene DNA. But not ALL of the relatives: this was solely used as an investigative lead, and there was no point in testing the grandsons for a 1976 murder. Raymand was one of those whose DNA was collected (by having him lick an envelope to seal it), and the probabilities indicated a match.

Obviously this technique has controversy in some quarters, since the family members who originally provided the DNA had no idea that it would be used to arrest (or, in some cases, exonerate) another family member in this way. But the technique is being used.

By the way, Vannieuwenhoven was found guilty, and the 84-year-old may be sentenced to life in prison.

DNA mixtures

The other story concerns what can be found when a DNA sample is collected. The DNA sample may contain a lot of things, from a lot of people.

With improvements in DNA testing methods, we don’t need much DNA to make a profile and see perhaps if I am a likely contributor to that sample or if you have contributed — even if you never touched the table directly. That level of DNA profiling is useful for many different types of crimes, but also brings up the issue of relevance. We aren’t explaining how DNA got to a location. 

As an example, a single item at a crime scene may include the DNA of the person who committed the crime, the crime victim, an innocent bystander who touched the area in question before the crime was committed, and (if the police officer was careless) the police officer investigating the crime.

Now you have to look at the DNA sample that was collected. With DNA mixtures, this gets tough.

If single-source DNA is like basic arithmetic and a two-person mixture is like algebra, then a complicated mixture is like calculus!

The quotes above are from John Butler of the National Institute of Standards and Technology, who has a concern about how all of the different laboratories interpret DNA mixtures. Ideally, all labs should work together to have a consistent, verifiable way to interpret these mixtures.

We wanted to see if there were established methodologies that worked better than others when tested, and where those limits were being drawn. What we found is that there is not enough publicly available data to enable an external and independent assessment of the degree of reliability of DNA mixture interpretation practices.

NIST, as it does in other areas, seeks to advance the science, and is urging stakeholders to work together to do so.

But wait; there’s more on DNA mixtures!

While NIST has been conducting the work above, the National Institute of Justice has been funding other work.

Michael Marciano, research assistant professor and director for research in the Forensic and National Security Sciences Institute (FNSSI) within the College of Arts and Sciences, and Jonathan Adelman, research assistant professor in FNSSI, have invented a novel hybrid machine learning approach (MLA) to mixture analysis (U.S. patent number 10,957,421). Their method combines the strengths of current computational and expert analysis approaches with those in data mining and artificial intelligence.

Marciano and Adelman received funding from the National Institute of Justice to further develop their idea in 2014. Although this intellectual property has not been fully developed for commercial use, they are pursuing funding to transition the technology. Once this is done, they are hopeful that the new method will be used throughout the law enforcement and criminal justice communities, specifically by forensic DNA scientists and the legal community.

Actually, once the intellectual property has been developed for commercial use, it will NOT be used THROUGHOUT the law enforcement and criminal justice communities. It will be used by PORTIONS of the law enforcement and criminal justice communities, while OTHERS within the community will use commercial products from competitors.

Commercialization of a product actually works AGAINST universal acceptance, except in very limited cases. Take commercialization of fingerprinting products. As Chapter 6 of The Fingerprint Sourcebook details, independent research was performed in four separate countries (France, Japan, the UK, and the US) which, after commercialization, led to three (now two) separate fingerprinting products: NEC’s product from Japanese research, and IDEMIA’s product from separate French (Morpho) and United States (Printrak) research. This initial research, combined with subsequent research that led to additional products, led to an interoperability issue, despite efforts from NIST to advance greater interoperability.

Will NIST have to do the same thing to reconcile competing DNA mixture analysis methods?

The ITIF, digital identity, and federalism

I just read an editorial by Daniel Castro, the vice president of the Information Technology and Innovation Foundation (ITIF) and director of the Center for Data Innovation. The opinion piece, published in Government Technology, is entitled “Absent Federal IDs, Digital Driver’s Licenses a Good Start.”

You knew I was going to comment on this one.

Why Daniel Castro supports a national digital ID

Let me allow Castro to state his case.

After Castro identifies the various ways in which people prove identity online, and the drawbacks of these methods, here’s what Castro says about the problem that needs to be addressed:

…poor identity verification is one of the reasons that identity theft is such a growing problem as more services move online. The Federal Trade Commission received 1.4 million reports of identity theft last year, double the number in 2019, with one security research firm estimating $56 billion in losses.

Castro then goes on to state his ideal solution:

The best solution to this problem would be for the federal government to develop an interoperable framework for securely issuing and validating electronic IDs and then direct a federal agency to start issuing these electronic IDs upon request. 

Castro then notes that the federal government has NOT done this:

But in the absence of federal action, a number of states have already begun this work on their own by creating digital driver’s licenses that provide a secure digital alternative to a physical identity document.

Feel free to read the rest of the story.

“Page two.” By Shealah Craighead – The original was formerly from here and is now archived at georgewbush-whitehouse.archives.gov., Public Domain, https://commons.wikimedia.org/w/index.php?curid=943922

But as for me, I’m going to stop right there.

Why Americans oppose mandatory national physical and digital IDs

Castro’s proposal, while ideal from a technological standpoint, doesn’t fully account for the realities of American politics.

Many Americans (regardless of political leanings) are strongly opposed to ANY mandatory national ID system. For example, many Americans don’t want our Social Security Numbers to become mandatory national IDs (even though they are de facto national IDs today). And while the federal government does issue passports, it isn’t mandatory that people GET them.

And many Americans don’t want state driver’s licenses to become mandatory national IDs. I went into this whole issue in great detail in my prior post “How 6 CFR 37 (REAL IDs) exhibits…federalism,” which made the following points:

  1. States are NOT mandated to issue REAL IDs. (And, no citizen is mandated to GET a REAL ID.)
  2. The federal government CAN mandate which IDs are accepted for federal purposes.
  3. Because the federal government can mandate the IDs to use when entering a federal facility or flying at a commercial airport, ALL of the states were eventually “persuaded” to issue REAL IDs. (Of course, it has taken nearly two decades, so far, for that persuasion to work, and it won’t work until 2023, or later.)

So, considering all of the background regarding the difficulties in mandating a national PHYSICAL ID, imagine how things would erupt if the federal government mandated a national DIGITAL ID.

It wouldn’t…um…fly.

Transportation Security Administration Checkpoint at John Glenn Columbus International Airport. By Michael Ball – Own work, CC0, https://commons.wikimedia.org/w/index.php?curid=77279000

And this is why some states are moving ahead on their own with mobile driver’s licenses.

LA Wallet Louisiana Digital Driver’s License. lawallet.com.

However, there’s a teeny tiny catch: while the states can choose to mandate that their mDLs be accepted at the STATE level, states cannot mandate that their digital identities be used for FEDERAL purposes.

Here we go again.

Of course, federal government agencies are starting to look at the issues with a mobile version of a “REAL ID,” including the standard(s) to which any mobile ID used for federal purposes must adhere.

Improving Digital Identity Act of 2020, or 2021, or 2025…

While the government agencies are doing this work, another government agency (the U.S. Congress) is also working on this. Castro mentions Rep. Bill Foster’s H.R. 8215, introduced in the last Congress. I’m not sure why he bothered to introduce it in September 2020, when Congress wasn’t going to do anything with it. As you may have heard, we had an election at that time.

Of course, he just reintroduced it last month, so now there’s more of a chance that it will be considered. Or maybe not.

Regardless, the “Improving Digital Identity Act” proposes the creation of a task force at the federal level with federal, state, and local participants. It also mandates that NIST create a digital identity “framework,” with an interim version available 240 days after the Act is passed. Among other things, the Act also mandates that NIST Special Publication 800-63 become “binding operational directives” for federal agencies.

(Does that mean that it will be illegal to mandate password changes every 90 days? Woo hoo!)

Should this Act actually pass at some point, its directives will need to be harmonized with what the Department of Homeland Security is already doing, and of course with what the states are already doing.

Oh, and remember my reference to the DHS’ work in this area? Among those who have submitted verbal and/or written comments, several (primarily from privacy organizations) have stated that the government should NOT be promoting ANY digital ID at all. The sentiments in this written comment, submitted anonymously, are all too common.

There are a lot of security and privacy concerns with accepting digital ID’s. First and foremost, drivers licenses contain a lot of sensitive information. If digital ID’s are accepted, then it could potentially leak that info to hackers if it is not secured properly. Plus, there is the added concern that using digital ID’s will lead to extra surveillance where unnecesary. Finally, digital ID will not allow individuals who are poorer to be abele to submit an ID because they might not have access to the same facilities. I am strongly against this rule and I do NOT think that digital ID should be an option.

I expect other privacy organizations to submit comments that may be better-written, but they echo the same sentiment.

Two articles on facial recognition

Within the last hour I’ve run across two articles that discuss various aspects of facial recognition, dispelling popular society notions about the science in the process.

Ban facial recognition? Ain’t gonna happen

The first article was originally shared by my former IDEMIA colleague Peter Kirkwood, who certainly understood the significance of it from his many years in the identity industry.

The article, published by the Security Industry Association (SIA), is entitled “Most State Legislatures Have Rejected Bans and Severe Restrictions on Facial Recognition.”

Admittedly the SIA is by explicit definition an industry association, but in this case it is simply noting a fact.

With most 2021 legislative sessions concluded or winding down for the year, proposals to ban or heavily restrict the technology have had very limited overall success despite recent headlines. It turns out that such bills failed to advance or were rejected by legislatures in no fewer than 17 states during the 2020 and 2021 sessions: California, Colorado, Hawaii, Kentucky, Maine, Maryland, Massachusetts, Michigan, Minnesota, Montana, Nebraska, New Hampshire, New Jersey, New York, Oregon, South Carolina and Washington.

And the article even cited one instance in which public safety and civil libertarians worked together, proving such cooperation is actually possible.

In March, Utah enacted the nation’s most comprehensive and precise policy safeguards for government applications. The measure, supported both by the Utah Department of Public Safety as well as the American Civil Liberties Union, establishes requirements for public-sector and law enforcement use, including conditions for access to identity records held by the state, and transparency requirements for new public sector applications of facial recognition technology.

This reminds me of Kirkwood’s statement when he originally shared the article on LinkedIn: “Targeted use with appropriate governance and transparency is an incredibly powerful and beneficial tool.”

NIST’s biometric exit tests reveal an inconvenient truth

Meanwhile, the National Institute of Standards and Technology, which is clearly NOT an industry association, continues to enhance its ongoing Face Recognition Vendor Test (FRVT). As I noted myself on Facebook and LinkedIn:

With its latest rounds of biometric testing over the last few years, the National Institute of Standards and Technology has shown its ability to adapt its testing to meet current situations.

In this case, NIST announced that it has applied its testing to the not-so-new use case of using facial recognition as a “biometric exit” tool, or as a way to verify that someone who was supposed to leave the country has actually left the country. The biometric exit use case emerged after 9/11 in response to visa overstays, and while the vast, vast majority of people who overstay visas do not fly planes into buildings and kill thousands of people, visa overstays are clearly a concern and thus merit NIST testing.

Transportation Security Administration Checkpoint at John Glenn Columbus International Airport. By Michael Ball – Own work, CC0, https://commons.wikimedia.org/w/index.php?curid=77279000

But buried at the end of the NIST report (accessible from the link in NIST’s news release) was a little quote that should cause discomfort to all of those who reflexively believe that all biometrics is racist, and thus needs to be banned entirely (see SIA story above). Here’s what NIST said after having looked at the data from the latest test:

“The team explored differences in performance on male versus female subjects and also across national origin, which were the two identifiers the photos included. National origin can, but does not always, reflect racial background. Algorithms performed with high accuracy across all these variations. False negatives, though slightly more common for women, were rare in all cases.”

And as Peter Kirkwood and many other industry professionals would say, you need to use the technology responsibly. This includes things such as:

  • In criminal cases, having all computerized biometric search results reviewed by a trained forensic face examiner.
  • ONLY using facial recognition results as an investigative lead, and not relying on facial recognition alone to issue an arrest warrant.

So facial recognition providers and users had a good day. How was yours?

Requests for Comments (RFCs), formal and casual

I don’t know how it happened, but people in the proposals world have to use a lot of acronyms that begin with the letters “RF.” But one “RF” acronym isn’t strictly a proposal acronym, and that’s the acronym “RFC,” or “Request for Comments.”

In one sense, RFC has a very limited meaning. It is often used specifically to refer to documents provided by the Internet Engineering Task Force.

A Request for Comments (RFC) is a numbered document, which includes appraisals, descriptions and definitions of online protocols, concepts, methods and programmes. RFCs are administered by the IETF (Internet Engineering Task Force). A large part of the standards used online are published in RFCs. 

But the IETF doesn’t hold an exclusive trademark on the RFC acronym. As I noted in a post on my personal blog, the National Institute of Standards and Technology recently requested comments on a draft document, NISTIR 8334 (Draft), Mobile Device Biometrics for Authenticating First Responders | CSRC.

While a Request for Comments differs in some respects from a Request for Proposal or a Request for Information, all of the “RFs” require the respondents to follow some set of rules. Comments, proposals, and information need to be provided in the format specified by the appropriate “RF” document. In the case of NIST’s RFC, all comments needed to include some specific information:

  • The commenter’s name.
  • The commenter’s email address.
  • The line number(s) to which the comment applied.
  • The page number(s) to which the comment applied.
  • The comment.

Comments could be supplied in one of two ways (via email and via web form submission). I chose the former.
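
As a trivial illustration of how little structure an RFC response needs compared to an RFP response, here is one way to organize comments so that each row carries the fields NIST asked for. The column names and sample row are mine, not NIST’s template.

```python
# Tiny illustration of structuring RFC comments with the fields NIST asked for:
# name, email, line number(s), page number(s), and the comment itself.
# The column names and the sample row are mine, not NIST's template.

import csv

comments = [
    {"name": "Example Commenter", "email": "commenter@example.com",
     "lines": "120-124", "pages": "7",
     "comment": "Example comment text goes here."},
]

with open("nistir_8334_comments.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["name", "email", "lines", "pages", "comment"])
    writer.writeheader()
    writer.writerows(comments)
```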

Cover letter of the PDF that I submitted to NIST via email.

On the other hand, NIST’s RFC didn’t impose some of the requirements found in other “RF” documents.

  • Unlike a recent RFI to which I responded, I could submit as many pages as I liked, and use any font size that I wished. (Both are important for those respondents who choose to meet a 20-page limit by submitting 8-point text.)
  • Unlike a recent RFP to which I responded, I was not required to state all prices in US dollars, exclusive of taxes. (In fact, I didn’t state any prices at all.)
  • I did not have to provide any hard copies of my response. (Believe it or not, some government agencies STILL require printed responses to RFPs. Thankfully, they’re not requiring 12 copies of said responses these days like they used to.)
  • I did not have to state whether or not I was a small business, provide three years of audited financials, or state whether any of the principal officers of my company had been convicted of financial crimes. (I am a small business; my company doesn’t have three years of financials, audited or not; and I am not a crook.)

So RFC responses aren’t quite as involved as RFP/RFI responses.

But they do have a due date and time.

By Arista Records – 45cat.com, Fair use, https://en.wikipedia.org/w/index.php?curid=44395072

Pangiam acquires something else (in this case TrueFace)

People have been coming here to find this news (thanks Google Search Console) so I figured I’d better share it here.

Remember Pangiam, the company that I talked about back in March when it acquired the veriScan product from the Metropolitan Washington Airports Authority? Well, last week Pangiam acquired another company.

TYSONS CORNER, Va., June 2, 2021 /PRNewswire/ — Pangiam, a technology-based security and travel services provider, announced today that it has acquired Trueface, a U.S.-based leader in computer vision focused on facial recognition, weapon detection and age verification technologies. Terms of the transaction were not disclosed….

Trueface, founded in 2013 by Shaun Moore and Nezare Chafni, provides industry leading computer vision solutions to customers in a wide range of industries. The company’s facial recognition technology recently achieved a top three ranking among western vendors in the National Institute of Standards and Technology (NIST) 1:N Face Recognition Vendor Test. 

(Just an aside here: companies can use NIST tests to extract all sorts of superlatives that can be applied to their products, once a bunch of qualifications are applied. Pay attention to the use of the phrase “among western vendors.” While there may be legitimate reasons to exclude non-western vendors from comparisons, make a mental note when such an exclusion is made.)

But what does this mean in terms of Pangiam’s existing product? The press release covers this also.

Trueface will add an additional capability to Pangiam’s existing technologies, creating a comprehensive and seamless solution to satisfy the needs of both federal and commercial enterprises.

And because Pangiam is not a publicly-traded company, it is not obliged to add a disclaimer to investors saying this integration might not happen bla bla bla. Publicly traded companies are obligated to do this so that investors are aware of the risks when a company speculates about its future plans. Pangiam is not publicly traded, and the owners are (presumably) well aware of the risks.

For example, a US government agency may prefer to do business with an eastern vendor. In fact, the US government does a lot of business with one eastern vendor (not Chinese or Russian).

But we’ll see what happens with any future veriTruefaceScan product.

The tone of voice to use when talking about forensic mistakes

Remember my post that discussed the tone of voice that a company chooses to use when talking about the benefits of the company and its offerings?

Or perhaps you saw the repurposed version of the post, a page section entitled “Don’t use that tone of voice with me!”

The tone of voice that a firm uses does not only extend to benefit statements, but to all communications from a company. Sometimes the tone of voice attracts potential clients. Sometimes it repels them.

For example, a book was published a couple of months ago. Check the tone of voice in these excerpts from the book advertisement.

“That’s not my fingerprint, your honor,” said the defendant, after FBI experts reported a “100-percent identification.” They were wrong. It is shocking how often they are. Autopsy of a Crime Lab is the first book to catalog the sources of error and the faulty science behind a range of well-known forensic evidence, from fingerprints and firearms to forensic algorithms. In this devastating forensic takedown, noted legal expert Brandon L. Garrett poses the questions that should be asked in courtrooms every day: Where are the studies that validate the basic premises of widely accepted techniques such as fingerprinting? How can experts testify with 100 percent certainty about a fingerprint, when there is no such thing as a 100 percent match? Where is the quality control in the laboratories and at the crime scenes? Should we so readily adopt powerful new technologies like facial recognition software and rapid DNA machines? And why have judges been so reluctant to consider the weaknesses of so many long-accepted methods?

Note that author Brandon Garrett is NOT making this stuff up. People in the identity industry are well aware of the Brandon Mayfield case and others that started a series of reforms beginning in 2009, including changes in courtroom testimony and increased testing of forensic techniques by the National Institute of Standards and Technology and others.

It’s obvious that I, with my biases resulting from over 25 years in the identity industry, am not going to enjoy phrases such as “devastating forensic takedown,” especially when I know that some sectors of the forensics profession have been working on correcting these mistakes for 12 years now, and have cooperated with the Innocence Project to rectify some of these mistakes.

So from my perspective, here are my two concerns about language that could be considered inflammatory:

  • Inflammatory language focusing on anecdotal incidents leads to improper conclusions. Yes, there are anecdotal instances in which fingerprint examiners made incorrect decisions. Yes, there are anecdotal instances in which police agencies did not use facial recognition computer results solely as investigative leads, resulting in false arrests. But anecdotal incidents are not in my view substantive enough to ban fingerprint recognition or facial recognition entirely, as some (not all) who read Garrett’s book are going to want to do (and have done, in certain jurisdictions).
  • Inflammatory language prompts inflammatory language from “the other side.” Some forensic practitioners and criminal justice stakeholders may not be pleased to learn that they’ve been targeted by a “devastating forensic takedown.” And sometimes the responses can get nasty: “enemies” of forensic techniques “love criminals.”

Of course, it may be nearly impossible to have a reasoned discussion of forensic and police techniques these days. And I’ll confess that it’s hard to sell books by taking a nuanced tone in the book blurb. But it would be nice if we could all just get along.

P.S. Garrett was interviewed on TV in connection with the Derek Chauvin trial, and did not (IMHO) come off as a wild-eyed “defund the police” hack. His major point was that Chauvin’s actions were not taken in a split second, but over the course of several minutes.