The ITIF, digital identity, and federalism

I just read an editorial by Daniel Castro, the vice president of the Information Technology and Innovation Foundation (ITIF) and director of the Center for Data Innovation. The opinion piece, published in Government Technology, is entitled “Absent Federal IDs, Digital Driver’s Licenses a Good Start.”

You knew I was going to comment on this one.

Why Daniel Castro supports a national digital ID

Let me allow Castro to state his case.

After Castro identifies the various ways in which people prove identity online, and the drawbacks of these methods, here’s what Castro says about the problem that needs to be addressed:

…poor identity verification is one of the reasons that identity theft is such a growing problem as more services move online. The Federal Trade Commission received 1.4 million reports of identity theft last year, double the number in 2019, with one security research firm estimating $56 billion in losses.

Castro then goes on to state his ideal solution:

The best solution to this problem would be for the federal government to develop an interoperable framework for securely issuing and validating electronic IDs and then direct a federal agency to start issuing these electronic IDs upon request. 

Castro then notes that the federal government has NOT done this:

But in the absence of federal action, a number of states have already begun this work on their own by creating digital driver’s licenses that provide a secure digital alternative to a physical identity document.

Feel free to read the rest of the story.

“Page two.” By Shealah Craighead – The original was formerly from here and is now archived at georgewbush-whitehouse.archives.gov., Public Domain, https://commons.wikimedia.org/w/index.php?curid=943922

But as for me, I’m going to stop right there.

Why Americans oppose mandatory national physical and digital IDs

Castro’s proposal, while ideal from a technological standpoint, doesn’t fully account for the realities of American politics.

Many Americans (regardless of political leanings) are strongly opposed to ANY mandatory national ID system. For example, many Americans don’t want their Social Security Numbers to become mandatory national IDs (even though they are de facto national IDs today). And while the federal government does issue passports, it isn’t mandatory that people GET them.

And many Americans don’t want state driver’s licenses to become mandatory national IDs. I went into this whole issue in great detail in my prior post “How 6 CFR 37 (REAL IDs) exhibits…federalism,” which made the following points:

  1. States are NOT mandated to issue REAL IDs. (And, no citizen is mandated to GET a REAL ID.)
  2. The federal government CAN mandate which IDs are accepted for federal purposes.
  3. Because the federal government can mandate the IDs to use when entering a federal facility or flying at a commercial airport, ALL of the states were eventually “persuaded” to issue REAL IDs. (Of course, it has taken nearly two decades, so far, for that persuasion to work, and it won’t fully work until 2023, or later.)

So, considering all of the background regarding the difficulties in mandating a national PHYSICAL ID, imagine how things would erupt if the federal government mandated a national DIGITAL ID.

It wouldn’t…um…fly.

Transportation Security Administration Checkpoint at John Glenn Columbus International Airport. By Michael Ball – Own work, CC0, https://commons.wikimedia.org/w/index.php?curid=77279000

And this is why some states are moving ahead on their own with mobile driver’s licenses (mDLs).

LA Wallet Louisiana Digital Driver’s License. lawallet.com.

However, there’s a teeny tiny catch: while the states can choose to mandate that their mDLs be accepted at the STATE level, states cannot mandate that their digital identities be used for FEDERAL purposes.

Here we go again.

Of course, federal government agencies are starting to look at the issues with a mobile version of a “REAL ID,” including the standard(s) to which any mobile ID used for federal purposes must adhere.

Improving Digital Identity Act of 2020, or 2021, or 2025…

While those executive branch agencies are doing this work, another part of the government (the U.S. Congress) is also working on this. Castro mentions Rep. Bill Foster’s H.R. 8215, introduced in the last Congress. I’m not sure why Foster bothered to introduce it in September 2020, when Congress wasn’t going to do anything with it. As you may have heard, we had an election at that time.

Of course, he just reintroduced it last month, so now there’s more of a chance that it will be considered. Or maybe not.

Regardless, the “Improving Digital Identity Act” proposes the creation of a task force at the federal level with federal, state, and local participants. It also mandates that NIST create a digital identity “framework,” with an interim version available 240 days after the Act is passed. Among other things, the Act also mandates that NIST Special Publication 800-63 become “binding operational directives” for federal agencies.

(Does that mean that it will be illegal to mandate password changes every 90 days? Woo hoo!)
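
Since I brought up NIST Special Publication 800-63, here is a rough illustration of what its password guidance implies in practice: a minimum length, screening against known-compromised passwords, and no arbitrary expiration clock. This is my own minimal sketch in Python, not official NIST code, and the compromised-password list is a tiny stand-in for a real breach corpus.

```python
# A minimal sketch of an SP 800-63B-style memorized secret policy.
# My own illustration, not official NIST code; the compromised list
# is a tiny stand-in for a real breach corpus.

COMPROMISED = {"password", "12345678", "qwertyuiop"}

def is_acceptable(password: str) -> bool:
    """Return True if the password meets the 800-63B-style policy."""
    if len(password) < 8:                 # minimum length requirement
        return False
    if password.lower() in COMPROMISED:   # screen against compromised values
        return False
    return True  # note: no composition rules, and no 90-day expiration clock

print(is_acceptable("password"))                      # False: known-compromised
print(is_acceptable("correct horse battery staple"))  # True
```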

Should this Act actually pass at some point, its directives will need to be harmonized with what the Department of Homeland Security is already doing, and of course with what the states are already doing.

Oh, and remember my reference to the DHS’ work in this area? Among those who have submitted verbal and/or written comments, several (primarily from privacy organizations) have stated that the government should NOT be promoting ANY digital ID at all. The sentiments in this written comment, submitted anonymously, are all too common.

There are a lot of security and privacy concerns with accepting digital ID’s. First and foremost, drivers licenses contain a lot of sensitive information. If digital ID’s are accepted, then it could potentially leak that info to hackers if it is not secured properly. Plus, there is the added concern that using digital ID’s will lead to extra surveillance where unnecesary. Finally, digital ID will not allow individuals who are poorer to be abele to submit an ID because they might not have access to the same facilities. I am strongly against this rule and I do NOT think that digital ID should be an option.

I expect other privacy organizations to submit comments that may be better written, but that echo the same sentiment.

Two articles on facial recognition

Within the last hour I’ve run across two articles that discuss various aspects of facial recognition, dispelling popular notions about the science in the process.

Ban facial recognition? Ain’t gonna happen

The first article was originally shared by my former IDEMIA colleague Peter Kirkwood, who certainly understood the significance of it from his many years in the identity industry.

The article, published by the Security Industry Association (SIA), is entitled “Most State Legislatures Have Rejected Bans and Severe Restrictions on Facial Recognition.”

Admittedly, the SIA is by definition an industry association, but in this case it is simply noting a fact.

With most 2021 legislative sessions concluded or winding down for the year, proposals to ban or heavily restrict the technology have had very limited overall success despite recent headlines. It turns out that such bills failed to advance or were rejected by legislatures in no fewer than 17 states during the 2020 and 2021 sessions: California, Colorado, Hawaii, Kentucky, Maine, Maryland, Massachusetts, Michigan, Minnesota, Montana, Nebraska, New Hampshire, New Jersey, New York, Oregon, South Carolina and Washington.

And the article even cited one instance in which public safety officials and civil libertarians worked together, proving such cooperation is actually possible.

In March, Utah enacted the nation’s most comprehensive and precise policy safeguards for government applications. The measure, supported both by the Utah Department of Public Safety as well as the American Civil Liberties Union, establishes requirements for public-sector and law enforcement use, including conditions for access to identity records held by the state, and transparency requirements for new public sector applications of facial recognition technology.

This reminds me of Kirkwood’s statement when he originally shared the article on LinkedIn: “Targeted use with appropriate governance and transparency is an incredibly powerful and beneficial tool.”

NIST’s biometric exit tests reveal an inconvenient truth

Meanwhile, the National Institute of Standards and Technology, which is clearly NOT an industry association, continues to enhance its ongoing Face Recognition Vendor Test (FRVT). As I noted myself on Facebook and LinkedIn:

With its latest rounds of biometric testing over the last few years, the National Institute of Standards and Technology has shown its ability to adapt its testing to meet current situations.

In this case, NIST announced that it has applied its testing to the not-so-new use case of using facial recognition as a “biometric exit” tool, or as a way to verify that someone who was supposed to leave the country has actually left the country. The biometric exit use case emerged after 9/11 in response to visa overstays, and while the vast, vast majority of people who overstay visas do not fly planes into buildings and kill thousands of people, visa overstays are clearly a concern and thus merit NIST testing.

Transportation Security Administration Checkpoint at John Glenn Columbus International Airport. By Michael Ball – Own work, CC0, https://commons.wikimedia.org/w/index.php?curid=77279000

But buried at the end of the NIST report (accessible from the link in NIST’s news release) was a little quote that should cause discomfort to all of those who reflexively believe that all biometrics are racist and thus need to be banned entirely (see the SIA story above). Here’s what NIST said after having looked at the data from the latest test:

“The team explored differences in performance on male versus female subjects and also across national origin, which were the two identifiers the photos included. National origin can, but does not always, reflect racial background. Algorithms performed with high accuracy across all these variations. False negatives, though slightly more common for women, were rare in all cases.”
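
For the curious, here is roughly what that kind of demographic tabulation looks like under the hood. This is my own illustrative sketch with made-up similarity scores and a made-up threshold, not NIST’s actual methodology or data.

```python
# Tabulate false negative rates per demographic group from genuine
# (same-person) comparison scores. Scores and threshold are hypothetical.
from collections import defaultdict

genuine_scores = [
    ("female", 0.91), ("female", 0.42), ("female", 0.88),
    ("male",   0.95), ("male",   0.87), ("male",   0.90),
]
THRESHOLD = 0.80  # a genuine pair scoring below this is a false negative

counts = defaultdict(lambda: [0, 0])  # group -> [false negatives, total attempts]
for group, score in genuine_scores:
    counts[group][1] += 1
    if score < THRESHOLD:
        counts[group][0] += 1

for group, (fn, total) in sorted(counts.items()):
    print(f"{group}: false negative rate = {fn / total:.1%} ({fn} of {total})")
```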

And as Peter Kirkwood and many other industry professionals would say, you need to use the technology responsibly. This includes things such as:

  • In criminal cases, having all computerized biometric search results reviewed by a trained forensic face examiner.
  • ONLY using facial recognition results as an investigative lead, and not relying on facial recognition alone to issue an arrest warrant.

So facial recognition providers and users had a good day. How was yours?

Requests for Comments (RFCs), formal and casual

I don’t know how it happened, but people in the proposals world have to use a lot of acronyms that begin with the letters “RF.” But one “RF” acronym isn’t strictly a proposal acronym, and that’s the acronym “RFC,” or “Request for Comments.”

In one sense, RFC has a very limited meaning. It is often used specifically to refer to documents provided by the Internet Engineering Task Force.

A Request for Comments (RFC) is a numbered document, which includes appraisals, descriptions and definitions of online protocols, concepts, methods and programmes. RFCs are administered by the IETF (Internet Engineering Task Force). A large part of the standards used online are published in RFCs. 

But the IETF doesn’t hold an exclusive trademark on the RFC acronym. As I noted in a post on my personal blog, the National Institute of Standards and Technology recently requested comments on a draft document, NISTIR 8334 (Draft), Mobile Device Biometrics for Authenticating First Responders.

While a Request for Comments differs in some respects from a Request for Proposal or a Request for Information, all of the “RFs” require the respondents to follow some set of rules. Comments, proposals, and information need to be provided in the format specified by the appropriate “RF” document. In the case of NIST’s RFC, all comments needed to include some specific information:

  • The commenter’s name.
  • The commenter’s email address.
  • The line number(s) to which the comment applied.
  • The page number(s) to which the comment applied.
  • The comment.

Comments could be supplied in one of two ways (via email and via web form submission). I chose the former.
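
If you have more than a comment or two, it helps to organize them before pasting them into the email. Here is a minimal sketch of one way to do that; the data structure and formatting are my own invention, not a NIST-specified schema.

```python
# Structure comments with the fields NIST asked for, then render them
# as a plain-text block for an email submission. My own invention,
# not a NIST-specified schema.
from dataclasses import dataclass

@dataclass
class Comment:
    name: str
    email: str
    pages: str   # e.g., "14"
    lines: str   # e.g., "120-125"
    text: str

def format_for_email(comments: list[Comment]) -> str:
    blocks = []
    for i, c in enumerate(comments, start=1):
        blocks.append(
            f"Comment {i}\n"
            f"  Commenter: {c.name} <{c.email}>\n"
            f"  Page(s): {c.pages}  Line(s): {c.lines}\n"
            f"  Comment: {c.text}"
        )
    return "\n\n".join(blocks)

print(format_for_email([
    Comment("Jane Doe", "jane@example.com", "14", "120-125",
            "Consider defining 'first responder' earlier in the document."),
]))
```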

Cover letter of the PDF that I submitted to NIST via email.

On the other hand, NIST’s RFC didn’t impose some of the requirements found in other “RF” documents.

  • Unlike a recent RFI to which I responded, I could submit as many pages as I liked, and use any font size that I wished. (Both are important for those respondents who choose to meet a 20-page limit by submitting 8-point text.)
  • Unlike a recent RFP to which I responded, I was not required to state all prices in US dollars, exclusive of taxes. (In fact, I didn’t state any prices at all.)
  • I did not have to provide any hard copies of my response. (Believe it or not, some government agencies STILL require printed responses to RFPs. Thankfully, they’re not requiring 12 copies of said responses these days like they used to.)
  • I did not have to state whether or not I was a small business, provide three years of audited financials, or state whether any of the principal officers of my company had been convicted of financial crimes. (I am a small business; my company doesn’t have three years of financials, audited or not; and I am not a crook.)

So RFC responses aren’t quite as involved as RFP/RFI responses.

But they do have a due date and time.

By Arista Records – 45cat.com, Fair use, https://en.wikipedia.org/w/index.php?curid=44395072

Pangiam acquires something else (in this case TrueFace)

People have been coming here to find this news (thanks, Google Search Console), so I figured I’d better share it here.

Remember Pangiam, the company that I talked about back in March when it acquired the veriScan product from the Metropolitan Washington Airports Authority? Well, last week Pangiam acquired another company.

TYSONS CORNER, Va., June 2, 2021 /PRNewswire/ — Pangiam, a technology-based security and travel services provider, announced today that it has acquired Trueface, a U.S.-based leader in computer vision focused on facial recognition, weapon detection and age verification technologies. Terms of the transaction were not disclosed….

Trueface, founded in 2013 by Shaun Moore and Nezare Chafni, provides industry leading computer vision solutions to customers in a wide range of industries. The company’s facial recognition technology recently achieved a top three ranking among western vendors in the National Institute of Standards and Technology (NIST) 1:N Face Recognition Vendor Test. 

(Just an aside here: companies can use NIST tests to extract all sorts of superlatives that can be applied to their products, once a bunch of qualifications are applied. Pay attention to the use of the phrase “among western vendors.” While there may be legitimate reasons to exclude non-western vendors from comparisons, make a mental note when such an exclusion is made.)

But what does this mean in terms of Pangiam’s existing product? The press release covers this also.

Trueface will add an additional capability to Pangiam’s existing technologies, creating a comprehensive and seamless solution to satisfy the needs of both federal and commercial enterprises.

And because Pangiam is not a publicly traded company, it is not obliged to add a disclaimer warning investors that this integration might not happen, bla bla bla. Publicly traded companies are obligated to include such disclaimers so that investors are aware of the risks when a company speculates about its future plans. Pangiam’s owners are (presumably) well aware of the risks.

For example, a US government agency may prefer to do business with an eastern vendor. In fact, the US government does a lot of business with one eastern vendor (not Chinese or Russian).

But we’ll see what happens with any future veriTruefaceScan product.

The tone of voice to use when talking about forensic mistakes

Remember my post that discussed the tone of voice that a company chooses to use when talking about the benefits of the company and its offerings?

Or perhaps you saw the repurposed version of the post, a page section entitled “Don’t use that tone of voice with me!”

The tone of voice that a firm uses extends not only to benefit statements, but to all communications from a company. Sometimes the tone of voice attracts potential clients. Sometimes it repels them.

For example, a book was published a couple of months ago. Check the tone of voice in these excerpts from the book advertisement.

“That’s not my fingerprint, your honor,” said the defendant, after FBI experts reported a “100-percent identification.” They were wrong. It is shocking how often they are. Autopsy of a Crime Lab is the first book to catalog the sources of error and the faulty science behind a range of well-known forensic evidence, from fingerprints and firearms to forensic algorithms. In this devastating forensic takedown, noted legal expert Brandon L. Garrett poses the questions that should be asked in courtrooms every day: Where are the studies that validate the basic premises of widely accepted techniques such as fingerprinting? How can experts testify with 100 percent certainty about a fingerprint, when there is no such thing as a 100 percent match? Where is the quality control in the laboratories and at the crime scenes? Should we so readily adopt powerful new technologies like facial recognition software and rapid DNA machines? And why have judges been so reluctant to consider the weaknesses of so many long-accepted methods?

Note that author Brandon Garrett is NOT making this stuff up. People in the identity industry are well aware of the Brandon Mayfield case and others that started a series of reforms beginning in 2009, including changes in courtroom testimony and increased testing of forensic techniques by the National Institute of Standards and Technology and others.

It’s obvious that I, with my biases resulting from over 25 years in the identity industry, am not going to enjoy phrases such as “devastating forensic takedown,” especially when I know that some sectors of the forensics profession have been working on correcting these mistakes for 12 years now, and have cooperated with the Innocence Project to rectify some of these mistakes.

So from my perspective, here are my two concerns about language that could be considered inflammatory:

  • Inflammatory language focusing on anecdotal incidents leads to improper conclusions. Yes, there are anecdotal instances in which fingerprint examiners made incorrect decisions. Yes, there are anecdotal instances in which police agencies did not use facial recognition computer results solely as investigative leads, resulting in false arrests. But anecdotal incidents are not in my view substantive enough to ban fingerprint recognition or facial recognition entirely, as some (not all) who read Garrett’s book are going to want to do (and have done, in certain jurisdictions).
  • Inflammatory language prompts inflammatory language from “the other side.” Some forensic practitioners and criminal justice stakeholders may not be pleased to learn that they’ve been targeted by a “devastating forensic takedown.” And sometimes the responses can get nasty: “enemies” of forensic techniques “love criminals.”

Of course, it may be nearly impossible to have a reasoned discussion of forensic and police techniques these days. And I’ll confess that it’s hard to sell books by taking a nuanced tone in the book blurb. But it would be nice if we could all just get along.

P.S. Garrett was interviewed on TV in connection with the Derek Chauvin trial, and did not (IMHO) come off as a wild-eyed “defund the police” hack. His major point was that Chauvin’s actions were not made in a split second, but over the course of several minutes.

Words matter, or the latest from the National Institute of Standards and Technology on problematic security terms

(Alternate title: Why totem pole blackmail is so left field.)

I want to revisit a topic I last addressed in December, in a post entitled “Words matter, or the latest from the Security Industry Association on problematic security terms.”

If you recall, that post mentioned the realization in the technology community that certain long-standing industry terms were no longer acceptable to many technologists. My post cited the Security Industry Association’s recommendations for eliminating language bias, such as replacing the term “slave” (as in master/slave) with the term “secondary” or “responder.” The post also mentions other entities, such as Amazon and Microsoft, who are themselves trying to come up with more inclusive terms.

Now in this particular case, I’m not that bent out of shape over the fact that multiple entities are coming up with multiple standards for inclusive language. (As you know, I feel differently about the plethora of standards for vaccine certificates.) I’ll grant that there might be a bit of confusion when one entity refers to a blocklist, another a block list, and a third a deny list (various replacements for the old term “blacklist”), but the use of different terms won’t necessarily put you on a deny list (or whatever) to enter an airport.
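
For the technologists keeping score, here is roughly what such a terminology change looks like in practice. The identifiers below are hypothetical, not quotes from any particular product’s API.

```python
# A hypothetical before/after sketch of inclusive terminology in code.
# Identifier names are illustrative only.

# Before: legacy terminology
#   master_node = "10.0.0.1"
#   slave_nodes = ["10.0.0.2", "10.0.0.3"]
#   blacklist = {"198.51.100.7"}

# After: the same configuration with updated terms
primary_node = "10.0.0.1"
secondary_nodes = ["10.0.0.2", "10.0.0.3"]
blocklist = {"198.51.100.7"}  # or a "deny list," depending on the style guide

def is_allowed(address: str) -> bool:
    """Reject any address that appears on the blocklist."""
    return address not in blocklist

print(is_allowed("10.0.0.2"))      # True
print(is_allowed("198.51.100.7"))  # False
```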

Well, one other party has weighed in on the inclusive language debate – not to set its own standards, but to suggest how its employees should participate in general standards discussions.

That entity is the National Institute of Standards and Technology (NIST). I’ve mentioned NIST before in other contexts. But NIST just announced its contribution to the inclusive language discussion.

Our choice of language — what we say and how we say it — can have unanticipated effects on our audience, potentially conveying messages other than those we intend. In an effort to help writers express ideas in language that is both clear and welcoming to all readers, the National Institute of Standards and Technology (NIST) has released new guidance on effective wording in technical standards.

The point about “unanticipated effects” is an interesting one. Those of us who have been in tech for a while have an understanding of what the term “blacklist” means, but what of the new person who sees the term for the first time?

So, since NIST employees participate in technical standards bodies, NIST is now publicly sharing its internal guidance as NISTIR 8366, Guidance for NIST Staff on Using Inclusive Language in Documentary Standards. This document is available in PDF form at https://doi.org/10.6028/NIST.IR.8366.

It’s important to note that this document is NOT a standard, and some parts of this “guidance” document aren’t even guidance. For example, section 4.1 begins as follows:

The following is taken from the ‘Inclusive Language’ section of the April 2021 version of the NIST Technical Series Publications Author Instructions. It is not official NIST guidance and will be updated periodically based on user feedback.

Periodic updates will be needed because any type of guidance regarding inclusive language will change over time. (It will also vary across cultures, but since NIST is a United States government agency, its guidance in this particular case is focused on U.S. technologists.)

The major contribution of the NIST guidance is to explain WHY inclusive language is desirable. In addition to noting the “unanticipated effects” of our choice of language, NIST documents five key benefits of inclusive language.

1. avoids false assumptions and permits more precise wording,

2. conveys respect to those who listen or read,

3. maintains neutrality, avoiding unpleasant emotions or connotations brought on by more divisive language (e.g., the term ‘elderly’ may have different connotations based on the age of an employee),

4. removes colloquialisms that are exclusive or usually not well understood by all (e.g., drink the Kool-Aid), and

5. enables all to feel included in the topic discussed.

Let me comment on item 4 above. I don’t know how many people know that the term “drink the Kool-Aid” originated after the Guyana murders of Congressman Leo Ryan and others, and the subsequent mass suicides of Peoples Temple members, including leader Jim Jones.

Rev. Jim Jones at an anti-eviction rally Sunday, January 16, 1977 in front of the International Hotel, Kearny and Jackson Streets, San Francisco. By Nancy Wong – Own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=91003548

They committed suicide by drinking a cyanide-laced drink which may or may not have been Kool-Aid. The entire history (not for the squeamish) can be found here. But even in 2012, many people didn’t know that history, so why use the colloquialism?

So that’s the guidance. But for those keeping score on specific terms, the current guidance document mentions a number of suggestions, either from NIST or from other entities. I’m going to concentrate on three terms that I haven’t mentioned previously.

  • Change “blackmail” to “extortion.”
  • Change “way out in left field” to “made very inaccurate measurements.” (Not only do some people not understand baseball terminology, but the concepts of “left” and “right” are sometimes inapplicable to the situation that is under discussion.)
  • Change “too low on the primary totem pole” to “low priority.” (This is also concise.)

So these discussions continue, sometimes with controversy, sometimes without. But all technologists should be aware that the discussions are occurring.

Biometric writing, and four ways to substantiate a claim of high biometric accuracy

I wanted to illustrate the difference between biometric writing and SUBSTANTIVE biometric writing.

A particular company recently promoted its release of a facial recognition application. The application was touted as “state-of-the-art,” and the press release mentioned “high accuracy.” However, the press release never supported the state-of-the-art or high accuracy claims.

By Cicero Moraes – Own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=66803013

Concentrating on the high accuracy claim, there are four methods by which a biometric vendor (facial recognition, fingerprint identification, iris recognition, whatever) can substantiate a high accuracy claim. This particular company did not employ ANY of these methods.

  • The first method is to publicize the accuracy results of a test that you designed and conducted yourself. This method has its drawbacks, since if you’re administering your own test, you have control over the reported results. But it’s better than nothing.
  • The second method is for you to conduct a test that was designed by someone else. An example of such a test is Labeled Faces in the Wild (LFW). There used to be a test called Megaface, but this project has concluded. A test like this is good for research, but there are still issues; for example, if you don’t like the results, you just don’t submit them.
  • The third method is to have an independent third party design AND conduct the test, using test data. A notable example of this method is the Face Recognition Vendor Test series sponsored by the U.S. National Institute of Standards and Technology. Yet even this test has drawbacks for some people, since the data used to conduct the test is…test data.
  • The fourth method, which could be employed by an entity (such as a government agency) who is looking to purchase a biometric system, is to have the entity design and conduct the test using its own data. Of course, the results of an accuracy test conducted using the biometric data of a local police agency in North America cannot be applied to determine the accuracy of a national passport system in Asia.

So, these are four methods to substantiate a “high accuracy” claim. Each method has its advantages and disadvantages, and it is possible for a vendor to explain WHY it chose one method over the other. (For example, one facial recognition vendor explained that it couldn’t submit its application for NIST FRVT testing because the NIST testing design was not compatible with the way that this vendor’s application worked. For this particular vendor, methods 1 and 4 were better ways to substantiate its accuracy claims.)
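
Whichever method a vendor chooses, the underlying arithmetic is the same. Here is a minimal sketch, using made-up similarity scores, of the two error rates that drive any “accuracy” claim: the false non-match rate (genuine pairs wrongly rejected) and the false match rate (impostor pairs wrongly accepted).

```python
# Compute FNMR and FMR at a chosen threshold. The similarity scores
# below are hypothetical, not any vendor's actual data.

genuine_scores  = [0.92, 0.85, 0.77, 0.95, 0.88]  # same-person comparisons
impostor_scores = [0.12, 0.35, 0.81, 0.22, 0.05]  # different-person comparisons
THRESHOLD = 0.80

fnmr = sum(s < THRESHOLD for s in genuine_scores) / len(genuine_scores)
fmr  = sum(s >= THRESHOLD for s in impostor_scores) / len(impostor_scores)

print(f"FNMR at threshold {THRESHOLD}: {fnmr:.0%}")  # genuine pairs rejected
print(f"FMR at threshold {THRESHOLD}: {fmr:.0%}")    # impostor pairs accepted
```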

But if a company claims “high accuracy” without justifying the claim with ANY of these four methods, then the claim is meaningless. Or, it’s “biometric writing” without substantiation.