Pangiam acquires something else (in this case Trueface)

People have been coming here looking for this news (thanks, Google Search Console), so I figured I'd better share it here.

Remember Pangiam, the company that I talked about back in March when it acquired the veriScan product from the Metropolitan Washington Airports Authority? Well, last week Pangiam acquired another company.

TYSONS CORNER, Va., June 2, 2021 /PRNewswire/ — Pangiam, a technology-based security and travel services provider, announced today that it has acquired Trueface, a U.S.-based leader in computer vision focused on facial recognition, weapon detection and age verification technologies. Terms of the transaction were not disclosed….

Trueface, founded in 2013 by Shaun Moore and Nezare Chafni, provides industry leading computer vision solutions to customers in a wide range of industries. The company’s facial recognition technology recently achieved a top three ranking among western vendors in the National Institute of Standards and Technology (NIST) 1:N Face Recognition Vendor Test. 

(Just an aside here: companies can use NIST tests to extract all sorts of superlatives for their products, once a bunch of qualifications are applied. Pay attention to the use of the phrase "among western vendors." While there may be legitimate reasons to exclude non-western vendors from comparisons, make a mental note whenever such an exclusion is made.)

But what does this mean in terms of Pangiam’s existing product? The press release covers this also.

Trueface will add an additional capability to Pangiam’s existing technologies, creating a comprehensive and seamless solution to satisfy the needs of both federal and commercial enterprises.

And because Pangiam is not a publicly traded company, it is not obliged to add a disclaimer to investors saying this integration might not happen, bla bla bla. Publicly traded companies are obligated to include such disclaimers so that investors are aware of the risks when a company speculates about its future plans. Since Pangiam is not publicly traded, its owners are (presumably) well aware of the risks already.

One such risk: a US government agency may prefer to do business with an eastern vendor. In fact, the US government already does a lot of business with one eastern vendor (not Chinese or Russian).

But we’ll see what happens with any future veriTruefaceScan product.

The tone of voice to use when talking about forensic mistakes

Remember my post that discussed the tone of voice that a company chooses to use when talking about the benefits of the company and its offerings?

Or perhaps you saw the repurposed version of the post, a page section entitled “Don’t use that tone of voice with me!”

The tone of voice that a firm uses extends not only to benefit statements, but to all of the company's communications. Sometimes the tone of voice attracts potential clients. Sometimes it repels them.

For example, a book was published a couple of months ago. Check the tone of voice in these excerpts from the book advertisement.

“That’s not my fingerprint, your honor,” said the defendant, after FBI experts reported a “100-percent identification.” They were wrong. It is shocking how often they are. Autopsy of a Crime Lab is the first book to catalog the sources of error and the faulty science behind a range of well-known forensic evidence, from fingerprints and firearms to forensic algorithms. In this devastating forensic takedown, noted legal expert Brandon L. Garrett poses the questions that should be asked in courtrooms every day: Where are the studies that validate the basic premises of widely accepted techniques such as fingerprinting? How can experts testify with 100 percent certainty about a fingerprint, when there is no such thing as a 100 percent match? Where is the quality control in the laboratories and at the crime scenes? Should we so readily adopt powerful new technologies like facial recognition software and rapid DNA machines? And why have judges been so reluctant to consider the weaknesses of so many long-accepted methods?

Note that author Brandon Garrett is NOT making this stuff up. People in the identity industry are well aware of the Brandon Mayfield case and others that started a series of reforms beginning in 2009, including changes in courtroom testimony and increased testing of forensic techniques by the National Institute of Standards and Technology and others.

It’s obvious that I, with my biases resulting from over 25 years in the identity industry, am not going to enjoy phrases such as “devastating forensic takedown,” especially when I know that some sectors of the forensics profession have been working on correcting these mistakes for 12 years now, and have cooperated with the Innocence Project to rectify some of these mistakes.

So from my perspective, here are my two concerns about language that could be considered inflammatory:

  • Inflammatory language focusing on anecdotal incidents leads to improper conclusions. Yes, there are anecdotal instances in which fingerprint examiners made incorrect decisions. Yes, there are anecdotal instances in which police agencies failed to treat facial recognition results solely as investigative leads, resulting in false arrests. But anecdotal incidents are not, in my view, substantive enough to justify banning fingerprint recognition or facial recognition entirely, as some (not all) who read Garrett’s book are going to want to do (and have done, in certain jurisdictions).
  • Inflammatory language prompts inflammatory language from “the other side.” Some forensic practitioners and criminal justice stakeholders may not be pleased to learn that they’ve been targeted by a “devastating forensic takedown.” And sometimes the responses can get nasty: “enemies” of forensic techniques “love criminals.”

Of course, it may be nearly impossible to have a reasoned discussion of forensic and police techniques these days. And I’ll confess that it’s hard to sell books by taking a nuanced tone in the book blurb. But it would be nice if we could all just get along.

P.S. Garrett was interviewed on TV in connection with the Derek Chauvin trial, and did not (IMHO) come off as a wild-eyed “defund the police” hack. His major point was that Chauvin’s actions were not taken in a split second, but over the course of several minutes.

Words matter, or the latest from the National Institute of Standards and Technology on problematic security terms

(Alternate title: Why totem pole blackmail is so left field.)

I want to revisit a topic I last addressed in December, in a post entitled “Words matter, or the latest from the Security Industry Association on problematic security terms.”

If you recall, that post mentioned the realization in the technology community that certain long-standing industry terms were no longer acceptable to many technologists. My post cited the Security Industry Association’s recommendations for eliminating language bias, such as replacing the term “slave” (as in master/slave) with the term “secondary” or “responder.” The post also mentioned other entities, such as Amazon and Microsoft, that are themselves trying to come up with more inclusive terms.

Now in this particular case, I’m not that bent out of shape over the fact that multiple entities are coming up with multiple standards for inclusive language. (As you know, I feel differently about the plethora of standards for vaccine certificates.) I’ll grant that there might be a bit of confusion when one entity refers to a blocklist, another a block list, and a third a deny list (various replacements for the old term “blacklist”), but the use of different terms won’t necessarily put you on a deny list (or whatever) to enter an airport.
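For what it’s worth, here’s a minimal sketch of what the renamed terminology looks like in practice. Everything in it (the addresses, the names, the function) is hypothetical and purely illustrative; the point is simply that “denylist” and “allowlist” express the same logic the older terms did.

```python
# Minimal, purely illustrative sketch: the same access check written with
# "denylist"/"allowlist" terminology instead of the older "blacklist"/"whitelist".
# The addresses and names are hypothetical.

denylist = {"203.0.113.7", "198.51.100.23"}  # addresses to be blocked
allowlist = {"192.0.2.10"}                   # addresses always permitted

def is_permitted(ip_address: str) -> bool:
    """Allowlisted addresses are always permitted; denylisted ones are not."""
    if ip_address in allowlist:
        return True
    return ip_address not in denylist

print(is_permitted("192.0.2.10"))   # True
print(is_permitted("203.0.113.7"))  # False
```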

Well, one other party has weighed in on the inclusive language debate – not to set its own standards, but to suggest how its employees should participate in general standards discussions.

That entity is the National Institute of Standards and Technology (NIST). I’ve mentioned NIST before in other contexts. But NIST just announced its contribution to the inclusive language discussion.

Our choice of language — what we say and how we say it — can have unanticipated effects on our audience, potentially conveying messages other than those we intend. In an effort to help writers express ideas in language that is both clear and welcoming to all readers, the National Institute of Standards and Technology (NIST) has released new guidance on effective wording in technical standards.

The point about “unanticipated effects” is an interesting one. Those of us who have been in tech for a while have an understanding of what the term “blacklist” means, but what of the new person who sees the term for the first time?

So, since NIST employees participate in technical standards bodies, NIST is now publicly sharing its internal guidance as NISTIR 8366, Guidance for NIST Staff on Using Inclusive Language in Documentary Standards. This document is available in PDF form at https://doi.org/10.6028/NIST.IR.8366.

It’s important to note that this document is NOT a standard, and some parts of this “guidance” document aren’t even guidance. For example, section 4.1 begins as follows:

The following is taken from the ‘Inclusive Language’ section of the April 2021 version of the NIST Technical Series Publications Author Instructions. It is not official NIST guidance and will be updated periodically based on user feedback.

The guidance needs periodic updating because any type of guidance regarding inclusive language will change over time. (It will also change according to culture, but since NIST is a United States government agency, its guidance in this particular case is focused on U.S. technologists.)

The major contribution of the NIST guidance is to explain WHY inclusive language is desirable. In addition to noting the “unanticipated effects” of our choice of language, NIST lists five key benefits. Inclusive language:

1. avoids false assumptions and permits more precise wording,

2. conveys respect to those who listen or read,

3. maintains neutrality, avoiding unpleasant emotions or connotations brought on by more divisive language (e.g., the term ‘elderly’ may have different connotations based on the age of an employee),

4. removes colloquialisms that are exclusive or usually not well understood by all (e.g., drink the Kool-Aid), and

5. enables all to feel included in the topic discussed.

Let me comment on item 4 above. I don’t know how many people know that the term “drink the Kool-Aid” originated after the Guyana murders of Congressman Leo Ryan and others, and the subsequent mass suicides of Peoples Temple members, including leader Jim Jones.

[Photo: Rev. Jim Jones at an anti-eviction rally, Sunday, January 16, 1977, in front of the International Hotel, Kearny and Jackson Streets, San Francisco. By Nancy Wong, own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=91003548]

They committed suicide by drinking a cyanide-laced drink, which may or may not have been Kool-Aid. The entire history (not for the squeamish) can be found here. But even in 2012, many people didn’t know that history, so why use the colloquialism?

So that’s the guidance. But for those keeping score on specific terms, the current guidance document mentions a number of suggestions, either from NIST or from other entities. I’m going to concentrate on three terms that I haven’t mentioned previously.

  • Change “blackmail” to “extortion.”
  • Change “way out in left field” to “made very inaccurate measurements.” (Not only do some people not understand baseball terminology, but the concepts of “left” and “right” are sometimes inapplicable to the situation that is under discussion.)
  • Change “too low on the primary totem pole” to “low priority.” (This is also concise.)

So these discussions continue, sometimes with controversy, sometimes without. But all technologists should be aware that the discussions are occurring.

Biometric writing, and four ways to substantiate a claim of high biometric accuracy

I wanted to illustrate the difference between biometric writing and SUBSTANTIVE biometric writing.

A particular company recently promoted its release of a facial recognition application. The application was touted as “state-of-the-art,” and the press release mentioned “high accuracy.” However, the press release never supported the state-of-the-art or high accuracy claims.

[Image by Cicero Moraes, own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=66803013]

Concentrating on the high accuracy claim, there are four methods by which a biometric vendor (facial recognition, fingerprint identification, iris recognition, whatever) can substantiate a high accuracy claim. This particular company did not employ ANY of these methods.

  • The first method is to publicize the accuracy results of a test that you designed and conducted yourself. This method has its drawbacks, since if you’re administering your own test, you have control over the reported results. But it’s better than nothing. (See the sketch after this list for the kind of error rates such a self-administered test typically reports.)
  • The second method is for you to conduct a test that was designed by someone else. An example of such a test is Labeled Faces in the Wild (LFW). There used to be a test called Megaface, but this project has concluded. A test like this is good for research, but there are still issues; for example, if you don’t like the results, you just don’t submit them.
  • The third method is to have an independent third party design AND conduct the test, using test data. A notable example of this method is the Face Recognition Vendor Test (FRVT) series sponsored by the U.S. National Institute of Standards and Technology. Yet even this test has drawbacks for some people, since the data used to conduct the test is…test data.
  • The fourth method, which could be employed by an entity (such as a government agency) that is looking to purchase a biometric system, is to have the entity design and conduct the test using its own data. Of course, the results of an accuracy test conducted using the biometric data of a local police agency in North America cannot be applied to determine the accuracy of a national passport system in Asia.
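(Whichever method is used, a “high accuracy” claim ultimately boils down to measured error rates, most commonly the false non-match rate (FNMR) and the false match rate (FMR) at a stated decision threshold. Here is a minimal sketch of how those two rates are computed; the comparison scores below are entirely made up, and a real evaluation would report these figures over far larger datasets and at specific operating points, such as FNMR at FMR = 0.001.)

```python
# Minimal sketch of the error rates behind a biometric "accuracy" claim.
# The score lists below are made up for illustration; higher score = stronger match.

def error_rates(genuine_scores, impostor_scores, threshold):
    """Return (FNMR, FMR) at the given decision threshold."""
    # A genuine (same-person) comparison scoring below the threshold is a false non-match.
    fnmr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    # An impostor (different-person) comparison scoring at or above the threshold is a false match.
    fmr = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    return fnmr, fmr

genuine_scores = [0.92, 0.88, 0.97, 0.81, 0.45]   # hypothetical same-person scores
impostor_scores = [0.12, 0.34, 0.05, 0.41, 0.62]  # hypothetical different-person scores

fnmr, fmr = error_rates(genuine_scores, impostor_scores, threshold=0.5)
print(f"FNMR = {fnmr:.2f}, FMR = {fmr:.2f}")  # FNMR = 0.20, FMR = 0.20
```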

So, these are four methods to substantiate a “high accuracy” claim. Each method has its advantages and disadvantages, and it is possible for a vendor to explain WHY it chose one method over the others. (For example, one facial recognition vendor explained that it couldn’t submit its application for NIST FRVT testing because the NIST testing design was not compatible with the way that this vendor’s application worked. For this particular vendor, methods 1 and 4 were better ways to substantiate its accuracy claims.)

But if a company claims “high accuracy” without justifying the claim with ANY of these four methods, then the claim is meaningless. Or, it’s “biometric writing” without substantiation.