This post is adapted from Bredemarket’s November 10, 2021 submitted comments on DHS-2021-0015-0005, Information Collection Request, Public Perceptions of Emerging Technology. See my first and second posts on the topic.
DHS asked respondents to address five questions, including this one:
(2) will this information be processed and used in a timely manner;
Here is part of my response.
I am answering this question from the perspective of a person crossing the border or boarding a plane.
During the summer of 2017, CBP conducted biometric exit facial recognition technical demonstrations with various airlines and airports throughout the country. Here, CBP Officer Michael Shamma answers a London-bound American Airlines passenger’s questions at Chicago O’Hare International Airport. Photo by Brian Bell. From https://www.cbp.gov/frontline/cbp-biometric-testing
From this perspective, you can ask whether the use of biometric technologies makes the entire process faster, or slower.
Before biometric technologies became available, a person would cross a border or board a plane either by conducting no security check at all, or by having a human conduct a manual security check using the document(s) provided by an individual.
Unless a person was diverted to a secondary inspection process, this manual identification of the person (excluding questions such as “What is your purpose for entering the United States?”) could be accomplished in a few seconds.
However, manual security checks are much less accurate than technological solutions, as will be illustrated in a future post.
With biometric technologies, it is necessary to measure both the time to acquire the biometric data (in this case a facial image) and the time to compare the acquired data against the known data for the person (from a passport, passenger manifest, or database).
The time to acquire biometric data continues to improve. In some cases, the biometric data can be acquired “on the move” as the person is walking toward a gate or other entry area, thus requiring no additional time from the person’s perspective.
The time to compare biometric data can vary. If the source of the known data (such as the passport) is with the person, then comparison can be instantaneous from the person’s perspective. If the source of the known data is a database in a remote location, then the speed of comparison depends upon many factors, including network connections and server computation times. Naturally, DHS designs its systems to minimize this time, ensuring minimal or no delay from the person’s perspective. Of course, a network or system failure can adversely affect this.
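To make the timing arithmetic concrete, here is a minimal sketch of the model described above: the delay a traveler perceives is the acquisition time (which can drop to zero for on-the-move capture) plus the comparison time (which can include a network round trip). The function name and the example durations are my own illustrative assumptions, not DHS measurements.

```python
# A minimal, hypothetical sketch of the timing model described above.
# The numbers below are illustrative assumptions, not DHS measurements.

def traveler_wait_seconds(acquisition_s: float,
                          comparison_s: float,
                          acquired_on_the_move: bool = False) -> float:
    """Estimate the delay a traveler perceives at a biometric checkpoint.

    If the image is captured "on the move," acquisition overlaps with walking
    and adds nothing from the traveler's perspective.
    """
    perceived_acquisition = 0.0 if acquired_on_the_move else acquisition_s
    return perceived_acquisition + comparison_s

# Kiosk-style capture with a local comparison (e.g., against a passport chip).
print(traveler_wait_seconds(acquisition_s=2.0, comparison_s=0.3))

# On-the-move capture with a remote database lookup over the network.
print(traveler_wait_seconds(acquisition_s=2.0, comparison_s=1.5,
                            acquired_on_the_move=True))
```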
In short, biometric evaluation is as fast as, if not faster than, manual processes (provided no network or system failure occurs), and it is more accurate than human processes.
Automated Passport Control kiosks located at international airports across the nation streamline the passenger’s entry into the United States. Photo Credit: James Tourtellotte. From https://www.cbp.gov/travel/us-citizens/apc
This post is adapted from Bredemarket’s November 10, 2021 submitted comments on DHS-2021-0015-0005, Information Collection Request, Public Perceptions of Emerging Technology. See yesterday’s post for additional thoughts on bias, security, and privacy.
Because of many factors, including the 9/11 tragedy that spurred the organization of the Department of Homeland Security (DHS) itself, DHS has been charged to identify individuals as a part of its oversight of customs and border protection, transportation security, and investigations. There are many ways to identify individuals, including:
What you know, such as a password.
What you have, such as a passport or token.
What you are, such as your individual face, fingers, voice, or DNA.
Where you are.
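For those who like to see the idea in code, here is a minimal sketch, entirely my own illustration rather than any DHS system, of how an identity check might combine the factors listed above, with a biometric match score standing in for the “what you are” test.

```python
# Hypothetical illustration of combining identification factors.
# Names, thresholds, and logic are assumptions for explanation only.

def identity_check(knows_password: bool,
                   presents_valid_document: bool,
                   biometric_match_score: float,
                   match_threshold: float = 0.8) -> bool:
    """Return True if enough independent factors confirm the claimed identity."""
    factors_confirmed = sum([
        knows_password,                           # what you know
        presents_valid_document,                  # what you have
        biometric_match_score >= match_threshold  # what you are
    ])
    return factors_confirmed >= 2  # require at least two independent factors

print(identity_check(False, True, 0.93))  # document plus face match -> True
```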
Is it possible to identify an individual without use of computerized facial recognition or other biometric or AI technologies? In other words, can the “what you are” test be eliminated from DHS operations?
Some may claim that the “what you have” test is sufficient. Present a driver’s license or a passport and you’re identified.
However, secure documents are themselves secured by the use of biometrics, primarily facial recognition.
Before a passport is issued, many countries including the U.S. conduct some type of biometric test to ensure that a single person does not obtain two or more passports.
Similar tests are conducted before driver’s licenses and other secure documents are issued.
In addition, people attempt to forge secure documents by creating fake driver’s licenses and fake passports. Thus, all secure documents need to be evaluated, in part by confirming that the biometrics on the document match the biometrics of the person presenting the document.
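Conceptually, that confirmation is a 1:1 verification: the biometric template stored on (or derived from) the document is compared against a template captured from the person standing there. Here is a minimal sketch of that comparison; the similarity metric, template format, and threshold are my own illustrative assumptions, not any agency’s or vendor’s actual algorithm.

```python
# Hypothetical sketch of a 1:1 document-versus-presenter check.
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def document_matches_presenter(document_template: list[float],
                               live_template: list[float],
                               threshold: float = 0.8) -> bool:
    """True if the face on the document appears to be the person presenting it."""
    return cosine_similarity(document_template, live_template) >= threshold

# Toy templates standing in for real face embeddings.
print(document_matches_presenter([0.1, 0.9, 0.4], [0.12, 0.88, 0.41]))  # True
```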
In short, there is no way to remove biometric identification from the DHS identification operation. And if you did, who knows how each individual officer would judge whether a person is who they claim to be?
This post is adapted from Bredemarket’s November 10, 2021 submitted comments on DHS-2021-0015-0005, Information Collection Request, Public Perceptions of Emerging Technology.
The original DHS request included the following sentence in the introductory section:
AI in general and facial recognition in particular are not without public controversy, including concerns about bias, security, and privacy.
Even though this was outside of the topics specifically requiring a response, I had to respond anyway. Here’s (in part) what I said.
The topics of bias, security, and privacy deserve attention. Public misunderstandings on these topics have the capability of scuttling all of DHS’ efforts in customs and border protection, transportation security, and investigations.
Regarding bias, it is incumbent upon government agencies, biometric vendors, and other interested parties (including myself as a biometric consultant) to educate and inform the public about issues relating to bias. In the interests of brevity, I will confine myself to two critical points.
There is a difference between identification of individuals and classification of groups of individuals.
The summary at the top of the Gender Shades website http://gendershades.org/ clearly frames the question asked by the study: “How well do IBM, Microsoft, and Face++ AI services guess the gender of a face?” As the study title and its summary clearly state, the study only attempted to classify the genders of faces.
This is a different problem than the problem addressed in customs and border protection, transportation security, and investigations applications: namely, the identification of an individual. If someone purporting to be me attempts to board a plane, DHS does not care whether I am male, female, gender fluid, or anything else related to gender. DHS only cares about my individual identity.
It is imperative that any discussion of bias as related to DHS purposes confine itself to the DHS use case of identification of individuals.
Different algorithms exhibit different levels of bias (and different types of bias) when identifying individuals.
While Gender Shades did not directly address this issue, it turns out that it is possible to measure differences in individual identification accuracy across genders, races, and ages.
The National Institute of Standards and Technology (NIST) has conducted ongoing studies of the accuracy and performance of face recognition algorithms. In one of these tests, the FRVT 1:1 Verification Test (at the https://pages.nist.gov/frvt/html/frvt11.html URL), each tested algorithm is examined for its performance among different genders, races (with nationality used as a proxy for race), and ages.
While neither IBM nor Microsoft (two of the three algorithm providers studied in Gender Shades) has submitted algorithms to the FRVT 1:1 Verification Test, over 360 1:1 algorithms have been tested by NIST.
In a 2019 report issued by NIST on demographic effects (at the https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf URL), NIST concluded that the tested algorithms “show a wide range in accuracy across developers, with the most accurate algorithms producing many fewer errors.”
It is possible to look at the data for each individual algorithm to see detailed information on the algorithm’s performance. Click on each 1:1 algorithm to see its “report card,” including demographic results.
However, even NIST tests are just that – tests. Performance of a research algorithm on a NIST test with NIST data does not guarantee the same performance of an operational algorithm in a DHS system with DHS data.
As DHS implements biometric systems for its purposes of customs and border protection, transportation security, and investigations, DHS not only needs to internally measure the overall accuracy of these systems using DHS algorithms and data, but also needs to internally measure accuracy when these demographic factors are taken into account. While even highly accurate results may not be perceived as such by the public (the anecdotal tale of a single inaccurate result may outweigh stellar statistical accuracy in the public’s mind), such accuracy measurements are essential for the DHS to ensure that it is fulfilling its mission.
Regarding security and privacy, which are intertwined in many ways, there are legitimate questions regarding how the use of biometric technologies can detract from or enhance the security and privacy of individual information. (I will confine myself to technology issues, and will not comment on the societal questions regarding knowledge of an individual’s whereabouts.)
Data, including facial recognition vectors or templates, is stored in systems that may themselves be compromised. This is the same issue that is faced by other types of data that may be compromised, including passwords. In this regard, the security of facial recognition data is no different than the security of other data.
In some of the DHS use cases, it is not only necessary to store facial recognition vectors or templates, but it is also necessary to store the original facial images. These are not needed by the facial recognition algorithms themselves, but by the humans who review the results of facial algorithm comparisons. As long as we continue to place facial images on driver’s licenses, passports, visas, and other secure identity documents, the need to store these facial images will continue and cannot be avoided.
However, one must ensure that the storage of any personally identifiable information (including Social Security Numbers and other non-biometric data) is secure, and that the PII is only available on a need-to-know basis.
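For readers who want to see what “secure storage on a need-to-know basis” can look like in practice, here is a minimal sketch using the widely available Python cryptography package. The record layout, identifiers, and placeholder template bytes are my own illustrative assumptions, and a real system would add key management and access controls that I deliberately omit here.

```python
# Illustrative sketch only: encrypt a biometric template (or any PII field)
# before it is written to storage, so a compromised database yields only
# ciphertext. Key management and access control are deliberately omitted.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice, held in a key management service
vault = Fernet(key)

face_template = b"\x01\x02\x03"              # placeholder bytes, not a real template
stored_record = {
    "subject_id": "traveler-0001",            # hypothetical identifier
    "template": vault.encrypt(face_template)  # ciphertext at rest
}

# Only services with a need to know (and access to the key) recover the template.
recovered = vault.decrypt(stored_record["template"])
assert recovered == face_template
```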
In some cases, the use of facial recognition technologies can actually enhance privacy. For example, take the moves by various U.S. states to replace their existing physical driver’s licenses with smartphone-based mobile driver’s licenses (mDLs). These mDL applications can be designed to only provide necessary information to those viewing the mDL.
When a purchaser uses a physical driver’s license to buy age-restricted items such as alcohol, the store clerk viewing the license is able to see a vast amount of PII, including the purchaser’s birthdate, full name, residence address, and even height and weight. A dishonest store clerk can easily misuse this data.
When a purchaser uses a mobile driver’s license to buy age-restricted items, most of this information is not exposed to the store clerk viewing the license. Even the purchaser’s birthdate is not exposed; all that the store clerk sees is whether or not the purchaser is old enough to buy the restricted item (for example, over the age of 21).
Therefore, use of these technologies can actually enhance privacy.
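To make the contrast concrete, here is a simplified sketch of that data-minimization idea. Everything in it (the record fields, the age_over query, the sample values) is my own illustration, not any state’s or vendor’s actual mDL implementation.

```python
# A simplified, hypothetical sketch of the data-minimization idea behind
# mobile driver's licenses: the verifier asks a question and receives an
# answer, not the underlying personal data. All names and values are made up.
from datetime import date

FULL_RECORD = {                      # what the issuing state holds
    "name": "Pat Example",
    "birthdate": date(1999, 6, 1),
    "address": "123 Main St",
    "height_cm": 170,
}

def age_over(record: dict, years: int, today: date) -> bool:
    """Answer an age-over query without revealing the birthdate itself."""
    b = record["birthdate"]
    age = today.year - b.year - ((today.month, today.day) < (b.month, b.day))
    return age >= years

# The store clerk's reader sees only this boolean, not the record above.
print(age_over(FULL_RECORD, 21, date(2021, 11, 12)))
```

The point is structural: the verifier’s question is answered with a yes or no, and the birthdate never leaves the holder’s device.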
I’ll be repurposing other portions of my response as new blog posts over the next several days.
The picture above is what I picture when I think of Dollar General. In fact, it looks similar to a Dollar General that I’ve seen outside of Huntsville, Alabama: just a building with a parking lot out in a field by a major road. You can hear the crickets chirping at night.
Not the kind of place where you’d expect to see a lot of futurists connecting a spectrum of innovation where human and biological system designs interact together seamlessly.
Yes, even Dollar General is embracing technology, but as far as I can tell it is concentrating on consumer-facing technology and hasn’t adopted blockchain yet. But I could be wrong.
At the time, I failed to quote from the linked article, which dates from 2019.
Digital is becoming a “big part” of customers’ lives, Dollar General chief executive Todd Vasos said last year.
Dollar General is also building a digital strategy because customers who redeem digital savings coupons and use the new Dollar General app, released last year (2018), spend about twice as much on average as regular shoppers….
It’s not a surprise that Dollar General has been slow to embrace digital. The company’s core customers make about $40,000 a year per household, more than $20,000 below the national average.
Because of the income gap, Dollar General’s main customers are often “behind the curve” on new technology….But smartphones are ubiquitous now, and about 85% of Dollar General’s customers use one, in line with the national average.
Well, now Dollar General customers have a new way to use their smartphones.
Dollar General (NYSE: DG) today announced a partnership with DoorDash (NYSE: DASH), the nation’s leading last-mile logistics platform, to offer on-demand delivery of household essential items, including food, snacks, cleaning supplies, and more, at everyday low prices customers trust Dollar General to provide….
On-demand delivery from DoorDash is currently available from more than 9,000 Dollar General stores with plans to expand to more than 10,000 locations by December 2021. Dollar General and DoorDash initially piloted a program in summer 2021 with approximately 600 stores in rural and metropolitan communities.
In the minds of some, Dollar General seems very old school while DoorDash seems very cutting-edge. But behind the scenes, Dollar General provides as much tech innovation as another rural success story, Cracker Barrel. And when you think about it, DoorDash is just a warmed-over techie delivery service.
By Conrad Poirier – This file has been scanned and uploaded to Wikimedia Commons with the gracious permission and cooperation of Bibliothèque et Archives nationales du Québec and Wikimedia Canada under the Poirier Project., Public Domain, https://commons.wikimedia.org/w/index.php?curid=34364242
FTR FST (“future fest”), sponsored by 4th Sector Innovations, SwoopIn, and several other organizations, will be held on Friday, November 12 in downtown Ontario, California. While I’m primarily going for the “professional development” part, FTR FST also features creative expression (including food trucks, which appropriately fall into the “creative expression” category), collaboration, and a tech showcase.
The professional development schedule includes the following sessions, among others:
A keynote presentation from Colin Mangham entitled “Days of Future Past.” According to FTR FST, the topic will be biomimicry.
Biomimicry is the practice of learning from and emulating life’s genius to create more efficient, elegant, and sustainable designs. It’s a problem-solving method, a sustainability ethos, an innovation approach, a change movement, and a new way of viewing and valuing nature. In practice it’s dedicated to reconnecting people with nature, and aligning human systems with biological systems.
As such, our aim is to connect a spectrum of innovation where human and biological system designs interact together seamlessly. Our team offers education and consulting to apply biological insights to systematic sustainability challenges. Our collaborative partnerships and services support interdisciplinary dialogue across industry sectors and regions, while reconnecting all of us to the local ecosystem that supports us.
OK, at this point some of you are saying to yourselves, “THAT kind of conference.”
But frankly, there’s just as much value in approaching problems from a futuristic sustainability view as there is in approaching problems from a more traditional program management process (or Shipley process, or whatever), or even from a more old school sustainability view as espoused by Broguiere’s and the late Huell Howser.
See, it all ties together. After all, the new school 4th Sector Innovations is less than a mile from the decidedly old school Graber Olive House (featured in one of Howser’s “Louie, take a look at this!” TV shows.)
The workshop “Next Gen Cyber Security” by Erik Delgadillo, SecLex.
The workshop “The Evolution of Mobility” by Maritza Berger at Piaggio Fast Forward.
A panel (participants unidentified) on equalizing opportunity.
Vendor spotlights.
After 3:00 pm, FTR FST transitions to less intensive sessions. Bring the kids! The complete schedule can be found here.
You can register for FTR FST here. Oh, and one new wrinkle: attendance at the professional development sessions is now FREE, thanks to the event sponsors.
FTR FST will be at 4th Sector Innovations, 404 N Euclid Avenue in Ontario.
Technology often advances more quickly than our society’s ability to deal with the ramifications of technology.
For example, President Eisenhower’s effort to improve our national defense via construction of a high-speed interstate highway system led to a number of unintended consequences, including the devastation of city downtown areas that were now being bypassed by travelers.
There are numerous other examples.
The previously unknown consequences of biometric technology
One way in which technology has outpaced society is by developing tools that unintentionally threaten individual privacy. For Bredemarket clients and potential clients, one relevant example of this is the ability to apply biometric technologies to previously recorded photographic, video, and audio content. (I won’t deal with real-time here.)
Hey, remember that time in 1969 that you were walking around in a Ku Klux Klan costume and one of your fraternity buddies took a picture of you? Back then you and your buddy had no idea that in future decades someone could capture a digital copy of that picture and share it with millions of people, and that one of those millions of people could use facial recognition software to compare the face in the picture with a known image of your face, and positively determine that you were the person parading around like a Grand Wizard.
Of course, there are also positive applications of biometric technology on older material. Perhaps biometrics could be used to identify an adoptee’s natural birth mother from an old picture. Or biometrics could be used to identify that a missing person was present in a train station on September 8, 2021 in the company of another (identified) person.
But regardless of the positive or negative use case, biometric identification provides us with unequalled capability to identify people who were previously recorded. Something that couldn’t have been imagined years and years ago.
Well, it couldn’t have been imagined by most of us, anyway.
Enter Carl Sagan (courtesy Elena’s Short Wisdom)
As a WordPress user (this blog and the Bredemarket website are hosted on WordPress), I subscribe to a number of other WordPress blogs. One of these blogs is Short Wisdom, authored by Elena. Her purpose is to collect short quotes from others that succinctly encapsulate essential truths.
Normally these quotes are of the inspirational variety, but Elena posted something today that applies to those of us concerned with technology and privacy.
“Might it be possible at some future time, when neurophysiology has advanced substantially, to reconstruct the memories or insight of someone long dead?…It would be the ultimate breach of privacy.”
The quote is taken from Broca’s Brain: Reflections on the Romance of Science, originally published in 1979.
The future is not now…yet
Obviously such technology did not exist in 1979, and doesn’t exist in 2021 either.
Even biometric identification of living people via “brain wave” biometrics hasn’t been verified to any substantive degree; last month’s study only included 15 people. Big whoop.
But it’s certainly possible that this ability to reconstruct the memories and insights of the deceased could exist at some future date. Some preliminary work has already been done in this area.
If this technology ever becomes viable and the memories of the dead can be accessed, then the privacy advocates will REALLY howl.
And the already-deceased privacy advocates will be able to contribute to the conversation. Perhaps Carl Sagan himself will posthumously share some thoughts on the ongoing NIST FRVT results.
He can even use technology to sing about the results.
Delta Air Lines, the Transportation Security Administration (TSA), and a travel tech company called Pangiam have partnered up to bring facial recognition technology to Hartsfield–Jackson Atlanta International Airport (ATL).
As of next month, Delta SkyMiles members who use the Fly Delta app and have a TSA PreCheck membership will be able to simply look at a camera to present their “digital ID” and navigate the airport with greater ease. In this program, a customer’s identity is made up of a SkyMiles member number, passport number and Known Traveler Number.
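If it helps to picture it, that “digital ID” is essentially a small bundle of identifiers. Here is a hypothetical sketch of that data shape; the class name, field names, and sample values are mine, not Delta’s or TSA’s.

```python
# Hypothetical sketch of the "digital ID" data shape described above:
# three identifiers bundled together. Names and values are illustrative only.
from dataclasses import dataclass

@dataclass
class DigitalTravelerId:
    skymiles_number: str        # Delta SkyMiles member number
    passport_number: str        # passport number
    known_traveler_number: str  # TSA PreCheck Known Traveler Number

traveler = DigitalTravelerId("1234567890", "X12345678", "KT0000000")
```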
Of course, TSA PreCheck enrollment is provided by three other companies…but I digress. (I’ll digress again in a minute.)
Forbes goes on to say that this navigation will be available at pre-airport check in (on the Fly Delta app), bag drop (via TSA PreCheck), security (again via TSA PreCheck), and the gate.
Incidentally, this illustrates how security systems from different providers build upon each other. Since I was an IDEMIA employee at the time that IDEMIA was the only company that performed TSA PreCheck enrollment, I was well aware (in my super-secret competitive intelligence role) how CLEAR touted the complementary features of TSA PreCheck in its own marketing.
Back when automated fingerprint identification systems (AFIS) were originally expanded to become automated fingerprint/palmprint identification systems (AFPIS), a common rationale for the expansion was the large number of unsolved latent palmprints at crime scenes.
The statistic that everyone cited was that 30% of all latent friction ridge prints at crime scenes were from palms. Here’s a citation from the National Institute of Justice.
Anecdotally, it is estimated that approximately 30% of comparison cases involve palm impressions.
Note that the NIJ took care to include the word “anecdotally.” Others don’t.
It is estimated that 30 percent of latent prints found at crime scenes come from palms.
But who provided the initial “30% of latents are palms” estimate long ago? And what was the basis for this estimate? This critical information seems to have been lost.
Now I don’t have a problem with imprecise estimates, provided that the assumptions behind the estimate are well documented. I’ve done this many times myself.
But sadly, any assumptions for the “30% of latents are palms” figure have disappeared over the years, and only the percentage remains.
Is there any contemporary evidence that can be used to check the 30% estimate?
Yes.
The blind proficiency study wasn’t blind regarding the test data
A Center for Statistics and Applications in Forensic Science study (downloadable here) was published earlier this year. Although the study was devoted to another purpose, it touched upon this particular issue.
The “Latent print quality in blind proficiency testing: Using quality metrics to examine laboratory performance” study obviously needed some data, so it analyzed a set of latent prints examined by the Houston Forensic Science Center (HFSC) over a multi-year period.
In the winter of 2017, HFSC implemented a blind quality control program in latent print comparison. Since its implementation, the Quality Division within the laboratory has developed and inserted 290 blind cases/requests for analysis into the latent print comparison unit as of August 4, 2020….
Of the 290 blind cases inserted into casework, we were able to obtain print images for 144 cases, with report dates spanning approximately two years (i.e., January 9, 2018 to January 8, 2020)….
In total, examiners reviewed 376 latent prints submitted as part of the 144 blind cases/requests for analysis.
So, out of those 376 latent prints, how many were from palms?
The majority of latent prints were fingerprints (94.3%; n = 350) or palm prints (4.9%; n = 18). Very few were joint impressions or unspecified impressions (0.8%; n = 3)….
The remaining 5 of 376 prints were not attributed to an anatomical source because examiners determined them to be of no comparative value and did not consider them to be latent prints.
For those who are math-challenged, 5 percent is not equal to 30 percent. In fact, 5 percent is much less than 30 percent. (And 4.9% is even less, if you want to get precise about it.)
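For those who want to check the arithmetic themselves, the quoted counts reproduce the study’s percentages. (Using the 371 attributed prints as the denominator is my inference from the fact that 5 of the 376 prints were excluded; it matches the reported figures.)

```python
# Quick arithmetic check using the counts quoted above (350 fingerprints,
# 18 palm prints, 3 joint/unspecified; 5 of the 376 prints had no value).
# The 371-print denominator is my inference, so treat it as such.
counts = {"fingerprints": 350, "palm prints": 18, "joint/unspecified": 3}
attributed = sum(counts.values())   # 371

for kind, n in counts.items():
    print(f"{kind}: {100 * n / attributed:.1f}%")

# fingerprints: 94.3%
# palm prints: 4.9%   <- nowhere near the oft-cited 30%
# joint/unspecified: 0.8%
```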
Now I’ll grant that this is just one study, and other latent examinations may have wildly different percentages. At a minimum, though, this data should cause us to question the universally-accepted “30%” figure.
As any scientific institute that desires funding would proclaim, further research is needed.
And I’ll grant that. Well, I won’t grant it, but some government or private funding entity might.
The naming, or renaming, of a company is an important step in a company’s journey. While one should rightly concentrate on mission statements and processes and the like, the first impression many people will have of a company is its name.
So it’s important to get it right.
How my company was named
Sometimes the naming of a company is a relatively simple affair. For example, the company name “Bredemarket” is a combination of the beginning of my last name, Bredehoft, with the word market (derived from marketing).
Certainly the name is open to confusion (not that I was planning on doing business in East Sussex), but the name does communicate what the company is about.
I guess I could have called the company Bredewrite, but Bredemarket has grown on me.
Sometimes the naming of a company gets a little more involved.
How my former employer was renamed
When Oberthur was merged with the Morpho portion of Safran, the combined company needed a name (Oberthur was ruled out). So the company adopted the name “OT-Morpho,” indicating the heritage of the two parts of the company.
However, OT-Morpho was never intended to be the permanent name of the company. Everyone knew that the company would be renamed at some point in the future.
A few months later, as part of a razzle dazzle event, the new name of the company was revealed to an in-person audience in France and to people watching remotely all over the world (including myself).
If you don’t want to watch the entire video, the new name was…IDEMIA.
Some thought went into this name, as the accompanying press release noted.
In a world directly impacted by the exponential growth of connected objects, the increasing globalisation of exchanges, the digitalisation of the economy and the consumerisation of technology, IDEMIA stands as the new leader in trusted identities placing “Augmented Identity” at the heart of its actions. As an expression of this innovative strategy, the group has been renamed IDEMIA in reference to powerful terms: Identity, Idea and the Latin word idem, reflecting its mission to guarantee everyone a safer world thanks to its expertise in trusted identities.
However, some people didn’t like the new name at the time, and there was a big ruckus about how to pronounce the name. But at least some thought went into the name, and potential customers at least made the connection between IDEMIA and identity, if not to the other influences.
Some of IDEMIA’s corporate predecessors also had some stories behind their names.
My former employer MorphoTrak was the result of a merger between Tacoma-based Sagem Morpho and the Anaheim-based Biometric Business Unit of Motorola that was previously known as Printrak. In the same way that OT-Morpho represented the union of Oberthur and Morpho, MorphoTrak represented the union of Sagem Morpho and Printrak.
The Morpho in Sagem Morpho was an element of the name of the original French company that was founded in the 1980s, Morpho Systèmes. I don’t know exactly why the company was named Morpho, but the term can mean form or structure, or it can refer to a particular group of butterflies with distinct wing patterns.
And Printrak, a product name before it was a company name, was derived from the word fingerprint. (And presumably from the system that tracked the fingerprints.)
So even if you don’t like these names, at least some thought went behind them.
And then there are other cases.
How another company was renamed
Anyvision was a company that had been around for a while, specializing in using artificial intelligence and vision to provide security solutions. But recently the company decided to expand its focus.
[T]he company’s evolution and vision for the future…is shaped, in part, by a new collaboration with Carnegie Mellon University’s (CMU) CyLab Biometric Research Center. The CMU partnership will focus on early-stage research in object, body, and behavior recognition….
“Historically, the company has focused on security-related use cases for our watchlist alerting and touchless access control solutions….[W]e’re looking beyond the lens of security to include ways our solutions can positively impact an organization’s safety, productivity and customer experience.”
So with this expanded focus, Anyvision decided that its corporate name was too limiting, and the company announced that it was renaming itself.
Now some of you may have noticed that the name “Oosto” does not convey the idea of object, body, or behavior recognition in English, Latin, Hebrew (Anyvision started in Israel), or any other known language. As far as I know. (And yes, I saw what The Names Dictionary says.)