A second “biometrics is evil” post (Amazon One)

This is a follow-up to something I wrote a couple of weeks ago. I concluded that earlier post by noting that when you say that something needs to be replaced because it is bad, you need to evaluate the replacement to see if it is any better…or worse.

First, the recap

Before moving forward, let me briefly recap my points from the earlier post. If you like, you can read the entire post here.

  • Amazon is incentivizing customers ($10) to sign up for its Amazon One palm print program.
  • Amazon is not the first company to use biometrics to speed retail purchases. Pay By Touch and the University of Maryland Dining Hall have already done this, as has every single store that lets you use Apple Pay, Google Pay, or Samsung Pay.
  • Amazon One is not only being connected in the public eye to unrelated services such as Amazon Rekognition, and to unrelated studies such as Gender Shades (which dealt with classification, not recognition), but has also been accused of “asking people to sell their bodies.” Yet companies that offer similar services are not being demonized in the same way.
  • If you don’t use Amazon One to pay for your purchases, that doesn’t necessarily mean that you are protected from surveillance. I’ll dive into that in this post.

Now that we’re caught up, let’s look at the latest player to enter the Amazon One controversy.

Yes, U.S. Senators can be bipartisan

If you listen to the “opinion” news services, you get the feeling that the United States Senate has devolved into two warring factions that can’t get anything done. But Senators have always worked together (see Edward Kennedy and Dan Quayle), and they continue to work together today.

Specifically, three Senators are working together to ask Amazon a few questions: Bill Cassidy, M.D. (R-LA), Amy Klobuchar (D-MN), and Jon Ossoff (D-GA).

And naturally they issued a press release about it.

Now arguments can be made about whether Congressional press releases and hearings merely constitute grandstanding, or whether they are serious attempts to better the nation. Of course, anything that I oppose is obviously grandstanding, and anything I support is obviously a serious effort.

But for the moment let’s assume that the Senators have serious concerns about the privacy of American consumers, and that the nation demands answers to these questions from Amazon.

Here are the Senators’ questions, from the press release:

  1. Does Amazon have plans to expand Amazon One to additional Whole Foods, Amazon Go, and other Amazon store locations, and if so, on what timetable? 
  2. How many third-party customers has Amazon sold (or licensed) Amazon One to? What privacy protections are in place for those third parties and their customers?
  3. How many users have signed up for Amazon One? 
  4. Please describe all the ways you use data collected through Amazon One, including from third-party customers. Do you plan to use data collected through Amazon One devices to personalize advertisements, offers, or product recommendations to users? 
  5. Is Amazon One user data, including the Amazon One ID, ever paired with biometric data from facial recognition systems? 
  6. What information do you provide to consumers about how their data is being used? How will you ensure users understand and consent to Amazon One’s data collection, storage, and use practices when they link their Amazon One and Amazon account information?
  7. What actions have you taken to ensure the security of user data collected through Amazon One?

So when will we investigate other privacy-threatening technologies?

In a sense, the work of these three Senators should be commended, because if Amazon One is not implemented properly, serious privacy breaches could occur that would adversely impact American citizens. And this is the reason why many states and municipalities have moved to restrict the use of biometrics by private businesses.

And we know that Amazon is evil, because Slate said so back in January 2020.

The online bookseller has evolved into a giant of retail, resale, meal delivery, video streaming, cloud computing, fancy produce, original entertainment, cheap human labor, smart home tech, surveillance tech, and surveillance tech for smart homes….The company’s “last mile” shipping operation has led to burnout, injuries, and deaths, all connected to a warehouse operation that, while paying a decent minimum wage, is so efficient in part because it treats its human workers like robots who sometimes get bathroom breaks.

But why stop with Amazon? After all, Slate’s list included 29 other companies (while Amazon tops the list, other “top”-ranked companies include Facebook, Alphabet, Palantir Technologies, and Uber), to say nothing of entire industries that are capable of massive privacy violations.

Privacy breaches are not just tied to biometric systems, but can be tied to any system that stores private data. Restricting or banning biometric systems won’t solve anything, since all of these abuses could potentially occur on other systems.

  • When will the Senators ask these same questions of Apple, Google (part of the aforementioned Alphabet), and Samsung to find out when these companies will expand their “Pay” services? They won’t even have to ask all seven questions, because we already know the answer to question 5.
  • Oh, and while we’re at it, what about Mastercard, Visa, American Express, Discover, and similar credit card services that are often tied to information from our bank accounts? How do these firms personalize their offerings? Who can buy all that data?
  • And while we’re looking at credit cards, what about the debit cards issued by the banks, which are even more vulnerable to abuse? Let’s have the banks publicly reveal all the ways in which they protect user data.
  • You know, you have to watch out for those money orders also. How often do money order issuers ask consumers to show their government ID? What happens to that data?
  • Oh, and what about those gift cards that stores issue? What happens to the location and purchase data that is collected for those gift cards?
  • When people use cash to pay for goods, what is the resolution of the surveillance cameras that are trained on the cash registers? Can those surveillance cameras read the serial numbers on the bills that are exchanged? What assurances can the stores give that they are not tracking those serial numbers as they flow through the economy?

If you think that it’s silly to shut down every single payment system that could result in a privacy violation…you’re right.

Obviously if Amazon is breaking federal law, it should be prosecuted accordingly.

And if Amazon is breaking state law (such as Illinois’ BIPA), then…well, that’s not the Senators’ business; that’s the business of class action lawyers.

But now the ball is in Amazon’s court, and Amazon will provide thousands of pages of documents, a few short answers, a response indicating that the Senators are asking for confidential information on future product plans, or (unlikely with Amazon, but possible with other companies) a reply stating that the Senators can go pound sand.

Either way, the “Amazon is evil” campaign will continue.

Today’s “biometrics is evil” post (Amazon One)

I can’t recall who recorded it, but there’s a radio commercial heard in Southern California (and probably nationwide) that intentionally ridicules people who willingly give up their own personally identifiable information (PII) for short-term gain. In the commercial, both the husband and the wife give away all sorts of PII, including, I believe, their birth certificates.

While voluntary surrender of PII happens all the time (when was the last time you put your business card in a drawing bowl at a restaurant?), people REALLY freak out when the information that is provided is biometric in nature. But are the non-biometric alternatives any better?

TechCrunch, Amazon One, and Ten Dollars

TechCrunch recently posted “Amazon will pay you $10 in credit for your palm print biometrics.”

If you think that the article details an insanely great way to make some easy money from Amazon, then you haven’t been paying attention to the media these last few years.

The article begins with a question:

How much is your palm print worth?

The article then describes how Amazon’s brick-and-mortar stores in several states have incorporated a new palm print scanner technology called “Amazon One.” This technology reads both friction ridge and vein information from a shopper’s palm. The palm data is then associated with a pre-filed credit card, allowing the shopper to simply wave a palm to buy the items in the shopping cart.
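
To make that flow concrete, here’s a minimal sketch of what a pay-by-palm association might look like. To be clear, this is my own illustration and not Amazon’s actual design; every name in it (Enrollment, enroll, pay, the similarity threshold) is hypothetical, and a real system would store encrypted templates server-side rather than raw feature vectors.

```python
from dataclasses import dataclass

# Hypothetical pay-by-palm sketch; NOT Amazon's actual implementation.

@dataclass
class Enrollment:
    palm_template: list[float]  # processed friction ridge + vein features
    card_token: str             # tokenized reference to the pre-filed card

enrollments: list[Enrollment] = []

def similarity(a: list[float], b: list[float]) -> float:
    """Toy similarity measure between two palm templates."""
    return sum(x * y for x, y in zip(a, b))

def enroll(palm_template: list[float], card_token: str) -> None:
    enrollments.append(Enrollment(palm_template, card_token))

def pay(palm_scan: list[float], threshold: float = 0.9) -> str | None:
    """Return the card token of the best-matching enrollment, if any."""
    best = max(enrollments,
               key=lambda e: similarity(palm_scan, e.palm_template),
               default=None)
    if best and similarity(palm_scan, best.palm_template) >= threshold:
        return best.card_token  # charge this card
    return None                 # no confident match; fall back to other payment

# Example: enroll once, then wave the palm to pay.
enroll([0.9, 0.1, 0.4], card_token="tok_visa_1234")
print(pay([0.9, 0.1, 0.4]))  # tok_visa_1234 (similarity 0.98 >= 0.9)
```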

There is nothing new under the sun

Amazon One is the latest take on processes that have been implemented several times before. I’ll cite three examples.

Pay By Touch. The first one that comes to my mind is Pay By Touch. While the management of the company was extremely sketchy, the technology (provided by Cogent, now part of Thales) was not. In many ways the business idea was ahead of its time, and it had to deal with challenging environmental conditions: the fingerprint readers used for purchases were positioned near the entrances/exits to grocery stores, which could get really cold in the winter. Couple this with the elderly population that used the devices, and it was sometimes difficult to read the fingers themselves. Yet, this relatively ancient implementation is somewhat similar to what Amazon is doing today.

University of Maryland Dining Hall. The second example occurred to me because it came from my former employer (MorphoTrak, then part of Safran and now part of IDEMIA), and was featured at a company user conference for which I coordinated speakers. There’s a video of this solution, but sadly it is not public. I did find an article describing the solution:

With the new system students will no longer need a UMD ID card to access their own meals…

Instead of pulling out a card, the students just wave their hand through a MorphoWave device. And this allows the students to pay for their meals QUICKLY. Good thing when you’re hungry.

This Pay and That Pay. But the most common example that everyone uses is Apple Pay, Google Pay, Samsung Pay, or whatever “pay” system is supported on your smartphone. Again, you don’t have to pull out a credit card or ID card. You just have to look at your phone or swipe your finger on the phone, and payment happens.

Amazon One is the downfall of civilization

I don’t know if TechCrunch editorialized against Pay By Touch or [insert phone vendor here] Pay, and it probably never heard of the MorphoWave implementation at the University of Maryland. But Amazon clearly makes TechCrunch queasy.

While the idea of contactlessly scanning your palm print to pay for goods during a pandemic might seem like a novel idea, it’s one to be met with caution and skepticism given Amazon’s past efforts in developing biometric technology. Amazon’s controversial facial recognition technology, which it historically sold to police and law enforcement, was the subject of lawsuits that allege the company violated state laws that bar the use of personal biometric data without permission.

Oh well, at least TechCrunch didn’t say that Amazon was racist. (If you haven’t already read it, please read the Security Industry Association’s “What Science Really Says About Facial Recognition Accuracy and Bias Concerns.” Unless you don’t like science.)

OK, back to Amazon and Amazon One. TechCrunch also quotes Albert Fox Cahn of the Surveillance Technology Oversight Project.

People Leaving the Cities, photo art by Zbigniew Libera, imagines a dystopian future in which people have to leave dying metropolises. By Zbigniew Libera – https://artmuseum.pl/pl/kolekcja/praca/libera-zbigniew-wyjscie-ludzi-z-miast, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=66055122.

“The dystopian future of science fiction is now. It’s horrifying that Amazon is asking people to sell their bodies, but it’s even worse that people are doing it for such a low price.”

“Sell their bodies.” Isn’t it even MORE dystopian when people “give their bodies away for free” when they sign up for Apple Pay, Google Pay, or Samsung Pay? While the Surveillance Technology Oversight Project (acronym STOP) expresses concern about digital wallets, there is a significant lack of horror in its description of them.

Digital wallets and contactless payment systems like smart chips have been around for years. The introduction of Apple Pay, Amazon Pay, and Google Pay have all contributed to the e-commerce movement, as have fast payment tools like Venmo and online budgeting applications. In response to COVID-19, the public is increasingly looking for ways to reduce or eliminate physical contact. With so many options already available, contactless payments will inevitably gain momentum….

Without strong federal laws regulating the use of our data, we’re left to rely on private companies that have consistently failed to protect our information. To prevent long-term surveillance, we need to limit the data collected and shared with the government to only what is needed. Any sort of monitoring must be secure, transparent, proportionate, temporary, and must allow for a consumer to find out about or be alerted to implications for their data. If we address these challenges now, at a time when we will be generating more and more electronic payment records, we can ensure our privacy is safeguarded.

So STOP isn’t calling for the complete elimination of Amazon Pay. But apparently it wants to eliminate Amazon One.

Is a world without Amazon One a world with less surveillance?

Whenever you propose to eliminate something, you need to look at the replacement and see if it is any better.

In 1998, Fox fired Bill Russell as the manager of the Los Angeles Dodgers. He had a win-loss percentage of .538. His replacement, Glenn Hoffman, lasted less than a season and had a percentage of .534. Hoffman’s replacement, true baseball man Davey Johnson, compiled a percentage of .503 over the next two seasons before he was fired. Should have stuck with Russell.

Anyone who decides (despite the science) that facial recognition is racist is going to have to rely on other methods to identify criminals, such as witness identification. Witness identification has documented inaccuracies.

And if you think that elimination of Amazon One from Amazon’s brick-and-mortar stores will lead to a privacy nirvana, think again. If you don’t use your palm to pay for things, you’re going to have to use a credit card, and that data will certainly be scanned by the FBI and the CIA and the BBC, B. B. King, and Doris Day. (And Matt Busby, of course.) And even if you use cash, the only way that you’ll preserve any semblance of your privacy is to pay anonymously and NOT tie the transaction to your Amazon account.

And if you’re going to do that, you might as well skip Whole Foods and go straight to Dollar General. Or maybe not, since Dollar General has its own app. And no one calls Dollar General dystopian. Wait, they do: “They tend to cluster, like scavengers feasting on the carcasses of the dead.”

I seem to have strayed from the original point of this post.

But let me sum up. It appears that biometrics is evil, Amazon is evil, and Amazon biometrics are Double Secret Evil.

Franchisees and BIPA

In other contexts, I have written about the relationship between franchisors and franchisees, which in some respects is similar to the way gig drivers work “with” (not “for”) Uber, Lyft, and the like. In many cases, the products that are advertised by a particular company are not made by that company, but by a franchisee of that company who is entirely separate from the parent company, but who is responsible for doing things the way the parent company wants them done. If you’re a franchisee, you CAN’T…um…”have it your way.”

This Whopper probably wasn’t made by Burger King itself, but by a franchisee of Burger King. By Tokfo – Own work, CC0, https://commons.wikimedia.org/w/index.php?curid=37367904

Speaking of which, here is an example of an article that confuses franchisor and franchisee. The Buzzfeed article, in typical Buzzfeed style, is entitled “This Is What Happened After A Bunch Of Employees At A Burger King Quit.” (Because of malfunctioning air conditioning, a number of employees put in their two weeks’ notice, leaving a “We All Quit” sign as they left.) You have to read ANOTHER article (from NBC) to find this little statement:

“Our franchisee is looking into this situation to ensure this doesn’t happen in the future,” a Burger King spokesperson said.

Yes, the employees’…um…beef wasn’t with Burger King itself (or its Brazilian/Canadian/American parent Restaurant Brands International), but with whoever manages the local franchise.

Well, now this world of franchisors and franchisees has entered the biometric world, according to a post on Greensfelder’s self-described “franchising & distribution law blog.”

Greensfelder’s post starts by explaining to its readers what BIPA is (something you already know if you read MY blog) and how franchisees are affected.

Plaintiffs are suing both franchisors and franchisees. Franchisors are being sued for collecting the information themselves for their own employees and also for the actions of their franchisees on theories of joint and several liability, vicarious liability, agency and alter ego. A recently filed case alleges that a franchisor mandates and controls virtually every aspect of its franchise locations, including the use of certain equipment that collects biometric information to track employees’ time and attendance and to monitor cash register systems for fraud.

This benefits the lawyers, who get to collect double the damages by claiming that both the franchisor and the franchisee are separately liable.

Greensfelder’s takeaway for franchisors:

Franchisors should be careful about mandating franchisee use of biometric procedures and devices without first checking applicable law and also making sure that their own policies and procedures are in compliance with those laws.

I’m not sure who is providing takeaways for franchisees.

Other than the usual advice to read the franchise agreement very, very carefully.

Is your home your castle when you use consumer doorbell facial recognition?

(Part of the biometric product marketing expert series)

For purposes of this post, I will define three entities that can employ facial recognition:

  • Public organizations such as governments.
  • Private organizations such as businesses.
  • Individuals.

Some people are very concerned about facial recognition use by the first two categories of entities.

But what about the third category, individuals?

Can individuals assert a Constitutional right to use facial recognition in their own homes? And what if said individuals live in Peoria?

Concerns about ANY use of facial recognition

Let’s start with an ACLU article from 2018 regarding “Amazon’s Disturbing Plan to Add Face Surveillance to Your Front Door.”

Let me go out on a limb and guess that the ACLU opposes the practice.

The article was prompted by a 2018 Amazon patent application that involved both its Rekognition facial recognition service and its Ring cameras.

One of the figures in Amazon’s patent application, courtesy the ACLU. https://www.aclunc.org/docs/Amazon_Patent.pdf

While the main thrust of the ACLU article concerns acquisition of front door face surveillance (and other biometric) information by the government, it also briefly addresses the entity that is initially performing the face surveillance: namely, the individual.

Likewise, homeowners can also add photos of “suspicious” people into the system and then the doorbell’s facial recognition program will scan anyone passing their home.

I should note in passing that ACLU author Jacob Snow is describing a “deny list,” which flags people who should NOT be granted access such as that pesky solar power salesperson. In most cases, consumer products tout the use of an “allow list,” which flags people who SHOULD be granted access such as family members.

Regardless of whether you’re discussing a deny list or an allow list, the thrust of the ACLU article isn’t that governments shouldn’t use facial recognition. The thrust of the article is that facial recognition shouldn’t be used at all.

The ACLU and other civil rights groups have repeatedly warned that face surveillance poses an unprecedented threat to civil liberties and civil rights that must be stopped before it becomes widespread.

Again, not face surveillance by governments, but face surveillance period. People should not have the, um, “civil liberties” to use the technology.

But how does the tech world approach this?

The reason that I cited that particular ACLU article was that it was subsequently referenced in a CNET article from May 2021. This article bore the title “The best facial recognition security cameras of 2021.”

Let me go out on a limb and guess that CNET supports the practice.

The last part of author Megan Wollerton’s article delves into some of the issues regarding facial recognition use, including those raised by the ACLU. But the bulk of the article talks about really cool tech.

As I stated above, Wollerton notes that the intended use case for home facial recognition security systems involves the creation of an “allow list”:

Some home security cameras have facial recognition, an advanced option that lets you make a database of people who visit your house regularly. Then, when the camera sees a face, it determines whether or not it belongs to someone in your list of known faces. If the recognition system does not know who is at the door, it can alert you to an unknown person on your property.

Obviously you could repurpose such a system for anything you want, provided that you can obtain a clear picture of the face of the pesky solar power salesperson.

Before posting her reviews of various security systems, and after a brief mention (expanded later in the article) about possible governmental misuse of facial recognition, Wollerton redirects the conversation.

But let’s step back a bit to the consumer realm. Your home is your castle, and the option of having surveillance cameras with facial recognition software is still compelling for those who want to be on the cutting edge of smart home innovation.

“Your home is your castle” may be a distinctly American concept, but it certainly applies here as organizations such as, um, the ACLU defend a person’s right against unreasonable actions by governments.

Obviously, there are limits to ANY Constitutional right. I cannot exercise my Fourth Amendment right to be secure in my house, couple that with my First Amendment right to freely exercise my religion, and conclude that I have the unrestricted right to perform ritual child sacrifices in my home. (Although I guess if I have a home theater and only my family members are present, I can probably yell “Fire!” all I want.)

So perhaps I could mount an argument that I can use facial recognition at my house any time I want, if the government agrees that this right is “reasonable.”

But it turns out that other people are involved.

You knew I was going to mention Illinois in this post

OK, it’s BIPA time.

As I previously explained in a January 2021 post about the Kami Doorbell Camera, “BIPA” is Illinois’ Biometric Information Privacy Act. This act imposes constraints on a private entity’s use of biometrics. (Governments are excluded under BIPA.) And here’s how BIPA defines the term “private entity”:

“Private entity” means any individual, partnership, corporation, limited liability company, association, or other group, however organized. A private entity does not include a State or local government agency. A private entity does not include any court of Illinois, a clerk of the court, or a judge or justice thereof.

Did you see the term “individual” in that definition?

So BIPA not only affects a company’s use of biometrics, whether by Google or by a theme park or by a fitness center. It also affects the use of biometrics by an individual such as Harry or Harriet Homeowner.

As I previously noted, Google does not sell its Nest Cam “familiar face alert” feature in Illinois. But I guess it’s possible (via location spoofing if necessary) for someone to buy Nest Cam familiar face alerts in Indiana, and then sneak the feature across the border and implement it in the Land of Lincoln. But while this may (or may not) get Google off the hook, the individual is in a heap of trouble (should a trial lawyer decide to sue the individual).

Let’s face it. The average user of Nest Cam’s familiar face alerts, or the Kami Doorbell Camera, or any other home security camera with facial recognition (note that Amazon currently is not using facial recognition in its consumer products), is probably NOT complying with BIPA.

A private entity in possession of biometric identifiers or biometric information must develop a written policy, made available to the public, establishing a retention schedule and guidelines for permanently destroying biometric identifiers and biometric information when the initial purpose for collecting or obtaining such identifiers or information has been satisfied or within 3 years of the individual’s last interaction with the private entity, whichever occurs first.

I mean it’s hard enough for Harry and Harriet to get their teenage son to acknowledge receipt of the Homeowner family’s written policy for the use of the family doorbell camera. And you can forget about getting the pesky solar power salesperson to acknowledge receipt.
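
Joking aside, the statute’s retention rule itself is mechanical enough to express in a few lines. Here’s a minimal sketch of the “whichever occurs first” logic, with field names I made up; needless to say, running this function does not make anyone BIPA-compliant.

```python
from datetime import date, timedelta

def must_destroy(purpose_satisfied: bool,
                 last_interaction: date,
                 today: date | None = None) -> bool:
    """BIPA's retention rule: destroy biometric data when the initial purpose
    has been satisfied OR 3 years after the individual's last interaction,
    whichever occurs first. (Sketch only; field names are hypothetical.)"""
    today = today or date.today()
    three_years_elapsed = today >= last_interaction + timedelta(days=3 * 365)
    return purpose_satisfied or three_years_elapsed

# A visitor last seen in early 2018 must be purged by mid-2021, even if the
# "purpose" (recognizing a frequent guest) arguably still exists.
print(must_destroy(False, date(2018, 1, 1), today=date(2021, 7, 1)))  # True
```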

So from a legal perspective, it appears that any individual homeowner who installs a facial recognition security system can be hauled into civil court under BIPA.

But will these court cases be filed from a practical perspective?

Probably not.

When a social media company violates BIPA, the violation conceivably affects millions of individuals and can result in millions or billions of dollars in civil damages.

When the pesky solar power salesperson discovers that Harry and Harriet Homeowner have enrolled his face in their doorbell camera, the damages would be limited to $1,000 or $5,000 plus relevant legal fees.

It’s not worth pursuing, any more than it’s worth pursuing the Illinois driver who is speeding down the expressway at 66 miles per hour.

The tone of voice to use when talking about forensic mistakes

Remember my post that discussed the tone of voice that a company chooses to use when talking about the benefits of the company and its offerings?

Or perhaps you saw the repurposed version of the post, a page section entitled “Don’t use that tone of voice with me!”

The tone of voice that a firm uses does not only extend to benefit statements, but to all communications from a company. Sometimes the tone of voice attracts potential clients. Sometimes it repels them.

For example, a book was published a couple of months ago. Check the tone of voice in these excerpts from the book advertisement.

“That’s not my fingerprint, your honor,” said the defendant, after FBI experts reported a “100-percent identification.” They were wrong. It is shocking how often they are. Autopsy of a Crime Lab is the first book to catalog the sources of error and the faulty science behind a range of well-known forensic evidence, from fingerprints and firearms to forensic algorithms. In this devastating forensic takedown, noted legal expert Brandon L. Garrett poses the questions that should be asked in courtrooms every day: Where are the studies that validate the basic premises of widely accepted techniques such as fingerprinting? How can experts testify with 100 percent certainty about a fingerprint, when there is no such thing as a 100 percent match? Where is the quality control in the laboratories and at the crime scenes? Should we so readily adopt powerful new technologies like facial recognition software and rapid DNA machines? And why have judges been so reluctant to consider the weaknesses of so many long-accepted methods?

Note that author Brandon Garrett is NOT making this stuff up. People in the identity industry are well aware of the Brandon Mayfield case and others that started a series of reforms beginning in 2009, including changes in courtroom testimony and increased testing of forensic techniques by the National Institute of Standards and Technology and others.

It’s obvious that I, with my biases resulting from over 25 years in the identity industry, am not going to enjoy phrases such as “devastating forensic takedown,” especially when I know that some sectors of the forensics profession have been working on correcting these mistakes for 12 years now, and have cooperated with the Innocence Project to rectify some of these mistakes.

So from my perspective, here are my two concerns about language that could be considered inflammatory:

  • Inflammatory language focusing on anecdotal incidents leads to improper conclusions. Yes, there are anecdotal instances in which fingerprint examiners made incorrect decisions. Yes, there are anecdotal instances in which police agencies did not use facial recognition computer results solely as investigative leads, resulting in false arrests. But anecdotal incidents are not in my view substantive enough to ban fingerprint recognition or facial recognition entirely, as some (not all) who read Garrett’s book are going to want to do (and have done, in certain jurisdictions).
  • Inflammatory language prompts inflammatory language from “the other side.” Some forensic practitioners and criminal justice stakeholders may not be pleased to learn that they’ve been targeted by a “devastating forensic takedown.” And sometimes the responses can get nasty: “enemies” of forensic techniques “love criminals.”

Of course, it may be nearly impossible to have a reasoned discussion of forensic and police techniques these days. And I’ll confess that it’s hard to sell books by taking a nuanced tone in the book blurb. But it would be nice if we could all just get along.

P.S. Garrett was interviewed on TV in connection to the Derek Chauvin trial, and did not (IMHO) come off as a wild-eyed “defund the police” hack. His major point was that Chauvin’s actions were not made in a split second, but in a course of several minutes.

Will the Kami Doorbell Camera sell in Illinois?

There was a recent press release that I missed until Biometric Update started talking about it two days later. The January 19 press release from Kami was entitled “Kami Releases Smart Video Doorbell With Facial Recognition Capabilities.” The subhead announced, “The device also offers user privacy controls.”

And while reading that Kami press release, I noticed a potential issue that wasn’t fully addressed in the press release, or (so far) in the media coverage of the press release. That issue relates to that four-letter word “BIPA.”

This post explains what BIPA is and why it’s important.

  • But it starts by looking at smart video doorbells.
  • Next, it looks at this particular press release about a smart video doorbell.
  • Then we’ll look at a competitor’s smart video doorbell, and a particular decision that the competitor made because of BIPA.
  • Only then will we dive into BIPA.
  • Finally, we’ll circle back to Kami, and how it may be affected by BIPA. (Caution: I’m not a lawyer.)

What is a smart video doorbell?

Many of us can figure out what a smart video doorbell would do, since Kami isn’t the first company to offer such a product. (I’ll talk about another company in a little bit.)

The basic concept is that the owner of the video doorbell (whom I’ll refer to as the “user,” to be consistent with Kami’s terminology) manages a small database of faces that could be recognized by the video doorbell. For example, if I owned such a device, I would definitely want to enroll my face and the face of my wife, and I would probably want to enroll the faces of other relatives and close friends. Doing this would create an allowlist of people who are known to the smart video doorbell system.

However, because technology itself is neutral, I need to point out two things about a standard smart video doorbell implementation (I’ll sketch the mechanics in code after the list):

  • Depending upon the design, you can enroll a person into the system without the person knowing it. If the user of the system controls the enrollment, then the user has complete control over the people that are enrolled into the system. All I need is a picture of the person, and I can use that picture to enroll the person into my smart video doorbell. I can grab a picture that I took from New Year’s Eve, or I could even grab a picture from the Internet. After all, if President Joe Biden walked up to my front door, I’d definitely want to know about it. Now there are technological solutions to this; for example, liveness detection could be used to ensure that the person who is enrolling in the system is a live person and not a picture. But I’m not aware of any system that requires liveness detection for this particular use case.
  • You can enroll a person into the system for ANY reason. Usually consumer smart video doorbells are presented as a way to let you know when friends and family come to the door. But the technology has no way of detecting whether you are actually enrolling a “friend.” Perhaps you want to know when your ex-girlfriend comes to the door. Or perhaps you have a really good picture of the guy who’s been breaking into homes in your neighborhood. Now enterprise and government systems account for this by supporting separate allowlists and blocklists, but frankly you can put anyone onto any list for any reason.
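
To underline how little the technology itself constrains these choices, here’s a minimal sketch of what doorbell enrollment amounts to. All of the names are hypothetical, and the hash-based “template” is a toy stand-in for a real face embedding; the point is that nothing in the code knows, or can know, why a face was enrolled or whether its owner consented.

```python
import hashlib

# Toy doorbell gallery sketch; names are hypothetical.
gallery: dict[str, dict] = {}  # label -> {"template": ..., "list": ...}

def extract_template(photo_bytes: bytes) -> list[float]:
    """Toy stand-in for a real face-embedding model."""
    return [b / 255 for b in hashlib.sha256(photo_bytes).digest()]

def enroll(label: str, photo_bytes: bytes, which_list: str = "allow") -> None:
    # The photo could be from New Year's Eve or grabbed off the Internet;
    # the system has no way to tell, and no liveness or consent check.
    gallery[label] = {"template": extract_template(photo_bytes),
                      "list": which_list}  # "allow" or "block"

def who_is_at_door(photo_bytes: bytes) -> str:
    probe = extract_template(photo_bytes)
    for label, entry in gallery.items():
        if probe == entry["template"]:  # real systems use fuzzy similarity
            return f"{label} ({entry['list']} list)"
    return "unknown person"

enroll("grandma", b"<photo from New Year's Eve>")
enroll("neighborhood burglar", b"<photo from the Internet>", which_list="block")
print(who_is_at_door(b"<photo from New Year's Eve>"))  # grandma (allow list)
```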

So with that introduction, let’s see what Kami is offering, and why it’s different.

The Kami Doorbell Camera

Let’s return to the Kami press release. It, as well as the description of the item in Kami’s online store, parallels a lot of the features that you can find in any smart video doorbell.

Know exactly who’s at your door. Save the faces of friends and family in your Kami or YI Home App, allowing you to get notified if the person outside your front door is a familiar face or a stranger.

And it has other features, such as an IP65 rating indicating that the camera will continue to work outdoors in challenging weather conditions.

However, Yamin Durrani, Kami’s CEO, emphasized a particular point in the press release:

“The Kami Doorbell Camera was inspired by a greater need for safety and peace of mind as people spend more time at home and consumers’ increasing desire to reside in smart homes,” said Yamin Durrani, CEO of Kami. “However, we noticed one gaping hole in the smart doorbell market — it was lacking an extremely advanced security solution that also puts the user in complete control of their privacy. In designing our video doorbell camera we considered all the ways people live in their homes to elegantly combine accelerated intelligence with a level of customization and privacy that is unmatched in today’s market. The result is a solution that provides comfort, safety and peace of mind.”

Privacy for the user(s) makes sense, because you don’t want someone hacking into the system and stealing the pictures and other stored information. As described, Kami lets the user(s) control their own data, and the system has presumably been designed from the ground up to support this.

But Kami isn’t the only product out there.

One of Kami’s competitors has an interesting footnote in its product description

There’s this company called Google. You may have heard of it. And Google offers a product called Nest Aware. This product is a subscription service that works with Nest cameras and provides various types of alerts for activities within the range of the cameras.

And Nest even has a feature that sounds, um, familiar to Kami users. Nest refers to the feature as “familiar face detection.”

Nest speakers and displays listen for unusual sounds. Nest cameras can spot a familiar face.4 And they all send intelligent alerts that matter.

So it sounds like Nest Aware has the same type of “allowlist” feature that allows the Nest Aware user to enroll friends and family (or whoever) into the system, so that they can be automatically recognized and the user can receive relevant alerts.

Hmm…did you note that there is a footnote next to the mention of “familiar face”? Let’s see what that footnote says.

4. Familiar face alerts not available on Nest Cams used in Illinois.

To the average consumer, that footnote probably looks a little odd. Why would this feature not be available in Illinois, but available in all the other states?

Or perhaps the average consumer may recall another Google app from three years ago, the Google Art & Culture app. That app became all the rage when it introduced a feature that let you compare your face to the faces on famous works of art. Well, it let you perform that comparison…unless you lived in Illinois or Texas.

So what’s the big deal about Illinois?

Those of us who are active in the facial recognition industry, or people who are active in the privacy industry, are well aware of the Illinois Biometric Information Privacy Act, or BIPA. This Act, which was passed in 2008, gives Illinois residents control over the use of their biometric data. And if a company violates that control, the resident is permitted to sue the offending company. And class action lawsuits are allowed, thus increasing the possible damages to the offending company.

And there are plenty of lawyers that are willing to help residents exercise their rights under BIPA.

One early example of a BIPA lawsuit was filed against L.A. Tan. This firm offered memberships, and rather than requiring the member to present a membership card, the member simply placed his or her fingerprint onto a scanner to verify membership. But under BIPA, that could be a problem:

The plaintiffs in the L.A. Tan case alleged that the company, which used customers’ fingerprint scans in lieu of key fobs for tanning membership ID purposes, violated the BIPA by failing to obtain the customers’ written consent to use the fingerprint data and by not disclosing to customers the company’s plans for storing the data or destroying it in the event a tanning customer terminated her salon membership or a franchise closed. The plaintiffs did not claim L.A. Tan illegally sold or lost customers’ fingerprint data, just that it did not handle the data as carefully as the BIPA requires.

L.A. Tan ended up settling the case for over a million dollars, but Illinois Policy wondered:

This outcome is reassuring for anyone concerned about the handling of private information like facial-recognition data and fingerprints, but it also could signal a flood of similar lawsuits to come.

And there certainly was a flood of lawsuits. I was working in strategic marketing at the time, and I would duly note the second lawsuit filed under BIPA, and then the third lawsuit, and the fourth…Eventually I stopped counting.

As of June 2019, 324 such lawsuits had been filed in total, including 161 in the first six months of 2019 alone. And some big names have been sued under BIPA.

Facebook settled for $650 million.

Google was sued in October 2019 over Google Photos, again in February 2020 over Google Photos, again in April 2020 over its G Suite for Education, again in July 2020 over its use of IBM’s Diversity in Faces algorithm, and probably several other times besides.

So you can understand why Google is a little reluctant to sell Nest Aware’s familiar face detection feature in Illinois.

So where does that leave Kami?

Here’s where the problem may lie. Based upon the other lawsuits, it appears that lawyers are alleging that before an Illinois resident’s biometric features are stored in a database, the person has to give consent for the biometric to be stored, and the person has to be informed of his or her rights under BIPA.

So such explicit permission has to be given for every biometric database physically within the state of Illinois?

Yes…and then some. Remember that Facebook and Google’s databases aren’t necessarily physically located within the state of Illinois, but those companies have been sued under BIPA. I’m not a lawyer, but conceivably an Illinois resident could sue a Swiss company, with its databases in Switzerland, for violating BIPA.

Now when someone sets up a Kami system, does the Kami user ensure that every Illinois resident has received the proper BIPA notices? And if the Kami user doesn’t do that, is Kami legally liable?

For all I know, the Kami enrollment feature may include explicit BIPA questions, such as “Is the person in this picture a resident of Illinois?” Then again, it may not.
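
If the enrollment flow did include such a gate, it would be conceptually simple. Here’s a minimal sketch with entirely hypothetical names; I have no idea whether Kami’s actual product does anything like this, and (as noted) I’m not a lawyer.

```python
# Hypothetical BIPA gate on doorbell enrollment; NOT Kami's actual flow.
gallery: dict[str, bytes] = {}

def enroll_with_bipa_gate(label: str, photo_bytes: bytes,
                          illinois_resident: bool,
                          written_consent_given: bool) -> bool:
    """Refuse to enroll an Illinois resident without written consent."""
    if illinois_resident and not written_consent_given:
        return False  # refuse, rather than risk a BIPA claim
    gallery[label] = photo_bytes
    return True

print(enroll_with_bipa_gate("solar salesperson", b"<photo>",
                            illinois_resident=True,
                            written_consent_given=False))  # False
```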

Again, I’m not a lawyer, but it’s interesting to note that Google, which does have access to a bunch of lawyers, decided to dodge the issue by not selling familiar face detection to Illinois residents.

Which doesn’t answer the question of an Iowa Nest Aware familiar face detection user who enrolls an Illinois resident…

Facial recognition and the U.S. Capitol attack

This post examines a number of issues regarding the use of facial recognition. Specifically, it looks at various ways to use facial recognition to identify people who participated in the U.S. Capitol attack.

By TapTheForwardAssist – Own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=98670006

Let’s start with the technological issues before we look at the legal ones. Specifically, we’ll look at three possible ways to construct databases (galleries) to use for facial recognition, and the benefits and drawbacks of each method.

What a facial recognition system does, and what it doesn’t do

The purpose of a one-to-many facial recognition system is to take a facial image (a “probe” image), process it, and compare it to a “gallery” of already-processed facial images. The system then calculates some sort of mathematical likelihood that the probe matches some of the images in the gallery.

That’s it. That’s all the system does, from a technological point of view.

Although outside of the scope of this particular post, I do want to say that a facial recognition system does NOT determine a match. Now the people USING the system could make the decision that one or more of the images in the gallery should be TREATED as a match, based upon mathematical considerations. However, when using a facial recognition system in the United States for criminal purposes, the general procedure is for a trained facial examiner to use his/her expertise to compare the probe image with selected gallery images. This trained examiner will then make a determination, regardless of what the technology says.
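
A short sketch may make this division of labor clearer. Assuming a toy similarity function and hypothetical names, a one-to-many search produces a ranked candidate list with scores, and nothing more; deciding that any candidate should be treated as the same person happens outside the algorithm.

```python
# One-to-many search sketch: it returns ranked scores, not a "match."
# The similarity measure and names are illustrative, not any vendor's API.

def similarity(probe: list[float], candidate: list[float]) -> float:
    """Toy cosine similarity between two processed face templates."""
    dot = sum(p * c for p, c in zip(probe, candidate))
    norm = (sum(p * p for p in probe) ** 0.5) * \
           (sum(c * c for c in candidate) ** 0.5)
    return dot / norm if norm else 0.0

def search(probe: list[float],
           gallery: dict[str, list[float]],
           top_n: int = 10) -> list[tuple[str, float]]:
    """Return the top-N gallery entries by score. A trained examiner, not
    this function, decides whether any candidate is actually the same person."""
    scored = [(rec_id, similarity(probe, tmpl))
              for rec_id, tmpl in gallery.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)[:top_n]

gallery = {"record-001": [1.0, 0.0], "record-002": [0.6, 0.8]}
print(search([0.9, 0.1], gallery))  # record-001 ranks first, with a score
```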

But forget about that for now. I want to concentrate on another issue—adding data to the gallery.

Options for creating a facial recognition “gallery”

As I mentioned earlier, the “gallery” is the database against which the submitted facial image (the “probe”) is compared. In a one-to-many comparison, the probe image is compared against all or some of the images in the gallery. (I’m skipping over the “all or some” issue for now.)

So where do you get facial images to put in the gallery?

For purposes of this post, I’m going to describe three sources for gallery images.

  • Government facial images of people who have been convicted of crimes.
  • Government facial images of people who have not necessarily been convicted of crimes, such as people who have been granted driver’s licenses or passports.
  • Publicly available facial images.

Before delving into these three sources of gallery images, I’m going to present a use case. A few of you may recognize it.

Let’s say that there is an important government building located somewhere, and that access to the building is restricted for security reasons. Now let’s say that some people breach that access and illegally enter the building. Things happen, and the people leave. (Again, why they left and weren’t detained immediately is outside the scope of this post.)

Now that a crime has been committed, the question arises—how do you use facial recognition to solve the crime?

A gallery of government criminal facial images

Let’s look at a case in which the images of people who trespassed at the U.S. Capitol…

Whoops, I gave it away! Yes, for those of you who didn’t already figure it out, I’m specifically talking about the people who entered the U.S. Capitol on Wednesday, January 6. (This will NOT be the only appearance of Captain Obvious in this post.)

Anyway, let’s see how the images of people who trespassed at the U.S. Capitol can be compared against a gallery of images of criminals.

From here on in, we need to not only look at technological issues, but also legal issues. Technology does not exist in a vacuum; it can (or at least should) only be used in accordance with the law.

So we have a legal question: can criminal facial images be lawfully used to identify people who have committed crimes?

In most cases, the answer is yes. The primary reason that criminal databases are maintained in the first place is to identify repeat offenders. If someone habitually trespasses into government buildings, the government would obviously like to know when the person trespasses into another government building.

But why did I say “in most cases”? Because there are cases in which a previously-created criminal record can no longer be used. (I’ll sketch the resulting gallery filter after the list.)

  • The record is sealed or expunged. This could happen, for example, if a person committed a crime as a juvenile. After some time, the record could be sealed (prohibiting most access) or expunged (removed entirely). If a record is sealed or expunged, then data in the record (including facial images) shouldn’t be available in the gallery.
  • The criminal is pardoned. If someone is pardoned of a crime, then it’s legally the same as if the crime were never committed at all. In that case, the pardoned person’s criminal record may (or may not) be removed from the criminal database. If it is removed, then again the facial image shouldn’t be in the gallery.
  • The crime happened a long time ago. Decades ago, it cost a lot of money to store criminal records, and due to budgetary constraints it wasn’t worthwhile to keep on storing everything. In my corporate career, I’ve encountered a lot of biometric requests for proposal (RFPs) that required conversion of old data to the new biometric system…with the exception of the old stuff. It stands to reason that if the old arrest record from 1960 is never converted to the new system, then that facial image won’t be in the gallery.
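
In implementation terms, these exceptions amount to a filter applied before a record’s image ever reaches the gallery. Here’s a minimal sketch with hypothetical field names; real systems encode these rules in law and policy, not in four lines of code.

```python
from dataclasses import dataclass

@dataclass
class CriminalRecord:
    record_id: str
    face_template: list[float] | None  # None if never digitized/converted
    sealed_or_expunged: bool
    pardoned: bool

def eligible_for_gallery(rec: CriminalRecord) -> bool:
    """Hypothetical filter reflecting the three exceptions above."""
    if rec.sealed_or_expunged or rec.pardoned:
        return False
    # Old records never converted to the new system simply have no template.
    return rec.face_template is not None

def build_gallery(records: list[CriminalRecord]) -> dict[str, list[float]]:
    return {r.record_id: r.face_template
            for r in records if eligible_for_gallery(r)}
```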

So, barring those exceptions, a search of our probe image from the U.S. Capitol could potentially hit against records in the gallery of criminal facial images.

Great, right?

Well, there are a couple of issues to consider.

First, there are a lot of criminal databases out there. For those who imagine that the FBI, and the CIA, and the BBC, BB King, and Doris Day (yes) have a single massive database with every single criminal record out there…well, they don’t.

  • There are multiple federal criminal databases out there, and it took many years to get two of the major ones (from the FBI and the Department of Homeland Security) to talk to each other.
  • And every state has its own criminal database; some records are submitted to the FBI, and some aren’t.
  • Oh, and there are also local databases. For many years, one of my former employers was the automated fingerprint identification system provider for Bullhead City, Arizona. And there are a lot of Bullhead City-sized databases; one software package, AFIX Tracker (now owned by Aware), has over 500 installations.

So if you want to search criminal databases, you’re going to have to search a bunch of them. Between the multiple federal databases, the state and territory databases, and the local databases, there are hundreds upon hundreds of databases to search. That could take a while.

Which brings us to the second issue, in which we put on our Captain Obvious hat. If a person has never committed a crime, the person’s facial image is NOT in a criminal database. While biometric databases are great at identifying repeat offenders, they’re not so good at identifying first offenders. (They’re great at identifying second offenders, when someone is arrested for a crime and matches against an unidentified biometric record from a previous crime.)

So even if you search all the criminal databases, you’re only going to find the people with previous records. Those who were trespassing at the U.S. Capitol for the first time are completely invisible to a criminal database.

So something else is needed.

A gallery of government non-criminal facial images

Faced with this problem, you may ask yourself (yes), “What if the government had a database of people who hadn’t committed crimes? Could that database be used to identify the people who stormed the U.S. Capitol?”

Well, various governments DO have non-criminal facial databases. The two most obvious examples are the state databases of people who have driver’s licenses or state ID cards, and the federal database of people who have passports.

(This is an opportune time to remind my non-U.S. readers that the United States does not have national ID cards, and any attempt to create a national ID card is fought fiercely.)

I’ll point out the Captain Obvious issue right now: if someone never gets a passport or driver’s license, they’re not going to be in a facial database. This is of course a small subset of the population, but it’s a potential issue.

There’s a much bigger issue regarding the legal ability to use driver’s license photos in criminal investigation. As of 2018, 31 states allowed the practice…which means that 19 didn’t.

So while searching driver’s license databases offers a good way to identify Capitol trespassers, it’s not perfect either.

A gallery of publicly available facial images

Which brings us to our third way to populate a gallery of facial images to identify Capitol trespassers.

It turns out that governments are not the only entities that store facial images. You can find facial images everywhere. My own facial image can be found in countless places, including a page on the Bredemarket website itself.

There are all sorts of sites that post facial images that can be accessible to the public. A few of these sites include Facebook, Google (including YouTube), LinkedIn (part of Microsoft), Twitter, and Venmo. (We’ll return to those companies later.)

In many cases, these images are tied to (non-verified) identities. For example, if you go to my LinkedIn page, you will see an image that purports to be the image of John Bredehoft. But LinkedIn doesn’t know with 100% certainty that this is really an image of John Bredehoft. Perhaps “John Bredehoft” exists, but the posted picture is not that of John Bredehoft. Or perhaps “John Bredehoft” doesn’t exist and is a synthetic identity.

But regardless, there are billions of images out there, tied to billions of purported identities.

What if you could compare the probe images from the U.S. Capitol against a gallery of those billions of images—many more images than held by any government?

It turns out that you CAN perform that comparison, and that law enforcement did perform that comparison.

Clearview AI’s…facial-recognition app has seen a spike in use as police track down the pro-Trump insurgents who descended on the Capitol on Wednesday….

Clearview AI CEO Hoan Ton-That confirmed to Gizmodo that the app saw a 26% jump in search volume on Jan. 7 compared to its usual weekday averages….

Detectives at the Miami Police Department are using Clearview’s tech to identify rioters in images and videos of the attack and forwarding suspect leads to the FBI, per the Times. Earlier this week, the Wall Street Journal reported that an Alabama police department was also employing Clearview’s tech to ID faces in footage and sending potential matches along to federal investigators.

But now we need to return to the legal question: is “publicly available” equivalent to “publicly usable”?

Certain companies, including the aforementioned Facebook, Google (including YouTube), LinkedIn (part of Microsoft), Twitter, and Venmo, maintain that Clearview AI does NOT have permission to use their publicly available data. Not because of government laws, but because of the companies’ own policies. Here’s what two of the companies said about a year ago:

“Scraping people’s information violates our policies, which is why we’ve demanded that Clearview stop accessing or using information from Facebook or Instagram,” Facebook’s spokesperson told Business Insider….

“YouTube’s Terms of Service explicitly forbid collecting data that can be used to identify a person. Clearview has publicly admitted to doing exactly that, and in response, we sent them a cease-and-desist letter.”

For its part, Clearview AI maintains that its First Amendment rights supersede the terms of service of the companies.

But other things come in play in addition to terms of service. Lawsuits filed in 2020 allege that Clearview AI’s practices violate the California Consumer Privacy Act of 2018, and the even more stringent Illinois Biometric Information Privacy Act of 2008. BIPA is so stringent that even Google is affected by it; as I’ve previously noted, Google’s Nest Hello Video Doorbell’s “familiar face” alerts is not available in Illinois.

Between corporate complaints and aggrieved citizens, the jury is literally still out on Clearview AI’s business model. So while it may work technologically, it may not work legally.

And one more thing

Of course, people are asking themselves, why do we even need to use facial recognition at all? After all, some of the trespassers actually filmed themselves trespassing. And when people see the widely-distributed pictures of the trespassers, they can be identified without using facial recognition.

Yes, to a point.

While it seems intuitive that eyewitnesses can easily identify people in photos, it turns out that such identifications can be unreliable. As the California Innocence Project reminds us:

One of the main causes of wrongful convictions is eyewitness misidentifications. Despite a high rate of error (as many as 1 in 4 stranger eyewitness identifications are wrong), eyewitness identifications are considered some of the most powerful evidence against a suspect.

The California Innocence Project then provides an example of a case in which someone was inaccurately identified due to an eyewitness misidentification. Correction: it provides 11 examples, including ones in which the candidates were presented to the witness in a controlled environment (six-pack lineups, similar backgrounds).

The FBI project, in which people look at images captured from the U.S. Capitol itself, is NOT a controlled environment.

Quantifying the costs of wrongful incarcerations

As many of you already know, the Innocence Project is dedicated to freeing people who have been wrongfully incarcerated. At times, the people are freed after examining or re-examining biometric evidence, such as fingerprint evidence or DNA evidence.

The latter evidence was relevant in the case of Uriah Courtney, who was convicted and sentenced to life in prison for kidnapping and rape based upon eyewitness testimony. At the time of Courtney’s arrest, DNA testing did not return any meaningful results. Eight years later, however, DNA technology had advanced to the point where the perpetrator could be identified—and, as the California Innocence Project noted, the perpetrator wasn’t Uriah Courtney.

I’ve read Innocence Project stories before, and the one that sticks most in my mind was the case of Archie Williams, who was released (based upon fingerprint evidence) after being imprisoned for a quarter century. At the time that Williams’ wrongful conviction was vacated, Vanessa Potkin, director of post-conviction litigation at the Innocence Project, stated, “There is no way to quantify the loss and pain he has endured.”

But that doesn’t mean that people haven’t tried to (somewhat) quantify the loss.

In the Uriah Courtney case, while it’s impossible to quantify the loss to Courtney himself, it is possible to quantify the loss to the state of California. Using data from the California Legislative Analyst’s Office 2018-19 annual costs per California inmate, the California Innocence Project calculated a “cost of wrongful incarceration” of $649,624.
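
The arithmetic appears to be straightforward. Assuming the LAO’s 2018-19 estimate of roughly $81,203 per inmate per year, and the roughly eight years Courtney served, the numbers line up; both inputs are my assumptions, since the Innocence Project doesn’t show its work here.

```python
# Reconstruction of the figure under my stated assumptions (not the
# California Innocence Project's published worksheet).
annual_cost_per_inmate = 81_203   # LAO 2018-19 estimate (assumed input)
years_wrongfully_incarcerated = 8
print(annual_cost_per_inmate * years_wrongfully_incarcerated)  # 649624
```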

One can quibble with the methodology—after all, the 2018-19 costs presumably overestimate the costs of incarcerating someone who was released from custody on May 9, 2013—but at least it illustrates that a cost of wrongful incarceration CAN be calculated. Add to that the costs of prosecuting the wrong person (including jury duty daily fees), and the costs can be quantified.

To a certain extent.