The Imperfect Way to Enforce New York’s Child Data Protection Act

It’s often good to use emotion in your marketing.

For example, biometric companies seeking to justify the use of their technology have found it very effective to position biometrics as a way to combat sex trafficking.

Similarly, moves to rein in social media are positioned as a way to preserve mental health.

By Marc NL at English Wikipedia – Transferred from en.wikipedia to Commons. Public Domain, https://commons.wikimedia.org/w/index.php?curid=2747237

Now that’s a not-so-pretty picture, but it effectively speaks to emotions.

“If poor vulnerable children are exposed to addictive, uncontrolled social media, YOUR child may end up in a straitjacket!”

In New York state, four government officials have declared that the ONLY way to preserve the mental health of underage social media users is via two bills, one of which is the “New York Child Data Protection Act.”

But there is a challenge in enforcing ALL of the bill’s provisions…and only one way to solve it. An imperfect way: age estimation.

This post only briefly addresses the alleged mental health issues of social media before plunging into one of the two proposed bills to solve the problem. It then examines a potentially unenforceable part of the bill and a possible solution.

Does social media make children sick?

Letitia “Tish” James is the 67th Attorney General for the state of New York. From https://ag.ny.gov/about/meet-letitia-james

On October 11, a host of New York State government officials, led by New York State Attorney General Letitia James, jointly issued a release with the title “Attorney General James, Governor Hochul, Senator Gounardes, and Assemblymember Rozic Take Action to Protect Children Online.”

Because they want to protect the poor vulnerable children.

By Paolo Monti – Available in the BEIC digital library and uploaded in partnership with BEIC Foundation. The image comes from the Fondo Paolo Monti, owned by BEIC and located in the Civico Archivio Fotografico of Milan. CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=48057924

And because the major U.S. social media companies are headquartered in California. But I digress.

So why do they say that children need protection?

Recent research has shown devastating mental health effects associated with children and young adults’ social media use, including increased rates of depression, anxiety, suicidal ideation, and self-harm. The advent of dangerous, viral ‘challenges’ being promoted through social media has further endangered children and young adults.

From https://ag.ny.gov/child-online-safety

Of course one can also argue that social media is harmful to adults, but the New Yorkers aren’t going to go that far.

So they are just going to protect the poor vulnerable children.


This post isn’t going to deeply analyze one of the two bills the quartet have championed, but I will briefly mention that bill now.

  • The “Stop Addictive Feeds Exploitation (SAFE) for Kids Act” (S7694/A8148) defines “addictive feeds” as those that are arranged by a social media platform’s algorithm to maximize the platform’s use.
  • Those of us who are flat-out elderly vaguely recall that this replaced the former “chronological feed” in which the most recent content appeared first, and you had to scroll down to see that really cool post from two days ago. New York wants the chronological feed to be the default for social media users under 18.
  • The bill also proposes to limit under-18 access to social media without parental consent, especially between midnight and 6:00 am.
  • And those who love Illinois BIPA will be pleased to know that the bill allows parents (and their lawyers) to sue for damages.

Previous efforts to control underage use of social media have faced legal scrutiny, but since Attorney General James has sworn to uphold the U.S. Constitution, presumably she has thought about all this.

Enough about SAFE for Kids. Let’s look at the other bill.

The New York Child Data Protection Act

The second bill, and the one that concerns me, is the “New York Child Data Protection Act” (S7695/A8149). Here is how the quartet describes how this bill will protect the poor vulnerable children.


With few privacy protections in place for minors online, children are vulnerable to having their location and other personal data tracked and shared with third parties. To protect children’s privacy, the New York Child Data Protection Act will prohibit all online sites from collecting, using, sharing, or selling personal data of anyone under the age of 18 for the purposes of advertising, unless they receive informed consent or unless doing so is strictly necessary for the purpose of the website. For users under 13, this informed consent must come from a parent.

From https://ag.ny.gov/child-online-safety

And again, this bill provides a BIPA-like mechanism for parents or guardians (and their lawyers) to sue for damages.

But let’s dig into the details. With apologies to the New York State Assembly, I’m going to dig into the Senate version of the bill (S7695). Bear in mind that this bill could be amended after I post this, and some of the portions that I cite could change.

The “definitions” section of the bill includes the following:

“MINOR” SHALL MEAN A NATURAL PERSON UNDER THE AGE OF EIGHTEEN.

From https://www.nysenate.gov/legislation/bills/2023/S7695, § 899-EE, 2.

This only applies to natural persons. So the bots are safe, regardless of age.

Speaking of age, the age of 18 isn’t the only age referenced in the bill. Here’s a part of the “privacy protection by default” section:

§ 899-FF. PRIVACY PROTECTION BY DEFAULT.

1. EXCEPT AS PROVIDED FOR IN SUBDIVISION SIX OF THIS SECTION AND SECTION EIGHT HUNDRED NINETY-NINE-JJ OF THIS ARTICLE, AN OPERATOR SHALL NOT PROCESS, OR ALLOW A THIRD PARTY TO PROCESS, THE PERSONAL DATA OF A COVERED USER COLLECTED THROUGH THE USE OF A WEBSITE, ONLINE SERVICE, ONLINE APPLICATION, MOBILE APPLICATION, OR CONNECTED DEVICE UNLESS AND TO THE EXTENT:

(A) THE COVERED USER IS TWELVE YEARS OF AGE OR YOUNGER AND PROCESSING IS PERMITTED UNDER 15 U.S.C. § 6502 AND ITS IMPLEMENTING REGULATIONS; OR

(B) THE COVERED USER IS THIRTEEN YEARS OF AGE OR OLDER AND PROCESSING IS STRICTLY NECESSARY FOR AN ACTIVITY SET FORTH IN SUBDIVISION TWO OF THIS SECTION, OR INFORMED CONSENT HAS BEEN OBTAINED AS SET FORTH IN SUBDIVISION THREE OF THIS SECTION.

From https://www.nysenate.gov/legislation/bills/2023/S7695

So a lot of this bill depends upon whether a person is over or under the age of eighteen, or over or under the age of thirteen.
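Here’s a minimal sketch, in Python, of the consent logic that the quoted section appears to set up. This is my reading, not the bill’s text (I’m not a lawyer), and the function name and return strings are mine:

```python
def consent_requirement(age: int) -> str:
    """My rough reading of S7695's privacy-by-default section.
    The age cutoffs track the quoted text; the wording is mine."""
    if age >= 18:
        return "18 or older: not a minor under the bill's definition"
    if age >= 13:
        return ("13 to 17: processing only if strictly necessary, "
                "or with the minor's informed consent")
    return ("12 or younger: processing only as permitted under "
            "COPPA (15 U.S.C. § 6502), which means parental consent")
```

The branching is trivial. Getting a trustworthy value for `age` is not.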

And that’s a problem.

How old are you?

Enforcing the bill requires knowing whether or not a person is under 18 (and whether they’re under 13). And I don’t think the quartet will be satisfied with the way that alcohol websites determine whether someone is 21 years old.

This age verification method is…not that robust.

Attorney General James and the others would presumably prefer that the social media companies verify ages with a government-issued ID such as a state driver’s license, a state identification card, or a national passport. This is how most entities verify ages when they have to satisfy legal requirements.

For some people, even some minors, this is not that much of a problem. Anyone who wants to drive in New York State must have a driver’s license, and you have to be at least 16 years old to get a driver’s license. Admittedly some people in the city never bother to get a driver’s license, but at some point these people will probably get a state ID card.

You don’t need a driver’s license to ride the New York City subway, but if the guitarist wants to open a bank account for his cash it would help him prove his financial identity. By David Shankbone – Own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=2639495
  • However, there are going to be some 17 year olds who don’t have a driver’s license, government ID or passport.
  • And some 16 year olds.
  • And once you look at younger people—15 year olds, 14 year olds, 13 year olds, 12 year olds—the chances of them having a government-issued identification document are much lower.

What are these people supposed to do? Provide a birth certificate? And how will the social media companies know if the birth certificate is legitimate?

But there’s another way to determine ages—age estimation.

How old are you, part 2

As long-time readers of the Bredemarket blog know, I have struggled with the issue of age verification, especially for people who do not have driver’s licenses or other government identification. Age estimation in the absence of a government ID is still an inexact science, as even Yoti has stated.

Our technology is accurate for 6 to 12 year olds, with a mean absolute error (MAE) of 1.3 years, and of 1.4 years for 13 to 17 year olds. These are the two age ranges regulators focus upon to ensure that under 13s and 18s do not have access to age restricted goods and services.

From https://www.yoti.com/wp-content/uploads/Yoti-Age-Estimation-White-Paper-March-2023.pdf

So if a minor does not have a government ID, and the social media firm has to use age estimation to determine a minor’s age for purposes of the New York Child Data Protection Act, the following two scenarios are possible:

  • An 11 year old may be incorrectly allowed to give informed consent for purposes of the Act.
  • A 14 year old may be incorrectly denied the ability to give informed consent for purposes of the Act.
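To put rough numbers on those two scenarios, here’s a back-of-the-envelope sketch. It assumes (my assumption, not Yoti’s) that estimation errors are roughly Gaussian with zero mean, which lets us convert the published MAE into a standard deviation:

```python
import math
from statistics import NormalDist

def wrong_side_rate(true_age: float, threshold: float, mae: float) -> float:
    """Chance an age estimate lands on the wrong side of a legal threshold,
    assuming zero-mean Gaussian error (for which MAE = sigma * sqrt(2/pi))."""
    sigma = mae * math.sqrt(math.pi / 2)
    err = NormalDist(mu=0.0, sigma=sigma)
    if true_age < threshold:
        return 1.0 - err.cdf(threshold - true_age)  # wrongly waved through
    return err.cdf(threshold - true_age)            # wrongly blocked

# Yoti reports an MAE of 1.3 years for 6 to 12 year olds
# and 1.4 years for 13 to 17 year olds.
print(f"11 year old estimated as 13 or older: {wrong_side_rate(11, 13, 1.3):.0%}")
print(f"14 year old estimated as under 13: {wrong_side_rate(14, 13, 1.4):.0%}")
```

Under those assumptions, roughly one in ten 11 year olds would be waved through, and over a quarter of 14 year olds would be blocked. Crude numbers, but they show why the errors pile up near the legal thresholds.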

Is age estimation “good enough for government work”?

Five Topics a Biometric Content Marketing Expert Needs to Understand

As a child, did you sleep at night dreaming that someday you could become a biometric content marketing expert?

I didn’t either. Frankly, I didn’t even work in biometrics professionally until I was in my 30s.

If you have a mad adult desire to become a biometric content marketing expert, here are five topics that I (a self-styled biometric content marketing expert) think you need to understand.

Topic One: Biometrics

Sorry to be Captain Obvious, but if you’re going to talk about biometrics you need to know what you’re talking about.

The days in which an expert could confine themselves to a single biometric modality are long past. Why? Because once you declare yourself an iris expert, someone is bound to ask, “How does iris recognition compare to facial recognition?”

Only some of the Biometrics Institute’s types of biometrics. Full list at https://www.biometricsinstitute.org/what-is-biometrics/types-of-biometrics/

And there are a number of biometric modalities. In addition to face and iris, the Biometrics Institute has cataloged a list of other biometric modalities, including fingerprints/palmprints, voice, DNA, vein, finger/hand geometry, and some more esoteric ones such as gait, keystrokes, and odor. (I wouldn’t want to manage the NIST independent testing for odor.)

As far as I’m concerned, the point isn’t to select the best biometric and ignore all the others. I’m a huge fan of multimodal biometrics, in which a person’s identity is verified or authenticated by multiple biometric types. It’s harder to spoof multiple biometrics than it is to spoof a single one. And even if you spoof two of them, what if the system checks for odor and you haven’t spoofed that one yet?

Topic Two: All the other factors

In the same way that I don’t care for people who select one biometric and ignore the others, I don’t care for some in the “passwords are dead” crowd who go further and say, “Passwords are dead. Use biometrics instead.”

Although I admire the rhyming nature of the phrase.

If you want a robust identity system, you need to use multiple factors in identity verification and authentication.

  • Something you know.
  • Something you have.
  • Something you are (i.e. biometrics).
  • Something you do.
  • Somewhere you are.

Again, use of multiple factors protects against spoofing. Maybe someone can create a gummy fingerprint, but can they also create a fake passport AND spoof the city in which you are physically located?

From https://www.youtube.com/shorts/mqfHAc227As

Don’t assume that biometrics answers all the ills of the world. You need other factors.

And if you master these factors, you are not only a biometric content marketing expert, but also an identity content marketing expert.

Topic Three: How biometrics are used

It’s not enough to understand the technical ins and outs of biometric capture, matching, and review. You need to know how biometrics are used.

  • One-to-one vs. one-to-many. Is the biometric that you acquire only compared to a single biometric sample, or to a database of hundreds, thousands, millions, or billions of other biometric samples? (See the sketch after this list.)
  • Markets. When I started in biometrics, I only participated in two markets: law enforcement (catch bad people) and benefits (get benefit payments to the right people). There are many other markets. Just recently I have written about financial identity and educational identity. I’ve worked with about a dozen other markets personally, and there are many more.
  • Use cases. Related to markets, you need to understand the use cases that biometrics can address. Taking the benefits example, there’s a use case in which a person enrolls for benefits, and the government agency wants to make sure that the person isn’t already enrolled under another name. And there’s a use case in which benefits are paid, to make sure that the authorized recipient receives their benefits, and no one else receives them.
  • Legal and privacy issues. It is imperative that you understand the legal ramifications that affect your chosen biometric use case in your locality. For example, if your house has a doorbell camera that uses “familiar face detection” to identify the faces of people that come to your door, and the people that come to your door are residents of the state of Illinois, you have a BIG BIPA (Biometric Information Privacy Act) problem.
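And here’s the sketch promised in the first bullet. Verification asks “is this person who they claim to be?” while identification asks “who, if anyone, in the gallery is this?” The templates, similarity measure, and threshold are toy stand-ins for a real matcher:

```python
from typing import Optional

MATCH_THRESHOLD = 0.80  # illustrative operating point, not a real system's

def similarity(a: list[float], b: list[float]) -> float:
    """Toy cosine similarity between two biometric templates."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return dot / norm if norm else 0.0

def verify(probe: list[float], enrolled: list[float]) -> bool:
    """One-to-one: compare the probe against a single claimed identity."""
    return similarity(probe, enrolled) >= MATCH_THRESHOLD

def identify(probe: list[float], gallery: dict[str, list[float]]) -> Optional[str]:
    """One-to-many: search the probe against an entire gallery of templates."""
    best_id, best_score = None, MATCH_THRESHOLD
    for identity, template in gallery.items():
        score = similarity(probe, template)
        if score >= best_score:
            best_id, best_score = identity, score
    return best_id
```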

Any identity content marketing expert or biometric content marketing expert worth their salt will understand these and related issues.

Topic Four: Content marketing

This is another Captain Obvious point. If you want to present yourself as a biometric content marketing expert or identity content marketing expert, you have to have a feel for content marketing.

Here’s how HubSpot defines content marketing:

The definition of content marketing is simple: It’s the process of publishing written and visual material online with the purpose of attracting more leads to your business. These can include blog posts, pages, ebooks, infographics, videos, and more.

From https://blog.hubspot.com/marketing/content-marketing

Here are all the types of content in which one content marketer claims proficiency (as of July 27, 2023, subject to change):

Articles • Battlecards (80+) • Blog Posts (400+) • Briefs/Data/Literature Sheets • Case Studies (12+) • Competitive Analyses • Email Newsletters (200+) • Event/Conference/Trade Show Demonstration Scripts • FAQs • Plans • Playbooks • Presentations • Proposal Templates • Proposals (100+) • Quality Improvement Documents • Requirements • Scientific Book Chapters • Smartphone Application Content • Social Media (Facebook, Instagram, LinkedIn, Threads, TikTok, Twitter) • Strategic Analyses • Web Page Content • White Papers and E-Books

From https://www.linkedin.com/in/jbredehoft/, last updated 7/27/2023.

Now frankly, that list is pretty weak. You’ll notice that it doesn’t include Snapchat.

But content marketers need to be comfortable with creating at least one type of content.

Topic Five: How L-1 Identity Solutions came to be

Yes, an identity content marketing expert needs to thoroughly understand how L-1 Identity Solutions came to be.

I’m only half joking.

Back in the late 1990s and early 2000s (I’ll ignore FpVTE results for a moment), the fingerprint world in which I worked recognized four major vendors: Cogent, NEC, Printrak (later part of Motorola), and Sagem Morpho.

And then there were all these teeny tiny vendors that offered biometric and non-biometric solutions, including the fierce competitors Identix and Digital Biometrics, the fierce competitors Viisage and Visionics, and a bunch of other companies like Iridian.

Well, there WERE all these teeny tiny vendors.

Until Bob LaPenta bought them all up and combined them into a single company, L-1 Identity Solutions. (LaPenta was one of the “Ls” in L-3, so he chose the name L-1 when he started his own company.)

So around 2008 the Big Four (including a post-FpVTE Motorola) became the Big Five, since L-1 Identity Solutions was now at the table with the big boys.

But then several things happened:

  • Motorola started selling off parts of itself. One of those parts, its Biometric Business Unit, was purchased by Safran (the company formed after Sagem and Snecma merged). This affected me because I, a Motorola employee, became an employee of MorphoTrak, the subsidiary formed when Sagem Morpho de facto acquired “Printrak” (Motorola’s Biometric Business Unit). So now the Big Five were the Big Four.
  • Make that the Big Three, because Safran also bought L-1 Identity Solutions, which became MorphoTrust. MorphoTrak and MorphoTrust were separate entities, and in fact competed against each other, so maybe we should say that the Big Four still existed.
  • Oh, and by the way, the independent company Cogent was acquired by 3M (although NEC considered buying it).
  • A few years later, 3M sold bits of itself (including the Cogent bit) to Gemalto.
  • Then in 2017, Advent International (which owned Oberthur) acquired bits of Safran (the “Morpho” part) and merged them with Oberthur to form IDEMIA. As a consequence of this, MorphoTrust de facto acquired MorphoTrak, ending the competition but requiring me to have two separate computers to access the still-separate MorphoTrust and MorphoTrak computer networks. (In passing, I have heard from two sources, but have not confirmed myself, that the possible sale of IDEMIA is on hold.)
  • And Gemalto was acquired by Thales.

So as of 2023, the Big Three (as characterized by Maxine Most and FindBiometrics) are IDEMIA, NEC, and Thales.

Why do I mention all this? Because all these mergers and acquisitions have resulted in identity practitioners working for a dizzying number of firms.

As of August 2023, I myself have worked for five identity firms, but in reality four of the five are the same firm because the original Printrak International kept on getting acquired (Motorola, Safran, IDEMIA).

And that’s nothing. One of my former Printrak coworkers (R.M.) has also worked for Digital Biometrics (now part of IDEMIA), Cross Match Technologies (now part of ASSA ABLOY), Iridian (now part of IDEMIA), Datastrip, Creative Information Technology, AGNITiO, iTouch Biometrics, NDI Recognition Systems, iProov, and a few other firms here and there.

The point is that everybody knows everybody because everybody has worked with (and against) everybody. And with all the job shifts, it’s a regular Peyton Place.

By ABC Television – eBay itemphoto frontphoto back, Public Domain, https://commons.wikimedia.org/w/index.php?curid=17252688

Not sure which one is me, which one is R.M., and who the other people are.

Do you need an identity content marketing expert today?

Do you need someone who not only knows biometrics and content marketing, but also all the other factors, their uses, and even knows the tangled history of L-1?

Someone who offers:

  • No identity learning curve?
  • No content learning curve?
  • Proven results?

If I can help you create your identity content, contact me.

Digital identity and…the United Nations Sustainable Development Goals?

Over the last few years, I have approached digital identity(ies) from a particular perspective, concentrating on the different types of digital identities that we have (none of us has a single identity, when you think about it), and the usefulness of these identities for various purposes, including purposes in which the identity of the person must be well established.

I have also noted the wide list of organizations that have expressed an interest in digital identity. Because of pressing digital identity needs, many of these organizations have moved forward with their own digital identity proposals, although now they are devoting more effort to ensure that their individual proposals play well with the proposals of other organizations.

Enter the United Nations (or part of it)

Well, let’s add one more organization to the list of those concerned about digital identity: the United Nations.

Although “the United Nations” is in reality a whole bunch of separate organizations that kinda sorta work together under the UN umbrella. But each of these organizations can get some oomph (to use a diplomatic turn of phrase) from trumpeting a UN affiliation.

So let’s look at the Better Than Cash Alliance.

Based at the United Nations, the Better Than Cash Alliance is a partnership of governments, companies, and international organizations that accelerates the transition from cash to responsible digital payments to help achieve the Sustainable Development Goals

Note right off the bat that the Better Than Cash Alliance is not focused on digital identity per se, but digital payments. (Chris Burt of Biometric Update notes this focus.) Of course, digital payments and digital identity are necessarily intertwined, as we will see in a minute.

Enter the Sustainable Development Goals

But more importantly, digital payments themselves are not the ultimate goal of the Better Than Cash Alliance. Digital payments are only a means to an end to realize the United Nations Sustainable Development Goals, issued by a different UN organization.

Because of its primary focus, the Better Than Cash Alliance concentrates on issues that I myself have only studied in passing. For example, I have concentrated on the issues faced by people with no verifiable identity, but have not specifically looked at this from the lens of Sustainable Development Goal number 5, Gender Equality.

Principle 2 of the UN Principles for Responsible Digital Payments (October 2021 revision)

For this post, however, I’m going to focus on the digital identity aspects of the Better Than Cash Alliance and its report, UN Principles for Responsible Digital Payments (PDF), which was just updated this month (October 2021).

One of the key factors outlined in the report is “trust.” Now trust can have a variety of meanings (including trust that the information about my identity will not be used to throw me into a terrorist concentration camp), but for my purposes I want to concentrate on the trust that I, as a digital payments recipient, will receive the payments to which I am entitled.

To that end, the revised principles include items such as “ensure funds are protected and accessible” (principle 2), “champion value chain accountability” (principle 9), and other principles that bear on digital identity.

The introduction to the discussion on principle 2 highlights the problem:

A prerequisite of digital payments is that they match or surpass the qualities of cash. All users rightly expect their funds to be safe and readily available, but this is not always the case. The causal factors behind this are multiplex.

(“Multiplex”? Yes, this document was written by government committees. Or movie theater owners.)

AMC Ontario Mills. (California, not Canada.) By Coolcaesar – Own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=104309320

To avoid the multiplexity of these issues, one offered response is to “proactively track and protect against unauthorized transactions, including fraud and mistakes.” This can be done by several methods near and dear to us in identity-land:

Advocate for appropriate security controls to mitigate transaction risks (e.g., biometric security,34 two factor authentication,35 limits on logins or transaction amounts,36 creating “need-to-know” administrative privileges for interacting with client data).

Now most people who read this report aren’t interested in the footnotes. But I am. Here are footnotes 34, 35, and 36 from the document.

34 Examples include the use of biometrics in India’s Aadhaar identification system, and UNHCR’s use of iris technology to distribute cash to refugees in Jordan

35 See EU PSD2 Articles 97–98, Ghana’s Payments Systems and Service Act, 2019 (section 65(1)), and Malawi’s 2019 e-Money regulations (section 17)

36 India Master Direction on Prepaid Payment Instruments, Section 15.3

Of course the report could have cited other examples, such as the use of fingerprints for benefits payments in the United States in the 1990s and 2000s, but I’m sure that falls afoul of some Sustainable Development Goal.

Although it’s harder to criticize a UN entity, such as the aforementioned UNHCR, when it uses biometrics.

Or maybe it isn’t that hard, when you think about Access Now’s criticisms of the UNHCR program.

Refugees should not be required to hand over personal biometric data in exchange for basic needs such as purchasing food, or accessing money. However, iris scan technology supplied by UK-registered company, IrisGuard, is reportedly being used by the World Food Programme (WFP) and the United Nations High Commissioner for Refugees (UNHCR) in refugee camps and urban centers in Jordan.

Based on reports suggesting the absence of meaningful consent, and an opaque privacy policy, Access Now has serious objections to the lack of transparency and privacy safeguards around this precarious tech rollout. 

Wow. Jordan is as bad as Illinois. Maybe Jordan needs a BIPA! Hope their doorbell cameras aren’t a problem…

So while the Better Than Cash Alliance is focusing on other things, it’s at least paying lip service to some of the stronger identity controls that many in the identity industry advocate.

Of course, it’s outside of the scope of the Better Than Cash Alliance to dictate HOW to implement “appropriate security controls.”

But anything that saves the whales AND the plankton (and complies with BIPA) will be met with approval.

Is your home your castle when you use consumer doorbell facial recognition?

(Part of the biometric product marketing expert series)

For purposes of this post, I will define three entities that can employ facial recognition:

  • Public organizations such as governments.
  • Private organizations such as businesses.
  • Individuals.

Some people are very concerned about facial recognition use by the first two categories of entities.

But what about the third category, individuals?

Can individuals assert a Constitutional right to use facial recognition in their own homes? And what if said individuals live in Peoria?

Concerns about ANY use of facial recognition

Let’s start with an ACLU article from 2018 regarding “Amazon’s Disturbing Plan to Add Face Surveillance to Your Front Door.”

Let me go out on a limb and guess that the ACLU opposes the practice.

The article was prompted by a 2018 Amazon patent application that involved both its Rekognition facial recognition service and its Ring cameras.

One of the figures in Amazon’s patent application, courtesy the ACLU. https://www.aclunc.org/docs/Amazon_Patent.pdf

While the main thrust of the ACLU article concerns acquisition of front door face surveillance (and other biometric) information by the government, it also briefly addresses the entity that is initially performing the face surveillance: namely, the individual.

Likewise, homeowners can also add photos of “suspicious” people into the system and then the doorbell’s facial recognition program will scan anyone passing their home.

I should note in passing that ACLU author Jacob Snow is describing a “deny list,” which flags people who should NOT be granted access such as that pesky solar power salesperson. In most cases, consumer products tout the use of an “allow list,” which flags people who SHOULD be granted access such as family members.

Regardless of whether you’re discussing a deny list or an allow list, the thrust of the ACLU article isn’t that governments shouldn’t use facial recognition. The thrust of the article is that facial recognition shouldn’t be used at all.

The ACLU and other civil rights groups have repeatedly warned that face surveillance poses an unprecedented threat to civil liberties and civil rights that must be stopped before it becomes widespread.

Again, not face surveillance by governments, but face surveillance period. People should not have the, um, “civil liberties” to use the technology.

But how does the tech world approach this?

The reason that I cited that particular ACLU article was that it was subsequently referenced in a CNET article from May 2021. This article bore the title “The best facial recognition security cameras of 2021.”

Let me go out on a limb and guess that CNET supports the practice.

The last part of author Megan Wollerton’s article delves into some of the issues regarding facial recognition use, including those raised by the ACLU. But the bulk of the article talks about really cool tech.

As I stated above, Wollerton notes that the intended use case for home facial recognition security systems involves the creation of an “allow list”:

Some home security cameras have facial recognition, an advanced option that lets you make a database of people who visit your house regularly. Then, when the camera sees a face, it determines whether or not it belongs to someone in your list of known faces. If the recognition system does not know who is at the door, it can alert you to an unknown person on your property.

Obviously you could repurpose such a system for anything you want, provided that you can obtain a clear picture of the face of the pesky solar power salesperson.

Before posting her reviews of various security systems, and after a brief mention (expanded later in the article) about possible governmental misuse of facial recognition, Wollerton redirects the conversation.

But let’s step back a bit to the consumer realm. Your home is your castle, and the option of having surveillance cameras with facial recognition software is still compelling for those who want to be on the cutting edge of smart home innovation.

“Your home is your castle” may be a distinctly American concept, but it certainly applies here as organizations such as, um, the ACLU defend a person’s right against unreasonable actions by governments.

Obviously, there are limits to ANY Constitutional right. I cannot exercise my Fourth Amendment right to be secure in my house, couple that with my First Amendment right to freely exercise my religion, and conclude that I have the unrestricted right to perform ritual child sacrifices in my home. (Although I guess if I have a home theater and only my family members are present, I can probably yell “Fire!” all I want.)

So perhaps I could mount an argument that I can use facial recognition at my house any time I want, if the government agrees that this right is “reasonable.”

But it turns out that other people are involved.

You knew I was going to mention Illinois in this post

OK, it’s BIPA time.

As I previously explained in a January 2021 post about the Kami Doorbell Camera, “BIPA” is Illinois’ Biometric Information Privacy Act. This act imposes constraints on a private entity’s use of biometrics. (Governments are exempt under Illinois BIPA.) And here’s how BIPA defines the term “private entity”:

“Private entity” means any individual, partnership, corporation, limited liability company, association, or other group, however organized. A private entity does not include a State or local government agency. A private entity does not include any court of Illinois, a clerk of the court, or a judge or justice thereof.

Did you see the term “individual” in that definition?

So BIPA not only affects company use of biometrics, such as use of biometrics by Google or by a theme park or by a fitness center. It also affects an individual such as Harry or Harriet Homeowner’s use of biometrics.

As I previously noted, Google does not sell its Nest Cam “familiar face alert” feature in Illinois. But I guess it’s possible (via location spoofing if necessary) for someone to buy Nest Cam familiar face alerts in Indiana, and then sneak the feature across the border and implement it in the Land of Lincoln. But while this may (or may not) get Google off the hook, the individual is in a heap of trouble (should a trial lawyer decide to sue the individual).

Let’s face it. The average user of Nest Cam’s familiar face alerts, or the Kami Doorbell Camera, or any other home security camera with facial recognition (note that Amazon currently is not using facial recognition in its consumer products), is probably NOT complying with BIPA.

A private entity in possession of biometric identifiers or biometric information must develop a written policy, made available to the public, establishing a retention schedule and guidelines for permanently destroying biometric identifiers and biometric information when the initial purpose for collecting or obtaining such identifiers or information has been satisfied or within 3 years of the individual’s last interaction with the private entity, whichever occurs first.
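Read literally, that retention rule is just a min() of two dates. Here’s a sketch of the calculation (my reading, not legal advice; I’ve approximated “three years” as 3 × 365 days):

```python
from datetime import date, timedelta
from typing import Optional

def destruction_deadline(last_interaction: date,
                         purpose_satisfied: Optional[date] = None) -> date:
    """Destroy biometric data when the initial purpose is satisfied OR
    three years after the last interaction, whichever occurs first."""
    statutory_limit = last_interaction + timedelta(days=3 * 365)
    if purpose_satisfied is None:
        return statutory_limit
    return min(purpose_satisfied, statutory_limit)

# The solar power salesperson last rang the bell on June 1, 2021,
# and there's no defined "purpose satisfied" date for a doorbell camera.
print(destruction_deadline(date(2021, 6, 1)))  # 2024-05-31
```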

I mean it’s hard enough for Harry and Harriet to get their teenage son to acknowledge receipt of the Homeowner family’s written policy for the use of the family doorbell camera. And you can forget about getting the pesky solar power salesperson to acknowledge receipt.

So from a legal perspective, it appears that any individual homeowner who installs a facial recognition security system can be hauled into civil court under BIPA.

But will these court cases be filed from a practical perspective?

Probably not.

When a social media company violates BIPA, the violation conceivably affects millions of individuals and can result in millions or billions of dollars in civil damages.

When the pesky solar power salesperson discovers that Harry and Harriet Homeowner have enrolled his or her face without consent, the damages would be limited to $1,000 or $5,000 plus relevant legal fees.

It’s not worth pursuing, any more than it’s worth pursuing the Illinois driver who is speeding down the expressway at 66 miles per hour.

Will the Kami Doorbell Camera sell in Illinois?

There was a recent press release that I missed until Biometric Update started talking about it two days later. The January 19 press release from Kami was entitled “Kami Releases Smart Video Doorbell With Facial Recognition Capabilities.” The subhead announced, “The device also offers user privacy controls.”

And while reading that Kami press release, I noticed a potential issue that wasn’t fully addressed in the press release, or (so far) in the media coverage of the press release. That issue relates to that four-letter word “BIPA.”

This post explains what BIPA is and why it’s important.

  • But it starts by looking at smart video doorbells.
  • Next, it looks at this particular press release about a smart video doorbell.
  • Then we’ll look at a competitor’s smart video doorbell, and a particular decision that the competitor made because of BIPA.
  • Only then will we dive into BIPA.
  • Finally, we’ll circle back to Kami, and how it may be affected by BIPA. (Caution: I’m not a lawyer.)

What is a smart video doorbell?

Many of us can figure out what a smart video doorbell would do, since Kami isn’t the first company to offer such a product. (I’ll talk about another company in a little bit.)

The basic concept is that the owner of the video doorbell (whom I’ll refer to as the “user,” to be consistent with Kami’s terminology) manages a small database of faces that could be recognized by the video doorbell. For example, if I owned such a device, I would definitely want to enroll my face and the face of my wife, and I would probably want to enroll the faces of other relatives and close friends. Doing this would create an allowlist of people who are known to the smart video doorbell system.

However, because technology itself is neutral, I need to point out two things about a standard smart video doorbell implementation:

  • Depending upon the design, you can enroll a person into the system without the person knowing it. If the user of the system controls the enrollment, then the user has complete control over the people who are enrolled into the system. All I need is a picture of the person, and I can use that picture to enroll the person into my smart video doorbell. I can grab a picture that I took on New Year’s Eve, or I could even grab a picture from the Internet. After all, if President Joe Biden walked up to my front door, I’d definitely want to know about it. Now there are technological solutions to this; for example, liveness detection could be used to ensure that the person who is enrolling in the system is a live person and not a picture. But I’m not aware of any system that requires liveness detection for this particular use case.
  • You can enroll a person into the system for ANY reason. Usually consumer smart video doorbells are presented as a way to let you know when friends and family come to the door. But the technology has no way of detecting whether you are actually enrolling a “friend.” Perhaps you want to know when your ex-girlfriend comes to the door. Or perhaps you have a really good picture of the guy who’s been breaking into homes in your neighborhood. Now enterprise and government systems account for this by supporting separate allowlists and blocklists, but frankly you can put anyone onto any list for any reason. (See the sketch after this list.)
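As promised, here’s a toy sketch of that enrollment model. Notice that nothing in the data structure knows or cares whether the enrollee consented, which is exactly the gap that laws like BIPA target. The class, threshold, and similarity measure are all invented for illustration:

```python
def similarity(a: list[float], b: list[float]) -> float:
    """Toy cosine similarity between two face templates."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return dot / norm if norm else 0.0

class DoorbellFaceLists:
    """Illustrative allow/deny lists for a hypothetical smart doorbell."""

    def __init__(self) -> None:
        self.allow: dict[str, list[float]] = {}
        self.deny: dict[str, list[float]] = {}

    def enroll(self, name: str, template: list[float], allowed: bool = True) -> None:
        # Nothing here requires the enrollee's knowledge or consent.
        (self.allow if allowed else self.deny)[name] = template

    def classify(self, probe: list[float], threshold: float = 0.8) -> str:
        for name, tmpl in self.deny.items():
            if similarity(probe, tmpl) >= threshold:
                return f"ALERT: deny-listed visitor ({name})"
        for name, tmpl in self.allow.items():
            if similarity(probe, tmpl) >= threshold:
                return f"Known visitor: {name}"
        return "Unknown person at the door"
```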

So with that introduction, let’s see what Kami is offering, and why it’s different.

The Kami Doorbell Camera

Let’s return to the Kami press release. It, as well as the description of the item in Kami’s online store, parallels a lot of the features that you can find in any smart video doorbell.

Know exactly who’s at your door. Save the faces of friends and family in your Kami or YI Home App, allowing you to get notified if the person outside your front door is a familiar face or a stranger.

And it has other features, such as an IP-65 rating indicating that the camera will continue to work outdoors in challenging weather conditions.

However, Yamin Durrani, Kami’s CEO, emphasized a particular point in the press release:

“The Kami Doorbell Camera was inspired by a greater need for safety and peace of mind as people spend more time at home and consumers’ increasing desire to reside in smart homes,” said Yamin Durrani, CEO of Kami. “However, we noticed one gaping hole in the smart doorbell market — it was lacking an extremely advanced security solution that also puts the user in complete control of their privacy. In designing our video doorbell camera we considered all the ways people live in their homes to elegantly combine accelerated intelligence with a level of customization and privacy that is unmatched in today’s market. The result is a solution that provides comfort, safety and peace of mind.”

Privacy for the user(s) makes sense, because you don’t want someone hacking into the system and stealing the pictures and other stored information. As described, Kami lets the user(s) control their own data, and the system has presumably been designed from the ground up to support this.

But Kami isn’t the only product out there.

One of Kami’s competitors has an interesting footnote in its product description

There’s this company called Google. You may have heard of it. And Google offers a product called Nest Aware. This product is a subscription service that works with Nest cameras and provides various types of alerts for activities within the range of the cameras.

And Nest even has a feature that sounds, um, familiar to Kami users. Nest refers to the feature as “familiar face detection.”

Nest speakers and displays listen for unusual sounds. Nest cameras can spot a familiar face.4 And they all send intelligent alerts that matter.

So it sounds like Nest Aware has the same type of “allowlist” feature that allows the Nest Aware user to enroll friends and family (or whoever) into the system, so that they can be automatically recognized and so you can receive relevant information.

Hmm…did you note that there is a footnote next to the mention of “familiar face”? Let’s see what that footnote says.

4. Familiar face alerts not available on Nest Cams used in Illinois.

To the average consumer, that footnote probably looks a little odd. Why would this feature not be available in Illinois, but available in all the other states?

Or perhaps the average consumer may recall another Google app from three years ago, the Google Art & Culture app. That app became all the rage when it introduced a feature that let you compare your face to the faces on famous works of art. Well, it let you perform that comparison…unless you lived in Illinois or Texas.

So what’s the big deal about Illinois?

Those of us who are active in the facial recognition industry, or people who are active in the privacy industry, are well aware of the Illinois Biometric Information Privacy Act, or BIPA. This Act, which was passed in 2008, gives Illinois residents control over the use of their biometric data. And if a company violates that control, the resident is permitted to sue the offending company. And class action lawsuits are allowed, thus increasing the possible damages to the offending company.

And there are plenty of lawyers that are willing to help residents exercise their rights under BIPA.

One early example of a BIPA lawsuit was filed against L.A. Tan. This firm offered memberships, and rather than requiring the member to present a membership card, the member simply placed his or her fingerprint onto a scanner to verify membership. But under BIPA, that could be a problem:

The plaintiffs in the L.A. Tan case alleged that the company, which used customers’ fingerprint scans in lieu of key fobs for tanning membership ID purposes, violated the BIPA by failing to obtain the customers’ written consent to use the fingerprint data and by not disclosing to customers the company’s plans for storing the data or destroying it in the event a tanning customer terminated her salon membership or a franchise closed. The plaintiffs did not claim L.A. Tan illegally sold or lost customers’ fingerprint data, just that it did not handle the data as carefully as the BIPA requires.

L.A. Tan ended up settling the case for over a million dollars, but Illinois Policy wondered:

This outcome is reassuring for anyone concerned about the handling of private information like facial-recognition data and fingerprints, but it also could signal a flood of similar lawsuits to come.

And there certainly was a flood of lawsuits. I was working in strategic marketing at the time, and I would duly note the second lawsuit filed under BIPA, and then the third lawsuit, and the fourth…Eventually I stopped counting.

As of June 2019, 324 such lawsuits had been filed in total, including 161 in the first six months of 2019 alone. And some big names have been sued under BIPA.

Facebook settled for $650 million.

Google was sued in October 2019 over Google Photos, again in February 2020 over Google Photos, again in April 2020 over its G Suite for Education, again in July 2020 over its use of IBM’s Diversity in Faces algorithm, and probably several other times besides.

So you can understand why Google is a little reluctant to sell Nest Aware’s familiar face detection feature in Illinois.

So where does that leave Kami?

Here’s where the problem may lie. Based upon the other lawsuits, it appears that lawyers are alleging that before an Illinois resident’s biometric features are stored in a database, the person has to give consent for the biometric to be stored, and the person has to be informed of his or her rights under BIPA.

So such explicit permission has to be given for every biometric database physically within the state of Illinois?

Yes…and then some. Remember that Facebook and Google’s databases aren’t necessarily physically located within the state of Illinois, but those companies have been sued under BIPA. I’m not a lawyer, but conceivably an Illinois resident could sue a Swiss company, with its databases in Switzerland, for violating BIPA.

Now when someone sets up a Kami system, does the Kami user ensure that every Illinois resident has received the proper BIPA notices? And if the Kami user doesn’t do that, is Kami legally liable?

For all I know, the Kami enrollment feature may include explicit BIPA questions, such as “Is the person in this picture a resident of Illinois?” Then again, it may not.

Again, I’m not a lawyer, but it’s interesting to note that Google, who does have access to a bunch of lawyers, decided to dodge the issue by not selling familiar face detection to Illinois residents.

Which doesn’t answer the question of an Iowa Nest Aware familiar face detection user who enrolls an Illinois resident…