You may remember the May hoopla regarding amendments to Illinois’ Biometric Information Privacy Act (BIPA). These amendments do not eliminate the long-standing law, but they lessen the damages it imposes on offending companies.
The General Assembly is expected to send the bill to Illinois Governor JB Pritzker within 30 days. Gov. Pritzker will then have 60 days to sign it into law. It will be immediately effective.
While the BIPA amendment has passed the Illinois House and Senate and has been sent to the Governor, there is as yet no indication that he has signed the bill into law within the 60-day timeframe.
A proposed class action claims Photomyne, the developer of several photo-editing apps, has violated an Illinois privacy law by collecting, storing and using residents’ facial scans without authorization….
The lawsuit contends that the app developer has breached the BIPA’s clear requirements by failing to notify Illinois users of its biometric data collection practices and inform them how long and for what purpose the information will be stored and used.
In addition, the suit claims the company has unlawfully failed to establish public guidelines that detail its data retention and destruction policies.
Image from the mid-2010s. “John, how do you use the CrowdCompass app for this Users Conference?” Well, let me tell you…
Because of my former involvement with the biometric user conference managed over the years by IDEMIA, MorphoTrak, Sagem Morpho, Motorola, and older entities, I always like to peek and see what they’re doing these days. And it looks like they’re still prioritizing the educational element of the conference.
Although the 2024 Justice and Public Safety Conference won’t take place until September, the agenda is already online.
Subject to change, presumably.
This Joseph Courtesis session, scheduled for the afternoon of Thursday, September 12, caught my eye. It’s entitled “Ethical Use of Facial Recognition in Law Enforcement: Policy Before Technology.” Here’s an excerpt from the abstract:
This session will focus on post investigative image identification with the assistance of Facial Recognition Technology (FRT). It’s important to point out that FRT, by itself, does not produce Probable Cause to arrest.
Re-read that last sentence, then re-read it one more time. 100% of the wrongful arrest cases would be eliminated if everyone adopted this one practice. FRT is ONLY an investigative lead.
And Courtesis makes one related point:
Any image identification process that includes FRT should put policy before the technology.
Any technology that could deprive a person of their liberty needs a clear policy on its proper use.
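To make the “investigative lead only” rule concrete, here is a minimal sketch in Python of what “policy before technology” looks like in code. This is my own illustration, not Courtesis’s material or any agency’s actual system, and the field names are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class Investigation:
    """Conceptual model of the 'investigative lead only' policy.

    Field names are hypothetical; the sketch encodes one rule: an FRT
    candidate, by itself, never establishes probable cause.
    """
    frt_candidate: bool = False          # FRT returned a possible match
    corroborating_evidence: list = field(default_factory=list)

    def probable_cause(self) -> bool:
        # FRT alone is never sufficient, no matter how high the match score;
        # only independent evidence counts (a simplification, of course).
        return len(self.corroborating_evidence) > 0

case = Investigation(frt_candidate=True)
assert not case.probable_cause()  # a lead, not an identification

case.corroborating_evidence.append("eyewitness lineup under proper procedure")
# Only now may an arrest decision even be considered.
```

The point of the sketch is the hard-coded rule: no amount of FRT confidence flips the probable cause flag by itself.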
September conference attendees will definitely receive a comprehensive education from an authority on the topic.
But now I’m having flashbacks, and visions of Excel session planning workbooks are dancing in my head. Maybe they plan with Asana today.
Biometric security company Facewatch…is facing a lawsuit after its system wrongly flagged a 19-year-old girl as a shoplifter…. (The girl) was shopping at Home Bargains in Manchester in February when staff confronted her and threw her out of the store…. ‘I have never stolen in my life and so I was confused, upset and humiliated to be labeled as a criminal in front of a whole shop of people,’ she said in a statement.
While Big Brother Watch and others are using this story to conclude that facial recognition is evil and no one should ever use it, the problem isn’t the technology. The problem is when the technology is misused.
Were the Home Bargains staff trained in forensic face examination, so that they could confirm that the customer was the shoplifter? I doubt it.
Even if they were forensically trained, did the Home Bargains staff follow accepted practices and use the face recognition results ONLY as an investigative lead, and seek other corroborating evidence to identify the girl as a shoplifter? I doubt it.
Again, the problem is NOT the technology. The problem is MISUSE of the technology—by this English store, by a certain chain of U.S. stores, and even by U.S. police agencies who fail to use facial recognition results solely as an investigative lead.
A prospect approached me some time ago to have Bredemarket help tell this story. However, the prospect has delayed moving forward with the project, and so their story has not yet been told.
Does YOUR firm have a story that you have failed to tell?
If you’re a biometric product marketing expert, or even if you’re not, you’re presumably analyzing the possible effects on your identity/biometric product of the proposed changes to the Biometric Information Privacy Act (BIPA).
As of May 16, the Illinois General Assembly (House and Senate) passed a bill (SB2979) to amend BIPA. It awaits the Governor’s signature.
What is the amendment? Other than defining an “electronic signature,” the main purpose of the bill is to limit damages under BIPA. The new text regarding the “Right of action” codifies the concept of a “single violation.”
(T)he amended law DOES NOT CHANGE “Private Right of Action” so BIPA LIVES!
Companies that violate the strict requirements of BIPA aren’t off the hook. It’s just that the trial lawyers—whoops, I mean the affected consumers—make a lot less money.
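To see why the “single violation” language matters so much, consider some back-of-the-envelope arithmetic. This sketch assumes BIPA’s statutory damages of $1,000 per negligent violation and a hypothetical employer whose time clock scans each worker’s fingerprint twice a day; the numbers are illustrative only, not legal analysis:

```python
# Illustrative arithmetic only; assumes BIPA's statutory damages of
# $1,000 per negligent violation (740 ILCS 14/20) and a hypothetical
# workforce scenario.

NEGLIGENT_DAMAGES = 1_000  # dollars per negligent violation

def exposure(employees: int, scans_per_employee: int, per_scan: bool) -> int:
    """Rough damages exposure under per-scan vs. single-violation accrual."""
    violations = employees * (scans_per_employee if per_scan else 1)
    return violations * NEGLIGENT_DAMAGES

# A hypothetical employer: 500 workers clocking in twice a day, 250 days a year.
workers, scans = 500, 2 * 250
print(f"Per-scan accrual (pre-amendment reading): ${exposure(workers, scans, True):,}")
print(f"Single violation per person (amendment):  ${exposure(workers, scans, False):,}")
# Per-scan accrual (pre-amendment reading): $250,000,000
# Single violation per person (amendment):  $500,000
```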
We get all sorts of great tools, but do we know how to use them? And what are the consequences if we don’t know how to use them? Could we lose the use of those tools entirely due to bad publicity from misuse?
According to a report released in September by the US Government Accountability Office, only 5 percent of the 196 FBI agents who have access to facial recognition technology from outside vendors have completed any training on how to properly use the tools.
It turns out that the study is NOT limited to FBI use of facial recognition services, but also addresses six other federal agencies: the Bureau of Alcohol, Tobacco, Firearms and Explosives (the guvmint doesn’t believe in the Oxford comma); U.S. Customs and Border Protection; the Drug Enforcement Administration; Homeland Security Investigations; the U.S. Marshals Service; and the U.S. Secret Service.
Initially, none of the seven agencies required users to complete facial recognition training. As of April 2023, two of the agencies (Homeland Security Investigations and the U.S. Marshals Service) required training, two (the FBI and Customs and Border Protection) did not, and the other three had quit using these four facial recognition services.
The FBI stated that facial recognition training was recommended as a “best practice,” but not mandatory. And when something isn’t mandatory, you can guess what happened:
GAO found that few of these staff completed the training, and across the FBI, only 10 staff completed facial recognition training of 196 staff that accessed the service. FBI said they intend to implement a training requirement for all staff, but have not yet done so.
Although not a requirement, FBI officials said they recommend (as a best practice) that some staff complete FBI’s Face Comparison and Identification Training when using Clearview AI. The recommended training course, which is 24 hours in length, provides staff with information on how to interpret the output of facial recognition services, how to analyze different facial features (such as ears, eyes, and mouths), and how changes to facial features (such as aging) could affect results.
However, this type of training was not recommended for all FBI users of Clearview AI, and was not recommended for any FBI users of Marinus Analytics or Thorn.
I should note that the report was issued in September 2023, based upon data gathered earlier in the year, and that for all I know the FBI now mandates such training.
Or maybe it doesn’t.
What about your state and local facial recognition users?
Of course, training for federal facial recognition users is only a small part of the story, since most of the law enforcement activity takes place at the state and local level. State and local users need training so that they can understand:
The anatomy of the face, and how it affects comparisons between two facial images.
How cameras work, and how this affects comparisons between two facial images.
How poor quality images can adversely affect facial recognition.
How facial recognition should ONLY be used as an investigative lead.
If facial recognition users had been trained, none of the false arrests over the last few years would have taken place.
The users would have realized that the poor images were not of sufficient quality to determine a match.
The users would have realized that even if they had been of sufficient quality, facial recognition must only be used as an investigative lead, and once other data had been checked, the cases would have fallen apart.
But the false arrests gave the privacy advocates the ammunition they needed.
Not to insist upon proper training in the use of facial recognition, but to demand a total ban:
Like nuclear or biological weapons, facial recognition’s threat to human society and civil liberties far outweighs any potential benefits. Silicon Valley lobbyists are disingenuously calling for regulation of facial recognition so they can continue to profit by rapidly spreading this surveillance dragnet. They’re trying to avoid the real debate: whether technology this dangerous should even exist. Industry-friendly and government-friendly oversight will not fix the dangers inherent in law enforcement’s discriminatory use of facial recognition: we need an all-out ban.
(And just wait until the anti-facial recognition forces discover that this is not only a plot of evil Silicon Valley, but also a plot of evil non-American foreign interests located in places like Paris and Tokyo.)
Because the anti-facial recognition forces want us to remove the use of technology and go back to the good old days…of eyewitness misidentification.
Eyewitnesses are often expected to identify perpetrators of crimes based on memory, which is incredibly malleable. Under intense pressure, through suggestive police practices, or over time, an eyewitness is more likely to find it difficult to correctly recall details about what they saw.
Forensic examiners, YOU’RE DOING IT WRONG, at least according to this bold claim:
“Columbia engineers have built a new AI that shatters a long-held belief in forensics–that fingerprints from different fingers of the same person are unique. It turns out they are similar, only we’ve been comparing fingerprints the wrong way!” (From Newswise)
Couple that claim with the initial rejection of the paper by multiple forensic journals because “it is well known that every fingerprint is unique” (apparently the reviewer never read the NAS report), and you have the makings of a sexy story.
Or do you?
And what is the paper’s basis for the claim that fingerprints from the same person are NOT unique?
“The AI was not using ‘minutiae,’ which are the branchings and endpoints in fingerprint ridges – the patterns used in traditional fingerprint comparison,” said Guo, who began the study as a first-year student at Columbia Engineering in 2021. “Instead, it was using something else, related to the angles and curvatures of the swirls and loops in the center of the fingerprint.” (From Newswise)
Perhaps there are similarities in the patterns of the fingers at the center of a print, but that doesn’t negate the uniqueness of the bifurcations and ridge ending locations throughout the print. Guo’s method uses less of the distal fingerprint than traditional minutiae analysis.
But maybe there are forensic applications for this alternate print comparison technique, at least as an investigative lead. (Let me repeat that again: “investigative lead.”) Courtroom use will be limited because there is no AI equivalent of an expert witness to explain to the court how the comparison was made, or whether another expert AI algorithm would yield the same results.
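For illustration, here is a rough Python sketch of the difference between the two feature types. This is emphatically not the Columbia team’s code; the data is random and the matcher is a toy, but it shows why similar center-of-print characteristics (modeled here as similar learned embeddings) can coexist with distinct minutiae constellations:

```python
# A toy contrast between minutiae matching and embedding similarity.
# All data is synthetic; no claim is made about the actual Columbia model.
import numpy as np

rng = np.random.default_rng(42)

def minutiae_similarity(a: np.ndarray, b: np.ndarray, tol: float = 5.0) -> float:
    """Fraction of minutiae in `a` with a nearby counterpart in `b`.

    Each row is (x, y, ridge_angle); a real matcher also models rotation,
    translation, and minutia type (bifurcation vs. ridge ending).
    """
    matched = 0
    for point in a:
        dists = np.linalg.norm(b[:, :2] - point[:2], axis=1)
        if dists.min() < tol:
            matched += 1
    return matched / len(a)

def embedding_similarity(e1: np.ndarray, e2: np.ndarray) -> float:
    """Cosine similarity of learned embeddings, as a deep model might use."""
    return float(e1 @ e2 / (np.linalg.norm(e1) * np.linalg.norm(e2)))

# Hypothetical data: two different fingers of one person. The minutiae
# constellations are independent, but the embeddings share a common
# "style" component (standing in for center-of-print curvature).
finger1_minutiae = rng.uniform(0, 500, size=(40, 3))
finger2_minutiae = rng.uniform(0, 500, size=(40, 3))
shared_style = rng.normal(size=128)
emb1 = shared_style + 0.1 * rng.normal(size=128)
emb2 = shared_style + 0.1 * rng.normal(size=128)

print(f"minutiae overlap:     {minutiae_similarity(finger1_minutiae, finger2_minutiae):.2f}")
print(f"embedding similarity: {embedding_similarity(emb1, emb2):.2f}")
```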
As I said, I shared the piece above in several places, including one frequented by forensic experts. One commenter in a private area offered the following observation, in part:
What was the validation process? Did they have a qualified latent print examiner confirm their data?
From a private source.
Before you dismiss the comment as reflecting a stick-in-the-mud forensic old fogey who does not recognize the great wisdom of our AI overlords, remember (as I noted above) that forensic experts are required to testify in court about things like this. If artificial intelligence is claimed to identify relationships between fingers from the same person, you’d better make really sure that this is true before someone is put to death.
I hate to repeat the phrase used by scientific study authors in search of more funding, but…
Having passed, eventually, through the UK’s two houses of Parliament, the bill received royal assent (October 26)….
[A]dded in (to the Act) is a highly divisive requirement for messaging platforms to scan users’ messages for illegal material, such as child sexual abuse material, which tech companies and privacy campaigners say is an unwarranted attack on encryption.
This not only opens up issues regarding encryption and privacy, but also specific identity technologies such as age verification and age estimation.
This post looks at three types of firms that are affected by the UK Online Safety Act, the stories they are telling, and the stories they may need to tell in the future. What is YOUR firm’s Online Safety Act-related story?
What three types of firms are affected by the UK Online Safety Act?
As of now I have been unable to locate a full version of the final Act, but presumably the provisions from this July 2023 version (PDF) have only undergone minor tweaks.
Among other things, this version discusses “User identity verification” in section 65, “Category 1 service” in section 96(10)(a), “United Kingdom user” in section 228(1), and a multitude of other terms that affect how companies will conduct business under the Act.
I am focusing on three different types of companies:
Technology services (such as Yoti) that provide identity verification, including but not limited to age verification and age estimation.
User-to-user services (such as WhatsApp) that provide encrypted messages.
User-to-user services (such as Wikipedia) that allow users (including United Kingdom users) to contribute content.
What types of stories will these firms have to tell, now that the Act is law?
For ALL services, the story will vary as Ofcom decides how to implement the Act, but we are already seeing the stories from identity verification services. Here is what Yoti stated after the Act became law:
We have a range of age assurance solutions which allow platforms to know the age of users, without collecting vast amounts of personal information. These include:
Age estimation: a user’s age is estimated from a live facial image. They do not need to use identity documents or share any personal information. As soon as their age is estimated, their image is deleted – protecting their privacy at all times. Facial age estimation is 99% accurate and works fairly across all skin tones and ages.
Digital ID app: a free app which allows users to verify their age and identity using a government-issued identity document. Once verified, users can use the app to share specific information – they could just share their age or an ‘over 18’ proof of age.
MailOnline has approached WhatsApp’s parent company Meta for comment now that the Bill has received Royal Assent, but the firm has so far refused to comment.
[T]o comply with the new law, the platform says it would be forced to weaken its security, which would not only undermine the privacy of WhatsApp messages in the UK but also for every user worldwide.
‘Ninety-eight per cent of our users are outside the UK. They do not want us to lower the security of the product, and just as a straightforward matter, it would be an odd choice for us to choose to lower the security of the product in a way that would affect those 98 per cent of users,’ Mr Cathcart has previously said.
Companies, from Big Tech down to smaller platforms and messaging apps, will need to comply with a long list of new requirements, starting with age verification for their users. (Wikipedia, the eighth-most-visited website in the UK, has said it won’t be able to comply with the rule because it violates the Wikimedia Foundation’s principles on collecting data about its users.)
All of these firms have shared their stories either before or after the Act became law, and those stories will change depending upon what Ofcom decides.
For example, when biometric companies want to justify the use of their technology, they have found that it is very effective to position biometrics as a way to combat sex trafficking.
Similarly, moves to rein in social media are positioned as a way to preserve mental health.
Now that’s a not-so-pretty picture, but it effectively speaks to emotions.
“If poor vulnerable children are exposed to addictive, uncontrolled social media, YOUR child may end up in a straitjacket!”
In New York state, four government officials have declared that the ONLY way to preserve the mental health of underage social media users is via two bills, one of which is the “New York Child Data Protection Act.”
But there is a challenge in enforcing ALL of the bill’s provisions…and only one way to solve it. An imperfect way—age estimation.
Because they want to protect the poor vulnerable children.
And because the major U.S. social media companies are headquartered in California. But I digress.
So why do they say that children need protection?
Recent research has shown devastating mental health effects associated with children and young adults’ social media use, including increased rates of depression, anxiety, suicidal ideation, and self-harm. The advent of dangerous, viral ‘challenges’ being promoted through social media has further endangered children and young adults.
Of course one can also argue that social media is harmful to adults, but the New Yorkers aren’t going to go that far.
So they are just going to protect the poor vulnerable children.
This post isn’t going to deeply analyze one of the two bills the quartet have championed, but I will briefly mention that bill now.
The “Stop Addictive Feeds Exploitation (SAFE) for Kids Act” (S7694/A8148) defines “addictive feeds” as those that are arranged by a social media platform’s algorithm to maximize the platform’s use.
Those of us who are flat-out elderly vaguely recall that this replaced the former “chronological feed” in which the most recent content appeared first, and you had to scroll down to see that really cool post from two days ago. New York wants the chronological feed to be the default for social media users under 18.
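The distinction the bill draws is easy to express in code. A minimal sketch, assuming a hypothetical predicted_engagement score standing in for whatever proprietary signal a platform’s ranking model actually produces:

```python
# Illustrative sketch of the distinction the SAFE for Kids Act draws.
# "predicted_engagement" is a hypothetical stand-in for a platform's
# proprietary ranking signal.
from datetime import datetime

posts = [
    {"text": "really cool post from two days ago",
     "posted": datetime(2024, 5, 20), "predicted_engagement": 0.9},
    {"text": "newer but duller post",
     "posted": datetime(2024, 5, 22), "predicted_engagement": 0.2},
]

def algorithmic_feed(posts):
    """Ranked to maximize the platform's use: an 'addictive feed' under the bill."""
    return sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True)

def chronological_feed(posts):
    """Most recent first: the default New York wants for users under 18."""
    return sorted(posts, key=lambda p: p["posted"], reverse=True)

print([p["text"] for p in algorithmic_feed(posts)])
# ['really cool post from two days ago', 'newer but duller post']
print([p["text"] for p in chronological_feed(posts)])
# ['newer but duller post', 'really cool post from two days ago']
```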
The bill also proposes to limit under-18 access to social media without parental consent, particularly between midnight and 6:00 am.
And those who love Illinois BIPA will be pleased to know that the bill allows parents (and their lawyers) to sue for damages.
Previous efforts to control underage use of social media have faced legal scrutiny, but since Attorney General James has sworn to uphold the U.S. Constitution, presumably she has thought about all this.
Enough about SAFE for Kids. Let’s look at the other bill.
The New York Child Data Protection Act
The second bill, and the one that concerns me, is the “New York Child Data Protection Act” (S7695/A8149). Here is how the quartet describes how this bill will protect the poor vulnerable children.
With few privacy protections in place for minors online, children are vulnerable to having their location and other personal data tracked and shared with third parties. To protect children’s privacy, the New York Child Data Protection Act will prohibit all online sites from collecting, using, sharing, or selling personal data of anyone under the age of 18 for the purposes of advertising, unless they receive informed consent or unless doing so is strictly necessary for the purpose of the website. For users under 13, this informed consent must come from a parent.
And again, this bill provides a BIPA-like mechanism for parents or guardians (and their lawyers) to sue for damages.
But let’s dig into the details. With apologies to the New York State Assembly, I’m going to dig into the Senate version of the bill (S7695). Bear in mind that this bill could be amended after I post this, and some of the portions that I cite could change.
This only applies to natural persons. So the bots are safe, regardless of age.
Speaking of age, the age of 18 isn’t the only age referenced in the bill. Here’s a part of the “privacy protection by default” section:
§ 899-FF. PRIVACY PROTECTION BY DEFAULT.
1. EXCEPT AS PROVIDED FOR IN SUBDIVISION SIX OF THIS SECTION AND SECTION EIGHT HUNDRED NINETY-NINE-JJ OF THIS ARTICLE, AN OPERATOR SHALL NOT PROCESS, OR ALLOW A THIRD PARTY TO PROCESS, THE PERSONAL DATA OF A COVERED USER COLLECTED THROUGH THE USE OF A WEBSITE, ONLINE SERVICE, ONLINE APPLICATION, MOBILE APPLICATION, OR CONNECTED DEVICE UNLESS AND TO THE EXTENT:
(A) THE COVERED USER IS TWELVE YEARS OF AGE OR YOUNGER AND PROCESSING IS PERMITTED UNDER 15 U.S.C. § 6502 AND ITS IMPLEMENTING REGULATIONS; OR
(B) THE COVERED USER IS THIRTEEN YEARS OF AGE OR OLDER AND PROCESSING IS STRICTLY NECESSARY FOR AN ACTIVITY SET FORTH IN SUBDIVISION TWO OF THIS SECTION, OR INFORMED CONSENT HAS BEEN OBTAINED AS SET FORTH IN SUBDIVISION THREE OF THIS SECTION.
So a lot of this bill depends upon whether a person is over or under the age of eighteen, or over or under the age of thirteen.
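Paraphrasing the quoted section as code makes the age branching obvious. A sketch only; the statute, not this function, controls what “strictly necessary” and “informed consent” actually mean:

```python
# A paraphrase in code of the quoted § 899-FF age branching.
# Illustrative only; the statute defines the actual standards.
def may_process_personal_data(age: int, coppa_parental_consent: bool,
                              strictly_necessary: bool,
                              informed_consent: bool) -> bool:
    if age <= 12:
        # Subdivision (A): COPPA path, 15 U.S.C. § 6502 --
        # verifiable parental consent required.
        return coppa_parental_consent
    if age < 18:
        # Subdivision (B): 13 through 17 -- processing must be strictly
        # necessary, or the minor must give informed consent.
        return strictly_necessary or informed_consent
    return True  # adults fall outside the covered-user definition
```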
And that’s a problem.
How old are you?
To enforce the bill, a service needs to know whether or not a person is 18 years old. And I don’t think the quartet will be satisfied with the way that alcohol websites determine whether someone is 21 years old (click a button to attest that you are of age).
Attorney General James and the others would presumably prefer that the social media companies verify ages with a government-issued ID such as a state driver’s license, a state identification card, or a national passport. This is how most entities verify ages when they have to satisfy legal requirements.
For some people, even some minors, this is not that much of a problem. Anyone who wants to drive in New York State must have a driver’s license, and you have to be at least 16 years old to get a driver’s license. Admittedly some people in New York City never bother to get a driver’s license, but at some point these people will probably get a state ID card.
However, there are going to be some 17 year olds who don’t have a driver’s license, government ID or passport.
And some 16 year olds.
And once you look at younger people—15 year olds, 14 year olds, 13 year olds, 12 year olds—the chances of them having a government-issued identification document are much lower.
What are these people supposed to do? Provide a birth certificate? And how will the social media companies know if the birth certificate is legitimate?
But there’s another way to determine ages—age estimation.
How old are you, part 2
As long-time readers of the Bredemarket blog know, I have struggled with the issue of age verification, especially for people who do not have driver’s licenses or other government identification. Age estimation in the absence of a government ID is still an inexact science, as even Yoti has stated.
Our technology is accurate for 6 to 12 year olds, with a mean absolute error (MAE) of 1.3 years, and of 1.4 years for 13 to 17 year olds. These are the two age ranges regulators focus upon to ensure that under 13s and under 18s do not have access to age restricted goods and services.
So if a minor does not have a government ID, and the social media firm has to use age estimation to determine a minor’s age for purposes of the New York Child Data Protection Act, the following two scenarios are possible:
An 11 year old may be incorrectly allowed to give informed consent for purposes of the Act.
A 14 year old may be incorrectly denied the ability to give informed consent for purposes of the Act.
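How often might those two scenarios occur? Here is a rough simulation, assuming (simplistically) that estimation errors are normally distributed with a standard deviation equal to Yoti’s published MAE; real error distributions are not published, so treat these percentages as illustrative only:

```python
# Rough simulation of threshold misclassification under age estimation.
# Assumes normally distributed errors with sd = published MAE -- a crude
# approximation (for a normal distribution, MAE is about 0.8 * sd, so
# this slightly overstates the spread).
import numpy as np

rng = np.random.default_rng(0)

def misclassification_rate(true_age: float, mae: float, threshold: int,
                           trials: int = 100_000) -> float:
    """Share of estimates landing on the wrong side of `threshold`."""
    estimates = true_age + rng.normal(0, mae, trials)
    wrong = (estimates >= threshold) if true_age < threshold else (estimates < threshold)
    return wrong.mean()

print(f"11 year old judged 13 or over: {misclassification_rate(11, 1.3, 13):.1%}")
print(f"14 year old judged under 13:   {misclassification_rate(14, 1.4, 13):.1%}")
```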
Is age estimation “good enough for government work”?
At the highest level, debates regarding government and enterprise use of biometric technology boil down to a single question: do we keep people safe, or do we preserve individual privacy?
In the state of Montana, school safety is winning over school privacy—for now.
The state Legislature earlier this year passed a law barring state and local governments from continuous use of facial recognition technology, typically in the form of cameras capable of reading and collecting a person’s biometric data, like the identifiable features of their face and body. A bipartisan group of legislators went toe-to-toe with software companies and law enforcement in getting Senate Bill 397 over the finish line, contending public safety concerns raised by the technology’s supporters don’t overcome individual privacy rights.
School districts, however, were specifically carved out of the definition of state and local governments to which the facial recognition technology law applies.
At a minimum, Montana school districts seek to abide by two existing federal laws when installing facial recognition and video surveillance systems.
Without many state-level privacy protection laws in place, school policies typically lean on the Children’s Online Privacy Protection Act (COPPA), a federal law requiring parental consent in order for websites to collect data on their children, or the Family Educational Rights and Privacy Act (FERPA), which protects the privacy of student education records.
If a vendor doesn’t agree to abide by these laws, then the Montana School Board Association recommends that the school district not do business with the vendor.
The Family Educational Rights and Privacy Act was passed by the US federal government to protect the privacy of students’ educational records. This law requires public schools and school districts to give families control over any personally identifiable information about the student.
(The Sun River Valley School District’s) use of the technology is more focused on keeping people who shouldn’t be on school property away, he said, such as a parent who lost custody of their child.
(Simms) High School Principal Luke McKinley said it’s been more frequent to use the facial recognition technology during extra-curricular activities, when football fans get too rowdy for a high school sports event.
Technology (in this case from Verkada) helps the Sun River School District, especially in its rural setting. Back in 2022, it took law enforcement an estimated 45 minutes to respond to school incidents. The hope is that the technology could identify those who engaged in illegal activity, or at least deter it.
What about other school districts?
When I created my educational identity page, I included the four key words “When permitted by law.” While Montana school districts are currently permitted to use facial recognition and video surveillance, other school districts need to check their local laws before implementing such a system, and also need to ensure that they comply with federal laws such as COPPA and FERPA.
I may be, um, biased in my view, but as long as the school district (or law enforcement agency, or apartment building owner, or whoever) complies with all applicable laws, and implements the technology with a primary purpose of protecting people rather than spying on them, facial recognition is a far better tool for protecting people than manual recognition methods that rely on all-too-fallible human beings.