A Legal Leg to Stand On: The New Triad of AI Governance

In business, it is best to use a three-legged stool.

  • A two-legged stool obviously tips over, and you fall to the ground.
  • A four-legged stool is too robust for these cost-conscious days, when the jettisoning of employees is policy at both the private and public levels.

But a three-legged stool is just right, as project managers already know when they strive to balance time, cost, and quality.

Perhaps the three-legged stool was in the back of Yunique Demann’s mind when she wrote a piece for the Information Systems Audit and Control Association (ISACA) entitled “The New Triad of AI Governance: Privacy, Cybersecurity, and Legal.” If you rely only on privacy and cybersecurity, you will fall to the ground like someone precariously balanced on a two-legged stool.

“As AI regulations evolve globally, legal expertise has become a strategic necessity in AI governance. The role of legal professionals now extends beyond compliance into one that is involved in shaping AI strategy and legally addressing ethical considerations…”

Read more of Demann’s thoughts here.

(Stool image public domain)

When Remote Bar Exam Technology Failed, You Won’t Believe What Happened Next

(Imagen 3)

This is a remote education post, but not an educational identity post.

I have previously discussed online test taking, and I guess the State Bar of California reads the Bredemarket blog, because it decided that an online bar exam would be a great idea: it would reduce the costs of renting large halls for test-taking purposes.

But it didn’t work.

“The online testing platforms repeatedly crashed before some applicants even started. Others struggled to finish and save essays, experienced screen lags and error messages and could not copy and paste text from test questions into the exam’s response field — a function officials had stated would be possible.”

No surprise, but the remote bar exam debacle was so bad that students are filing…lawsuits.

“Some students also filed a complaint Thursday in the U.S. District Court for the Northern District of California, accusing Meazure Learning, the company that administered the exam, of ‘failing spectacularly’ and causing an ‘unmitigated disaster.’”

Biometric Product Marketers, BIPA Remains Unaltered

(Part of the biometric product marketing expert series)

You may remember the May hoopla regarding amendments to Illinois’ Biometric Information Privacy Act (BIPA). These amendments do not eliminate the long-standing law, but they do lessen the damages it imposes on offending companies.

Back on May 29, Fox Rothschild explained the timeline:

The General Assembly is expected to send the bill to Illinois Governor JB Pritzker within 30 days. Gov. Pritzker will then have 60 days to sign it into law. It will be immediately effective.

According to the Illinois General Assembly website, the Senate sent the bill to the Governor on June 14.

While the BIPA amendment passed the Illinois House and Senate and was sent to the Governor, there is no indication that he signed the bill into law within the 60-day timeframe.

So BIPA 1.0 is still in effect.

As Photomyne found out:

A proposed class action claims Photomyne, the developer of several photo-editing apps, has violated an Illinois privacy law by collecting, storing and using residents’ facial scans without authorization….

The lawsuit contends that the app developer has breached the BIPA’s clear requirements by failing to notify Illinois users of its biometric data collection practices and inform them how long and for what purpose the information will be stored and used.

In addition, the suit claims the company has unlawfully failed to establish public guidelines that detail its data retention and destruction policies.

From https://www.instagram.com/p/C7ZWA9NxUur/.

Investigative Lead, Again

Image from the mid-2010s. “John, how do you use the CrowdCompass app for this Users Conference?” Well, let me tell you…

Because of my former involvement with the biometric user conference managed over the years by IDEMIA, MorphoTrak, Sagem Morpho, Motorola, and earlier entities, I always like to peek and see what they’re doing these days. And it looks like they’re still prioritizing the educational element of the conference.

Although the 2024 Justice and Public Safety Conference won’t take place until September, the agenda is already online.

Subject to change, presumably.

This Joseph Courtesis session, scheduled for the afternoon of Thursday, September 12, caught my eye. It’s entitled “Ethical Use of Facial Recognition in Law Enforcement: Policy Before Technology.” Here’s an excerpt from the abstract:

This session will focus on post investigative image identification with the assistance of Facial Recognition Technology (FRT). It’s important to point out that FRT, by itself, does not produce Probable Cause to arrest.

Re-read that last sentence, then re-read it one more time. 100% of the wrongful arrest cases would be eliminated if everyone adopted this one practice. FRT is ONLY an investigative lead.

And Courtesis makes one related point:

Any image identification process that includes FRT should put policy before the technology.

Any technology that could deprive a person of their liberty needs a clear policy on its proper use.
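To make the “investigative lead” discipline concrete, here is a minimal sketch of what a policy-before-technology gate might look like if encoded in software. The field names, score, and corroboration categories are hypothetical; they are not drawn from Courtesis’s session or from any agency’s actual policy.

    from dataclasses import dataclass

    @dataclass
    class FrtCandidate:
        name: str
        similarity: float  # an algorithm score, NOT a probability of guilt

    def supports_arrest(candidate: FrtCandidate,
                        corroborating_evidence: list[str]) -> bool:
        """Treat an FRT hit as an investigative lead only.

        No similarity score, however high, establishes probable cause
        by itself; only independent corroboration gathered after the
        lead can support an arrest decision.
        """
        return len(corroborating_evidence) > 0

    lead = FrtCandidate("candidate #1", similarity=0.99)
    print(supports_arrest(lead, []))                     # False: score alone
    print(supports_arrest(lead, ["witness lineup ID"]))  # True: lead plus corroboration

The point of the sketch is the gate, not the score: the decision that could deprive someone of liberty sits outside the algorithm, in policy.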

September conference attendees will definitely receive a comprehensive education from an authority on the topic.

But now I’m having flashbacks, and visions of Excel session planning workbooks are dancing in my head. Maybe they plan with Asana today.

Don’t Misuse Facial Recognition Technology

From https://www.biometricupdate.com/202405/facewatch-met-police-face-lawsuits-after-facial-recognition-misidentification.

From Biometric Update:

Biometric security company Facewatch…is facing a lawsuit after its system wrongly flagged a 19-year-old girl as a shoplifter…. (The girl) was shopping at Home Bargains in Manchester in February when staff confronted her and threw her out of the store…. ‘I have never stolen in my life and so I was confused, upset and humiliated to be labeled as a criminal in front of a whole shop of people,’ she said in a statement.

While Big Brother Watch and others are using this story to conclude that facial recognition is evil and no one should ever use it, the problem isn’t the technology. The problem is when the technology is misused.

  • Were the Home Bargains staff trained in forensic face examination, so that they could confirm that the customer was the shoplifter? I doubt it.
  • Even if they were forensically trained, did the Home Bargains staff follow accepted practices and use the face recognition results ONLY as an investigative lead, and seek other corroborating evidence to identify the girl as a shoplifter? I doubt it.

Again, the problem is NOT the technology. The problem is MISUSE of the technology—by this English store, by a certain chain of U.S. stores, and even by U.S. police agencies who fail to use facial recognition results solely as an investigative lead.

A prospect approached me some time ago to have Bredemarket help tell this story. However, the prospect has delayed moving forward with the project, and so their story has not yet been told.

Does YOUR firm have a story that you have failed to tell?

What is Your Biometric Firm’s BIPA Product Marketing Story?

(Part of the biometric product marketing expert series)

If your biometric firm conducts business in the United States, then your biometric firm probably conducts business in Illinois.

(With some exceptions.)

Your firm and your customers are impacted by Illinois’ Biometric Information Privacy Act, or BIPA.

Including requirements for consumer consent for use of biometrics.

And heavy fines (currently VERY heavy fines) if you don’t obtain that consent.

What is your firm telling your customers about BIPA?

Bredemarket has mentioned BIPA several times in the Bredemarket blog.

But what has YOUR firm said about BIPA?

And if your firm has said nothing about BIPA, why not?

Perhaps the biometric product marketing expert can ensure that your product is marketed properly in Illinois.

Contact Bredemarket before it’s too late.

From https://www.instagram.com/p/C7ZWA9NxUur/.

BIPA Remains a Four-Letter Word

(Part of the biometric product marketing expert series)

If you’re a biometric product marketing expert, or even if you’re not, you’re presumably analyzing the possible effects of the proposed changes to the Biometric Information Privacy Act (BIPA) on your identity/biometric product.

From ilga.gov.

As of May 16, the Illinois General Assembly (House and Senate) passed a bill (SB2979) to amend BIPA. It awaits the Governor’s signature.

What is the amendment? Other than defining an “electronic signature,” the main purpose of the bill is to limit damages under BIPA. The new text regarding the “Right of action” codifies the concept of a “single violation.”

(b) For purposes of subsection (b) of Section 15, a private entity that, in more than one instance, collects, captures, purchases, receives through trade, or otherwise obtains the same biometric identifier or biometric information from the same person using the same method of collection in violation of subsection (b) of Section 15 has committed a single violation of subsection (b) of Section 15 for which the aggrieved person is entitled to, at most, one recovery under this Section.

(c) For purposes of subsection (d) of Section 15, a private entity that, in more than one instance, discloses, rediscloses, or otherwise disseminates the same biometric identifier or biometric information from the same person to the same recipient using the same method of collection in violation of subsection (d) of Section 15 has committed a single violation of subsection (d) of Section 15 for which the aggrieved person is entitled to, at most, one recovery under this Section regardless of the number of times the private entity disclosed, redisclosed, or otherwise disseminated the same biometric identifier or biometric information of the same person to the same recipient.
From ilga.gov.
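To see what the “single violation” language changes in practice, here is a minimal damages-aggregation sketch contrasting the per-scan accrual that courts had previously read into the statute with the amended reading. The scan log and the use of BIPA’s $1,000 liquidated damages figure for a negligent violation are purely illustrative; this is not legal advice.

    from typing import NamedTuple

    class Collection(NamedTuple):
        person: str
        identifier: str  # e.g., "faceprint"
        method: str      # the "same method of collection" in the bill text

    # Hypothetical scan log: one person scanned three times, another once.
    scans = [
        Collection("alice", "faceprint", "camera"),
        Collection("alice", "faceprint", "camera"),
        Collection("alice", "faceprint", "camera"),
        Collection("bob",   "faceprint", "camera"),
    ]

    LIQUIDATED_DAMAGES = 1_000  # negligent-violation figure, for illustration

    # Per-scan accrual: every collection is a separate violation.
    per_scan = len(scans) * LIQUIDATED_DAMAGES

    # Amended reading: repeated collection of the same identifier from the
    # same person by the same method is a single violation.
    single_violation = len(set(scans)) * LIQUIDATED_DAMAGES

    print(per_scan)          # 4000
    print(single_violation)  # 2000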

So does this mean that Google Nest Cam’s “familiar face alert” feature will now be available in Illinois?

Probably not. As Doug “BIPAbuzz” OGorden has noted:

(T)he amended law DOES NOT CHANGE “Private Right of Action” so BIPA LIVES!

Companies that violate the strict requirements of BIPA aren’t off the hook. It’s just that the trial lawyers—whoops, I mean the affected consumers—make a lot less money.

If We Don’t Train Facial Recognition Users, There Will Be No Facial Recognition

(Part of the biometric product marketing expert series)

We get all sorts of great tools, but do we know how to use them? And what are the consequences if we don’t know how to use them? Could we lose the use of those tools entirely due to bad publicity from misuse?

Hida Viloria. By Intersex77 – Own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=98625035

Do your federal facial recognition users know what they are doing?

I recently saw a WIRED article that primarily talked about submitting Parabon Nanolabs-generated images to a facial recognition program. But buried in the article was this alarming quote:

According to a report released in September by the US Government Accountability Office, only 5 percent of the 196 FBI agents who have access to facial recognition technology from outside vendors have completed any training on how to properly use the tools.

From https://www.wired.com/story/parabon-nanolabs-dna-face-models-police-facial-recognition/

Now I had some questions after reading that sentence: namely, what does “have access” mean? To answer those questions, I had to find the study itself, GAO-23-105607, Facial Recognition Services: Federal Law Enforcement Agencies Should Take Actions to Implement Training, and Policies for Civil Liberties.

It turns out that the study is NOT limited to FBI use of facial recognition services, but also addresses six other federal agencies: the Bureau of Alcohol, Tobacco, Firearms and Explosives (the guvmint doesn’t believe in the Oxford comma); U.S. Customs and Border Protection; the Drug Enforcement Administration; Homeland Security Investigations; the U.S. Marshals Service; and the U.S. Secret Service.

In addition, the study confines itself to four facial recognition services: Clearview AI, IntelCenter, Marinus Analytics, and Thorn. It does not address other uses of facial recognition by the agencies, such as the FBI’s use of IDEMIA in its Next Generation Identification system (IDEMIA facial recognition technology is also used by the Department of Defense).

Two of the GAO’s findings:

  • Initially, none of the seven agencies required users to complete facial recognition training. As of April 2023, two of the agencies (Homeland Security Investigations and the U.S. Marshals Service) required training, two (the FBI and Customs and Border Protection) did not, and the other three had quit using these four facial recognition services.
  • The FBI stated that facial recognition training was recommended as a “best practice,” but not mandatory. And when something isn’t mandatory, you can guess what happened:

GAO found that few of these staff completed the training, and across the FBI, only 10 staff completed facial recognition training of 196 staff that accessed the service. FBI said they intend to implement a training requirement for all staff, but have not yet done so. 

From https://www.gao.gov/products/gao-23-105607.

So if you use my three levels of importance (TLOI) model, facial recognition training is important, but not critically important. Therefore, it wasn’t done.

The detailed version of the report includes additional information on the FBI’s training requirements…I mean recommendations:

Although not a requirement, FBI officials said they recommend (as a best practice) that some staff complete FBI’s Face Comparison and Identification Training when using Clearview AI. The recommended training course, which is 24 hours in length, provides staff with information on how to interpret the output of facial recognition services, how to analyze different facial features (such as ears, eyes, and mouths), and how changes to facial features (such as aging) could affect results.

From https://www.gao.gov/assets/gao-23-105607.pdf.

However, this type of training was not recommended for all FBI users of Clearview AI, and was not recommended for any FBI users of Marinus Analytics or Thorn.

I should note that the report was issued in September 2023, based upon data gathered earlier in the year, and that for all I know the FBI now mandates such training.

Or maybe it doesn’t.

What about your state and local facial recognition users?

Of course, training for federal facial recognition users is only a small part of the story, since most of the law enforcement activity takes place at the state and local level. State and local users need training so that they can understand:

  • The anatomy of the face, and how it affects comparisons between two facial images.
  • How cameras work, and how this affects comparisons between two facial images.
  • How poor-quality images can adversely affect facial recognition.
  • Why facial recognition results must ONLY be used as an investigative lead.

If state and local users had received this training, none of the false arrests over the last few years would have taken place.

What are the consequences of no training?

Let me repeat that again:

If facial recognition users had been trained, none of the false arrests over the last few years would have taken place.

  • The users would have realized that the poor images were not of sufficient quality to determine a match.
  • The users would have realized that even if the images had been of sufficient quality, facial recognition must only be used as an investigative lead; once other data had been checked, the cases would have fallen apart.

But the false arrests gave the privacy advocates the ammunition they needed.

Not to insist upon proper training in the use of facial recognition.

But to ban the use of facial recognition entirely.

Like nuclear or biological weapons, facial recognition’s threat to human society and civil liberties far outweighs any potential benefits. Silicon Valley lobbyists are disingenuously calling for regulation of facial recognition so they can continue to profit by rapidly spreading this surveillance dragnet. They’re trying to avoid the real debate: whether technology this dangerous should even exist. Industry-friendly and government-friendly oversight will not fix the dangers inherent in law enforcement’s discriminatory use of facial recognition: we need an all-out ban.

From https://www.banfacialrecognition.com/

(And just wait until the anti-facial recognition forces discover that this is not only a plot of evil Silicon Valley, but also a plot of evil non-American foreign interests located in places like Paris and Tokyo.)

Because the anti-facial recognition forces want us to remove the use of technology and go back to the good old days…of eyewitness misidentification.

Eyewitness misidentification contributes to an overwhelming majority of wrongful convictions that have been overturned by post-conviction DNA testing.

Eyewitnesses are often expected to identify perpetrators of crimes based on memory, which is incredibly malleable. Under intense pressure, through suggestive police practices, or over time, an eyewitness is more likely to find it difficult to correctly recall details about what they saw. 

From https://innocenceproject.org/eyewitness-misidentification/.

And these people don’t stay in jail for a night or two. Some of them remain in prison for years until the eyewitness misidentification is reversed.

Archie Williams moments after his exoneration on March 21, 2019. Photo by Innocence Project New Orleans. From https://innocenceproject.org/fingerprint-database-match-establishes-archie-williams-innocence/

Eyewitnesses, unlike facial recognition algorithms, cannot be tested for accuracy or bias.

And if we don’t train facial recognition users in the technology, then we’re going to lose it.

Claimed AI-detected Similarity in Fingerprints From the Same Person: Are Forensic Examiners Truly “Doing It Wrong”?

I shared some fingerprint-related information on my LinkedIn feed and other places, and I thought I’d share it here.

Along with an update.

You’re doing it wrong

Forensic examiners, YOU’RE DOING IT WRONG, at least according to this bold claim:

“Columbia engineers have built a new AI that shatters a long-held belief in forensics–that fingerprints from different fingers of the same person are unique. It turns out they are similar, only we’ve been comparing fingerprints the wrong way!” (From Newswise)

Couple that claim with the initial rejection of the paper by multiple forensic journals because “it is well known that every fingerprint is unique” (apparently the reviewer never read the NAS report), and you have the makings of a sexy story.

Or do you?

And what is the paper’s basis for the claim that fingerprints from the same person are NOT unique?

“The AI was not using ‘minutiae,’ which are the branchings and endpoints in fingerprint ridges – the patterns used in traditional fingerprint comparison,” said Guo, who began the study as a first-year student at Columbia Engineering in 2021. “Instead, it was using something else, related to the angles and curvatures of the swirls and loops in the center of the fingerprint.” (From Newswise)

Perhaps there are similarities in the patterns of the fingers at the center of a print, but that doesn’t negate the uniqueness of the bifurcations and ridge ending locations throughout the print. Guo’s method uses less of the distal fingerprint than traditional minutiae analysis.
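For intuition only, here is a toy sketch contrasting the two comparison styles described above: discrete minutiae matching versus similarity of the global ridge-flow structure near the core. Both functions and all of the numbers are hypothetical stand-ins, not the Columbia team’s actual model or any operational fingerprint algorithm.

    import math

    def minutiae_score(a, b, tol_xy=10.0, tol_angle=0.3):
        """Fraction of minutiae (x, y, angle) in `a` with a close counterpart in `b`."""
        matched = sum(
            1 for (x1, y1, t1) in a
            if any(abs(x1 - x2) <= tol_xy and abs(y1 - y2) <= tol_xy
                   and abs(t1 - t2) <= tol_angle
                   for (x2, y2, t2) in b)
        )
        return matched / max(len(a), 1)

    def orientation_score(field_a, field_b):
        """Cosine similarity of two flattened ridge-orientation fields:
        the kind of global core-area structure the study reportedly used."""
        dot = sum(u * v for u, v in zip(field_a, field_b))
        norm_a = math.sqrt(sum(u * u for u in field_a))
        norm_b = math.sqrt(sum(v * v for v in field_b))
        return dot / (norm_a * norm_b)

    # Two fingers of the same person: different minutiae layouts,
    # but (hypothetically) similar ridge flow near the core.
    index_minutiae = [(12, 40, 1.1), (55, 72, 0.4)]
    thumb_minutiae = [(80, 15, 2.0), (33, 90, 2.6)]
    index_field = [0.9, 0.2, 0.4, 0.8]
    thumb_field = [0.8, 0.3, 0.4, 0.7]

    print(minutiae_score(index_minutiae, thumb_minutiae))  # 0.0: fingers differ
    print(orientation_score(index_field, thumb_field))     # ~0.99: similar flow

A high orientation score between different fingers would not contradict minutiae-based uniqueness; the two measures simply look at different things.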

But maybe there are forensic applications for this alternate print comparison technique, at least as an investigative lead. (Let me repeat that again: “investigative lead.”) Courtroom use will be limited because there is no AI equivalent of a human expert to explain to the court how the comparison was made, or to establish whether another expert AI algorithm would yield the same results.

Thoughts?

https://www.newswise.com/articles/ai-discovers-that-not-every-fingerprint-is-unique

The update

As I said, I shared the piece above to several places, including one frequented by forensic experts. One commenter in a private area offered the following observation, in part:

What was the validation process? Did they have a qualified latent print examiner confirm their data?

From a private source.

Before you dismiss the comment as reflecting a stick-in-the-mud forensic old fogey who does not recognize the great wisdom of our AI overlords, remember (as I noted above) that forensic experts are required to testify in court about things like this. If artificial intelligence is claimed to identify relationships between fingers from the same person, you’d better make really sure that this is true before someone is put to death.

I hate to repeat the phrase used by scientific study authors in search of more funding, but…

…more research is needed.

What Is Your Firm’s UK Online Safety Act Story?

It’s time to revisit my August post entitled “Can There Be Too Much Encryption and Age Verification Regulation?” because the United Kingdom’s Online Safety Bill is now the Online Safety ACT.

Having passed, eventually, through the UK’s two houses of Parliament, the bill received royal assent (October 26)….

[A]dded in (to the Act) is a highly divisive requirement for messaging platforms to scan users’ messages for illegal material, such as child sexual abuse material, which tech companies and privacy campaigners say is an unwarranted attack on encryption.

From Wired.
By Adrian Pingstone – Transferred from en.wikipedia, Public Domain, https://commons.wikimedia.org/w/index.php?curid=112727

This opens up not only issues regarding encryption and privacy, but also issues regarding specific identity technologies such as age verification and age estimation.

This post looks at three types of firms that are affected by the UK Online Safety Act, the stories they are telling, and the stories they may need to tell in the future. What is YOUR firm’s Online Safety Act-related story?

What three types of firms are affected by the UK Online Safety Act?

As of now I have been unable to locate a full version of the final Act, but presumably the provisions of this July 2023 version (PDF) have only undergone minor tweaks.

Among other things, this version discusses “User identity verification” in section 65, “Category 1 service” in section 96(10)(a), “United Kingdom user” in section 228(1), and a multitude of other terms that affect how companies will conduct business under the Act.

I am focusing on three different types of companies:

  • Technology services (such as Yoti) that provide identity verification, including but not limited to age verification and age estimation.
  • User-to-user services (such as WhatsApp) that provide encrypted messages.
  • User-to-user services (such as Wikipedia) that allow users (including United Kingdom users) to contribute content.

What types of stories will these firms have to tell, now that the Act is law?

Stories from identity verification services

From Yoti.

For ALL services, the story will vary as Ofcom decides how to implement the Act, but we are already seeing the stories from identity verification services. Here is what Yoti stated after the Act became law:

We have a range of age assurance solutions which allow platforms to know the age of users, without collecting vast amounts of personal information. These include:

  • Age estimation: a user’s age is estimated from a live facial image. They do not need to use identity documents or share any personal information. As soon as their age is estimated, their image is deleted – protecting their privacy at all times. Facial age estimation is 99% accurate and works fairly across all skin tones and ages.
  • Digital ID app: a free app which allows users to verify their age and identity using a government-issued identity document. Once verified, users can use the app to share specific information – they could just share their age or an ‘over 18’ proof of age.
From Yoti.
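As a sketch of the privacy pattern Yoti describes (estimate, answer, discard), here is what an integration might look like. `estimate_age_from_image` is a hypothetical placeholder for a vendor model call, not Yoti’s actual SDK.

    def estimate_age_from_image(image: bytes) -> float:
        # Hypothetical placeholder; a real integration would invoke
        # the vendor's facial age estimation model here.
        return 21.0

    def is_over(image: bytearray, threshold: int = 18) -> bool:
        """Answer an age question without retaining the facial image."""
        try:
            return estimate_age_from_image(bytes(image)) >= threshold
        finally:
            image[:] = b"\x00" * len(image)  # overwrite the buffer after use

    frame = bytearray(b"raw camera frame bytes")
    print(is_over(frame))                  # True with the placeholder estimate
    print(frame == bytearray(len(frame)))  # True: the image was discarded

The design point is that the caller learns only the yes/no answer, matching the “over 18” proof-of-age idea in Yoti’s description.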

Stories from encrypted message services

From WhatsApp.

Not surprisingly, message encryption services are telling a different story.

MailOnline has approached WhatsApp’s parent company Meta for comment now that the Bill has received Royal Assent, but the firm has so far refused to comment.

Will Cathcart, Meta’s head of WhatsApp, said earlier this year that the Online Safety Act was the most concerning piece of legislation being discussed in the western world….

[T]o comply with the new law, the platform says it would be forced to weaken its security, which would not only undermine the privacy of WhatsApp messages in the UK but also for every user worldwide. 

‘Ninety-eight per cent of our users are outside the UK. They do not want us to lower the security of the product, and just as a straightforward matter, it would be an odd choice for us to choose to lower the security of the product in a way that would affect those 98 per cent of users,’ Mr Cathcart has previously said.

From Daily Mail.

Stories from services with contributed content

From Wikipedia.

And contributed content services are also telling their own story.

Companies, from Big Tech down to smaller platforms and messaging apps, will need to comply with a long list of new requirements, starting with age verification for their users. (Wikipedia, the eighth-most-visited website in the UK, has said it won’t be able to comply with the rule because it violates the Wikimedia Foundation’s principles on collecting data about its users.)

From Wired.

What is YOUR firm’s story?

All of these firms have shared their stories either before or after the Act became law, and those stories will change depending upon what Ofcom decides.

But what about YOUR firm?

Is your firm affected by the UK Online Safety Act, and the future implementation of the Act by Ofcom?

Do you have a story that you need to tell to achieve your firm’s goals?

Do you need an extra, experienced hand to help out?

Learn how Bredemarket can create content that drives results for your firm.
