Biometric Update reports that Amazon’s Ring products are offering a feature called “Familiar Faces.”
“In September, Amazon revealed a revamped Ring camera lineup featuring two notable AI features, Familiar Faces and Search Party. Familiar Faces uses facial recognition and lets users tag neighbors or friends so future alerts identify them by name rather than generic motion.”
Illinois music lovers, wanna see a concert? Sounds like you may have to surrender your BIPA protections.
Specifically, if the concert venue uses Ticketmaster (who doesn’t?), and if the concert venue captures your biometric data without your consent, you may not have legal recourse.
“These Terms of Use (“Terms”) govern your use of Live Nation and Ticketmaster’s websites and applications…
“The Terms contain an arbitration agreement and class action waiver—along with some limited exceptions—in Section 14, below. Specifically, you and we agree that any dispute or claim relating in any way to the Terms, your use of the Marketplace, or products or services sold, distributed, issued, or serviced by us or through us, will be resolved by binding arbitration, rather than in court…
“By agreeing to arbitration, you and we each waive any right to participate in a class action lawsuit or class action arbitration, except those already filed and currently pending as of August 12, 2025.”
And what about Freja? Well, if the Danish Copyright Act takes effect on March 31, 2026 as expected, Cali John can get into a ton of trouble if he uses the video to create a realistic, digitally generated imitation of Freja. Again, consent is required. Again, there can be monetary penalties if you don’t get that consent.
But there’s another question we have to consider.
The vendor's responsibility
Does the videoconference provider bear any responsibility for the violations of Illinois and Danish law?
“5. USE OF SERVICES AND YOUR RESPONSIBILITIES. You may only use the Services pursuant to the terms of this Agreement. You are solely responsible for Your and Your End Users’ use of the Services and shall abide by, and ensure compliance with, all Laws in connection with Your and each End User’s use of the Services, including but not limited to Laws related to recording, intellectual property, privacy and export control. Use of the Services is void where prohibited.”
But such requirements haven't stopped BIPA lawyers from filing lawsuits against deep-pocketed software vendors. Remember when Facebook settled for $650 million?
So remember what could happen the next time you participate in a multinational, multi-state, or even multi-city videoconference. Hope your AI note taker isn't capturing screenshots.
Unlike some clickbait-like article titles, this one from Communications Today succinctly encapsulates the problem up front.
It’s not that the TPRM software is failing to find the red flags. Oh, it finds them!
But the folks at Gartner discovered something:
“A Gartner survey of approximately 900 third-party relationship owners…revealed that while 95% saw a third-party red flag in the past 12 months, only around half of them escalate it to compliance teams.”
Among other things, the relationship owners worry about “the perceived return on investment (ROI) of sharing information.”
And that’s not a software issue. It’s a process issue.
And this is not unique to the cybersecurity world. Let’s look at facial recognition.
Another case in point
I’ve said this over and over, but for U.S. criminal purposes, facial recognition results should ONLY be used as investigative leads.
It doesn’t matter whether they’re automated results or results that have been reviewed by a trained forensic face examiner.
Facial recognition results should only be used as investigative leads.
Sorry for the repetition, but some people aren’t listening.
But the fault doesn’t lie with the facial recognition vendors. Bredemarket has worked with numerous facial recognition vendors over the years, and of those who work with law enforcement, ALL of them have emphatically insisted that their software results should only be used as investigative leads.
And that’s not a software issue. It’s a process issue.
No amount of coding or AI can fix that.
I hope the TPRM folks don’t mind my detour into biometrics, but there’s a good reason for it.
Product marketing for TPRM and facial recognition
Some product marketers, including myself, believe that it’s not enough to educate prospects and customers about your product. You also need to educate them about proper use of the product, including legal and ethical concerns.
If you don’t, your customers will do dumb things in Europe, Illinois, or elsewhere—and blame you when they are caught.
Be a leader in your industry by doing or saying the right thing.
And now here’s a word from our sponsor.
Not the “CPA” guy again…
Bredemarket has openings
There’s a reason why this post specifically focused on cybersecurity and facial recognition.
If you need product marketing assistance with your product, Bredemarket has two openings. One for a cybersecurity client, and one for a facial recognition client.
Because my local Amazon Fresh post is taking off, it’s a good time to revisit the “one” thing Uplanders will encounter when they get there.
I’ve talked about Amazon One palm/vein biometrics several times in the past.
In August 2021, I posted about Amazon paying $10 for your biometrics, long before World (Worldcoin) did something similar. Hmm…wonder if the $10 deal is still on?
And it’s also available (or soon will be) on TP-Link door locks. But the How-To Geek writer is confused:
“TP-Link says that these palm vein patterns are so unique that they can even tell the difference between identical twins, making them safer than regular fingerprint or facial recognition methods.”
And the TP-Link page for the product has no sales restrictions. Even Illinois residents can buy it. Presumably there’s an ironclad consent agreement with every enrollment to prevent BIPA lawsuits.
You may remember the May hoopla regarding amendments to Illinois’ Biometric Information Privacy Act (BIPA). These amendments do not eliminate the long-standing law, but they lessen the damages that offending companies must pay.
The General Assembly is expected to send the bill to Illinois Governor JB Pritzker within 30 days. Gov. Pritzker will then have 60 days to sign it into law. It will be immediately effective.
While the BIPA amendment has passed the Illinois House and Senate and was sent to the Governor, there is no indication that he has signed the bill into law within the 60-day timeframe.
A proposed class action claims Photomyne, the developer of several photo-editing apps, has violated an Illinois privacy law by collecting, storing and using residents’ facial scans without authorization….
The lawsuit contends that the app developer has breached the BIPA’s clear requirements by failing to notify Illinois users of its biometric data collection practices and inform them how long and for what purpose the information will be stored and used.
In addition, the suit claims the company has unlawfully failed to establish public guidelines that detail its data retention and destruction policies.
If you’re a biometric product marketing expert, or even if you’re not, you’re presumably analyzing the possible effects to your identity/biometric product from the proposed changes to the Biometric Information Privacy Act (BIPA).
As of May 16, the Illinois General Assembly (House and Senate) passed a bill (SB2979) to amend BIPA. It awaits the Governor’s signature.
What is the amendment? Other than defining an “electronic signature,” the main purpose of the bill is to limit damages under BIPA. The new text regarding the “Right of action” codifies the concept of a “single violation.”
(T)he amended law DOES NOT CHANGE “Private Right of Action” so BIPA LIVES!
Companies that violate the strict requirements of BIPA aren’t off the hook. It’s just that the trial lawyers—whoops, I mean the affected consumers make a lot less money.
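To see why the "single violation" language matters so much to defendants, here is an illustrative sketch (not legal advice) of how statutory damage exposure changes. The $1,000-per-negligent-violation figure comes from BIPA Section 20; the workforce size and scan counts are purely hypothetical assumptions.

```python
# Illustrative sketch (not legal advice): how a "single violation" reading
# of BIPA changes statutory damage exposure. Workforce numbers are assumed.

NEGLIGENT_DAMAGES = 1_000  # BIPA Section 20: statutory damages per negligent violation


def exposure_per_scan(employees: int, scans_per_employee: int) -> int:
    """Pre-amendment reading: every biometric scan is a separate violation."""
    return employees * scans_per_employee * NEGLIGENT_DAMAGES


def exposure_single_violation(employees: int) -> int:
    """Single-violation reading: one violation per person, however many scans."""
    return employees * NEGLIGENT_DAMAGES


# Hypothetical employer: 1,000 employees clocking in and out by fingerprint,
# 2 scans per day over 250 working days = 500 scans per employee.
employees, scans = 1_000, 2 * 250

print(exposure_per_scan(employees, scans))   # 500,000,000
print(exposure_single_violation(employees))  # 1,000,000
```

Under these assumed numbers, the same conduct drops from half a billion dollars of exposure to one million, which is why plaintiffs' recoveries shrink so dramatically.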
Why did I mention the “future implementation” of the UK Online Safety Act? Because the passage of the UK Online Safety Act is just the FIRST step in a long process. Ofcom still has to figure out how to implement the Act.
Ofcom started to work on this on November 9, but it’s going to take many months to finalize—I mean finalise things. This is the UK Online Safety Act, after all.
This is the first of four major consultations that Ofcom, as regulator of the new Online Safety Act, will publish as part of our work to establish the new regulations over the next 18 months.
It focuses on our proposals for how internet services that enable the sharing of user-generated content (‘user-to-user services’) and search services should approach their new duties relating to illegal content.
On November 9 Ofcom published a slew of summary and detailed documents. Here’s a brief excerpt from the overview.
Mae’r ddogfen hon yn rhoi crynodeb lefel uchel o bob pennod o’n hymgynghoriad ar niwed anghyfreithlon i helpu rhanddeiliaid i ddarllen a defnyddio ein dogfen ymgynghori. Mae manylion llawn ein cynigion a’r sail resymegol sylfaenol, yn ogystal â chwestiynau ymgynghori manwl, wedi’u nodi yn y ddogfen lawn. Dyma’r cyntaf o nifer o ymgyngoriadau y byddwn yn eu cyhoeddi o dan y Ddeddf Diogelwch Ar-lein. Mae ein strategaeth a’n map rheoleiddio llawn ar gael ar ein gwefan.
Oops, I seem to have quoted from the Welsh version. Maybe you’ll have better luck reading the English version.
This document sets out a high-level summary of each chapter of our illegal harms consultation to help stakeholders navigate and engage with our consultation document. The full detail of our proposals and the underlying rationale, as well as detailed consultation questions, are set out in the full document. This is the first of several consultations we will be publishing under the Online Safety Act. Our full regulatory roadmap and strategy is available on our website.
And if you need help telling your firm’s UK Online Safety Act story, Bredemarket can help. (Unless the final content needs to be in Welsh.) Click below!
Having passed, eventually, through the UK’s two houses of Parliament, the bill received royal assent (October 26)….
[A]dded in (to the Act) is a highly divisive requirement for messaging platforms to scan users’ messages for illegal material, such as child sexual abuse material, which tech companies and privacy campaigners say is an unwarranted attack on encryption.
This not only opens up issues regarding encryption and privacy, but also specific identity technologies such as age verification and age estimation.
This post looks at three types of firms that are affected by the UK Online Safety Act, the stories they are telling, and the stories they may need to tell in the future. What is YOUR firm’s Online Safety Act-related story?
What three types of firms are affected by the UK Online Safety Act?
As of now I have been unable to locate a full version of the final Act, but presumably the provisions from this July 2023 version (PDF) have only undergone minor tweaks.
Among other things, this version discusses “User identity verification” in Section 65, “Category 1 service” in Section 96(10)(a), “United Kingdom user” in Section 228(1), and a multitude of other terms that affect how companies will conduct business under the Act.
I am focusing on three different types of companies:
Technology services (such as Yoti) that provide identity verification, including but not limited to age verification and age estimation.
User-to-user services (such as WhatsApp) that provide encrypted messages.
User-to-user services (such as Wikipedia) that allow users (including United Kingdom users) to contribute content.
What types of stories will these firms have to tell, now that the Act is law?
For ALL services, the story will vary as Ofcom decides how to implement the Act, but we are already seeing the stories from identity verification services. Here is what Yoti stated after the Act became law:
We have a range of age assurance solutions which allow platforms to know the age of users, without collecting vast amounts of personal information. These include:
Age estimation: a user’s age is estimated from a live facial image. They do not need to use identity documents or share any personal information. As soon as their age is estimated, their image is deleted – protecting their privacy at all times. Facial age estimation is 99% accurate and works fairly across all skin tones and ages.
Digital ID app: a free app which allows users to verify their age and identity using a government-issued identity document. Once verified, users can use the app to share specific information – they could just share their age or an ‘over 18’ proof of age.
MailOnline has approached WhatsApp’s parent company Meta for comment now that the Bill has received Royal Assent, but the firm has so far refused to comment.
[T]o comply with the new law, the platform says it would be forced to weaken its security, which would not only undermine the privacy of WhatsApp messages in the UK but also for every user worldwide.
‘Ninety-eight per cent of our users are outside the UK. They do not want us to lower the security of the product, and just as a straightforward matter, it would be an odd choice for us to choose to lower the security of the product in a way that would affect those 98 per cent of users,’ Mr Cathcart has previously said.
Companies, from Big Tech down to smaller platforms and messaging apps, will need to comply with a long list of new requirements, starting with age verification for their users. (Wikipedia, the eighth-most-visited website in the UK, has said it won’t be able to comply with the rule because it violates the Wikimedia Foundation’s principles on collecting data about its users.)
All of these firms have shared their stories either before or after the Act became law, and those stories will change depending upon what Ofcom decides.