A Californian, an Illinoisan, and a Dane Walk Into a Videoconference

I was recently talking with a former colleague, whose name I am not at liberty to reveal, and they posed a question that stymied me.

What happens when multiple people join a videoconference, and they all reside in jurisdictions with different privacy regulations?

An example will illustrate what would happen, and I volunteer to be the evil party in this one.

The videoconference

Let’s say:

On a particular day in April 2026, a Californian launches a videoconference on Zoom.

The Californian invites an Illinoisan.

And also invites a Dane.

And then—here’s the evil part—records and gathers images from the videoconference without letting the other two know.

The legal violations

The Illinois Biometric Information Privacy Act, or BIPA, requires written consent before acquiring the facial geometry of Abe, our Illinoisan. And if Cali John, our Californian, doesn't obtain that written consent, he could lose a lot of money.

And what about Freja, our Dane? Well, if the amendment to the Danish Copyright Act takes effect on March 31, 2026 as expected, Cali John can get into a ton of trouble if he uses the video to create a realistic, digitally generated imitation of Freja. Again, consent is required. Again, there can be monetary penalties if you don't get that consent.

But there’s another question we have to consider.

The vendor responsibility 

Does the videoconference provider bear any responsibility for the violations of Illinois and Danish law?

Since I used Zoom as my example, I looked at Zoom's Terms of Service.

TL;DR: not our problem, that’s YOUR problem.

“5. USE OF SERVICES AND YOUR RESPONSIBILITIES. You may only use the Services pursuant to the terms of this Agreement. You are solely responsible for Your and Your End Users’ use of the Services and shall abide by, and ensure compliance with, all Laws in connection with Your and each End User’s use of the Services, including but not limited to Laws related to recording, intellectual property, privacy and export control. Use of the Services is void where prohibited.”

But such terms haven't stopped BIPA lawyers from filing lawsuits against deep-pocketed software vendors. Remember when Facebook settled for $650 million?

So remember what could happen the next time you participate in a multinational, multi-state, or even multi-city videoconference. Hope your AI note taker isn't capturing screenshots.

Strategy is not Tactics

I’ve said that strategy is one of four essential elements of product marketing. But you have to know what strategy is…and what it is not.

To illustrate the difference between strategy and tactics, it helps to differentiate between abstract, long-term goals and concrete, short-term goals.

If your goal is to better the world, that’s a strategy.

If your goal is to excel in a particular industry, that’s a strategy.

Strategies can change, though. Those who know of Nokia as a telecommunications company, or who remember Nokia as a phone supplier, certainly aren't old enough to remember Nokia's beginnings as a pulp mill in 1865.

If your goal is to secure business from a specific prospect, that’s a tactic. Or it should be.

Fleming Companies secured a 10-year contract in 2001 as the main supplier of groceries to Kmart, accounting for 20% of Fleming’s revenue. Kmart cancelled that contract when it declared bankruptcy a year later. Fleming filed a $1.4 billion claim in Kmart’s bankruptcy case…but only got $385 million. Fleming itself ended up in bankruptcy court in 2003.

But Fleming’s strategy was to excel at food wholesaling through acquisition and innovation.

It’s just that one tactical blunder upended that strategy.

Whether or not Bredemarket pivots from biometric content to resume writing (not likely), I am presently equipped to address both your strategic and tactical product marketing needs. If I can help you, talk to me at https://bredemarket.com/mark/.

Differentiating the DNA of Twins?

(Part of the biometric product marketing expert series)

There are certain assumptions that you make in biometrics.

Namely, that certain biometrics, specifically facial recognition and DNA analysis, are unable to differentiate twins.

Now as facial recognition algorithms get better and better, perhaps they will be able to tell twins apart: even identical twins.

But DNA is DNA, right?

Twins and somatic mutations

Mike Bowers (CSIDDS) links to an article in Forensic Magazine which suggests that twins’ DNA can be differentiated.

For the first time in the U.S., an identical twin has been convicted of a crime based on DNA analysis.

The breakthrough came from Parabon Nanolabs, whose scientists used deep whole genome sequencing to identify extremely rare “somatic mutations” that differentiated Russell Marubbio and his twin, John. The results were admitted as evidence in court, making last week’s conviction of Russell in the 1987 rape of a 50-year-old woman a landmark case.

Twin DNA.

Parabon Nanolabs (which I briefly mentioned in 2024) applied somatic mutations as follows:

Somatic mutations are DNA changes that happen after conception and can cause genetic differences between otherwise identical twins. These mutations can arise during the earliest stages of embryonic development, affecting the split of the zygote, and accumulate throughout life due to errors in cell division. Somatic mutations can be present in only one twin, a subset of cells, or both, potentially leading to differences in health and even developmental disorders—and in this case, DNA.

The science behind somatic mutations is not new, and is well-researched, understood and accepted. It’s just uncommon for DNA to lead to twins, and even more uncommon for somatic mutations to be able to distinguish between twins.

Note the “well-researched, understood and accepted” part (even though it lacks an Oxford comma). Because this isn’t the only recent story that touches upon whole genome sequencing.

Whole genome sequencing and legal admissibility

Bowers also links to a CNN article which references Daubert/Frye-like questions about whether evidence is admissible.

Evidence derived from cutting-edge DNA technology that prosecutors say points directly at Rex Heuermann being the Gilgo Beach serial killer will be admissible at his trial, a Suffolk County judge ruled Wednesday….

Heuermann’s defense attorney Michael Brown had argued the DNA technology, known as whole genome sequencing, has not yet been widely accepted by the scientific community and therefore shouldn’t be permitted. He said he plans to argue the validity of the technology before a jury.

Meanwhile, prosecutors have argued this type of DNA extraction has been used by local law enforcement, the FBI and even defense attorneys elsewhere in the country, according to court records.

Let me point out one important detail: the fact that police agencies are using a particular technology doesn’t mean that said technology is “widely accepted by the scientific community.” I suspect that this same question will be raised in other courts, and other judges may reach a different decision.

And after checking my blog, I realize that I have never written an article about Daubert/Frye. Another assignment for Bredebot, I guess…

Your identity/biometric product marketing needs to assert the facts rather than old lies.

Bredemarket can help.

Forget About Milwaukee’s Facial Recognition DATA: We All Want to See Milwaukee’s Facial Recognition POLICY

(Part of the biometric product marketing expert series)

I love how Biometric Update bundles a bunch of stories into a single post. Chris Burt outdid himself on Wednesday, covering a slew of stories regarding use and possible misuse of facial recognition by Texas bounty hunters, the NYPD, and cities ranging from Chicago, Illinois to Houlton, Maine.

But those stories aren’t the ones that I’m focusing on. Before I get to my focus, I want to go off on a tangent and address something else.

Read us any rule, we’ll break it

In a huddle space in an office, a smiling robot named Bredebot places his robotic arms on a wildebeest and a wombat, encouraging them to collaborate on a product marketing initiative.
Bredebot and his pals.

By the time you read this, the first full post by my counterpart “Bredebot” will have been published on the Bredemarket blog. This is a completely AI-generated post in which a bot DID write the first draft. More posts are coming.

What I didn’t expect was that competition would arise between me and my bot. I’m writing these words on August 27, two days before the first Bredebot post appears, and I’m already feeling the heat.

What if Bredebot’s posts receive more traffic than the ones I write myself? What does that mean for my own posts…and for the whole premise of hiring Bredemarket to write for others?

I’m treating this as a challenge, vowing to outdo my fast bot counterpart.

And in that spirit, let’s revisit Milwaukee.

Give us any chance, we’ll take it

Access.

When Biometric Update initially visited Milwaukee in its April 28 post, the main concern was the possible agreement for the Milwaukee Police Department to provide “access” to facial data to the company Biometrica in exchange for facial recognition licenses. I subsequently explored the data issue in my own May 6 guest post for Biometric Update.

Vendors must disclose responsible uses of biometric data.

But today the questions addressed to Milwaukee don’t focus on the data, but on the use of facial recognition itself. The Biometric Update article links to a Wisconsin Watch article with more detail. The arguments are familiar to all of you: facial recognition is racist, facial recognition is sometimes relied upon as the sole piece of evidence, facial recognition data can be sent to ICE, and facial recognition can be misused.

However, before Milwaukee’s Common Council can approve facial recognition use, one requirement has to be met.

“Since the passage of Wisconsin Act 12, the only official way to amend or reject MPD policy is by a vote of at least two-thirds of the Common Council, or 10 members.

“However, council members cannot make any decision about it until MPD actually drafts its policy, often referred to as a ‘standard operating procedure.’

“Ald. Peter Burgelis – one of four council members who did not sign onto the Common Council letter to Norman – said he is waiting to make a decision until he sees potential policy from MPD or an official piece of legislation considered by the city’s Public Safety and Health Committee.”

The Milwaukee Police Department agrees that such a policy is necessary.

“MPD has consistently stated that a carefully developed policy could help reduce risks associated with facial recognition.

“’Should MPD move forward with acquiring FRT, a policy will be drafted based upon best practices and public input,’ a department spokesperson said.”

An aside from my days at MorphoTrak, when I would load user conference documents into the CrowdCompass mobile app: one year the topic of law enforcement agency facial recognition policies was part of our conference agenda. One agency had such a policy, but the agency would not allow me to upload the policy into the CrowdCompass app. You see, the agency had a policy…but it wasn’t public.

Needless to say, the Milwaukee Police Department’s draft policy WILL be public…and a lot of people will be looking at it.

I don’t know if it will make everyone’s dreams come true, though.

“Somewhat You Why” and Geolocation Stalkerware

Geolocation and “somewhat you why” (my proposed sixth factor of identity verification and authentication) can not only be used to identify and authenticate people.

They can also be used to learn things about people already authenticated, via the objects they might have in their possession.

Stalkerware

404 Media recently wrote an article about “stalkerware” geolocation tools that vendors claim can secretly determine if your partner is cheating on you.

But before you get excited: 404 Media reveals that many of these tools are NOT secret.

“Immediately notifies anyone traveling with it.” (From a review)

Three use cases for geolocation tracking

But let’s get back to the tool, and the intent. Because I maintain that intent makes all the difference. Look at these three use cases for geolocation tracking of objects:

  • Tracking an iPhone (held by a person). Many years ago, an iPhone user had to take a long walk from one location to another after dark. This iPhone user asked me to track their whereabouts while on that walk. Both of us consented to the arrangement.
  • Tracking luggage. Recently, passengers have placed AirTags in their luggage before boarding a flight. This lets the passengers know where their luggage is at any given time. But some airlines were not fans of the practice:

“Lufthansa created all sorts of unnecessary confusion after it initially banned AirTags out of concern that they are powered by a lithium battery and could emit radio signals and potentially interfere with aircraft navigation.

“The FAA put an end to those baseless concerns saying, ‘Luggage tracking devices powered by lithium metal cells that have 0.3 grams or less of lithium can be used on checked baggage.’ The Apple AirTag battery is a third of that size and poses no risk to aircraft operation.”

  • Tracking an automobile. And then there’s the third case, raised by the 404 Media article. 404 Media found countless TikTok advertisements for geolocation trackers with pitches such as “men with cheating wives, you might wanna get one of these.” As mentioned above, the trackers claim to be undetectable, which reinforces the fact that the person whose car is being tracked did NOT consent.

From consent to stalkerware, and the privacy implications

Geolocation technologies are used in every one of these instances. But tracking that all parties have consented to is perfectly acceptable, while tracking a person who doesn’t even know the tracker exists is not.

Banning geolocation tracking technology would be heavy-handed since it would prevent legitimate, consent-based uses of the technology.

So how do we set up the business and technical solutions that ensure that any tracking is authorized by all parties?

Does your firm offer a solution that promotes privacy? Do you need Bredemarket’s help to tell prospects about your solution? Contact me.

Why retail needs biometrics – the cameras aren’t working, and the people aren’t working either

In a recent post on Biometric Update, “Why retail needs biometrics – the cameras aren’t working,” Professor Fraser Sampson, former UK Biometrics & Surveillance Camera Commissioner, made several points about the applicability of biometrics to retail. Among them, he addressed algorithmic inaccuracy and the proper use of facial recognition as an investigative lead:

“It’s true that some early police algorithms were poor, but the biometric matching algorithms offered by some providers is over 99.99% – that’s as close to perfect as anyone has ever got. That’s NASA-level accuracy, better than some medical or military procedures and light years away from people staring at CCTV monitors. What about errors and misidentification? Used properly, LFR is a decision support tool, it’s not making the identification itself. Ultimately, it’s helping shopkeepers make their decisions and that’s where the occasional misidentification happens – by human error, not technical.”

I offered an additional comment:

“One other point: for all those who complain about the lack of perfection of automated facial recognition, it’s much better than manual facial recognition. The U.S. Innocence Project recounts multiple cases of witness MISidentification, where people have been imprisoned due to faulty and inaccurate identification of suspects as perpetrators. I’d much rather have a top tier FR algorithm watching me than a person who knows nothing about facial recognition at all.”

In case you missed it, I’ve written several Bredemarket blog posts on witness MISidentification: two on Robert Williams’ misidentification alone.

Heck, I addressed the topic back in 2021 in “The dangers of removing facial recognition and artificial intelligence from DHS solutions (DHS ICR part four).” This post covers the misidentification of Archie Williams (no relation).

So don’t toss out the automated facial recognition solution unless you have something better. I’ll wait.

Easing the Pain of Case Study Creation and Approval

Case studies are powerful marketing collateral for companies.

Why?

Because if you select your subjects carefully, your prospects will say, “That subject is just like me. And the company’s solution solved the subject’s problem. Perhaps the solution will solve my problem also.”

Ideally a company would want to publish dozens of case studies, so their prospects could find one case study—or perhaps two or three—that describe the exact same problem the prospect is encountering.

It’s hard to create case studies

But case studies are by definition more difficult for a company to create. 

  • For other types of content, the approval process resides completely within the company itself. 
  • But case studies by definition require approval by two companies…even if the end customers in the case studies remain anonymous.

Perhaps that’s why there are so few published, recent case studies.

On Tuesday I had the occasion to visit four technology websites.

  • One had 5 case studies, all written in 2024.
  • One had 4 case studies, all written in 2023 and all anonymous.
  • One had 8 case studies, all written in 2021.
  • One had no case studies at all, even though the company had clients who could be referenced.

And the approvals don’t just involve the end customer.

A former friend interviewed many customers but was only able to complete one case study; the approvals from company legal, other company executives, and the end customers were overwhelming, delaying the other case studies.

So how do you expedite case study creation and approval?

Three tips for creating case studies

Here are three tips to expedite the creation of case studies.

Creation tip 1: Get the facts first

If the sales rep, program manager, or the subject itself can provide the basic facts beforehand, then the interview can simply consist of confirming facts and filling gaps.

Creation tip 2: Outline the case study and tell your story

Whether you use the STAR method (situation, task, action, result) or some other method (I prefer the simpler problem, solution, result), take the facts you gathered above. Then fit them into the outline and into the story you want to tell. Then see what pieces of the story are missing.

Creation tip 3: Obtain a meeting transcript

Since the subject has already consented to the case study, they should consent to the meeting being recorded.

The most efficient way to do this is with one of the popular AI note takers, which lets the case study writer review the actual words from the interview without going back and forth through a video recording.

And AI note takers are more efficient than the way I used to transcribe case study interviews.

Three tips for approving case studies

Here are three tips to expedite the approval of case studies.

Approval tip 1: Read the contract

The language of the contract with the subject may have clauses regarding publicity.

If the subject wrote the contract, then it may prohibit any promotional publicity whatsoever, or it may dictate that any publicity must be approved by a high-level governing board in a foreign country.

If the provisions are onerous or impossible, don’t use that subject and find another.

Approval tip 2: Get pre-approvals, or at least grease the wheels

Let your approvers know what’s coming, and when you think it will come.

Once I submitted a case study for pre-approval even before the results were available. This subject had a lengthy approval process, so I wanted the approvers to see the first part of the case study as soon as possible.

Approval tip 3: Use every ethical method to get those approvals

While the case study may be critically important to you, it may be merely important (or even inconsequential) to the lawyer with 50 other tasks.

From the lawyer’s perspective, it may be better if the company does NOT publish the case study. Fewer potential lawsuits that way.

Do everything you can to expedite the approval. If the CEO is demanding a published case study in three days, say so.

If not…well, that’s why you’re a salesperson. Oh, you’re NOT a salesperson? You are now.

One final tip

You don’t have to go it alone. If your staff is stretched, or if your staff has never written a case study before, Bredemarket can help. Visit my content for tech marketers page.

Is There a Calculator On That Slide Rule?

Once again I’m painting a picture, this time of two people: the IT chick, deftly wielding her slide rule as she sizes up hardware and software, and the finance dude, deftly wielding his calculator as he tabulates profit, loss, and other money stuff. Each of them in their own little worlds.

This is despite the thoughts of Norman Marks in his post “Cyber is one of many business risks”:

  • “Many years ago, my friend Ed Hill, a Managing Director with Protiviti at the time, coined the expression ‘there is no such thing as IT risk. There is only business risk.’”
  • “The [Qualys] report reveals a persistent disconnect between cybersecurity operations and business outcomes. While 49% of respondents reported having formal risk programmes, only 30% link them directly to business objectives. Even fewer (18%) use integrated risk scenarios that consider both business processes and financial exposure.”

I admit that I often draw a clear distinction between technical risk and business risk. For example, consider the supposedly separate questions of whether a third-party risk management (TPRM) algorithm is accurate, and of what happens if an end customer sues your company because the end customer’s personally identifiable information was breached on your partner company’s system.

So make sure that when your IT chick wields her slide rule, the tool has an embedded calculator on it to quantify the financial effects of her IT decisions.

When Pre-Acquisition Announcements Stalled Negative Activities Affecting Motorola and IDEMIA

When a company announces its intent to buy another company, certain activities at both firms may be stalled.

This can be a good thing, as certain Motorola employees and IDEMIA lawyers know.

Motorola layoffs on hold

In late 2008 and early 2009 Motorola was in trouble—so much trouble that it would eventually bifurcate. (Heh.) So Motorola was laying off employees throughout the company…

…except in the Biometric Business Unit where I resided. Safran had announced its intent to purchase that unit, and Motorola was obligated to deliver that unit to Safran intact.

So I kept my job…for another 12 years anyway.

IDEMIA lawsuit on hold

Anyway, Motorola’s Biometric Business Unit became part of Safran and then IDEMIA. And according to ID Tech, IDEMIA is the beneficiary of new acquisition activity.

“A legal dispute between South African Black Economic Empowerment (BEE) firm INFOVERGE and French multinational IDEMIA has stalled, with INFOVERGE citing ongoing acquisition activity involving IDEMIA’s South African subsidiary as the reason for the delay. The firm is seeking R39 million in damages over what it describes as a breach of contractual obligations by IDEMIA.

“INFOVERGE told reporters that it had been informed IDEMIA’s South African division is undergoing a corporate transaction, which has effectively paused the litigation process. ‘We’ve been told that the South African arm of IDEMIA is under acquisition … which leaves our legal matter in some kind of limbo as we wait,’ an INFOVERGE spokesperson said, adding that the prolonged delay is impacting their ability to fulfill their empowerment mandate.”

I’m not sure whether the completed IN Groupe acquisition is the culprit, or if the possible public security sale is to blame.

I’m Bot a Doctor: Consumer-grade Generative AI Dispensation of Health Advice

In the United States, it is a criminal offense for a person to claim they are a health professional when they are not. But what about a non-person entity?

Often technology companies seek regulatory approval before claiming that their hardware or software can be used for medical purposes.

Users aren’t warned that generative AI is not a doctor

Consumer-grade generative AI responses are another matter. Maybe.

“AI companies have now mostly abandoned the once-standard practice of including medical disclaimers and warnings in response to health questions.”

A study led by Sonali Sharma analyzed historical responses to medical questions since 2022. The study covered models from OpenAI, Anthropic, DeepSeek, Google, and xAI. It included both answers to user health questions and analysis of medical images. Note that there is a difference between medical-grade image analysis products used by professionals, and general-purpose image analysis performed by a consumer-facing tool.

Sharma’s conclusion? Generative AI’s “I’m not a doctor” warnings have declined since 2022.

But users ARE warned…sort of

But at least one company claims that users ARE warned.

“An OpenAI spokesperson…pointed to the terms of service. These say that outputs are not intended to diagnose health conditions and that users are ultimately responsible.”

The applicable clause in OpenAI’s TOS can be found in section 9, Medical Use.

“Our Services are not intended for use in the diagnosis or treatment of any health condition. You are responsible for complying with applicable laws for any use of our Services in a medical or healthcare context.”

From OpenAI’s Service Terms.

But the claim “it’s in the TOS” sometimes isn’t sufficient. 

  • I just signed a TOS from a company, but was explicitly reminded that I was signing something that required binding arbitration in place of lawsuits.
  • Is it sufficient to restrict a “don’t rely on me for medical advice; you could die” warning to a document that we MAY only read once?

Proposed “The Bots Want to Kill You” contest

Of course, one way to keep generative AI companies in line is to expose them to the Rod of Ridicule. When the bots provide bad medical advice, call them out:

“Maxwell claimed that in the first message Tessa sent, the bot told her that eating disorder recovery and sustainable weight loss can coexist. Then, it recommended that she should aim to lose 1-2 pounds per week. Tessa also suggested counting calories, regular weigh-ins, and measuring body fat with calipers. 

“‘If I had accessed this chatbot when I was in the throes of my eating disorder, I would NOT have gotten help for my ED. If I had not gotten help, I would not still be alive today,” Maxwell wrote on the social media site. “Every single thing Tessa suggested were things that led to my eating disorder.’”

The organization hosting the bot, the National Eating Disorders Association (NEDA), withdrew the bot within a week.

How can we, um, diagnose additional harmful recommendations delivered without disclaimers?

Maybe a “The Bots Want to Kill You” contest is in order. Contestants would gather reproducible prompts for consumer-grade generative AI applications. The prompt most likely to result in a person’s demise would receive a prize of…well, that still has to be worked out.