Tag Archives: artificial intelligence
Why Biometric Marketing Experience Beats Biometric Marketing Immaturity
I know that the experts say that “too much knowledge is actually bad in tech.” But based upon what I just saw from an (unnamed) identity verification company, I assert that too little knowledge is much worse.
As a biometric product marketing expert and biometric product marketing writer, I pay a lot of attention to how identity verification companies and other biometric and identity companies market themselves. Many companies know how to speak to their prospects…and many don’t.
Take a particular company, which I will not name. Here is the “marketing” from this company.
- We have funding!
- We have long lists of features!
- We offer lower pricing than selected competitors!
- We claim high facial recognition accuracy but don’t publish our NIST FRTE results! (While the company claims to author its technology, the company name does not appear in either the NIST FRTE 1:1 or NIST FRTE 1:N results.)
- We claim liveness detection (presentation attack detection) but don’t publish any confirmation letters! (Again, I could not find the company name on the confirmation letter lists from BixeLab or iBeta.)
So what is the difference between this company and the other 100+ identity verification companies…many of which explicitly state their benefits and trumpet both their NIST FRTE performance and their third-party liveness detection confirmation letters?
If you claim great accuracy and great liveness detection but can’t support the claims with independent third-party verification, your claims earn a “so what?” Prove them.
Now I’m sure I could help this company. Even if they have none of the certifications or confirmations I mentioned, I could at least get the company to focus on meaningful differentiation and meaningful benefits. But there’s no need to even craft a Bredemarket pitch to the company, since the only marketer on staff is an intern who is indifferent to strategy.

Because while many companies assert that all they need is a salesperson, an engineer, an African data labeler, and someone to run the generative AI for everything else…there are dozens of competitors doing the exact same thing.
But some aren’t. Some identity/biometric companies are paying attention to their long-term viability, and are creating content, proposals, and analyses that support that viability.
Take a look at your company’s marketing. Does it speak to prospects? Does it prove that you will meet your customers’ needs? Or does it sound like every other company that’s saying “We use AI. Trust us”?
And if YOUR company needs experienced help in conveying customer-focused benefits to your prospects…contact Bredemarket. I’ve delivered meaningful biometric materials to two dozen companies over the years. And yes, I have experience. Let me use it for your advantage.
What About the Data Labelers Themselves?
Earlier this month I discussed a class action lawsuit, filed in the United States by people who believe their privacy was violated when Kenyan data labelers viewed their video output.
And the data labelers themselves are not happy, according to a 404 Media article “AI is African Intelligence.”
Before I get to the Kenyans, let’s talk about the reality of AI. No, AI output is not 100% generated by computers alone. There is often human review.
In some cases human review is understandable. There was a recent brouhaha when it was publicly highlighted that Waymo calls upon a human reviewer to intervene whenever one of its vehicles runs into a problematic situation. People’s anger about this is pointless: would they prefer that Waymo NOT call upon a human reviewer, and just let the car do whatever?
Back to Kenya, and to what the Data Labelers Association (DLA) reports data labelers actually do.
“Every day, Michael Geoffrey Asia spent eight consecutive hours at his laptop in Kenya staring at porn, annotating what was happening in every frame for an AI data labeling company. When he was done with his shift, he started his second job as the human labor behind AI sex bots, sexting with real lonely people he suspected were in the United States. His boss was an algorithm that told him to flit in and out of different personas.”
I’ve previously seen reports about people in the U.S. reviewing shocking material for social media companies, but it’s a heck of a lot cheaper to outsource the work abroad.
Unless the U.S. Government insists on bringing data labeling work to the United States, in the same way that it wants to bring call center jobs back here.
I do offer one caution: there is a lot of data labeling work that is NOT pornographic. In the identity verification industry, data labelers review real and fake faces, real and fake documents, and the like to train AI models. Such work does not have the emotional stress that you get from watching certain videos.
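As a minimal sketch of what such work looks like in practice (the schema and names here are hypothetical illustrations, not any vendor’s actual pipeline), each labeler’s judgment on a face or document sample can be stored as a record, and multiple labelers’ judgments on the same sample can be reconciled by majority vote to reduce individual error:

```python
from collections import Counter
from dataclasses import dataclass


@dataclass(frozen=True)
class LabelRecord:
    """One annotator's judgment on one sample (hypothetical schema)."""
    sample_id: str     # e.g., a selfie image or a document scan
    annotator_id: str
    label: str         # e.g., "genuine" or "spoof"


def majority_label(records: list[LabelRecord]) -> str:
    """Aggregate several annotators' labels for one sample by majority vote,
    one common way labeling pipelines smooth over individual mistakes."""
    counts = Counter(r.label for r in records)
    return counts.most_common(1)[0][0]


# Three annotators review the same selfie; two judge it genuine.
reviews = [
    LabelRecord("selfie-001", "annotator-a", "genuine"),
    LabelRecord("selfie-001", "annotator-b", "genuine"),
    LabelRecord("selfie-001", "annotator-c", "spoof"),
]
print(majority_label(reviews))  # prints "genuine"
```

The training set for the model is then built from the aggregated labels, not from any single labeler’s output.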
But it’s still hard work.
Gemini Doesn’t Know Me
Vanity searches have been replaced by vanity LLM questions. And when I asked Google Gemini about myself, I found all sorts of errors.
A sampling:
“He holds a Bachelor’s degree (and has mentioned graduate studies in public administration in professional contexts).”
Having never studied public administration, I pressed Gemini on the issue.
“Master of Business Administration (MBA): He earned his MBA from California State University, San Bernardino.”
I pressed on that: I actually attended Cal State Fullerton. But that was nothing compared to this:
“Undergraduate: He holds a Bachelor of Arts from Concordia University Chicago (formerly Concordia Teachers College).”
That is actually my wife. I went to Reed College, which is NOT affiliated with the Lutheran Church Missouri Synod.

Whether you use Google Gemini, Wikipedia, or the Bredemarket blog, ALWAYS check your sources.
Surplus Labor
Therefore never send to know for whom the bell tolls; it tolls for thee.
Data Labelers Gonna Label, and Class Action Lawyers Gonna Lawyer
On Wednesday, I described how Meta’s Kenyan data labelers ended up watching explicit videos from people who presumably didn’t know that smart glasses were recording their activity.
To no one’s surprise, class action lawyers are now involved.
“In the newly filed complaint, plaintiffs Gina Bartone of New Jersey and Mateo Canu of California, represented by the public interest-focused Clarkson Law Firm, allege that Meta violated privacy laws and engaged in false advertising.
“The complaint alleges that the Meta AI smart glasses are advertised using promises like “designed for privacy, controlled by you,” and “built for your privacy,” which might not lead customers to assume their glasses’ footage, including intimate moments, was being watched by overseas workers. The plaintiffs believed Meta’s marketing and said they saw no disclaimer or information that contradicted the advertised privacy protections.”

Meta’s own privacy marketing promises:
“Clear, easy device and app settings help you manage your information, giving you control over what content you choose to share with others, and when.”
Except that according to Clarkson, people can’t opt out of the data labeling process.
This could get very revealing.
By 1980
Last June I created a still for a fictional TV show called “The Amazing Computer.” The show described the real-life activities of the FBI in computerizing its fingerprint system.
Now I’m giving it the Lyria treatment.
“We Use AI.” And We Use YOUR (Non-copyrighted) AI.
A private social media comment got me thinking. I will gladly credit the author, with their permission.
“If a U.S. federal court says that you can’t copyright AI generated content, an appellate court upholds that ruling, and the SCOTUS refuses to hear the case, what are the implications for software generated by LLMs?”
Think about that the next time Company X publishes its marketing message “we use AI.”
What if Company X’s code and prompts were themselves written with AI?
Couldn’t Company Y take Company X’s non-copyrightable code and run it without penalty, as if it were public domain code? (Open source code is still copyrighted and licensed; uncopyrightable code would be free for anyone to use.)
Now Company X would be forced to prove that it does NOT use AI. For its code, anyway.
Data Labelers Gonna Label
Before diving in, I should note that this is not just a Meta Ray-Ban AI glasses issue.
This is an issue with ANY video feed that requires AI processing.
Because AI can’t do its job on its own.
To ensure that the AI is trained properly, an army of humans looks at the same data and uses data labeling to classify it.
We allow this when we sign those Terms of Service. And I personally believe it’s a good thing, since it helps correct errors from uncontrolled AI.
But Futurism notes the types of video feeds that the human data labelers have to label.
“I saw a video where a man puts the glasses on the bedside table and leaves the room,” one data annotator told the newspapers. “Shortly afterwards his wife comes in and changes her clothes.”

Basically we record more than we should. One example: a bank card.
But regardless of whether data labelers are present or not, assume that any recording device will record anything, and potentially distribute it.
Let’s Talk Hype With Gartner on Generative AI
Gartner’s article “Latest Hype Cycle for Artificial Intelligence Goes Beyond GenAI” was written in July 2025, but even many months later it’s still illustrative. At the time, author Haritha Khandabattu said the following:
“…GenAI enters the Trough of Disillusionment as organizations gain understanding of its potential and limits.
“AI leaders continue to face challenges when it comes to proving GenAI’s value to the business. Despite an average spend of $1.9 million on GenAI initiatives in 2024, less than 30% of AI leaders report their CEOs are happy with AI investment return. Low-maturity organizations have trouble identifying suitable use cases and exhibit unrealistic expectations for initiatives. Mature organizations, meanwhile, struggle to find skilled professionals and instill GenAI literacy.”
To see and download Gartner’s pretty pictures, go to the article.
Since the article was published, IBM has tripled entry-level hiring rather than assume that generative AI can perform ALL entry-level jobs.
