What About the Data Labelers Themselves?

Earlier this month I discussed a class action lawsuit, filed in the United States by people who believe their privacy is being violated because Kenyan data labelers view their video output.

And the data labelers themselves are not happy, according to a 404 Media article “AI is African Intelligence.”

Before I get to the Kenyans, let’s talk about the reality of AI. No, AI output is not 100% generated by computers alone. There is often human review.

In some cases human review is understandable. There was a recent brouhaha when it was publicly highlighted that when a Waymo vehicle runs into a problematic situation, Waymo calls upon a human reviewer to intervene. People’s anger about this is pointless: would they prefer that Waymo NOT call upon a human reviewer, and just let the car do whatever?

Back to Kenya and the Data Labelers Association (DLA) reports of what data labelers actually do.

“Every day, Michael Geoffrey Asia spent eight consecutive hours at his laptop in Kenya staring at porn, annotating what was happening in every frame for an AI data labeling company. When he was done with his shift, he started his second job as the human labor behind AI sex bots, sexting with real lonely people he suspected were in the United States. His boss was an algorithm that told him to flit in and out of different personas.”

I’ve previously seen reports about people in the U.S. reviewing shocking material for social media companies, but it’s a heck of a lot cheaper to outsource the work abroad.

Unless the U.S. Government insists on bringing data labeling work to the United States, in the same way that it wants to bring call center jobs back here.

I do offer one caution: there is a lot of data labeling work that is NOT pornographic. In the identity verification industry, data labelers review real and fake faces, real and fake documents, and the like to train AI models. Such work does not have the emotional stress that you get from watching certain videos.

But it’s still hard work.

“We Use AI” Marketing Goes Beyond the IDV Realm

I recently mentioned again how ALL the identity verification companies use the following two elements in their product marketing:

  • “We use AI.”
  • “Trust!”

If you read three marketing messages from three IDV vendors, I defy you to tell them apart. Admittedly my last comparison took place years ago, so I took a fresh look at the 2026 versions. Here are two:

“Industry-leading AI-driven Technology”

“We make it easy to safeguard your customers with AI-driven identity verification.”

Thankfully the companies are finally mentioning differentiators other than trust, but the magic letters AI still persist.

AI is everywhere and nowhere

But you can’t really blame the IDV vendors when everyone is injecting the two-letter word into their messaging.

20 years ago, anyone who talked about an AI-powered vacuum cleaner would have been relegated to the back of the hall and told to put on his Vulcan ears.

Now we have things like AI pens.

“Handwrite only the critical points. Let Flowtica AI summarize and visualize the rest – audio, photo and even your sketches – into insights. Stay focused in the flow.”

And lest you think that such efforts are fringe, OpenAI and Jony Ive are reportedly working on one.

But AI pens make as much sense as AI influencers. If you have AI, why do you need the influencers? And if you have AI, why have a pen?

But that won’t stop people from hawking AI pens, and pencils, and erasers, and three-hole punches, and maybe even…paperclips.

Nobot Policies Hurt Your Company and Your Product

If your security software enforces a “no bots” policy, you’re only hurting yourself.

Bad bots

Yes, there are some bots you want to keep out.

“Scrapers” that obtain your proprietary data without your consent.

“Ad clickers” from your competitors that drain your budgets.

And, of course, non-human identities that fraudulently crack legitimate human and non-human accounts (ATO, or account takeover).

Good bots

But there are some bots you want to welcome with open arms.

Such as the indexers, either web crawlers or AI search assistants, that ensure your company and its products are known to search engines and large language models. If you nobot these agents, your prospects may never hear about you.
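As a minimal sketch of the alternative to a blanket nobot rule, a policy might allowlist known indexers while denying known abusers. The bad-bot names below are hypothetical, and real deployments should verify crawlers by reverse DNS rather than trusting user-agent strings, which are easily spoofed:

```python
# Sketch of a bot policy that distinguishes good bots from bad ones,
# rather than enforcing a blanket "no bots" rule.
# GOOD_BOT_AGENTS uses real crawler tokens; BAD_BOT_AGENTS is hypothetical.

GOOD_BOT_AGENTS = {"Googlebot", "Bingbot", "GPTBot"}   # indexers and AI assistants to allow
BAD_BOT_AGENTS = {"BadScraperBot", "ClickFraudBot"}    # hypothetical scrapers/clickers to deny

def bot_policy(user_agent: str) -> str:
    """Return 'allow', 'deny', or 'challenge' for a given user agent."""
    if any(bot in user_agent for bot in GOOD_BOT_AGENTS):
        return "allow"       # let indexers see your content
    if any(bot in user_agent for bot in BAD_BOT_AGENTS):
        return "deny"        # block scrapers and ad clickers
    return "challenge"       # unknown agents get further checks, not a flat "no"

print(bot_policy("Mozilla/5.0 (compatible; Googlebot/2.1)"))  # allow
```

The point of the `challenge` branch is the whole argument: unknown traffic deserves a closer look, not an automatic rejection that also locks out the crawlers feeding search engines and LLMs.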

Buybots

And what about the buybots—those AI agents designed to make legitimate purchases? 

Perhaps a human wants to buy a Beanie Baby, Bitcoin, or airline ticket, but only if the price dips below a certain point. It is physically impossible for a human to monitor prices 24 hours a day, 7 days a week, so the human empowers an AI agent to make the purchase. 
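The buybot's job can be sketched in a few lines. This is an illustrative toy, not any vendor's implementation: `get_price` and `place_order` are hypothetical stand-ins for a real market API.

```python
# Hedged sketch of a "buybot": an agent that watches a price and buys when it
# dips below a threshold. get_price() and place_order() are hypothetical
# stand-ins for a real market API.
import time

def buybot(get_price, place_order, limit_price: float, poll_seconds: float = 60.0):
    """Poll a price feed and place one order when the price drops below limit_price."""
    while True:
        price = get_price()
        if price < limit_price:
            return place_order(price)  # buy once, then stop watching
        time.sleep(poll_seconds)       # a human can't do this 24/7; a bot can

# Example with a canned price feed:
prices = iter([120.0, 105.0, 99.0])
order = buybot(lambda: next(prices), lambda p: f"bought at {p}",
               limit_price=100.0, poll_seconds=0.0)
print(order)  # bought at 99.0
```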

Do you want to keep legitimate buyers from buying just because they’re non-human identities?

(Maybe…but that’s another topic. If you’re interested, see what Vish Nandlall said in November about Amazon blocking Perplexity agents.)

Nobots 

According to click fraud fighter Anura in October 2025, 51% of web traffic comes from non-human bots, and 37% of total traffic comes from “bad bots.” Obviously you want to deny the 37%, but you want to allow the remaining 14% of “good bots.”

Nobot policies hurt. If your verification, authentication, and authorization solutions are unable to allow good bots, your business will suffer.

Francesco Fabbrocino’s Five Rules of Fraud Prevention…and Bredemarket’s Caveat to Rule 2

Francesco Fabbrocino of Dunmor presented at today’s SoCal Tech Forum at FoundrSpace in Rancho Cucamonga, California. His topic? Technology in FinTech/Fraud Detection. I covered his entire presentation in a running LinkedIn post, but I’d like to focus on one portion here—and my caveat to one of his five rules of fraud detection. (Four-letter word warning.)

The five rules

In the style of Fight Club, Fabbrocino listed his five rules of fraud detection:

1. Nearly all fraud is based on impersonation.

2. Never expose your fraud prevention techniques.

3. Preventing fraud usually increases friction.

4. Fraud prevention is a business strategy.

5. Whatever you do, fraudsters will adapt to it.

All good points. But I want to dig into rule 2, which is valid…to a point.

Rule 2

If the fraudster presents three different identity verification or authentication factors, and one of them fails, there’s no need to tell the fraudster which one failed. Bad password? Don’t volunteer that information.

In fact, under certain circumstances you may not have to reveal the failure at all. If you are certain this is a fraud attempt, let the fraudster believe that the transaction (such as a wire transfer) was successful. The fraudster will learn the truth soon enough: if not in this fraud attempt, perhaps in the next one.
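Rule 2 can be sketched in code. The idea is standard security practice: check every factor, but return one generic failure message no matter which check failed. The factor checks below are hypothetical placeholders:

```python
# Sketch of Rule 2: when multiple verification factors are checked, report a
# single generic failure rather than revealing which factor failed.
# The boolean factor results stand in for real password/OTP/biometric checks.

def verify(password_ok: bool, otp_ok: bool, biometric_ok: bool) -> str:
    """Check all factors, but never tell the caller which one failed."""
    if password_ok and otp_ok and biometric_ok:
        return "Welcome."
    # Don't say "bad password" or "OTP mismatch" -- that tells the fraudster
    # which factor to attack next.
    return "Sign-in failed."

print(verify(True, False, True))  # Sign-in failed.
```

Note that the function evaluates all factors rather than short-circuiting with early, distinguishable error returns; even response timing can leak which check failed.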

But “never” is a strong word, and there are some times when you MUST expose your fraud prevention techniques. Let me provide an example.

Biometric time cards

One common type of fraud is time card fraud, in which an employee claims to start work at 8:00, even though he didn’t show up for work until 8:15. How do you fool the time clock? By buddy punching, where your friend inserts your time card into the time clock precisely at 8, even though you’re not present.

Enter biometric time clocks, in which a worker must use their finger, palm, face, iris, or voice to punch in and out. It’s very hard for your buddy to have your biometric, so this decreases time clock fraud significantly.

The four-letter word

Unless you’re an employer in Illinois, or a biometric time clock vendor to employers in Illinois.


And you fail to inform the employees of the purpose for collecting biometrics, or fail to obtain the employees’ explicit consent to collect biometrics for this purpose.

Because that’s a violation of BIPA, Illinois’ Biometric Information Privacy Act. And you can be liable for damages for violating it.

In a case like this, or a case in a jurisdiction governed by some other privacy law, you HAVE to “expose” that you are using an individual’s biometrics as a fraud prevention technique.

But if there’s no law to the contrary, obfuscate at will.

Communicating your anti-fraud solution

Now there are a number of companies that fight the many types of fraud that Fabbrocino mentioned. But these companies need to ensure that their prospects and clients understand the benefits of their anti-fraud solutions.

That’s where Bredemarket can help.

As a product marketing consultant, I help identity, biometric, and technology firms market their products to their end clients.

And I can help your firm also.

Read about Bredemarket’s content for tech marketers and book a free meeting with me to discuss your needs.

More information:

Bredemarket: Services, Process, and Pricing.

Kalshi, Polymarket, DraftKings, FanDuel, and Gambling Legality

(Bredebot helped write small parts of this post.)

Is it only smartphone game app users who are inundated with an unrelenting barrage of Kalshi ads?

If nothing else, the barrage inspired me to research Designated Contract Markets (DCMs). A DCM is a status granted and regulated by the Commodity Futures Trading Commission (CFTC), a federal agency. As such, Kalshi argues that it is exempt from state gaming regulations because it’s not hosting gambling. It’s hosting futures trading.


But Kalshi and similar apps such as Polymarket are opposed by DraftKings, FanDuel, and other sports betting apps. They make no pretense of “trading futures,” but comply with state-level gambling regulations, and use geolocation to prohibit mobile sports betting in states such as California where it is illegal.

And both are opposed by Native American casinos governed by the Indian Gaming Regulatory Act (IGRA) of 1988, which allows sovereign tribal nations to host traditional Indian games.

And they are opposed by other card houses, racetracks, bingo games, and state sponsored lotteries.

And all are opposed by the traditional Las Vegas casinos…except when they themselves host mobile apps and strike licensing deals with Native American casinos.

But the mobile app variants deal not only with geolocation, but also with digital identity verification and age verification.

And employment verification (or rather, verification of non-employment) to ensure that football players aren’t betting on football games.


Plus authentication when opening the app, to ensure Little Jimmy doesn’t open it.


There are all sorts of gaming identity stories…and Bredemarket can help identity/biometric marketers tell them.

When the Games Stopped: March 11, 2020

In late 2019 and early 2020 I was working on a project promoting biometric entry at sports facilities and concert venues…until a teeny little worldwide pandemic shut down all the sport and concert venues.

Some of you may remember that a pivotal day during that period was March 11, 2020. Among many, many other things, this was the day on which basketball fans awaited the start of a game.

“8 p.m. [ET; 7 p.m. local time]: In Oklahoma City, it was just another game day for Nerlens Noel and his Thunder teammates, who were warming up to play the visiting Utah Jazz.”

The day soon became abnormal after a meeting between NBA officials and the two coaches. Unbeknownst to the crowd, the officials and coaches were discussing a medical diagnosis of Rudy Gobert. (That’s another story.)

“8:31 p.m. [ET]: Teams were sent back to their locker rooms but the crowd at Chesapeake Energy Arena weren’t informed of the cancellation immediately. Instead, recording artist Frankie J, the intended halftime entertainment, put on his show, while officials decided how to break the news.”

Eight minutes later, the crowd was instructed to leave the arena.

Twenty minutes after that, the NBA suspended all games.


A little over a month later, on April 19, millions of people were huddled in their homes, glued to the opening episode of a TV series called The Last Dance…the only basketball any of us were going to get for a while. And of course, these games were on decades-long tape delay, and we already knew the outcome. (The Chicago Bulls won.)

And that was our basketball…until the suspended season resumed on July 30 under very bizarre circumstances.

Anyway, all of that was a very long time ago.


Games and concerts have been back in business since 2021, and identity verification and authentication of venue visitors with biometrics and other factors is becoming more popular every year.

Mistaken Identity

I generated this picture in Imagen 4 after reading an AI art prompt suggestion from Danie Wylie. (I have mentioned her before in the Bredemarket blog…twice.)

The AI exercise raises a question.

What if you are in the middle of an identity verification or authentication process, and only THEN discover that a fraudster is impersonating you at that very moment?

Oh, Joel (Texas Porn and Georgia Social Media)

The definitive summary on U.S. age assurance for adult content and social media as of today (June 27, 2025) has already been written at Biometric Update.

And I confess that if I were Joel R. McConvey, I would have been unable to resist the overpowering temptation to dip my pen in the inkwell and write the following sentence:

“But as age checks become law in more and more places, the industry will have to weigh how far it can push – or pull out.”

But McConvey’s article does not just cover the Supreme Court’s decision on Texas HB 1181’s age verification requirement for porn websites—and Justice Clarence Thomas’ statement in the majority opinion that the act “triggers, and survives, review under intermediate scrutiny because it only incidentally burdens the protected speech of adults.”

What about social media?

The Biometric Update article also notes that a separate case regarding age assurance for social media use is still winding its way through the courts. The article quotes U.S. District Judge Amy Totenberg’s ruling on Georgia SB 351:

“[T]he act curbs the speech rights of Georgia’s youth while imposing an immense, potentially intrusive burden on all Georgians who wish to engage in the most central computerized public fora of the twenty-first century. This cannot comport with the free flow of information the First Amendment protects.”

One important distinction: while opposition to pornography is primarily (albeit not exclusively) from the right of the U.S. political spectrum, opposition to social media is more broad-based. So social media restrictions are less of a party issue.

But returning to law rather than politics, one can objectively (or most likely subjectively) debate the Constitutional merits of naked people having sex vs. AI fakes of reunions of the living members of Led Zeppelin, the latter of which seem to be the trend on Facebook these days.

Minority Report

But streaking back to Texas, what of the minority opinion of the three Supreme Court Justices who dissented in the 6-3 opinion? According to The Texas Tribune, Justice Elena Kagan spoke for Justices Sonia Sotomayor and Ketanji Brown Jackson:

“But what if Texas could do better — what if Texas could achieve its interest without so interfering with adults’ constitutionally protected rights in viewing the speech HB 1181 covers? The State should be foreclosed from restricting adults’ access to protected speech if that is not in fact necessary.”

If you assume age verification (which uses a government-backed ID) rather than age estimation (which does not), the question of whether identity verification (even without document retention) is “restricting” is a muddy one.

Of course all these issues have little to do with the technology itself, reminding us that technology is only a small part of any solution.