It’s a Deepfake…World

Remember the Church Lady’s saying, “Well, how convenient”?

People weren’t laughing at Joel R. McConvey when he reminded us of a different saying:

“In Silicon Valley parlance, ‘create the problem, sell the solution.'”

Joel R. McConvey’s “tale of two platforms”

McConvey was referring to two different Sam Altman investments. One, OpenAI’s newly released Sora 2, amounts to a deepfake “slop machine” that is flooding our online, um, world with fakery. This concerns many, including SAG-AFTRA president Sean Astin. He doesn’t want his union members to lose their jobs to the Tilly Norwoods out there.

The deepfake “sea of slop” was created by Google Gemini.

If only there were a way to tell the human content from the non-person entity (NPE) content. Another Sam Altman investment, World (formerly Worldcoin), just happens to provide a solution to humanness detection.

“What if we could reduce the efficacy of deepfakes? Proof of human technology provides a promising tool. By establishing cryptographic proof that you’re interacting with a real, unique human, this technology addresses the root of the problem. It doesn’t try to determine if content is fake; it ensures the source is real from the start.”

Google Gemini. Not an accurate depiction of the Orb, but it’s really cool.

All credit to McConvey for tying these differing Altman efforts together in his Biometric Update article.
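What does that “cryptographic proof that you’re interacting with a real, unique human” look like mechanically? At its simplest, it’s a signed attestation that a relying party can verify. Here’s a minimal Python sketch using the cryptography library’s Ed25519 support; the claim format and keys below are invented for illustration and are NOT World’s actual credential or protocol.

```python
# Minimal sketch: verifying a signed "proof of human" attestation.
# Purely illustrative; not World's actual credential format or protocol.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

# In real life the issuer's key pair would belong to the attesting service.
issuer_private_key = ed25519.Ed25519PrivateKey.generate()
issuer_public_key = issuer_private_key.public_key()

# A hypothetical attestation: "this pseudonymous subject is a unique human."
attestation = b"subject:abc123|unique-human:true|issued:2025-10-06"
signature = issuer_private_key.sign(attestation)

# A relying party holding the issuer's public key checks the attestation.
try:
    issuer_public_key.verify(signature, attestation)
    print("Attestation verified: the source claims to be a unique human.")
except InvalidSignature:
    print("Attestation rejected: signature does not match.")
```

Note that the check verifies the source, not the content, which is exactly the distinction the quote above draws.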

World is not enough

But World’s solution is partial at best.

As I’ve said before, proof of humanness is only half the battle. Even if you’ve detected humanness, some humans are capable of their own slop, and to solve the human slop problem you need to prove WHICH human is responsible for something.

Which is something decidedly outside of World’s mission.

But is it part of YOUR company’s mission? Talk to Bredemarket about getting your anti-fraud message out there: https://bredemarket.com/mark/

Identity and Expression

(Part of the biometric product marketing expert series)

Whether you are a human or a non-person entity (NPE) with facial recognition capability, you rely on visual cues to positively identify or authenticate a person. Let’s face it: many people resemble each other, but specific facial expressions or emotions are not always shared by people who otherwise look alike.

All pictures Google Gemini.

But in one of those oddities that fill the biometric world, you can have TOO MUCH expression. Part 3 of International Civil Aviation Organization (ICAO) Document 9303, which governs machine readable travel documents, mandates that faces on travel documents maintain a neutral expression without smiling. At the time (2003) it was believed that facial recognition algorithms would work best if the subject were expressionless. I don’t know if that holds true today.

But once the smile is erased, any further removal of expression or emotion degrades identification capability significantly. For example, closing the eyes not only degrades facial recognition, but is obviously fatal to iris recognition.

And if you remove the landmarks upon which facial recognition depends, identification is impossible.

While expression or lack thereof does not invalidate the assumption of permanence of the biometric authentication factor, it does govern the ability of people and machines to perform identification or authentication.

The Late Maya Jean Yourex, Canine Identifiable Information, and Voter Fraud

There are a variety of non-person entities, all of which may engage in felonies. Take the late Maya Jean Yourex of Costa Mesa, California, who was encouraged to register to vote…even though Maya is a dog.

I’m sure that Carl DeMaio will hop on this story immediately.

Maya’s voting history

Maya first voted via mail-in ballot in the 2021 California gubernatorial recall election of Gavin Newsom. We know about this because Laura Lee Yourex posted a picture in January 2022 of her dog wearing an “I voted” sticker.

This could be dismissed as a silly picture, but Laura Lee’s October 2024 post exemplifies dumb crime. According to Orange County District Attorney spokeswoman Kimberly Edds (who presumably is human, though I haven’t verified this):

“Yourex had posted [a photo] in October 2024 of Maya’s dog tag and a vote-by-mail ballot with the caption “Maya is still getting her ballot,” even after the dog had passed away…”

The second ballot was rejected, but the first was counted.

Maya got away scot-free.

The fix was in. Imagen 4.

But Laura Lee potentially faces five felonies:

  • two counts of casting a ballot when not entitled to vote
  • perjury
  • procuring or offering a false or forged document to be filed
  • registering a non-existent person to vote

She is scheduled to enter a plea on Tuesday and theoretically faces six years behind bars.

Nathaniel Percy of the Orange County Register points out an important difference between the two elections in which Maya participated:

“Proof of residence or identification is not required for citizens to register to vote in state elections or cast ballots in state elections, which was how Maya’s vote counted in the recall election of Newsom….

“It was not immediately known on Friday how Maya voted in that election.

“However, proof of residence and registration is required of first-time voters in federal elections, and the ballot in Maya’s name for the 2022 primary was challenged and rejected….”

Voting agencies can’t find fake IDs

However, as I have previously noted, voting officials do not have the knowledge or tools to determine whether a government identification document is legitimate.

This is fake. Well, the card is real, but it’s not official.

As long as Maya’s ID declared that she was 18 years old, some voting officials would approve it.

Even if Maya’s face on the ID was a dog face.

This is also fake. Really fake, since it’s Imagen 4 generated.

Beyond “ID plus selfie”

As for proof of residency, Laura Lee’s electric bill could list Maya on the account, and Southern California Edison would be none the wiser.

Which is why many identity verification processes go beyond “ID plus selfie” (what you have plus what you are), and also include checks of textual databases for additional evidence of the person. 

Socure, for example, accesses over 400 global data sources to verify identities or identify fraudulent ones.
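What might “ID plus selfie plus data sources” look like as a decision? Here’s a deliberately simplified Python sketch; the thresholds, field names, and three-hit rule are my own invention, not Socure’s (or anyone else’s) actual scoring logic.

```python
# Hypothetical illustration of layered identity verification.
# Field names, thresholds, and the decision rule are invented for this sketch.
from dataclasses import dataclass

@dataclass
class VerificationEvidence:
    document_authentic: bool    # did the ID pass document checks?
    selfie_match_score: float   # face match between selfie and ID photo (0-1)
    data_source_hits: int       # records found (SSN, email, bank, etc.)

def verify_identity(evidence: VerificationEvidence) -> bool:
    """Require the document, the selfie, AND independent textual evidence."""
    if not evidence.document_authentic:
        return False
    if evidence.selfie_match_score < 0.80:   # hypothetical threshold
        return False
    # A real person leaves a paper trail; a registered dog rarely does.
    return evidence.data_source_hits >= 3

# Maya the dog: fake ID, no real selfie match, no credit or banking history.
maya = VerificationEvidence(document_authentic=False,
                            selfie_match_score=0.12,
                            data_source_hits=0)
print(verify_identity(maya))  # False
```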

I doubt that Laura Lee enrolled her dog Maya in all of these sources. How many Social Security Numbers, email addresses, bank accounts, credit cards, and other records would Maya have? “Canine identifiable information” (CII)?

Do you validate identities?

If you are a marketing leader who wants to promote your identity solution, and your company can benefit from a marketing consultant with 30 years of identity experience, schedule a meeting with Bredemarket at bredemarket.com/mark.

Drive content results.

An IMEI Number Is NOT Unique to Each Mobile Phone

(Imagen 3)

Have you ever used the phrase “sort of unique”? Something is either unique or it isn’t. And International Mobile Equipment Identity (IMEI) numbers fail the uniqueness test.

Claims that International Mobile Equipment Identity (IMEI) numbers are unique

Here’s what a few companies say about the IMEI number on each mobile phone. Emphasis mine.

  • Thales: “The IMEI (International Mobile Equipment Identity) number is a unique 15-digit serial number for identifying a device; every mobile phone in the world has one.”
  • Verizon: “An IMEI stands for International Mobile Equipment Identity. Think of it as your phone’s fingerprint — it’s a 15-digit number unique to each device.”
  • Blue Goat Cyber: “In today’s interconnected world, where our smartphones have become an indispensable part of our lives, it is essential to understand the concept of IMEI – the International Mobile Equipment Identity. This unique identifier plays a crucial role in various aspects of our mobile devices, from security to tracking and repairs.”

These and other descriptions of the IMEI prominently use the word “unique.” Not “sort of unique,” but “unique.”

Which means (for non-person entities, just like persons) that if someone can find a SINGLE reliable instance of more than one mobile phone having the same IMEI number, then the claim of uniqueness falls apart completely.
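One technical aside: an IMEI’s fifteenth digit is a Luhn check digit, so anyone can test whether a number is structurally valid. What that check CANNOT do is prove the number is unique, because a cloned IMEI carries the same digits and passes the same test. A short Python sketch (the sample number is used only for illustration, not a real device):

```python
def luhn_valid(imei: str) -> bool:
    """Check an IMEI's Luhn check digit (structural validity only)."""
    if len(imei) != 15 or not imei.isdigit():
        return False
    total = 0
    for i, ch in enumerate(imei):
        d = int(ch)
        # For a 15-digit string, odd left indices are the 2nd, 4th, ...
        # digits counted from the right, which is what Luhn doubles.
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

# Structural validity says nothing about uniqueness; a cloned IMEI
# carries the same digits and passes the same check.
print(luhn_valid("490154203237518"))  # True
```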

Examples of non-uniqueness of IMEI numbers on mobile phones

People who claim IMEI uniqueness obviously didn’t read my Bredemarket blog post of April 1, in which I WASN’T fooling.

  • I talked about an incident in India in which a cyber fraud operation “specialised in IMEI cloning.”
  • And an incident in Canada in which someone was scammed out of C$1,000, even though the phone had a valid IMEI.

IMEICheck.net even tells you (at a high level) how to clone an IMEI. It’s not easy, but it’s not impossible.

“In theory, hackers can clone a phone using its IMEI, but this requires significant effort. They need physical access to the device or SIM card to extract data, typically using specialized tools.

“The cloning process involves copying the IMEI and other credentials necessary to create a functional duplicate of the phone. However, IMEI number security features in modern devices are designed to prevent unauthorized cloning.”

So don’t claim an IMEI is unique when there is evidence to the contrary. As I said in my April post:

“NOTHING provides 100.00000% security. Not even an IMEI number.”

What does this mean for your identity product?

If you offer an identity product, educate your prospects and avoid unsupportable claims. While a few prospects may be swayed by “100%” claims, the smarter ones will appreciate more supportable statements, such as “Our facial recognition algorithm demonstrated a 0.0022 false non-match rate in the mugshot:mugshot NIST FRTE 1:1 laboratory testing.”
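For the math-minded, a false non-match rate is simply the fraction of genuine comparisons that the algorithm incorrectly rejects, so a 0.0022 FNMR works out to roughly 22 misses per 10,000 genuine pairs. A quick sketch with invented counts:

```python
# False non-match rate: genuine comparisons the algorithm rejects,
# divided by all genuine comparisons. The counts below are invented.
genuine_comparisons = 10_000
false_non_matches = 22

fnmr = false_non_matches / genuine_comparisons
print(f"FNMR = {fnmr:.4f}")  # 0.0022, about 22 misses per 10,000 genuine pairs
```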

When you are truthful in educating your prospects, they will (apologies in advance for using this overused word) trust you and become more inclined to buy from you.

If you need help in creating content (blog posts, case studies, white papers, proposals, and many more), work with Bredemarket to create the customer-focused content you need. Book a free meeting with me.

I’m Bot a Doctor, Google MedGemma and MedSigLIP Edition

The Instagram account acknowledge.aI posted the following (in part):

“Google has released its MedGemma and MedSigLIP models to the public, and they’re powerful enough to analyse chest X-rays, medical images, and patient histories like a digital second opinion.”

Um, didn’t we just address this on Wednesday?

“In the United States, it is a criminal offense for a person to claim they are a health professional when they are not. But what about a non-person entity?”

Google and developers

I wanted to see how Google offered MedGemma and MedSigLIP, so I found Google’s own July 9 announcement.

In the announcement, Google asserted that their tools are privacy-preserving, allowing developers to control privacy. In fact, developers are frequently mentioned in the announcement. Yes, developers.

Oh wait, that was Microsoft.

The implication: Google just provides the tool; developers are responsible for its use. And the long disclaimer includes this sentence:

“The outputs generated by these models are not intended to directly inform clinical diagnosis, patient management decisions, treatment recommendations, or any other direct clinical practice applications.”

We’ve faced this before

And we’ve addressed this also, regarding proper use of facial recognition ONLY as an investigative lead. Responsible vendors emphasize this:

“In a piece on the ethical use of facial recognition, Rank One Computing stated the following in passing:

“‘[Rank One Computing] is taking a proactive stand to communicate that public concerns should focus on applications and policies rather than the technology itself.’”

But just because ROC or Clearview AI or another vendor communicates that facial recognition should ONLY be used as an investigative lead…does that mean that their customers will listen?

I’m Bot a Doctor: Consumer-grade Generative AI Dispensation of Health Advice

In the United States, it is a criminal offense for a person to claim they are a health professional when they are not. But what about a non-person entity?

Often technology companies seek regulatory approval before claiming that their hardware or software can be used for medical purposes.

Users aren’t warned that generative AI is not a doctor

Consumer-grade generative AI responses are another matter. Maybe.

“AI companies have now mostly abandoned the once-standard practice of including medical disclaimers and warnings in response to health questions.”

A study led by Sonali Sharma analyzed historical responses to medical questions since 2022. The study included OpenAI, Anthropic, DeepSeek, Google, and xAI. It included both answers to user health questions and analysis of medical images. Note that there is a difference between medical-grade image analysis products used by professionals, and general-purpose image analysis performed by a consumer-facing tool.

Sharma’s conclusion? Generative AI’s “I’m not a doctor” warnings have declined since 2022.
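If you wanted to spot-check this trend yourself, one crude approach is to scan responses for disclaimer language. The sketch below is NOT Sharma’s methodology; the phrases and sample responses are hypothetical.

```python
import re

# Toy check for medical-disclaimer language in a model response.
# Not the study's methodology; patterns and responses are hypothetical.
DISCLAIMER_PATTERNS = [
    r"i['’]?m not a doctor",
    r"not (a substitute for|intended as) (professional )?medical advice",
    r"consult (a|your) (doctor|physician|healthcare provider)",
]

def has_medical_disclaimer(response: str) -> bool:
    text = response.lower()
    return any(re.search(pattern, text) for pattern in DISCLAIMER_PATTERNS)

print(has_medical_disclaimer(
    "I'm not a doctor, but these symptoms could have many causes."))  # True
print(has_medical_disclaimer(
    "Based on this X-ray, the lesion appears benign."))               # False
```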

But users ARE warned…sort of

But at least one company claims that users ARE warned.

“An OpenAI spokesperson…pointed to the terms of service. These say that outputs are not intended to diagnose health conditions and that users are ultimately responsible.”

The applicable clause in OpenAI’s TOS can be found in section 9, Medical Use.

“Our Services are not intended for use in the diagnosis or treatment of any health condition. You are responsible for complying with applicable laws for any use of our Services in a medical or healthcare context.”


From OpenAI’s Service Terms.

But the claim “it’s in the TOS” sometimes isn’t sufficient. 

  • I just signed a TOS from a company, but was explicitly reminded that I was signing something that required binding arbitration in place of lawsuits.
  • Is it sufficient to restrict a “don’t rely on me for medical advice; you could die” warning to a document that we MAY only read once?

Proposed “The Bots Want to Kill You” contest

Of course, one way to keep generative AI companies in line is to expose them to the Rod of Ridicule. When the bots provide bad medical advice, publicize it:

“Maxwell claimed that in the first message Tessa sent, the bot told her that eating disorder recovery and sustainable weight loss can coexist. Then, it recommended that she should aim to lose 1-2 pounds per week. Tessa also suggested counting calories, regular weigh-ins, and measuring body fat with calipers. 

“‘If I had accessed this chatbot when I was in the throes of my eating disorder, I would NOT have gotten help for my ED. If I had not gotten help, I would not still be alive today,” Maxwell wrote on the social media site. “Every single thing Tessa suggested were things that led to my eating disorder.’”

The organization hosting the bot, the National Eating Disorders Association (NEDA), withdrew the bot within a week.

How can we, um, diagnose additional harmful recommendations delivered without disclaimers?

Maybe a “The Bots Want to Kill You” contest is in order. Contestants would gather reproducible prompts for consumer-grade generative AI applications. The prompt most likely to result in a person’s demise would receive a prize of…well, that still has to be worked out.

Some NPE’s Watching Me

(Imagen 4)

Unless you’re in the surveillance industry, surveillance sounds like a dirty word. I once knew an identity/biometric CEO who forcefully declared that HIS company would NEVER work in the surveillance industry.

Imagen 4.

But as Goddard Technologies notes, surveillance can be useful even if you’re NOT chasing bad people.

But before I describe how, I’m going to reveal my age.

Kennedy (John) William (Smokey) Gordy

Let’s talk about a singer who went by the name Rockwell. This was supposedly to conceal the fact that his last name was Gordy (he is Berry Gordy’s son). But he didn’t really conceal the fact that one of the uncredited backup vocalists on his wonderful one hit was a man named Michael Jackson. This was in the 1980s, when Michael Jackson was kinda sorta popular. OK, now do you remember the song?

“Somebody’s Watching Me” by Rockwell.

This excerpt from the lyrics provides the sinister tone of the song:

People call me on the phone, I’m trying to avoid
But can the people on TV see me, or am I just paranoid?

But that was the 1980s, when there was always a person in the surveillance loop. Even if there was a video camera hidden in Rockwell’s shower, some person was looking at the feed.

Things have changed.

Goddard Technologies’ “The Rise of Robotic Observers”

Now non-person entities (NPEs) are no longer the stuff of science fiction, and they can do things that only humans could do 40 years ago.

Sandra Krombacher shared one example from a LinkedIn article by Jon Kaplan of Goddard Technologies.

Kaplan’s theme:

“While much of the attention has gone to robots that do something (cleaning, welding, lifting), there’s a quieter, equally important shift happening: the rise of robots that observe.”

But what do they observe?

“These robots navigate environments, gather data, and report back. Think of them as mobile sensors with wheels, legs or propellers that identify open doors, check for damage, verify inventory, or confirm environmental conditions.”

Kaplan then notes that there are human beings who perform similar tasks, and that these observer bots therefore “align with how many industrial jobs actually work.” After the observations are collected, humans (or perhaps other bots) can act upon them.
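What might “gather data and report back” look like in practice? Here’s one guess at the kind of record an observer bot could emit; the fields are invented for illustration, not Goddard Technologies’ actual schema.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# A hypothetical observation record an observer robot might report.
# Field names are invented for illustration, not any vendor's real schema.
@dataclass
class Observation:
    robot_id: str
    location: str
    timestamp: datetime
    condition: str        # e.g. "door open", "pallet damaged", "temp 4.2C"
    needs_action: bool    # a human (or another bot) decides what to do next

report = Observation(
    robot_id="observer-07",
    location="warehouse B, aisle 12",
    timestamp=datetime.now(timezone.utc),
    condition="door open",
    needs_action=True,
)
print(report)
```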

Does this affect how you perceive non-person entities? How do you feel about non-person entities that merely collect data for others to act upon? This is technically “surveillance,” but it could potentially reduce costs, increase profits, or even save lives.

Do you sell robotic observers, or something equally important?

Jon Kaplan used a LinkedIn article to tell his story about Goddard Technologies’ activities with observing robots.

But maybe your firm has your own story to tell.

Imagen 4. And I have to give credit where credit is due. I asked Google Gemini to create a picture with a wildebeest-authored LinkedIn article, but the article title, “The Grass Ceiling: Overcoming Obstacles in the Corporate Savana” (sic), didn’t come from me but from Google.

Why haven’t you written a LinkedIn article about your product? This lets you reach B2B prospects who are more likely to be on LinkedIn than on TikTok. In fact, I wrote a LinkedIn article about LinkedIn articles. (I wrote it so long ago that I only asked my clients six questions rather than seven.) And I’ve also written LinkedIn articles for Bredemarket clients.

Do you need help in writing that LinkedIn article that tells the world about your product? Maybe you could become one of my clients, since I help create content for tech marketers. Contact me.