The Facial Recognition Vendor Is Not At Fault If You Don’t Upgrade Your Software, December 2025 Edition

This is the third time that I’ve seen something like this, so I thought I’d bring attention to it.

Biometric Update recently published a story about a United Kingdom agency that was criticized for its use of Cognitec facial recognition software.

Why? Because the facial recognition software the agency has is not accurate enough, particularly with regard to demographic bias.

Note “the facial recognition software the agency has.” There’s a story here.

“Cognitec released its FaceVACS-DBScan 5.5 software for biometrics matching at scale in 2020….The current version is 5.9, but Home Office’s Police National Database uses 5.5, which is why that version was tested.”

Important clarification.

Now perhaps the agency had its reasons for not upgrading the Cognitec software.

But governments and enterprises should not use old facial recognition software. Unless they have to run the software on computers running PC-DOS. Then they have other problems.

And if you detected that this post sounds really, really similar to one I wrote back in April…you're right. Back then an Australian agency continued to use an older version of the Cognitec algorithm, even though a newer one was available.

But I’m still using the pre-Nano Banana illustration for this new post.

A question for you: is YOUR company using outdated content? Are you ready to update it? Talk to Bredemarket.

The Government Wants You To Work for A Company, Not Yourself

I’m sure you’ve heard the empowerment gurus on LinkedIn who say that people working for companies are idiots. Admittedly it seems that too many companies don’t care about their employees and will jettison them at a moment’s notice.

So what do the empowerment gurus recommend? They tell people to take control of their own destiny and work for themselves. Don’t use your talents to fatten some executive’s stock options.

Google Gemini.

However, those of us in the United States face a huge barrier to that.

Healthcare.

Unless a solopreneur’s spouse has employer-subsidized healthcare, the financial healthcare penalty for working for yourself is huge. From an individual perspective, anyway.

The average annual premium for employer-sponsored family coverage totaled about $27,000 in 2025, according to [the Kaiser Family Foundation]. This is coverage for a family of four.

But workers don’t pay the full sum. They contributed just $6,850 — about 25% — toward the total premium, according to KFF. Employers subsidized the rest, paying about $20,000, on average.

By comparison, if the enhanced ACA subsidies expire next year, the average family of four earning $130,000 would pay the full, unsubsidized premium for marketplace coverage.

Their annual insurance premiums would jump to about $23,900, more than double the subsidized cost of $11,050 — an increase of almost $12,900, according to the Center on Budget and Policy Priorities.

Google Gemini.

So how do those who oppose Communist subsidies propose to solve ACA healthcare costs?

By providing people with annual health savings account funding of…checks notes…$1,500.

Perhaps I’m deprived because of my 20th century math education, but last I checked $1,500 in funding is less than $12,900 in losses.
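For anyone who wants to check that 20th century math, here's the gap worked out as a quick sketch. Every dollar figure below comes from the KFF and CBPP numbers quoted above (rounded as reported); nothing here is independent data.

```python
# Figures quoted in the article (KFF / CBPP, 2025) -- not independent data.
employer_premium_total = 27_000   # avg. employer-sponsored family premium
worker_share = 6_850              # worker's contribution (~25%)
employer_share = employer_premium_total - worker_share  # employer subsidy

unsubsidized = 23_900   # marketplace premium if enhanced ACA subsidies expire
subsidized = 11_050     # current subsidized marketplace cost
premium_increase = unsubsidized - subsidized

hsa_funding = 1_500     # proposed annual HSA funding
shortfall = premium_increase - hsa_funding

print(f"Employer subsidy: ${employer_share:,}")
print(f"Premium increase if subsidies expire: ${premium_increase:,}")
print(f"Uncovered after $1,500 HSA funding: ${shortfall:,}")
```

Even granting the rounding, $1,500 against a roughly $12,850 increase leaves a family more than $11,000 short every year.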

People who are on COBRA, or a similar program such as Cal-COBRA, experience similar sticker shock.

So my advice to people is to do one or both of the following:

  • Get employer-subsidized healthcare.
  • Marry someone with employer-subsidized healthcare.

Step Into Christmas: Deepfake?

Deepfakes are not a 21st century invention. Take this video of “Step Into Christmas.”

But here are the musician credits.

Elton: Piano and vocals

Davey Johnstone: Guitars and backing vocals

Dee Murray: Bass guitar and backing vocals

Nigel Olsson: Drums and backing vocals

Ray Cooper: Percussion

Kiki Dee: Backing vocals (uncredited)

Jo Partridge: Backing vocals (uncredited)

Roger Pope: Tambourine (uncredited)

David Hentschel: ARP 2500 synthesizer (uncredited)

The video doesn’t match this list. According to the video, Elton played more than the piano, and Bernie Taupin performed on the track.

So while we didn’t use the term “deepfake” in 1973, this promotional video meets at least some of the criteria of a deepfake.

And before you protest that everybody knew that Elton John didn’t play guitar…undoubtedly some people saw this video and believed that Elton was a guitarist. After all, they saw it with their own eyes.

Sounds like fraud to me!

Remember this when you watch things.

Updates on Hungary’s FaceKom and “Beneficial Ownership”

Masha Borak of Biometric Update is writing about FaceKom again.

I discussed Borak’s previous article on FaceKom, which noted the alleged ties between FaceKom and the Hungarian government. The whole thing is a classic example of BENEFICIAL ownership, in which someone who is not the legal owner of a company may still benefit from it.

Borak returned to the theme in the current post:

“FaceKom, the identity verification company used by the Hungarian national digital identity program, has been acquired by major local IT and telecom group, 4iG Informatikai (4iG IT). The deal is now attracting attention among media outlets and political watchers due to the companies’ relationship with Prime Minister Viktor Orbán….

“Recent 4iG’s purchases, however, have been raising questions over the company’s reported links to the Hungarian government, which has been accused by critics of enriching political allies, family, and loyalists through state resources and public contracts.”

The details are in Borak’s post, including:

“4iG chairman and majority investor Gellért Jászai is known for his ties to Orbán and was invited as part of his entourage to Donald Trump’s Mar-a-Lago resort after the 2024 U.S. presidential election.”

“[FaceKom’s] previous owner is Equilor Fund Management, owned by the Central European Opportunity Private Equity Fund (CEOM)….While CEOM has no direct links with Orbán, local media investigations have discovered links with companies owned by the Prime Minister’s son-in-law, István Tiborcz.”

Mere links do not necessarily indicate illegal activity, and Hungarian law may differ from laws in other countries, but FaceKom is being watched.

Messy Negative “Why” Stories Are Powerful

I’ve previously talked about companies with powerful why stories. But Chantelle Davison recently pointed out something I should have realized before.

A company’s “why” story can evoke negative emotions, and for that reason it can powerfully resonate with prospects who are experiencing the same problem and who admire someone who overcame it.

As Davison tells it, her 1:1s with businesspeople often turn into confessionals. Not that I picture the lovebug-loving Chantelle as a priest, but bear with me.

“Then they tell me something they’ve been carrying around for YEARS.

“Something they’re convinced would make people think less of them.

“Something they’ve buried so deep they’d almost forgotten it was there….

“The messy backstory that shaped exactly WHY they do what they do.”

And that can resonate with prospects.

Take the Keith Puckett failure example that I shared earlier: he had purchased a home security system thinking that it would protect him…and then while he was traveling the security system sent him an alarm with no context.

Now I guess Puckett could have been embarrassed by this stupid purchase of something that did no good at all. But Puckett wasn’t embarrassed at all. And he tells Ubiety prospects that he spent good money on a bad system, experienced fear and helplessness…and that he NEVER wants Ubiety customers to experience those same negative emotions.

Share YOUR why story.

Even if it’s a poor tattoo choice.

Google Gemini.

And if you need help writing your why story, talk to me.

Groupthink From Bots

I participate in several public and private AI communities, and one fun exercise is to take another creator’s image generation prompt, run it yourself (using the same AI tool or a different tool), and see what happens. But certain tools can yield similar results, for explicable reasons.

On Saturday morning in a private community Zayne Harbison shared his Nano Banana prompt (which I cannot share here) and the resulting output. So I ran his prompt in Nano Banana and other tools, including Microsoft Copilot and OpenAI ChatGPT.

The outputs from those two generative AI engines were remarkably similar.

Copilot.
ChatGPT.

Not surprising, given the history of Microsoft and OpenAI. (It got more tangled later.)

But Harbison’s prompt was relatively simple. What if I provided a much more detailed prompt to both engines?

Create a realistic photograph of a coworking space in San Francisco in which coffee and hash brownies are available to the guests. A wildebeest, who is only partaking in a green bottle of sparkling water, is sitting at a laptop. A book next to the wildebeest is entitled “AI Image Generation Platforms.” There is a Grateful Dead poster on the brick wall behind the wildebeest, next to the hash brownies.

So here’s what I got from the Copilot and ChatGPT platforms.

Copilot.
ChatGPT.

For comparison, here is Google Gemini’s output for the same prompt.

Gemini.

So while there are more differences when using the more detailed prompt (see ChatGPT’s brownie placement), the Copilot and ChatGPT results still show similarities, most notably in the Grateful Dead logo and the color used in the book.

So what have we learned, Johnny? Not much, since Copilot and ChatGPT can perform many tasks other than image generation. There may be more differentiation when they perform SWOT analyses or other operations. As any good researcher would say, more funding is needed for further research.

But I will hazard two lessons learned:

  • More detailed prompts are better.