How Can You Maximize Your Facial Recognition Or Cybersecurity Marketing Impact?

(This news was originally supposed to be embargoed until Monday April 21, but…well…things happen.)

Facial recognition and cybersecurity marketing leaders,

Stretched?

Is a stretched team holding you back from creating stellar marketing materials? Are competitors taking your prospects from you while you remain silent?

I’m John Bredehoft from Bredemarket, and I currently have TWO openings to act as your on-demand marketing muscle for facial recognition or cybersecurity:

  • compelling content creation
  • winning proposal development
  • actionable analysis

CPA?

Content creation, proposal development, analysis: CPA.

Bias?

Bias can be good when it’s a bias to action.

Satisfy your immediate needs and book a call: https://bredemarket.com/cpa/

Zoom Scam With Faces

An interesting variant on fraudulent deepfake scams.

Kenny Li of Manta fame was drawn into a scam attempt, but spotted the scam before any damage was done.

Li responded to a message from a known contact, which resulted in a Telegram conversation, which resulted in a Zoom call.

“In the call, there were team members who had their cameras on, and [the] Manta founder could see their faces. He mentioned that ‘Everything looked very real. But I couldn’t hear them.’ Then came the ‘Zoom update required’ prompt…”

Li didn’t fall for it.

(Imagen 3)

And one more thing…

The formal announcement is embargoed until Monday, but Bredemarket has TWO openings to act as your on-demand marketing muscle for facial recognition or cybersecurity:

  • compelling content creation
  • winning proposal development
  • actionable analysis

Book a call: https://bredemarket.com/cpa/ 

The Facial Recognition Vendor Is Not At Fault If You Don’t Upgrade Your Software

This is the second time that I’ve seen something like this, so I thought I’d bring attention to it.

Biometric Update recently published a story about an Australian agency that is no longer using Cognitec facial recognition software.

Why? Because the facial recognition software the agency has is not accurate enough.

Note “the facial recognition software the agency has.” There’s a story here.

Police and Counter-terrorism Minister Yasmin Catley clarified that Cognitec has released numerous updates to the product since its deployment, but the police did not purchase them. As with other developers’ offerings, Cognitec’s legacy algorithms have higher error rates for various demographic groups.

Important clarification.

Now perhaps the agency had its reasons for not upgrading the Cognitec software, and for using other software instead.

But governments and enterprises should not use old facial recognition software. Unless they have to run the software on computers running PC-DOS. Then they have other problems.

(A little aside: when I prompted Google Gemini to create the Imagen 3 image for this post, I asked it to create an image of a 1980s IBM PC running MS-DOS. Those in the know realize my prompt was incorrect. I should have requested a 1980s IBM PC running PC-DOS, not MS-DOS. PC-DOS was the version of MS-DOS that IBM licensed for its own computers, leaving Microsoft able to provide MS-DOS to the “clone computers” that eventually eclipsed IBM’s own offering.)

The Chinese Version: How to Recognize People From Quite a Long Way Away

Remember in January when OpenAI announced some great achievement, and then a few days later we learned that the Chinese firm DeepSeek could boast the same performance at a much lower cost?

These Chinese leapfrogs don’t only happen in artificial intelligence.

One kilometer facial capture

In February, I wrote about something that I initially heard of via Biometric Update. My post, “How to Recognize People From Quite a Long Way Away,” told of an effort at Heriot-Watt University in Edinburgh, Scotland in which the researchers used light detection and ranging (LiDAR) to capture and evaluate faces from as far as a kilometer away.

In normal circumstances, we capture faces from a distance of mere meters. So one kilometer facial capture is impressive.

Or is it?

One hundred kilometer facial capture

Some Chinese researchers replied, “Hold my Tsingtao,” according to a Chinese Journal of Lasers paper (in Chinese) that was reported on by Live Science (in English). (And again, I learned of this via Biometric Update.)

Scientists in China have created a satellite with laser-imaging technology powerful enough to capture human facial details from more than 60 miles (100 kilometers) away….

According to the South China Morning Post, the scientists conducted a test across Qinghai Lake in the northwest of the country with a new system based on synthetic aperture lidar (SAL), a type of laser radar capable of constructing two-dimensional or three-dimensional images.

Qinghai Lake, from Google Maps.

Writers will note that the acronym SAL incorporates the L from the acronym LiDAR. This is APO, or acronym piling on.

Since I cannot read the original report, I don’t know if the researchers actually performed tests with actual faces. But supposedly SAL “detected details as small as 0.07 inches (1.7 millimeters),” based in part upon the benefits of its technology:

[T]his new system operates at optical wavelengths, which have much shorter wavelengths than microwaves and produce clearer images (though microwaves are better for penetrating into materials, because their longer wavelengths aren’t scattered or absorbed as easily).
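To get a feel for why wavelength matters, consider the diffraction limit: the finest detail an aperture of diameter D can resolve at range R scales roughly as 1.22·λ·R/D. This minimal Python sketch solves for the physical aperture needed to resolve 1.7 millimeters at 100 kilometers. The specific wavelengths (a 1,550 nm laser and a 3 cm microwave band) are my illustrative assumptions, not figures from the paper:

```python
# Diffraction-limited resolution: delta ≈ 1.22 * wavelength * range / aperture.
# Here we solve for the aperture needed to resolve 1.7 mm at 100 km.
# The wavelengths below are illustrative assumptions, not values from the paper.

def aperture_for_resolution(wavelength_m, range_m, resolution_m):
    """Physical aperture diameter needed to achieve a target resolution."""
    return 1.22 * wavelength_m * range_m / resolution_m

RANGE = 100_000     # 100 km, in meters
TARGET = 1.7e-3     # 1.7 mm detail, in meters

optical = aperture_for_resolution(1.55e-6, RANGE, TARGET)   # assumed ~1,550 nm laser
microwave = aperture_for_resolution(0.03, RANGE, TARGET)    # assumed ~3 cm radar band

print(f"Optical aperture needed:   {optical:.0f} m")        # ~111 m
print(f"Microwave aperture needed: {microwave / 1000:.0f} km")
```

No orbiting telescope has a 100-meter physical aperture, let alone the kilometers-wide one a microwave system would need. That is presumably where the “synthetic aperture” part comes in: combining returns along the satellite’s motion simulates a much larger aperture than the hardware actually has.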

All the cited articles make a big deal about the 100 kilometer distance’s equivalence to the boundaries of space. But before you get too excited, remember that a space-hosted SAL will be ABOVE any human subjects, and therefore will NOT capture the face at an optimal angle…

Can you identify Bart Everson’s face from this picture? For all I know it could be Moby. CC-BY-2.0, https://www.flickr.com/photos/editor/158206278.

…unless you’re lying on the beach sunbathing and therefore facing TOWARD space where all the Chinese satellites can see you.

Oh, and one more thing. The Chinese tests were conducted in optimal weather conditions, and obviously you can’t get the same results in bad weather.

But in the ideal conditions, perhaps you CAN be identified remotely.

(Snowman from Imagen 3)

Examples of Biometric Technology Misuse

If I become known for anything in biometrics, I want to be known for my extremely frequent use of the words “investigative lead.” 

Whether you are talking about DNA or facial recognition, these types of biometric evidence should not be the sole evidence used to arrest a person.

For an example of why DNA shouldn’t be your only evidence, see my recent post about Amanda Knox.

Facial recognition misuse in law enforcement

Regarding facial recognition, I wrote this in a social media conversation earlier today:

“Facial recognition CAN be used as a crowd checking tool…with proper governance, including strict adherence to a policy of only using FR as an investigative lead, and requiring review of potential criminal matches by a forensic face investigator. Even then, investigative lead ONLY. Same with DNA.”

I received this reply:

“It’s true but in my experience cops rarely follow any rules.”

Now I could have claimed that this view was exaggerated, but there are enough examples of cops who DON’T follow the rules to tarnish all of them. 

Revisiting Robert Williams’ Detroit arrest

I’ve already addressed the sad story of Robert Williams, who was “wrongfully arrested based upon faulty facial recognition results.”

At the time, I did not explicitly share the circumstances behind Williams’ arrest:

“The complaint alleges that the surveillance footage is poorly lit, the shoplifter never looks directly into the camera and still a Detroit Police Department detective ran a grainy photo made from the footage through the facial recognition technology.”

There’s so much that isn’t said here, such as whether a forensic face examiner made a definitive conclusion, or if the detective just took the first candidate from the list and ran with it.

But I am willing to bet that there was no independent evidence placing Williams at the shop location.

Why this matters

The thing that concerns me about all this? It just provides ammo to the people who want to ban facial recognition entirely.

Not realizing that the alternative—manual witness (mis)identification—is far more inaccurate and far more racist.

But the controversy would pretty much go away if criminal investigators only used facial recognition and DNA as investigative leads.

How Much Does Synthetic Identity Fraud Cost?

Identity firms really hope that prospects understand the threat posed by synthetic identity fraud, or SIF.

I’m here to help.

(Synthetic identity AI image from Imagen 3.)

Estimated SIF costs in 2020

In an early synthetic identity fraud post in 2020, I referenced a Thomson Reuters (not Thomas Reuters) article from that year which quoted synthetic identity fraud figures all over the map.

  • My own post referenced the Auriemma Group estimate of a $6 billion cost to U.S. lenders.
  • McKinsey preferred to use a percentage estimate of “10–15% of charge offs in a typical unsecured lending portfolio.” However, this may not be restricted to synthetic identity fraud, but may include other types of fraud.
  • Thomson Reuters quoted Socure’s Johnny Ayers, who estimated that “20% of credit losses stem from synthetic identity fraud.”

Oh, and a later post that I wrote quoted a $20 billion figure for synthetic identity fraud losses in 2020. Plus this is where I learned the cool acronym “SIF” to refer to synthetic identity fraud. As far as I know, there is no government agency with the acronym SIF; if there were, it would of course cause confusion. (There was a Social Innovation Fund, but that may no longer exist in 2025.)

Never Search Alone, not National Security Agency. AI image from Imagen 3.

Back to synthetic identity fraud, which reportedly resulted in between $6 billion and $20 billion in losses in 2020.

Estimated SIF costs in 2025

But that was 2020.

What about now? Let’s visit Socure again:

The financial toll of AI-driven fraud is staggering, with projected global losses reaching $40 billion by 2027, up from US$12.3 billion in 2023 (CAGR 32%), driven by sophisticated fraud techniques and automation, such as synthetic identities created with AI tools.

Again this includes non-synthetic fraud, but it’s a good number for the high end. While my FTC fraud post didn’t break out synthetic identity fraud figures, Plaid cited a 2023 $1.8 billion figure for the auto industry alone, and Mastercard cited a $5 billion figure.

But everyone agrees on a figure of billions and billions.
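For readers who like to sanity-check quoted statistics, the Socure figures ($12.3 billion in 2023 growing to $40 billion by 2027 at a 32% CAGR) hang together reasonably well, as a few lines of Python confirm:

```python
# Sanity-check the quoted fraud-loss projection:
# $12.3B in 2023 growing to $40B by 2027 at a ~32% CAGR.

base_2023 = 12.3    # billions USD
target_2027 = 40.0  # billions USD
years = 4

# CAGR implied by the two endpoint figures
implied_cagr = (target_2027 / base_2023) ** (1 / years) - 1

# 2027 figure implied by the quoted 32% CAGR
projected_2027 = base_2023 * 1.32 ** years

print(f"Implied CAGR: {implied_cagr:.1%}")             # ~34%, close to the quoted 32%
print(f"32% CAGR projection: ${projected_2027:.1f}B")  # ~$37B, close to the quoted $40B
```

The small mismatch (34% implied vs. 32% quoted, $37 billion vs. $40 billion) is the kind of rounding you’d expect in a marketing statistic; the figures are consistent within a few percent.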

The real Carl Sagan.
The deepfake Carl Sagan.

(I had to stop writing this post for a minute because I received a phone call from “JP Morgan Chase,” but the person didn’t know who they were talking to, merely asking for the owner of the phone number. Back to fraud.)

Reducing SIF in 2025

In a 2023 post, I cataloged four ways to fight synthetic identity fraud:

  1. Private databases.
  2. Government documents.
  3. Government databases.
  4. A “who you are” test with facial recognition and liveness detection (presentation attack detection).

Ideally an identity verification solution should use multiple methods, and not just one. It doesn’t do you any good to forge a driver’s license if AAMVA doesn’t know about the license in any state or provincial database.
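As a sketch of what “multiple methods” might look like in practice, here is a hypothetical layered check. Every function name and data field below is invented for illustration; no real vendor API is implied:

```python
# Hypothetical layered identity check combining the four methods above.
# All names and fields are invented for illustration only.

def verify_identity(applicant, checks):
    """Run every check; approve only if all of them corroborate the identity."""
    results = {name: check(applicant) for name, check in checks.items()}
    return all(results.values()), results

checks = {
    "private_database":    lambda a: a["credit_history_found"],
    "government_document":  lambda a: a["document_authentic"],
    "government_database":  lambda a: a["dmv_record_found"],
    "face_and_liveness":    lambda a: a["selfie_matches_document"],
}

# A synthetic identity often carries a convincing forged document
# but has no corresponding official record.
applicant = {
    "credit_history_found": True,
    "document_authentic": True,
    "dmv_record_found": False,   # an AAMVA-style lookup finds no matching license
    "selfie_matches_document": True,
}

approved, details = verify_identity(applicant, checks)
print(approved)                        # False: the database layer caught it
print(details["government_database"])  # False
```

The point of the combination: a single forged document can sail through a one-method check, but a missing record in any layer blocks approval.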

And if you need an identity content marketing expert to communicate how your firm fights synthetic identities, Bredemarket can help with its content-proposal-analysis services.

Find out more about Bredemarket’s “CPA” services.

Know Your…Passenger

(Part of the biometric product marketing expert series)

OK, here’s another “KYx” acronym, courtesy of Facephi…Know Your Passenger.

And this is a critical one, and has been critical since…well, about September 11, 2001.

I saw Steve Craig’s reshare of the Facephi press release, which includes the following:

Currently, passengers must verify their identity at multiple checkpoints throughout a single journey, leading to delays and increased congestion at airports. To address this challenge, Facephi has developed technology that enables identity validation before arriving at the airport, reducing wait times and ensuring a seamless and secure travel experience. This innovation has already been successfully tested in collaboration with IATA through a proof of concept conducted last November.

More here.

The idea of creating an ecosystem in which identity is known throughout the entire passenger journey is not new to Facephi, of course. I remember that Safran developed a similar concept in the 2010s before it sold off Morpho, MorphoTrust, MorphoTrak, and Morpho Detection. And I’ve previously discussed the SITA-IDEMIA-Indico “Digital Travel Ecosystem.”

But however it’s accomplished, seamless travel benefits everyone…except the terrorists.

Have You Been Falsely Accused of NPE Use? You May Be Entitled To Compensation.

(From imgflip)

Yes, I broke a cardinal rule by placing an undefined acronym in the blog post title.

99% of all readers probably concluded that the “NPE” in the title was some kind of dangerous drug.

And there actually is something called Norpseudoephedrine that uses the acronym NPE. It was discussed in a 1998 study shared by the National Library of Medicine within the National Institutes of Health. (TL;DR: NPE “enhances the analgesic and rate decreasing effects of morphine, but inhibits its discriminative properties.”)

From the National Library of Medicine.

But I wasn’t talking about THAT NPE.

I was talking about the NPEs that are non-person entities. 

But not in the context of attribute-based access control or rivers or robo-docs.

I was speaking of using generative artificial intelligence to write text.

My feelings on this have been expressed before, including my belief that generative AI should NEVER write the first draft of any published piece.

A false accusation

A particular freelance copywriter holds similar beliefs, so she was shocked when she received a rejection notice from a company that included the following:

“We try to avoid employing people who use AI for their writing.

“Although you answered ‘No’ to our screening question, the text of your proposal is AI-generated.”

There’s only one teeny problem: the copywriter wrote her proposal herself.

(This post doesn’t name the company who made the false accusation, so if you DON’T want to know who the company is, don’t click on this link.)

Face it. (Yes, I used that word intentionally; I’ve got a business to run.) Some experts—well, self-appointed “experts”—who delve into the paragraph you’re reading right now will conclude that its use of proper grammar, em dashes, the word “delve,” and the Oxford comma PROVE that I didn’t write it. Maybe I’ll add a rocket emoji to help them perpetuate their misinformation. 🚀

Heck, I’ve used the word “delve” for years before ChatGPT became a verb. And now I use it on purpose just to irritate the “experts.”

The ramifications of a false accusation

And the company’s claim about the copywriter’s authorship is not only misinformation.

It’s libel.

I have some questions for the company that falsely accused the copywriter of using generative AI to write her proposal.

  • How did the company conclude that the copywriter did not write her proposal, but used a generative AI tool to write it?
  • What is the measured accuracy of the method employed by the company?
  • Has the copywriter been placed on a blocklist by the company based upon this false accusation?
  • Has the company shared this false accusation with other companies, thus endangering the copywriter’s ability to make a living?

If this rises to the level of personal injury, perhaps an attorney should get involved.

From imgflip.

A final thought

Seriously: if you’re accused of something you didn’t do, push back.

After all, humans who claim to detect AI have not been independently measured regarding their AI detection accuracy.

And AI-powered AI detectors can hallucinate.

So be safe, and take care of yourself, and each other.


Jerry Springer. By Justin Hoch, CC BY 2.0, https://commons.wikimedia.org/w/index.php?curid=16673259.

How to Recognize People From Quite a Long Way Away

I can’t find it, and I failed to blog about it (because reasons), but several years ago there was a U.S. effort to recognize people from quite a long way away.

Recognize, not recognise.

From https://www.youtube.com/watch?v=ug8nHaelWtc.

The U.S. effort was not a juvenile undertaking, but from what I recall was seeking solutions to wartime use cases, in which the enemy (or a friend) might be quite a long way away.

I was reminded of this U.S. long-distance biometric effort when Biometric Update reported on efforts by Heriot-Watt University in Edinburgh, Scotland and other entities to use light detection and ranging (LiDAR) to capture and evaluate faces from as far as a kilometer away.

At 325 metres – the length of around three soccer pitches – researchers were able to 3D image the face of one of their co-authors in millimetre-scale detail.

The same system could be used to accurately detect faces and human activity at distances of up to one kilometre – equivalent to the length of 10 soccer pitches – the researchers say.

(I’m surprised they said “soccer.” Maybe it’s a Scots vs. English thing.)

More important than the distance is the fact that since they didn’t depend upon visible light, they could capture faces shrouded by the environment.

“The results of our research show the enormous potential of such a system to construct detailed high-resolution 3D images of scenes from long distances in daylight or darkness conditions.

“For example, if someone is standing behind camouflage netting, this system has the potential to determine whether they are on their mobile phone, holding something, or just standing there idle. So there are a number of potential applications from a security and defence perspective.”

So much for camouflage.

But this is still in the research stage. Among other things, the tested “superconducting nanowire single-photon detector (SNSPD)” only works at about 1 kelvin.

That’s cold.