Oh, and one more thing.
If in 1969 I was creating videos about a 1993 standard issued by the National Institute of Standards and Technology (an agency that didn’t exist under that name in 1969)…
…I must have been a genius little boy.
Back in July 1969, American astronauts walked on the moon.
And I posted on TikTok.
Perhaps not.
As you can discern, I posted that video on TikTok in 2024. And I also posted it here.
By the way, I’ll have more serious comments about Chinese companies and fingerprints later.
(Imagen 4)
Tech CMOs want to move their prospects to act and buy world-changing offerings (products or services) from their firms…and I want to move my tech CMO prospects to act and buy marketing and writing services from Bredemarket. So tech CMOs, I definitely feel your pain. But how can you move your prospects…and how can I move you?
In my recent post about converting an end customer interview into a case study, I discussed a “problem, solution, results” simple case study outline.
Justin Welsh just discussed the same thing, but with better words.
“I copy/pasted a spreadsheet of over 100 posts I’ve written that created real impact for my readers into ChatGPT, and I found a pattern:
“Specific struggle + specific transformation = lasting change
“Not some vague tension. Not a generic transformation. Specific moments where everything shifted.”
Of course the dozen case studies I ghostwrote for my client were implicitly specific. But it’s helpful to make that word “specific” explicit.

You see what I did there. Well, as much as I could while preserving my ghostwriter status and my client’s anonymity.
This section of the blog post is specifically addressed to tech CMOs and other marketers. The rest of you can skip this part and watch this entertaining video instead.

Now I know I’ve loaded this post with links to previous Bredemarket content that addresses the…um…specific topics in much more detail. Maybe you clicked on the links, or maybe you didn’t. I will find out.
But if you are ready to move forward, this is the one link you need to click. (“Now you tell me, John!”) It lets you set up a meeting with Bredemarket to discuss your specific needs.
(The picture is only from Imagen 3. I’ve been using it since January, as you will see.)
Here’s a “why” question: why does Bredemarket write the things it writes about?
Several reasons:
And then there are really specific reasons such as this one.
In late January I first wrote about third-party risk management (TPRM) and have continued to do so since.
Why?
Because at that time, a TPRM firm had a need for content marketing and product marketing services, and Bredemarket started consulting for the firm.
I was very busy for 2 1/2 months, and the firm was happy with my work. And I got to dive into TPRM issues in great detail:
But for internal reasons that I can’t disclose (NDA, you know), the firm had to end my contract.
Never mind, I thought. I had amassed an incredible 75 days of TPRM experience—or about the same time that it takes for a BAD TPRM vendor to complete an assessment.
But how could I use this?
Why not put my vast experience to use with another TPRM firm? (Honoring the first firm’s NDA, of course.)
So I applied for a product marketing position with another TPRM firm, highlighting my TPRM consulting experience.
The company decided to move forward with other candidates.
The firm had another product marketing opening, so I applied again.
The company decided to move forward with other candidates.
Even if this company had a third position, I couldn’t apply for it because of its “maximum 2 applications in 60 days” rule.
Luckily for me, another TPRM firm had a product marketing opening. TPRM is an active market; the identity/biometrics industry isn’t hiring this many product marketers.
“Thank you for your application for the Senior Product Marketing Manager position at REDACTED. We really appreciate your interest in joining our company and we want to thank you for the time and energy you invested in your application to us.
“We received a large number of applications, and after carefully reviewing all of them, unfortunately, we have to inform you that this time we won’t be able to invite you to the next round of our hiring process.
“Due to the high number of applications, we are unfortunately not able to provide individual feedback to your application at this early stage of the process.
“Again, we really appreciated your application and we would welcome you to apply to REDACTED in the future. Be sure to keep up to date with future roles at REDACTED by following us on LinkedIn and our other social channels.
“We wish you all the best in your job search.”
Unfortunately, I apparently did not have “impressive credentials.” Oh well.
What now?
If nothing else, I will continue to write about TPRM and the issues I listed above.
Well, if any TPRM firm wants to contract with Bredemarket, schedule a meeting: https://bredemarket.com/cpa/
And if any TPRM firm wants to use my technology experience and hire me as a full-time product marketer, contact my personal LinkedIn account: https://www.linkedin.com/in/jbredehoft
I’m motivated to help your firm succeed, and make your competitors regret passing on me.
Sadly, despite my delusions of grandeur and expositor syndrome (to be addressed in a future Bredemarket blog post), I don’t think any TPRM CMOs are quaking in their boots and fearfully crying, “We missed out on Bredehoft, and now he’s going to work for the enemy and crush us!”
But I could be wrong.
(Part of the biometric product marketing expert series)
Are facial recognition algorithms accurate?
Who tests them?
The U.S. National Institute of Standards and Technology, or NIST.
Visit https://www.nist.gov/programs-projects/face-technology-evaluations-frtefate
Mike Bowers (CSIDDS) shared a Substack article by Max Houck regarding the uneven nature of forensic science in the United States. Houck’s thesis:
…how the fragmented, decentralized nature of American law enforcement and forensic practice creates a landscape where what counts as science (and possibly what counts as justice) can vary wildly depending on where you happen to be.
There are about 18,000 police agencies in the United States at all levels of government, and 400 separate forensic laboratories.
But we have standards, right?
Even when national scientific bodies like ASTM or NIST’s OSAC develop well-reasoned, consensus-based forensic standards, adoption is purely voluntary. Some laboratories fully integrate these standards, using them to validate methods, structure protocols, and train staff. Most others ignore them, modify them, or apply them selectively based on local preference or operational convenience. There is no enforcement mechanism, no unified system of oversight. The science exists, but whether it is followed depends on where you are.
Houck’s article details many other issues that plague forensic science, but the main issues arise because there are 18,000 different authorities on the matter. Because this is a structural issue, deeply rooted in how Americans think of governing ourselves, Houck doesn’t see an easy solution.
Reforming this system will not be easy. It runs up against the powerful American instincts toward local control, political independence, and legal precedent. Federal mandates for forensic accreditation, national licensing of analysts, or the establishment of an independent forensic science* oversight body (all ideas floated over the years) face stiff political and logistical resistance. I don’t give these ideas much of a chance.
Even Houck’s minimal suggestions for reform are questionable. In fact, if you read the list of his solutions at the bottom of his article, you’ll see that he’s already crossed one of them out.
Federal funding could be tied to meaningful accreditation and quality assurance requirements.
(Imagen 3)
I just listened to a Mitratech webinar on third-party risk management (TPRM) and NIST cybersecurity frameworks, hosted by OCEG, and it talked about a farm.
No, they’re not planting corn at NIST’s Gaithersburg headquarters.
(At least I don’t think so. I haven’t been there since early 2009, back when Motorola and Safran people couldn’t talk about the possible acquisition. We did anyway. But I digress.)
Back to TPRM. In Mitratech’s case, FARM stands for “frame, assess, respond, and monitor.”
Here’s how Mitratech introduced the topic in a 2022 post:
NIST SP 800-53 is considered the foundation upon which all other cybersecurity controls are built. With SP 800-161 Rev. 1, NIST outlines a complementary framework to frame, assess, respond to, and monitor cybersecurity supply chain risks. Together, SP 800-53 and supplemental SP 800-161 control guidance present a comprehensive framework for assessing and mitigating supplier risks.
If you visit the latest (as of 2024) update to SP 800-161, you can find NIST’s explanation of the FARM in Appendix G. The three referenced levels in the quote below are the enterprise, mission, and operations levels.
The first approach is known as FARM and consists of four steps: Frame, Assess, Respond, and Monitor. FARM is primarily used at Level 1 and Level 2 to establish the enterprise’s risk context and inherent exposure to risk. Then, the risk context from Level 1 and Level 2 iteratively informs the activities performed as part of the second approach described in The Risk Management Framework (RMF). The RMF predominantly operates at Level 3 [SP80037] – the operational level – and consists of seven process steps: Prepare, Categorize, Select, Implement, Assess, Authorize, and Monitor.
Briefly:
Frame: establish the context and assumptions that will shape risk decisions.
Assess: identify and analyze the threats, vulnerabilities, and potential impacts.
Respond: choose and implement a course of action (accept, avoid, mitigate, share, or transfer the risk).
Monitor: track the risks, and the effectiveness of the responses, over time.
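If you’d rather see the relationship than read about it, here’s a minimal sketch of the structure described above: FARM context gathered at Levels 1 and 2 feeding the seven RMF steps at Level 3. The step names come from the NIST quote; the classes, fields, and “findings” strings are purely illustrative assumptions on my part, not anything lifted from SP 800-161 itself.

```python
# Illustrative sketch only: a toy model of how FARM output at Levels 1 and 2
# might inform the seven RMF steps at Level 3. Step names come from the
# NIST SP 800-161 quote above; everything else here is hypothetical.
from dataclasses import dataclass, field

FARM_STEPS = ["Frame", "Assess", "Respond", "Monitor"]
RMF_STEPS = ["Prepare", "Categorize", "Select", "Implement",
             "Assess", "Authorize", "Monitor"]

@dataclass
class RiskContext:
    level: int                                # 1 = enterprise, 2 = mission, 3 = operations
    findings: list[str] = field(default_factory=list)

def run_farm(level: int) -> RiskContext:
    """Walk the four FARM steps at an enterprise or mission level (toy version)."""
    ctx = RiskContext(level=level)
    for step in FARM_STEPS:
        ctx.findings.append(f"Level {level}: {step} supply chain risk")
    return ctx

def run_rmf(contexts: list[RiskContext]) -> list[str]:
    """Run the Level 3 RMF steps, informed by the Level 1 and Level 2 FARM output."""
    inherited = [finding for ctx in contexts for finding in ctx.findings]
    return [f"Level 3: {step} (informed by {len(inherited)} upstream findings)"
            for step in RMF_STEPS]

if __name__ == "__main__":
    upstream = [run_farm(1), run_farm(2)]     # enterprise and mission levels
    for line in run_rmf(upstream):
        print(line)
```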
Section G.2 of the document includes much, much more detailed definitions of the FARM elements, should you be interested. I’d provide those details myself, but then I fear I’d have to say to you, “Sorry if I’ve stayed too long.”
I’ve been around a ton of compliance frameworks during and after the years I worked at Motorola.
There is one compliance framework that is a little different from CMM, ISO, GDPR, and all the others: the System and Organization Controls (SOC) suite of Services.
The most widely known member of the suite is SOC 2® – SOC for Service Organizations: Trust Services Criteria. But you also have SOC 1, SOC 3, SOC for Cybersecurity, SOC for Supply Chain, SOC for Steak…whoops, I made that one up because I’m hungry as I write this. But the others are real.
But the difference about the SOC suite is that it’s not governed by engineers or scientists or academics.
It’s governed by CPAs.
And for once I’m not talking about content-proposal-analysis experts.
I’m talking about the AICPA, or the American Institute of Certified Public Accountants.
Which raises the question: why are a bunch of bean counters defining compliance frameworks for cybersecurity?
Ask Schneider Downs. As an accounting firm, they may have an obvious bias regarding this question. But their answers are convincing.
So that’s why the accountants are running your SOC 2 audit.
And don’t try to cheat when you pay them for the audit.
A few of you may have detected that the phrase “SOC it to me” is derived from a popular catchphrase from the old TV show Rowan & Martin’s Laugh-In.
A phrase that EVERYBODY said.
(Wildebeest accountants from Imagen 3)
(Part of the biometric product marketing expert series)
For many years, the baseline for high-quality capture of fingerprint and palm print images has been to use a resolution of 500 pixels per inch. Or maybe 512 pixels per inch. Whatever.
The crime scene (latent) folks weren’t always satisfied with this, so they pushed to capture latent fingerprint and latent palm print images at 1000 pixels per inch. Pardon me, 1024.
But beyond this, the resolution of captured prints hasn’t really changed in decades. I’m sure some people have been capturing prints at 2000 (2048) pixels per inch, but there aren’t massive automated biometric identification systems that fully support this resolution from end to end.
But that may be changing.
For about as long as latent examiners have pursued 1000 ppi print capture, people outside of the criminal justice arena have been looking at fingerprints for a very different purpose.
Our normal civil fingerprint processes require us to identify people via fingerprints beginning at the age of 18, or perhaps at the age of 12.
But how do we identify people in those first 12 years?
More specifically, can we identify someone via their fingerprints at birth, and then authenticate them as an adult by comparing to those original prints?
It’s a dream, but many have pursued this dream. Dr. Anil Jain at Michigan State University has pursued this for years, and co-authored a 2014 paper on the topic.
Given that children, as well as the adults, in low income countries typically do not have any form of identification documents which can be used for this purpose [vaccination], we address the following question: can fingerprints be effectively used to recognize children from birth to 4 years? We have collected 1,600 fingerprint images (500 ppi) of 20 infants and toddlers captured over a 30-day period in East Lansing, Michigan and 420 fingerprints of 70 infants and toddlers at two different health clinics in Benin, West Africa.
At the time, it probably made sense to use 500 pixel per inch scanners to capture the prints, since developing countries don’t have a lot of money to throw around on expensive 1000 ppi scanners. But the use of regular scanners runs counter to a very important truth about infants and their fingerprints. Are you sitting down?
Because infants are smaller than adults, infant fingerprints are smaller than adult fingerprints.
Think about it. The standard FBI fingerprint card assumes that a rolled fingerprint occupies 1.6 inches x 1.5 inches of space. If you were to roll an infant fingerprint, it would occupy much less than that. Heck, I don’t even know if an infant’s entire FINGER is 1.6 inches long.
So the capture device is obtaining these teeny tiny ridges, and these teeny tiny ridge endings, and these teeny tiny bifurcations. Or trying to. And if those second-level details can’t be captured, then you’re not going to get the minutiae, and your fingerprint matching is going to fail.
A decade later, researchers are adopting a newer approach, according to a Biometric Update summary of an ID4Africa webinar. (This particular portion comes at the very end of the webinar, around the 2 hour 40 minute mark.)
A video presentation from Judge Lidia Maejima of the Court of Justice of Parana, Brazil introduced the emerging legal framework for biometric identification of infants. Her representative Felipe Hay explained how researchers in Brazil developed 5,000 dpi scanners, he says, which accurately record the minutiae of infants’ fingerprints.
Did you capture that? We’re moving from five hundred pixels per inch to FIVE THOUSAND pixels per inch. (Or maybe 5120.) Whether even that resolution is capable of capturing infant fingerprint detail remains to be seen.
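To put some numbers on that jump, here’s a back-of-the-envelope sketch. The 1.6 inch x 1.5 inch rolled-print block comes from the FBI card mentioned earlier; the ridge-spacing values (a roughly 0.45 mm adult ridge period, and a much finer hypothetical infant period) are illustrative assumptions I’m plugging in, not measured figures.

```python
# Back-of-the-envelope sketch: what each capture resolution buys you.
# The 1.6" x 1.5" rolled-print block is from the FBI card discussed above;
# the ridge-spacing values are illustrative assumptions, not measurements.
BLOCK_INCHES = (1.6, 1.5)          # rolled-print block: width x height
MM_PER_INCH = 25.4

ADULT_RIDGE_PERIOD_MM = 0.45       # assumed typical adult ridge-to-ridge spacing
INFANT_RIDGE_PERIOD_MM = 0.15      # hypothetical, much finer infant spacing

for ppi in (500, 1000, 5000):
    px_w, px_h = (round(inches * ppi) for inches in BLOCK_INCHES)
    px_per_mm = ppi / MM_PER_INCH
    print(f"{ppi} ppi: rolled block = {px_w} x {px_h} px "
          f"({px_w * px_h / 1e6:.1f} megapixels uncompressed), "
          f"adult ridge period ≈ {px_per_mm * ADULT_RIDGE_PERIOD_MM:.1f} px, "
          f"assumed infant period ≈ {px_per_mm * INFANT_RIDGE_PERIOD_MM:.1f} px")
```

Under those assumptions, an infant-scale ridge period spans only a few pixels at 500 ppi, which helps explain why minutiae extraction struggles, while 5,000 ppi gives you dozens of pixels per ridge (and an enormous image to store and transmit).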
And as Dr. Joseph Atick noted, all this research is still in its…um…infancy. We won’t know for years whether the algorithms can truly match infant fingerprints to child or adult fingerprints.
By the way, when talking about digital images, Adobe notes that the correct term is pixels per inch, not dots per inch. DPI specifically refers to printer resolution, which is appropriate when you’re printing a fingerprint card but not when you’re displaying an image on a screen.
(Image from https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.500-290e3.pdf )
If you create your own test data, you’re more likely to pass the test. So what data was used for Amazon One palm/vein identity scanning accuracy testing?
(Part of the biometric product marketing expert series)
(Image from Imagen 3)
I’ve previously discussed Amazon’s biometric palm/vein identity scanning efforts. But according to Dr. Sai Balasubramanian, M.D., J.D. in Forbes, Amazon is entering a new market, healthcare.
“Amazon announced that it is partnering with NYU Langone to launch Amazon One, a contactless palm screening technology, throughout the health system.”
Which makes sense, as long as the medical professional isn’t wearing gloves. I don’t know if Amazon One can read veins through medical gloves.
As I reflected upon this further, I realized something. NIST regularly tests the accuracy of face, fingerprint, and iris recognition algorithms.
But NIST has never conducted regular testing of palm identification in general, or palm/vein identity scanning in particular. Not for Amazon. Not for Fujitsu. Not for Imprivata. Not for Ingenico. Not for Pearson. Not for anybody.
So how do we know that Amazon One works?
Because Amazon said so.
“Amazon One is 100 times more accurate than scanning two irises. It raises the bar for biometric identification by combining palm and vein imagery, and after millions of interactions among hundreds of thousands of enrolled identities, we have not had a single false positive.”
Claims may dazzle some people, but (as of 2023) Jim Nash was not among them:
“The company claims it is 99.999 percent accurate but does not offer information supporting that statistic.”
And so far I haven’t found any either.
Since the company trains its algorithm on synthetically generated palms, I would like to make sure the company performs its palm/vein identity scanning accuracy testing on REAL palms. If you actually CREATE the data for any test, including an accuracy test, there’s a higher likelihood that you will pass.
I think many people would like to see public substantiated Amazon One accuracy data. ZERO false positives is a…BOLD claim to make.
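How bold? Here’s a toy sanity check, under loudly stated assumptions: read “99.999 percent accurate” as a per-identification false positive rate of 1 in 100,000, treat the interactions as independent, and see how plausible zero false positives over millions of interactions would be. Real biometric accuracy claims are more nuanced than this, so treat it as a sketch, not an audit.

```python
# Toy sanity check, under loud assumptions: treat "99.999 percent accurate"
# as a per-identification false positive rate of 1e-5, assume independent
# attempts, and estimate how plausible "zero false positives" is over
# millions of interactions. Real accuracy claims are more nuanced than this.
import math

FALSE_POSITIVE_RATE = 1e-5          # assumed reading of "99.999% accurate"

for interactions in (1_000_000, 3_000_000, 10_000_000):
    expected_fp = interactions * FALSE_POSITIVE_RATE
    p_zero = math.exp(-expected_fp)  # Poisson approximation of the binomial
    print(f"{interactions:>10,} interactions: "
          f"expected false positives ≈ {expected_fp:.0f}, "
          f"P(zero observed) ≈ {p_zero:.2e}")
```

If those assumptions held, observing zero false positives across millions of interactions would be astronomically unlikely, which is exactly why published, substantiated test data would be so welcome.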