The U.S. National Institute of Standards and Technology (NIST) says that we should…drumroll…adopt standards.
Which is what you’d expect a standards-setting government agency to say.
But since I happen to like NIST, I’ll listen to its argument.
“One way AI can prove its trustworthiness is by demonstrating its correctness. If you’ve ever had a generative AI tool confidently give you the wrong answer to a question, you probably appreciate why this is important. If an AI tool says a patient has cancer, the doctor and patient need to know the odds that the AI is right or wrong.
“Another issue is reliability, particularly of the datasets AI tools rely on for information. Just as a hacker can inject a virus into a computer network, someone could intentionally infect an AI dataset to make it work nefariously.”
So we know the risks, but how do we mitigate them?
“Like all technology, AI comes with risks that should be considered and managed. Learn about how NIST is helping to manage those risks with our AI Risk Management Framework. This free tool is recommended for use by AI users, including doctors and hospitals, to help them reap the benefits of AI while also managing the risks.”
Cybersecurity professionals need to align their efforts with those of the U.S. National Institute of Standards and Technology’s (NIST’s) National Cybersecurity Center of Excellence (NCCoE). Download the NCCoE project portfolio, and plan to attend the February 19 webinar. Details below.
“The NIST National Cybersecurity Center of Excellence (NCCoE) is excited to announce the release of our inaugural Project Portfolio, providing an overview of the NCCoE’s research priorities and active projects.”
“The NCCoE serves as a U.S. cybersecurity innovation hub for the technologies, standards, and architectures for today’s cybersecurity landscape.
“Through our collaborative testbeds and hands-on work with industry, we build and demonstrate practical architectures to address real-world implementation challenges, strengthen emerging standards, and support more secure, interoperable commercial products.
“Our trusted, evidence-based guidelines show how organizations can reduce cybersecurity risks and confidently deploy innovative technologies aligned with secure standards.”
The portfolio covers:
Formal and informal collaborations with other entities.
The NCCoE’s four pillars: Data Protection, Trusted Enterprise, Artificial Intelligence, and Resilient Embedded Systems.
The “forming,” “active,” and “concluding” projects within the pillars, with links to each project.
For example, one of the listed AI projects is the Cyber AI Profile:
“Recent advancements in Artificial Intelligence (AI) technology bring great opportunities to organizations, but also new risks and impacts that need to be managed in the domain of cybersecurity. NIST is evaluating how to use existing frameworks, such as the Cybersecurity Framework (CSF), to assist organizations as they face new or expanded risks.”
The Cyber AI Profile team has published its roadmap, including workshops, working sessions, and document drafts.
And if you are a cybersecurity or identity company needing to communicate how your product protects your users, Bredemarket can help you bring your message to your prospects.
Book a free meeting with me and let’s discuss how we can work together.
Here are details on how Bredemarket works: its services, its process, and its pricing.
“A subject is a human user or NPE, such as a device that issues access requests to perform operations on objects. Subjects are assigned one or more attributes.”
If you have a process to authorize people, but don’t have a process to authorize bots, you have a problem. Matthew Romero, formerly of Veza, has written about the lack of authorization for non-human identities.
“Unlike human users, NHIs operate without direct oversight or interactive authentication. Some run continuously, using static credentials without safeguards like multi-factor authentication (MFA). Because most NHIs are assigned elevated permissions automatically, they’re often more vulnerable than human accounts—and more attractive targets for attackers.
“When organizations fail to monitor or decommission them, however, these identities can linger unnoticed, creating easy entry points for cyber threats.”
Veza recommends that people use a product that monitors authorizations for both human and non-human identities. And by the most amazing coincidence, Veza offers such a product.
People Require Authorization
And of course people require authorization also. They need authorization for all sorts of things.
Oh yeah…and to access privileged resources on corporate networks.
It’s not enough to identify or authenticate a person or NPE. Once that is done, you need to confirm that this particular person or NPE has the authorization to…launch a nuclear bomb. Or whatever.
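To make the idea concrete, here’s a minimal sketch in Python of an authorization check that treats human users and NPEs the same way: identify, authenticate, then check an explicit grant before allowing the action. The subject names, policy structure, and permissions are all made up for illustration; this is not any particular vendor’s API.

from dataclasses import dataclass, field

@dataclass
class Subject:
    """A human user or non-person entity (NPE) that issues access requests."""
    name: str
    kind: str                       # "human" or "npe"
    authenticated: bool = False

# Hypothetical policy: every subject, human or bot, gets an explicit permission set.
POLICY = {
    "alice":      {"read:records", "write:records"},
    "backup-bot": {"read:records"},        # NPE with narrowly scoped access
}

def authorize(subject: Subject, action: str) -> bool:
    """Authorization happens only after identification and authentication."""
    if not subject.authenticated:
        return False                        # authenticate first
    allowed = POLICY.get(subject.name, set())
    return action in allowed                # then check the explicit grant

# A bot with no entry in the policy is denied by default -- no lingering access.
bot = Subject(name="old-report-bot", kind="npe", authenticated=True)
print(authorize(bot, "read:records"))       # False

The point of the sketch is the default-deny posture: an NHI that nobody bothered to decommission simply has no grant left to exploit.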
Your Customers Require Information on Your Authorization Solution
If your company offers an authorization solution, and you need Bredemarket’s content, proposal, or analysis consulting help, talk to me.
Experienced biometric professionals can’t help but notice that the acronym OFIQ is similar to the acronym NFIQ (used in NFIQ 2), but the latter refers to the NIST FINGERPRINT image quality standard. NFIQ is also open source, with contributions from NIST and the German BSI, among others.
But NFIQ and OFIQ, while analyzing different biometric modalities, serve a similar purpose: to distinguish between good and bad biometric images.
But do these open source algorithms meaningfully measure quality?
The study of OFIQ
Biometric Update alerted readers to the November 2025 study “On the Utility of the Open Source Facial Image Quality Tool for Facial Biometric Recognition in DHS Operations” (PDF).
Note the words “in DHS Operations,” which are crucial.
The DHS doesn’t care about how ALL facial recognition algorithms perform.
The DHS only cares about the facial recognition algorithms that it may potentially use.
DHS doesn’t care about algorithms it would never use, such as Chinese or Russian algorithms.
In fact, from the DHS perspective, it probably hopes that the Chinese CloudWalk algorithm performs very badly. (In NIST tests, it doesn’t.)
So which algorithms did DHS evaluate? We don’t know precisely.
“A total of 16 commercial face recognition systems were used in this evaluation. They are labeled in diagrams as COTS1 through COTS16….Each algorithm in this study was voluntarily submitted to the MdTF as part of on-going biometric performance evaluations by its commercial entity.”
So what did DHS find when it used OFIQ to evaluate images submitted to these 16 algorithms?
“We found that the OFIQ unified quality score provides extremely limited utility in the DHS use cases we investigated. At operationally relevant biometric thresholds, biometric matching performance was high and probe samples that were assessed as having very low quality by OFIQ still successfully matched to references using a variety of face recognition algorithms.”
Or in human words:
Images that yielded a high quality OFIQ score accurately matched faces using the tested algorithms.
Images that yielded a low quality OFIQ score…STILL accurately matched faces using the tested algorithms.
So, at least in DHS’ case, it makes no sense to use the OFIQ algorithm.
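If you wanted to run the same sanity check on your own data, the analysis is easy to sketch. The Python snippet below uses hypothetical quality scores and match outcomes (not DHS’s data, and not the real OFIQ library): it splits probe images by quality score and compares match rates. If the low-quality probes match just as often as the high-quality ones, the quality score isn’t earning its keep.

# Hypothetical data: (quality_score, matched) pairs for a set of probe images.
# In a real test, the quality would come from OFIQ and the match outcome from
# your face recognition algorithm at an operational threshold.
probes = [
    (12, True), (18, True), (25, True), (31, True),   # "low quality" probes
    (62, True), (70, True), (85, True), (91, False),  # "high quality" probes
]

def match_rate_by_quality(probes, cutoff=40):
    """Compare match rates for probes below and above a quality cutoff."""
    low  = [matched for score, matched in probes if score < cutoff]
    high = [matched for score, matched in probes if score >= cutoff]
    return sum(low) / len(low), sum(high) / len(high)

low_rate, high_rate = match_rate_by_quality(probes)
print(f"low-quality match rate:  {low_rate:.2f}")
print(f"high-quality match rate: {high_rate:.2f}")
# If the two rates are nearly identical, the quality score adds little value
# for this use case -- which is essentially what DHS reported.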
These results show that identical twins and same-sex fraternal twins give outcomes that are inconsistent with the intended or expected behaviour from a face recognition algorithm.
“Perhaps the most visible change is the push for phishing-resistant authentication—methods like passkeys, hardware-backed authenticators, and device binding….This shift signals that yesterday’s non-phishing-resistant MFA (SMS codes, security questions, and email OTPs) is no longer enough because they are easily compromised through man-in-the-middle or social engineering attacks like SIM swapping.”
I just asked Google Gemini to generate an illustration of the benefits of orchestration. You can see my original prompt and the resulting illustration, credited to Bredebot, in the blog post “Orchestration: Harmonizing the Tech Universe.” (Not “Harmonzing.” Oh well.)
Note the second of the two benefits listed in Bredebot’s AI-generated illustration: “Reduced Complexity.”
On the surface, this sounds like generative AI getting the answer wrong…again.
After all, the reason that software companies offer a single-vendor solution is that when everything comes from the same source, it’s easier to get everything to work together.
When you have an orchestrated solution incorporating elements from multiple vendors, common sense tells you that the resulting solution is MORE complex, not less complex.
When I reviewed the image, I was initially tempted to ask Bredebot to write a response explaining how orchestrated solutions reduce complexity. But then I decided that I should write this myself.
Because I had an idea.
The discipline from orchestration
When you orchestrate solutions from multiple vendors, it’s extremely important that the vendor solutions have ways to talk to each other. This is the essence of orchestration, after all.
Because of this need, you HAVE to create rules that govern how the software packages talk to each other.
Let me cite an example from one of my former employers, Incode. As part of its identity verification process, Incode can interface with selected government systems and process government validations. After all, I may have something that looks like a Mexican ID, but is it really a Mexican ID?
“Mexico – INE Validation. When government face validation is enabled this method compares the user’s selfie against the image in the INE database. The method should be called after add-face is over and one of (process-id or document-id) is over.”
So Incode needs a standard way to interface with Mexico’s electoral registry database for this whole thing to work. Once that’s defined, you just follow the rules and everything should work.
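Here’s what that discipline looks like in practice. The Python sketch below borrows the step names loosely from the quoted documentation, but the host, URL paths, parameters, and helper functions are all hypothetical; this is an illustration of enforcing a documented call order, not Incode’s actual API.

import requests

BASE_URL = "https://api.example.com"   # hypothetical host, not a real endpoint

def call(step: str, session_id: str, payload: dict) -> dict:
    """Every integration goes through the same documented pattern:
    one endpoint per step, same headers, same error handling."""
    response = requests.post(
        f"{BASE_URL}/{step}",
        json={"session_id": session_id, **payload},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

def verify_mexican_id(session_id: str, selfie: str, id_image: str) -> dict:
    # The documented order: add the face, process the document,
    # and only then ask the government database to validate.
    call("add-face", session_id, {"image": selfie})
    call("process-id", session_id, {"image": id_image})
    return call("government-validation", session_id, {"country": "MX"})

Because the rules of the road are written down, any integrator who follows the order gets the same predictable result.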
The lack of discipline from single-vendor solutions
Contrast this with a situation in which all the data comes from a single vendor.
Now ideally, interfaces between single-vendor systems should be defined in the same way as interfaces between multi-vendor systems. That way everything is neatly organized and future adaptations are easy.
Sounds great…until you have a deadline to meet and you need to do it quick and dirty.
In the same way that computer hardware server rooms can become a tangle of spaghetti cables, computer software can become a tangle of spaghetti interfaces. All because you have to get it done NOW. Someone else can deal with the problems later.
So that’s my idea on how orchestration reduces complexity. But what about those who really know what they’re talking about?
Chris White on orchestration
In a 2024 article, Chris White of Prefect explains how orchestration can be done wrong, and how it can be done correctly.
“I’ve seen teams struggle to justify the adoption of a first-class orchestrator, often falling back on the age-old engineer’s temptation: “We’ll just build it ourselves.” It’s a siren song I know well, having been lured by it myself many times. The idea seems simple enough – string together a few scripts, add some error handling, and voilà! An orchestrator is born. But here’s the rub: those homegrown solutions have a habit of growing into unwieldy systems of their own, transforming the nature of one’s role from getting something done to maintaining a grab bag of glue code.
“Orchestration is about bringing order to this complexity.”
So how do you implement ordered orchestration? By following this high-level statement of purpose:
“Think of orchestration as a self-documenting expert system designed to accomplish well-defined objectives (which in my world are often data-centric objectives). It knows the goal, understands the path to achieve it, and – crucially – keeps a detailed log of its journey.”
Read White’s article for a deeper dive into these three items.
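To show what those three items look like in code, here is a toy orchestrator in plain Python. It is a minimal sketch of the pattern White describes (a stated goal, an ordered path of steps, and a detailed log of the journey), not Prefect’s actual API, and the steps are hypothetical.

import logging

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
log = logging.getLogger("orchestrator")

def run_workflow(goal: str, steps):
    """A toy orchestration layer: it knows the goal, the ordered path of
    steps to reach it, and it keeps a detailed log of the journey."""
    log.info("Starting workflow: %s", goal)
    context = {}
    for step in steps:
        log.info("Running step: %s", step.__name__)
        try:
            context = step(context)     # each step hands its results to the next
        except Exception:
            log.exception("Step %s failed; workflow halted", step.__name__)
            raise
    log.info("Workflow complete: %s", goal)
    return context

# Hypothetical steps, possibly supplied by two different vendors' systems.
def extract_records(ctx):
    ctx["records"] = ["a", "b"]
    return ctx

def load_records(ctx):
    ctx["loaded"] = len(ctx["records"])
    return ctx

run_workflow("Nightly data sync", [extract_records, load_records])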
Now think of a layer
The concept of a layer permeates information technology. There are all sorts of models that describe layers and how they work with each other.
“In modern IT systems, an orchestration layer is a software layer that links the different components of a software system and assists with data transformation, server management, authentication, and integration. The orchestration layer acts as a sophisticated mediator between various components of a system, enabling them to work together harmoniously. In technical terms, the orchestration layer is responsible for automating complex workflows, managing communication, and coordinating tasks between diverse services, applications, and infrastructure components.”
In this edition of The Repurposeful Life, I’m revisiting a prior post (“Is the Quantum Security Threat Solved Before It Arrives? Probably Not.”) and extracting just the part that deals with the National Institute of Standards and Technology (NIST) Federal Information Processing Standard (FIPS) 204.
Thales used the NIST “FIPS 204 standard to define a digital signature algorithm for a new quantum-resistant smartcard: MultiApp 5.2 Premium PQC.”
The NIST FIPS 204 standard, “Module-Lattice-Based Digital Signature Standard,” can be found here. This is the abstract:
“Digital signatures are used to detect unauthorized modifications to data and to authenticate the identity of the signatory. In addition, the recipient of signed data can use a digital signature as evidence in demonstrating to a third party that the signature was, in fact, generated by the claimed signatory. This is known as non-repudiation since the signatory cannot easily repudiate the signature at a later time. This standard specifies ML-DSA, a set of algorithms that can be used to generate and verify digital signatures. ML-DSA is believed to be secure, even against adversaries in possession of a large-scale quantum computer.”
ML-DSA stands for “Module-Lattice-Based Digital Signature Algorithm.”
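If you want to see what “generate and verify digital signatures” looks like at the code level, here is a minimal sketch. It assumes the open-source liboqs-python bindings and a liboqs build that exposes the ML-DSA-65 parameter set; it illustrates the general sign-and-verify pattern, not Thales’ implementation.

import oqs  # liboqs-python bindings; assumes a liboqs build that includes ML-DSA-65

message = b"Sign me with a quantum-resistant algorithm"

# Key generation and signing.
with oqs.Signature("ML-DSA-65") as signer:
    public_key = signer.generate_keypair()
    signature = signer.sign(message)

# Verification: anyone holding the public key can check the signature
# and detect any unauthorized modification to the signed data.
with oqs.Signature("ML-DSA-65") as verifier:
    print("signature valid:", verifier.verify(message, signature, public_key))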
Now I’ll admit I don’t know a lattice from a vertical fence post, especially when it comes to quantum computing, so I’ll have to take NIST’s word for it that modules and lattice are super-good security.
“A lattice is a hierarchical structure that consists of levels, each representing a set of access rights. The levels are ordered based on the level of access they grant, from more restrictive to more permissive.”
You can see how a lattice of access levels fits into an access control mechanism such as NordVPN’s multi-tenant cloud example. (The “lattice” in ML-DSA is a different animal: a mathematical grid of points whose hard computational problems keep signatures like those on Thales’ smartcard secure, even against a quantum computer.)
Because there are some things that Tom Sawyer can access, but Injun Joe must not access.
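Taking the quoted definition literally, a lattice of ordered access levels is easy to picture in code. The sketch below is a made-up Python example (the levels and resources are mine; the users are Mark Twain’s, per the sentence above, not any real system): access is granted only if the user’s level is at least as permissive as the level the resource requires.

# Ordered access levels, from most restrictive to most permissive
# (per the quoted definition of a lattice of levels).
LEVELS = ["public", "restricted", "secret"]

# Hypothetical clearances and resource classifications.
CLEARANCE = {"Tom Sawyer": "secret", "Injun Joe": "public"}
RESOURCES = {"cave map": "secret", "town notice": "public"}

def can_access(user: str, resource: str) -> bool:
    """Grant access only if the user's level dominates the resource's level."""
    user_level = LEVELS.index(CLEARANCE[user])
    resource_level = LEVELS.index(RESOURCES[resource])
    return user_level >= resource_level

print(can_access("Tom Sawyer", "cave map"))   # True
print(can_access("Injun Joe", "cave map"))    # False

Tom gets the cave map; Joe doesn’t.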
I’ll confess: there is a cybersecurity threat so…um…threatening that I didn’t even want to think about it.
You know the drill. The bad people use technology to come up with some security threat, and then the good people use technology to thwart it.
That’s what happens with antivirus. That’s what happens with deepfakes.
But I kept on hearing rumblings about a threat that would make all this obsolete.
The quantum threat and the possible 2029 “Q Day”
Today’s Q word is “quantum.”
But with great power comes great irresponsibility. Gartner said it:
“By 2029, ‘advances in quantum computing will make conventional asymmetric cryptography unsafe to use,’ Gartner said in a study.”
Frankly, this frightened me. Think of the possibilities that come from calculation superpowers. Brute force generation of passcodes, passwords, fingerprints, faces, ID cards, or whatever is necessary to hack into a security system. A billion different combinations? No problem.
“The good news is that technology companies, governments and standards agencies are well aware of the deadline. They are working on defensive strategies to meet the challenge — inventing cryptographic algorithms that run not just on quantum computers but on today’s conventional components.
“This technology has a name: post-quantum cryptography.
“There have already been notable breakthroughs. In the last few days, Thales launched a quantum-resistant smartcard: MultiApp 5.2 Premium PQC. It is the first smartcard to be certified by ANSSI, France’s national cybersecurity agency.
“The product uses new generation cryptographic signatures to protect electronic ID cards, health cards, driving licences and more from attacks by quantum computers.”
So what’s so special about the technology in the MultiApp 5.2 Premium PQC?
Thales used the NIST “FIPS 204 standard to define a digital signature algorithm for a new quantum-resistant smartcard: MultiApp 5.2 Premium PQC.”
The NIST FIPS 204 standard, “Module-Lattice-Based Digital Signature Standard,” can be found here. This is the abstract:
“Digital signatures are used to detect unauthorized modifications to data and to authenticate the identity of the signatory. In addition, the recipient of signed data can use a digital signature as evidence in demonstrating to a third party that the signature was, in fact, generated by the claimed signatory. This is known as non-repudiation since the signatory cannot easily repudiate the signature at a later time. This standard specifies ML-DSA, a set of algorithms that can be used to generate and verify digital signatures. ML-DSA is believed to be secure, even against adversaries in possession of a large-scale quantum computer.”
ML-DSA stands for “Module-Lattice-Based Digital Signature Algorithm.”
Now I’ll admit I don’t know a lattice from a vertical fence post, especially when it comes to quantum computing, so I’ll have to take NIST’s word for it that modules and lattice are super-good security.
Certification, schmertification
The Thales technology was then tested by researchers to determine its Evaluation Assurance Level (EAL). The result? “Thales’ product won EAL6+ certification (the highest is EAL7).” (TechTarget explains the 7 evaluation assurance levels here.)
France’s national cybersecurity agency (ANSSI) then certified it.
However…
…remember that certifications mean squat.
For all we know, the fraudsters have already broken the protections in the FIPS 204 standard.
And the merry-go-round between fraudsters and fraud fighters continues.
If you need help spreading the word about YOUR anti-fraud solution, quantum or otherwise, schedule a free meeting with Bredemarket.