In business, it is best to use a three-legged stool.
A two-legged stool obviously tips over, and you fall to the ground.
A four-legged stool is too robust for these cost-conscious days, when jettisoning employees is policy in both the private and public sectors.
But a three-legged stool is just right, as project managers already know when they strive to balance time, cost, and quality.
Perhaps the three-legged stool was in the back of Yunique Demann’s mind when she wrote a piece for the Information Systems Audit and Control Association (ISACA) entitled “The New Triad of AI Governance: Privacy, Cybersecurity, and Legal.” If you only rely on privacy and cybersecurity, you will fall to the ground like someone precariously balanced on a two-legged stool.
“As AI regulations evolve globally, legal expertise has become a strategic necessity in AI governance. The role of legal professionals now extends beyond compliance into one that is involved in shaping AI strategy and legally addressing ethical considerations…”
When marketers write content for Chief Information Security Officers, we need to ensure they’re listening. The content needs to speak to their concerns. Understanding their emotions helps us to do that.
Tapping into their emotions helps to ensure the CISOs are paying attention, and that the CISOs are not dismissing our content as unimportant and unworthy of their attention. (See what I did there, dear marketer?)
Are our prospects listening to us?
I’ve talked about emotions and content before. My approach is fairly simple: identify the emotions encountered at two stages of the customer journey:
The negative emotions faced at the “problem” stage. Perhaps fear, anger, or helplessness.
The positive emotions faced at the “results” stage, after you have provided the customer with the solution to their problem. This could be the happiness or satisfaction resulting from hope, accomplishment, or empowerment.
What do CISOs fear?
I’m reworking a client piece targeted to Chief Information Security Officers (CISOs), and I needed to re-examine the things that keep CISOs up at night. I started with a rudimentary list.
Cyberattacks. (Duh.)
Technological complexity.
Resource constraints.
Corporate liability.
Job security.
A good list—well, I think so—but is it good enough? (Or big enough?) The elements are rather abstract, since you can discuss concepts such as “resource constraints” without FEELING them.
What do CISOs really fear?
Maslow’s famous hierarchy of needs is (literally) based upon physiological (survival) and safety needs. Can I translate the abstractions above into something more primal?
Loss of all our information, leaving us dumb and helpless.
Confusion and bewilderment in (as the AI bots are fond of saying) “the ever-changing landscape.”
Overwhelming burnout from too much to do.
No money after being sued into oblivion.
Wandering the streets homeless and starving after losing your job and your income.
How should we express those fears?
Now there are various ways to express those primal fears. I could go for maximum effect (will the wrong decision today leave you homeless and starving tomorrow?), or I could write something a little less dramatic (are you vulnerable to the latest cyber threats?). The words you choose depend on your company’s messaging tone, which is why I recently reshared my original brand archetypes post from August 2021. A Sage will say one thing, a Hero another.
Why?
Anyway, thank you for reading. Writing this helped me, and maybe it gave you some ideas. And if you want to know more about the seven questions I like to ask before creating content (emotions being the 7th), read my ebook on the topic.
I just read a post by SentinelOne, but it’s too early to tell if this is just a string of buzzwords or a legitimate endeavor.
The post about a proposed “Autonomous SOC Maturity Model” (ASOCMM?) includes buzzwords such as “autonomous,” “SOC” (system and organization controls, or security operations center – take your pick), “agentic AI,” and of course “maturity model.”
Having done my maturity model time during my days at Motorola Solutions predecessor Motorola (although our group stuck with CMM rather than moving on to CMMI), I’ve certainly seen the benefits and drawbacks of maturity models for organizations large and small. Or perhaps just for organizations large: I shudder at the thought of implementing a maturity model at a startup; the learning curve at the Printrak part of Motorola was bad enough. You need to hit the target between no process and process for process’s sake.
So what of this autonomous SOC maturity model? Perhaps it can be real.
“At SentinelOne, we see the Autonomous SOC through the lens of a maturity model. We welcome debate on where we, as an industry, are on this evolutionary revolution. We hope most will agree that this is a better way to look at Autonomous SOC innovation and adoption – far better than the binary, all-or-nothing debates that have long fueled analyst, vendor, and industry watcher blogs and keynotes.”
If nothing else, a maturity model approach lends (or can lend) itself to continuous improvement, rather than just checking off a box and saying you’re done. A Level 5 (or Level 4 on a 0-4 scale) organization, if it believes what it’s saying, is ALWAYS going to improve.
This metal injection attack isn’t from an Ozzy Osbourne video, but from a video made by an expert lock picker in 2019 against a biometric gun safe.
The biometric gun safe is supposed to deny access to a person whose fingerprint biometrics aren’t registered (and who doesn’t have the other two access methods). But as Hackaday explains:
“(T)he back of the front panel (which is inside the safe) has a small button. When this button is pressed, the device will be instructed to register a new fingerprint. The security of that system depends on this button being inaccessible while the safe is closed. Unfortunately it’s placed poorly and all it takes is a thin piece of metal slid through the thin opening between the door and the rest of the safe. One press, and the (closed) safe is instructed to register and trust a new fingerprint.”
Biometric protection is of no use if you can bypass the biometrics.
But was the safe (subsequently withdrawn from Amazon) overpromising? The Firearm Blog asserts that we shouldn’t have expected much.
“To be fair, cheap safes like this really are to keep kids, visitors, etc from accessing your guns. Any determined person will be able to break into these budget priced sheet metal safes….”
But still, the ease of bypassing the biometric protection is deemed “inexcusable.”
So how can you detect this injection attack? One suggestion: only allow the new-fingerprint registration control to work when the safe is open (meaning that an authorized user has presumably opened the safe). When the safe is closed, inserting a thin piece of metal shouldn’t trigger biometric registration.
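To make that fix concrete, here’s a minimal sketch of what gating enrollment on the door sensor might look like. The names and structure are my own illustration, not the actual firmware of this or any other safe.

```python
# Hypothetical sketch: gate fingerprint enrollment on the door being open.
# SafeState, handle_register_button, etc. are my own illustration, not any
# manufacturer's real firmware.

from dataclasses import dataclass, field

@dataclass
class SafeState:
    door_open: bool = False                           # set by the door sensor
    enrolled_prints: list[bytes] = field(default_factory=list)

def handle_register_button(state: SafeState, new_print: bytes) -> bool:
    """Called when the hidden registration button is pressed."""
    if not state.door_open:
        # The metal-shim attack presses this button through the door gap.
        # Refusing enrollment while the door is closed defeats it.
        return False
    state.enrolled_prints.append(new_print)
    return True
```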
For other discussions of injection attack detection, see these posts: one, two.
By the way, this is why I believe passwords will never die. If you want a cheap way to lock something, just use a combination. No need to take DNA samples or anything.
Oh, and a disclosure: I used Google Gemini to research this post. Not that it really helped.
And of course I referenced Veridas in my February 7 post when it defined the difference between presentation attack detection and injection attack detection.
Biometric Update played up this difference:
To stay ahead of the curve, Spanish biometrics company Veridas has introduced an advanced injection attack detection capability into its system, to combat the growing threat of synthetic identities and deepfakes….
Veridas says that standard fraud detection only focuses on what it sees or hears – for example, face or voice biometrics. So-called Presentation Attack Detection (PAD) looks for fake images, videos and voices. Deepfake detection searches for the telltale artifacts that give away the work of generative AI.
Neither are monitoring where the feed comes from or whether the device is compromised.
I can revisit the arguments about whether you should get PAD and…IAD?…from the same vendor, or whether you should get best-in-class solutions to address each issue separately.
When considering falsifying identity verification or authentication, it’s helpful to see how Veridas defines two different types of falsification:
Presentation Attacks: These involve an attacker presenting falsified evidence directly to the capture device’s camera. Examples include using photocopies, screenshots, or other forms of impersonation to deceive the system.
Injection Attacks: These are more sophisticated, where the attacker introduces false evidence directly into the system without using the camera. This often involves manipulating the data capture or communication channels.
To be honest, most of my personal experience involves presentation attacks, in which the identity verification/authentication system remains secure but the information, um, presented to it is altered in some way. See my posts on Vision Transformer (ViT) Models and NIST IR 8491.
In an injection attack, the identity verification/authentication system itself is compromised. For example, instead of taking its data from the camera, data from some other source is, um, injected so that it looks like it came from the camera.
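One way I picture the difference: if the genuine capture device can attest to its own output, the identity system can reject anything it can’t attest. Here’s a minimal conceptual sketch using an HMAC over each frame; the key handling and function names are my assumptions, not any vendor’s implementation.

```python
# Conceptual sketch of feed provenance checking (my illustration, not a
# vendor implementation). A trusted capture device signs each frame; the
# verification service rejects frames without a valid signature, which is
# exactly what an injected (non-camera) feed would lack.
import hashlib
import hmac
import os

DEVICE_KEY = os.urandom(32)   # provisioned into the trusted camera

def camera_capture(frame: bytes) -> tuple[bytes, bytes]:
    """The genuine camera returns the frame plus a signature over it."""
    sig = hmac.new(DEVICE_KEY, frame, hashlib.sha256).digest()
    return frame, sig

def verify_frame(frame: bytes, sig: bytes) -> bool:
    """The identity system accepts only frames it can tie to the camera."""
    expected = hmac.new(DEVICE_KEY, frame, hashlib.sha256).digest()
    return hmac.compare_digest(expected, sig)

genuine_frame, genuine_sig = camera_capture(b"live selfie pixels")
print(verify_frame(genuine_frame, genuine_sig))        # True

injected_frame = b"deepfake video pixels"              # bypasses the camera
print(verify_frame(injected_frame, b"\x00" * 32))      # False
```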
Incidentally, injection attacks greatly differ from scraping attacks, in which content from legitimate blogs is stolen and injected into scummy blogs that merely rip off content from the original writers. Speaking for myself, this repurposing is not an honorable practice.
Note that injection attacks don’t only affect identity systems, but can affect ANY computer system. SentinelOne digs into the different types of injection attacks, including manipulation of SQL queries, cross-site scripting (XSS), and other types. Here’s an example from the health world that is pertinent to Bredemarket readers:
In May 2024, Advocate Aurora Health, a healthcare system in Wisconsin and Illinois, reported a data breach exposing the personal information of 3 million patients. The breach was attributed to improper use of Meta Pixel on the websites of the provider. After the breach, Advocate Health was faced with hefty fines and legal battles resulting from the exposure of Protected Health Information (PHI).
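SQL injection is the oldest member of that family, and it’s worth seeing in miniature. Here’s a toy sketch (my example, not SentinelOne’s) of a query built by string splicing versus a parameterized query; the table and column names are hypothetical.

```python
# Minimal sketch of SQL injection vs. a parameterized query.
# The users table is hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

user_input = "' OR '1'='1"  # a classic injection payload

# Vulnerable: the payload is spliced into the SQL string, so the WHERE
# clause becomes always-true and every row is returned.
unsafe = conn.execute(
    f"SELECT * FROM users WHERE name = '{user_input}'"
).fetchall()
print("unsafe:", unsafe)   # leaks both rows

# Safe: the driver treats the payload as a literal value, not as SQL.
safe = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()
print("safe:", safe)       # returns no rows
```

And when the injected payload is a fake video feed rather than a SQL string, you get the deepfake variant: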
Deepfakes utilize AI and machine learning to create lifelike videos of real people saying or doing things they never actually did. By injecting such videos into a system’s feed, fraudsters can mimic the appearance of a legitimate user, thus bypassing facial recognition security measures.
Again, this differs from someone with a mask getting in front of the system’s camera. Injections bypass the system’s camera.
Fight back, even when David Horowitz isn’t helping you
So how do you detect that you aren’t getting data from the camera or capture device that is supposed to be providing it? Many vendors offer tactics to attack the attackers; here’s what ID R&D (part of Mitek Systems) proposes.
These steps include creating a comprehensive attack tree, implementing detectors that cover all the attack vectors, evaluating potential security loopholes, and setting up a continuous improvement process for the attack tree and associated mitigation measures.
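An “attack tree” can sound abstract, so here’s a toy sketch of one way to track it: nodes for attack vectors, a list of detectors covering each leaf, and a walk that reports uncovered vectors. The structure and vector names are my assumptions, not ID R&D’s actual model.

```python
# Toy attack tree with detector coverage (my own structure, not ID R&D's).
# Each leaf is an attack vector; the audit walks the tree and reports
# vectors with no mitigating detector.
from dataclasses import dataclass, field

@dataclass
class AttackNode:
    name: str
    detectors: list[str] = field(default_factory=list)
    children: list["AttackNode"] = field(default_factory=list)

def uncovered(node: AttackNode) -> list[str]:
    """Return the leaf attack vectors that no detector currently covers."""
    gaps = []
    if not node.children and not node.detectors:
        gaps.append(node.name)
    for child in node.children:
        gaps.extend(uncovered(child))
    return gaps

tree = AttackNode("bypass face verification", children=[
    AttackNode("presentation attack", children=[
        AttackNode("printed photo", detectors=["PAD"]),
        AttackNode("replay on screen", detectors=["PAD"]),
    ]),
    AttackNode("injection attack", children=[
        AttackNode("virtual camera feed", detectors=["IAD"]),
        AttackNode("deepfake stream", detectors=[]),   # a gap to close
    ]),
])

print(uncovered(tree))   # ['deepfake stream']
```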
As you can see, the tactics to fight injection attacks are far removed from the more forensic “liveness” procedures such as detecting whether a presented finger is from a living, breathing human.
Presentation attack detection can only go so far.
Injection attack detection is also necessary.
So if you’re a company guarding against spoofing, you need someone who can create content, proposals, and analysis that can address both biometric and non-biometric factors.
A little (just a little) behind the scenes of why I write what I write.
What does TPRM mean?
I was prompted to write my WYSASOA post when I encountered a bunch of pages on a website that referred to TPRM, with no explanation.
Now if I had gone to the home page of that website, I would have seen text that said “Third Party Risk Management (TPRM).”
But I didn’t go to the home page. I entered the website via another page and therefore never saw the home page explanation of what the company meant by the acronym.
Unless you absolutely know that everybody in the world agrees on your acronym definition, always spell out the first instance of an acronym in every piece of content. So if you mention that acronym on 10 web pages, spell it out on all 10 of them.
That’s all I wanted to say…
How is NIST related to TPRM?
…I lied.
Because now I assume you want to know what Third Party Risk Management (TPRM) actually is.
Let’s go to my esteemed friends at the National Institute of Standards and Technology, or NIST.
When companies began extensively outsourcing and globalizing the supply chain in the 1980’s and 1990’s, they did so without understanding the risks suppliers posed. Lack of supplier attention to quality management could compromise the brand. Lack of physical or cybersecurity at supplier sites could result in a breach of corporate data systems or product corruption. Over time, companies have begun implementing vendor management systems – ranging from basic, paper-based approaches to highly sophisticated software solutions and physical audits – to assess and mitigate vendor risks to the supply chain.
Because if MegaCorp is sharing data with WidgetCorp, and WidgetCorp is breached, MegaCorp is screwed. So MegaCorp has to reduce the risk that comes from dealing with breachable firms.
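In its most basic form, that risk reduction is just a third-party register: who the vendors are, what data they touch, and when they were last assessed. Here’s a toy sketch; the fields, scores, and review thresholds are my own, not NIST’s or anyone else’s framework.

```python
# Toy third-party risk register (fields and thresholds are my own invention,
# not NIST's or any TPRM vendor's). Flags vendors that need a fresh review.
from dataclasses import dataclass
from datetime import date

@dataclass
class Vendor:
    name: str
    shares_pii: bool
    last_assessed: date
    risk_score: int          # 1 (low) .. 5 (critical)

def needs_review(v: Vendor, today: date, max_age_days: int = 365) -> bool:
    stale = (today - v.last_assessed).days > max_age_days
    return stale or (v.shares_pii and v.risk_score >= 4)

vendors = [
    Vendor("WidgetCorp", shares_pii=True, last_assessed=date(2023, 1, 15), risk_score=4),
    Vendor("PaperClips R Us", shares_pii=False, last_assessed=date(2024, 11, 1), risk_score=1),
]
print([v.name for v in vendors if needs_review(v, date(2025, 2, 20))])
# ['WidgetCorp']
```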
The TPRM problem
And it’s not just my fictional MegaCorp. Cybersecurity risks are obviously a problem. I only had to go back to January 26 to find a recent example.
Bank of America has confirmed a data breach involving a third-party software provider that led to the exposure of sensitive customer data.
What Happened: According to a filing earlier this month, an unidentified third-party software provider discovered unauthorized access to its systems in October. The breach did not directly impact Bank of America’s systems, but the data of at least 414 customers is now at risk.
The breach pertains to mortgage loans and the compromised data includes customers’ names, social security numbers, addresses, phone numbers, passport numbers, and loan numbers.
Note that the problem didn’t occur at Bank of America’s systems, but at the systems of some other company.
Manage your TPRM…now that you know what I mean by the acronym.