The United States’ 16 Critical Infrastructure Sectors

I was working with these sectors back when I was at MorphoTrak.

“There are 16 critical infrastructure sectors whose assets, systems, and networks, whether physical or virtual, are considered so vital to the United States that their incapacitation or destruction would have a debilitating effect on security, national economic security, national public health or safety, or any combination thereof. Presidential Policy Directive 21 (PPD-21): Critical Infrastructure Security and Resilience advances a national policy to strengthen and maintain secure, functioning, and resilient critical infrastructure. This directive supersedes Homeland Security Presidential Directive 7.”

The sectors are:

  • Chemical
  • Commercial Facilities
  • Communications
  • Critical Manufacturing
  • Dams
  • Defense Industrial Base
  • Emergency Services
  • Energy
  • Financial Services
  • Food and Agriculture
  • Government Facilities
  • Healthcare and Public Health
  • Information Technology
  • Nuclear Reactors, Materials, and Waste
  • Transportation Systems
  • Water and Wastewater Systems

See:

https://www.cisa.gov/topics/critical-infrastructure-security-and-resilience/critical-infrastructure-sectors

https://www.cisa.gov/resources-tools/resources/presidential-policy-directive-ppd-21-critical-infrastructure-security-and

Ambient Clinical Intelligence in Healthcare

Another topic raised by Nadaa Taiyab during today’s SoCal Tech Forum meeting was ambient clinical intelligence. See her comments on how AI benefits diametrically opposing healthcare entities here.

There are three ways that a health professional can create records during, and/or after, a patient visit.

  • Typing. The professional has their hands on the keyboard during the meeting, which doesn’t make a good impression on the patient.
  • Structured dictation. The professional can actually look at the patient, but the dictation is unnatural. As Bredebot characterizes it: “where you have to speak specific commands like ‘Period’ or ‘New Paragraph.’”
  • Ambient clinical intelligence.

Here is how DeepScribe defines ambient clinical intelligence:

“Ambient clinical intelligence, or ACI, is advanced, AI-powered voice recognizing technology that quietly listens in on clinical encounters and aids the medical documentation process by automating medical transcription and note taking. This all-encompassing technology has the ability to totally transform the lives of clinicians, and thus healthcare on every level.”

Like any generative AI model, ambient clinical intelligence has to provide my four standard benefits: accuracy, ease of use, security, and speed.

  • Accuracy is critically important in any health application, since inaccurate coding could literally affect life or death.
  • Ease of use is of course the whole point of ambient clinical intelligence, since it replaces harder-to-use methods.
  • Security and privacy are necessary when dealing with personal health information (PHI).
  • Speed is essential also. As Taiyab noted elsewhere in her talk, the workload is increasing while the workforce is not growing as rapidly.

If the medical professional and the patient benefit from the accuracy, ease of use, security, and speed of ambient clinical intelligence, we all win.

Google Gemini.

Health AI Battle Bots

In this morning’s SoCal Tech Forum meeting, Nadaa Taiyab noted that generative AI can aid both sides of healthcare funding battles.

  • Medical providers and patients benefit when AI speeds authorization approvals.
  • Insurance companies benefit when AI speeds authorization denials.

Who will win?

(Also see my related post on ambient clinical intelligence in healthcare.)

Justin Welsh’s Purple Squirrel Story

While I talk about wildebeests, iguanas, wombats, and friction ridge-equipped koalas, Justin Welsh talks about squirrels.

Purple squirrels.

Google Gemini.

Welsh explains what a purple squirrel is:

“A purple squirrel is a candidate so rare and perfectly matched to what you need that finding one feels impossible. Someone who checks every single box, including boxes you didn’t even know you cared about.”

Welsh then provides an example of a purple squirrel: Sagar Patel, who worked for him at PatientPop.

On paper pyramids

At the time PatientPop had less than $40,000 in annual revenue, so it didn’t have a huge marketing department. It didn’t even have Bredemarket as a product marketing consultant because Bredemarket didn’t exist yet. And anyway, at the time I knew next to nothing about PatientPop’s healthcare-centered hungry people, physicians who needed to attract prospects and clients via then-current search engine optimization (SEO) techniques.

Google Gemini.

Patel could have launched into a complex, feature-laden SEO discussion, but his target physicians would have responded, “So what?” Doctors want to doctor, not obsess over choosing long-tail keywords. They want to understand the benefits of a solution immediately.

So Patel, without the resources of a marketing department, took another approach.

“So Sagar grabbed some notebook paper and drew five sides of a pyramid. He labeled each one, describing his ‘5 sides of local SEO for healthcare providers,’ and then taped them all together.

“He made himself a little paper pyramid to use in his sales pitches.”

Google Gemini. My prompt asked Nano Banana to create a “realistic” picture.

Was Patel’s paper pyramid an effective sales tool for PatientPop? Read Welsh’s article to find out.

What’s your paper pyramid?

Too many companies wait months for the perfect marketing solution instead of doing something NOW and refining it later.

Bredemarket’s different. I ask, then I act.

Once I’ve set my compass, I get my clients a draft within days. Last week alone I turned out drafts for two clients, moving them forward so the content is available to their prospects and clients.

With my suggested schedule for short content (three-day drafts, three-day reviews, three-day redrafts), your new content can become your online “secret salesperson” within two weeks or less.

Don’t believe me? This post alone is chock-full of links to other Bredemarket posts and Bredemarket pages, all of which are functioning as “secret salespeople” for me every single day.

If you want secret salespeople to work for you, talk to me and we’ll devise a plan to improve your product marketing awareness RIGHT NOW.

This is Why You Should Avoid Acronyms

So an article came across my screen that begins with the words “Minnesota DHS.”

The full title? “Minnesota DHS Reports Access-Related Data Breach.”

Now anyone reading that article over the weekend was probably very confused, since the death of Alex Pretti isn’t exactly a DATA breach.

And, of course, Minnesota doesn’t have a “department of homeland security.”

It does, however, have a Department of Human Services…and THAT was what was breached.

“A single user inappropriately accessed private data within the Minnesota Department of Human Services (DHS) ecosystem, potentially impacting 303,965 individuals, officials report.”

This was not a hack per se, but a case in which a legitimate person accessed something they shouldn’t have accessed. Certainly a breach, and the person’s access was terminated.

But nobody died.

Humans and Fraudulently Inaccurate Medical Coding

You know what the problem is with these AI medical bots? They hallucinate and do inaccurate stuff. When you use humans for your medical needs, they’re gonna get it right.

Um…right. Unless the humans are committing fraud.

Google Gemini.

The company that replaced a steel mill with a hospital is in a bit of trouble with the U.S. Department of Justice, in an action started under the Biden Administration and concluded under the Trump Administration.

“Affiliates of Kaiser Permanente, an integrated healthcare consortium headquartered in Oakland, California, have agreed to pay $556 million to resolve allegations that they violated the False Claims Act by submitting invalid diagnosis codes for their Medicare Advantage Plan enrollees in order to receive higher payments from the government….

“Specifically, the United States alleged that Kaiser systematically pressured its physicians to alter medical records after patient visits to add diagnoses that the physicians had not considered or addressed at those visits, in violation of [Centers for Medicare & Medicaid Services (CMS)] rules.”

Now of course you can code a bot to perform fraud, but it’s easier to induce a human to do it.

Avoiding Bot Medical Malpractice Via…Standards!

Back in the good old days, Dr. Welby’s word was law and was unquestioned.

Then we started to buy medical advice books and researched things ourselves.

Later we started to access peer-reviewed consumer medical websites and researched things ourselves.

Then we obtained our medical advice via late night TV commercials and Internet advertisements.

OK, this one’s a parody, but you know the real ones I’m talking about. Silver Solution?

Finally, we turned to generative AI to answer our medical questions.

With potentially catastrophic results.

So how do we fix this?

The U.S. National Institute of Standards and Technology (NIST) says that we should…drumroll…adopt standards.

Which is what you’d expect a standards-based government agency to say.

But since I happen to like NIST, I’ll listen to its argument.

“One way AI can prove its trustworthiness is by demonstrating its correctness. If you’ve ever had a generative AI tool confidently give you the wrong answer to a question, you probably appreciate why this is important. If an AI tool says a patient has cancer, the doctor and patient need to know the odds that the AI is right or wrong.

“Another issue is reliability, particularly of the datasets AI tools rely on for information. Just as a hacker can inject a virus into a computer network, someone could intentionally infect an AI dataset to make it work nefariously.”

So we know the risks, but how do we mitigate them?

“Like all technology, AI comes with risks that should be considered and managed. Learn about how NIST is helping to manage those risks with our AI Risk Management Framework. This free tool is recommended for use by AI users, including doctors and hospitals, to help them reap the benefits of AI while also managing the risks.”

Who or What Requires Authorization?

There are many definitions of authorization, but the one in RFC 4949 has the benefit of brevity.

“An approval that is granted to a system entity to access a system resource.”

Non-person Entities Require Authorization

Note that it uses the word “entity.” It does NOT use the word “person.” Because the entity requiring authorization may be a non-person entity.

I made this point in a previous post about attribute-based access control (ABAC), when I quoted from the 2014 version of NIST Special Publication 800-162. Incidentally, if you wonder why I use the acronym NPE (non-person entity) rather than the acronym NHI (non-human identity), this is why.

“A subject is a human user or NPE, such as a device that issues access requests to perform operations on objects. Subjects are assigned one or more attributes.”
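The subject-attribute model in that quote can be sketched in a few lines of code. This is a minimal, hypothetical illustration (the names, attributes, and the single policy rule are all invented for the example, not taken from SP 800-162 or any vendor product); the one point it demonstrates is that the same attribute check applies whether the subject is a human or an NPE.

```python
# Minimal ABAC sketch (hypothetical names): a subject, human or NPE,
# carries attributes, and a policy grants or denies an operation on an
# object based on those attributes rather than on who the subject is.

from dataclasses import dataclass, field

@dataclass
class Subject:
    name: str
    attributes: dict = field(default_factory=dict)  # e.g. {"role": "physician"}

@dataclass
class Obj:
    name: str
    attributes: dict = field(default_factory=dict)  # e.g. {"classification": "PHI"}

def authorize(subject: Subject, operation: str, obj: Obj) -> bool:
    """Toy policy: PHI records may be read only by subjects whose
    'role' attribute is 'physician'. Everything else is denied."""
    if obj.attributes.get("classification") == "PHI" and operation == "read":
        return subject.attributes.get("role") == "physician"
    return False  # default deny

doctor = Subject("dr_chen", {"role": "physician"})
backup_bot = Subject("nightly-backup", {"role": "service"})  # an NPE
record = Obj("patient-1234", {"classification": "PHI"})

print(authorize(doctor, "read", record))      # True
print(authorize(backup_bot, "read", record))  # False: the NPE faces the same check
```

Note that the NPE is evaluated by exactly the same `authorize` call as the human; nothing in the policy cares whether the subject has a pulse, only what attributes it carries.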

If you have a process to authorize people, but don’t have a process to authorize bots, you have a problem. Matthew Romero, formerly of Veza, has written about the lack of authorization for non-human identities.

“Unlike human users, NHIs operate without direct oversight or interactive authentication. Some run continuously, using static credentials without safeguards like multi-factor authentication (MFA). Because most NHIs are assigned elevated permissions automatically, they’re often more vulnerable than human accounts—and more attractive targets for attackers. 

“When organizations fail to monitor or decommission them, however, these identities can linger unnoticed, creating easy entry points for cyber threats.”

Veza recommends that people use a product that monitors authorizations for both human and non-human identities. And by the most amazing coincidence, Veza offers such a product.

People Require Authorization

And of course people require authorization too.

It’s not enough to identify or authenticate a person or NPE. Once that is done, you need to confirm that this particular person has the authorization to…launch a nuclear bomb. Or whatever.

Your Customers Require Information on Your Authorization Solution

If your company offers an authorization solution, and you need Bredemarket’s content, proposal, or analysis consulting help, talk to me.

The Government Wants You To Work for A Company, Not Yourself

I’m sure you’ve heard the empowerment gurus on LinkedIn who say that people working for companies are idiots. Admittedly it seems that too many companies don’t care about their employees and will jettison them at a moment’s notice.

So what do the empowerment gurus recommend? They tell people to take control of their own destiny and work for themselves. Don’t use your talents to fatten some executive’s stock options.

Google Gemini.

However, those of us in the United States face a huge barrier to that.

Healthcare.

Unless a solopreneur’s spouse has employer-subsidized healthcare, the financial healthcare penalty for working for yourself is huge. From an individual perspective, anyway.

“The average annual premium for employer-sponsored family coverage totaled about $27,000 in 2025, according to [the Kaiser Family Foundation]. This is coverage for a family of four.

“But workers don’t pay the full sum. They contributed just $6,850 — about 25% — toward the total premium, according to KFF. Employers subsidized the rest, paying about $20,000, on average.

“By comparison, if the enhanced ACA subsidies expire next year, the average family of four earning $130,000 would pay the full, unsubsidized premium for marketplace coverage.

“Their annual insurance premiums would jump to about $23,900, more than double the subsidized cost of $11,050 — an increase of almost $12,900, according to the Center on Budget and Policy Priorities.”

Google Gemini.

So how do those who oppose Communist subsidies propose to solve ACA healthcare costs?

By providing people with an annual health savings account funding of…checks notes…$1,500.

Perhaps I’m deprived because of my 20th century math education, but last I checked $1,500 in funding is less than $12,900 in losses.
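For the 20th-century-math crowd, a quick back-of-the-envelope check using only the figures quoted above (the variable names are mine; the dollar amounts are KFF’s and CBPP’s):

```python
# Back-of-the-envelope arithmetic from the figures quoted in this post.
unsubsidized_premium = 23_900   # projected full marketplace premium, family of four
subsidized_premium   = 11_050   # current subsidized cost
hsa_funding          =  1_500   # proposed annual HSA funding

increase = unsubsidized_premium - subsidized_premium
print(increase)                 # 12850 -- the "almost $12,900" increase
print(increase - hsa_funding)   # 11350 -- the gap the $1,500 never covers
```

However you round it, $1,500 in funding leaves the family more than $11,000 short of the premium increase.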

People who are on COBRA, or a similar program such as Cal-COBRA, experience similar sticker shock.

So my advice to people is to do one or both of the following:

  • Get employer-subsidized healthcare.
  • Marry someone with employer-subsidized healthcare.

Detecting Deceptively Authoritative Deepfakes

I referenced this on one of my LinkedIn showcase pages earlier this week, but I need to say more on it.

We all agree that deepfakes can (sometimes) result in bad things, but some deepfakes present particular dangers that may not be detected. Let’s look at how deepfakes can harm the healthcare and legal professions.

Arielle Waldman of Dark Reading pointed out these dangers in her post “Sora 2 Makes Videos So Believable, Reality Checks Are Required.”

But I don’t want to talk about the general issues with believable AI (whether it’s Sora 2, Nano Banana Pro, or something else). I want to home in on this:

“Sora 2 security risks will affect an array of industries, primarily the legal and healthcare sectors. AI generated evidence continues to pose challenges for lawyers and judges because it’s difficult to distinguish between reality and illusion. And deepfakes could affect healthcare, where many benefits are doled out virtually, including appointments and consultations.”

Actually these are two separate issues, and I’ll deal with them both.

Health Deepfakes

It’s bad enough that people can access your health records just by knowing your name and birthdate. But what happens when your medical practitioner sends you a telehealth appointment link…except your medical practitioner didn’t send it?

Grok.

So here you are, sharing your protected health information with…who exactly?

And once you realize you’ve been duped, you turn to a lawyer.

This one is not a deepfake. From YouTube.

Or you think you turn to a lawyer.

Legal Deepfakes

First off, is that lawyer truly a lawyer? And are you speaking to the lawyer to whom you think you’re speaking?

Not Johnnie Cochran.

And even if you are, when the lawyer gathers information for the case, who knows if it’s real? And I’m not talking about the lawyers who cited hallucinated legal decisions. I’m talking about the lawyers whose eDiscovery platforms gather faked evidence.

Liquor store owner.

The detection of deepfakes is currently concentrated in particular industries, such as financial services. But many more industries require this detection.