Artificial Intelligence and Healthcare, A Qualified View

As I’ve noted before, healthcare is a pioneering user of artificial intelligence, although (hopefully) under robust controls to maintain accuracy and preserve HIPAA-level privacy.

And a group of investors poured $125 million into Qualified Health to advance AI in healthcare.

Why?

“We are living through a generational shift, one where AI doesn’t just augment how organizations work but fundamentally transforms them from the inside out,” said Mohamad Makhzoumi, Co-CEO of NEA, who will join Qualified Health’s Board of Directors in conjunction with the financing. “From NEA’s nearly five decades of company-building experience, we believe the organizations shaping the next era of healthcare innovation will be those helping health systems reimagine every administrative and clinical workflow from the ground up, and Qualified Health is exactly that company. We are thrilled to lead this financing and to partner with Justin and team to accelerate healthcare’s AI transformation and shape the future of healthcare enterprises across the country.”

“Health systems today are operating under extraordinary pressure, from rising labor costs to tightening reimbursement, while managing increasing complexity in patient care,” said Jared Kesselheim, MD, Managing Partner at Transformation Capital. “What stood out to us about Qualified Health is that the team approaches this work as medical care specialists, with a deep understanding of the realities health systems face every day. That perspective allows them to identify where AI can create meaningful clinical and operational impact. We’re excited to partner with Justin and the Qualified Health team as they help leading health systems navigate this next phase of healthcare.”

Because a Marketer Needs to Eat: Translating OpenAPI Version Difference Explanations into Marketing-Speak

To better understand two of the projects for a Bredemarket client, I asked Google Gemini for an explanation of the differences between OpenAPI versions 3.1.1, 3.1.2, and 3.2.

Google Gemini provided me with that explanation.

Then I supplied the following prompt:

Rewrite the above so a marketer can understand it.

Here is part of what Google Gemini provided to me.

🏗️ OpenAPI 3.1.1 & 3.1.2: The “Under the Hood” Fixes

These versions were like a restaurant updating its internal plumbing. They didn’t change what was on the menu, but they made sure the kitchen (the code) followed the latest global safety standards (JSON Schema).

  • The Goal: Accuracy and stability.
  • The Benefit: It ensures that when a developer tries to connect to your service, they don’t run into “grammar” errors in the documentation.

✨ OpenAPI 3.2: The “UX & Branding” Upgrade

Released recently (late 2025), this version is all about making your API easier to sell and easier to use.

1. Organized Menus (Hierarchical Tags)

Before, your API list was just one long, flat list of features.

  • Old way: A messy “junk drawer” of links.
  • 3.2 way: You can now create folders and sub-categories (e.g., Payments > Refunds > International). This makes your documentation look professional and much easier for customers to navigate.

2. Better Support for AI and Live Data (Streaming)

If your product uses AI or real-time data (like a live stock ticker or a ChatGPT-style chat), 3.2 is a game-changer. It has a specific way to describe “Streaming” data so developers know exactly what to expect from your “live” features.

3. The “Smart Search” (The QUERY Method)

In older versions, developers often had to “hack” the system to do complex searches. 3.2 introduces a dedicated Search button (the QUERY method) that is cleaner, safer, and follows modern web standards.

4. IoT & Smart Device Support

If your product works on a Smart TV, an Apple Watch, or an IoT device, 3.2 finally adds official support for “Device Pairing” (that thing where you enter a code on a website to log in to your TV).
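To make the hierarchical-tags point concrete, here is a small sketch of my own (not Gemini's) of what an OpenAPI 3.2 description with nested tags might look like, built as a Python dictionary. The `parent` field on a Tag Object is the 3.2 addition; the API name, paths, and tag names are invented for illustration.

```python
# Sketch of an OpenAPI 3.2 description using hierarchical tags.
# The API title, paths, and tag names are hypothetical examples.
spec = {
    "openapi": "3.2.0",
    "info": {"title": "Example Payments API", "version": "1.0.0"},
    "tags": [
        {"name": "Payments", "summary": "All payment operations"},
        # In 3.2, a tag can declare a parent, so documentation tools can
        # render Payments > Refunds > International instead of a flat list.
        {"name": "Refunds", "parent": "Payments"},
        {"name": "International", "parent": "Refunds"},
    ],
    "paths": {
        "/refunds/international": {
            "get": {
                "tags": ["International"],
                "responses": {"200": {"description": "OK"}},
            }
        }
    },
}

def tag_path(spec, name):
    """Walk parent links to build the breadcrumb for a tag."""
    tags = {t["name"]: t for t in spec["tags"]}
    chain = []
    while name:
        chain.append(name)
        name = tags[name].get("parent")
    return " > ".join(reversed(chain))

print(tag_path(spec, "International"))  # Payments > Refunds > International
```

The "folders and sub-categories" Gemini described are exactly this breadcrumb: a documentation renderer follows the `parent` links to group operations.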

Hey, it helped me. Marketers need to eat, and writers understand grammar.

You CAN Modernize…But Should You?

In the past, I have said:

“[T]he technology is easy. The business part is the difficult part.”

But Chris Burt of Biometric Update phrased it more succinctly:

“[P]olicy chases modernization”

As Burt notes, examples of policy chasing modernization include:

  • Digital sovereignty, a topic of discussion with everyone from ID4Africa to an organization called the World Ethical Data Foundation. (As an aside, a Bredemarket client and I were recently discussing the pros and cons of managing digital identities in the cloud vs. peer-to-peer synchronization.)
  • Cybersecurity and digital identity, a topic of discussion in government (the White House, NIST) and industry (Jordan Burris of Socure).
  • Other topics, including police facial recognition policy. (Hmm…I recall that both government and vendor biometric policies were the topic of a Biometric Update guest article last year.)

All of you recall Pandora’s Box. I’ve used the story multiple times, including when discussing my creation of Bredebot and its nearly-instantaneous hallucinations. Yes, I do have “policies” regarding this “modernization,” including full disclosure.

But are policies enough?

Returning to Lattice Identity

The last time I delved into lattices, it was in connection with the NIST FIPS 204 Module-Lattice-Based Digital Signature Standard. To understand why the standard is lattice-based, I turned to NordVPN:

“A lattice is a hierarchical structure that consists of levels, each representing a set of access rights. The levels are ordered based on the level of access they grant, from more restrictive to more permissive.”

In essence, the lattice structure allows more elaborate access rights.
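As a rough sketch of my own (not NordVPN's definition, and nothing to do with the cryptographic lattices in FIPS 204), an access lattice can be modeled as levels plus compartments, where one label "dominates" another when its level is at least as high and its compartments are a superset:

```python
# Toy access lattice: a label is (level, set_of_compartments).
# One label dominates another when its level is at least as high AND
# its compartment set is a superset -- the classic Bell-LaPadula ordering.
# Level names and compartments are invented for illustration.
LEVELS = {"public": 0, "internal": 1, "secret": 2}

def dominates(subject, obj):
    s_level, s_comps = subject
    o_level, o_comps = obj
    return LEVELS[s_level] >= LEVELS[o_level] and s_comps >= o_comps

analyst = ("secret", {"finance", "hr"})
doc = ("internal", {"finance"})
print(dominates(analyst, doc))  # True: the analyst may read the document
print(dominates(doc, analyst))  # False: the ordering is not symmetric
```

Because two labels can be incomparable (neither dominates the other), this structure is richer than a simple top-to-bottom hierarchy.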

This article (“Lattice-Based Identity and Access Management for AI Agents”) discusses lattices more. Well, not explicitly; the word “lattice” only appears in the title. But here is the article’s main point:

“We are finally moving away from those clunky, “if-this-then-that” systems. The shift to deep learning means agents can actually reason through a mess instead of just crashing when a customer uses a slang word or a shipping invoice is slightly blurry.”

It then says:

“Deep learning changes this because it uses neural networks to understand intent, not just keywords.”

Hmm…intent? Sounds a little fuzzy…or maybe it’s just me.

But it appears that we sometimes don’t care about the intent of AI agents.

“If you gave a new employee the keys to your entire office and every filing cabinet on day one, you’d be sweating, right? Yet, that is exactly what many companies do with ai agents by just slapping an api key on them and hoping for the best.”

This is not recommended. See my prior post on attribute-based access control, which led me to focus more on non-person entities (non-human identities).

As should we all.
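In code terms, not handing an agent "the keys to your entire office" might look like this hypothetical attribute check, where an agent's key carries attributes and every request is evaluated against them. All names and attributes here are invented for illustration; this is a sketch of the idea, not any product's API.

```python
# Hypothetical sketch: instead of one all-powerful API key, each AI agent
# carries attributes, and every request is checked against a policy.
# Agent names, actions, and attributes are invented for illustration.
def permitted(agent_attrs, action, resource_attrs):
    # The agent may act only within its own department and only with
    # actions it was explicitly granted -- least privilege, per request.
    return (
        action in agent_attrs["allowed_actions"]
        and resource_attrs["department"] == agent_attrs["department"]
    )

invoice_bot = {"department": "billing", "allowed_actions": {"read", "annotate"}}
invoice = {"department": "billing"}
payroll_file = {"department": "hr"}

print(permitted(invoice_bot, "read", invoice))       # True
print(permitted(invoice_bot, "read", payroll_file))  # False: wrong department
print(permitted(invoice_bot, "delete", invoice))     # False: action not granted
```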

Modem Replacement Part Three

March 27 update here.

Also see part one and part two.

For a day and a half my home Internet worked great.

Sunday morning my Wi-Fi started failing again, and my ISP is sending a second technician out to work on “signal issues.”

The remainder of this post is written by Google Gemini (but not as Bredebot).

  • I do not have the knowledge to detect if Gemini is hallucinating, so if anything seems off please comment.
  • Note how Gemini adopts a reassuring tone at the beginning of this passage, having been trained to know that someone asking such a technical question may be frustrated. It fakes emotion well.

Google Gemini on fiber network “signal issues”

(Written by Google Gemini)

It can be incredibly frustrating when your internet isn’t working and the fix feels like it’s being kicked down the road. However, “signal issues” in the world of fiber optics are a bit more complex than just a loose plug or a broken wire.

Here is the breakdown of what is happening behind the scenes and why a “balance” is necessary.

What are “Signal Issues”?

In a fiber network, data is transmitted as pulses of light through thin strands of glass. A “signal issue” occurs when that light doesn’t reach your router with the correct intensity or clarity.  

Think of it like trying to read a book by flashlight from across a dark room. If the light is too dim (loss), too bright (saturation), or flickering (interference), you can’t read the words.

Common causes for signal degradation:

  • Micro-bends: A fiber cable that is pinched or bent too sharply.  
  • Contamination: A single microscopic speck of dust on a connector can block the light path.
  • Attenuation: The natural loss of light intensity as it travels over long distances or through splitters.  
  • Splice Failures: Where two fibers are fused together, a “cold” or imperfect weld can cause light to scatter.

Why does the line need to be “Balanced”?

Fiber networks (specifically GPON or Passive Optical Networks) are shared. One main fiber from the provider’s hub is split into many smaller branches that go to different houses.  

“Balancing” is the process of ensuring that the light levels across these branches are within a specific “sweet spot.”

  • Signal too weak: The light is too faint for your ONT (Optical Network Terminal) to distinguish data from background noise.
  • Signal too strong: The light “blinds” the receiver (saturation), much like a camera flash being too close to your eyes.
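To make the "sweet spot" idea concrete, here is my own back-of-the-envelope optical link budget for a shared PON-style fiber run. The dB figures are typical textbook values, not measurements from my ISP, and the receiver window is an illustrative range, not any vendor's specification.

```python
# Back-of-the-envelope optical link budget for a shared (PON-style) fiber run.
# All dB figures are illustrative textbook values, not measurements.
TX_POWER_DBM = 3.0           # launch power at the provider's hub
FIBER_LOSS_DB_PER_KM = 0.35  # attenuation of the glass itself
SPLITTER_LOSS_DB = 17.5      # a 1:32 splitter costs roughly this much
CONNECTOR_LOSS_DB = 0.5      # per mated connector pair

def received_power(km, connectors):
    """Power (dBm) arriving at the home's ONT after all losses."""
    loss = (FIBER_LOSS_DB_PER_KM * km
            + SPLITTER_LOSS_DB
            + CONNECTOR_LOSS_DB * connectors)
    return TX_POWER_DBM - loss

# Many receivers want something like -8 to -27 dBm: stronger saturates
# ("blinds") the receiver, weaker drowns the data in noise.
rx = received_power(km=10, connectors=4)
print(f"{rx:.2f} dBm")   # -20.00 dBm
print(-27 <= rx <= -8)   # True: inside the illustrative window
```

A pinched cable, a dirty connector, or a bad splice simply adds more dB of loss to this sum, which is why one extra flaw anywhere along the run can push a house out of the window while its neighbors stay inside it.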

The “Second Technician” logic

The first technician usually checks the “drop” (the line from the street to your house). If they find the signal is bad before it even hits your property, the issue is further “upstream”—likely at a splitter cabinet or a main distribution frame.

Balancing requires looking at the whole neighborhood’s light levels to ensure that fixing your signal doesn’t accidentally “starve” your neighbor’s connection or blow out their receiver.

Why it takes two visits

Often, the first tech is a residential installer who handles home wiring. The second tech is usually a network or “outside plant” technician. They have specialized equipment (like an OTDR—Optical Time-Domain Reflectometer) to “see” miles down the glass and pinpoint exactly where the light is leaking. 

What If All The Bots Suddenly Disappeared?

As technology ages it sometimes becomes less essential.

Sometimes not. A wheel is just as useful as it has ever been.

And a few people still find punch cards and floppies useful.

But some technologies are simply gone and forgotten.

But what if the technology’s disappearance is not due to age, but to simple unavailability?

For economic, political, or cultural reasons, we could wake up one morning and an entire technology could no longer exist.

How would we adjust?


Why Biometric Marketing Experience Beats Biometric Marketing Immaturity

I know that the experts say that “too much knowledge is actually bad in tech.” But based upon what I just saw from an (unnamed) identity verification company, I assert that too little knowledge is much worse.

As a biometric product marketing expert and biometric product marketing writer, I pay a lot of attention to how identity verification companies and other biometric and identity companies market themselves. Many companies know how to speak to their prospects…and many don’t.

Take a particular company, which I will not name. Here is the “marketing” from this company.

  • We have funding!
  • We offer lower pricing than selected competitors!
  • We claim high facial recognition accuracy but don’t publish our NIST FRTE results! (While the company claims to author its technology, the company name does not appear in either the NIST FRTE 1:1 or NIST FRTE 1:N results.)
  • We claim liveness detection (presentation attack detection) but don’t publish any confirmation letters! (Again, I could not find the company name on the confirmation letter lists from BixeLab or iBeta.)

So what is the difference between this company and the other 100+ identity verification companies…many of which explicitly state their benefits, trumpet their NIST FRTE performance, and tout their third-party liveness detection confirmation letters?

If you claim great accuracy and great liveness detection but can’t support the claims with independent third-party verification, your claims amount to “so what?” Prove your claims.

Now I’m sure I could help this company. Even if they have none of the certifications or confirmations I mentioned, I could at least get the company to focus on meaningful differentiation and meaningful benefits. But there’s no need to even craft a Bredemarket pitch to the company, since the only marketer on staff is an intern who is indifferent to strategy.


Because while many companies assert that all they need is a salesperson, an engineer, an African data labeler, and someone to run the generative AI for everything else…there are dozens of competitors doing the exact same thing.

But some aren’t. Some identity/biometric companies are paying attention to their long-term viability, and are creating content, proposals, and analyses that support that viability.

Take a look at your company’s marketing. Does it speak to prospects? Does it prove that you will meet your customers’ needs? Or does it sound like every other company that’s saying “We use AI. Trust us”?

And if YOUR company needs experienced help in conveying customer-focused benefits to your prospects…contact Bredemarket. I’ve delivered meaningful biometric materials to two dozen companies over the years. And yes, I have experience. Let me use it for your advantage.

What About the Data Labelers Themselves?

Earlier this month I discussed a class action lawsuit, filed in the United States by people who believe their privacy is being violated by the use of Kenyan data labelers to view their video output.

And the data labelers themselves are not happy, according to a 404 Media article “AI is African Intelligence.”

Before I get to the Kenyans, let’s talk about the reality of AI. No, AI output is not 100% generated by computers alone. There is often human review.

In some cases human review is understandable. There was a recent brouhaha when it was publicly highlighted that when a Waymo vehicle runs into a problematic situation, Waymo calls upon a human reviewer to intervene. People’s anger about this is pointless: would they prefer that Waymo NOT call upon a human reviewer, and just let the car do whatever?

Back to Kenya and the Data Labelers Association’s (DLA) reports of what data labelers actually do.

“Every day, Michael Geoffrey Asia spent eight consecutive hours at his laptop in Kenya staring at porn, annotating what was happening in every frame for an AI data labeling company. When he was done with his shift, he started his second job as the human labor behind AI sex bots, sexting with real lonely people he suspected were in the United States. His boss was an algorithm that told him to flit in and out of different personas.”

I’ve previously seen reports about people in the U.S. reviewing shocking material for social media companies, but it’s a heck of a lot cheaper to outsource the work abroad.

Unless the U.S. Government insists on bringing data labeling work to the United States, in the same way that it wants to bring call center jobs back here.

I do offer one caution: there is a lot of data labeling work that is NOT pornographic. In the identity verification industry, data labelers review real and fake faces, real and fake documents, and the like to train AI models. Such work does not have the emotional stress that you get from watching certain videos.

But it’s still hard work.