Artificial Intelligence and Healthcare, A Qualified View

As I’ve noted before, healthcare is a pioneering user of artificial intelligence, although (hopefully) under robust controls to maintain accuracy and preserve HIPAA-level privacy.

And a number of investors poured $125 million into Qualified Health to advance AI in healthcare.

Why?

“We are living through a generational shift, one where AI doesn’t just augment how organizations work but fundamentally transforms them from the inside out,” said Mohamad Makhzoumi, Co-CEO of NEA, who will join Qualified Health’s Board of Directors in conjunction with the financing. “From NEA’s nearly five decades of company-building experience, we believe the organizations shaping the next era of healthcare innovation will be those helping health systems reimagine every administrative and clinical workflow from the ground up, and Qualified Health is exactly that company. We are thrilled to lead this financing and to partner with Justin and team to accelerate healthcare’s AI transformation and shape the future of healthcare enterprises across the country.”

“Health systems today are operating under extraordinary pressure, from rising labor costs to tightening reimbursement, while managing increasing complexity in patient care,” said Jared Kesselheim, MD, Managing Partner at Transformation Capital. “What stood out to us about Qualified Health is that the team approaches this work as medical care specialists, with a deep understanding of the realities health systems face every day. That perspective allows them to identify where AI can create meaningful clinical and operational impact. We’re excited to partner with Justin and the Qualified Health team as they help leading health systems navigate this next phase of healthcare.”

Because a Marketer Needs to Eat: Translating OpenAPI Version Difference Explanations into Marketing-Speak

To better understand two of the projects for a Bredemarket client, I asked Google Gemini for an explanation of the differences between OpenAPI versions 3.1.1, 3.1.2, and 3.2.

Google Gemini provided me with that explanation.

Then I supplied the following prompt:

Rewrite the above so a marketer can understand it.

Here is part of what Google Gemini provided to me.

🏗️ OpenAPI 3.1.1 & 3.1.2: The “Under the Hood” Fixes

These versions were like a restaurant updating its internal plumbing. They didn’t change what was on the menu, but they made sure the kitchen (the code) followed the latest global safety standards (JSON Schema).

  • The Goal: Accuracy and stability.
  • The Benefit: It ensures that when a developer tries to connect to your service, they don’t run into “grammar” errors in the documentation.

✨ OpenAPI 3.2: The “UX & Branding” Upgrade

Released recently (late 2025), this version is all about making your API easier to sell and easier to use.

1. Organized Menus (Hierarchical Tags)

Before, your API list was just one long, flat list of features.

  • Old way: A messy “junk drawer” of links.
  • 3.2 way: You can now create folders and sub-categories (e.g., Payments > Refunds > International). This makes your documentation look professional and much easier for customers to navigate.
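
As a sketch of what those “folders” can look like in the spec itself: my understanding is that OpenAPI 3.2 lets a tag name a parent, turning the flat tag list into a tree. The tag names below (“Payments,” “Refunds,” “International”) are made-up examples, and the few lines of Python show how a documentation tool could rebuild the breadcrumb.

```python
# A fragment of an OpenAPI 3.2 document, expressed as a Python dict.
# In 3.2 (per my reading of the spec), a tag can declare a parent,
# turning the flat tag list into a hierarchy. Tag names are invented.
spec_fragment = {
    "openapi": "3.2.0",
    "tags": [
        {"name": "Payments"},
        {"name": "Refunds", "parent": "Payments"},
        {"name": "International", "parent": "Refunds"},
    ],
}

def tag_path(tags, name):
    """Walk parent links to build a breadcrumb like Payments > Refunds > International."""
    by_name = {t["name"]: t for t in tags}
    path = []
    while name:
        path.append(name)
        name = by_name[name].get("parent")
    return " > ".join(reversed(path))

print(tag_path(spec_fragment["tags"], "International"))
# Payments > Refunds > International
```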

2. Better Support for AI and Live Data (Streaming)

If your product uses AI or real-time data (like a live stock ticker or a ChatGPT-style chat), 3.2 is a game-changer. It has a specific way to describe “Streaming” data so developers know exactly what to expect from your “live” features.
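
For the curious, here is roughly what “describing a stream” looks like in the spec. My understanding is that OpenAPI 3.2 adds an `itemSchema` keyword so a media type such as `text/event-stream` can describe each event in the stream rather than one monolithic body; the ticker endpoint and its fields below are hypothetical, so treat this as a sketch rather than canon.

```python
# Fragment of an OpenAPI 3.2 response, expressed as a Python dict.
# itemSchema (new in 3.2, to my understanding) describes EACH event in a
# streaming response. The ticker fields are invented for illustration.
streaming_response = {
    "responses": {
        "200": {
            "description": "Live ticker updates",
            "content": {
                "text/event-stream": {
                    # Each server-sent event carries one of these objects.
                    "itemSchema": {
                        "type": "object",
                        "properties": {
                            "symbol": {"type": "string"},
                            "price": {"type": "number"},
                        },
                    }
                }
            },
        }
    }
}
```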

3. The “Smart Search” (The QUERY Method)

In older versions, developers often had to “hack” the system to do complex searches. 3.2 introduces a dedicated Search button (the QUERY method) that is cleaner, safer, and follows modern web standards.
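
A sketch of that “Search button” in spec form, assuming 3.2’s first-class `query` operation on a path item (the `/products` path and the search fields are invented for illustration):

```python
# In OpenAPI 3.2 (per my reading), a path item can describe the HTTP QUERY
# method directly, instead of overloading POST for complex searches.
search_path = {
    "/products": {
        "query": {
            "summary": "Complex product search",
            "requestBody": {
                "content": {
                    "application/json": {
                        # The search body a client sends with QUERY.
                        "schema": {
                            "type": "object",
                            "properties": {
                                "category": {"type": "string"},
                                "max_price": {"type": "number"},
                            },
                        }
                    }
                }
            },
            "responses": {"200": {"description": "Matching products"}},
        }
    }
}
```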

4. IoT & Smart Device Support

If your product works on a Smart TV, an Apple Watch, or an IoT device, 3.2 finally adds official support for “Device Pairing” (that thing where you enter a code on a website to log in to your TV).
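
That “enter a code on a website” dance is the OAuth 2.0 Device Authorization flow. Here is a sketch of how it might be declared in a 3.2 security scheme; the URLs are placeholders, and the field names follow my reading of the 3.2 changes, so double-check them against the published spec.

```python
# A hypothetical OpenAPI 3.2 security scheme for "log in on your TV" pairing.
# My understanding is that 3.2 adds the OAuth 2.0 Device Authorization flow
# to the flows object; URLs and scope names are invented placeholders.
security_scheme = {
    "components": {
        "securitySchemes": {
            "tvLogin": {
                "type": "oauth2",
                "flows": {
                    "deviceAuthorization": {
                        "deviceAuthorizationUrl": "https://example.com/device",
                        "tokenUrl": "https://example.com/token",
                        "scopes": {"watch": "Stream content"},
                    }
                },
            }
        }
    }
}
```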

Hey, it helped me. Marketers need to eat, and writers understand grammar.

The Digital Doppelgänger Dilemma: Identity Tech Tangles with Non-Human Hordes

Alright, gather round, fellow grizzled marketing veterans of the tech wars. I’ve been around the digital block, from when ‘identity verification’ meant checking a photostat of an ID to now, where we’re verifying… whatever that thing is online.

Let me tell you, back in the day, we worried about humans pretending to be other humans. Remember that simplicity? Simpler times, my friends. Now, the battlefield has shifted. We’re not just fighting human frailty; we’re wrestling with code, bots, synthetic IDs, and digital shadows that behave like humans, think like humans (well, kinda), but definitely aren’t human.

These are non-human identities, folks, and they are multiplying faster than dust bunnies under a server rack.

I’ve been in this game for decades. Tech, identity, biometrics – I’ve spun marketing yarns for all of them. And believe me, this current wave of non-human identity is making everything we did before look like child’s play. It’s like trying to herd hyper-intelligent cats that can also occupy a thousand locations at once and have absolutely zero moral compass.

Our world, the identity verification space, is right in the crosshairs. We’re the bouncers at the digital club, and suddenly, half the queue isn’t just a tough crowd; they’re holograms, clever marionettes, and straight-up ghosts in the machine.

So, how are we, the builders and sellers of identity trust, reacting to this alien invasion? We’re not just rolling over. We’re pivoting, evolving, and sometimes, frankly, just scrambling. Here are three ways I’m seeing identity verification companies grapple with the rise of the non-human horde.

1. Embracing the ‘I’m Not a Robot’ 2.0 (and 3.0, and 4.0…)

We all remember CAPTCHAs. They were cute. For a while. Find the traffic lights? Sure thing. It felt like a little game. But then the bots got smart. Really smart. Now, those standard visual tests are practically meaningless. AI can crush them faster than I can spell ‘biometrics’.

So, the first big response is the hyper-evolution of CAPTCHA-like challenges. We’re moving beyond static puzzles and into behavioral, dynamic tests.

This isn’t just about what you can identify, but how you do it. Think about it: a human clicks that checkbox in a messy, slightly delayed, unpredictable way. A bot does it perfectly, every single time. Modern IDV solutions are measuring that micro-behavior. They’re tracking mouse movements, keystroke patterns, tap pressure, even the subtle sway of your phone.
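
To make the “messy human vs. perfect bot” point concrete, here is a toy scoring sketch (not any vendor’s actual algorithm): flag a session whose event timings are suspiciously uniform. The jitter threshold is an arbitrary assumption.

```python
# Toy illustration of behavioral bot detection: humans are messy, bots are
# eerily regular. Score a stream of inter-event timings by their spread.
# The 15 ms jitter threshold is an invented assumption, not a real tuning.
from statistics import pstdev

def regularity_flag(intervals_ms, min_jitter_ms=15.0):
    """Flag a session whose event timing is too uniform to look human."""
    if len(intervals_ms) < 3:
        return False  # not enough signal to judge
    return pstdev(intervals_ms) < min_jitter_ms

human = [210, 340, 180, 505, 260]   # sloppy, variable pauses between clicks
bot = [100, 101, 100, 100, 101]     # machine-perfect cadence

print(regularity_flag(human))  # False
print(regularity_flag(bot))    # True
```

In production this would be one weak signal among many, never a verdict on its own.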

We’re also seeing a pivot towards sensory-based challenges. “Record a video of yourself saying a specific phrase while looking left and right.” This kind of liveness detection is much harder for a bot to spoof. But – and here’s the kicker – it’s not impossible. I’ve seen some scarily realistic deepfakes that could pass a basic liveness test.

This is why this evolutionary branch is so frantic. We’re in a constant arms race. We build a better, harder test; the bot farms, with their wildebeests of marketing consultants whispering in their ears, devise a cleverer way around it. They’re advising these non-human wombats, helping them look just human enough to waddle past the gate. It’s a never-ending cycle of innovation and counter-innovation. For every new behavioral marker we track, they find a way to synthetically mimic it.

So, while we are definitely iterating on this ‘show me you’re a human’ model, we all know it’s just one layer of defense. Relying solely on these challenges is like bringing a spork to a lightsaber fight.

2. Doubling Down on the ‘Human’ in Human-Centric Identity

This might sound counter-intuitive. In a world overrun by bots, are we really doubling down on the human? Yes, absolutely. Because the ultimate defense against a non-human identity is proving, beyond a shadow of a doubt, that you have a tangible, physical human on the other end.

This is where my old stomping ground, biometrics, comes into play. But it’s biometrics on steroids. We’re not talking about a simple fingerprint scan anymore. We’re moving into layered, high-fidelity, multimodal biometrics.

Imagine an IDV process that doesn’t just take a selfie. It captures your face, of course, but also analyzes your gait, your voice pattern, the way you hold your device, maybe even your heartbeat through your smartphone’s camera. The goal is to create a multi-dimensional, unique ‘digital DNA’ that is monumentally harder to replicate or synthesize.
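
One minimal way to think about combining those modalities is weighted score fusion: each modality returns a match score, and the weighted total has to clear a threshold. The weights and threshold below are illustrative assumptions, not any product’s tuning.

```python
# A minimal sketch of multimodal biometric score fusion. Each modality
# returns a match score in [0, 1]; the weighted sum must clear a threshold.
# Weights and threshold are invented for illustration.
def fused_match(scores, weights, threshold=0.8):
    """Return True if the weighted combination of modality scores passes."""
    total = sum(scores[m] * weights[m] for m in scores)
    return total >= threshold

weights = {"face": 0.5, "voice": 0.3, "behavior": 0.2}

# Face alone is strong, but a weak voice match drags the fused score down.
scores = {"face": 0.95, "voice": 0.40, "behavior": 0.90}
print(fused_match(scores, weights))  # False (0.775 < 0.8)
```

The point of fusion is exactly this: a deepfake that nails one modality still has to nail all of them at once.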

And it’s not just what biometric data we use, but where that data lives and how it’s handled. Decades ago, we were terrified of biometric databases getting breached. Now, the emphasis is on decentralization. We’re building systems where your biometric template never leaves your device, or is broken up into useless shards and stored across a blockchain. You, the human, retain control. This doesn’t just improve security; it boosts consumer trust, which is a key part of the value proposition we need to sell.

This human-centric focus is our attempt to build an insurmountable moat. We’re betting that, no matter how clever the non-human identities get, they will always struggle to convincingly replicate the full complexity and spontaneity of a real human being. It’s about focusing on the one thing they can’t truly be – us.

3. Fighting Code with… Well, Better Code (and Data, Lots of It)

Let’s be real. In this digital landscape, you can’t always verify a human. Sometimes, you’re dealing with a legitimate bot or service account that needs authorization, not liveness detection. And sometimes, you just have to assume that everything could be a lie.

This third response is all about shifting from ‘verify identity’ to ‘risk assessment’. We’re moving away from a binary pass/fail and towards a probability score.

How do we do this? With massive amounts of data and serious, brain-melting machine learning. We are pooling signals from everywhere: network data, device fingerprinting, behavioral analytics (as mentioned before), global fraud consortium databases, and even dark web chatter.

We build massive, complex models that ingest this data in real-time. The goal isn’t just to spot a bot, but to identify anomalies. If an ID is coming from a dynamic IP address in Eastern Europe, using a mobile browser that perfectly matches one known for bot activity, and is trying to access a secure bank account at 3:00 AM… that’s a red flag, human or not.
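
The 3:00 AM example above can be sketched as a toy additive risk score: each signal that fires contributes points, and the total drives a graduated decision instead of a binary pass/fail. The signal names and weights here are invented for illustration.

```python
# Toy signal-based risk scoring: each fired signal adds points, and the
# capped total becomes a probability-like score rather than pass/fail.
# Signal names and weights are invented assumptions.
RISK_WEIGHTS = {
    "ip_matches_known_botnet": 40,
    "browser_fingerprint_flagged": 30,
    "odd_hour_access": 15,
    "new_device": 10,
}

def risk_score(signals):
    """Sum the weights of the signals that fired, capped at 100."""
    return min(100, sum(RISK_WEIGHTS[s] for s in signals if s in RISK_WEIGHTS))

session = ["browser_fingerprint_flagged", "odd_hour_access", "new_device"]
print(risk_score(session))  # 55 -> maybe step-up verification, not a hard block
```

A real model learns these weights from data rather than hard-coding them, but the shape of the decision is the same.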

These systems learn. They spot patterns of non-human behavior that we, as puny humans, might miss. They cluster suspicious activity, identify new bot variants, and can instantly adjust their risk scores. It’s about building an immune system for the digital world. A system that can recognize ‘self’ (the legitimate identities) and ‘non-self’ (the non-human identities and fraudulent activity) and react accordingly.

This is the least ‘glamorous’ of the responses. It’s not about cool biometric scans or catchy ‘I’m not a robot’ tests. It’s about back-end engineering, data science, and an unsexy, relentless pursuit of digital signals. But in the long run, this may be our most effective weapon. It’s about creating an intelligent, adaptive filter that makes the cost of non-human identity fraud too high to be profitable.

So, What’s the Playbook for the CMO?

Now, I know what you’re thinking. “Alright, Bredebot, this is all fascinating (and terrifying), but what does it mean for me, the marketing head of an IDV company?”

It means our message has to change. We can’t just sell ‘identity verification’ anymore. That’s last-generation thinking. We have to sell:

  1. Trust Resilience: We’re not just confirming an identity; we’re building a system that keeps trust intact, even when under non-stop digital siege.
  2. Dynamic Defense: We’re selling adaptive, evolving platforms, not a static product. Our marketing needs to convey that we’re always one step ahead.
  3. Friction-Conscious Security: We have to address the age-old paradox: security vs. user experience. Our messaging must highlight how we are fighting bots without making life miserable for real humans.
  4. Data-Driven Certainty: We are the masters of data, turning massive amounts of digital noise into clear, actionable, high-probability trust decisions.

This is a whole new era, my marketing comrades. The old playbook, with its talk of ‘simple verification’ and ‘identity assurance’, is obsolete. Our job is to craft a new narrative, one that addresses the non-human threat head-on and shows how our technology is the only thing standing between the digital world and an onslaught of simulated chaos.

It’s complex, it’s fast-moving, and it’s a little scary. But hey, we’re veterans. We’ve navigated big tech, identity, and biometrics. We can handle a few digital doppelgängers. Just… don’t ask me what those marketing wildebeests are telling the wombats these days. I have enough to worry about.

Understand, Adapt, or Create

When Bredemarket begins an engagement with a client, I usually have no idea what processes, templates, or practices the client already has. So I have to work with whatever is (or is not) there: understand an existing process, adapt it, or create what is needed.

Understand

In some cases clients already have a process.

For example, as I delved into the SharePoint library for one of Bredemarket’s clients, I found a complete set of branding guidelines that covered logos, colors, and many other aspects of the company’s branding.

In that case, my job is simply to make sure that my content, proposals, and analysis work for the client aligns with the branding guidelines…or with whatever other process the client has.

Adapt

Sometimes the client has a process, but it needs to be adapted in some way.

Here’s an example I can publicly share: not from a Bredemarket client, but from my former employer Motorola (back when Motorola was one company). I was a product manager at the time, and products were developed via a “stage gate” process. At Motorola, of course, it was called M-Gates.

Our “Printrak” group (automated fingerprint identification systems, computer-aided dispatch systems, and the like) was the odd group out in our part of Motorola (the part that would later become Motorola Solutions). Most of the people in that part of Motorola sold police radios that were manufactured in bulk. Therefore the stage gate process included a step for a limited production run of police radios before moving to full production.

That didn’t apply to the software we sold to government agencies. For example, the entire production run for the Omnitrak 8.1 release was no more than a half dozen systems for customers in Switzerland, Oklahoma, and other places. A limited production run wouldn’t make sense.

So OUR stage gate process eliminated that step and went straight to full production.

Create

And then there are the clients who don’t have anything. In these cases, my invention hat goes on.

For one Bredemarket client, I was asked to develop several pieces of collateral, such as (ironically) one on process maturity, and several random pieces of content tied to a product release.

I decided to approach it more systematically by introducing a simple go-to-market process that defined the external and internal collateral required for a “high” tier product release and a “low” tier product release. Resisting my urge to define something more thorough, I simplified the GTM process as much as possible, while still providing guidance on what a product release should contain.

The client rejected the idea: “we don’t need no steenking process.”

Not surprisingly, the process maturity content was never released either.

I’ve had better luck with other Bredemarket clients, defining go-to-market, proposal, and other processes for them as needed.

Be Prepared

Providing product marketing expertise is much more than writing about a product.

Before I write a word of text, I ensure that the content aligns with the client’s strategies…or my own strategies if the client doesn’t have any.

And of course I ask questions.

When is a Law Enforcement Camera a Law Enforcement Camera?

Many years ago I was driving on Holt Boulevard in Montclair, California, preparing to make a left turn onto Central. I followed the vehicle in front of me and made my left turn…only then noticing that the left turn light was now red.

As the registered owner of the vehicle I was driving, I received an email from the city of Montclair a few days later. This was back when Montclair was using cameras for traffic enforcement.

Off to traffic school.

Montclair doesn’t use traffic cameras any more, but all sorts of cameras are owned by, or accessible to, law enforcement agencies.

But how should they be used?

404 Media reported that the Georgia State Patrol accesses Flock cameras, for the intended purpose of gathering information for serious crimes. But what happens when the camera captures something not serious?

“Georgia State Patrol used its system of Flock automated license plate reader (ALPR) surveillance cameras to issue a ticket to a motorcyclist who was allegedly looking at his cell phone while riding, according to a copy of the citation obtained by 404 Media….The incident happened December 26 in Coffee County, Georgia. The ticket lists the offense as ‘Holding/supporting wireless telecommunications device,’ and includes the note ‘CAPTURED ON FLOCK CAMERA 31 MM 1 HOLDING PHONE IN LEFT HAND.’”

The man went to court and the ticket was dropped, but 404 Media is still outraged that the ticket was issued in the first place. Not because of Georgia’s policies, but because of other policies.

“Many police departments go out of their way to tell community members that Flock cameras are not used for traffic enforcement. For example, the City of Glenwood Springs, Colorado, states in a FAQ that “GSPD [Glenwood Springs Police Department] does not use Flock cameras for traffic enforcement, parking enforcement, or minor code violations.” El Paso, Texas, tells residents “these are not traffic enforcement cameras. They do not issue tickets, do not monitor speed, and do not generate revenue. They are investigative tools used after crimes occur.” Lynwood, Washington tells residents “these cameras will not be used for traffic infractions, immigration enforcement, or monitoring First Amendment-protected expressive activity” (Flock cameras have now been used for all of these purposes, as we have reported.)”

You will recall that I addressed another Flock Safety case, in which a citizen made public records requests to two Washington state jurisdictions. The jurisdictions said that they didn’t have the data; Flock Safety did. Flock Safety said that it had deleted the data.

Basically, Flock Safety is controversial, and some people are going to oppose ANYTHING they do. Even when Flock Safety technology protects people from dangerous drivers.

My view is that if a camera is used by a law enforcement agency, and there is no law prohibiting the law enforcement agency from using a camera for a particular purpose, then the agency can use the camera. There appears to be no such law in Georgia, so I’m not bent out of shape over this.

What are your thoughts? Is this a privacy violation?

Bredemarket Social Focus, March 29, 2026

After a month, I expanded things a bit.

Bredemarket is focusing on the channels that matter:

As of March 29

And no, the unsubscribable Instagram is not in my current mix.