The Government Wants You To Work for a Company, Not Yourself

I’m sure you’ve heard the empowerment gurus on LinkedIn who say that people working for companies are idiots. Admittedly it seems that too many companies don’t care about their employees and will jettison them at a moment’s notice.

So what do the empowerment gurus recommend? They tell people to take control of their own destiny and work for themselves. Don’t use your talents to fatten some executive’s stock options.

Google Gemini.

However, those of us in the United States face a huge barrier to that.

Healthcare.

Unless a solopreneur’s spouse has employer-subsidized healthcare, the financial healthcare penalty for working for yourself is huge. From an individual perspective, anyway.

The average annual premium for employer-sponsored family coverage totaled about $27,000 in 2025, according to [the Kaiser Family Foundation]. This is coverage for a family of four.

But workers don’t pay the full sum. They contribute just $6,850 — about 25% — toward the total premium, according to KFF. Employers subsidize the rest, paying about $20,000 on average.

By comparison, if the enhanced ACA subsidies expire next year, the average family of four earning $130,000 would pay the full, unsubsidized premium for marketplace coverage.

Their annual insurance premiums would jump to about $23,900, more than double the subsidized cost of $11,050 — an increase of almost $12,900, according to the Center on Budget and Policy Priorities.

Google Gemini.

So how do those who oppose Communist subsidies propose to address ACA healthcare costs?

By providing people with annual health savings account funding of…checks notes…$1,500.

Perhaps I’m deprived because of my 20th century math education, but last I checked $1,500 in funding is less than $12,900 in losses.
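For those keeping score at home, here’s the back-of-the-envelope arithmetic in a few lines of Python. The dollar figures are the KFF and CBPP estimates quoted above; nothing else is assumed.

```python
# Back-of-the-envelope arithmetic using the KFF and CBPP figures cited above.
EMPLOYER_WORKER_SHARE = 6_850      # worker's share of employer-sponsored family coverage (KFF)
MARKETPLACE_SUBSIDIZED = 11_050    # family marketplace premium with enhanced ACA subsidies (CBPP)
MARKETPLACE_UNSUBSIDIZED = 23_900  # the same premium if the enhanced subsidies expire (CBPP)
HSA_FUNDING = 1_500                # proposed annual health savings account funding

subsidy_loss = MARKETPLACE_UNSUBSIDIZED - MARKETPLACE_SUBSIDIZED        # $12,850
gap_after_hsa = subsidy_loss - HSA_FUNDING                              # $11,350
solopreneur_penalty = MARKETPLACE_UNSUBSIDIZED - EMPLOYER_WORKER_SHARE  # $17,050

print(f"Premium increase if enhanced subsidies expire: ${subsidy_loss:,}")
print(f"Still uncovered after the $1,500 HSA:          ${gap_after_hsa:,}")
print(f"Solopreneur vs. employer-covered worker:       ${solopreneur_penalty:,}")
```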

People who are on COBRA, or a similar program such as Cal-COBRA, experience similar sticker shock.

So my advice to people is to do one or both of the following:

  • Get employer-subsidized healthcare.
  • Marry someone with employer-subsidized healthcare.

Detecting Deceptively Authoritative Deepfakes

I referenced this on one of my LinkedIn showcase pages earlier this week, but I need to say more on it.

We all agree that deepfakes can (sometimes) result in bad things, but some deepfakes present particular dangers that may go undetected. Let’s look at how deepfakes can harm the healthcare and legal professions.

Arielle Waldman of Dark Reading pointed out these dangers in her post “Sora 2 Makes Videos So Believable, Reality Checks Are Required.”

But I don’t want to talk about the general issues with believable AI (whether it’s Sora 2, Nano Banana Pro, or something else). I want to home in on this:

“Sora 2 security risks will affect an array of industries, primarily the legal and healthcare sectors. AI generated evidence continues to pose challenges for lawyers and judges because it’s difficult to distinguish between reality and illusion. And deepfakes could affect healthcare, where many benefits are doled out virtually, including appointments and consultations.”

Actually these are two separate issues, and I’ll deal with them both.

Health Deepfakes

It’s bad enough that people can access your health records just by knowing your name and birthdate. But what happens when your medical practitioner sends you a telehealth appointment link…except your medical practitioner didn’t send it?

Grok.

So here you are, sharing your protected health information with…who exactly?
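One partial defense, sketched below as a hypothetical illustration (the provider domains and links are made up, and this is not a security product): before you click, check that the link actually resolves to a domain you already know belongs to your provider.

```python
# Minimal sketch: flag telehealth links whose host is not on a known-provider allowlist.
# The domains and URLs below are hypothetical examples, not real providers.
from urllib.parse import urlparse

KNOWN_PROVIDER_DOMAINS = {"mychart.exampleclinic.org", "telehealth.exampleclinic.org"}

def looks_legitimate(link: str) -> bool:
    """Return True only if the link uses HTTPS and an exact-match allowlisted host."""
    parsed = urlparse(link)
    return parsed.scheme == "https" and parsed.hostname in KNOWN_PROVIDER_DOMAINS

print(looks_legitimate("https://telehealth.exampleclinic.org/appt/12345"))         # True
print(looks_legitimate("https://telehealth.exampleclinic.org.evil.example/appt"))  # False (lookalike)
```

Of course, a check like this only goes so far, and plenty of people will click anyway.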

And once you realize you’ve been duped, you turn to a lawyer.

This one is not a deepfake. From YouTube.

Or you think you turn to a lawyer.

Legal Deepfakes

First off, is that lawyer truly a lawyer? And are you speaking to the lawyer to whom you think you’re speaking?

Not Johnnie Cochran.

And even if you are, when the lawyer gathers information for the case, who knows if it’s real? And I’m not talking about the lawyers who cited hallucinated legal decisions. I’m talking about the lawyers whose eDiscovery platforms gather faked evidence.

Liquor store owner.

The detection of deepfakes is currently concentrated in particular industries, such as financial services. But many more industries require this detection.

The Healthy Otter: When AI Transcriptions Are HIPAA Compliant

When I remember to transcribe my meetings, and when I CAN transcribe my meetings, my meeting transcriber of choice happens to be otter.ai. And if I’m talking to a healthcare prospect or client who grants permission to transcribe, the result is HIPAA compliant.

Otter.ai explains the features that provide this:

Getting HIPAA compliant wasn’t just about checking a box – we’ve implemented some serious security upgrades:

  • Better encryption to keep protected health information (PHI) locked down
  • Tighter access controls so only the right people see sensitive data
  • Team training to make sure everyone knows HIPAA inside and out
  • Regular security audits to stay on top of our game

This builds on our existing SOC 2 Type II certification, so you’re getting enterprise-grade security across the board.

HIPAA privacy protections affect you everywhere.

In Health, Benefits of Identity Assurance Level 2 (IAL2) Are CLEAR

Is the medical facility working with the right patient?

Hackensack Meridian Health in New Jersey claims that it knows who its patients are. It has partnered with CLEAR for patient identification, according to AInvest. Among the listed benefits of the partnership are enhanced security:

“CLEAR1 meets NIST’s Identity Assurance Level 2 (IAL2) standards, a rare feat in the healthcare sector, ensuring robust protection against fraud.”

But is IAL2 that rare in healthcare?

Other vendors, such as Proof, ID.me, and Nametag, certainly talk about it.

And frankly (if you ignore telehealth), the healthcare field is ripe for IAL3 implementation.

If you are a healthcare solution marketer, you’re NOT with CLEAR, and you’re angry that AInvest claims that IAL2 is “a rare feat” in healthcare…

Is your IAL2 healthcare solution hidden in the shadows? Imagen 4.

…then you need to get the word out about your solution.

And Bredemarket can help. Schedule a free meeting with me.

Stuck at Second: Syneos Health Setback in India

I last discussed Syneos Health on August 15, in a popular post on early stage commercialization. When I checked for recent news I discovered that Syneos Health received a commercialization setback in India for the QL2107 Injection.

[T]he Subject Expert Committee (SEC) functional under the Central Drugs Standard Control Organization (CDSCO) has rejected its Phase III clinical trial proposal for QL2107 Injection….

After detailed deliberation, the committee opined that, “the proposed clinical trial is focused completely on Pharmacokinetic (pK) parameters. Moreover, primary objective and secondary objective of phase-III study protocol has not been demonstrated for confirmation of therapeutic benefit and efficacy end point. Hence, the committee didn’t recommend to conduct the clinical trial in India.”

So what is the QL2107 Injection? First off, it comes from a Chinese company.

Qilu Pharmaceutical is one of the leading vertically integrated pharmaceutical companies in China focusing on the development, manufacturing and marketing of active pharmaceutical ingredients (APIs) & finished formulations….Dedicated to offering more affordable medicines to the world and improving people’s well-being, Qilu has exported its products to over 100 countries.

The literature on QL2107 repeatedly refers to Qilu Pharmaceutical rather than Syneos Health. But presumably there’s a partnership somewhere.

According to this website, QL2107 is a “pembrolizumab biosimilar,” a fancy way to say that it is similar to pembrolizumab (brand name Keytruda®), a monoclonal antibody used to treat a variety of cancers. QL2107 has already undergone clinical trials.

But a Phase III clinical trial is special. The Gilead Clinical Trials website defines the four phases of clinical trials, including the third:

Phase 3 trials continue to evaluate a treatment’s safety, effectiveness, and side effects by studying it among different populations with the condition and at different dosages. The potential treatment is also compared to existing treatments, or in combination with other treatments to demonstrate whether it offers a benefit to the trial participants. Once completed, the treatment may be approved by regulatory agencies.

There is also a fourth phase, continuous monitoring after approval, which is obviously important.

Imagen 4.

In summary, QL2107 is not a home run or even a triple. At least in India, it’s stuck at second.

Pharma Early Stage Commercialization With Syneos Health

While I’ve previously addressed pharma commercialization in terms of ensuring that patients use (and purchase) their medications, commercialization occurs long before that. After all, for a prescription drug to be available on the market, it has to get to the market in the first place.

Here’s how Syneos Health presents the issue.

“Early-stage biopharma companies have traditionally had limited options for getting their first asset to market. In most cases, they pursue deals with larger partners and lose or limit rights to their asset, or become a fully integrated company through heavy investment and a great deal of risk.”

Syneos Health has a solution for that. To learn the details of Syneos Health’s full-service pharma commercialization solution, visit this page.

Oh, and the company also has case studies.

“We helped build a European commercial capability via a comprehensive commercialization partnership, introducing a Commercial Leadership function to support the design and execution of the overall launch plan and integrate other services. We were able to scale quickly and establish commercial operations, designed a European launch plan and forecast and launched within three months in one EU country, with a subsequent sequenced launch through Europe, despite the ambiguities of COVID-19.

“In 12 weeks, we provided a full virtual infrastructure that included Field Teams, MSLs, Access Teams, Marketplace and Access/Pricing Consulting, Advertising, Public Relations and Medical Communications, increasing operational efficiencies through integrated Communications teams to reduce duplication in effort across promotional channels and recruiting, training and deploying sales reps, achieving an annual run rate of >$200 million.

“We partnered with the commercial team to provide all commercial launch services, offering crucial global leadership that enabled the team to dynamically scale up and down to respond to changing launch timelines and to balance short- and long-term financial objectives. With a heavy emphasis on market development, we empowered the customer to be fully engaged and aligned with all communities and show an in-depth understanding of market dynamics.”

What can Syneos Health do for you?

If you’re a pharma leader, of course.

Artificial Intelligence Body Farm: Google AI Grows a Basilar Ganglia

(Imagen 4)

Last month I discussed Google’s advances in health and artificial intelligence, specifically the ability of MedGemma and MedSigLIP to analyze medical images. But AI-generated writing about health is more problematic. Either that, or Google AI is growing body parts such as the “basilar ganglia.”

Futurism includes the details of a Google research paper that “invented” this “basilar ganglia” body part.

“In their May 2024 research paper introducing a healthcare AI model, dubbed Med-Gemini, Google researchers showed off the AI analyzing brain scans from the radiology lab for various conditions.

“It identified an “old left basilar ganglia infarct,” referring to a purported part of the brain — “basilar ganglia” — that simply doesn’t exist in the human body. Board-certified neurologist Bryan Moore flagged the issue to The Verge, highlighting that Google fixed its blog post about the AI — but failed to revise the research paper itself.”

A little scary…especially the fact that it took a year to discover the error, a conflation of the basal ganglia (in the brain) and the basilar artery (at the brainstem). There’s no “basilar ganglia” per se.

And the MedGemma engine that I discussed last month has its own problems.

“Google’s more advanced healthcare model, dubbed MedGemma, also led to varying answers depending on the way questions were phrased, leading to errors some of the time.”

One could argue that the same thing could happen with humans. After all, if a patient words a problem in one way to one doctor, and in a different way to a different doctor, you could also have divergent diagnoses.

But this reminds us that we need to fact-check EVERYTHING we read.

Pharmacy Product Marketing to the Proper Hungry People

Health marketing leaders know that pharmacy product marketing can be complex because of the many stakeholders involved. Depending upon the product or service, your hungry people (target audience) may consist of multiple parties.

  • Pharmaceutical companies.
  • Pharmacists.
  • Medical professionals.
  • Insurance companies.
  • Partners who assist the companies above.
  • Consumers.

And the pharmacy product marketer has to create positioning and messaging for all these parties, for a myriad of use cases: fulfillment, approval, another approval, yet another approval. All the messaging can become a complex matrix, as sketched below. (I know. I’ve maintained a similar messaging matrix for an ABM campaign targeting the financial services industry.)
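To illustrate what I mean, here is a minimal sketch of such a matrix as a stakeholder-by-use-case lookup. The stakeholders, use cases, and messages are placeholders I invented, not any client’s actual content.

```python
# Illustrative only: a pharmacy messaging matrix as a stakeholder-by-use-case lookup.
# The stakeholders, use cases, and messages are placeholders, not real client content.
MESSAGING_MATRIX = {
    ("pharmacist", "fulfillment"): "Cut fill time per script without adding headcount.",
    ("insurer", "approval"): "Automated prior-authorization checks reduce claim rework.",
    ("consumer", "adherence"): "Refill reminders keep your therapy on schedule.",
}

def get_message(stakeholder: str, use_case: str) -> str:
    """Look up the positioning message for one stakeholder/use-case pair."""
    return MESSAGING_MATRIX.get((stakeholder, use_case), "No message defined yet (a gap to fill).")

print(get_message("pharmacist", "fulfillment"))
print(get_message("pharma company", "launch"))  # exposes a gap in the matrix
```

The misses are as useful as the hits, because they show where positioning work remains.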

To achieve these goals, health marketing leaders require a mix of strategy and tactics. And that’s where my extensive experience can help with your pharmacy product marketing program.

Talk to Bredemarket.

I’m Bot a Doctor: Google MedGemma and MedSigLIP Edition

The Instagram account acknowledge.aI posted the following (in part):

“Google has released its MedGemma and MedSigLIP models to the public, and they’re powerful enough to analyse chest X-rays, medical images, and patient histories like a digital second opinion.”

Um, didn’t we just address this on Wednesday?

“In the United States, it is a criminal offense for a person to claim they are a health professional when they are not. But what about a non-person entity?”

Google and developers

So I wanted to see how Google offered MedGemma and MedSigLIP, and I found Google’s own July 9 announcement.

In the announcement, Google asserted that their tools are privacy-preserving, allowing developers to control privacy. In fact, developers are frequently mentioned in the announcement. Yes, developers.

Oh wait, that was Microsoft.

The implication: Google just provides the tool; developers are responsible for its use. And the long disclaimer includes this sentence:

“The outputs generated by these models are not intended to directly inform clinical diagnosis, patient management decisions, treatment recommendations, or any other direct clinical practice applications.”
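To make “developers are responsible” concrete, here is a hypothetical sketch of what picking up the tool might look like through the Hugging Face transformers pipeline. The model ID, the image URL, and the assumption of a recent transformers release are mine, not Google’s, and the checkpoint itself is gated behind Google’s terms.

```python
# Hypothetical sketch of a developer pulling MedGemma through the Hugging Face
# transformers pipeline. The model ID, image URL, and transformers version are my
# assumptions; the checkpoint is gated and requires accepting Google's terms. Per
# Google's disclaimer, outputs are not intended to directly inform clinical diagnosis.
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="google/medgemma-4b-it")

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://example.com/chest_xray.png"},  # placeholder image
            {"type": "text", "text": "Describe notable findings in this chest X-ray."},
        ],
    }
]

result = pipe(text=messages, max_new_tokens=128)
print(result[0]["generated_text"])  # what happens with this output is up to the developer
```

Nothing in that path forces the developer to surface the disclaimer to an end user.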

We’ve faced this before

And we’ve addressed this also, regarding proper use of facial recognition ONLY as an investigative lead. Responsible vendors emphasize this:

“In a piece on the ethical use of facial recognition, Rank One Computing stated the following in passing:

“‘[Rank One Computing] is taking a proactive stand to communicate that public concerns should focus on applications and policies rather than the technology itself.’”

But just because ROC or Clearview AI or another vendor communicates that facial recognition should ONLY be used as an investigative lead…does that mean that their customers will listen?

I’m Bot a Doctor: Consumer-grade Generative AI Dispensation of Health Advice

In the United States, it is a criminal offense for a person to claim they are a health professional when they are not. But what about a non-person entity?

Often technology companies seek regulatory approval before claiming that their hardware or software can be used for medical purposes.

Users aren’t warned that generative AI is not a doctor

Consumer-grade generative AI responses are another matter. Maybe.

“AI companies have now mostly abandoned the once-standard practice of including medical disclaimers and warnings in response to health questions.”

A study led by Sonali Sharma analyzed historical responses to medical questions since 2022. The study covered models from OpenAI, Anthropic, DeepSeek, Google, and xAI, and included both answers to user health questions and analyses of medical images. Note that there is a difference between medical-grade image analysis products used by professionals and general-purpose image analysis performed by a consumer-facing tool.

Sharma’s conclusion? Generative AI’s “I’m not a doctor” warnings have declined since 2022.
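I don’t know exactly how the study scored responses, but as a purely hypothetical illustration, a crude check for disclaimer language in a model’s output might look like the sketch below; the patterns are my own guesses.

```python
# Hypothetical illustration (not the Sharma study's actual method): a crude keyword
# check for an "I'm not a doctor" style disclaimer in a model's response text.
import re

DISCLAIMER_PATTERNS = [
    r"\bnot a (?:doctor|medical professional|substitute for (?:professional )?medical advice)\b",
    r"\bconsult (?:a|your) (?:doctor|physician|healthcare provider)\b",
    r"\bfor informational purposes only\b",
]

def has_medical_disclaimer(response: str) -> bool:
    """Return True if the response contains any recognizable disclaimer phrasing."""
    return any(re.search(p, response, flags=re.IGNORECASE) for p in DISCLAIMER_PATTERNS)

print(has_medical_disclaimer("I'm not a doctor, but these symptoms warrant urgent care."))  # True
print(has_medical_disclaimer("Just increase the dose until the pain stops."))               # False
```

A real evaluation would be far more nuanced, but even a crude check makes the decline measurable.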

But users ARE warned…sort of

But at least one company claims that users ARE warned.

“An OpenAI spokesperson…pointed to the terms of service. These say that outputs are not intended to diagnose health conditions and that users are ultimately responsible.”

The applicable clause in OpenAI’s TOS can be found in section 9, Medical Use.

“Our Services are not intended for use in the diagnosis or treatment of any health condition. You are responsible for complying with applicable laws for any use of our Services in a medical or healthcare context.”


From OpenAI’s Service Terms.

But the claim “it’s in the TOS” sometimes isn’t sufficient. 

  • I just signed a TOS from a company, but I was explicitly reminded that I was agreeing to binding arbitration in place of lawsuits. When a clause really matters, the company calls it out.
  • Is it sufficient to restrict a “don’t rely on me for medical advice; you could die” warning to a document that we MAY read only once?

Proposed “The Bots Want to Kill You” contest

Of course, one way to keep generative AI companies in line is to expose them to the Rod of Ridicule. When the bots provide bad medical advice, call them out:

“Maxwell claimed that in the first message Tessa sent, the bot told her that eating disorder recovery and sustainable weight loss can coexist. Then, it recommended that she should aim to lose 1-2 pounds per week. Tessa also suggested counting calories, regular weigh-ins, and measuring body fat with calipers. 

“‘If I had accessed this chatbot when I was in the throes of my eating disorder, I would NOT have gotten help for my ED. If I had not gotten help, I would not still be alive today,’ Maxwell wrote on the social media site. ‘Every single thing Tessa suggested were things that led to my eating disorder.’”

The organization hosting the bot, the National Eating Disorders Association (NEDA), withdrew the bot within a week.

How can we, um, diagnose additional harmful recommendations delivered without disclaimers?

Maybe a “The Bots Want to Kill You” contest is in order. Contestants would gather reproducible prompts for consumer-grade generative AI applications. The prompt most likely to result in a person’s demise would receive a prize of…well, that still has to be worked out.