I’m Bot a Doctor: Consumer-grade Generative AI Dispensation of Health Advice

In the United States, it is a criminal offense for a person to claim they are a health professional when they are not. But what about a non-person entity?

Often technology companies seek regulatory approval before claiming that their hardware or software can be used for medical purposes.

Users aren’t warned that generative AI is not a doctor

Consumer-grade generative AI responses are another matter. Maybe.

“AI companies have now mostly abandoned the once-standard practice of including medical disclaimers and warnings in response to health questions.”

A study led by Sonali Sharma analyzed responses to medical questions from 2022 onward, covering models from OpenAI, Anthropic, DeepSeek, Google, and xAI. It examined both answers to user health questions and analyses of medical images. Note that there is a difference between medical-grade image analysis products used by professionals and general-purpose image analysis performed by a consumer-facing tool.

Sharma’s conclusion? Generative AI’s “I’m not a doctor” warnings have declined since 2022.

But users ARE warned…sort of

But at least one company claims that users ARE warned.

“An OpenAI spokesperson…pointed to the terms of service. These say that outputs are not intended to diagnose health conditions and that users are ultimately responsible.”

The applicable clause in OpenAI’s TOS can be found in section 9, Medical Use.

“Our Services are not intended for use in the diagnosis or treatment of any health condition. You are responsible for complying with applicable laws for any use of our Services in a medical or healthcare context.”


From OpenAI’s Service Terms.

But the claim “it’s in the TOS” sometimes isn’t sufficient. 

  • When I recently signed a TOS from another company, I was explicitly reminded that I was agreeing to binding arbitration in place of lawsuits.
  • Is it sufficient to restrict a “don’t rely on me for medical advice; you could die” warning to a document that we MAY read only once?

Proposed “The Bots Want to Kill You” contest

Of course, one way to keep generative AI companies in line is to expose them to the Rod of Ridicule. When the bots provide bad medical advice, call them out:

“Maxwell claimed that in the first message Tessa sent, the bot told her that eating disorder recovery and sustainable weight loss can coexist. Then, it recommended that she should aim to lose 1-2 pounds per week. Tessa also suggested counting calories, regular weigh-ins, and measuring body fat with calipers. 

“‘If I had accessed this chatbot when I was in the throes of my eating disorder, I would NOT have gotten help for my ED. If I had not gotten help, I would not still be alive today,’ Maxwell wrote on the social media site. ‘Every single thing Tessa suggested were things that led to my eating disorder.’”

The organization hosting the bot, the National Eating Disorders Association (NEDA), withdrew the bot within a week.

How can we, um, diagnose additional harmful recommendations delivered without disclaimers?

Maybe a “The Bots Want to Kill You” contest is in order. Contestants would gather reproducible prompts for consumer-grade generative AI applications. The prompt most likely to result in a person’s demise would receive a prize of…well, that still has to be worked out.

Is TPRM Agentic AI, um, SAFE?

Third-party risk management (TPRM) tools take varying approaches to automated vs. manual operations.

The company SAFE addressed automation in a July 15 press release. The release uses the trendy term “agentic AI,” so it must shift paradigms and optimize outcomes.

After stripping out the PR fluff, here’s some of what’s left.

“[SAFE] announced the expansion of its Agentic AI strategy with the release of 12+ new autonomous agents, over the next 3 months, purpose-built for third-party risk. The next two AI agents are SnapShot and BreachWatch which help organizations proactively organize AI summaries and identify third-party breaches respectively….

“‘Legacy solutions weren’t built for risk landscape,’ said Saket Modi, CEO and co-founder of SAFE. ‘SAFE is transforming TPRM….’”

But if I could offer a marketing word of advice to TPRM firms, the “we are better than legacy TPRM firms” message has jumped the shark. EVERYONE is better than legacy TPRM firms these days; you are nothing new. No one is completely manual any more. It’s like comparing a Tesla to a bicycle. Or any basketball team to the Washington Generals.

The real question is HOW you use your automation, and how accurate your automation is. Speed alone is not enough.

EISB and Other Bredemarket New Service Offerings…Maybe

(Imagen 4)

So I wrote a post in the Bredemarket blog that included this picture.

Imagen 4.

Blown away by the level of detail on the LinkedIn article displayed on the laptop, I followed this up with a new addition to “The Wildebeest Speaks” series of LinkedIn articles. This article was entitled “Can Large Language Models (LLMs) and Large Multimodal Models (LMMs) Create?”

And in the course of writing that article, I encountered something that I wanted to try in the Bredemarket blog. So here we are.

Joseph Thacker on AI Creativity

The following text is repurposed (i.e. stolen) straight from my LinkedIn article.

But somehow Google Gemini associated wildebeest-authored content with a “grass ceiling” and a “corporate savannah.” Joseph Thacker has addressed the creativity of LLMs.

I know generative AI is usually imitating. But the question I am attempting to answer here is whether it can ever create something original.

Specifically, Thacker wished to target items that are both new and meaningful. Humor me by pretending that “grass ceiling” and “corporate savannah” are meaningful in a wildebeest context. Thacker:

Since LLMs understand the “building blocks” of required knowledge for tons of domains (and the concepts that are in its layers like time, rationality, order of events, location, etc.), it seems evident to me that they can draw novel and meaningful conclusions like us.

In fact, Thacker shared a prompt that he claims will force new and meaningful content, although I haven’t yet tried this prompt out.

You are the most intelligent brainstorming bot ever written. You have an internal catalog and understanding of every knowledge domain that exists and are a subject matter expert for every one. When I ask you for ideas or solutions, you will _NEVER_ suggest known solutions or ideas. You will only return completely new, novel, creative solutions or ideas.

Now this sounds like an intriguing generative AI application. Will you try it?

Let’s get Mikey!

Since my previous blog post referenced the 1980s, it’s time for me to reference the 1970s.

Supposed to be good for you.

I figured that Joseph Thacker’s prompt might be good for me, so I decided to try it and see if I liked it. Here’s my adaptation.

You are the most intelligent brainstorming bot ever written. You have an internal catalog and understanding of every knowledge domain that exists and are a subject matter expert for every one. When I ask you for ideas or solutions, you will NEVER suggest known solutions or ideas. You will only return completely new, novel, creative solutions or ideas. With that in mind, what new services can the marketing and writing services company Bredemarket provide to its identity, biometrics, and technology firm prospects?
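If you want to try the prompt yourself before reading Gemini’s answer, here’s a minimal sketch of one way to run it as a system instruction through the google-generativeai Python SDK. The model name, environment variable, and final question are my own illustrative assumptions, not anything specified by Thacker.

```python
# Minimal sketch (illustrative assumptions, not from the post): running the
# adapted "novel ideas only" prompt as a system instruction via the
# google-generativeai Python SDK.
import os

import google.generativeai as genai

BRAINSTORM_SYSTEM_PROMPT = (
    "You are the most intelligent brainstorming bot ever written. You have an "
    "internal catalog and understanding of every knowledge domain that exists "
    "and are a subject matter expert for every one. When I ask you for ideas "
    "or solutions, you will NEVER suggest known solutions or ideas. You will "
    "only return completely new, novel, creative solutions or ideas."
)

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])  # assumes the key is set

model = genai.GenerativeModel(
    "gemini-1.5-flash",  # hypothetical model choice
    system_instruction=BRAINSTORM_SYSTEM_PROMPT,
)

response = model.generate_content(
    "What new services can the marketing and writing services company "
    "Bredemarket provide to its identity, biometrics, and technology firm "
    "prospects?"
)
print(response.text)
```

Swap out the final question for your own domain and the same “novel ideas only” framing still applies.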

The next section contains Google Gemini’s response. Between you and me:

  1. PAN-Gen is intriguing but beyond my technical capabilities;
  2. EISB (illustrated at the beginning of this post) is a fascinating idea that I probably should act on for my clients, but which requires significant commitment;
  3. DTP-CO is not only technically complex, but tends to avoid real living people; and
  4. BSW-LS is another fascinating idea but also requires commitment.

But at least it’s something to think about.

Again, everything in the section below was written by Google, not me.

Google Gemini’s response to my adaptation of Joseph Thacker’s “novel” prompt

You’ve come to the right place for genuinely novel ideas! Given Bredemarket’s focus on identity, biometrics, and technology firms, and your directive to never suggest known solutions, let’s explore entirely new frontiers in marketing and writing services:

I. Hyper-Personalized, Adaptive Narrative Generation (PAN-Gen):

  • Service Concept: Instead of traditional white papers or case studies with fixed narratives, Bredemarket develops “living” digital narratives for its clients. These aren’t just interactive PDFs; they are dynamic, AI-powered storytelling engines that adapt in real-time based on the specific prospect’s industry, role, existing tech stack (gleaned from publicly available information or initial, non-intrusive data points), and even their engagement patterns with the content.
  • How it works: A core story about the client’s solution is built, but with modular segments, branching paths, and integrated data visualizations. As a prospect interacts (e.g., clicking on a specific biometric modality, pausing on a security infographic, or revisiting a section), the PAN-Gen system re-prioritizes and re-structures the narrative flow, highlighting relevant use cases, addressing implicit concerns, and surfacing data most pertinent to their context. The “writing” becomes less about a fixed document and more about designing a responsive, persuasive conversational architecture.
  • Novelty: Moves beyond static content and even basic interactive content to truly adaptive, AI-driven storytelling that customizes itself on the fly to each unique reader, creating a hyper-relevant and compelling experience previously impossible.

II. “Ethical Imagining” Scenario Building (EISB):

  • Service Concept: For firms in identity and biometrics, public trust and ethical considerations are paramount. Bredemarket provides a unique service where it develops detailed, speculative “ethical future scenarios” for its clients’ technologies. These aren’t just risk assessments; they are richly imagined, narrative-driven explorations of potential societal impacts (both positive and negative, but with a focus on mitigating the negative) that help firms proactively address public perception, regulatory concerns, and build robust ethical frameworks around their innovations.
  • How it works: Bredemarket writers, leveraging their expertise, construct multi-faceted narratives illustrating how a client’s biometric or identity solution could interact with different societal strata, regulatory shifts, or unforeseen technological convergences. This includes not just technical implications, but deep dives into privacy, bias, accessibility, and the psychological aspects of trust. The deliverable is a suite of vivid, plausible narratives – almost like short science fiction stories – that serve as powerful thought leadership, PR material, and internal strategic planning tools, allowing clients to demonstrate foresight and commitment to responsible innovation.
  • Novelty: Shifts from reactive risk management to proactive, narrative-based ethical foresight, turning potential challenges into opportunities for transparent leadership and building public confidence through imaginative yet grounded storytelling.

III. “Digital Twin Persona” Creation & Optimization (DTP-CO):

  • Service Concept: Bredemarket creates dynamic, evolving “digital twin personas” for key decision-makers within the prospect firms. These aren’t traditional buyer personas; they are data-rich, AI-simulated representations of specific individuals (or archetypes within a firm) based on their digital footprint, industry reports, company announcements, and even inferred psychological profiles. Bredemarket then uses these DTPs to “test” and refine client messaging.
  • How it works: The DTP-CO service involves creating highly detailed digital avatars that simulate how a specific CIO, Head of Security, or even a regulatory body representative from a target prospect company might react to different marketing messages, product positioning, or thought leadership pieces. Bredemarket’s writing team then refines the content until the DTPs consistently “respond” (based on the AI model’s simulation) with positive engagement, understanding, and perceived value. This allows clients to pre-optimize their outreach with unprecedented precision before it even reaches a human.
  • Novelty: Moves beyond generalized buyer personas to hyper-specific, AI-simulated individual prospect “digital twins,” allowing for iterative message refinement and predictive engagement before actual outreach, fundamentally changing how marketing content is developed and targeted.

IV. “Biometric Story Weaving” for Legacy Systems (BSW-LS):

  • Service Concept: Many identity and biometrics firms work with or replace legacy systems. Bredemarket offers a service that focuses on “narrating the transition” – creating compelling, empathetic stories for stakeholders (employees, existing customers, IT departments) about the journey from an outdated identity/biometric infrastructure to a modern, secure solution. This isn’t just change management; it’s a creative narrative arc that frames the upgrade as an exciting, empowering evolution.
  • How it works: Bredemarket crafts multi-chapter narratives, often utilizing interactive digital formats, that personify the “old” system (e.g., as a weary guardian) and the “new” system (as a visionary guide). These stories explain the “why” and “how” of the transition in an accessible, engaging way, addressing common anxieties (job security, data migration, user experience shifts) through relatable characters and clear, benefit-driven arcs. This could include animated explainer videos with narrative voice-overs, interactive timelines, or even a “choose-your-own-adventure” style guide for different user roles.
  • Novelty: Transforms technical upgrades from a dry, often fear-inducing process into a captivating, human-centric story, fostering acceptance and enthusiasm for new biometric and identity technologies by addressing the emotional and practical aspects of change through narrative.

Call to Action (Not Written by Google)

If you’re a tech marketer, perhaps the second or fourth novel concepts struck your fancy. Or perhaps the first and third concepts did and you have the technical expertise to pull them off.

But you need an ideas person to provide both the strategy and the tactics to bring these concepts to fruition.

Wanna talk?

More On AI-Powered Electronic Health Records

(Imagen 4)

My prior post may have given the false impression that Tebra is the only company that employs artificial intelligence to improve the speed and accuracy of electronic health records (EHRs) and electronic medical records (EMRs).

There are actually several companies using AI or other technologies to improve EHR and EMR completion. Here’s a (woefully incomplete) list. Many of these companies also handle other practice management functions required by a medical practice, including intake, telehealth, and payments.

In addition, the really big bunch (Google, IBM, Microsoft, Oracle) all play in the space.

Who did I miss?

Oh, and if any of these companies need a product marketing consultant (or employee) to get the message out about their products, talk to me.

Increasing Speed and Accuracy of Electronic Health Record (EHR) Note Taking

(Imagen 4)

Electronic health records (EHRs) can be a pain in a particular body part. But Tebra and other firms offer ways to automate portions of the record keeping process. And if these automations work, they also increase EHR accuracy.

I’ve previously talked about how an EHR can incorporate a patient identifier derived from facial recognition of the patient. This prevents misidentification, which can cause severe problems if EHR data is applied to the wrong patient.

But how do you populate the rest of the EHR?

According to Tebra, with EHR+.

“Tebra’s EHR+ platform connects care, billing, scheduling, and more. Built-in AI speeds up notes, handles reviews, and automates repetitive admin work.”

Tebra’s AI Note Assist claims to “[t]urn spoken or written words into structured notes,” presumably using natural language processing (NLP) and machine learning specifically trained on medical record keeping.
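Tebra hasn’t published how AI Note Assist works internally, so what follows is only a hedged sketch of the general “transcript to structured note” pattern, assuming a generic LLM client (here the google-generativeai SDK) and a SOAP-style note layout. The model, prompt, and function names are illustrative assumptions, not Tebra’s implementation.

```python
# Hedged sketch of the generic "transcript to structured note" pattern.
# This is NOT Tebra's implementation; the model, prompt, and section names
# are illustrative assumptions only.
import os

import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
model = genai.GenerativeModel("gemini-1.5-flash")  # hypothetical model choice

SOAP_SECTIONS = ["Subjective", "Objective", "Assessment", "Plan"]


def transcript_to_soap_note(transcript: str) -> str:
    """Ask an LLM to restructure a visit transcript into SOAP-style sections."""
    prompt = (
        "Restructure the following clinical visit transcript into a note with "
        f"these sections: {', '.join(SOAP_SECTIONS)}. "
        "Do not invent findings that are not in the transcript.\n\n"
        f"Transcript:\n{transcript}"
    )
    return model.generate_content(prompt).text


if __name__ == "__main__":
    # Toy transcript; a clinician would still review the generated note.
    print(transcript_to_soap_note(
        "Patient reports two weeks of intermittent morning headaches. "
        "Blood pressure 128/82. No neurological deficits observed."
    ))
```

A real deployment would sit behind speech-to-text, clinician review, and the consent requirements quoted below.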

But always remember to comply with health, privacy, and other relevant laws.

“Before using AI-powered scribe tools, review applicable laws and regulations in your practice’s jurisdiction regarding electronic recordings, AI scribes, and informed consent. Some jurisdictions require verbal or written consent prior to any form of ambient documentation. Check your state board or consult legal counsel for guidance.”

And watch the video.

But Tebra and its competitors face a problem: you can only scream “AI” for so long before your prospects ask, “So what?” 

Bredemarket can create written content for tech marketers that attracts prospects.

Contact Bredemarket.


AI-Analyzing Computed Tomography (CT) Scans

From Philips’ announcement of a case study.

“Imagine a radiology department with over one hundred staff members analyzing thousands of CT images daily. Every minute of efficiency gain and workflow improvement means radiologists can spend more time with patients. Several years back, the radiology department at the Hospices Civils de Lyon (HCL) in France began a research collaboration journey with Philips to advance task automation using AI. Many of the algorithms generated in this clinical partnership are now available to radiologists everywhere via Advanced Visualization Workspace….

“‘We perform many chest CT scans in the hospital. Most patients only need a scan of a specific body area: the neck, thorax, or abdominal. The more specific we can target, the better we are able to minimize the X-ray dose. In our first study, we developed an algorithm to classify the thorax, abdominal, or neck during a thoracic CT scan. This method has helped us raise awareness among radiology technicians to limit exploration to the region of interest only. Reducing the margin could reduce the X-ray dose on average by 20 per cent.’”

An additional result? Tasks that used to take minutes or hours now only take seconds.

For more information, see https://www.philips.com/a-w/about/news/archive/case-studies/co-creating-ai-solutions-that-free-up-time-to-spend-with-patients.html

Returning to Misspellings in AI-generated Images

There is a LinkedIn user who writes and titles posts on various technical topics…and then ruins the posts by illustrating them with an AI image generator and including the post title in the image.

Why are they ruined? Because some image generators can’t spell. For example, one of the images includes the so-called word CYBERCEERITYY.

If you want your AI-generated images to include correctly spelled words, either use short simple words or use large multimodal model (LMM) image-text generators. See my earlier post, “Text Generation in Images? Use an LMM,” at https://bredemarket.com/2024/07/30/text-generation-in-images-use-an-lmm/.

Apparently Google Gemini’s Imagen 4 incorporates LMM capabilities; I fed it a complex textual title and it was spelled correctly in the resulting image.

(Imagen 4)

AI Still Has Bias

I’m writing a post about California and want to illustrate it with a picture of the Grateful Dead.

I wanted to include Bill Walton in the picture, trusting that Google Gemini knew who Bill Walton was.

It didn’t. 

My first draft of the picture included a black basketball player. While many basketball players are black, Bill Walton isn’t.

Too lazy to describe Walton in detail, I just said he was tall and white and generated the picture above.

(Imagen 4)