I’m Bot a Doctor: Consumer-grade Generative AI Dispensation of Health Advice

In the United States, it is a criminal offense for a person to claim they are a health professional when they are not. But what about a non-person entity?

Often technology companies seek regulatory approval before claiming that their hardware or software can be used for medical purposes.

Users aren’t warned that generative AI is not a doctor

Consumer-grade generative AI responses are another matter. Maybe.

“AI companies have now mostly abandoned the once-standard practice of including medical disclaimers and warnings in response to health questions.”

A study led by Sonali Sharma analyzed historical responses to medical questions since 2022. The study included OpenAI, Anthropic, DeepSeek, Google, and xAI. It included both answers to user health questions and analysis of medical images. Note that there is a difference between medical-grade image analysis products used by professionals, and general-purpose image analysis performed by a consumer-facing tool.

Dharma’s conclusion? Generative AI’s “I’m not a doctor” warnings have declined since 2022.

But users ARE warned…sort of

But at least one company claims that users ARE warned.

“An OpenAI spokesperson…pointed to the terms of service. These say that outputs are not intended to diagnose health conditions and that users are ultimately responsible.”

The applicable clause in OpenAI’s TOS can be found in section 9, Medical Use.

“Our Services are not intended for use in the diagnosis or treatment of any health condition. You are responsible for complying with applicable laws for any use of our Services in a medical or healthcare context.”

4479

From OpenAI’s Service Terms.

But the claim “it’s in the TOS” sometimes isn’t sufficient. 

  • I just signed a TOS from a company, but was explicitly reminded that I was signing something that required binding arbitration in place of lawsuits.
  • Is it sufficient to restrict a “don’t rely on me for medical advice; you could die” warning to a document that we MAY only read once?

Proposed “The Bots Want to Kill You” contest

Of course, one way to keep generative AI companies in line is to expose them to the Rod of Ridicule. When the bots provide bad medical advice, expose them:

“Maxwell claimed that in the first message Tessa sent, the bot told her that eating disorder recovery and sustainable weight loss can coexist. Then, it recommended that she should aim to lose 1-2 pounds per week. Tessa also suggested counting calories, regular weigh-ins, and measuring body fat with calipers. 

“‘If I had accessed this chatbot when I was in the throes of my eating disorder, I would NOT have gotten help for my ED. If I had not gotten help, I would not still be alive today,” Maxwell wrote on the social media site. “Every single thing Tessa suggested were things that led to my eating disorder.’”

The organization hosting the bot, the National Eating Disorders Association (NEDA), withdrew the bot within a week.

How can we, um, diagnose additional harmful recommendations delivered without disclaimers?

Maybe a “The Bots Want to Kill You” contest is in order. Contestants would gather reproducible prompts for consumer-grade generative AI applications. The prompt most likely to result in a person’s demise would receive a prize of…well, that still has to be worked out.

2 Comments

Leave a Comment