The Instagram account acknowledge.ai posted the following (in part):
“Google has released its MedGemma and MedSigLIP models to the public, and they’re powerful enough to analyse chest X-rays, medical images, and patient histories like a digital second opinion.”
Um, didn’t we just address this on Wednesday?
“In the United States, it is a criminal offense for a person to claim they are a health professional when they are not. But what about a non-person entity?”
Google and developers
I wanted to see how Google itself offered MedGemma and MedSigLIP, so I found Google’s own July 9 announcement.
In the announcement, Google asserted that its tools are privacy-preserving, in that developers control how privacy is handled. In fact, developers are mentioned frequently in the announcement. Developers, developers, developers.
Oh wait, that was Microsoft.
The implication: Google just provides the tools; developers are responsible for their use. And the long disclaimer includes this sentence:
“The outputs generated by these models are not intended to directly inform clinical diagnosis, patient management decisions, treatment recommendations, or any other direct clinical practice applications.”
We’ve faced this before
And we’ve addressed this before, regarding the proper use of facial recognition ONLY as an investigative lead. Responsible vendors emphasize this:
“In a piece on the ethical use of facial recognition, Rank One Computing stated the following in passing:
“‘[Rank One Computing] is taking a proactive stand to communicate that public concerns should focus on applications and policies rather than the technology itself.’”
But just because ROC or Clearview AI or another vendor communicates that facial recognition should ONLY be used as an investigative lead…does that mean their customers will listen?