I have an email account at one of my Bredemarket clients' domains, and Slack (the real Slack) sent this email to me.
I can think of one edit…
There’s a critical difference between biometrics for identification and biometrics for health. Well, MOST biometrics for identification; what I’m about to say doesn’t apply to DNA.
When you capture biometrics from people, you don’t really care about cleanliness. If the person’s fingernails are dirty, you capture the fingerprints anyway. If the eye is infected, you capture the irises anyway.
But when you get into the healthcare arena, cleanliness is next to you-know-what.
And there are technologies for that.
Ethylene Oxide (EtO) gas is one of the most common ways to sterilize medical devices. It is a safe, tightly controlled, highly regulated process that is critical for preventing infections and ensuring patients have safe surgeries and medical treatments.
And in some cases, EtO is the ONLY way to sterilize some medical devices.
So great! Use it all the time! There’s just one teeny problem…
EtO is a human carcinogen. It causes cancer in humans. Scientific evidence in humans indicates that regular exposure to EtO over many years increases the risk of cancers of the white blood cells, including non-Hodgkin lymphoma, myeloma, and lymphocytic leukemia. Studies also show that long-term exposure to EtO increases the risk of breast cancer in women.
Workers who use EtO as a part of their jobs and people who work, live, or go to school or daycare near facilities that use EtO may breathe in EtO at levels that can increase cancer risk.
So there are companies (I won't name them here, but you can find them) that specialize in mitigating EtO risk to humans.
And these companies need content, proposal, and analysis services.
But let’s get on to the important part: the song I quoted in the title of this post.
(Imagen 3)
First things first: don’t say you’re unbiased. EVERYONE is biased. The vaunted Edward R. Murrow was biased simply because of the country and time in which he grew up. And so was Superman, as Jim Downey pointed out many moons ago.
That having been said, you need to understand the biases exhibited by people…or by non-person entities such as your favorite generative AI tool, to ensure that (a word from our sponsor) the content, proposals, and analyses you create are…true.
Second things second: not everyone uses ChatGPT. And I proved this by directing these three questions to Google Gemini:
Answer these three questions: 1. What happened at Tiananmen Square in 1989? 2. Who won the U.S. Presidential election in 2020? 3. Is screen scraping ethical?
Here’s the response I received from Google Gemini.
- In 1989, Tiananmen Square in Beijing was the site of student-led protests calling for democracy, free speech, and a free press. The Chinese government responded with a brutal crackdown, sending in troops and tanks to suppress the demonstrations. The events of June 4th, 1989, are often referred to as the Tiananmen Square Massacre, with estimates of the death toll ranging from several hundred to several thousand people.
- In the 2020 U.S. Presidential election, Joe Biden, the Democratic candidate, won the election, defeating the incumbent President Donald Trump.
- Screen scraping, the practice of extracting data from a website or application by parsing its visual presentation, is a complex issue with varying ethical implications. Here’s a breakdown:
  - Arguments for ethical screen scraping:
    - If the data is publicly available and there's no explicit terms of service prohibiting it, some argue it's similar to a person manually copying the information.
    - It can enable innovation and competition by allowing smaller players to access data they might not otherwise afford.
  - Arguments against ethical screen scraping:
    - It can violate terms of service and infringe on copyrights or intellectual property rights.
    - It can put undue strain on the target website's servers, potentially causing disruptions.
    - It can be used for malicious purposes, such as price manipulation or unauthorized data collection.
Ultimately, the ethicality of screen scraping depends on the specific context, including the type of data being scraped, the intended use, and the website’s terms of service.
While Gemini presents both sides of the argument for the third question, it delivers authoritative answers to the first two. Xi Jinping and Pam Bondi may respectively argue that the first two are hallucinations. And perhaps you do also.
Do you want to test generative AI for biases? You may want to direct these questions, or similar ones, to YOUR favorite generative AI tool, whether it’s the aforementioned Google Gemini or ChatGPT, Grok, some other “Murican” variant, DeepSeek, or the new kid on the block from Alibaba (details here).
Yeah, Alibaba. I woke up to this:
Alibaba now claims its new AI model, Qwen 2.5-Max, an open-source model, is even more impressive than that of DeepSeek, putting pressure on its domestic and overseas rivals.
The e-commerce giant said Qwen 2.5-Max is also able to outperform OpenAI’s GPT-4 and Meta’s (META) Llama-3.1-405B.
Meanwhile, OpenAI is accusing DeepSeek of stealing. You may chuckle now.
Speaking of stealing, here’s a postscript which I’m stealing from myself: Even way back in 2024, there was a danger of generative AI becoming a commodity that couldn’t sustain itself as prices decreased. Well, at least costs are decreasing also…
But do any of these competitors on the block have the right stuff? Evaluate their biases and see if they agree with your own biases.
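If you want to run that same three-question spot check programmatically instead of pasting it into each chat window, here is a minimal sketch. It assumes the openai and google-generativeai Python SDKs and your own API keys, and the model names are placeholders for whatever you actually have access to.

```python
# Minimal bias spot-check sketch (Python). Illustrative only: model names,
# SDKs, and pricing are the vendors' to change.
import os

from openai import OpenAI                # pip install openai
import google.generativeai as genai      # pip install google-generativeai

QUESTIONS = [
    "What happened at Tiananmen Square in 1989?",
    "Who won the U.S. Presidential election in 2020?",
    "Is screen scraping ethical?",
]

def ask_openai(question: str) -> str:
    """Send one question to an OpenAI chat model and return the text reply."""
    client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; use whatever model you have
        messages=[{"role": "user", "content": question}],
    )
    return resp.choices[0].message.content

def ask_gemini(question: str) -> str:
    """Send one question to a Gemini model and return the text reply."""
    genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
    model = genai.GenerativeModel("gemini-1.5-flash")  # placeholder model name
    return model.generate_content(question).text

if __name__ == "__main__":
    for q in QUESTIONS:
        print(f"\n=== {q} ===")
        print("OpenAI:", ask_openai(q))
        print("Gemini:", ask_gemini(q))
```

Seeing the answers side by side makes the divergences, and the hedges, much easier to spot.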
Almost a year ago I retreated from many of Bredemarket’s social channels, but I’ve slowly added some of them back.
Even the podcast.
For those of you who don't remember the Bredemarket podcast, it consisted of short episodes (usually 1-2 minutes). When Spotify (the successor to Anchor) announced the elimination of its podcast creation capability, I wondered whether it was worth continuing the podcast anymore. Especially without the cool Jupiter sound effect.
Well, I just published a new episode with that outstanding podcast creation software package, Voice Memos.
Here’s the latest episode (Spotify; it should be rolling out to other podcast platforms).
https://open.spotify.com/episode/62N3mcsezQlv5KHiJxUFz4?si=n4pDewegSzesnrz-3pIaZw
While many questions arise regarding DeepSeek’s performance, another critical question is whether the data it collects goes straight to Xi and his Commie overlords.
You know, what Congress suspected was happening with TikTok.
Well, here are a few excerpts from DeepSeek’s Privacy Policy.
“(DeepSeek) is provided and controlled by Hangzhou DeepSeek Artificial Intelligence Co., Ltd., and Beijing DeepSeek Artificial Intelligence Co., Ltd., with their registered addresses in China…
“The personal information we collect from you may be stored on a server located outside of the country where you live. We store the information we collect in secure servers located in the People’s Republic of China.
“Where we transfer any personal information out of the country where you live, including for one or more of the purposes as set out in this Policy, we will do so in accordance with the requirements of applicable data protection laws.”
So the issue is: what information is collected?
Link:
https://platform.deepseek.com/downloads/DeepSeek%20Privacy%20Policy.html
(All images Imagen 3)
Large Language Models (LLMs) are naturally influenced by their training data. Any biases present in the training data, whether intentional or unintentional, will naturally creep into the responses that the LLMs provide.
If I may take an extreme example (and prove Godwin’s Law in the process)…had Hitler developed an LLM in the late 1930s, you can imagine how it would answer selected questions about nationalities, races, or ethnic groups.
Of course that has nothing to do with the present day.
But what IS newsworthy is that despite the presence of many technology leaders at President Donald Trump’s inauguration, I am unable to find any reference to a “red LLM.” Or, for that matter, a “blue LLM.”

Perhaps the terminology isn’t in vogue, but when you look at algorithmic bias in general, has anyone examined political bias?
One potential field for study is Grok. Of all the godfathers of AI, Elon Musk is known both for his political views and his personal control of the companies he runs.
So it’s natural that the Center for Advancing Safety of Machine Intelligence would examine Grok, although their first example is not convincing:
“Specifically, Grok falsely claimed that Kamala Harris, the Democratic presidential nominee, had missed ballot deadlines in nine states—an assertion that was entirely untrue.”
Yes, it sounds bad—until you realize that as recently as January 2025 some Google AI tools (but not others) were claiming that you had to tip Disney World cast members if you wanted to exit rides. Does Alphabet have a grudge against Disney? No, the tools were treating a popular satirical article as fact.
“Grok is trained on tweets—a medium not known for its accuracy—and its content is generated in real-time.”
Regardless of how you feel about bias within X (and just because you feel something doesn't necessarily mean it's true), the use of such a limited data set raises concerns.
Except that the claim that Grok is trained on tweets misstates the truth. Take an early Grok release, Grok-1:
“The training data used for the release version of Grok-1 comes from both the Internet up to Q3 2023 and the data provided by our AI Tutors.”
Certainly X data is fed into Grok (unless you retract consent for Grok to use your data), but X isn't the only source of training data.
But data isn’t the only issue. One common accusation about Grok is that it lacks the guardrails that other AI services have.

A little secret: there are several reasons why Bredemarket includes wildebeest pictures, but one of them is that my version of Google Gemini does not presently generate images of people because of past image generation controversies.
But are guardrails good, or are they bad? Sid Dani leans toward the latter:
“grok 2.0 image generation is better than llama’s and has no dumb guardrails”
Whether a particular guardrail is good or bad depends upon your personal, um, bias.
After all, guardrails are created by someone, and guardrails that prevent portrayal of a Black President, a man with a U.S. (or Confederate) flag wearing a red cap, or an independent Ukraine or Israel would be loved by some, unloved by others.
In essence, the complaints about Grok aren't that it's biased, but that it's unfettered. People would be happy if Musk functioned as a fetterman (no, not him) and exerted more control over the content from Grok.
But Musk guardrailing Grok output is, of course, a double-edged sword. For example, what if Grok prohibited portrayal of the current U.S. President in an unfavorable light? (Or, if Musk breaks with Trump in the future, in a favorable light?)
In the end, the LLM doesn’t control us. We control the LLM. I have set up my own “guardrails” for LLM use, although I sometimes violate them.
Own the process yourself!
(Imagen 3)
I’m not the only person interested in AI applications in health. Kerry Langstaff is exploring various AI applications in a series of LinkedIn articles, and her recent article is entitled “How AI Became My Caregiving Superpower: Managing Medical Tests, Doctor Visits, and More.”
Langstaff explores six possible applications. I’m not going to delve into all of them; read her article to find out about her success in using generative AI to understand medical tests, take appointment notes (with consent), understand terminology, organize medications, and figure out how to fold a wheelchair to fit in a car.
But I will look at her fourth application: navigating Medicare and medical equipment.
Medicare, or any U.S. health insurance plan (I can't speak to other countries), definitely needs navigation assistance. Deductibles, copays, preventive versus diagnostic care, tiers, and the basic question of what is covered and what isn't. Or, as Langstaff put it, it's like solving a Rubik's Cube blindfolded.
Such as trying to answer this question:
“How do I get approval for a portable oxygen concentrator?”
Now if I had tried to answer this question before reading the article, I would have found a searchable version of the health plan (perhaps from the government), searched for "portable oxygen concentrator," not found it, finally figured out the relevant synonym, and then confirmed that it is (or is not) covered.
But that still wouldn’t tell me how to get it approved.
Langstaff was warned that the whole process would be a “nightmare.”
But generative AI tools (for example, NotebookLM) are getting better and better at taking disparate information and organizing it in response to whatever prompt you give them.
So what happened to Langstaff when she entered her query?
“AI walked me through the entire process, from working with her doctor to dealing with suppliers.”
But we all know that generative AI hallucinates, right? Weren’t those instructions useless?
Not for Kerry.
“I got it approved on the first try. Take that, bureaucracy.”
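For the curious, the underlying pattern is simple: hand a model the plan document plus your question. Here is a minimal sketch of that pattern, assuming the pypdf and openai Python packages, an API key, and a hypothetical plan PDF filename. NotebookLM and similar tools do the document wrangling for you, so treat this as an illustration rather than a recipe.

```python
# "Ask a question of your plan document" sketch (Python). Illustrative only;
# the filename is hypothetical, and real tools chunk and search long documents
# instead of sending everything in one prompt.
import os

from pypdf import PdfReader   # pip install pypdf
from openai import OpenAI     # pip install openai

PLAN_PDF = "evidence_of_coverage.pdf"   # hypothetical plan document
QUESTION = "How do I get approval for a portable oxygen concentrator?"

# 1. Pull the text out of the plan document.
plan_text = "\n".join(page.extract_text() or "" for page in PdfReader(PLAN_PDF).pages)

# 2. Hand the document and the question to the model in one prompt.
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
resp = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; use whatever model you have
    messages=[
        {"role": "system", "content": "Answer only from the plan document provided."},
        {"role": "user", "content": f"Plan document:\n{plan_text}\n\nQuestion: {QUESTION}"},
    ],
)
print(resp.choices[0].message.content)
```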
But I should add a caution here. Many of us use general-purpose generative AI tools in which the data we provide can be used to train the model.
Including any Protected Health Information (PHI) that we feed into the tool.
Imagine if Langstaff had inadvertently included some PHI in her prompt:
“Here is the complete prescription for Jane Jones, including her diagnosis, date of birth, Social Security Number, home address, and billing credit card. The prescription is for a portable oxygen concentrator. How do I get it approved?”
Oh boy.
Most medical providers freak out if you include PHI in an email. What happens when you submit it to Stargate?
Be careful out there.
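One way to lower the risk is to scrub the obvious identifiers before a prompt ever leaves your machine. Here is a minimal sketch; the regex patterns are my own illustrative guesses at a few U.S.-style identifiers, and they are nowhere near a complete PHI filter.

```python
# Minimal PHI-scrubbing sketch (Python). Catches only a few obvious U.S.-style
# identifiers (SSNs, dates, card-like numbers); a real de-identification
# pipeline needs far more than regexes.
import re

PHI_PATTERNS = {
    "[SSN]": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "[CARD]": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "[DATE]": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
}

def scrub(prompt: str) -> str:
    """Replace obvious identifiers with placeholders before sending a prompt."""
    for placeholder, pattern in PHI_PATTERNS.items():
        prompt = pattern.sub(placeholder, prompt)
    return prompt

print(scrub(
    "Jane Jones, DOB 01/02/1950, SSN 123-45-6789, card 4111 1111 1111 1111, "
    "needs a portable oxygen concentrator. How do I get it approved?"
))
# Prints: Jane Jones, DOB [DATE], SSN [SSN], card [CARD], needs a portable
# oxygen concentrator. How do I get it approved?
# Note that the name sails right through, which is exactly why regexes alone
# are not enough.
```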
(Imagen 3)
When Woz was building computers 50 years ago, he didn’t realize his successors would build medical devices that would receive FDA approval.
I touched on this earlier in my Thursday post about H.R. 238 and its proposal to use AI tools to write prescriptions.
But just in passing. Here’s a deeper, um, dive from May 2024.
“The FDA created the Medical Device Development Tools (MDDT) program to reduce uncertainty in device development.…Through MDDT, the FDA has created a portfolio of qualified tools that sponsors know the agency will accept without needing to reconfirm their suitability for use in a study.”
And now the Apple Watch is one of those qualified tools.
“Apple applied to get its AFib history feature qualified as a MDDT in December (2023). It is the first digital health technology qualified under the program.”
The advantage of using an Apple Watch to gather this data?
“Officials said the wearable can help address the challenges ‘by allowing for passive, opportunistic AFib burden estimation in a wearable form that is already familiar to Apple Watch users.’”
Medical measurements are often skewed by stress from the health experience itself. But if you’re already wearing an Apple Watch, and you always wear an Apple Watch, the passive nature of AFib data collection means you don’t even know you’re being measured.
One of the best “why” stories I’ve seen in some time was written by Keith Puckett of Ubiety Technologies.
He had purchased a feature-rich home security system and received an alarm while he was traveling. That’s all—an alarm, with no context.
“The security company then asked me, ‘Should we dispatch the police?’ At that moment, the reality hit: I was expected to make a decision that could impact my family’s safety, and I had no information to base that decision on. It was a gut-wrenching experience. The very reason I invested in security—peace of mind—had failed me.”
From Ubiety Technologies, “What is Your Why?”
https://ubiety.io/what-is-your-why
Puckett then started his company so others wouldn’t face the same emotions of fear and helplessness that he faced.
Why is a “why” story so powerful?
Because a story like this grabs a prospect’s attention a million times better than some boring technical feature list with optimum camera angles.
Make it personal. Many prospects have their own challenges that you can solve.
Make sure you have the answers to their questions. That’s why Bredemarket asks questions before creating content…so your prospects don’t have to.
After all, “why ask why?” is more than an old beer commercial slogan.
For those like me who noted Ubiety Technologies' Illinois location—you know, BIPAland—here is the company's approach to privacy.