While many questions arise regarding DeepSeek’s performance, another critical question is whether the data it collects goes straight to Xi and his Commie overlords.
You know, what Congress suspected was happening with TikTok.
Well, here are a few excerpts from DeepSeek’s Privacy Policy.
“(DeepSeek) is provided and controlled by Hangzhou DeepSeek Artificial Intelligence Co., Ltd., and Beijing DeepSeek Artificial Intelligence Co., Ltd., with their registered addresses in China…
“The personal information we collect from you may be stored on a server located outside of the country where you live. We store the information we collect in secure servers located in the People’s Republic of China.
“Where we transfer any personal information out of the country where you live, including for one or more of the purposes as set out in this Policy, we will do so in accordance with the requirements of applicable data protection laws.”
Large Language Models (LLMs) are naturally influenced by their training data. Any biases present in the training data, whether intentional or unintentional, will naturally creep into the responses that the LLMs provide.
If I may take an extreme example (and prove Godwin’s Law in the process)…had Hitler developed an LLM in the late 1930s, you can imagine how it would answer selected questions about nationalities, races, or ethnic groups.
Of course that has nothing to do with the present day.
Red LLM, blue LLM?
But what IS newsworthy is that despite the presence of many technology leaders at President Donald Trump’s inauguration, I am unable to find any reference to a “red LLM.” Or, for that matter, a “blue LLM.”
Perhaps the terminology isn’t in vogue, but when you look at algorithmic bias in general, has anyone examined political bias?
Grok and bias
One potential field for study is Grok. Among AI leaders, Elon Musk is known both for his political views and for his personal control of the companies he runs.
“Specifically, Grok falsely claimed that Kamala Harris, the Democratic presidential nominee, had missed ballot deadlines in nine states—an assertion that was entirely untrue.”
Yes, it sounds bad—until you realize that as recently as January 2025 some Google AI tools (but not others) were claiming that you had to tip Disney World cast members if you wanted to exit rides. Does Alphabet have a grudge against Disney? No, the tools were treating a popular satirical article as fact.
What data does Grok use?
“Grok is trained on tweets—a medium not known for its accuracy—and its content is generated in real-time.”
Regardless of how you feel about bias within X (and just because you feel something doesn't necessarily make it true), the use of such a limited data set raises concerns.
But data isn’t the only issue. One common accusation about Grok is that it lacks the guardrails that other AI services have.
No guardrails.
A little secret: there are several reasons why Bredemarket includes wildebeest pictures, but one of them is that my version of Google Gemini does not presently generate images of people because of past image generation controversies.
“grok 2.0 image generation is better than llama’s and has no dumb guardrails”
Whether a particular guardrail is good or bad depends upon your personal, um, bias.
After all, guardrails are created by someone, and guardrails that prevent portrayal of a Black President, a man with a U.S. (or Confederate) flag wearing a red cap, or an independent Ukraine or Israel would be loved by some, unloved by others.
In essence, the complaints about Grok aren't that it's biased, but that it's unfettered. People would be happy if Musk functioned as a fetterman (no, not him) and exerted more control over Grok's output.
But Musk guardrailing Grok output is, of course, a double-edged sword. For example, what if Grok prohibited portrayal of the current U.S. President in an unfavorable light? (Or, if Musk breaks with Trump in the future, in a favorable light?)
Langstaff explores six possible applications. I’m not going to delve into all of them; read her article to find out about her success in using generative AI to understand medical tests, take appointment notes (with consent), understand terminology, organize medications, and figure out how to fold a wheelchair to fit in a car.
Understanding a health insurance plan
But I will look at her fourth application: navigating Medicare and medical equipment.
Medicare, or any U.S. health insurance plan (I can't speak to other countries), definitely needs navigation assistance. Deductibles, copays, preventive versus diagnostic care, drug tiers, or the basic question of what is covered and what isn't. Or, as Langstaff put it, it's like solving a Rubik's Cube blindfolded.
Such as trying to answer this question:
“How do I get approval for a portable oxygen concentrator?”
The old way
Now if I had tried to answer this question before reading the article, I would have found a searchable version of the health plan (perhaps from the government), searched for “portable oxygen concentrator,” not found it, finally figured out the relevant synonym, and then confirmed that it is (or is not) covered.
But that still wouldn’t tell me how to get it approved.
Langstaff was warned that the whole process would be a “nightmare.”
The new way
But generative AI tools (for example, NotebookLM) are getting better and better at taking disparate information and organizing it in response to whatever prompt you give them.
So what happened to Langstaff when she entered her query?
“AI walked me through the entire process, from working with her doctor to dealing with suppliers.”
But we all know that generative AI hallucinates, right? Weren’t those instructions useless?
Not for Kerry.
“I got it approved on the first try. Take that, bureaucracy.”
But wait
But I should add a caution here. Many of us use general-purpose generative AI tools, in which all the data we provide may be used to train the models.
Imagine if Langstaff had inadvertently included some PHI in her prompt:
“Here is the complete prescription for Jane Jones, including her diagnosis, date of birth, Social Security Number, home address, and billing credit card. The prescription is for a portable oxygen concentrator. How do I get it approved?”
Oh boy.
Most medical providers freak out if you include PHI in an email. What happens when you submit it to Stargate?
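If you do paste prescriptions or plan documents into a general-purpose tool, it's worth scrubbing the obvious identifiers first. Here is a minimal sketch of the idea; the `scrub` helper and its patterns are my own illustration (a real PHI de-identification tool, such as one aligned with the HIPAA Safe Harbor list, covers far more than three fields):

```python
import re

# Hypothetical patterns for a few obvious identifiers. A real PHI scrubber
# must handle names, addresses, and many more identifier types.
PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CREDIT_CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "DOB": re.compile(r"\b\d{1,2}/\d{1,2}/\d{4}\b"),
}

def scrub(prompt: str) -> str:
    """Replace matched identifiers with labeled placeholders."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

prompt = ("Prescription for Jane Jones, DOB 4/12/1953, SSN 123-45-6789. "
          "How do I get a portable oxygen concentrator approved?")
print(scrub(prompt))
```

The point isn't this particular regex list; it's that the redaction happens on your machine, before anything reaches the AI service.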
“The FDA created the Medical Device Development Tools (MDDT) program to reduce uncertainty in device development.…Through MDDT, the FDA has created a portfolio of qualified tools that sponsors know the agency will accept without needing to reconfirm their suitability for use in a study.”
And now the Apple Watch is one of those qualified tools.
“Apple applied to get its AFib history feature qualified as a MDDT in December (2023). It is the first digital health technology qualified under the program.”
The advantage of using an Apple Watch to gather this data?
“Officials said the wearable can help address the challenges ‘by allowing for passive, opportunistic AFib burden estimation in a wearable form that is already familiar to Apple Watch users.’”
Medical measurements are often skewed by stress from the health experience itself. But if you’re already wearing an Apple Watch, and you always wear an Apple Watch, the passive nature of AFib data collection means you don’t even know you’re being measured.
He had purchased a feature-rich home security system and received an alarm while he was traveling. That’s all—an alarm, with no context.
“The security company then asked me, ‘Should we dispatch the police?’ At that moment, the reality hit: I was expected to make a decision that could impact my family’s safety, and I had no information to base that decision on. It was a gut-wrenching experience. The very reason I invested in security—peace of mind—had failed me.”
On Threads, Dr. Jen Gunter called our attention to the newly-introduced H.R. 238, “To amend the Federal Food, Drug, and Cosmetic Act to clarify that artificial intelligence and machine learning technologies can qualify as a practitioner eligible to prescribe drugs if authorized by the State involved and approved, cleared, or authorized by the Food and Drug Administration, and for other purposes.”
Presumably these non-person entities would not be your run-of-the-mill consumer generative AI packages, but rather specially trained Large Medical Models (LMMs).
Even so, don't count on this becoming law in the next two years. For one thing, Rep. David Schweikert introduced a similar bill in 2023 that never made it out of committee.
“How do I make sure we’re embracing technology and using it to bring disruptive cures to market, or other opportunities to market?” Schweikert asked. “And does that also now help lower drug pricing?”
Before you reject this idea entirely, Rep. Schweikert cited one example of technology decision-making:
“Schweikert noted that the FDA last month approved Apple Watch’s atrial fibrillation feature for use in clinical trials — the first such digital health tool approved for inclusion in the agency’s Medical Device Development Tools program.”
But before anything like this will ever happen with prescriptions, the FDA will insist on extremely rigorous testing, including double-blind tests in which some prescriptions are written by currently-authorized medical professionals, while other prescriptions are written by LMMs.
And even when the ethical questions surrounding this are overcome, this won’t happen overnight.
From the early 1990s to 2019, the majority of my identity/biometric proposal work was with U.S. state and local agencies, with some work with foreign agencies (such as Canada’s RCMP), private entities, and a few proposals to U.S. federal agencies.
I had no idea what was going to happen in 2020, and one of the surprises is that the majority of my identity/biometric proposal work since 2020 has been with U.S. federal agencies: many requests for information (RFIs), as well as other responses.
I’ve worked on client proposals (and Bredemarket’s own responses) to the Departments of Defense, Homeland Security, Justice, and perhaps some others along the way.
And no, there’s no uniformity
Same department, different requirements.
Coincidentally, the two most recent identity/biometric proposals I managed for Bredemarket clients went to the same government department. But that’s where the similarities ended.
The first required an e-mail submission of a PDF (10 pages maximum) to two email addresses. A relative piece of cake.
Mmm…cake. Always reward your proposal people.
The second required an online submission. No, not a simple upload of a PDF to a government website. While my client did have to upload two PDFs, the majority of the submission required my client to complete a bunch of online screens.
And there were two separate sets of instructions regarding how to complete these online screens…which contradicted each other. So I had to ask a clarification question…and you know how THAT can go.
Oh, and as the consulting proposal expert, I could not complete the online screens on behalf of the client. The client’s company had a single login, which was assigned to a single person (a company executive) and could NOT be used by anybody else.
So on the day of proposal submission the executive and I videoconferenced, and I watched as the executive completed the screens, in part using a document in which I had drafted responses.
And of course things were not perfect. The executive pasted one of my responses into the space provided, and only THEN did we discover that the response had an unadvertised character limit. So I rewrote it…at the same time that I resized a required image with unadvertised dimension restrictions.
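Once a portal's limits are known (or discovered the hard way), a pre-submission check can catch oversized drafts before the videoconference. A minimal sketch, where the field names and limits are hypothetical stand-ins since each portal publishes (or hides) its own:

```python
# Hypothetical character limits per online form field. Real portals may
# not advertise these until you paste the text in and hit the wall.
LIMITS = {
    "past_performance": 2000,
    "technical_approach": 4000,
}

drafts = {
    # Repeated to 2,400 characters, deliberately over the 2,000 limit.
    "past_performance": "Our team has delivered secure, scalable identity solutions. " * 40,
    "technical_approach": "We will follow the instructions and answer the questions.",
}

def check_drafts(drafts: dict, limits: dict) -> list:
    """Return a warning for every draft that exceeds its field's limit."""
    warnings = []
    for field, text in drafts.items():
        limit = limits.get(field)
        if limit is not None and len(text) > limit:
            warnings.append(f"{field}: {len(text)} chars exceeds limit of {limit}")
    return warnings

for warning in check_drafts(drafts, LIMITS):
    print(warning)
```

Trimming a response ahead of time beats rewriting it live while an executive watches.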
But there’s some uniformity
Perhaps if I had written more federal proposals at Printrak, Motorola, MorphoTrak, IDEMIA, and Incode, I would have known these things. Perhaps not; as late as 2014 I was still printing proposals on paper and submitting 10 or more volumes of binders (yes, binders) along with CDs that had to be virus-checked.
Some Requests for Proposal (RFPs) provide helpful checklists.
But regardless of whether you submit proposals online, via CD, or in paper volumes, some things remain constant.
Follow the instructions.
Answer the questions.
Emphasize the benefits.
And don’t misspell the name of the Contracting Officer.