(All images Imagen 3)
Large Language Models (LLMs) are shaped by their training data. Any biases present in that data, whether intentional or unintentional, will naturally creep into the responses that the LLMs provide.
If I may take an extreme example (and prove Godwin’s Law in the process)…had Hitler developed an LLM in the late 1930s, you can imagine how it would answer selected questions about nationalities, races, or ethnic groups.
Of course that has nothing to do with the present day.
Red LLM, blue LLM?
But what IS noteworthy is that despite the presence of many technology leaders at President Donald Trump’s inauguration, I am unable to find any reference to a “red LLM.” Or, for that matter, a “blue LLM.”

Perhaps the terminology isn’t in vogue, but when you look at algorithmic bias in general, has anyone examined political bias?
Grok and bias
One potential subject for study is Grok. Among the prominent figures in AI, Elon Musk is known both for his political views and for his personal control of the companies he runs.
So it’s natural that the Center for Advancing Safety of Machine Intelligence would examine Grok, although its first example is not convincing:
“Specifically, Grok falsely claimed that Kamala Harris, the Democratic presidential nominee, had missed ballot deadlines in nine states—an assertion that was entirely untrue.”
Yes, it sounds bad, until you realize that as recently as January 2025 some Google AI tools (but not others) were claiming that you had to tip Disney World cast members if you wanted to exit rides. Does Alphabet have a grudge against Disney? No, the tools were treating a popular satirical article as fact.
What data does Grok use?
“Grok is trained on tweets—a medium not known for its accuracy—and its content is generated in real-time.”
Regardless of how you feel about bias within X (and just because you feel something doesn’t necessarily mean it’s true), the use of such a limited data set raises concerns.
Except that the claim that Grok is trained on tweets doesn’t tell the whole story. Take an early Grok release, Grok-1:
“The training data used for the release version of Grok-1 comes from both the Internet up to Q3 2023 and the data provided by our AI Tutors.”
Certainly X data is fed into Grok (unless you retract consent for Grok to use your data), but X isn’t the only training data that is used.
Grok and guardrails
But data isn’t the only issue. One common accusation about Grok is that it lacks the guardrails that other AI services have.

A little secret: there are several reasons why Bredemarket includes wildebeest pictures, and one of them is that my version of Google Gemini does not presently generate images of people because of past image generation controversies.
But are guardrails good, or are they bad? Sid Dani leans toward the latter:
“grok 2.0 image generation is better than llama’s and has no dumb guardrails”
Whether a particular guardrail is good or bad depends upon your personal, um, bias.
After all, guardrails are created by someone, and a guardrail that prevents the portrayal of a Black President, of a red-capped man carrying a U.S. (or Confederate) flag, or of an independent Ukraine or Israel would be loved by some and unloved by others.
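What does a guardrail look like in practice? Here is a minimal sketch, in Python, of how an output guardrail might work: a plain blocklist check applied to a generation request before anything is returned. This is purely illustrative and is not Grok’s, Gemini’s, or any other vendor’s actual code; the BLOCKED_TOPICS categories, the guardrail_check function, and the toy classifier are all hypothetical. The point is that the blocklist itself is a human editorial choice.

# Minimal, hypothetical sketch of an output guardrail: a blocklist check
# applied to a generation request before a response is returned.
# Not any vendor's actual implementation.

BLOCKED_TOPICS = {
    "depictions of real political figures",
    "national flags in partisan contexts",
}  # hypothetical categories; whoever maintains this set sets the bias

def guardrail_check(prompt: str, classify) -> str:
    """Return 'allow' or 'block' for a generation request.

    `classify` is an assumed helper that maps a prompt to a topic label;
    a real system would use a moderation model, not string matching.
    """
    topic = classify(prompt)
    return "block" if topic in BLOCKED_TOPICS else "allow"

if __name__ == "__main__":
    # Toy classifier, for illustration only.
    def toy_classify(prompt: str) -> str:
        if "president" in prompt.lower():
            return "depictions of real political figures"
        return "other"

    print(guardrail_check("Draw the President fishing", toy_classify))  # block
    print(guardrail_check("Draw a wildebeest fishing", toy_classify))   # allow

Change one entry in BLOCKED_TOPICS and the “bias” of the guardrail changes with it; that editorial control is the whole debate.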
In essence, the complaints about Grok aren’t that it’s biased, but that it’s unfettered. People would be happy if Musk functioned as a fetterman (no, not him) and exerted more control over the content from Grok.
But Musk guardrailing Grok output is, of course, a double-edged sword. For example, what if Grok prohibited portrayal of the current U.S. President in an unfavorable light? (Or, if Musk breaks with Trump in the future, in a favorable light?)
It doesn’t matter!
In the end, the LLM doesn’t control us. We control the LLM. I have set up my own “guardrails” for LLM use, although I sometimes violate them.
Own the process yourself!
