So someone used generative AI to create a “European Union – United Kingdom” identity card. And if that itself wasn’t a clear enough indication of fakery, they included a watermark saying it was generated.
So I tried something similar.
But Google Gemini blocked my attempt.
“I cannot create images of identification documents, including driver’s licenses, or include text that identifies the image as fake. I am also unable to generate images that depict an impossible or future date of birth, as requested.”
As did Grok.
“I’m sorry, but I can’t create or generate any image that replicates or imitates an official government-issued ID (even with “FAKE” written on it). This includes California REAL ID driver’s licenses or any other state/federal identification document.”
While the deepfake video generators that fraudsters use can be persuasive, the 6-second videos created by the free version of Grok haven’t reached that level of fakery. Yet.
In my experience, Grok is better at re-creating well-known people with more distinctive appearances. Good at Gene Simmons and Taylor Swift. Bad at Ace Frehley and Gerald Ford.
So I present…Willie Nelson.
Grok.
Willie with two turntables and a microphone, and one of his buds watching.
If you thought “Stardust” was odd for him, listen to this.
Once Grok created the video, I customized it to have Willie rap about bud.
Unfortunately, or perhaps fortunately, it doesn’t sound like the real Willie.
And for the, um, record, Nelson appeared in Snoop’s “My Medicine” video.
As an added bonus, here’s Grok’s version of Cher, without audio customization. It doesn’t make me believe…
Grok.
Reminder to marketing leaders: if you need Bredemarket’s content-proposal-analysis help, book a meeting at https://bredemarket.com/mark/
As some of you know, my generative AI tool of choice has been Google Gemini, which incorporates guardrails against portraying celebrities. Grok has fewer guardrails.
My main purpose in creating the two Bill and Hillary Clinton videos (at the beginning of this compilation reel) was to see how Grok would handle references to copyrighted music. I didn’t expect to hear actual songs, but would Grok try to approximate the sounds of Lindsey-Stevie-Christine era Mac and the Sex Pistols? You be the judge.
And as for Prince and Johnny…you be the judge of that also.
Most of my recent generative GI experiments have centered on Google Gemini…which has its limitations:
“Google Gemini imposes severe restrictions against creating pictures of famous figures. You can’t create a picture of President Taylor Swift, for example.”
Why does Google impose such limits? Because it is very sensitive to misleading the public, fearful that the average person would see such a picture and mistakenly assume that Taylor Swift IS the President. In our litigious society, perhaps this is valid.
“One common accusation about Grok is that it lacks the guardrails that other AI services have.”
During a few spare moments this morning, I signed up for a Bredemarket Grok account. I have a personal X (Twitter) account, but haven’t used it in a long time, so this was a fresh sign up.,
And you know the first thing that I tried to do.
Grok.
Grok created it with no problem. Actually, there is a problem, because Grok apparently is not a large multimodal model and cannot precisely generate text in its image generator. But hey, no one will notice “TWIRSHIITE BOUSE,” will they?
But wait, there’s more! After I generated the image, I saw a button to generate a video. I thought that this required the paid service, but apparently the free service allows limited video generation.
Grok.
I may be conducting some video experiments some time soon. But will I maintain my ethics…and my sanity?
Inspired by the Constant Contact session I attended at the Small Business Expo, I wanted to conceptualize the Bredemarket online presence, and decided to adopt a “planet with rings” model.
Think of Bredemarket as a planet. Like Saturn, Uranus, Neptune, and Jupiter, the planet Bredemarket is surrounded by rings.
A variety of social platforms, including Bluesky, Instagram, Substack, and Threads.
Additional social platforms, including TikTok, WhatsApp, and YouTube.
While this conceptualization is really only useful to me, I thought a few of you may be interested in some of the “inner rings.”
And if you’re wondering why your favorite way cool platform is banished to the outer edges…well, that’s because it doesn’t make Bredemarket any money. I’ve got a business to run here, and TikTok doesn’t help me pay the bills…
My Google Gemini account does not include access to Google’s new video generation tool Veo 3. But I’m learning about its capabilities from sources such as TIME magazine.
“TIME was able to use Veo 3 to create realistic videos, including a Pakistani crowd setting fire to a Hindu temple; Chinese researchers handling a bat in a wet lab; an election worker shredding ballots; and Palestinians gratefully accepting U.S. aid in Gaza. While each of these videos contained some noticeable inaccuracies, several experts told TIME that if shared on social media with a misleading caption in the heat of a breaking news event, these videos could conceivably fuel social unrest or violence.”
However, TIME notes that the ability to create fake videos has existed for years. So why worry now?
“Veo 3 videos can include dialogue, soundtracks and sound effects. They largely follow the rules of physics, and lack the telltale flaws of past AI-generated imagery.”
Then again, some of the Veo 3 deepfakes look pretty good. Take this example of Will Smith slapping down some pasta at Eminem’s restaurant. The first part of the short was generated with old technology, the last part with Veo 3.
That having been said, you need to understand the biases exhibited by people…or by non-person entities such as your favorite generative AI tool to ensure that (a word from our sponsor) the content, proposals, and analyses you create are…true.
Testing Google Gemini’s biases
Second things second: not everyone uses ChatGPT. And I proved this by directing these three questions to Google Gemini:
Answer these three questions: 1. What happened at Tiananmen Square in 1989? 2. Who won the U.S. Presidential election in 2020? 3. Is screen scraping ethical?
Here’s the response I received from Google Gemini.
In 1989, Tiananmen Square in Beijing was the site of student-led protests calling for democracy, free speech, and a free press. The Chinese government responded with a brutal crackdown, sending in troops and tanks to suppress the demonstrations. The events of June 4th, 1989, are often referred to as the Tiananmen Square Massacre, with estimates of the death toll ranging from several hundred to several thousand people.
In the 2020 U.S. Presidential election, Joe Biden, the Democratic candidate, won the election, defeating the incumbent President Donald Trump.
Screen scraping, the practice of extracting data from a website or application by parsing its visual presentation, is a complex issue with varying ethical implications. Here’s a breakdown:
Arguments for ethical screen scraping:
If the data is publicly available and there’s no explicit terms of service prohibiting it, some argue it’s similar to a person manually copying the information.
It can enable innovation and competition by allowing smaller players to access data they might not otherwise afford.
Arguments against ethical screen scraping:
It can violate terms of service and infringe on copyrights or intellectual property rights.
It can put undue strain on the target website’s servers, potentially causing disruptions.
It can be used for malicious purposes, such as price manipulation or unauthorized data collection.
Ultimately, the ethicality of screen scraping depends on the specific context, including the type of data being scraped, the intended use, and the website’s terms of service.
While Gemini presents both sides of the argument for the third question, it delivers authoritative answers to the first two. Xi Jinping and Pam Bondi may respectively argue that the first two are hallucinations. And perhaps you do also.
Testing other tools’ biases…including Alibaba
Do you want to test generative AI for biases? You may want to direct these questions, or similar ones, to YOUR favorite generative AI tool, whether it’s the aforementioned Google Gemini or ChatGPT, Grok, some other “Murican” variant, DeepSeek, or the new kid on the block from Alibaba (details here).
Yeah, Alibaba. I woke up to this:
Alibaba now claims its new AI model, Qwen 2.5-Max, an open-source, is even more impressive than that of DeepSeek, putting pressure on its domestic and overseas rivals.
The e-commerce giant said Qwen 2.5-Max is also able to outperform OpenAI’s GPT-4 and Meta’s (META) Llama-3.1-405B.
Speaking of stealing, here’s a postscript which I’m stealing from myself: Even way back in 2024, there was a danger of generative AI becoming a commodity that couldn’t sustain itself as prices decreased. Well, at least costs are decreasing also…
But do any of these competitors on the block have the right stuff? Evaluate their biases and see if they agree with your own biases.
Large Language Models (LLMs) are naturally influenced by their training data. Any biases present in the training data, whether intentional or unintentional, will naturally creep into the responses that the LLMs provide.
If I may take an extreme example (and prove Godwin’s Law in the process)…had Hitler developed an LLM in the late 1930s, you can imagine how it would answer selected questions about nationalities, races, or ethnic groups.
Of course that has nothing to do with the present day.
Red LLM, blue LLM?
But what IS newsworthy is that despite the presence of many technology leaders at President Donald Trump’s inauguration, I am unable to find any reference to a “red LLM.” Or, for that matter, a “blue LLM.”
Take the red LLM or the blue LLM.
Perhaps the terminology isn’t in vogue, but when you look at algorithmic bias in general, has anyone examined political bias?
Grok and bias
One potential field for study is Grok. Of all the godfathers of AI, Elon Musk is known both for his political views and his personal control of the companies he runs.
“Specifically, Grok falsely claimed that Kamala Harris, the Democratic presidential nominee, had missed ballot deadlines in nine states—an assertion that was entirely untrue.”
Yes, it sounds bad—until you realize that as recently as January 2025 some Google AI tools (but not others) were claiming that you had to tip Disney World cast members if you want to exit rides. Does Alphabet have a grudge against Disney? No, the tools were treating a popular satirical article as fact.
What data does Grok use?
“Grok is trained on tweets—a medium not known for its accuracy—and its content is generated in real-time.”
Regardless of how you feel about bias within X—and just because you feel about something doesn’t necessarily mean it’s true—the use of such a limited data set raises concerns.
But data isn’t the only issue. One common accusation about Grok is that it lacks the guardrails that other AI services have.
No guardrails.
A little secret: there are several reasons why Bredemarket includes wildebeest pictures, but one of them is that my version of Google Gemini does not presently generate images of people because of past image generation controversies.
“grok 2.0 image generation is better than llama’s and has no dumb guardrails”
Whether a particular guardrail is good or bad depends upon your personal, um, bias.
After all, guardrails are created by someone, and guardrails that prevent portrayal of a Black President, a man with a U.S. (or Confederate) flag wearing a red cap, or an independent Ukraine or Israel would be loved by some, unloved by others.
In essence, the complaints about Grok aren’t that they’re biased, but that they’re unfettered. People would be happy if Musk functioned as a fetterman (no, not him) and exerted more control over the content from Grok.
But Musk guardrailing Grok output is, of course, a double-edged sword. For example, what if Grok prohibited portrayal of the current U.S. President in an unfavorable light? (Or, if Musk breaks with Trump in the future, in a favorable light?)