Unlike my other Bredemarket blog posts, this one contains exactly zero images.
For a reason.
My most recent client uses Google Workspace, and I was in the client’s system performing some research for a piece of content I’m writing.
I was using Gemini for the research, and noticed that the implementation was labeled “Gemini Advanced.”
How advanced, I wondered. Bredemarket has a plain old regular version of Gemini with my Google Workspace, so I wanted to see whether Gemini Advanced could do one particular thing that mine can't.
So I entered one of my "draw a realistic picture" prompts, but did not specify that the entity in the picture had to be a wildebeest or iguana.
I entered my prompt…
…and received a picture that included…
…A PERSON.
(This is the part of the blog post where I should display the image, but the image belongs to my client so I can’t.)
In case you don't know the history of why Google Gemini images of people are hard to get, it's because of a brouhaha that erupted in 2024 after Google Gemini made some interesting choices in generating its images of people.
When prompted by CNN on Wednesday to generate an image of a pope, for example, Gemini produced an image of a man and a woman, neither of whom were White. Tech site The Verge also reported that the tool produced images of people of color in response to a prompt to generate images of a "1943 German Soldier."
I mean, when are we ever going to encounter a black Nazi?
Google initially stopped generating images of people altogether, but a few months later, in August 2024, it rolled out Imagen 3. As part of this rollout, certain people were granted the privilege of generating images of people again.
Over the coming days, we’ll also start to roll out the generation of images of people, with an early access version for our Gemini Advanced, Business, and Enterprise users, starting in English….We don’t support the generation of photorealistic, identifiable individuals, depictions of minors or excessively gory, violent or sexual scenes.
Not sure whether Gemini Advanced users can generate images of black Popes, black Nazis, non-binary people, or (within the United States) the Gulf of Mexico.
Artificial intelligence is hard.
Incidentally, I have never tried to test guardrail-less Grok to see if it can generate images of black Nazis. And I don’t plan to.