I participate in several public and private AI communities, and one fun exercise is to take another creator’s image generation prompt, run it yourself (using the same AI tool or a different one), and see what happens. Usually the outputs diverge, but certain tools can yield remarkably similar results, for explicable reasons.
On Saturday morning, in a private community, Zayne Harbison shared his Nano Banana prompt (which I cannot share here) and the resulting output. So I ran his prompt in Nano Banana and in other tools, including Microsoft Copilot and OpenAI ChatGPT.
The outputs from those two generative AI engines were remarkably similar.


Not surprising, given the history of Microsoft and OpenAI. (It got more tangled later.)
But Harbison’s prompt was relatively simple. What if I provided a much more detailed prompt to both engines?
Create a realistic photograph of a coworking space in San Francisco in which coffee and hash brownies are available to the guests. A wildebeest, who is only partaking in a green bottle of sparkling water, is sitting at a laptop. A book next to the wildebeest is entitled “AI Image Generation Platforms.” There is a Grateful Dead poster on the brick wall behind the wildebeest, next to the hash brownies.
So here’s what I got from the Copilot and ChatGPT platforms.


For comparison, here is Google Gemini’s output for the same prompt.

So while the more detailed prompt produced more differences (see ChatGPT’s brownie placement), the Copilot and ChatGPT results still show similarities, most notably in the Grateful Dead logo and the color of the book’s cover.
So what have we learned, Johnny? Not much, since Copilot and ChatGPT can perform many tasks other than image generation. There may be more differentiation when they perform SWOT analyses or other operations. As any good researcher would say, more funding is needed for further research.
But I will hazard two lessons learned:
- More detailed prompts are better.
- If the answer is critically important, submit your prompts to multiple generative AI tools.
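That second lesson can be automated. Below is a minimal sketch of fanning one prompt out to several backends and collecting the results for comparison. The backend callables here are stand-ins I made up for illustration; in practice each would wrap the relevant SDK or API call for Copilot, ChatGPT, or Gemini.

```python
# Hedged sketch: send the same prompt to several generative AI backends
# and collect the outputs side by side. The backends below are dummy
# stand-ins, not real API clients.

PROMPT = (
    "Create a realistic photograph of a coworking space in San Francisco "
    "in which coffee and hash brownies are available to the guests."
)

def fan_out(prompt, generators):
    """Run the same prompt through each named backend; return {name: output}."""
    return {name: gen(prompt) for name, gen in generators.items()}

# Stand-in backends that just tag the prompt, so the comparison loop runs.
generators = {
    "copilot": lambda p: f"[copilot] {p}",
    "chatgpt": lambda p: f"[chatgpt] {p}",
    "gemini":  lambda p: f"[gemini] {p}",
}

results = fan_out(PROMPT, generators)
for name, output in results.items():
    print(name, "->", output[:50])
```

With real clients plugged in, the same loop would hand you every platform’s answer for a critical prompt in one pass.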
