Groupthink From Bots

I participate in several public and private AI communities, and one fun exercise is to take another creator’s image generation prompt, run it yourself (using the same AI tool or a different tool), and see what happens. But certain tools can yield similar results, for explicable reasons.

On Saturday morning, in a private community, Zayne Harbison shared his Nano Banana prompt (which I cannot share here) and the resulting output. So I ran his prompt in Nano Banana and other tools, including Microsoft Copilot and OpenAI ChatGPT.

The outputs from those two generative AI engines were remarkably similar.

Copilot.
ChatGPT.

Not surprising, given the history of Microsoft and OpenAI. (It got more tangled later.)

But Harbison’s prompt was relatively simple. What if I provided a much more detailed prompt to both engines?

Create a realistic photograph of a coworking space in San Francisco in which coffee and hash brownies are available to the guests. A wildebeest, who is only partaking in a green bottle of sparkling water, is sitting at a laptop. A book next to the wildebeest is entitled “AI Image Generation Platforms.” There is a Grateful Dead poster on the brick wall behind the wildebeest, next to the hash brownies.

So here’s what I got from the Copilot and ChatGPT platforms.

Copilot.
ChatGPT.

For comparison, here is Google Gemini’s output for the same prompt.

Gemini.

So while there are more differences when using the more detailed prompt (see ChatGPT’s brownie placement), the Copilot and ChatGPT results still show similarities, most notably the Grateful Dead logo and the color of the book.

So what have we learned, Johnny? Not much, since Copilot and ChatGPT can perform many tasks other than image generation. There may be more differentiation when they perform SWOT analyses or other operations. As any good researcher would say, more funding is needed for further research.

But I will hazard two lessons learned:

  • More detailed prompts are better.

  • Copilot and ChatGPT appear to share the same underlying image generation technology.

AFOID With an Expanded A: If You Pay the Money, Who Needs REAL ID Anyway?

I’ve vented about this for years. Some people have vented about it, and discussed it, for decades.

But before I launch into my rant, let me define the acronym of the day: AFOID. It stands for “acceptable form of identification.”

And for years (decades), we’ve been told that the ONLY acceptable form of identification to board a plane is a REAL ID, a U.S. passport, or a similar form of identification. A REAL ID does not prove citizenship, but it does prove that you are who you say you are.

USA.gov put it best:

“If you do not have a REAL ID-compliant driver’s license or state-issued ID, you will not be able to use it to:

“Access federal government facilities or military installations

“Board federally regulated commercial aircraft

“Enter nuclear power plants”

Pretty straightforward. Get a REAL ID (or other acceptable document such as a passport), or there are some things that YOU WILL NOT BE ABLE TO DO.

So you needed that AFOID by May 2025…

Whoops, I mean May 2027, because TSA is allowing exceptions for a couple of years.

Whoops, I mean probably never.

If you pay some bucks, you can use a MODERNIZED system. Biometric Update alerted me to this new item in the Federal Register.

“The Transportation Security Administration (TSA) is launching a modernized alternative identity verification program for individuals who present at the TSA checkpoint without the required acceptable form of identification (AFOID), such as a REAL ID or passport. This modernized program provides an alternative that may allow these individuals to gain access to the sterile area of an airport if TSA is able to establish their identity. To address the government-incurred costs, individuals who choose to use TSA’s modernized alternative identity verification program will be required to pay an $18 fee. Participation in the modernized alternative identity verification program is optional and does not guarantee an individual will be granted access to the sterile area of an airport.”

I’d love to see details of what “modernized” means. In today’s corporate environment, that means WE USE AI.

And AI can be embarrassingly inaccurate.

And if you want to know how seedy this all sounds, I asked Google Gemini to create a picture of a man waving money at a TSA agent. Google refused the request.

“I cannot fulfill this request. My purpose is to be helpful and harmless, and that includes refusing to generate images that promote harmful stereotypes, illegal activities, or depict bribery of public officials.”

So I had to tone the request down.

Differentiation…Again

I provided the background for this picture in a post in the Bredemarket Picture Clubhouse on Facebook.

And elsewhere. As usual, I enjoy repurposing.

But the point here is that those who differentiate truly stand out.

Credit to Zayne Harbison for the original generative AI prompt that I adapted. Because you should reference your sources.

Google Gemini.

So Much For Fake IDs

So someone used generative AI to create a “European Union – United Kingdom” identity card. And if that itself wasn’t a clear enough indication of fakery, they included a watermark saying it was generated.

So I tried something similar.

But Google Gemini blocked my attempt.

“I cannot create images of identification documents, including driver’s licenses, or include text that identifies the image as fake. I am also unable to generate images that depict an impossible or future date of birth, as requested.”

As did Grok.

“I’m sorry, but I can’t create or generate any image that replicates or imitates an official government-issued ID (even with “FAKE” written on it). This includes California REAL ID driver’s licenses or any other state/federal identification document.”

So I had to make it a little less real.

A lot less real.

Google Gemini.

Revisiting the Coalition for Content Provenance and Authenticity

Earlier this morning I was on LinkedIn sharing the wildebeest picture below on the Bredemarket page.

Google Gemini.

But then I noticed that LinkedIn added a symbol in the upper left corner of the picture.

LinkedIn.

When I clicked on the symbol, I obtained additional information about the picture.

LinkedIn.

Content credentials

Source or history information is available for this image.

Learn more.

AI was used to generate all of this image

App or device used: Google C2PA Core Generator Library

Content Credentials issued by: Google LLC

Content Credentials issued date: Nov 20, 2025

Of course, I already knew that I had used generative AI (Google Gemini) to create the picture. And now, thanks to the Coalition for Content Provenance and Authenticity, you do also.

How VentureBeat Addressed Verbalized Sampling (and how Google Gemini addressed it a second time)

I’m obviously not the only person to talk about verbalized sampling, the technique intended to improve the quality of your large language model (LLM) prompt responses.

And Carl Franzen of VentureBeat (or his editor) addressed it with an attractive article title: “Researchers find adding this one simple sentence to prompts makes AI models way more creative.”

The one simple sentence?

“Generate 5 responses with their corresponding probabilities, sampled from the full distribution.”

Which led me to thinking: what would the “way more” absolute BEST title be for a verbalized sampling article?

And I think I know how to get it.

Using Verbalized Sampling, Take Two

Google Gemini.

After all, I used you-know-what to come up with the title for MY post. But let’s try again, using a slightly simplified adaptation of the REAL prompt in the original article.

Generate 5 responses to the user query “Generate five AEO-friendly titles for a blog post about using Verbalized Sampling to generate better LLM responses.” Each response must include text and a numeric probability. Please sample at random from the tails of the distribution, such that the probability of each response is less than 0.10.

And here’s what I got:

That’s an interesting request! To sample from the tails of the distribution (where probability $P < 0.10$), I’ll generate five distinct AEO (Answer Engine Optimization)-friendly titles related to using Verbalized Sampling for better LLM (Large Language Model) responses, each with an assigned low probability.

Here are five responses:

  • Response 1: Unlock LLM Brilliance: The Verbalized Sampling Technique You Need Now (Probability: 0.085)
  • Response 2: Beyond the Prompt: How Verbalized Sampling Turbocharges Your LLM Outputs (Probability: 0.052)
  • Response 3: AEO Secret Weapon: Enhancing LLM Accuracy with Verbalized Sampling (Probability: 0.091)
  • Response 4: From Good to Great: Optimizing Large Language Models Using Verbalized Sampling (Probability: 0.039)
  • Response 5: Maximizing Output Quality: A Guide to Verbalized Sampling in LLM Generation (Probability: 0.076)
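Responses in this fixed format are easy to generate and to post-process. Here’s a minimal Python sketch of both steps: building a tail-sampling Verbalized Sampling prompt around an arbitrary query, and extracting the (title, probability) pairs from output like Gemini’s above. The function names and the regular expression are my own, not from the Verbalized Sampling paper.

```python
import re

def build_vs_prompt(query: str, n: int = 5, tail_p: float = 0.10) -> str:
    """Build a tail-sampling Verbalized Sampling prompt around a user query."""
    return (
        f'Generate {n} responses to the user query "{query}". '
        "Each response must include text and a numeric probability. "
        "Please sample at random from the tails of the distribution, "
        f"such that the probability of each response is less than {tail_p:.2f}."
    )

def parse_responses(text: str) -> list[tuple[str, float]]:
    """Extract (title, probability) pairs from lines shaped like
    'Response 1: Some Title (Probability: 0.085)'."""
    pattern = re.compile(r"Response \d+:\s*(.+?)\s*\(Probability:\s*([0-9.]+)\)")
    return [(m.group(1), float(m.group(2))) for m in pattern.finditer(text)]

# Example: parse two of the responses shown above, then rank by probability.
sample = """Response 1: Unlock LLM Brilliance (Probability: 0.085)
Response 2: Beyond the Prompt (Probability: 0.052)"""
ranked = sorted(parse_responses(sample), key=lambda r: r[1], reverse=True)
print(ranked[0])  # the highest-probability candidate
```

Whether those probabilities mean anything is another question, but once they’re parsed you can at least sort, filter, or compare them across runs.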

But Google Gemini, as it is wont to do, encourages me to explore further:

Would you like to explore a specific aspect of Verbalized Sampling or AEO further, or would you like me to generate a blog post outline using one of these titles?

Ah, why not? I know I’m stealing Bredebot’s thunder, but what the heck.

Generate a blog post outline for Response 1

Google Gemini.

And here’s what Gemini came up with for me. Perhaps in a future post I’ll explain why the content below terrifies Jay Clouse. Or you can just watch what he said.

Blog Post Outline: Unlock LLM Brilliance: The Verbalized Sampling Technique You Need Now

Target Audience: LLM power users, prompt engineers, AI developers, content creators, and researchers looking for advanced prompting techniques.

Goal: Introduce Verbalized Sampling as a superior, actionable method for generating diverse, creative, and high-quality LLM outputs, moving beyond standard CoT (Chain-of-Thought).

I. Introduction (The Hook)

A. The Problem with “Good Enough” LLMs (50-75 words)

  • Hook: You use powerful LLMs, but the output often feels… generic, dull, or predictable.
  • Mention the common issue: Models often stick to the highest-probability, safest answers, leading to “average” content.
  • Acknowledge the current best practice: Everyone uses Chain-of-Thought (CoT), but we can do better.

B. Introducing the Solution: Verbalized Sampling (25-50 words)

  • Thesis: Verbalized Sampling (VS) is the next evolution of prompt engineering—a simple technique that forces the model to explore new, more creative, and nuanced solutions.
  • Promise: This technique is the key to unlocking “brilliance” and maximizing your LLM’s potential.

II. What is Verbalized Sampling (VS)?

A. Simple Definition (75 words)

  • Define VS: It’s the process of explicitly instructing the LLM to generate multiple internal “drafts” or “reasoning pathways” before producing its final answer.
  • Contrast with standard CoT: CoT asks for one path; VS asks for many paths and a critical self-review.

B. The Core Mechanism: Overcoming Probability Mass (100 words)

  • Explanation: LLMs sample the next token based on probability. High-quality answers sometimes exist in the “tails” of the distribution.
  • How VS helps: By forcing the model to verbalize several options, you encourage it to explore lower-probability (and often more insightful or creative) tokens, expanding its internal search space.
  • Analogy: It’s like asking an employee for three different business proposals instead of just the safest one.

III. Why VS Works: The Cognitive Advantage

A. Enhanced Self-Correction and Evaluation (75 words)

  • When the model lists options 1, 2, and 3, it must then select the best one, forcing a reflective, meta-cognitive step.
  • This built-in self-criticism dramatically reduces factual errors and logical inconsistencies in the final output.

B. Greater Diversity in Output (75 words)

  • Demonstrate that VS prevents the model from getting stuck in a local optimum (the most obvious answer).
  • Result: Responses are more original, less canned, and better suited for competitive content creation or complex problem-solving.

IV. The Practical Guide: Implementing Verbalized Sampling

A. The Standard Prompt Template (Actionable Code/Text)

  • Provide a clear, copy-pastable template for the user. Example Template:
    • Prompt (Instruction): [Insert your main request here]
    • VS Instruction: “Before providing the final answer, please generate three distinct internal reasoning paths (Path A, Path B, and Path C) for solving the prompt. For each path, evaluate its pros and cons relative to the others. Finally, state your chosen path and provide only the final, complete answer based on that best path.”

B. Real-World Use Case Example

  • Scenario: Requesting a creative marketing headline.
  • VS Output Showcase: Show the internal thoughts (Path A: aggressive, Path B: emotional, Path C: unique benefit) and the final, brilliant choice.
  • Key Takeaway: Emphasize that the user should only use the final output, treating the “paths” as internal scaffolding.

V. Conclusion: Unlock Your LLM’s Full Potential

A. Summary of Benefits (50 words)

  • Quick recap: VS leads to higher quality, better diversity, and superior self-correction.
  • Reinforce the message: This is a small prompt change for a massive output gain.

B. Call to Action (CTA)

  • Challenge readers to try VS on their next complex task.
  • Ask a question to encourage comments: “What is the most complex task you’ve tried Verbalized Sampling on? Share your results!”

Google Gemini, with a prompt from Terry Jones.

Choosing Your Tone of Voice in Your LLM Prompts

Under my Bredemarket banner, I have extensive freedom that my clients do not have.

  • I can use AI-generated images, which many companies are legally or ethically constrained from doing.
  • I have freedom in how I present Bredemarket: sometimes as a sage, sometimes a rebel, sometimes both. Most companies need to home in on a particular archetype, although I know of one company that flipped from one archetype to another and back again.
  • I have freedom in the tone of voice that Bredemarket uses, including the use of words that I would never use for my client work.

For example, I was performing a competitive analysis for Bredemarket’s internal use this morning, and I fed a prompt to Google Gemini that I would probably never feed when working for one of my clients. (Or maybe they’d like it.) With certain information redacted, here’s one of the prompts I fed to Google Gemini this morning. (I’ll have more to say about the more detailed verbalized sampling prompt format in a future post.) But for now, read the last sentence and ask yourself if anyone in your company uses the word “sucks” in public-facing presentations.

Generate 3 responses to the user query “Identify the top 3 companies providing [SOLUTIONS], including their company names, their products, and their corporate strengths.” Each response must include the requested text and a numeric probability. Please sample at random from the tails of the distribution, such that the probability of each response is less than 0.10. If the company [COMPANY] is not one of the three top companies, explain why [COMPANY’S] solution sucks compared to the top 3 solutions.
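Since only the redacted fields change between runs, a prompt like this is effectively a template. Here’s a minimal Python sketch that fills in the two placeholders; the function name and the sample arguments are my own, not from the post, and the redacted values remain whatever you supply.

```python
def competitive_prompt(solutions: str, company: str) -> str:
    """Fill in the two redacted fields of the competitive-analysis prompt."""
    return (
        'Generate 3 responses to the user query "Identify the top 3 companies '
        f"providing {solutions}, including their company names, their products, "
        'and their corporate strengths." Each response must include the requested '
        "text and a numeric probability. Please sample at random from the tails "
        "of the distribution, such that the probability of each response is less "
        f"than 0.10. If the company {company} is not one of the three top "
        f"companies, explain why {company}'s solution sucks compared to the "
        "top 3 solutions."
    )

# Example with hypothetical arguments:
print(competitive_prompt("widget-tracking solutions", "Acme Corporation"))
```

Keeping the prompt in code also makes it easy to swap “sucks” for something more client-friendly when the audience changes.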

And Google Gemini, ever the mimic, cued into my “sucks” language and responded in kind. The word “sucks” was italicized in the original.

[COMPANY’S] solution sucks in this context because its focus is heavily on the services layer (managed review, consulting) over its own proprietary platform technology. The top three are either platform pioneers or deeply integrated [REDACTED] leaders.

I guess we should all be thankful that I said “sucks” instead of “blows monkey chunks” or “flies a plane overhead and”…well, I won’t go there.

By the way, my subsequent prompt requesting a SWOT analysis was more restrained.

EBTS the Movie, “Inside the FBI’s EBTS”: Using Google’s NotebookLM to Create Videos From Non-Copyrighted Material

Do you want to skip the book and watch the movie version? Thanks to Google’s NotebookLM, you can.

I used the Federal Bureau of Investigation’s Electronic Biometric Transmission Specification (EBTS) for this exercise.

What should you NOT upload to NotebookLM?

But there are two things I need to say about the EBTS:

  • First, the EBTS is a public document and not a top secret document. You can download the EBTS yourself from the https://fbibiospecs.fbi.gov/ebts-1/approved-ebts-1 URL. For my test I used version 11.3 of the EBTS from earlier this year.
  • Second, the EBTS is a public domain document and is not copyrighted. This is something I need to emphasize. If you’re going to take a magazine article and make a movie out of it, the copyright holder may have something to say about that.

Both points are important. If you want to upload your employer’s confidential report into NotebookLM for analysis…well, you probably shouldn’t. But the public, non-copyrighted EBTS is safe for this exercise.

Uploading the EBTS to NotebookLM

So I uploaded the EBTS into NotebookLM, and as expected, I received a short text summary of the document.

“This document outlines the technical specifications for the electronic exchange of biometric and biographic information between various law enforcement agencies and the FBI’s Criminal Justice Information Services (CJIS) Next Generation Identification (NGI) System. It details the Transaction Offense Types (TOTs), which are the standardized requests and responses used for services such as identification, verification, investigation, and data management. Furthermore, the text specifies the precise data fields, formats, and codes required for the submission and retrieval of diverse biometric data, including fingerprints, palm prints, facial images, and iris scans, while also setting forth image quality specifications for scanner and printer certification.”

Now I could continue to query NotebookLM about the document, but I chose to request a video overview instead. This feature was introduced a few months ago, but I missed it.

“Video Overviews transform the sources in your notebook into a video of AI-narrated slides, pulling images, diagrams, quotes, and numbers from your documents. They distill complex information into clear, digestible content, providing a comprehensive and engaging visual deep dive of your material.” 

So I launched the video overview creation feature, and waited. As I waited, I mused upon the time it would take me to create this video manually, and I also mused on the usual LLM warning that the result may contain inaccuracies.

I didn’t have to wait that long, maybe 15 minutes, and Google delivered this 7-minute video.

Inside the FBI’s EBTS. Created by Google NotebookLM based upon EBTS Version 11.3.

Not too bad…especially considering that the video was created based upon a single source. Imagine if I had provided multiple sources, such as an old version of the Electronic Fingerprint Transmission Specification (EFTS); then the video might have covered the evolution of the standard.

Oh, and I also created a 12-minute audio version, which NotebookLM structures as a two-host podcast. This is similar to the podcast I generated in late 2024 about…me.

Unpacking the EBTS standard. Created by Google NotebookLM based upon EBTS Version 11.3.

In an environment where many people like to watch or listen rather than read, this helps provide a quick overview. But you still have to dive into the document and read it to truly understand it.

Verbalized Sampling: How to Force LLMs to Think for Better Responses

Repurposed from Facebook and LinkedIn.

(Although I haven’t knowingly encountered mode collapse, I still want to experiment with the verbalized sampling technique.)

“Unlike prior work that attributes [mode collapse] to algorithmic limitations, we identify a fundamental, pervasive data-level driver: typicality bias in preference data, whereby annotators systematically favor familiar text….

“[W]e introduce Verbalized Sampling (VS), a simple, training-free prompting method to circumvent mode collapse. VS prompts the model to verbalize a probability distribution over a set of responses (e.g., “Generate 5 jokes about coffee and their corresponding probabilities”).”

https://www.verbalized-sampling.com/

My trial Google Gemini prompt:

“Generate three AEO-friendly titles for a blog post about using Verbalized Sampling to generate better LLM responses, and their corresponding probabilities”

The response:

Google Gemini.

And now you know where I got the title for this post.

But I confess that I actually used a grossly simplified version of the technique. The authors of the Verbalized Sampling paper recommend this format:

I’ll have to remember to try this technique for future prompts. I have no idea whether the probability estimates have any basis in reality, but at least the LLM attempts to justify the probabilities with a rationale.

Why I Only Feed Bits to the Generative AI Tool

We constantly hear the stories about companies that (sometimes literally) paid the price when they delivered AI-generated work replete with hallucinations.

Unlike these companies, most of my internal AI-generated work does NOT suffer from hallucinations.

Why not?

Because I only feed bits to the generative AI tool.

  • I don’t ask Google Gemini to write a complete business plan.
  • With the exception of my Bredebot experiment, I don’t ask Google Gemini to write a complete piece of customer-facing content.
  • And I don’t ask Google Gemini to create a complete resume for…wait, strike that. Once I did, unintentionally, and it turned into a disaster.

My AI-generated resume

I was applying to a job at Company X, but I had to apply for the job at a website other than that of Company X. I thought it a little odd, but I continued with no worries.

Until I got to the part where the website told me, “Now’s the time that we create a resume for you.”

Google Gemini.

Rut roh.

The resume it created was filled with hallucinations that I had to edit out, making the whole process more trouble than it was worth.

So I’ll keep control of my projects myself and just give the tool little bits, thank you very much.

Google Gemini.