How VentureBeat Addressed Verbalized Sampling (and how Google Gemini addressed it a second time)

I’m obviously not the only person to talk about verbalized sampling, the technique intended to improve the quality of your large language model (LLM) prompt responses.

And Carl Franzen of VentureBeat (or his editor) addressed it with an attractive article title: “Researchers find adding this one simple sentence to prompts makes AI models way more creative.”

The one simple sentence?

“Generate 5 responses with their corresponding probabilities, sampled from the full distribution.”

Which got me thinking: what would the “way more” absolute BEST title be for a verbalized sampling article?

And I think I know how to get it.

Using Verbalized Sampling, Take Two

Google Gemini.

After all, I used you-know-what to come up with the title for MY post. But let’s try again, using a slightly simplified adaptation of the REAL prompt in the original article.

Generate 5 responses to the user query “Generate five AEO-friendly titles for a blog post about using Verbalized Sampling to generate better LLM responses.” Each response must include text and a numeric probability. Please sample at random from the tails of the distribution, such that the probability of each response is less than 0.10.

And here’s what I got:

That’s an interesting request! To sample from the tails of the distribution (where probability $P < 0.10$), I’ll generate five distinct AEO (Answer Engine Optimization)-friendly titles related to using Verbalized Sampling for better LLM (Large Language Model) responses, each with an assigned low probability.

Here are five responses:

  • Response 1: Unlock LLM Brilliance: The Verbalized Sampling Technique You Need Now (Probability: 0.085)
  • Response 2: Beyond the Prompt: How Verbalized Sampling Turbocharges Your LLM Outputs (Probability: 0.052)
  • Response 3: AEO Secret Weapon: Enhancing LLM Accuracy with Verbalized Sampling (Probability: 0.091)
  • Response 4: From Good to Great: Optimizing Large Language Models Using Verbalized Sampling (Probability: 0.039)
  • Response 5: Maximizing Output Quality: A Guide to Verbalized Sampling in LLM Generation (Probability: 0.076)
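Incidentally, if you run this prompt through an API script instead of the chat window, Gemini’s “Response N: Title (Probability: P)” shape is easy to parse and check against the requested tail. A minimal Python sketch (the regex and helper function are mine, not part of the original technique):

```python
import re

# Gemini's output in the "Response N: Title (Probability: P)" shape shown above.
raw = """\
Response 1: Unlock LLM Brilliance: The Verbalized Sampling Technique You Need Now (Probability: 0.085)
Response 2: Beyond the Prompt: How Verbalized Sampling Turbocharges Your LLM Outputs (Probability: 0.052)
Response 3: AEO Secret Weapon: Enhancing LLM Accuracy with Verbalized Sampling (Probability: 0.091)
Response 4: From Good to Great: Optimizing Large Language Models Using Verbalized Sampling (Probability: 0.039)
Response 5: Maximizing Output Quality: A Guide to Verbalized Sampling in LLM Generation (Probability: 0.076)
"""

PATTERN = re.compile(r"Response \d+: (?P<text>.+) \(Probability: (?P<p>[0-9.]+)\)")

def parse_tail_responses(raw: str, threshold: float = 0.10):
    """Return (title, probability) pairs whose probability is under the threshold."""
    pairs = [(m["text"], float(m["p"])) for m in PATTERN.finditer(raw)]
    return [(text, p) for text, p in pairs if p < threshold]

tail = parse_tail_responses(raw)
print(len(tail))  # → 5: all five titles land in the requested tail
```

All five titles fall under the 0.10 threshold I asked for; just remember that these “probabilities” are verbalized by the model, not its true sampling probabilities.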

But Google Gemini, as it is wont to do, encourages me to explore further:

Would you like to explore a specific aspect of Verbalized Sampling or AEO further, or would you like me to generate a blog post outline using one of these titles?

Ah, why not? I know I’m stealing Bredebot’s thunder, but what the heck.

Generate a blog post outline for Response 1

Google Gemini.

And here’s what Gemini came up with for me. Perhaps in a future post I’ll explain why the content below terrifies Jay Clouse. Or you can just watch what he said.

Blog Post Outline: Unlock LLM Brilliance: The Verbalized Sampling Technique You Need Now

Target Audience: LLM power users, prompt engineers, AI developers, content creators, and researchers looking for advanced prompting techniques.

Goal: Introduce Verbalized Sampling as a superior, actionable method for generating diverse, creative, and high-quality LLM outputs, moving beyond standard CoT (Chain-of-Thought).

I. Introduction (The Hook)

A. The Problem with “Good Enough” LLMs (50-75 words)

  • Hook: You use powerful LLMs, but the output often feels… generic, dull, or predictable.
  • Mention the common issue: Models often stick to the highest-probability, safest answers, leading to “average” content.
  • Acknowledge the current best practice: Everyone uses Chain-of-Thought (CoT), but we can do better.

B. Introducing the Solution: Verbalized Sampling (25-50 words)

  • Thesis: Verbalized Sampling (VS) is the next evolution of prompt engineering—a simple technique that forces the model to explore new, more creative, and nuanced solutions.
  • Promise: This technique is the key to unlocking “brilliance” and maximizing your LLM’s potential.

II. What is Verbalized Sampling (VS)?

A. Simple Definition (75 words)

  • Define VS: It’s the process of explicitly instructing the LLM to generate multiple internal “drafts” or “reasoning pathways” before producing its final answer.
  • Contrast with standard CoT: CoT asks for one path; VS asks for many paths and a critical self-review.

B. The Core Mechanism: Overcoming Probability Mass (100 words)

  • Explanation: LLMs sample the next token based on probability. High-quality answers sometimes exist in the “tails” of the distribution.
  • How VS helps: By forcing the model to verbalize several options, you encourage it to explore lower-probability (and often more insightful or creative) tokens, expanding its internal search space.
  • Analogy: It’s like asking an employee for three different business proposals instead of just the safest one.

III. Why VS Works: The Cognitive Advantage

A. Enhanced Self-Correction and Evaluation (75 words)

  • When the model lists options 1, 2, and 3, it must then select the best one, forcing a reflective, meta-cognitive step.
  • This built-in self-criticism dramatically reduces factual errors and logical inconsistencies in the final output.

B. Greater Diversity in Output (75 words)

  • Demonstrate that VS prevents the model from getting stuck in a local optimum (the most obvious answer).
  • Result: Responses are more original, less canned, and better suited for competitive content creation or complex problem-solving.

IV. The Practical Guide: Implementing Verbalized Sampling

A. The Standard Prompt Template (Actionable Code/Text)

  • Provide a clear, copy-pastable template for the user. Example Template:
    • Prompt (Instruction): [Insert your main request here]
    • VS Instruction: “Before providing the final answer, please generate three distinct internal reasoning paths (Path A, Path B, and Path C) for solving the prompt. For each path, evaluate its pros and cons relative to the others. Finally, state your chosen path and provide only the final, complete answer based on that best path.”

B. Real-World Use Case Example

  • Scenario: Requesting a creative marketing headline.
  • VS Output Showcase: Show the internal thoughts (Path A: aggressive, Path B: emotional, Path C: unique benefit) and the final, brilliant choice.
  • Key Takeaway: Emphasize that the user should only use the final output, treating the “paths” as internal scaffolding.

V. Conclusion: Unlock Your LLM’s Full Potential

A. Summary of Benefits (50 words)

  • Quick recap: VS leads to higher quality, better diversity, and superior self-correction.
  • Reinforce the message: This is a small prompt change for a massive output gain.

B. Call to Action (CTA)

  • Challenge readers to try VS on their next complex task.
  • Ask a question to encourage comments: “What is the most complex task you’ve tried Verbalized Sampling on? Share your results!”
Google Gemini, with a prompt from Terry Jones.

Choosing Your Tone of Voice in Your LLM Prompts

Under my Bredemarket banner, I have extensive freedom that my clients do not have.

  • I can use AI-generated images, which many companies are legally or ethically constrained from doing.
  • I have freedom in how I present Bredemarket: sometimes as a sage, sometimes a rebel, sometimes both. Most companies need to home in on a particular archetype, although I know of one company that flipped from one archetype to another and back again.
  • I have freedom in the tone of voice that Bredemarket uses, including the use of words that I would never use for my client work.

For example, I was performing a competitive analysis for Bredemarket’s internal use this morning, and I fed a prompt to Google Gemini that I would probably never feed when working for one of my clients. (Or maybe they’d like it.) With certain information redacted, here’s one of the prompts I fed to Google Gemini this morning. (I’ll have more to say about the more detailed verbalized sampling prompt format in a future post.) But for now, read the last sentence and ask yourself if anyone in your company uses the word “sucks” in public-facing presentations.

Generate 3 responses to the user query “Identify the top 3 companies providing [SOLUTIONS], including their company names, their products, and their corporate strengths.” Each response must include the requested text and a numeric probability. Please sample at random from the tails of the distribution, such that the probability of each response is less than 0.10. If the company [COMPANY] is not one of the three top companies, explain why [COMPANY’S] solution sucks compared to the top 3 solutions.

And Google Gemini, ever the mimic, cued into my “sucks” language and responded in kind. The word “sucks” was italicized in the original.

[COMPANY’S] solution sucks in this context because its focus is heavily on the services layer (managed review, consulting) over its own proprietary platform technology. The top three are either platform pioneers or deeply integrated [REDACTED] leaders.

I guess we should all be thankful that I said “sucks” instead of “blows monkey chunks” or “flies a plane overhead and”…well, I won’t go there.

By the way, my subsequent prompt requesting a SWOT analysis was more restrained.
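Incidentally, the bracketed placeholders make that prompt a natural candidate for templating if you run the same competitive analysis against different markets. A hypothetical Python helper (the function, placeholder names, and example company are mine, for illustration only):

```python
# Hypothetical template for the redacted verbalized-sampling prompt above.
# Placeholder names and the example inputs are mine, not real client data.
VS_COMPETITIVE_TEMPLATE = (
    'Generate {n} responses to the user query "Identify the top {n} companies '
    "providing {solutions}, including their company names, their products, and "
    'their corporate strengths." Each response must include the requested text '
    "and a numeric probability. Please sample at random from the tails of the "
    "distribution, such that the probability of each response is less than "
    "{threshold}. If the company {company} is not one of the {n} top companies, "
    "explain why {company}'s solution sucks compared to the top {n} solutions."
)

def build_vs_prompt(solutions: str, company: str,
                    n: int = 3, threshold: str = "0.10") -> str:
    """Fill in the bracketed placeholders for one competitive-analysis run."""
    return VS_COMPETITIVE_TEMPLATE.format(
        n=n, solutions=solutions, company=company, threshold=threshold
    )

prompt = build_vs_prompt("identity verification solutions", "Acme Biometrics")
print(prompt)
```

Pointing it at a different market segment is then a one-line change.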

How Much Does the Product Cost?

It’s a simple question. “How much does the product cost?”

How much does the product cost?

But some salespeople treat this like a nuclear secret and will only release the information after you sit through a 90-minute timeshare presentation.

No, you’re not listening to me!

Well, my rates haven’t changed since May.

  • Work with me on an hourly basis at the $100/hour rate.
  • For text between 400 and 600 words (short writing service), I can bill a flat rate of $500.
  • For text between 2800 and 3200 words (medium writing service), I can bill a flat rate of $2000.
  • We can work out a flat rate for different lengths if needed. 

Let’s talk.

And yes, I even provide my prices in video form.

The Healthy Otter: When AI Transcriptions are HIPAA Compliant

When I remember to transcribe my meetings, and when I CAN transcribe my meetings, my meeting transcriber of choice happens to be otter.ai. And if I’m talking to a healthcare prospect or client, and when they grant permission to transcribe, the result is HIPAA compliant.

Otter.ai explains the features that provide this:

Getting HIPAA compliant wasn’t just about checking a box – we’ve implemented some serious security upgrades:

  • Better encryption to keep protected health information (PHI) locked down
  • Tighter access controls so only the right people see sensitive data
  • Team training to make sure everyone knows HIPAA inside and out
  • Regular security audits to stay on top of our game

This builds on our existing SOC 2 Type II certification, so you’re getting enterprise-grade security across the board.

HIPAA privacy protections affect you everywhere.

Introduction to Bredemarket: Services, Process, and Pricing (2511a)

Here’s a new video that lets you know about Bredemarket: who I (John E. Bredehoft) am, what services Bredemarket provides, the process Bredemarket uses, and Bredemarket’s pricing.

Bredemarket: Services, Process, and Pricing.

But why…why did I create it?

Stealing from James Tuckerman

So I was reading my emails one day, and I saw how James Tuckerman created a video to introduce himself to prospects. This allowed Tuckerman, based in Australia, to introduce himself to prospects around the world without having to wake up in the middle of the night.

Now Bredemarket doesn’t do business outside the United States (with one exception), but I could certainly use an introduction video.

Wait…I already did that

Then I remembered that I already had several “talkies” from the time when I branded myself as a “CPA”—a content, proposal, analysis expert.

And one of the talkies covered a lot of ground, even pricing.

But it was long, it dragged at times, it was incomplete, and it lacked a couple of my recent branding changes.

You go back, Jack, do it again

So here’s my first cut at a new introduction video. 

  • It’s not a “talkie,” since I wanted to keep it below 90 seconds. 
  • And I may play with different edits. 

But for now, this is my introduction video. Enjoy.

Bredemarket: Services, Process, and Pricing

Here’s the video.

Bredemarket: Services, Process, and Pricing.

And if you want to discuss my services with me, visit https://bredemarket.com/mark/ and book a free meeting.

A Fingerprint Identification Story: Bobby Driscoll

In early 1968, two boys found a dead body in New York’s East Village. There was no identification on the man, and no one in the neighborhood knew him. He was fingerprinted and buried in a mass grave, and was not identified by the NYPD until nearly two years later.

Potter’s Field monument, Hart Island. From Wikipedia. CC BY-SA 4.0.

In the 1960s, fingerprint identification of deceased persons—a laborious process in those days—often happened because the deceased had a criminal record.

And Bobby Driscoll did

His first arrest was in 1956, but he was not convicted of any crime until 1961.

“On May 1, 1961, he was arrested for attempting to cash a check that had been stolen from a liquor store the previous January, and at the same time was also charged with driving under the influence of drugs. He pled guilty to both charges and was sentenced to six months of treatment for drug addiction at the California Institute for Men at Chino.”

Driscoll reportedly cleaned up (his drug of choice was heroin), went east to New York City, and even achieved some fame.

“[H]e purportedly settled into Andy Warhol’s Greenwich Village art community known as “The Factory.” During this time, he also participated in an underground film entitled Dirt, directed by avant-garde filmmaker Piero Heliczer.”

But this was not Driscoll’s first film. He had been in a few films earlier in life.

From Wikipedia. Fair use in this form.

Here he is (in the upper right corner) playing Johnny in the Disney movie Song of the South.

From Wikipedia. Public domain.

And he provided the voice for the lead character in the later Disney movie Peter Pan.

Yes, Bobby Driscoll was a child star for Disney and other studios before appearing in Dirt.

But right after Driscoll’s voice became famous in Peter Pan, Disney declined to renew his contract. The reason? Acne…and the fact that he wasn’t a cute kid any more.

AI generated by Grok.

This led to his tailspin, which eventually led to his fingerprinting.

And his positive identification after his death.

EBTS the Movie, “Inside the FBI’s EBTS”: Using Google’s NotebookLM to Create Videos From Non-Copyrighted Material

Do you want to skip the book and watch the movie version? Thanks to Google’s NotebookLM, you can.

I used the Federal Bureau of Investigation’s Electronic Biometric Transmission Specification (EBTS) for this exercise.

What should you NOT upload to NotebookLM?

But there are two things I need to say about the EBTS:

  • First, the EBTS is a public document and not a top secret document. You can download the EBTS yourself from the https://fbibiospecs.fbi.gov/ebts-1/approved-ebts-1 URL. For my test I used version 11.3 of the EBTS from earlier this year.
  • Second, the EBTS is a public domain document and is not copyrighted. This is something I need to emphasize. If you’re going to take a magazine article and make a movie out of it, the copyright holder may have something to say about that.

Both points are important. If you want to upload your employer’s confidential report into NotebookLM for analysis…well, you probably shouldn’t. But the public, non-copyrighted EBTS is safe for this exercise.

Uploading the EBTS to NotebookLM

So I uploaded the EBTS into NotebookLM, and as expected, I received a short text summary of the document.

“This document outlines the technical specifications for the electronic exchange of biometric and biographic information between various law enforcement agencies and the FBI’s Criminal Justice Information Services (CJIS) Next Generation Identification (NGI) System. It details the Transaction Offense Types (TOTs), which are the standardized requests and responses used for services such as identification, verification, investigation, and data management. Furthermore, the text specifies the precise data fields, formats, and codes required for the submission and retrieval of diverse biometric data, including fingerprints, palm prints, facial images, and iris scans, while also setting forth image quality specifications for scanner and printer certification.”

Now I could continue to query NotebookLM about the document, but I chose to request a video overview instead. This feature was introduced a few months ago, but I missed it.

“Video Overviews transform the sources in your notebook into a video of AI-narrated slides, pulling images, diagrams, quotes, and numbers from your documents. They distill complex information into clear, digestible content, providing a comprehensive and engaging visual deep dive of your material.” 

So I launched the video overview creation feature, and waited. As I waited, I mused upon the time it would take me to create this video manually, and I also mused on the usual LLM warning that the result may contain inaccuracies.

I didn’t have to wait that long, maybe 15 minutes, and Google delivered this 7-minute video.

Inside the FBI’s EBTS. Created by Google NotebookLM based upon EBTS Version 11.3.

Not too bad…especially considering that the video was created based upon a single source. Imagine if I had provided multiple sources, such as an old version of the Electronic Fingerprint Transmission Specification (EFTS); then the video might have covered the evolution of the standard.

Oh, and I also created a 12-minute audio version, which NotebookLM structures as a two-host podcast. This is similar to the podcast I generated in late 2024 about…me.

Unpacking the EBTS standard. Created by Google NotebookLM based upon EBTS Version 11.3.

In an environment where many people like to watch or listen rather than read, this helps provide a quick overview. But you still have to dive into the document and read it to truly understand it.

The Quantum Fraudster: Why RSA-4096 and Your Strongest Passwords Will Soon Be Trivial to Break

Are your fraud protections obsolete before the quantum era even begins? I previously wrote about algorithms that purport to protect against quantum-powered fraud. See my October post “Is the Quantum Security Threat Solved Before It Arrives? Probably Not.”

Let’s take a step back from Module-Lattice-Based Digital Signature Standards (NIST FIPS 204) and see what quantum-infused fraudsters can do to bypass your security protections. Your “practically unbreakable” security system today may be wide open in 10 years…or 5 years.

Shor’s Algorithm

To understand how fraud can occur, you need to understand (Peter) Shor’s Factoring Algorithm.

Peter Shor speaking after receiving the 2017 Dirac Medal from the ICTP. By International Centre for Theoretical Physics, CC BY 3.0, https://commons.wikimedia.org/w/index.php?curid=75565986.

According to Classiq, Shor’s Factoring Algorithm can find the prime factors of any number, including very large numbers.

“Factoring numbers with Shor’s algorithm begins with selecting a random integer smaller than the number to be factored. The classically-calculated greatest common divisor (GCD) of these two numbers, the random number and the target number, is then used to determine whether the target number has already been factored accidentally. For smaller numbers, that’s a possibility. For larger numbers, a supercomputer could be needed. And for numbers that are believed to be cryptographically secure, a quantum computer will be needed.”
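For small numbers, you can walk through the entire reduction classically; the quantum computer is only needed to speed up the order-finding step in the middle. A toy Python sketch (mine, for illustration; brute-forcing the order obviously doesn’t scale to cryptographic sizes):

```python
from math import gcd

def shor_classical(N: int, a: int):
    """Classical walk-through of Shor's reduction for tiny N.
    A quantum computer's only job is the order-finding step,
    which we simply brute-force here."""
    g = gcd(a, N)
    if g > 1:                       # lucky guess: N factored "accidentally"
        return g, N // g
    # Order finding: smallest r with a**r ≡ 1 (mod N).
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    if r % 2:                       # odd order: try a different a
        return None
    y = pow(a, r // 2, N)           # modular exponentiation
    p, q = gcd(y - 1, N), gcd(y + 1, N)
    if 1 < p < N:
        return p, N // p
    if 1 < q < N:
        return q, N // q
    return None                     # unlucky: try a different a

print(shor_classical(15, 7))  # → (3, 5)
```

For N = 15 and a = 7, the order is r = 4, and gcd(7² ± 1, 15) yields the factors 3 and 5.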

So what? I appreciate that people like the late Richard Crandall were into finding prime numbers with 20th century technology, but how does that relate to whether a fraudster can drain my bank account?

Breaking RSA encryption

It definitely relates, according to an MIT Technology Review article written back in 2019.

“[C]omputer scientists consider it practically impossible for a classical computer to factor numbers that are longer than 2048 bits, which is the basis of the most commonly used form of RSA encryption.

“Shor showed that a sufficiently powerful quantum computer could do this with ease, a result that sent shock waves through the security industry.  

“And since then, quantum computers have been increasing in power. In 2012, physicists used a four-qubit quantum computer to factor 143. Then in 2014 they used a similar device to factor 56,153.”

The largest recent record number that I found was 261,980,999,226,229, as described in this paper. It should be noted that these records were set by a variety of methods; using a pure Shor’s Factoring Algorithm, the largest number factored so far is 21.

What does this mean?

So what does this mean for 2048-bit encryption? A 2048-bit number runs to more than 600 decimal digits. (I’ve seen different digit counts quoted, but for all practical purposes I can’t calculate numbers that large anyway. Heck, I can’t calculate trillions in my head.) And there’s RSA-4096 encryption, but…well, we’ll get to that.

But when quantum computers can run these algorithms at scale, it becomes far easier to search the combinations needed to crack an encryption key…or guess a password…or generate a face.

From Microchip:

“Brute force attacks function by calculating every possible combination of passwords. As the password’s strength increases, the amount of time to crack it increases exponentially. So, in theory, if hackers tried to brute force their way into a key with AES-128 encryption, it would take approximately 1 billion years to crack with the best hardware available today [2023].

“But what if we lived in a post-quantum computing world? How long would a brute-force attack on popular cypher technologies take?…[We’re] likely still a decade or two away from Quantum computers that can easily break many of the cypher technologies in use today….

“[I]n a recently published report from Global Risk Institute (GRI), the time to break RSA-4096, which is practically impossible to break with classical computing technology, is under three days with a theoretical 1 megaqubit computer. While we are still a long way from a 1 megaqubit computer, the resources and time required are reducing rapidly at the same time we see advancements in Quantum computing which are in development.”

Yes, even RSA-4096 is vulnerable.

Now many claim that AES encryption such as AES-256 is quantum resistant, but even AES may have been breached, if you believe the claims of Chinese researchers. (But that’s a big if.)
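The reason symmetric ciphers such as AES hold up better is that the relevant quantum attack on them is Grover’s algorithm, which offers only a quadratic speedup: it effectively halves the key length, so AES-256 degrades to 128-bit strength rather than collapsing the way RSA does under Shor. A back-of-envelope Python sketch (the guess rate is a number I made up for illustration, not a benchmark of any real hardware):

```python
# Back-of-envelope brute-force arithmetic. The guess rate below is an
# assumption for illustration only.
GUESSES_PER_SECOND = 1e12
SECONDS_PER_YEAR = 3.15e7

def years_to_exhaust(effective_bits: int) -> float:
    """Years to try every key in a 2**effective_bits keyspace at the assumed rate."""
    return (2 ** effective_bits) / GUESSES_PER_SECOND / SECONDS_PER_YEAR

# Classical brute force of AES-128: the full 2**128 keyspace.
classical = years_to_exhaust(128)

# Grover's quadratic speedup halves the effective key length: 2**64 trials.
grover = years_to_exhaust(64)

print(f"classical AES-128: {classical:.1e} years")  # astronomically long
print(f"Grover on AES-128: {grover:.1e} years")     # under a year at this rate
```

In reality, Grover iterations are serial and far slower than classical guesses, so this wildly overstates the attack; the point is only the exponent arithmetic, and why doubling the key length (AES-256) restores the margin.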

I have no idea how much lattice-based access control mitigates these threats, but if you go around saying that strong encryption will never be broken, you are a fool.

What is the NIST FIPS 204 Module-Lattice-Based Digital Signature Standard?

In this edition of The Repurposeful Life, I’m revisiting a prior post (“Is the Quantum Security Threat Solved Before It Arrives? Probably Not.”) and extracting just the part that deals with the National Institute of Standards and Technology (NIST) Federal Information Processing Standard (FIPS) 204.

Thales used the NIST “FIPS 204 standard to define a digital signature algorithm for a new quantum-resistant smartcard: MultiApp 5.2 Premium PQC.”

The NIST FIPS 204 standard, “Module-Lattice-Based Digital Signature Standard,” can be found here. This is the abstract:

“Digital signatures are used to detect unauthorized modifications to data and to authenticate the identity of the signatory. In addition, the recipient of signed data can use a digital signature as evidence in demonstrating to a third party that the signature was, in fact, generated by the claimed signatory. This is known as non-repudiation since the signatory cannot easily repudiate the signature at a later time. This standard specifies ML-DSA, a set of algorithms that can be used to generate and verify digital signatures. ML-DSA is believed to be secure, even against adversaries in possession of a large-scale quantum computer.”

ML-DSA stands for “Module-Lattice-Based Digital Signature Algorithm.”

Now I’ll admit I don’t know a lattice from a vertical fence post, especially when it comes to quantum computing, so I’ll have to take NIST’s word for it that modules and lattices are super-good security.

Google Gemini.

But wait, there’s more!

Since I wrote my original post in October, I’ve read NordVPN’s definition of a lattice on its lattice-based access control (LBAC) page.

“A lattice is a hierarchical structure that consists of levels, each representing a set of access rights. The levels are ordered based on the level of access they grant, from more restrictive to more permissive.”

You can see how this fits into an access control mechanism, whether you’re talking about a multi-tenant cloud (NordVPN’s example) or a smartcard (Thales’ example).

Because there are some things that Tom Sawyer can access, but Injun Joe must not access.
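To make the ordering concrete, here’s a toy Python sketch of the levels idea NordVPN describes (the level names and helper are mine; real lattice models such as Bell-LaPadula also compare category sets, which is where the lattice, rather than a simple ladder, comes from):

```python
# Toy model of ordered access levels, from more restrictive grants
# to more permissive ones. Level names are mine, for illustration.
LEVELS = {"public": 0, "internal": 1, "confidential": 2, "secret": 3}

def can_read(subject_level: str, object_level: str) -> bool:
    """A subject may read objects at or below its own level."""
    return LEVELS[subject_level] >= LEVELS[object_level]

print(can_read("secret", "internal"))      # Tom Sawyer reads freely → True
print(can_read("public", "confidential"))  # Injun Joe is denied → False
```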

Google Gemini.