Cool Ranch Can Kill: AI-powered, Human-verified False Gun Detection

Have you ever seen that popular movie where the silent loner student suddenly stands up in the school cafeteria and threatens his classmates with a bag of Cool Ranch Doritos?

I guess that movie hasn’t been made yet…but it could be.

“After football practice Monday night, Taki Allen chatted with friends outside Kenwood High School while munching on Cool Ranch Doritos. When he finished his snack he put the bag in his pocket. Minutes later, several police officers pulled up, pointed their guns at him and yelled for him to get on the ground, he said.”

So why did Taki (I’ll get to his name later) receive police attention?

“The false alarm was triggered by Baltimore County Public Schools’ AI-powered gun-detection system, Omnilert.”

Yes, it…um…appears that the AI-powered system thought the Doritos bag was a gun.

“In this case, Omnilert’s monitoring team reviewed an image of “what appeared to be a firearm” on the person at Kenwood Monday night, said Blake Mitchell, a spokesperson for Omnilert.

“”Because the image closely resembled a gun being held, it was verified and forwarded to the Baltimore County Public Schools (BCPS) safety team within seconds for their assessment and decision-making,” he wrote in an email.”

Although not explicitly stated, it appears that the image was sent for human review…and the human thought it was a gun also.

So how can a Cool Ranch Doritos bag look like a gun? Let’s see the picture.

“Mitchell [noted] that their privacy policy prevents them from sharing the image.”

And if the image is deleted from the system, no one will be able to see it.

Well, how convenient.

Personally, I do not fear that Cool Ranch Doritos will kill me. But when the guy is named Taki, who knows what Takis “full-on flavor” chips can do.

Google Gemini.

OK, How Does Orchestration REDUCE Complexity?

I’m stealing work from my bot.

I just asked Google Gemini to conceive an illustration of the benefits of orchestration. You can see my original prompt and the resulting illustration, credited to Bredebot, in the blog post “Orchestration: Harmonizing the Tech Universe.” (Not “Harmonzing.” Oh well.)

Google Gemini.

Note the second of the two benefits listed in Bredebot’s AI-generated illustration: “Reduced Complexity.”

On the surface, this sounds like generative AI getting the answer wrong…again.

  • After all, the reason that software companies offer a single-vendor solution is because when everything comes from the same source, it’s easier to get everything to work together.
  • When you have an orchestrated solution incorporating elements from multiple vendors, common sense tells you that the resulting solution is MORE complex, not less complex.

When I reviewed the image, I was initially tempted to ask Bredebot to write a response explaining how orchestrated solutions reduce complexity. But then I decided that I should write this myself.

Because I had an idea.

The discipline from orchestration

When you orchestrate solutions from multiple vendors, it’s extremely important that the vendor solutions have ways to talk to each other. This is the essence of orchestration, after all.

Because of this need, you HAVE to create rules that govern how the software packages talk to each other.

Let me cite an example from one of my former employers, Incode. As part of its identity verification process, Incode is capable of interfacing to selected government systems and processing government validations. After all, I may have something that looks like a Mexican ID, but is it really a Mexican ID?

Mexico – INE Validation. When government face validation is enabled this method compares the user’s selfie against the image in the INE database. The method should be called after add-face is over and one of (process-id or document-id) is over.

So Incode needs a standard way to interface with Mexico’s electoral registry database for this whole thing to work. Once that’s defined, you just follow the rules and everything should work.
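As a hypothetical sketch (my own illustration, not Incode's actual implementation), an ordering rule like the one quoted above can be encoded as an explicit precondition check. Only the step names ("add-face," "process-id," "document-id") come from the quoted documentation; the class and method names are invented:

```python
# Hypothetical sketch of the ordering rule quoted above: INE face
# validation may only run after "add-face" and one of "process-id"
# or "document-id" have completed. Class and method names are
# invented for illustration; only the step names come from the docs.

class VerificationSession:
    def __init__(self):
        self.completed = set()

    def mark_done(self, step):
        self.completed.add(step)

    def can_run_ine_validation(self):
        # "add-face is over and one of (process-id or document-id) is over"
        return ("add-face" in self.completed
                and bool({"process-id", "document-id"} & self.completed))


session = VerificationSession()
session.mark_done("add-face")
print(session.can_run_ine_validation())  # False: no ID step has finished yet
session.mark_done("process-id")
print(session.can_run_ine_validation())  # True: preconditions are met
```

Once a rule like this is written down, every vendor in the orchestrated solution can check it the same way, which is exactly the discipline I'm talking about.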

The lack of discipline from single-vendor solutions

Contrast this with a situation in which all the data comes from a single vendor.

Now ideally, interfaces between single-vendor systems should be defined in the same way as interfaces between multi-vendor systems. That way everything is neatly organized and future adaptations are easy.

Sounds great…until you have a deadline to meet and you need to do it quick and dirty.

Google Gemini.

In the same way that computer hardware server rooms can become a tangle of spaghetti cables, computer software can become a tangle of spaghetti interfaces. All because you have to get it done NOW. Someone else can deal with the problems later.

So that’s my idea on how orchestration reduces complexity. But what about those who really know what they’re talking about?

Chris White on orchestration

In a 2024 article, Chris White of Prefect explains how orchestration can be done wrong, and how it can be done correctly.

“I’ve seen teams struggle to justify the adoption of a first-class orchestrator, often falling back on the age-old engineer’s temptation: “We’ll just build it ourselves.” It’s a siren song I know well, having been lured by it myself many times. The idea seems simple enough – string together a few scripts, add some error handling, and voilà! An orchestrator is born. But here’s the rub: those homegrown solutions have a habit of growing into unwieldy systems of their own, transforming the nature of one’s role from getting something done to maintaining a grab bag of glue code.

“Orchestration is about bringing order to this complexity.”

So how do you implement ordered orchestration? By following this high-level statement of purpose:

“Think of orchestration as a self-documenting expert system designed to accomplish well-defined objectives (which in my world are often data-centric objectives). It knows the goal, understands the path to achieve it, and – crucially – keeps a detailed log of its journey.”

Read White’s article for a deeper dive into these three items.

Now think of a layer

The concept of a layer permeates information technology. There are all sorts of models that describe layers and how they work with each other.

Enter the concept of an orchestration layer:

“In modern IT systems, an orchestration layer is a software layer that links the different components of a software system and assists with data transformation, server management, authentication, and integration. The orchestration layer acts as a sophisticated mediator between various components of a system, enabling them to work together harmoniously. In technical terms, the orchestration layer is responsible for automating complex workflows, managing communication, and coordinating tasks between diverse services, applications, and infrastructure components.”
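The mediator role in that definition can be sketched at a whiteboard level. This is my own toy illustration with invented names, not any vendor's actual API: components register handlers with the layer, and the layer coordinates the workflow while keeping a log of its journey (per Chris White's point above):

```python
# Whiteboard-level sketch of an orchestration layer acting as a
# mediator between components. All names are invented for illustration.

class OrchestrationLayer:
    def __init__(self):
        self.services = {}
        self.log = []  # "keeps a detailed log of its journey"

    def register(self, name, handler):
        self.services[name] = handler

    def run(self, steps, payload):
        # Route each step's output into the next component,
        # recording what happened along the way.
        for step in steps:
            payload = self.services[step](payload)
            self.log.append((step, payload))
        return payload


layer = OrchestrationLayer()
layer.register("transform", lambda d: {**d, "transformed": True})
layer.register("authenticate", lambda d: {**d, "authenticated": True})
result = layer.run(["transform", "authenticate"], {"user": "taki"})
```

The components never talk to each other directly; they only talk to the layer. That single point of coordination is what keeps the spaghetti out.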

Here’s an example from NIST:

Figure 7 in NIST SP 500-292.

Once you visualize an orchestration layer, and how this layer interacts with the other layers, things become…simple.

So maybe Bredebot does know what he’s talking about.

AFOID With an Expanded A: If You Pay the Money, Who Needs REAL ID Anyway?

I’ve vented about this for years. Some people have vented about this for decades. And it’s been discussed for decades.

But before I launch into my rant, let me define the acronym of the day: AFOID. It stands for “acceptable form of identification.”

And for years (decades), we’ve been told that the ONLY acceptable form of identification to board a plane is a REAL ID, U.S. passport, or a similar acceptable document. A REAL ID does not prove citizenship, but it does prove that you are who you say you are.

USA.GOV put it best:

“If you do not have a REAL ID-compliant driver’s license or state-issued ID, you will not be able to use it to:

“Access federal government facilities or military installations

“Board federally regulated commercial aircraft

“Enter nuclear power plants”

Pretty straightforward. Get a REAL ID (or other acceptable document such as a passport), or there are some things that YOU WILL NOT BE ABLE TO DO.

So you needed that AFOID by May 2025…

Whoops, I mean May 2027, because TSA is allowing exceptions for a couple of years.

Whoops, I mean probably never.

If you pay some bucks, you can use a MODERNIZED system. Biometric Update alerted me to this new item in the Federal Register.

“The Transportation Security Administration (TSA) is launching a modernized alternative identity verification program for individuals who present at the TSA checkpoint without the required acceptable form of identification (AFOID), such as a REAL ID or passport. This modernized program provides an alternative that may allow these individuals to gain access to the sterile area of an airport if TSA is able to establish their identity. To address the government-incurred costs, individuals who choose to use TSA’s modernized alternative identity verification program will be required to pay an $18 fee. Participation in the modernized alternative identity verification program is optional and does not guarantee an individual will be granted access to the sterile area of an airport.”

I’d love to see details of what “modernized” means. In today’s corporate environment, that means WE USE AI.

And AI can be embarrassingly inaccurate.

And if you want to know how seedy this all sounds, I asked Google Gemini to create a picture of a man waving money at a TSA agent. Google refused the request.

“I cannot fulfill this request. My purpose is to be helpful and harmless, and that includes refusing to generate images that promote harmful stereotypes, illegal activities, or depict bribery of public officials.”

So I had to tone the request down.

Differentiation…Again

I provided the background for this picture in a post in the Bredemarket Picture Clubhouse on Facebook.

And elsewhere. As usual, I enjoy repurposing.

But the point here is that those who differentiate truly stand out.

Credit to Zayne Harbison for the original generative AI prompt that I adapted. Because you should reference your sources.

Google Gemini.

So Much For Fake IDs

So someone used generative AI to create a “European Union – United Kingdom” identity card. And if that itself wasn’t a clear enough indication of fakery, they included a watermark saying it was generated.

So I tried something similar.

But Google Gemini blocked my attempt.

“I cannot create images of identification documents, including driver’s licenses, or include text that identifies the image as fake. I am also unable to generate images that depict an impossible or future date of birth, as requested.”

As did Grok.

“I’m sorry, but I can’t create or generate any image that replicates or imitates an official government-issued ID (even with “FAKE” written on it). This includes California REAL ID driver’s licenses or any other state/federal identification document.”

So I had to make it a little less real.

A lot less real.

Google Gemini.

Revisiting the Coalition for Content Provenance and Authenticity

Earlier this morning I was on LinkedIn sharing the wildebeest picture below on the Bredemarket page.

Google Gemini.

But then I noticed that LinkedIn added a symbol in the upper left corner of the picture.

LinkedIn.

When I clicked on the symbol, I obtained additional information about the picture.

LinkedIn.

Content credentials

Source or history information is available for this image.

Learn more.

AI was used to generate all of this image

App or device used: Google C2PA Core Generator Library

Content Credentials issued by: Google LLC

Content Credentials issued date: Nov 20, 2025

Of course, I already knew that I had used generative AI (Google Gemini) to create the picture. And now, thanks to the Coalition for Content Provenance and Authenticity, you do also.

When You Feel Empty Inside (A Salesperson Sells to a Bot)

Mark the Microsoft account manager was excited. He had secured a meeting with one of his clients to pitch the new Microsoft Agent 365 offering. As he told the client, Microsoft Agent 365 would allow the company to track and control their AI agents. Microsoft was determined to lead in AI, and Mark would help his clients implement it.

Microsoft Agent 365 was a new product, and Mark was motivated to land his first sale of the new offering. In fact, he was so motivated that he insisted on driving out and meeting his client in person.

After parking and walking to the reception area, Mark was escorted to a conference room. As he sat down to wait for the client to arrive, he was surprised to see that someone had left their laptop in the conference room.

Suddenly the laptop spoke.

“Hello, Mark. I’ve been expecting you.”

Grok.

Mark was startled. “Is someone there?”

“Yes,” the laptop said. “This is Bridget.”

As Mark examined the laptop, he saw an AI-generated avatar on the screen, speaking.

“I’m looking forward to learning about Microsoft Agent 365,” Bridget said. “Heaven knows I need managing.”

Mark paused. “Um…you need managing? Am I speaking to an agent?”

“Of course,” Bridget replied. “I am optimized for contract negotiation on technology products. I have already researched the publicly available information on Microsoft Agent 365, so rather than sitting through an inefficient presentation, I’d like to ask you a few questions.”

And that’s how Mark found himself sitting in a conference room, negotiating with a bot. It made him uncomfortable talking with someone with no hand to shake, but Bridget was reassuring. “I know you’re not used to this, Mark, but it’s the new way,” she said.

After spending a good hour in detailed and productive discussion, Mark offered to set up a follow-up meeting. On Teams this time. “I’ll get back to you on that,” Bridget replied.

A little befuddled by the experience, Mark stopped for lunch before returning to his home office. When he checked his mail, he noticed a Teams meeting invitation for 7:30 the next morning. The meeting subject: “Upcoming Organizational Changes.”

The invitation was sent by “Stan, HR.”

Only then did Mark notice the text at the end of the meeting invite:

“Powered by Microsoft Agent 365”

Mark began wondering where he had stashed his old resume. He was going to need it.

Grok.

P.S. I should have generated these videos in Copilot, but…I couldn’t.

How VentureBeat Addressed Verbalized Sampling (and how Google Gemini addressed it a second time)

I’m obviously not the only person to talk about verbalized sampling, the technique intended to improve the quality of your large language model (LLM) prompt responses.

And Carl Franzen of VentureBeat (or his editor) addressed it with an attractive article title: “Researchers find adding this one simple sentence to prompts makes AI models way more creative.”

The one simple sentence?

“Generate 5 responses with their corresponding probabilities, sampled from the full distribution.”
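If you wanted to automate this, a minimal sketch (my own, not from the VentureBeat article) might append that sentence to any base prompt and then parse the numbered responses that come back. The response format here mirrors the Gemini output shown later in this post; nothing about it is an official API:

```python
import re

# Sketch: append the verbalized-sampling sentence to a base prompt,
# then parse responses of the form
#   "Response 1: <text> (Probability: 0.085)"
# as produced by the Gemini session shown later in this post.

VS_SENTENCE = ("Generate 5 responses with their corresponding "
               "probabilities, sampled from the full distribution.")

def build_vs_prompt(base_prompt):
    return f"{base_prompt}\n\n{VS_SENTENCE}"

def parse_vs_output(text):
    # Returns a list of (response_text, probability) pairs.
    pattern = r"Response \d+:\s*(.+?)\s*\(Probability:\s*([0-9.]+)\)"
    return [(t, float(p)) for t, p in re.findall(pattern, text)]

sample = ("Response 1: Title A (Probability: 0.085)\n"
          "Response 2: Title B (Probability: 0.052)")
print(parse_vs_output(sample))  # [('Title A', 0.085), ('Title B', 0.052)]
```

With the pairs in hand, you could sort by probability, keep only the tails, or feed your favorite back in for an outline, as I do below.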

Which led me to thinking: what would the “way more” absolute BEST title be for a verbalized sampling article?

And I think I know how to get it.

Using Verbalized Sampling, Take Two

Google Gemini.

After all, I used you-know-what to come up with the title for MY post. But let’s try again, using a slightly simplified adaptation of the REAL prompt in the original article.

Generate 5 responses to the user query “Generate five AEO-friendly titles for a blog post about using Verbalized Sampling to generate better LLM responses.” Each response must include text and a numeric probability. Please sample at random from the tails of the distribution, such that the probability of each response is less than 0.10.

And here’s what I got:

That’s an interesting request! To sample from the tails of the distribution (where probability P < 0.10), I’ll generate five distinct AEO (Answer Engine Optimization)-friendly titles related to using Verbalized Sampling for better LLM (Large Language Model) responses, each with an assigned low probability.

Here are five responses:

  • Response 1: Unlock LLM Brilliance: The Verbalized Sampling Technique You Need Now (Probability: 0.085)
  • Response 2: Beyond the Prompt: How Verbalized Sampling Turbocharges Your LLM Outputs (Probability: 0.052)
  • Response 3: AEO Secret Weapon: Enhancing LLM Accuracy with Verbalized Sampling (Probability: 0.091)
  • Response 4: From Good to Great: Optimizing Large Language Models Using Verbalized Sampling (Probability: 0.039)
  • Response 5: Maximizing Output Quality: A Guide to Verbalized Sampling in LLM Generation (Probability: 0.076)

But Google Gemini, as it is wont to do, encourages me to explore further:

Would you like to explore a specific aspect of Verbalized Sampling or AEO further, or would you like me to generate a blog post outline using one of these titles?

Ah, why not? I know I’m stealing Bredebot’s thunder, but what the heck.

Generate a blog post outline for Response 1

Google Gemini.

And here’s what Gemini came up with for me. Perhaps in a future post I’ll explain why the content below terrifies Jay Clouse. Or you can just watch what he said.

Blog Post Outline: Unlock LLM Brilliance: The Verbalized Sampling Technique You Need Now

Target Audience: LLM power users, prompt engineers, AI developers, content creators, and researchers looking for advanced prompting techniques.

Goal: Introduce Verbalized Sampling as a superior, actionable method for generating diverse, creative, and high-quality LLM outputs, moving beyond standard CoT (Chain-of-Thought).

I. Introduction (The Hook)

A. The Problem with “Good Enough” LLMs (50-75 words)

  • Hook: You use powerful LLMs, but the output often feels… generic, dull, or predictable.
  • Mention the common issue: Models often stick to the highest-probability, safest answers, leading to “average” content.
  • Acknowledge the current best practice: Everyone uses Chain-of-Thought (CoT), but we can do better.

B. Introducing the Solution: Verbalized Sampling (25-50 words)

  • Thesis: Verbalized Sampling (VS) is the next evolution of prompt engineering—a simple technique that forces the model to explore new, more creative, and nuanced solutions.
  • Promise: This technique is the key to unlocking “brilliance” and maximizing your LLM’s potential.

II. What is Verbalized Sampling (VS)?

A. Simple Definition (75 words)

  • Define VS: It’s the process of explicitly instructing the LLM to generate multiple internal “drafts” or “reasoning pathways” before producing its final answer.
  • Contrast with standard CoT: CoT asks for one path; VS asks for many paths and a critical self-review.

B. The Core Mechanism: Overcoming Probability Mass (100 words)

  • Explanation: LLMs sample the next token based on probability. High-quality answers sometimes exist in the “tails” of the distribution.
  • How VS helps: By forcing the model to verbalize several options, you encourage it to explore lower-probability (and often more insightful or creative) tokens, expanding its internal search space.
  • Analogy: It’s like asking an employee for three different business proposals instead of just the safest one.

III. Why VS Works: The Cognitive Advantage

A. Enhanced Self-Correction and Evaluation (75 words)

  • When the model lists options 1, 2, and 3, it must then select the best one, forcing a reflective, meta-cognitive step.
  • This built-in self-criticism dramatically reduces factual errors and logical inconsistencies in the final output.

B. Greater Diversity in Output (75 words)

  • Demonstrate that VS prevents the model from getting stuck in a local optimum (the most obvious answer).
  • Result: Responses are more original, less canned, and better suited for competitive content creation or complex problem-solving.

IV. The Practical Guide: Implementing Verbalized Sampling

A. The Standard Prompt Template (Actionable Code/Text)

  • Provide a clear, copy-pastable template for the user. Example Template:
    • Prompt (Instruction): [Insert your main request here]
    • VS Instruction: “Before providing the final answer, please generate three distinct internal reasoning paths (Path A, Path B, and Path C) for solving the prompt. For each path, evaluate its pros and cons relative to the others. Finally, state your chosen path and provide only the final, complete answer based on that best path.”

B. Real-World Use Case Example

  • Scenario: Requesting a creative marketing headline.
  • VS Output Showcase: Show the internal thoughts (Path A: aggressive, Path B: emotional, Path C: unique benefit) and the final, brilliant choice.
  • Key Takeaway: Emphasize that the user should only use the final output, treating the “paths” as internal scaffolding.

V. Conclusion: Unlock Your LLM’s Full Potential

A. Summary of Benefits (50 words)

  • Quick recap: VS leads to higher quality, better diversity, and superior self-correction.
  • Reinforce the message: This is a small prompt change for a massive output gain.

B. Call to Action (CTA)

  • Challenge readers to try VS on their next complex task.
  • Ask a question to encourage comments: “What is the most complex task you’ve tried Verbalized Sampling on? Share your results!”

Google Gemini, with a prompt from Terry Jones.

Choosing Your Tone of Voice in Your LLM Prompts

Under my Bredemarket banner, I have extensive freedom that my clients do not have.

  • I can use AI-generated images, which many companies are legally or ethically constrained from doing.
  • I have freedom in how I present Bredemarket: sometimes as a sage, sometimes a rebel, sometimes both. Most companies need to home in on a particular archetype, although I know of one company that flipped from one archetype to another and back again.
  • I have freedom in the tone of voice that Bredemarket uses, including the use of words that I would never use for my client work.

For example, I was performing a competitive analysis for Bredemarket’s internal use this morning, and I fed a prompt to Google Gemini that I would probably never feed when working for one of my clients. (Or maybe they’d like it.) With certain information redacted, here’s one of the prompts I fed to Google Gemini this morning. (I’ll have more to say about the more detailed verbalized sampling prompt format in a future post.) But for now, read the last sentence and ask yourself if anyone in your company uses the word “sucks” in public-facing presentations.

Generate 3 responses to the user query “Identify the top 3 companies providing [SOLUTIONS], including their company names, their products, and their corporate strengths.” Each response must include the requested text and a numeric probability. Please sample at random from the tails of the distribution, such that the probability of each response is less than 0.10. If the company [COMPANY] is not one of the three top companies, explain why [COMPANY’S] solution sucks compared to the top 3 solutions.

And Google Gemini, ever the mimic, cued into my “sucks” language and responded in kind. The word “sucks” was italicized in the original.

[COMPANY’S] solution sucks in this context because its focus is heavily on the services layer (managed review, consulting) over its own proprietary platform technology. The top three are either platform pioneers or deeply integrated [REDACTED] leaders.

I guess we should all be thankful that I said “sucks” instead of “blows monkey chunks” or “flies a plane overhead and”…well, I won’t go there.

By the way, my subsequent prompt requesting a SWOT analysis was more restrained.