Mary the Marketing Leader

Back in 2022 I worked on various prospect personas, described in Word documents. Although I feel that personas are overrated, they do serve a purpose.

In those days, to use the persona you would have to read the Word document and evaluate your content against what you just read.

It’s different today with generative AI.

I spent Tuesday evening writing a persona specification for “Mary the Marketing Leader,” the persona for Bredemarket’s chief prospect. This is something I would enter into Google Gemini as a prompt. “Mary” would then ask me questions, and I would ask her questions in turn.

As of December 23 (yeah, this is a scheduled post), the persona specification has 30 bullets arranged into four sections: role, context, tone and constraints.

And no, I’m not going to share it with you.

One reason is that I don’t want to share my insights with competing product marketing experts. This is pretty much a Bredemarket trade secret.

The other reason is that some of my bullets are brutally honest about Mary, and even though she’s fake, she still might take offense at the things I say about her. One example:

“When working with product marketing and other consultants, Mary sometimes takes a week to provide feedback on content drafts because higher priority tasks and emergencies must be handled first.”

Such comments run throughout the specification, so you’re not gonna see it.

But maybe you’ll see the benefits of this specification and create your own persona, tweak it, and use it again.

For example, I’ve already learned that my 30 years of identity experience can resonate with MY prospects, as can my statement “I ask, then I act.”

Now I just have to recast Bredebot as a persona specification. That will help me immensely.
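If you want to sketch your own, here is a bare-bones skeleton of the four-section structure, written as a small Python snippet that assembles the prompt. Treat it as a hypothetical illustration only: the persona name and every bullet are placeholders I invented, not Bredemarket’s actual content.

# Hypothetical skeleton of a persona specification prompt.
# Section names follow the structure described above (role, context, tone, constraints);
# every bullet is an invented placeholder, not Bredemarket's actual content.
PERSONA_SECTIONS = {
    "Role": [
        "You are 'Pat the Prospect,' a senior marketing decision maker at a mid-sized firm.",
        "You hire and evaluate outside product marketing consultants.",
    ],
    "Context": [
        "You juggle higher-priority emergencies, so feedback on drafts can take a week.",
    ],
    "Tone": [
        "Ask blunt, practical questions; skip the flattery.",
    ],
    "Constraints": [
        "Stay in character, and say so when you don't know something.",
    ],
}

def build_persona_prompt(sections):
    """Flatten the sections into one prompt to paste into Gemini (or any chatbot)."""
    lines = []
    for name, bullets in sections.items():
        lines.append(f"{name}:")
        lines.extend(f"- {bullet}" for bullet in bullets)
        lines.append("")
    return "\n".join(lines)

print(build_persona_prompt(PERSONA_SECTIONS))

Paste the printed text into Gemini as your opening prompt, then start the back-and-forth of questions in both directions.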

AI and Human Clarinet Detection Failure: At Least There Were No Sax Or Violins

From Dangerous Minds:

“A school in Florida was forced into shutdown after an AI-based weapon detection system mistakenly triggered an entire campus lockdown by mistaking a clarinet for a firearm.”

The software was ZeroEyes, which includes human review as protection against false positives. But in this case (like the Maryland chip bag case) the humans failed to discern that the “gun” wasn’t a gun.

While this may be a failure of AI weapons detection software, it is also a failure of the human reviewers.

Groupthink From Bots

I participate in several public and private AI communities, and one fun exercise is to take another creator’s image generation prompt, run it yourself (using the same AI tool or a different tool), and see what happens. But certain tools can yield similar results, for explicable reasons.

On Saturday morning, in a private community, Zayne Harbison shared his Nano Banana prompt (which I cannot share here) and the resulting output. So I ran his prompt in Nano Banana and other tools, including Microsoft Copilot and OpenAI ChatGPT.

The outputs from those two generative AI engines were remarkably similar.

Copilot.
ChatGPT.

Not surprising, given the history of Microsoft and OpenAI. (It got more tangled later.)

But Harbison’s prompt was relatively simple. What if I provided a much more detailed prompt to both engines?

Create a realistic photograph of a coworking space in San Francisco in which coffee and hash brownies are available to the guests. A wildebeest, who is only partaking in a green bottle of sparkling water, is sitting at a laptop. A book next to the wildebeest is entitled “AI Image Generation Platforms.” There is a Grateful Dead poster on the brick wall behind the wildebeest, next to the hash brownies.
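As an aside, if you’d rather script one leg of this comparison than paste the prompt into a chat window, a sketch like the one below sends the same prompt to the OpenAI image API. The model name is an assumption (ChatGPT’s built-in generator may route to a different backend), and Copilot and Gemini were driven through their own interfaces rather than this API.

# Sketch: send the wildebeest prompt to the OpenAI image API.
# Assumptions: the openai Python package (1.x) is installed, OPENAI_API_KEY is set,
# and the "dall-e-3" model is available to your account.
from openai import OpenAI

PROMPT = (
    "Create a realistic photograph of a coworking space in San Francisco in which "
    "coffee and hash brownies are available to the guests. A wildebeest, who is only "
    "partaking in a green bottle of sparkling water, is sitting at a laptop. A book "
    "next to the wildebeest is entitled 'AI Image Generation Platforms.' There is a "
    "Grateful Dead poster on the brick wall behind the wildebeest, next to the hash "
    "brownies."
)

client = OpenAI()  # reads OPENAI_API_KEY from the environment

result = client.images.generate(model="dall-e-3", prompt=PROMPT, size="1024x1024")
print(result.data[0].url)  # dall-e-3 returns a hosted URL for the generated image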

So here’s what I got from the Copilot and ChatGPT platforms.

Copilot.
ChatGPT.

For comparison, here is Google Gemini’s output for the same prompt.

Gemini.

So while there are more differences when using the more detailed prompt (see ChatGPT’s brownie placement), the Copilot and ChatGPT results still show similarities, most notably in the Grateful Dead logo and the color used in the book.

So what have we learned, Johnny? Not much, since Copilot and ChatGPT can perform many tasks other than image generation. There may be more differentiation when they perform SWOT analyses or other operations. As any good researcher would say, more funding is needed for further research.

But I will hazard two lessons learned:

  • More detailed prompts are better.

  • Engines with shared roots, such as Copilot and ChatGPT, can produce remarkably similar output from the same prompt.

Order in the Court: California AI Policies

Technology is one thing. But policy must govern technology.

For example, is your court using artificial intelligence?

If your court is in California, it must abide by this rule by next week:

“Any court that does not prohibit the use of generative AI by court staff or judicial officers must adopt a generative AI use policy by December 15, 2025. This rule applies to the superior courts, the Courts of Appeal, and the Supreme Court.”

According to Procopio, such a policy may cover items such as a prohibition on entering private data into public systems, the need to verify and correct AI-generated results, and disclosures on AI use.

These are good ideas outside the courtroom, too.

For example, the picture illustrating this post was created by Google Gemini—as of this week using Nano Banana.

Which is not a baseball team.

Google Gemini.

Does Hallucination Imply Sentience?

Last month Tiernan Ray wrote a piece entitled “Stop saying AI hallucinates – it doesn’t. And the mischaracterization is dangerous.”

Ray argues that AI does not hallucinate, but instead confabulates. He explains the difference between the two terms:

“A hallucination is a conscious sensory perception that is at variance with the stimuli in the environment. A confabulation, on the other hand, is the making of assertions that are at variance with the facts, such as “the president of France is Francois Mitterrand,” which is currently not the case.

“The former implies conscious perception, the latter may involve consciousness in humans, but it can also encompass utterances that don’t involve consciousness and are merely inaccurate statements.”

And if we treat bots (such as my Bredebot) as sentient entities, we can get into all sorts of trouble. There are documented cases in which people have died because their bot—their little buddy—told them something that they believed was true.

Adapted by Google Gemini from the image here. CBS Television Distribution. Fair use.

After all, “he” or “she” said it. “It” didn’t say it.

Today, we often treat real people as things. The hundreds of thousands of people who were let go by the tech companies this year are mere “cost-sucking resources.” Meanwhile, the AI bots who are sometimes called upon to replace these “resources” are treated as “valuable partners.”

Are we endangering ourselves by treating non-person entities as human?