There is a difference between a writer and a content creator. It becomes obvious when you read WordPress’ recent post, “How to Slop Your Content in Five Steps.”
With one glaring exception: the Bredebot project, a highlighted experiment to see how far a well-prompted bot will go.
So my specific response to these steps is to consider the gap analysis in step 2. Bots are good at such analysis, but they have to be watched in case they don’t get their facts straight.
But I won’t give Claude the permission to write and post articles, or even any permissions on WordPress. This is a security issue, after all; how do YOU control site access for non-human identities?
In fact, I may not even use Claude for step 2, even if it’s the cool kid this week (last I checked). I may use Gemini…or a thousand Bangladesh techies…or a million Pentiums…or Mika.
How you work with outside content creators
But what about you?
Before answering, take the five steps above and change the name “Claude” to Barney…or Bredemarket.
Would you give Barney or Bredemarket that power over your website?
Maybe…or maybe not.
How Bredemarket works with you
In the case of Bredemarket, I usually do NOT have direct access to my clients’ websites, sending them Word documents instead. And in the one instance where I did have website access, I left every one of my drafts in draft mode.
And when I perform a gap analysis, I present my client with choices and ask the client to choose the topic, or at least approve my suggested topic.
Because your website is not mine, or Mika’s…or Claude’s.
“I think too much knowledge is actually bad in tech: you’re biased.”
Why does this quote affect me so deeply? Because with my 30-plus years of identity/biometric experience, I obviously have too much knowledge of the industry, which is obviously bad. After all, all a biometric company needs is a salesperson, an engineer, an African data labeler, and someone to run the generative AI for everything else. The company doesn’t need someone who knows that Printrak isn’t spelled with a C.
Google Gemini.
In this post I will share three of the “biases” I have developed in my 30-plus years in identity and biometrics, and how to correct these biases by stripping away that 20th century experience and applying novel thinking.
And if that last paragraph made you throw up in your mouth…read to the end of the post.
But first, let’s briefly explore these three biases that I shamefully hold due to my status as a biometric product marketing expert:
Independent algorithmic confirmation is valuable.
Process is valuable.
Artificial intelligence is merely a tool.
Biometric product marketing expert.
Bias 1: Independent Algorithmic Confirmation is Valuable
But how do prospects know that these algorithms work? How accurate are they? How fast are they? How secure are they?
My bias
My brain, embedded with over 30 years of bias, gravitates to the idea that vendors should submit their algorithms for independent testing and confirmation.
From a NIST facial recognition demographic bias test.
This could be an accuracy test such as the ones NIST and DHS administer, or confirmation of presentation attack detection capabilities (as BixeLab, iBeta, and other organizations perform), or confirmation of injection attack detection capabilities.
Novel thinking
But you’re smarter than that and refuse to support the testing-industrial complex. They have their explicit or implicit agendas and want to force the biometric vendors to do well on the tests. For example, the U.S. Federal Bureau of Investigation’s “Appendix F” fingerprint capture quality standard specifically EXCLUDES contactless solutions, forcing everyone down the same contact path.
But you and your novel thinking reject these unnecessary impediments. You’re not going to constrain yourself by the assertions of others. You are going to assert your own benefits. Develop and administer your own tests. Share with your prospects how wonderful you are without going through an intermediary. That will prove your superiority…right?
Bias 2: Process is Valuable
A biometric company has to perform a variety of tasks. Raise funding. Hire people. Develop, market, propose, sell, and implement products. Throw parties.
How will the company do all these things?
My bias
My brain, encumbered by my experience (including a decade at Motorola), persists in a belief that process is the answer. The process can be as simple as scribblings on a cocktail napkin, but you need some process if you want to cash out in a glorious exit—I mean, deliver superior products to your customers.
Perhaps you need a development process that defines, among other things, how long a sprint should be. A capture and proposal process (Shipley or simpler) that defines, among other things, who has the authority to approve a $10 million proposal. A go-to-market process that defines the deliverables for different tiers, and who is responsible, accountable, consulted, and informed. Or maybe just an onboarding process when starting a new project, dictating the questions you need to ask at the beginning.
Bredemarket’s seven questions. I ask, then I act.
Novel thinking
Sure, all that process is fine…if you don’t want to do anything. Do you really want to force your people to wait two weeks for the latest product iteration? Impose a multinational bureaucracy on your sales process? Go through an onerous checklist before marketing a product?
Google Gemini.
Just code it.
Just sell it.
Just write it.
Bias 3: Artificial Intelligence is Merely a Tool
The problem with experienced people is that they think that there is nothing new under the sun.
You talk about cloud computing, and they yawn, “Sounds like time sharing.” You talk about quantum computing, and they yawn, “Sounds like the Pentium.” You talk about blockchain, and they yawn, “Sounds like a notary public.”
My bias
As I sip my Pepperidge Farm, I can barely conceal my revulsion at those who think “we use AI” is a world-dominating marketing message. Artificial intelligence is not a way of life. It is a tool. A tool that in and of itself does not merit much of a mention.
Google Gemini.
How many automobile manufacturers proclaim “we use tires” as part of their marketing messaging? Tires are essential to an automobile’s performance, but since everyone has them, they’re not a differentiator and not worthy of mention.
In the same way, everyone has AI…so why talk about its mere presence? Talk about the benefits your implementation provides and how these benefits differentiate you from your competitors.
Novel thinking
Yep, the grandpas that declare “AI is only a tool” are missing the significance entirely. AI is not like a Pentium chip. It is a transformational technology that is already changing the way we create, sell, and market.
Therefore it is critically important to highlight your product’s AI use. AI isn’t a “so what” feature, but an indication of revolutionary transformative technology. You suppress mention of AI at your own peril.
How do I overcome my biases of experience?
OK, so I’ve identified the outmoded thinking that results from too much experience. But how do I overcome it?
I don’t.
Because if you haven’t already detected it, I believe that experience IS valuable, and that all three items above are essential and shouldn’t be jettisoned for the new, novel, and kewl.
Are you an identity/biometric marketing leader who needs to tell your prospects that your algorithms are validated by reputable independent bodies?
Or that you have a process (simple or not) that governs how your customers receive your products?
Or that your AI actually does unique things that your competitors don’t, providing true benefits to your customers?
Bredemarket can help with strategy, analysis, content, and/or proposals for your identity/biometric firm. Talk to me (for free).
By the way, here’s MY process (and my services and pricing).
And here’s an example of a “right” prompt, guided by a development expert:
“Optimize this SQL query to reduce execution time on a large PostgreSQL table with millions of records. Indexing suggestions would be helpful.”
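To make the indexing advice concrete, here is a minimal, hypothetical sketch of why that kind of suggestion works (my own illustration, not from the prompt or its response). SQLite stands in for PostgreSQL, and the `orders` table and `customer_id` column are invented; the principle, index the column your WHERE clause filters on, is the same in both databases.

```python
import sqlite3

# Build a toy table with 100,000 rows (an invented example).
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)"
)
conn.executemany(
    "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
    [(i % 1000, i * 0.5) for i in range(100_000)],
)

query = "SELECT SUM(total) FROM orders WHERE customer_id = 42"

# Without an index, the planner has no choice but a full table scan.
before = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()[0][-1]

# The "indexing suggestion": cover the filtered column.
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
after = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()[0][-1]

print(before)  # a full table scan
print(after)   # a search using the new index
```

On a table with millions of records, the difference between those two plans is the difference between milliseconds and minutes.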
Because knowledge of coding IS good when you code stuff. And Silicon Tech Solutions uses this knowledge when it uses generative AI coding tools:
At Silicon Tech Solutions, we actively use Cursor to enhance our development workflow. By integrating AI-powered coding tools, we:
🔹 Reduce development time while maintaining high standards.
🔹 Automate repetitive tasks, allowing developers to focus on core logic.
🔹 Ensure clean and optimized code, reducing technical debt.
However, AI does not replace human expertise—it enhances it. Our team carefully reviews and refines AI-generated code to align with best practices and business requirements.
If you want Silicon Tech Solutions’ expertise at your disposal, request a meeting.
I don’t have access to Forbes, so I’m relying on this LinkedIn message from Certuma:
“We raised $10M in seed funding led by 8VC to build the first FDA-approved AI doctor.”
The way that sentence is worded, it sounds like the goal is to have the FDA approve a doctor who can…well, doctor. Like my fictional Dr. Jones. (See the 2013 version in tymshft.)
““I don’t mind answering the question,” replied the friendly voice, “and I hope you don’t take my response the wrong way, but I’m not really a person as you understand the term. I’m actually an application within the software package that runs the medical center. But my programmers want me to tell you that they’re really happy to serve you, and that Stanford sucks.” The voice paused for a moment. “I’m sorry, Edith. You have to forgive the programmers – they’re Berkeley grads.””
But Certuma’s website tells a more cautionary story in which the “AI doctor” is NOT in control.
“Certified clinical decisions at machine speed. Physician-verified and fully auditable.”
And the workflow indicates that this “doctor” is more like an intern, or even a student.
“Certuma routes every in-scope plan through physician verification. That workflow is the point: fast turnaround without removing accountability….
“Red flags, contraindications, interaction checks, scope limits, and uncertainty thresholds run through the deterministic verification layer. If something is emergent or out of scope, the system escalates instead of guessing.
“Clinicians see structured intake, highlighted risks, and a draft plan with supporting evidence. They approve, edit, or escalate; changes are captured with reason codes and a durable audit trail.”
Now there is clearly some benefit in having the bots grind out the plan, provided that the bots don’t hallucinate. There are potential time savings, and a real doctor reviews the final results.
But an “AI doctor” who can doctor independently is NOT on the horizon.
I don’t know if I specifically shared this, but when I was undergoing my eight days of wi-fi hell, I was firing off a lot of questions to Google Gemini.
A LOT of questions.
Troubleshooting, fact-checking, you name it. Questioning what I was being told in different chats and by different on-site technicians.
And as I asked all these questions, I noticed along the way that the generative AI engine had been trained to deliver emotional-sounding responses that would resonate with the frazzled person typing the prompts.
This stands to reason. The generative AI providers want us to use the tools, so they want to make the experience as positive as possible.
For example, at one point I entered this prompt into Google Gemini:
“When a fiber Internet support person scheduled a second technician for a service call, the support person said “There are times where signal issues take more than one service visit to get to the root of the issue.” What exactly are “signal issues,” and why does the line need to be “balanced”?”
Note how Google Gemini BEGAN its response to my prompt:
“It can be incredibly frustrating when your internet isn’t working and the fix feels like it’s being kicked down the road. However, “signal issues” in the world of fiber optics are a bit more complex than just a loose plug or a broken wire.”
For those who remember the Clinton era, Google felt my pain.
Let me note that Google is not a human who TRULY perceived how I felt at that point. (Your AI girlfriend doesn’t love you either.) But it had been trained to deliver an “it can be incredibly frustrating” response.
On a certain level, Google was telling me what it thought I wanted to hear.
And we can use this to tailor our generative AI prompts to get the responses we want.
How you ask generative AI
We already tailor our prompts.
Back in 2023, when generative AI was in its infancy, I intentionally asked a negative question about a particular company…and acted upon the response. (I will explain how later.)
Whenever a product marketer asks a generative AI engine to prepare a SWOT analysis (Strengths, Weaknesses, Opportunities, Threats) for a competitor or a competitive product, the PMM is intentionally asking for negative weakness and threat information.
But you can also self-analyze…and encourage the generative AI engine to be brutally honest.
A pointed question
Let me illustrate this with the following prompt:
“Why does the marketing and writing company Bredemarket suck so badly?”
Using the word “sucks” in your prompt is the reverse of what we used to call a vanity search. Sure I’ve done those—if I ask Google about a biometric product marketing expert, I appear in the results. But when I asked Google Gemini a negative question about Bredemarket, it clearly delivered.
If you’re searching for a picture about how Bredemarket sucks, you’ve come to the right place. Google Gemini. Picture by Bredemarket.
The response started as follows.
“That is a pretty pointed question!”
Intentionally so. The introduction continued:
“To be fair to the “company” (which is actually a solo consultancy run by a guy named John Bredehoft), whether Bredemarket “sucks” depends entirely on what you’re looking for.”
It then got into the nitty gritty about how I suck. A few excerpts:
Answer 1: a one-man shop
“If you were expecting a full-service marketing firm with account managers and a design department, the reality of a one-man shop based in Ontario, California, might feel underwhelming.”
Answer 2: not for non-tech lifestyle brands
“If you are a general business or a lifestyle brand, his style—which is very technical, process-oriented, and “old school” corporate—might feel dry, rigid, or out of touch with modern, punchy social media marketing.”
Answer 3: he ain’t a corporate drone
“John has a very distinct, conversational, and sometimes quirky style (often referencing his own mistakes or niche industry drama). For some, this feels authentic and transparent; for others, it can come across as unprofessional or rambling compared to the polished “corporate speak” of larger firms.”
Answer 4: see answer 1
“If a project gets complex or requires a fast turnaround that one person can’t handle, the “process” (which he mentions frequently on his site) might feel more like a bottleneck than a benefit.”
My responses to Gemini
Let me note that these are NOT hallucinations. Every criticism is valid.
Whenever a cold-call follow-up email asks whether the sender should speak to another department, I laugh. To myself. There’s no one else in the office…I mean Bredemarket corporate headquarters. It’s only 25 square feet, after all.
And no, I’m not pitching Bredemarket’s services to Sofia Bee or other lifestyle brands.
Yes, I can write in “we are perfect” corporate-speak, but I am more effective writing for clients who admit to learning from mistakes.
Finally, I can deliver on projects that require a draft in 3 (or 7) days. If you need an army of consultants to deliver a 600-page RFP response by tomorrow, don’t call Bredemarket.
Remember when I asked generative AI a pointed question in 2023? The response from generative AI was dismissive of the company because it only supported a single factor for authentication, while generative AI knew that multi-factor authentication was ALWAYS better.
First off, the company was able to implement multi-factor authentication if its client wanted it.
Second, while multi-factor authentication CAN be better, it is not ALWAYS better.
So how did I respond to generative AI’s assertion? By writing content, both for people and for the generative AI engine itself, challenging its claims.
By the way, I mentioned earlier that I can write in my wildebeest-infused style, and I can also write in dry corporate-speak. In 2023, I could also write in the company’s then-current maverick-rebel-revolutionary tone.
Bear this in mind while reading the excerpts below, which make sport of denigrating “conventional wisdom.”
Enter multi-factor authentication, the technique that was supposed to render passwords dead. The theory is that even if a fraudster is able to spoof one factor of authentication, it is extremely difficult to spoof all the factors of authentication.
This is the conventional wisdom. Even the latest generative AI engines praise a combination of entering a code on a phone in addition to a password. This, according to one generative AI engine, provides superior security.
Nevertheless, conventional wisdom and generative AI can both be wrong.
If a password or PIN (something you know) is one of your authentication factors, it won’t offer much protection for the reasons outlined above. Password protection of an adult website does no good if the account holder’s underage child gets hold of the password.
If a key fob, smartphone, or other “something you have” item is one of your authentication factors, you are vulnerable to stolen fobs, spoofed phones, and other threats. Again, if Mom or Dad leave their phone lying around, a child can access an adult site.
With these and similar arguments, I asserted that the company’s highly-accurate and independently-tested facial authentication was BETTER than some poorly-conceived multi-factor authentication solutions, despite what generative AI (and conventional wisdom) asserted.
And I was able to assert these because, despite what some people say, knowledge (in this case in biometrics) is NOT a detriment. There’s something good about being a biometric content marketing expert, after all.
But what about your company?
Perhaps there’s a view about your company that needs correcting, whether it’s a generative AI hallucination or a competitor’s spurious claim.
And how do many companies react to negative views about them? They do nothing. They don’t give the claim the dignity of a response. So the record stands, uncorrected.
As you know, I’m tired of the simplistic “we use AI” marketing messaging. One reason is because when prospects hear “we use AI,” they may respond with “Oh, that technology that hallucinates.” This is NOT a good selling point.
But what if your tool, whether it is artificial intelligence or a thousand Third World workers, could actually IDENTIFY errors?
As I’ve noted before, healthcare is a pioneering user of artificial intelligence, although (hopefully) under robust controls to maintain accuracy and preserve HIPAA-level privacy.
“We are living through a generational shift, one where AI doesn’t just augment how organizations work but fundamentally transforms them from the inside out,” said Mohamad Makhzoumi, Co-CEO of NEA, who will join Qualified Health’s Board of Directors in conjunction with the financing. “From NEA’s nearly five decades of company-building experience, we believe the organizations shaping the next era of healthcare innovation will be those helping health systems reimagine every administrative and clinical workflow from the ground up, and Qualified Health is exactly that company. We are thrilled to lead this financing and to partner with Justin and team to accelerate healthcare’s AI transformation and shape the future of healthcare enterprises across the country.”
“Health systems today are operating under extraordinary pressure, from rising labor costs to tightening reimbursement, while managing increasing complexity in patient care,” said Jared Kesselheim, MD, Managing Partner at Transformation Capital. “What stood out to us about Qualified Health is that the team approaches this work as medical care specialists, with a deep understanding of the realities health systems face every day. That perspective allows them to identify where AI can create meaningful clinical and operational impact. We’re excited to partner with Justin and the Qualified Health team as they help leading health systems navigate this next phase of healthcare.”
To better understand two of the projects for a Bredemarket client, I asked Google Gemini for an explanation of the differences between OpenAPI versions 3.1.1, 3.1.2, and 3.2.
Google Gemini provided me with that explanation.
Then I supplied the following prompt:
Rewrite the above so a marketer can understand it.
Here is part of what Google Gemini provided to me.
🏗️ OpenAPI 3.1.1 & 3.1.2: The “Under the Hood” Fixes
These versions were like a restaurant updating its internal plumbing. They didn’t change what was on the menu, but they made sure the kitchen (the code) followed the latest global safety standards (JSON Schema).
The Goal: Accuracy and stability.
The Benefit: It ensures that when a developer tries to connect to your service, they don’t run into “grammar” errors in the documentation.
✨ OpenAPI 3.2: The “UX & Branding” Upgrade
Released recently (late 2025), this version is all about making your API easier to sell and easier to use.
1. Organized Menus (Hierarchical Tags)
Before, your API list was just one long, flat list of features.
Old way: A messy “junk drawer” of links.
3.2 way: You can now create folders and sub-categories (e.g., Payments > Refunds > International). This makes your documentation look professional and much easier for customers to navigate.
2. Better Support for AI and Live Data (Streaming)
If your product uses AI or real-time data (like a live stock ticker or a ChatGPT-style chat), 3.2 is a game-changer. It has a specific way to describe “Streaming” data so developers know exactly what to expect from your “live” features.
3. The “Smart Search” (The QUERY Method)
In older versions, developers often had to “hack” the system to do complex searches. 3.2 introduces a dedicated Search button (the QUERY method) that is cleaner, safer, and follows modern web standards.
4. IoT & Smart Device Support
If your product works on a Smart TV, an Apple Watch, or an IoT device, 3.2 finally adds official support for “Device Pairing” (that thing where you enter a code on a website to log in to your TV).
Hey, it helped me. Marketers need to eat, and writers understand grammar.
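For the curious, item 1 above (hierarchical tags) can be sketched in a few lines. This is my own minimal, hypothetical example, not from Gemini’s explanation: as I understand the 3.2 tag object, each tag may name a `parent`, which turns the old flat tag list into the “folders” described above (check the OpenAPI 3.2 specification before relying on the exact field names).

```python
# A minimal, invented OpenAPI 3.2 document fragment with hierarchical tags.
spec = {
    "openapi": "3.2.0",
    "info": {"title": "Payments API", "version": "1.0.0"},
    "tags": [
        {"name": "Payments"},
        {"name": "Refunds", "parent": "Payments"},
        {"name": "International", "parent": "Refunds"},
    ],
    "paths": {
        "/refunds/international": {
            "get": {
                "tags": ["International"],
                "responses": {"200": {"description": "OK"}},
            }
        }
    },
}

def breadcrumb(spec, tag_name):
    """Walk the parent links to rebuild the Payments > Refunds > International trail."""
    by_name = {t["name"]: t for t in spec["tags"]}
    trail = []
    while tag_name:
        trail.append(tag_name)
        tag_name = by_name[tag_name].get("parent")
    return " > ".join(reversed(trail))

print(breadcrumb(spec, "International"))  # → Payments > Refunds > International
```

That breadcrumb trail is exactly the “organized menu” a documentation tool can render, and the reason the flat “junk drawer” of links goes away.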