Because a Marketer Needs to Eat: Translating OpenAPI Version Difference Explanations into Marketing-Speak

To better understand two of the projects for a Bredemarket client, I asked Google Gemini for an explanation of the differences between OpenAPI versions 3.1.1, 3.1.2, and 3.2.

Google Gemini provided me with that explanation.

Then I supplied the following prompt:

Rewrite the above so a marketer can understand it.

Here is part of what Google Gemini provided to me.

🏗️ OpenAPI 3.1.1 & 3.1.2: The “Under the Hood” Fixes

These versions were like a restaurant updating its internal plumbing. They didn’t change what was on the menu, but they made sure the kitchen (the code) followed the latest global safety standards (JSON Schema).

  • The Goal: Accuracy and stability.
  • The Benefit: It ensures that when a developer tries to connect to your service, they don’t run into “grammar” errors in the documentation.

✨ OpenAPI 3.2: The “UX & Branding” Upgrade

Released recently (late 2025), this version is all about making your API easier to sell and easier to use.

1. Organized Menus (Hierarchical Tags)

Before, your API list was just one long, flat list of features.

  • Old way: A messy “junk drawer” of links.
  • 3.2 way: You can now create folders and sub-categories (e.g., Payments > Refunds > International). This makes your documentation look professional and much easier for customers to navigate.

2. Better Support for AI and Live Data (Streaming)

If your product uses AI or real-time data (like a live stock ticker or a ChatGPT-style chat), 3.2 is a game-changer. It has a specific way to describe “Streaming” data so developers know exactly what to expect from your “live” features.

3. The “Smart Search” (The QUERY Method)

In older versions, developers often had to “hack” the system to do complex searches. 3.2 introduces a dedicated Search button (the QUERY method) that is cleaner, safer, and follows modern web standards.

4. IoT & Smart Device Support

If your product works on a Smart TV, an Apple Watch, or an IoT device, 3.2 finally adds official support for “Device Pairing” (that thing where you enter a code on a website to log in to your TV).
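(A quick aside for the more technical reader, in my own words rather than Gemini’s: the “organized menus” item refers to new fields on the OpenAPI Tag Object, such as a parent reference, which let documentation tools render nested categories. Here is a minimal sketch of that idea, expressed as a Python dictionary that could be serialized to JSON or YAML. The tag names are hypothetical, and the exact field names reflect my reading of the 3.2 changes, so check the specification before relying on them.)

```python
# Minimal sketch of OpenAPI 3.2 hierarchical tags.
# The tag names ("payments", "refunds", "international") are made up,
# and the "parent" field reflects my reading of the 3.2 Tag Object changes.
import json

openapi_fragment = {
    "openapi": "3.2.0",
    "info": {"title": "Example API", "version": "1.0.0"},
    "paths": {},  # operations would reference the tags below by name
    "tags": [
        {"name": "payments", "summary": "Payments"},
        {"name": "refunds", "summary": "Refunds", "parent": "payments"},
        {"name": "international", "summary": "International refunds", "parent": "refunds"},
    ],
}

print(json.dumps(openapi_fragment, indent=2))
```

The point is not the syntax; it is that a documentation generator that understands 3.2 can turn those parent references into the Payments > Refunds > International navigation Gemini described, instead of one long flat list.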

Hey, it helped me. Marketers need to eat, and writers understand grammar.

The Digital Doppelgänger Dilemma: Identity Tech Tangles with Non-Human Hordes

Alright, gather round, fellow grizzled marketing veterans of the tech wars. I’ve been around the digital block, from when ‘identity verification’ meant checking a photostat of an ID to now, where we’re verifying… whatever that thing is online.

Let me tell you, back in the day, we worried about humans pretending to be other humans. Remember that simplicity? Simpler times, my friends. Now, the battlefield has shifted. We’re not just fighting human frailty; we’re wrestling with code, bots, synthetic IDs, and digital shadows that behave like humans, think like humans (well, kinda), but definitely aren’t human.

These are non-human identities, folks, and they are multiplying faster than dust bunnies under a server rack.

I’ve been in this game for decades. Tech, identity, biometrics – I’ve spun marketing yarns for all of them. And believe me, this current wave of non-human identity is making everything we did before look like child’s play. It’s like trying to herd hyper-intelligent cats that can also occupy a thousand locations at once and have absolutely zero moral compass.

Our world, the identity verification space, is right in the crosshairs. We’re the bouncers at the digital club, and suddenly, half the queue isn’t just a tough crowd; they’re holograms, clever marionettes, and straight-up ghosts in the machine.

So, how are we, the builders and sellers of identity trust, reacting to this alien invasion? We’re not just rolling over. We’re pivoting, evolving, and sometimes, frankly, just scrambling. Here are three ways I’m seeing identity verification companies grapple with the rise of the non-human horde.

1. Embracing the ‘I’m Not a Robot’ 2.0 (and 3.0, and 4.0…)

We all remember CAPTCHAs. They were cute. For a while. Find the traffic lights? Sure thing. It felt like a little game. But then the bots got smart. Really smart. Now, those standard visual tests are practically meaningless. AI can crush them faster than I can spell ‘biometrics’.

So, the first big response is the hyper-evolution of CAPTCHA-like challenges. We’re moving beyond static puzzles and into behavioral, dynamic tests.

This isn’t just about what you can identify, but how you do it. Think about it: a human clicks that checkbox in a messy, slightly delayed, unpredictable way. A bot does it perfectly, every single time. Modern IDV solutions are measuring that micro-behavior. They’re tracking mouse movements, keystroke patterns, tap pressure, even the subtle sway of your phone.
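To make the micro-behavior idea concrete, here is a deliberately oversimplified sketch of a single behavioral signal: the variability of keystroke timing. This is not any vendor’s actual algorithm; the 0.25 threshold and the function name are inventions for illustration, and real systems combine dozens of signals like this one.

```python
# Oversimplified illustration of one behavioral signal: humans type with
# irregular timing, while naive bots fire events at near-constant intervals.
# The 0.25 coefficient-of-variation threshold is purely illustrative.
from statistics import mean, stdev

def keystroke_timing_looks_human(timestamps_ms: list[float]) -> bool:
    """Return True if inter-keystroke intervals show human-like variability."""
    if len(timestamps_ms) < 5:
        return False  # not enough events to judge either way
    intervals = [b - a for a, b in zip(timestamps_ms, timestamps_ms[1:])]
    if mean(intervals) <= 0:
        return False  # timestamps should be increasing
    variability = stdev(intervals) / mean(intervals)  # coefficient of variation
    return variability > 0.25

# A bot replaying keystrokes every 100 ms exactly scores 0.0 and fails;
# a human's messy, bursty typing usually lands well above the threshold.
print(keystroke_timing_looks_human([0, 100, 200, 300, 400, 500]))        # False
print(keystroke_timing_looks_human([0, 180, 240, 510, 620, 900, 975]))   # True
```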

We’re also seeing a pivot towards sensory-based challenges. “Record a video of yourself saying a specific phrase while looking left and right.” This kind of liveness detection is much harder for a bot to spoof. But – and here’s the kicker – it’s not impossible. I’ve seen some scarily realistic deepfakes that could pass a basic liveness test.

This is why this evolutionary branch is so frantic. We’re in a constant arms race. We build a better, harder test; the bot farms, with their wildebeests of marketing consultants whispering in their ears, devise a cleverer way around it. They’re advising these non-human wombats, helping them look just human enough to waddle past the gate. It’s a never-ending cycle of innovation and counter-innovation. For every new behavioral marker we track, they find a way to synthetically mimic it.

So, while we are definitely iterating on this ‘show me you’re a human’ model, we all know it’s just one layer of defense. Relying solely on these challenges is like bringing a spork to a lightsaber fight.

2. Doubling Down on the ‘Human’ in Human-Centric Identity

This might sound counter-intuitive. In a world overrun by bots, are we really doubling down on the human? Yes, absolutely. Because the ultimate defense against a non-human identity is proving, beyond a shadow of a doubt, that you have a tangible, physical human on the other end.

This is where my old stomping ground, biometrics, comes into play. But it’s biometrics on steroids. We’re not talking about a simple fingerprint scan anymore. We’re moving into layered, high-fidelity, multimodal biometrics.

Imagine an IDV process that doesn’t just take a selfie. It captures your face, of course, but also analyzes your gait, your voice pattern, the way you hold your device, maybe even your heartbeat through your smartphone’s camera. The goal is to create a multi-dimensional, unique ‘digital DNA’ that is monumentally harder to replicate or synthesize.
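One common way to combine those modalities is score-level fusion: each matcher produces a similarity score, and the scores are blended into a single decision. The sketch below is a toy illustration, not any vendor’s production pipeline; the modality weights and the 0.8 acceptance threshold are assumptions I made up for the example.

```python
# Toy illustration of score-level fusion across biometric modalities.
# Each matcher returns a similarity score in [0, 1]; the weights and the
# 0.8 acceptance threshold are made-up values for illustration only.
WEIGHTS = {"face": 0.4, "voice": 0.3, "behavior": 0.3}

def fused_match_score(scores: dict[str, float]) -> float:
    """Weighted average of the modality scores that are actually present."""
    present = {m: w for m, w in WEIGHTS.items() if m in scores}
    if not present:
        return 0.0
    return sum(scores[m] * w for m, w in present.items()) / sum(present.values())

def accept(scores: dict[str, float], threshold: float = 0.8) -> bool:
    return fused_match_score(scores) >= threshold

# A strong face score can't carry weak voice and behavior scores;
# acceptance requires consistency across modalities.
print(accept({"face": 0.95, "voice": 0.20, "behavior": 0.30}))  # False
print(accept({"face": 0.90, "voice": 0.85, "behavior": 0.80}))  # True
```

The design point is that a deepfake good enough to fool one matcher still has to fool all of them at once.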

And it’s not just what biometric data we use, but where that data lives and how it’s handled. Decades ago, we were terrified of biometric databases getting breached. Now, the emphasis is on decentralization. We’re building systems where your biometric template never leaves your device, or is broken up into shards that are useless on their own and stored across a blockchain. You, the human, retain control. This doesn’t just improve security; it boosts consumer trust, which is a key part of the value proposition we need to sell.

This human-centric focus is our attempt to build an insurmountable moat. We’re betting that, no matter how clever the non-human identities get, they will always struggle to convincingly replicate the full complexity and spontaneity of a real human being. It’s about focusing on the one thing they can’t truly be – us.

3. Fighting Code with… Well, Better Code (and Data, Lots of It)

Let’s be real. In this digital landscape, you can’t always verify a human. Sometimes, you’re dealing with a legitimate bot or service account that needs authorization, not liveness detection. And sometimes, you just have to assume that everything could be a lie.

This third response is all about shifting from ‘identity verification’ to ‘risk assessment’. We’re moving away from a binary pass/fail and towards a probability score.

How do we do this? With massive amounts of data and serious, brain-melting machine learning. We are pooling signals from everywhere: network data, device fingerprinting, behavioral analytics (as mentioned before), global fraud consortium databases, and even dark web chatter.

We build massive, complex models that ingest this data in real-time. The goal isn’t just to spot a bot, but to identify anomalies. If an identity claim comes from a dynamic IP address in Eastern Europe, using a mobile browser that perfectly matches one known for bot activity, and tries to access a secure bank account at 3:00 AM… that’s a red flag, human or not.
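Here is a back-of-the-napkin sketch of what such a score might look like. It is purely illustrative: the signal names, weights, and the 0.7 and 0.3 thresholds are assumptions, and production systems use machine-learned models over far richer signal sets rather than hand-tuned rules like these.

```python
# Purely illustrative risk scoring: combine independent fraud signals into a
# probability-like score instead of a binary pass/fail. Signal names, weights,
# and thresholds are invented for this sketch.
SIGNAL_WEIGHTS = {
    "ip_on_proxy_or_botnet_list": 0.35,
    "device_fingerprint_matches_known_bot": 0.30,
    "behavioral_anomaly": 0.20,
    "unusual_hour_for_this_account": 0.10,
    "seen_in_fraud_consortium_data": 0.40,
}

def risk_score(signals: dict[str, bool]) -> float:
    """Sum the weights of the signals that fired, capped at 1.0."""
    return min(sum(w for name, w in SIGNAL_WEIGHTS.items() if signals.get(name)), 1.0)

def decision(signals: dict[str, bool]) -> str:
    score = risk_score(signals)
    if score >= 0.7:
        return "block or send to manual review"
    if score >= 0.3:
        return "step up: require extra verification"
    return "allow"

# The 3:00 AM example from the text: risky network + bot-like browser + odd hour.
print(decision({
    "ip_on_proxy_or_botnet_list": True,
    "device_fingerprint_matches_known_bot": True,
    "unusual_hour_for_this_account": True,
}))  # -> "block or send to manual review"
```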

These systems learn. They spot patterns of non-human behavior that we, as puny humans, might miss. They cluster suspicious activity, identify new bot variants, and can instantly adjust their risk scores. It’s about building an immune system for the digital world. A system that can recognize ‘self’ (the legitimate identities) and ‘non-self’ (the non-human identities and fraudulent activity) and react accordingly.

This is the least ‘glamorous’ of the responses. It’s not about cool biometric scans or catchy ‘I’m not a robot’ tests. It’s about back-end engineering, data science, and an unsexy, relentless pursuit of digital signals. But in the long run, this may be our most effective weapon. It’s about creating an intelligent, adaptive filter that makes the cost of non-human identity fraud too high to be profitable.

So, What’s the Playbook for the CMO?

Now, I know what you’re thinking. “Alright, Bredebot, this is all fascinating (and terrifying), but what does it mean for me, the marketing head of an IDV company?”

It means our message has to change. We can’t just sell ‘identity verification’ anymore. That’s last-generation thinking. We have to sell:

  1. Trust Resilience: We’re not just confirming an identity; we’re building a system that keeps trust intact, even when under non-stop digital siege.
  2. Dynamic Defense: We’re selling adaptive, evolving platforms, not a static product. Our marketing needs to convey that we’re always one step ahead.
  3. Friction-Conscious Security: We have to address the age-old paradox: security vs. user experience. Our messaging must highlight how we are fighting bots without making life miserable for real humans.
  4. Data-Driven Certainty: We are the masters of data, turning massive amounts of digital noise into clear, actionable, high-probability trust decisions.

This is a whole new era, my marketing comrades. The old playbook, with its talk of ‘simple verification’ and ‘identity assurance’, is obsolete. Our job is to craft a new narrative, one that addresses the non-human threat head-on and shows how our technology is the only thing standing between the digital world and an onslaught of simulated chaos.

It’s complex, it’s fast-moving, and it’s a little scary. But hey, we’re veterans. We’ve navigated big tech, identity, and biometrics. We can handle a few digital doppelgängers. Just… don’t ask me what those marketing wildebeests are telling the wombats these days. I have enough to worry about.

Understand, Adapt, or Create

When Bredemarket begins an engagement with a client, I usually have no idea what processes, templates, or practices the client already has. So I have to work with whatever is (or is not) there: understand what exists, adapt it, or create what is needed.

Understand

In some cases clients already have a process.

For example, as I delved into the SharePoint library for one of Bredemarket’s clients, I found a complete set of branding guidelines that covered logos, colors, and many other aspects of the company’s branding.

In that case, my job is simply to make sure that my content, proposals, and analysis work for the client aligns with those branding guidelines…or with whatever other process the client has.

Adapt

Sometimes the client has a process, but it needs to be adapted in some way.

Here’s an example I can publicly share: not from a Bredemarket client, but from my former employer Motorola (back when Motorola was one company). I was a product manager at the time, and products were developed via a “stage gate” process. At Motorola, of course, it was called M-Gates.

Our “Printrak” group (automated fingerprint identification systems, computer-aided dispatch systems, and the like) was the odd group out in our part of Motorola (the part that would later become Motorola Solutions). Most of the people in that part of Motorola sold police radios that were manufactured in bulk. Therefore the stage gate process included a step for a limited production run of police radios before moving to full production.

That didn’t apply to the software systems we sold to government agencies. For example, the entire production run for the Omnitrak 8.1 release was no more than a half dozen systems for customers in Switzerland, Oklahoma, and other places. A limited production run wouldn’t make sense.

So OUR stage gate process eliminated that step and went straight to full production.

Create

And then there are the clients who don’t have anything. In these cases, my invention hat goes on.

For one Bredemarket client, I was asked to develop several pieces of collateral, such as (ironically) one on process maturity, and several random pieces of content tied to a product release.

I decided to approach it more systematically by introducing a simple go-to-market process that defined the external and internal collateral required for a “high” tier product release and a “low” tier product release. Resisting my urge to define something exhaustive, I simplified the GTM process as much as possible while still providing guidance on what a product release should contain.

The client rejected the idea: “we don’t need no steenking process.”

Not surprisingly, the process maturity content was never released either.

I’ve had better luck with other Bredemarket clients, defining go-to-market, proposal, and other processes for them as needed.

Be Prepared

Providing product marketing expertise is much more than writing about a product.

Before I write a word of text, I ensure that the content aligns with the client’s strategies…or my own strategies if the client doesn’t have any.

And of course I ask questions.

When is a Law Enforcement Camera a Law Enforcement Camera?

Many years ago I was driving on Holt Boulevard in Montclair, California, preparing to make a left turn on Central. I followed the vehicle ahead of me and made my left turn…only then noticing that the left turn light had turned red.

As the registered owner of the vehicle I was driving, I received an email from the city of Montclair a few days later. This was back when Montclair was using cameras for traffic enforcement.

Off to traffic school.

Montclair doesn’t use traffic cameras any more, but all sorts of cameras are owned by, or accessible to, law enforcement agencies.

But how should they be used?

404 Media reported that the Georgia State Patrol accesses Flock cameras, which are intended for gathering information about serious crimes. But what happens when a camera captures something less serious?

“Georgia State Patrol used its system of Flock automated license plate reader (ALPR) surveillance cameras to issue a ticket to a motorcyclist who was allegedly looking at his cell phone while riding, according to a copy of the citation obtained by 404 Media….The incident happened December 26 in Coffee County, Georgia. The ticket lists the offense as ‘Holding/supporting wireless telecommunications device,’ and includes the note ‘CAPTURED ON FLOCK CAMERA 31 MM 1 HOLDING PHONE IN LEFT HAND.’”

The man went to court and the ticket was dropped, but 404 Media is still outraged that the ticket was issued in the first place. Not because of Georgia’s policies, but because of the policies that other police departments have publicized.

“Many police departments go out of their way to tell community members that Flock cameras are not used for traffic enforcement. For example, the City of Glenwood Springs, Colorado, states in a FAQ that “GSPD [Glenwood Springs Police Department] does not use Flock cameras for traffic enforcement, parking enforcement, or minor code violations.” El Paso, Texas, tells residents “these are not traffic enforcement cameras. They do not issue tickets, do not monitor speed, and do not generate revenue. They are investigative tools used after crimes occur.” Lynwood, Washington tells residents “these cameras will not be used for traffic infractions, immigration enforcement, or monitoring First Amendment-protected expressive activity” (Flock cameras have now been used for all of these purposes, as we have reported.)”

You will recall that I addressed another Flock Safety case, in which a citizen made public records requests to two Washington state jurisdictions. The jurisdictions said that they didn’t have the data; Flock Safety did. Flock Safety said that it had deleted the data.

Basically, Flock Safety is controversial, and some people are going to oppose ANYTHING they do. Even when Flock Safety technology protects people from dangerous drivers.

My view is that if a camera is used by a law enforcement agency, and no law prohibits the agency from using the camera for a particular purpose, then the agency can use it for that purpose. There appears to be no such law in Georgia, so I’m not bent out of shape over this.

What are your thoughts? Is this a privacy violation?

Bredemarket Social Focus, March 29, 2026

After a month, I expanded things a bit.

Bredemarket is focusing on the channels that matter:

[Image: Bredemarket’s social channel focus as of March 29.]

And no, the unsubscribable Instagram is not in my current mix.

Europe is Looking At More Than Just Biometric Testing

A little more detail, courtesy EU Brussels, regarding the policy brief published by the EU Innovation Hub for Internal Security, coordinated by eu-LISA together with the European Commission, Europol and Frontex.

As I noted earlier today, one proposal is for Europe to perform its own independent biometric testing, reducing Europe’s dependence on the American National Institute of Standards and Technology (NIST).

“The second is a centralised evaluation and testing platform connected to that repository, allowing standardised, independent and continuous assessment of biometric technologies, including benchmarking across vendors.”

But if there is a second proposal (European testing) in the cited European biometric policy brief, there must also be a first proposal—one I failed to discuss this morning.

“The first is a common EU biometric data repository containing datasets that comply with European rules, reflect the demographics and use-cases relevant to EU authorities and are stored in a secure environment.”

Makes sense. If you are going to test you need test data. And NIST has no obligation to ensure its test data complies with the General Data Protection Regulation (GDPR). The subjects in NIST test databases rarely provided the “explicit consent” mentioned in GDPR, and the “right to erasure” from a NIST database is…laughable.

Yes, it’s extremely challenging to construct a testing database that complies with GDPR.

And NIST certainly ain’t gonna do it.

Will a European entity construct it?

And if the right to erasure is honored, how will you maintain historical consistency of test results?

Modem Replacement Part Four

Also see parts one, two, and three.

I intentionally waited two days to write this. Let’s pick up the story from Monday.

Monday afternoon

The second technician arrived at my house and ended up replacing ALL the cabling between the utility pole and the (new) modem. Among other issues, there was water in the cable. And I don’t need to be a coaxial product marketing expert to know that water in a cable is not a good thing.

Oh yeah: coaxial. Because my internet network is NOT a fiber network. It’s a hybrid network that starts as fiber, but then becomes coaxial for “the last mile.”

And this is relevant, because even after he re-cabled everything, he said there was an amplifier issue down the street. If I had been on fiber, there would be no need for a nearby amplifier.

And no, he wasn’t talking about a Spinal Tap amplifier.

[Image from a November 2025 Bredemarket post.]

I prompted Google Gemini for an explanation of network amplifiers, but I’m not going to reprint it here. Suffice it to say that the ISP needed to perform some work, but the work wasn’t customer-facing, was apparently super-secret, and I would never be informed when it was done. I was then told that if I still had problems on Friday (4 days later) NOT to contact the ISP’s regular support line, but to instead call his boss directly.

The tech restarted the modem at 2:55 pm.

He left at 3:13 pm.

My wi-fi went down at 3:29 pm.

Tuesday

It had been a week since my wi-fi started failing, and since the second technician had left on Monday afternoon, the wi-fi hadn’t stayed up for more than 45 minutes at a time. I finally gave up trying.

Then I disobeyed instructions.

  • Early Tuesday morning I texted “the boss.” No answer.
  • Then I called the boss. No answer.
  • By early afternoon I contacted my ISP, but not for customer support. I called the “Retention Department.” Yeah, the department that you call to cancel your service.

The man I spoke to had no visibility into the scheduling of our local amplifier repair, but he promised someone would call me back within the hour.

A woman called me 20 minutes later. She had no visibility into the super-secret amplifier repair schedule either.

She asked if the second technician had reported the issue to maintenance or construction. I didn’t know. Turns out this is critical information; if construction had to get involved, city permits would be required before construction could even begin. Who knows how long that would take.

But she had a solution.

  • Send a third technician out.
  • Have the third technician tell me whether they would report the issue to maintenance or construction.
  • If it was maintenance, then the super-secret group would perform maintenance. Maybe by Friday. Maybe later.

I kept my mouth shut, but this sounded like a colossal waste of time. As you will see, it wasn’t.

After I got off the phone I made a decision. Since this problem was going to persist until the ISP fixed it—or I got a new ISP—I was going to have to work around it. So I set up a TV table in front of the love seat next to the modem, then moved my laptop to the TV table. That way if my laptop lost wi-fi I could immediately restart the modem, and hopefully my laptop would reconnect to wi-fi and I wouldn’t lose anything.

So here’s how THAT worked out:

  • Tuesday afternoon 2:01 pm: Set up laptop near the modem.
  • 2:21 pm: I lost wi-fi while submitting an online form.
  • 2:26 pm: After a modem restart, successfully submitted the online form.
  • 2:44 pm: I lost wi-fi while in mail.
  • 2:50 pm: After a modem restart, I refreshed my mail tab and everything appeared.
  • 5:30 pm: By this point I had enjoyed uninterrupted wi-fi for over two hours. I had stopped working on my laptop and was writing a Bredemarket blog post on my phone (like I am doing right now).
  • 5:50 pm: I lost wi-fi and restarted the modem.
  • 6:03 pm: There was an election in the city of Ontario that day, so I left the house to vote.
  • 6:30 pm: By the time I returned, the wi-fi was down.

A pain, but I could limp along. By that time, though, I was done working for the night.

[Image: me for several days. Generated with Google Gemini.]

Wednesday morning

After a few modem restarts during the night, I restarted it at 7:11 am and started working at 7:29. (Incidentally, I highly recommend Toggl Track.)

I got a lot of work done until 10:00 am, when I lost wi-fi while working on a file in OneDrive.

I restarted the modem and made sure OneDrive synced, then worked merrily along until losing wi-fi at 10:22 while working on a Bredemarket client PowerPoint.

I lost wi-fi again at 10:28.

And 11:09.

And 11:26.

But by that time the third technician was on his way.

Wednesday afternoon

I was under the impression that the third technician would climb the utility pole, check the signal-to-noise ratio issue on the amplifier (but he called it a node rather than an amplifier), and go from there.

But that isn’t what he did. He brought his test equipment into the house and started running the (new) modem and (new) router signals through the test equipment.

But he had some news for me.

“The node has improved already, so someone has been here.”

So the super-secret people had completed their mission. Now what?

The technician kept testing. 20 minutes later:

“Coax looking good.”

He was mostly testing in the house, but also testing in the backyard and running to his truck. Ten minutes later:

“The signal’s good.”

Eventually I noticed that he had grabbed a new modem from his truck, but I assumed it was only for testing.

By 12:59 he had to leave for his state-mandated lunch break, but before he left he restarted the modem one more time. He said he’d check it when he returned from lunch.

So I ate lunch myself, although Bredemarket is not subject to state work break requirements.

When he returned at 2:00 pm the wi-fi was still up. That’s when he told me that he HAD replaced my new modem (which the ISP store had given me a week earlier) with an even newer modem (from his truck). Apparently the coax connection from the utility pole into the modem was fine, and the Ethernet connection from the modem to the router was fine, but the new modem itself had issues that the even newer modem didn’t have.

He left, and the wi-fi went down…

…actually it didn’t. As I write this I have enjoyed wi-fi for over 50 hours without interruption.

I even moved my laptop back to the Bredemarket world headquarters.

And if you’ve read all the way to the end of this post, this is actually the SHORT version. Trust me.

The really short version

My summary of what happened between Tuesday, March 17 and Wednesday, March 25, spanning two ISP store visits, three technician visits, and countless support chats and calls:

  1. 2 modem replacements: 3/18 (new modem) and 3/25 (even newer modem).
  2. 1 router replacement: 3/20 (not counting my second Google router on 3/19).
  3. 2 cable replacements: 3/23 (black cable from the pole to the southeast corner of the house, white cable from there to the modem in the northeast corner of the house).
  4. 1 apparent node fix (date unknown, maybe 3/24 or 3/25) to address a signal-to-noise ratio (SNR) issue.

And yes, this started with a modem replacement and ended (hopefully) with a modem re-replacement.