Proving Humanity

Does it sometimes seem like humanity is obsolete?

There are seemingly more non-human identities than human ones. Bots are selling, and bots are buying.

And we are preparing for this.

So humanity is no longer necessary.

Or is it?

There are pockets where people value humanity and think that a human brings something that a bot never could.

But before we stop relying on bots and start relying on humans, we need to know whether those humans are real, or if they are bots themselves.

To do this, we have to know who those humans are—proving humanity.


Identifying Non-Human Identities with SPIFFE and SPIRE

I once tried to see if non-human identities could verify and authenticate with the six human factors. (Yeah, six. Watch for the book.)

Definitions

In reality, non-human identities use entirely different authentication methods…with their own acronyms. For example:

  • SPIFFE is the Secure Production Identity Framework For Everyone.
  • SPIRE is the SPIFFE Runtime Environment.

So what are SPIFFE and SPIRE?

“SPIFFE and SPIRE provide strongly attested, cryptographic identities to workloads across a wide variety of platforms”

That wide variety of platforms is distributed.

“SPIFFE and SPIRE provide a uniform identity control plane across modern and heterogeneous infrastructure. Since software and application architectures have grown substantially, they are spread across virtual machines in public clouds and private data centers.”

Distinguishing between the two, the SPIFFE Project “defines a framework and set of standards for identifying and securing communications between application services,” while the runtime environment SPIRE “is a toolchain of APIs for establishing trust between software systems across a wide variety of hosting platforms.”
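For flavor, the identities themselves are simple: a SPIFFE ID is just a URI of the form spiffe://&lt;trust-domain&gt;/&lt;workload-path&gt;. Here is a minimal sketch (my own illustration, not SPIRE tooling; the example ID is hypothetical) that splits one apart:

```python
from urllib.parse import urlparse

def parse_spiffe_id(uri: str) -> tuple[str, str]:
    """Split a SPIFFE ID (spiffe://<trust-domain>/<path>) into its parts."""
    parsed = urlparse(uri)
    if parsed.scheme != "spiffe" or not parsed.netloc:
        raise ValueError(f"not a SPIFFE ID: {uri}")
    return parsed.netloc, parsed.path

# Hypothetical workload identity, for illustration only.
print(parse_spiffe_id("spiffe://example.org/billing/payments"))
# → ('example.org', '/billing/payments')
```

In a real deployment, SPIRE hands a workload an ID like this inside an X.509 or JWT SVID after attesting the workload, rather than the workload claiming it for itself.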

Benefits

Forget all that. Let’s get to the benefits.

  • Enable defense in depth: Provide strongly attested identities to reduce the likelihood of breach through credential compromise.
  • Reduce operational complexity: Consistent, automated management of identity reduces the burden on DevOps teams.
  • Interoperability: Simplifies the technical aspects of full interoperability across multiple stacks.
  • Compliance and auditability: Enables mutually authenticated TLS and multiple roots of trust to meet regulatory requirements.

Use at Uber

But does anyone use it? Yes. Take Uber:

“We use SPIRE at Uber to provide identity to workloads running in multiple clouds (GCP, OCI, AWS, on-premise) for a variety of jobs, including stateless services, stateful storage, batch and streaming jobs, CI jobs, workflow executions, infrastructure services, and more. We have worked with the open source community since the early stages of the project in mid-2018 to address production readiness and scalability concerns.”

More here.

Now this is admittedly a whole new world for me, far afield from the usual 12345 and gummy arguments where I usually reside. But since bots will soon outnumber people (if they don’t already), we had all better learn it.

WordPress and Claude: No, Yes, Maybe, No, No…and No

There is a difference between a writer and a content creator. It becomes obvious when you read WordPress’ recent post, “How to Slop Your Content in Five Steps.”

Actually, that’s not the title.

Claude the content creator

Whoever or whatever wrote WordPress’ post used a more AEO-friendly title: “How to Build an Endless Stream of Content Ideas with WordPress and Claude.”

And there are five steps.

  • Step 1: Connect Claude to your WordPress.com website.
  • Step 2: Ask Claude to review your website and find content gaps.
  • Step 3: Ask Claude to prioritize topics and create a content calendar.
  • Step 4: Create Claude-assisted outlines and articles.
  • Step 5: Ask Claude to add the article to WordPress.com.

Bredemarket the writer

Before I discuss these five steps, let me state two things specific to me that may not apply to you.

  • I generally do not hand Bredemarket content creation over to bots.
  • The one glaring exception is the Bredebot project, a highlighted experiment to see how far a well-prompted bot will go.

So my specific response to these steps is to consider the gap analysis in step 2. Bots are good at such analysis, but they have to be watched in case they don’t get their facts straight.

But I won’t give Claude the permission to write and post articles, or even any permissions on WordPress. This is a security issue, after all; how do YOU control site access for non-human identities?

In fact, I may not even use Claude for step 2, even if it’s this week’s cool kid (last I checked). I may use Gemini…or a thousand Bangladesh techies…or a million Pentiums…or Mika.

How you work with outside content creators

But what about you?

Before answering, take the five steps above and change the name “Claude” to Barney…or Bredemarket.

Would you give Barney or Bredemarket that power over your website?

Maybe…or maybe not.

How Bredemarket works with you

In the case of Bredemarket, I usually do NOT have direct access to my clients’ websites, sending them Word documents instead. And in the one instance where I did have website access, I left every one of my drafts in draft mode.
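If you do grant a tool website access, the public WordPress REST API at least lets you enforce that draft-only discipline in code, by hard-wiring status="draft" into every post the tool creates. A sketch (the site URL and token are placeholders, and the helper function is my own, not part of any WordPress library):

```python
import json
from urllib import request

def build_draft_request(site: str, token: str, title: str, body: str) -> request.Request:
    """Build a WordPress REST API call that can only create a DRAFT post."""
    payload = {"title": title, "content": body, "status": "draft"}
    return request.Request(
        url=f"{site}/wp-json/wp/v2/posts",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {token}"},  # placeholder token
        method="POST",
    )

req = build_draft_request("https://example.com", "TOKEN", "Gap analysis", "Draft text")
print(json.loads(req.data)["status"])  # → draft
```

The point of the design is that a human still has to press Publish; the non-human identity never gets that permission in the first place.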

And when I perform a gap analysis, I present my client with choices and ask the client to choose the topic, or at least approve my suggested topic.

Because your website is not mine, or Mika’s…or Claude’s.

When Certuma’s Messaging Seems Contradictory: “AI Doctor” or “Physician-Verified”?

I don’t have access to Forbes, so I’m relying on this LinkedIn message from Certuma:

“We raised $10M in seed funding led by 8VC to build the first FDA-approved AI doctor.”

The way that sentence is worded, it sounds like the goal is to have the FDA approve a doctor who can…well, doctor. Like my fictional Dr. Jones. (See the 2013 version in tymshft.)

“‘I don’t mind answering the question,’ replied the friendly voice, ‘and I hope you don’t take my response the wrong way, but I’m not really a person as you understand the term. I’m actually an application within the software package that runs the medical center. But my programmers want me to tell you that they’re really happy to serve you, and that Stanford sucks.’ The voice paused for a moment. ‘I’m sorry, Edith. You have to forgive the programmers – they’re Berkeley grads.’”

But Certuma’s website tells a more cautionary story in which the “AI doctor” is NOT in control.

“Certified clinical decisions at machine speed. Physician-verified and fully auditable.”

And the workflow indicates that this “doctor” is more like an intern, or even a student.

“Certuma routes every in-scope plan through physician verification. That workflow is the point: fast turnaround without removing accountability….

“Red flags, contraindications, interaction checks, scope limits, and uncertainty thresholds run through the deterministic verification layer. If something is emergent or out of scope, the system escalates instead of guessing.

“Clinicians see structured intake, highlighted risks, and a draft plan with supporting evidence. They approve, edit, or escalate; changes are captured with reason codes and a durable audit trail.”

Now there is clearly some benefit in having the bots grind out the plan, provided that the bots don’t hallucinate. There are potential time savings, and a real doctor reviews the final results.
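In pseudocode terms, the gate Certuma describes might look something like this (my reading of their public copy, not their actual implementation; the field names are invented):

```python
def route_plan(plan: dict) -> str:
    """Deterministic verification layer: escalate on any tripwire;
    otherwise send the draft plan to a physician. Either way, the AI
    never publishes a plan on its own."""
    tripwires = ("red_flags", "contraindications", "interactions",
                 "out_of_scope", "uncertain")
    if any(plan.get(t) for t in tripwires):
        return "escalate"          # emergent or out of scope: no guessing
    return "physician_review"      # clinician approves, edits, or escalates

print(route_plan({"red_flags": True}))   # → escalate
print(route_plan({}))                    # → physician_review
```

Note that the two outcomes are “escalate” and “physician review.” There is no branch where the bot acts alone.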

But an “AI doctor” who can doctor independently is NOT on the horizon.

At least not yet.

Returning to Lattice Identity

The last time I delved into lattices, it was in connection with the NIST FIPS 204 Module-Lattice-Based Digital Signature Standard. To understand why the standard is lattice-based, I turned to NordVPN:

“A lattice is a hierarchical structure that consists of levels, each representing a set of access rights. The levels are ordered based on the level of access they grant, from more restrictive to more permissive.”

In essence, the lattice structure allows more elaborate access rights.
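A toy model makes the idea concrete: if each access level is a set of rights, set inclusion gives you the lattice ordering, and “may this identity access that resource” becomes a simple superset check. (The level names and rights below are illustrative, not from any standard.)

```python
# Each access level is a set of rights; superset = "dominates" in the lattice.
LEVELS = {
    "public":   {"read"},
    "internal": {"read", "comment"},
    "editor":   {"read", "comment", "write"},
    "admin":    {"read", "comment", "write", "configure"},
}

def dominates(higher: str, lower: str) -> bool:
    """True if `higher` grants every right that `lower` grants."""
    return LEVELS[higher] >= LEVELS[lower]

print(dominates("admin", "internal"))   # → True
print(dominates("internal", "editor"))  # → False
```

Unlike a simple ladder of levels, a lattice also allows incomparable levels (neither dominates the other), which is what makes the access rights more elaborate.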

This article (“Lattice-Based Identity and Access Management for AI Agents”) discusses lattices more. Well, not explicitly; the word “lattice” only appears in the title. But here is the article’s main point:

“We are finally moving away from those clunky, ‘if-this-then-that’ systems. The shift to deep learning means agents can actually reason through a mess instead of just crashing when a customer uses a slang word or a shipping invoice is slightly blurry.”

It then says:

“Deep learning changes this because it uses neural networks to understand intent, not just keywords.”

Hmm…intent? That sounds a little squishy to me…or maybe it’s just me.

But it appears that we sometimes don’t care about the intent of AI agents.

“If you gave a new employee the keys to your entire office and every filing cabinet on day one, you’d be sweating, right? Yet, that is exactly what many companies do with ai agents by just slapping an api key on them and hoping for the best.”

This is not recommended. See my prior post on attribute-based access control, which led me to focus more on non-person entities (non-human identities).

As should we all.

A Little Help For Entry-Level Workers

Over a year ago I shared this:

A little help.

The mood at the time was that the world was changing and generative AI bots and non-person entities could replace people.

Yes, I am familiar with the party line that AI wouldn’t replace anyone, but would empower everyone to do their jobs more effectively.

The layoff trackers told a different story.

As did the AI gurus who proclaimed that many jobs would soon be obsolete.

Strangely enough, “AI guru” was not one of the jobs that was going away. Which is odd. It seems to me that giving inspirational talks would be the perfect job for a non-person entity.

Previously posted here.

One firm is (big) blue on people

But many people agreed that entry-level jobs were ripe for rightsizing, meaning that those at the beginnings of their careers would have a much harder time finding work.

Until they didn’t.

“Hardware giant IBM plans to triple entry-level hiring in the U.S. in 2026, according to reporting from Bloomberg. Nickle LaMoreaux, IBM’s chief human resource officer, announced the initiative….’And yes, it’s for all these jobs that we’re being told AI can do,’ LaMoreaux said.”

Because IBM has separated what AI can do from what it can’t do. IBM’s new positions are “less focused on areas AI can actually automate — like coding — and more focused on people-forward areas like engaging with customers.”

Guess what? Bots are not engaging. Well, maybe they’re more engaging than AI gurus…

Can you use people?

But I will go one step further and claim that human product marketers and content writers are more engaging than bot product marketers and content writers.

Believe me, I’ve tested this. Bredebot can fake 30 years of experience, but it’s not genuine.

If you want to engage with your prospects, don’t assign the job to a bot. That’s human work.

Content for tech marketers.

Bash Script Vulnerabilities

I can’t say WHY I’m looking at bash script vulnerabilities, but they’ve been around since…well, this Kaspersky article is based upon CVE-2014-6271.

“The ‘bash bug,’ also known as the Shellshock vulnerability, poses a serious threat to all users. The threat exploits the Bash system software common in Linux and Mac OS X systems in order to allow attackers to potentially take control of electronic devices. An attacker can simply execute system level commands, with the same privileges as the affected services….

“But just imagine that you could not only pass this normal system information to the CGI script, but could also tell the script to execute system level commands. This would mean that – without having any credentials to the webserver – as soon as you access the CGI script it would read your environment variables; and if these environment variables contain the exploit string, the script would also execute the command that you have specified.”

An authorization nightmare as a hostile non-person entity runs amok.
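The classic one-line probe for CVE-2014-6271 puts a function definition plus a trailing command into an environment variable; a vulnerable bash executes the trailing command merely by importing the variable. It is safe to run on a patched shell, which simply ignores the injected command:

```shell
# Shellshock (CVE-2014-6271) probe. On a vulnerable bash, "vulnerable"
# prints before "probe"; a patched bash prints only "probe".
env x='() { :;}; echo vulnerable' bash -c 'echo probe'
```

The unnerving part is how little the attacker needs: no credentials, just the ability to set an environment variable that bash will later import.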

And it’s still a threat, as two recent CVEs attest…and that’s all I’ll say.

Why Would a Robot Fish?

Sadly the question “why would a robot fish?” was shared in a private Facebook group, so I cannot share the entire question with you. But I can share my response.

“Some humans don’t fish for food, but for relaxation. But if robots need downtime, it doesn’t have to be at a stream with a pole.”

After thinking, I composed the prompt for the Google Gemini picture that illustrates this post.

“Create a realistic picture of a robot by a stream in the woods, fishing. The eyes and other parts of the robot’s head indicate that its internal controls are in maintenance mode, or that the robot is ‘relaxing.’”

My own content creation process with Bredemarket includes a “sleep on it” step which lets my brain reset before taking a fresh look at the content.

The generative AI equivalent is to take the output from the initial prompt, start a new independent chat, and write a second prompt to re-evaluate the output of the first prompt.
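That two-prompt loop can be sketched generically; model() below is a stand-in for whatever LLM call you use (the function and both prompts are illustrative, not any vendor’s API):

```python
def model(prompt: str) -> str:
    # Stand-in for a real LLM call; here it just echoes for demonstration.
    return f"[model output for: {prompt}]"

def draft_then_review(topic: str) -> str:
    draft = model(f"Write a short post about {topic}.")
    # Fresh, independent context: the review prompt sees only the draft,
    # not the conversation that produced it -- the "sleep on it" step.
    return model(f"Critically re-evaluate this draft for errors:\n{draft}")

print(draft_then_review("non-human identities"))
```

The key design choice is the fresh context in the second call: the reviewer has no stake in the first conversation, just as your rested brain has no stake in yesterday’s draft.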

Which I guess would be “fishing.”