A Little Help For Entry-Level Workers

Over a year ago I shared this:

A little help.

The mood at the time was that the world was changing and generative AI bots and non-person entities could replace people.

Yes, I am familiar with the party line that AI wouldn’t replace anyone, but would empower everyone to do their jobs more effectively.

The layoff trackers told a different story.

As did the AI gurus who proclaimed that many jobs would soon be obsolete.

Strangely enough, “AI guru” was not one of the jobs that was going away. Which is odd. It seems to me that giving inspirational talks would be the perfect job for a non-person entity.

Previously posted here.

One firm is (big) blue on people

But many people agreed that entry-level jobs were ripe for rightsizing, meaning that those at the beginnings of their careers would have a much harder time finding work.

Until they didn’t.

“Hardware giant IBM plans to triple entry-level hiring in the U.S. in 2026, according to reporting from Bloomberg. Nickle LaMoreaux, IBM’s chief human resource officer, announced the initiative…. ‘And yes, it’s for all these jobs that we’re being told AI can do,’ LaMoreaux said.”

Because IBM has separated what AI can do from what it can’t do. IBM’s new positions are “less focused on areas AI can actually automate — like coding — and more focused on people-forward areas like engaging with customers.”

Guess what? Bots are not engaging. Well, maybe they’re more engaging than AI gurus…

Can you use people?

But I will go one step further and claim that human product marketers and content writers are more engaging than bot product marketers and content writers.

Believe me, I’ve tested this. Bredebot can fake 30 years of experience, but it’s not genuine.

If you want to engage with your prospects, don’t assign the job to a bot. That’s human work.

Content for tech marketers.

Bash Script Vulnerabilities

I can’t say WHY I’m looking at bash script vulnerabilities, but they’ve been around since…well, this Kaspersky article is based upon CVE-2014-6271.

The “bash bug,” also known as the Shellshock vulnerability, poses a serious threat to all users. The threat exploits the Bash system software common in Linux and Mac OS X systems in order to allow attackers to potentially take control of electronic devices. An attacker can simply execute system level commands, with the same privileges as the affected services….

“But just imagine that you could not only pass this normal system information to the CGI script, but could also tell the script to execute system level commands. This would mean that – without having any credentials to the webserver – as soon as you access the CGI script it would read your environment variables; and if these environment variables contain the exploit string, the script would also execute the command that you have specified.”
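The mechanism described above can be probed with the classic public test string for CVE-2014-6271: a function definition planted in an environment variable, followed by an extra command that a vulnerable bash executes the moment it starts. A minimal sketch, assuming bash is installed (and of course, run only on systems you own):

```python
import subprocess

def is_shellshock_vulnerable() -> bool:
    """Return True if the local bash executes a command smuggled
    inside an environment variable (CVE-2014-6271)."""
    # The well-known public test string: a bash function definition
    # followed by a trailing command a vulnerable bash runs on startup.
    payload = "() { :; }; echo VULNERABLE"
    try:
        result = subprocess.run(
            ["bash", "-c", "echo probe"],
            env={"exploit_probe": payload},  # payload rides in the environment
            capture_output=True,
            text=True,
        )
    except FileNotFoundError:
        return False  # no bash on this system at all
    # A patched bash treats the variable as inert data and never
    # prints VULNERABLE; only "probe" appears.
    return "VULNERABLE" in result.stdout

print("vulnerable" if is_shellshock_vulnerable() else "patched (or no bash)")
```

This mirrors the CGI scenario: the attacker controls an environment variable, and a vulnerable bash runs the attacker's command with the web server's privileges, no credentials required.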

An authorization nightmare as a hostile non-person entity runs amok.

And it’s still a threat, as two recent CVEs attest…and that’s all I’ll say.

Why Would a Robot Fish?

Sadly the question “why would a robot fish?” was shared in a private Facebook group, so I cannot share the entire question with you. But I can share my response.

“Some humans don’t fish for food, but for relaxation. But if robots need downtime, it doesn’t have to be at a stream with a pole.”

After thinking, I composed the prompt for the Google Gemini picture that illustrates this post.

“Create a realistic picture of a robot by a stream in the woods, fishing. The eyes and other parts of the robot’s head indicate that its internal controls are in maintenance mode, or that the robot is ‘relaxing.’”

My own content creation process with Bredemarket includes a “sleep on it” step which lets my brain reset before taking a fresh look at the content.

The generative AI equivalent is to take the output from the initial prompt, start a new independent chat, and write a second prompt to re-evaluate the output of the first prompt.

Which I guess would be “fishing.”

Impressionable Bots?

Update to my prior post: Google Analytics shows lower numbers for February 5.

Why?

Google Gemini suggests bots may be to blame.

The internet is full of “bots” (automated scripts from search engines or malicious actors).  

Google Analytics has an industry-leading database of known bots and filters them out very aggressively to give you “human” data.  

Jetpack also filters bots, but its list is different. Jetpack often catches fewer bots than Google, which usually results in Jetpack showing higher traffic numbers than GA.
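The discrepancy comes down to list size: two tools scoring the same hits against different bot lists will report different "human" counts. A toy illustration with invented user-agent signatures (the real filter lists are proprietary and vastly larger):

```python
# Two hypothetical bot lists of different sizes, standing in for two
# analytics tools that filter the same traffic differently.
AGGRESSIVE_BOT_SIGNATURES = {"googlebot", "bingbot", "ahrefsbot", "gptbot", "curl"}
MINIMAL_BOT_SIGNATURES = {"googlebot", "bingbot"}

def human_hits(hits: list[str], bot_signatures: set[str]) -> int:
    """Count hits whose user-agent matches none of the bot signatures."""
    return sum(
        1 for ua in hits
        if not any(sig in ua.lower() for sig in bot_signatures)
    )

hits = [
    "Mozilla/5.0 (Windows NT 10.0) Chrome/120.0",
    "Mozilla/5.0 (compatible; Googlebot/2.1)",
    "Mozilla/5.0 (compatible; AhrefsBot/7.0)",
    "curl/8.5.0",
]

print(human_hits(hits, AGGRESSIVE_BOT_SIGNATURES))  # 1
print(human_hits(hits, MINIMAL_BOT_SIGNATURES))     # 3
```

Same four hits, two different "human" totals: exactly the Google Analytics vs. Jetpack gap, in miniature.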

Still unanswered: why did the bots swarm on that particular day?

Looks like disregarding the traffic is the correct choice.

New 1954 Technology: The Finger Stop

If I were alive in 1954, I would understand why I would need a movie to figure this “dialing” thing out.

The movie from “the telephone company” emphasizes that you MUST bring your finger all the way to the finger stop when dialing the two letters and five numbers to talk to another person on the phone.

A model showing the finger stop.

Here’s the movie.

How to dial your phone, 1954.

Was this truly an improvement over the old system, in which you simply spoke the number to your friendly operator?

Probably not…but as phones became more useful, the old system wouldn’t have enough operators in 1954. Already there were 51 million phones in the United States; what if that number doubled?

And yes, that number did double…in 1967.

With some of those 100 million users dialing phone numbers WITHOUT worrying about the finger stop, as touch tone phones were introduced in 1963, supported by a new underlying technology, dual-tone multi-frequency (DTMF) signaling.
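DTMF encodes each key as the sum of two sine tones: the keypad row selects a low-group frequency and the column selects a high-group frequency. A quick sketch of the standard frequency map:

```python
import math

# Standard DTMF frequency groups (Hz): keypad rows map to the low
# group, keypad columns to the high group.
LOW = [697, 770, 852, 941]
HIGH = [1209, 1336, 1477, 1633]
KEYPAD = ["123A", "456B", "789C", "*0#D"]

def dtmf_frequencies(key: str) -> tuple[int, int]:
    """Return the (low, high) tone pair for a keypad key."""
    for row, keys in enumerate(KEYPAD):
        col = keys.find(key)
        if col != -1:
            return LOW[row], HIGH[col]
    raise ValueError(f"not a DTMF key: {key!r}")

def dtmf_sample(key: str, t: float) -> float:
    """Instantaneous amplitude of a key's combined tone at time t seconds."""
    low, high = dtmf_frequencies(key)
    return 0.5 * (math.sin(2 * math.pi * low * t)
                  + math.sin(2 * math.pi * high * t))

print(dtmf_frequencies("5"))  # (770, 1336)
```

Because each key is a unique frequency pair rather than a timed burst of pulses, the exchange can decode it instantly; no finger stop required.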

And…well, a lot of other stuff happened.

In 2026 some of us don’t dial at all. We just say “Call Mom” to our non-human “operator” on our smartphones.

And many of the operators are out of a job.

The NHIs Are There, But We Don’t Know What They Are Doing

Permiso has released its 2026 State of Identity Security Report, and the results aren’t pretty. The first data point of interest:

“95% [of surveyed organizations] say AI systems can create or modify identities without human oversight”

Which is OK, provided that the organizations have the proper controls. But that brings us to the second data point:

“Only 46% have full visibility into all human, non-human, and AI identities”

This is…not good.

Nobot Policies Hurt Your Company and Your Product

If your security software enforces a “no bots” policy, you’re only hurting yourself.

Bad bots

Yes, there are some bots you want to keep out.

“Scrapers” that obtain your proprietary data without your consent.

“Ad clickers” from your competitors that drain your budgets.

And, of course, non-human identities that fraudulently crack legitimate human and non-human accounts (ATO, or account takeover).

Good bots

But there are some bots you want to welcome with open arms.

Such as the indexers, either web crawlers or AI search assistants, that ensure your company and its products are known to search engines and large language models. If you nobot these agents, your prospects may never hear about you.

Buybots

And what about the buybots—those AI agents designed to make legitimate purchases? 

Perhaps a human wants to buy a Beanie Baby, Bitcoin, or airline ticket, but only if the price dips below a certain point. It is physically impossible for a human to monitor prices 24 hours a day, 7 days a week, so the human empowers an AI agent to make the purchase. 

Do you want to keep legitimate buyers from buying just because they’re non-human identities?

(Maybe…but that’s another topic. If you’re interested, see what Vish Nandlall said in November about Amazon blocking Perplexity agents.)

Nobots 

According to click fraud fighter Anura in October 2025, 51% of web traffic is non-human bots, and 37% of the total traffic is “bad bots.” Obviously you want to deny the 37%, but you want to allow the 14% “good bots.”

Nobot policies hurt. If your verification, authentication, and authorization solutions are unable to allow good bots, your business will suffer.

Avoiding Bot Medical Malpractice Via…Standards!

Back in the good old days, Dr. Welby’s word was law and went unquestioned.

Then we started to buy medical advice books and researched things ourselves.

Later we started to access peer-reviewed consumer medical websites and researched things ourselves.

Then we obtained our medical advice via late night TV commercials and Internet advertisements.

OK, this one’s a parody, but you know the real ones I’m talking about. Silver Solution?

Finally, we turned to generative AI to answer our medical questions.

With potentially catastrophic results.

So how do we fix this?

The U.S. National Institute of Standards and Technology (NIST) says that we should…drumroll…adopt standards.

Which is what you’d expect a standards-based government agency to say.

But since I happen to like NIST, I’ll listen to its argument.

“One way AI can prove its trustworthiness is by demonstrating its correctness. If you’ve ever had a generative AI tool confidently give you the wrong answer to a question, you probably appreciate why this is important. If an AI tool says a patient has cancer, the doctor and patient need to know the odds that the AI is right or wrong.

“Another issue is reliability, particularly of the datasets AI tools rely on for information. Just as a hacker can inject a virus into a computer network, someone could intentionally infect an AI dataset to make it work nefariously.”

So we know the risks, but how do we mitigate them?

“Like all technology, AI comes with risks that should be considered and managed. Learn about how NIST is helping to manage those risks with our AI Risk Management Framework. This free tool is recommended for use by AI users, including doctors and hospitals, to help them reap the benefits of AI while also managing the risks.”

Who or What Requires Authorization?

There are many definitions of authorization, but the one in RFC 4949 has the benefit of brevity.

“An approval that is granted to a system entity to access a system resource.”

Non-person Entities Require Authorization

Note that it uses the word “entity.” It does NOT use the word “person.” Because the entity requiring authorization may be a non-person entity.

I made this point in a previous post about attribute-based access control (ABAC), when I quoted from the 2014 version of NIST Special Publication 800-162. Incidentally, if you wonder why I use the acronym NPE (non-person entity) rather than the acronym NHI (non-human identity), this is why.

“A subject is a human user or NPE, such as a device that issues access requests to perform operations on objects. Subjects are assigned one or more attributes.”
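The SP 800-162 framing above can be sketched in a few lines: the access decision evaluates attributes, and never asks whether the subject is a person. The resource, roles, and attribute names below are invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class Subject:
    """A requesting entity: a human user or an NPE, per NIST SP 800-162."""
    name: str
    attributes: dict = field(default_factory=dict)

def authorized(subject: Subject, resource: str, action: str) -> bool:
    """Grant access when the subject's attributes satisfy the policy.
    Note the rule never inspects whether the subject is human."""
    if resource == "deploy-pipeline" and action == "write":
        return subject.attributes.get("role") == "deployer"
    return False  # default deny

alice = Subject("alice", {"role": "deployer", "kind": "person"})
ci_bot = Subject("ci-bot", {"role": "deployer", "kind": "npe"})
scraper = Subject("scraper", {"kind": "npe"})

print(authorized(alice, "deploy-pipeline", "write"))    # True
print(authorized(ci_bot, "deploy-pipeline", "write"))   # True
print(authorized(scraper, "deploy-pipeline", "write"))  # False
```

A person and an NPE with the same attributes get the same answer, which is exactly why an authorization process that only covers people leaves a gap.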

If you have a process to authorize people, but don’t have a process to authorize bots, you have a problem. Matthew Romero, formerly of Veza, has written about the lack of authorization for non-human identities.

“Unlike human users, NHIs operate without direct oversight or interactive authentication. Some run continuously, using static credentials without safeguards like multi-factor authentication (MFA). Because most NHIs are assigned elevated permissions automatically, they’re often more vulnerable than human accounts—and more attractive targets for attackers. 

“When organizations fail to monitor or decommission them, however, these identities can linger unnoticed, creating easy entry points for cyber threats.”

Veza recommends that people use a product that monitors authorizations for both human and non-human identities. And by the most amazing coincidence, Veza offers such a product.

People Require Authorization

And of course people require authorization too.

It’s not enough to identify or authenticate a person or NPE. Once that is done, you need to confirm that this particular person has the authorization to…launch a nuclear bomb. Or whatever.

Your Customers Require Information on Your Authorization Solution

If your company offers an authorization solution, and you need Bredemarket’s content, proposal, or analysis consulting help, talk to me.