In conclusion—and I will delve into this later—your beloved AI detector may deliver a bunch of false positives, or Type I errors.
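To see why Type I errors matter, here's a quick base-rate sketch. All numbers are invented for illustration, not measurements of any real detector:

```python
# Hypothetical numbers -- not measurements of any real AI detector.
def flagged_human_share(base_rate_ai, tpr, fpr):
    """Of everything the detector flags as 'AI', what share was human-written?"""
    flagged_ai = base_rate_ai * tpr            # AI posts correctly flagged
    flagged_human = (1 - base_rate_ai) * fpr   # human posts wrongly flagged (Type I errors)
    return flagged_human / (flagged_ai + flagged_human)

# Suppose 10% of posts are AI-generated, the detector catches 90% of them,
# and it falsely flags 5% of human-written posts.
share = flagged_human_share(base_rate_ai=0.10, tpr=0.90, fpr=0.05)
print(f"{share:.0%} of flagged posts were actually written by humans")
```

With those made-up numbers, a third of the detector's "AI" verdicts land on human writers.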
For example, if every word in a post is spelled correctly, that’s an obvious sign the text wasn’t written by a human—correct? In the ever-expanding world of virtual communication, correct spelling is a dead giveaway of non-human content—as is the use of characters unavailable on a standard keyboard. Motörhead made a bunch of £ and € despite its umlaut not being real. As the band never said,
“Timothy Leary’s dead
No, no, no, no, he’s outside, looking in”
(I had to include one hallucination in this post.)
Use MFAID (multi-factor AI detection) to increase accuracy when you claim to detect generative AI.
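As a minimal sketch of what multi-factor AI detection could look like (the signal names below are hypothetical, not a real product's checks), the detector would claim AI authorship only when several independent signals agree, rather than when one weak signal like perfect spelling fires:

```python
# Sketch of "multi-factor AI detection": only claim AI authorship when
# several independent signals agree. All signal names are hypothetical.
def mfaid_verdict(signals, threshold=3):
    """signals: dict of signal name -> bool (True = 'looks AI-generated')."""
    hits = sum(signals.values())
    return "likely AI" if hits >= threshold else "insufficient evidence"

post_signals = {
    "perfect_spelling": True,        # weak on its own -- humans can spell too
    "nonstandard_characters": False, # Motörhead disproves this one anyway
    "stylometric_match": True,
    "metadata_anomaly": False,
}
print(mfaid_verdict(post_signals))  # prints "insufficient evidence"
```

Two weak signals out of four don't meet the threshold, so no accusation is made, which is the whole point of multi-factor detection.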
(Timothy Leary image public domain; lyrics from the Moody Blues, “Legend of a Mind”)
If you want to delve into so-called signs of generative AI writing, see
Most product marketing references to artificial intelligence are meaningless. Some companies think that they can simply promote their product by saying “We use AI,” as if this were a sufficient reason for prospects to buy.
I’ve previously observed that saying “we use AI” is the 2020s equivalent to saying “we use Pentium.”
“(Several organizations) received a three-year, $1.3 million National Science Foundation grant to teach Florida middle school teachers and students how to use artificial intelligence (AI) to identify fossil shark teeth….Florida teachers learn to use a branch of AI called “machine learning,” to teach computers how to use shape, color, and texture to identify the teeth of the extinct giant shark megalodon.”
Now I come from the identity/biometrics industry, which uses machine learning extensively. But customers in this industry don’t really care about the “how” (machine learning). They care about the “why” (identifying individuals). For all the customers care, the vendors could use Pentium for identification. Or blockchain. Or Beatrice. As Loren Feldman says, “It doesn’t matter.”
Remember this the next time you want to identify extinct megalodon shark teeth. Now I admit the exercise serves an educational purpose by exposing teachers to the capabilities of machine learning. But if your sole interest is tooth classification, you can simply purchase the non-expurgated version of Olsen’s Standard Book of Extinct Sharks and get the job done.
Marketing executives, AI is no longer a differentiator. Trust me. If you need assistance with a real differentiator, I can help.
If you want to win business, learn more about Bredemarket’s content – proposal – analysis services here.
Upload PDFs, websites, YouTube videos, audio files, Google Docs, or Google Slides, and NotebookLM will summarize them and make interesting connections between topics, all powered by Gemini 1.5’s multimodal understanding capabilities.
With all of your sources in place, NotebookLM gets to work and becomes a personalized AI expert in the information that matters most to you….
Our new Audio Overview feature can turn your sources into engaging “Deep Dive” discussions with one click.
I uploaded the most recent version of my resume to NotebookLM.
Technically, this is not my resume; this is a PDF version of a portion of my LinkedIn profile. But my resume has similar information.
NotebookLM used the resume as source material to create a 20+ minute podcast called “Career Detective.” In the podcast, a male bot and a female bot took turns discussing the insights they gleaned from the resume of John E. “Breedehoft.” (I use a short e, not a long e, but people can call me anything if I get business from it.)
Surprisingly, they didn’t really hallucinate. Or at least I don’t think they did. When the bots said I was deeply qualified, as far as I’m concerned they were speaking the truth.
They even filled in some gaps. For example, I used the acronyms for KYC, KYB, and AML on my resume to save space, so one of the bots explained to the other what those acronyms meant, and why they were important.
Probably the most amusing part of the podcast was when they noted that I had worked at two very large companies. (Just so you know, my resume only goes back to 2015, so Motorola isn’t even discussed.) While Incode and IDEMIA are both multinationals, I wouldn’t characterize Incode as massive.
Anyway, judge for yourself.
So here’s the audio episode of “Career Detective” that focuses on…me.
By the way, I learned about NotebookLM via the Never Search Alone Slack workspace, but still need to explore NotebookLM’s other features.
I have a telehealth appointment next week with a medical professional whom I have previously met. And I assume she will participate in the telehealth appointment.
In the future, of course, she may not.
Way back in April 2013, I wrote a tymshft piece entitled “You will still take a cab to the doctor’s office. For a while.” It speculated about a future 2023 medical appointment in which the patient took a driverless cab to a medical facility. In the office, the patient was examined by remote staff…or so she thought.
“Well, I’m glad you’ve gotten used to the procedure,” replied the friendly voice. “I hope you like me!”
“I do,” said Edith. “You’ve been very helpful. But I’ve always wondered exactly WHERE you were. If you were in Los Angeles, or in Mississippi, or perhaps in India or China, or perhaps even in one of the low-cost places such as Chad. If you don’t mind my asking, exactly where ARE you?”
“I don’t mind answering the question,” replied the friendly voice, “and I hope you don’t take my response the wrong way, but I’m not really a person as you understand the term. I’m actually an application within the software package that runs the medical center. But my programmers want me to tell you that they’re really happy to serve you, and that Stanford sucks.” The voice paused for a moment. “I’m sorry, Edith. You have to forgive the programmers – they’re Berkeley grads.”
“Oh,” said Edith after a moment. “This is something new. I’m used to it in banking, but I didn’t realize that a computer program could run an entire medical center. Well…who picks up the trash?”
“That’s an extra question! Just kidding,” replied the friendly voice. “Much of the trash pickup is automated, but we do have a person to supervise the operation. Ron Hussein. You actually know him – he was your cab driver in 2018 when you came here.”
Re-reading this 2013 piece, I was amused at three things I got wrong.
First, Google, Facebook, and Apple did NOT merge to form Gaceapple, “the important merger that saved the tech industry in the United States from extinction.” American tech firms are still powerful…for now.
Second, my prediction that cab companies would adopt driverless cars assumed the continued existence of cab companies. Ride-share services have dramatically reduced the presence of traditional cab companies.
Third, my assumption that medical firms would sink untold sums of money into centralized automated medical examination rooms could be questioned…especially for routine appointments like Edith’s. Why not just let Edith’s smartphone—perhaps with a single attachment—gather the data?
Of course, there are medical ethics questions that underlie this entire discussion of remote telehealth and the use of non-person entities (NPEs). And we are struggling with those right now.
In “On Attribute-Based Access Control,” I noted that NIST defined a subject as “a human user or NPE (Non-Person Entity), such as a device that issues access requests to perform operations on objects.” Again, there’s a need to determine that the NPE has the right attributes, and is not a fake, deep or shallow.
There’s clearly a need to identify non-person entities. If I work for IBM and have a computer issued by IBM, the internal network needs to know that this is my computer, and not the computer of a North Korean hacker.
But I was curious. Can the five (or six) factors identify non-person entities?
Let’s consider factor applicability, going from the easiest to the hardest.
The easy factors
Somewhere you are. Not only is this extremely applicable to non-person entities, but in truth this factor doesn’t identify persons, but non-person entities. Think about it: a standard geolocation application doesn’t identify where YOU are. It identifies where YOUR SMARTPHONE is. Unless you have a chip implant, there is nothing on your body that can identify your location. So obviously “somewhere you are” applies to NPEs.
Something you have. Another no-brainer. If a person has “something,” that something is by definition an NPE. So “something you have” applies to NPEs.
Something you do. NPEs can do things. My favorite example is Kraftwerk’s pocket calculator. You will recall that “by pressing down this special key it plays a little melody.” I actually had a Casio pocket calculator that did exactly that, playing a tune that is associated with Casio. Later, Brian Eno composed a startup sound for Windows 95. So “something you do” applies to NPEs. (Although I’m forced to admit that an illegal clone computer and operating system could reproduce the Eno sound.)
Something you know. This one is a conceptual challenge. What does an NPE “know”? For artificial intelligence creations such as Kwebbelkop AI, you can look at the training data used to create it and maintain it. For a German musician’s (or an Oregon college student’s) pocket calculator, you can look at the code used in the device, from the little melody itself to the action to take when the user enters a 1, a plus sign, and another 1. But is this knowledge? I lean toward saying yes—I can teach a bot my mother’s maiden name just as easily as I can teach myself my mother’s maiden name. But perhaps some would disagree.
Something you are. For simplicity’s sake, I’ll stick to physical objects here, ranging from pocket calculators to hand-made ceramic plates. The major reason that we like to use “something you are” as a factor is the promise of uniqueness. We believe that fingerprints are unique (well, most of us), and that irises are unique, and that DNA is unique except for identical twins. But is a pocket calculator truly unique, given that the same assembly line manufactures many pocket calculators? Perhaps ceramic plates exhibit uniqueness, perhaps not.
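As a toy illustration of how those five factors might be tallied for an NPE (my own sketch, not any NIST or vendor scheme; the attestation values are hypothetical):

```python
# Toy multi-factor check for a non-person entity (NPE), e.g. a pocket
# calculator or a corporate laptop. Factor names follow the five factors
# above; the tallying logic is an invented illustration, not a real protocol.
NPE_FACTORS = ["somewhere_you_are", "something_you_have",
               "something_you_do", "something_you_know", "something_you_are"]

def authenticate_npe(attestations, required=2):
    """attestations: dict of factor -> bool. Require several factors to pass."""
    satisfied = [f for f in NPE_FACTORS if attestations.get(f)]
    return len(satisfied) >= required, satisfied

ok, which = authenticate_npe({
    "somewhere_you_are": True,    # geolocation of the device itself
    "something_you_know": True,   # embedded firmware/code "knowledge"
    "something_you_are": False,   # serial shared across an assembly line
})
print(ok, which)  # True ['somewhere_you_are', 'something_you_know']
```

Note that the check degrades gracefully: a factor like "something you are" that fails for a mass-produced device simply doesn't count toward the tally.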
That’s all five factors, right?
Well, let’s look at the sixth one.
Something you why
You know that I like the “why” question, and some time ago I tried to apply it to identity.
Why is a person using a credit card at a McDonald’s in Atlantic City? (Link) Or, was the credit card stolen, or was it being used legitimately?
Why is a person boarding a bus? (Link) Or, was the bus pass stolen, or was it being used legitimately?
Why is a person standing outside a corporate office with a laptop and monitor? (Link) Or, is there a legitimate reason for an ex-employee to gain access to the corporate office?
The first example is fundamental from an identity standpoint. It’s taken from real life, because I had never used any credit card in Atlantic City before. However, there was data indicating that someone with my name (but not my REAL ID; those didn’t exist yet) flew to Atlantic City, so a reasonable person (or identity verification system) could conclude that I might want to eat while I was there.
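A hypothetical "why" scorer along these lines might weigh how well an action fits the entity's history. All signals, weights, and data values below are invented for illustration:

```python
# Hypothetical contextual "why" score: does the entity's current action fit
# its history? Signals and weights are invented for illustration.
def why_score(event, history):
    """Return a 0-1 plausibility score for an action, given past behavior."""
    score = 0.0
    if event["location"] in history["usual_locations"]:
        score += 0.5
    elif event.get("travel_record_to", "") == event["location"]:
        score += 0.4   # e.g. a flight record explains the unfamiliar location
    if event["action"] in history["usual_actions"]:
        score += 0.5
    return score

# The Atlantic City example: a first-ever card swipe there, but a travel
# record shows the cardholder flew in, so the purchase is plausible.
event = {"location": "Atlantic City", "action": "buy food",
         "travel_record_to": "Atlantic City"}
history = {"usual_locations": ["Ontario CA"], "usual_actions": ["buy food"]}
print(round(why_score(event, history), 2))  # 0.9
```

The swipe in an unfamiliar city still scores high because the travel record supplies the "why."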
But can you measure intent for an NPE?
Does Kwebbelkop AI have a reason to perform a particular activity?
Does my pocket calculator have a reason to tell me that 1 plus 1 equals 3?
Does my ceramic plate have a reason to stay intact when I drop it ten meters?
Perhaps I was a little agitated or excited this afternoon.
Perhaps I was a little agitated because five (or maybe more; I lost count) scammers had sent me LinkedIn DMs when they gravitated to my #opentowork LinkedIn banner…like flies speeding to a carcass. (Which gave me an excuse to write about identity management on LinkedIn.)
Or perhaps I was a little excited because a Never Search Alone participant had shared an identity management product marketing position with me.
The bot says hello
After applying for said job, I was reading the company’s website to learn about its identity management product when the company’s bot kept on messaging me.
At 2:30 pm Chat Bot says: Hello, this is an AI Rep from [COMPANY]. If you need any assistance, feel free to ask.
At 2:30 pm Chat Bot says: Saw that you’re interested in our SaaS management and identity governance software. Any questions?
Being the curious type (after all, I worked for IDEMIA), I wondered how the chat bot would deal with what I REALLY wanted to ask about.
Not about a demo.
Not about the company’s recent funding round. (Series B, $35 million.)
At 2:47 pm I say: With 30 years’ identity experience and 9 years as a PMM, I just applied for your Sr PMM Identity Management role. So the only question that I have is when you will interview me.
And yes, my assumption that the company would immediately drop everything and interview me may have been a little overconfident. But it’s better than imposter syndrome.
And after all, this was just a bot. I assumed that the bot would be completely confused by my question, so no harm would come from overconfidence.
You know what they say about people who assume?
The bot exceeds expectations
At 2:47 pm Chat Bot says: Thanks for applying for the Sr PMM Identity Management role! Could you provide your email address so we can follow up on your application status?
One note here: the bot parroted my own wording back to me. The company’s job description does not include a job title with the “PMM” acronym.
Regardless, I was pleasantly surprised. The bot was trained to respond to job applicants. So I provided the email address that I used for my job application.
The bot does not meet expectations
That’s when I was UNPLEASANTLY surprised.
At 2:47 pm Chat Bot says: Thank you for your interest. Please provide a business email address so we can follow up on your application status.
Apparently the training wasn’t thorough enough. Think about it: if you’re an individual applying for a job, why would you apply with a “business email address”? Which one? From your current employer? THAT is not a good idea.
But as it turns out, I DO have a business email address: the one for Bredemarket. So I entered that, even though that email address does not appear anywhere on the job application that I had just submitted.
At 2:47 pm Chat Bot says: Thank you for providing your email. Our team will review the conversation and follow up if necessary. How else can I assist you today?
So it will be interesting to see whether the company contacts me at both of my email addresses.
(Post image generated at craiyon.com)
Postscript for those who don’t click on every link in a Bredemarket post
You missed Brian May’s collaboration with the Weebl folks.
This resonated with me because in mid-September I spent some Instagram time thinking about the word “flash” (as in how fast a human can create content).
Pangiam is BigBear.ai’s digital identity brand, harnessing facial recognition, image-based anomaly detection and advanced biometrics with computer vision and predictive analytics.
Trueface: Performs one of the fastest one-to-many (1:N) facial matches with real-time photos, delivering safe and efficient identity verification.
veriScan™: Securely captures and transmits real-time photos into a biometric matching service supporting access control and biometric boarding/bag tags.
Dartmouth: Delivers real-time image-based anomaly detection for enhanced 3D baggage screening.
All these products, including Dartmouth, were developed before the BigBear.ai acquisition. (Where is Pangiam Bridge?)
My current two categories for go-to-market collateral, “external” and “internal,” are not sufficient. I want to fix that, but I’m still thinking through things, such as what to call the third category. As of this moment, my best option is “intrinsic,” based upon my conversations with my good buddy Gemini.
My current two categories for go-to-market collateral
Based upon go-to-market efforts that stretch back a decade (even before I formally became a Product Marketing Manager), I have traditionally divided go-to-market output into two categories.
22 types of content Bredemarket can create.
External content for your hungry people (target audience), such as articles, blog posts, social media, case studies, white papers, and proposals. This content goes to CEOs, marketers, engineers, IT staff, and many others at the companies that buy from you.
Internal content for the people in the company who talk to your hungry people, such as battlecards, sales playbooks, and scripts for trade show demonstrations. This content goes to salespeople, marketers, product managers, customer success folks, and many other employees and contractors.
Some content can be either external or internal, depending upon the audience.
Simple and straightforward, right?
My emerging third category
But during a current go-to-market effort for a client, I realized that these two categories are insufficient, since both the client and I are creating content that will never be seen by either the external audience or the internal audience.
I’ll give you two heavily redacted and anonymized examples for a #fakefakefake machine.
AI-generated by Google Gemini.
Bob wrote a Word document that listed the pain points that MegaCorp’s clients encounter when trying to put green round widgets on their end customers’ beanie caps.
Judy created a picture with stick figures that showed how clients can use the MegaCorp WidgetPlacer 2000 for their beanie cap production needs.
Bob and Judy (names from this song) are NEVER going to show that document or that picture to the customers, or even to the company’s own account managers.
So why create it?
Because they will be the initial guides as the go-to-market team creates all the other external and internal collateral for the effort.
These foundation documents are a critical part of the go-to-market effort. Without them, the subsequent go-to-market collateral will be a contradictory mishmash of stuff, and you’ll need to undertake a lot of effort to align all the content after the fact.
And the third category is…
…well, I don’t know what to call this third category yet.
“Internal internal content” seems a bit silly.
“Foundational content” seems a bit grandiose.
“Bob and Judy” content seems like an independent TV show.
If we call the first category “external” marketing content for customers, and we call the second category “internal” marketing content for employees, then what do we call the third category which is only for the people who create marketing content for customers and employees?
Gemini’s first batch of answers:
Internal Marketing Content for Marketing Teams
Marketing Team Communication
Marketing Team Resources
Internal Marketing Tools
Brevity is NOT a strength of generative AI. Even when I specifically instructed Gemini to provide single word responses, I got hyphenated words:
Intra-marketing (combining “intra” for within and “marketing”)
Meta-marketing (implying a higher-level perspective on marketing)
Team-marketing (emphasizing its focus on the marketing team)
The fatal flaw is that not all the people on a go-to-market team are marketers. (This is my fault because of the prompt I wrote.)
And if I lop off the last part of each compound phrase, two of my options are “team” (too generic) and “meta” (NO).
And “intra” is too close to “internal internal.” But perhaps it’s a start. So I asked Gemini for words similar to “intra-” and got this response:
The prefix “intra-” means “within,” “inside,” or “inner.” Here are 12 words that convey a similar meaning:
Inward: Turning or moving toward the inside.
Internal: Existing or occurring within something.
Inner: Situated or located within.
Intrinsic: Belonging to or inherent in the nature of something.
Interior: The inside part of something.
Intramural: Occurring within a school or institution.
Intranet: A private computer network within an organization.
Intravenous: Administered into a vein.
Intracranial: Situated or occurring within the skull.
Intracellular: Existing or occurring within a cell.
Intralingual: Relating to or occurring within a language.
Intrajugular: Situated or occurring within the jugular vein.
I kind of like “intrinsic,” but I’ll sleep on it.
In the meantime, what would YOU call the third category?
The word “landscape” suggests a physical environment, not a digital environment. Merriam-Webster specifically cites “natural inland scenery,” which even rules out the shoreline, much less a bunch of smartphone apps or SDKs jumbled together.
And how does a DIGITAL landscape evolve, rapidly or otherwise?
Now I’m not suggesting that you AVOID references to the “rapidly evolving digital landscape.” After all, if aspiring influencers and thought leaders use the term, your content needs to sound exactly like theirs. And this applies whether your thought leader is a person or an AI bot. Trust me on this.
Or perhaps you shouldn’t take my advice. Maybe the overuse of hackneyed phrases is NOT a best-of-breed approach.
So why did I write this…
Because a particular respectable vendor began a Facebook post with the words “In today’s rapidly evolving digital landscape.”
And it shook me.
Was this a one-time slip up, or are readers EXPECTING companies to talk like this?
(Digital landscape image AI-generated by Google Gemini)
Klarna CEO Sebastian Siemiatkowski has revealed a plan to eliminate his job and replace himself with an AI bot…
…wait, I misread that. Siemiatkowski has revealed a plan to replace OTHER people’s jobs at Klarna.
Funny how these rightsizing moves, with a few notable exceptions, always seem to spare the executives. The brunt will fall on others.
“With almost half the workforce at the company, Siemiatkowski plans to shift customer service and marketing duties to AI. ‘Not only can we do more with less, but we can do much more with less,’ he said…”
I bet customer service and marketing morale is REALLY high at Klarna right now. If it weren’t for the not-so-good job market at the moment, Siemiatkowski might realize his rightsizing goals via attrition before his bots are ready.
But let me ask my good buddy Google Gemini. The material below was not written by me.
= =
While AI has made significant strides in customer service and marketing, it’s unlikely that Klarna, or any company, could completely eliminate human involvement.
Here’s why:
Nuance and Empathy: Human agents are better equipped to handle complex customer inquiries, especially those involving emotional situations. AI, while improving, still struggles with understanding and responding to nuanced emotions.
Problem-Solving: Certain customer issues require creative problem-solving or judgment calls that AI may not be able to handle effectively.
Ethical Considerations: AI can perpetuate biases present in the data it’s trained on. Human oversight is essential to ensure ethical and fair treatment of customers.
Unforeseen Circumstances: AI may not be able to adapt to unexpected situations or rapidly changing circumstances. Human agents can often address these challenges more effectively.
Customer Preference: Many customers still prefer to interact with human agents, especially for complex or sensitive issues.
However, AI can be a valuable tool in these areas. It can automate routine tasks, provide 24/7 support, and offer personalized recommendations. The most effective approach is likely a hybrid model, combining AI with human agents to provide the best possible customer experience.