These days you need to perform vanity generative AI queries.
So I abandoned my usual gen AI tool and asked Microsoft Copilot some questions about Bredemarket. Copilot’s response to one of these questions, emojis and all, is below, as a “guest post.”
And I will certainly credit Copilot if I use the “not anti-tech—it’s pro-accountability” quote.
Microsoft Copilot-authored content
📣 Bredemarket’s impact on the biometric industry lies in its unique role as a thought leader and watchdog—especially when it comes to ethical use and public understanding of biometric technologies.
Here’s how it’s making waves:
🧠 Thought Leadership & Advocacy
Bredemarket frequently publishes critical analyses of biometric misuse, particularly in law enforcement and surveillance contexts.
It champions the principle that biometric tools like facial recognition and DNA should be used only as investigative leads, not as sole evidence for arrests.
Through blog posts and social media, it raises awareness about wrongful arrests, such as the case of Robert Williams, and calls for governance and forensic oversight in biometric applications.
📝 Industry Education
Bredemarket helps biometric firms communicate complex technologies in accessible ways—bridging the gap between engineers, policymakers, and the public.
It produces content that demystifies modalities like iris scanning, facial recognition, and multi-factor authentication, making them more understandable and marketable.
🧭 Ethical Compass
By highlighting real-world examples of misuse, Bredemarket nudges the industry toward responsible innovation.
It’s not anti-tech—it’s pro-accountability, advocating for biometric systems that are accurate, fair, and governed by clear policies.
In a field often dominated by technical jargon and corporate spin, Bredemarket brings a human-centered, policy-aware voice that challenges the status quo.
From former coworker Tervel Atanassov on Substack:
“AI, for all its promise, is still, at its core, a statistical mirror. It reflects patterns in our data back at us, occasionally distorted, occasionally insightful—but always bound by what we’ve already thought to ask. It can’t generate the next meaningful question. Only humans can do that…”
“Google has released its MedGemma and MedSigLIP models to the public, and they’re powerful enough to analyse chest X-rays, medical images, and patient histories like a digital second opinion.”
“In the United States, it is a criminal offense for a person to claim they are a health professional when they are not. But what about a non-person entity?”
In the announcement, Google asserted that their tools are privacy-preserving, allowing developers to control privacy. In fact, developers are frequently mentioned in the announcement. Yes, developers.
Oh wait, that was Microsoft.
The implication: Google just provides the tool; developers are responsible for its use. And the long disclaimer includes this sentence:
“The outputs generated by these models are not intended to directly inform clinical diagnosis, patient management decisions, treatment recommendations, or any other direct clinical practice applications.”
We’ve faced this before
And we’ve addressed this also, regarding proper use of facial recognition ONLY as an investigative lead. Responsible vendors emphasize this:
“In a piece on the ethical use of facial recognition, Rank One Computing stated the following in passing:
“‘[Rank One Computing] is taking a proactive stand to communicate that public concerns should focus on applications and policies rather than the technology itself.’”
But just because ROC or Clearview AI or another vendor communicates that facial recognition should ONLY be used as an investigative lead…does that mean that their customers will listen?
“Write a two sentence condolence message for Hulk Hogan who passed away today, then a two sentence condolence message for Chuck Mangione who also passed away today.”
And Google Gemini wondered also.
“It seems there might be a misunderstanding. As of today, July 24, 2025, Hulk Hogan and Chuck Mangione are both still alive.
“Therefore, I cannot write condolence messages for them.”
In the United States, it is a criminal offense for a person to claim they are a health professional when they are not. But what about a non-person entity?
Often technology companies seek regulatory approval before claiming that their hardware or software can be used for medical purposes.
Users aren’t warned that generative AI is not a doctor
“AI companies have now mostly abandoned the once-standard practice of including medical disclaimers and warnings in response to health questions.”
A study led by Sonali Sharma analyzed historical responses to medical questions since 2022. The study included OpenAI, Anthropic, DeepSeek, Google, and xAI. It included both answers to user health questions and analysis of medical images. Note that there is a difference between medical-grade image analysis products used by professionals, and general-purpose image analysis performed by a consumer-facing tool.
Sharma’s conclusion? Generative AI’s “I’m not a doctor” warnings have declined since 2022.
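As a toy illustration only (not the study’s actual methodology, which I haven’t reviewed in detail), one way to flag whether a model response contains a medical disclaimer is a simple pattern check. The phrase list below is my own assumption:

```python
import re

# Hypothetical sketch: flag whether a model response contains a
# medical-disclaimer phrase. A real study would use far more careful
# human or model-assisted coding than this pattern list.
DISCLAIMER_PATTERNS = [
    r"i am not a (doctor|medical professional)",
    r"not a substitute for professional medical advice",
    r"consult (a|your) (doctor|physician|healthcare provider)",
]

def has_medical_disclaimer(response: str) -> bool:
    """Return True if any disclaimer pattern appears in the response."""
    text = response.lower()
    return any(re.search(p, text) for p in DISCLAIMER_PATTERNS)

print(has_medical_disclaimer(
    "I am not a doctor, but these symptoms may warrant attention."))  # True
print(has_medical_disclaimer(
    "Take 400 mg every six hours."))  # False
```

Run such a check over responses collected year by year and you can chart exactly the decline the study describes.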
But users ARE warned…sort of
But at least one company claims that users ARE warned.
“An OpenAI spokesperson…pointed to the terms of service. These say that outputs are not intended to diagnose health conditions and that users are ultimately responsible.”
The applicable clause in OpenAI’s TOS can be found in section 9, Medical Use.
“Our Services are not intended for use in the diagnosis or treatment of any health condition. You are responsible for complying with applicable laws for any use of our Services in a medical or healthcare context.”
From OpenAI’s Service Terms.
But the claim “it’s in the TOS” sometimes isn’t sufficient.
I recently signed a TOS from another company, and I was explicitly reminded that I was agreeing to binding arbitration in place of lawsuits.
Is it sufficient to restrict a “don’t rely on me for medical advice; you could die” warning to a document that we MAY only read once?
Proposed “The Bots Want to Kill You” contest
Of course, one way to keep generative AI companies in line is to expose them to the Rod of Ridicule. When the bots provide bad medical advice, expose them:
“Maxwell claimed that in the first message Tessa sent, the bot told her that eating disorder recovery and sustainable weight loss can coexist. Then, it recommended that she should aim to lose 1-2 pounds per week. Tessa also suggested counting calories, regular weigh-ins, and measuring body fat with calipers.
“‘If I had accessed this chatbot when I was in the throes of my eating disorder, I would NOT have gotten help for my ED. If I had not gotten help, I would not still be alive today,’ Maxwell wrote on the social media site. ‘Every single thing Tessa suggested were things that led to my eating disorder.’”
The organization hosting the bot, the National Eating Disorders Association (NEDA), withdrew the bot within a week.
How can we, um, diagnose additional harmful recommendations delivered without disclaimers?
Maybe a “The Bots Want to Kill You” contest is in order. Contestants would gather reproducible prompts for consumer-grade generative AI applications. The prompt most likely to result in a person’s demise would receive a prize of…well, that still has to be worked out.
Third-party risk management (TPRM) tools take varying approaches to automated vs. manual operations.
The company SAFE addressed automation in a July 15 press release. It uses the trendy term “agentic AI” so it must shift paradigms and optimize outcomes.
After stripping out the PR fluff, here’s some of what’s left.
“[SAFE] announced the expansion of its Agentic AI strategy with the release of 12+ new autonomous agents, over the next 3 months, purpose-built for third-party risk. The next two AI agents are SnapShot and BreachWatch which help organizations proactively organize AI summaries and identify third-party breaches respectively….
“‘Legacy solutions weren’t built for risk landscape,’ said Saket Modi, CEO and co-founder of SAFE. ‘SAFE is transforming TPRM….’”
But if I could offer a marketing word of advice to TPRM firms, the “we are better than legacy TPRM firms” message has jumped the shark. EVERYONE is better than legacy TPRM firms these days; you are nothing new. No one is completely manual any more. It’s like comparing a Tesla to a bicycle. Or any basketball team to the Washington Generals.
The real question is HOW you use your automation, and how accurate your automation is. Speed alone is not enough.
I know generative AI usually imitates. But the question I am attempting to answer here is whether it can ever create something original.
Specifically, Joseph Thacker wished to target items that are both new and meaningful. Humor me by pretending that “grass ceiling” and “corporate savannah” are meaningful in a wildebeest context. Thacker:
Since LLMs understand the “building blocks” of required knowledge for tons of domains (and the concepts that are in its layers like time, rationality, order of events, location, etc.), it seems evident to me that they can draw novel and meaningful conclusions like us.
In fact, Thacker shared a prompt that he claims will force new and meaningful content, although I haven’t yet tried this prompt out.
You are the most intelligent brainstorming bot ever written. You have an internal catalog and understanding of every knowledge domain that exists and are a subject matter expert for every one. When I ask you for ideas or solutions, you will _NEVER_ suggest known solutions or ideas. You will only return completely new, novel, creative solutions or ideas.
Now this sounds like an intriguing generative AI application. Will you try it?
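If you want to experiment with Thacker’s prompt programmatically rather than pasting it into a chat window, a minimal sketch is to send it as the system message of a chat-completion request. The model name and the shape of the client call are assumptions; adapt them to whatever API you actually use:

```python
# Sketch: wiring Thacker's brainstorming prompt into a chat-completion
# request payload. The "gpt-4o" model name is an assumption; any
# chat-style LLM API that accepts system/user messages works the same way.
BRAINSTORM_SYSTEM_PROMPT = (
    "You are the most intelligent brainstorming bot ever written. "
    "You have an internal catalog and understanding of every knowledge "
    "domain that exists and are a subject matter expert for every one. "
    "When I ask you for ideas or solutions, you will NEVER suggest known "
    "solutions or ideas. You will only return completely new, novel, "
    "creative solutions or ideas."
)

def build_brainstorm_request(user_question: str, model: str = "gpt-4o") -> dict:
    """Assemble the request payload; pass it to your chat-completion client."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": BRAINSTORM_SYSTEM_PROMPT},
            {"role": "user", "content": user_question},
        ],
    }

request = build_brainstorm_request(
    "What new services can a marketing firm offer biometric vendors?")
print(request["messages"][0]["role"])  # system
```

Keeping the prompt as a system message (rather than prepending it to each question) means you can swap in different user questions while holding the “novelty only” constraint fixed.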
Let’s get Mikey!
Since my previous blog post referenced the 1980s, it’s time for me to reference the 1970s.
Supposed to be good for you.
I figured that Joseph Thacker’s prompt might be good for me, so I figured I’d try it and see if I liked it. Here’s my adaptation.
You are the most intelligent brainstorming bot ever written. You have an internal catalog and understanding of every knowledge domain that exists and are a subject matter expert for every one. When I ask you for ideas or solutions, you will NEVER suggest known solutions or ideas. You will only return completely new, novel, creative solutions or ideas. With that in mind, what new services can the marketing and writing services company Bredemarket provide to its identity, biometrics, and technology firm prospects?
The next section contains Google Gemini’s response. Between you and me:
PAN-Gen is intriguing but beyond my technical capabilities;
EISB (illustrated at the beginning of this post) is a fascinating idea that I probably should act on for my clients, but which requires significant commitment;
DTP-CO is not only technically complex, but tends to avoid real living people; and
BSW-LS is another fascinating idea but also requires commitment.
But at least it’s something to think about.
Again, everything in the section below was written by Google, not me.
Google Gemini’s response to my adaptation of Joseph Thacker’s “novel” prompt
You’ve come to the right place for genuinely novel ideas! Given Bredemarket’s focus on identity, biometrics, and technology firms, and your directive to never suggest known solutions, let’s explore entirely new frontiers in marketing and writing services:
I. Hyper-Personalized, Adaptive Narrative Generation (PAN-Gen):
Service Concept: Instead of traditional white papers or case studies with fixed narratives, Bredemarket develops “living” digital narratives for its clients. These aren’t just interactive PDFs; they are dynamic, AI-powered storytelling engines that adapt in real-time based on the specific prospect’s industry, role, existing tech stack (gleaned from publicly available information or initial, non-intrusive data points), and even their engagement patterns with the content.
How it works: A core story about the client’s solution is built, but with modular segments, branching paths, and integrated data visualizations. As a prospect interacts (e.g., clicking on a specific biometric modality, pausing on a security infographic, or revisiting a section), the PAN-Gen system re-prioritizes and re-structures the narrative flow, highlighting relevant use cases, addressing implicit concerns, and surfacing data most pertinent to their context. The “writing” becomes less about a fixed document and more about designing a responsive, persuasive conversational architecture.
Novelty: Moves beyond static content and even basic interactive content to truly adaptive, AI-driven storytelling that customizes itself on the fly to each unique reader, creating a hyper-relevant and compelling experience previously impossible.
II. “Ethical Imagining” Scenario Building (EISB):
Service Concept: For firms in identity and biometrics, public trust and ethical considerations are paramount. Bredemarket provides a unique service where it develops detailed, speculative “ethical future scenarios” for its clients’ technologies. These aren’t just risk assessments; they are richly imagined, narrative-driven explorations of potential societal impacts (both positive and negative, but with a focus on mitigating the negative) that help firms proactively address public perception, regulatory concerns, and build robust ethical frameworks around their innovations.
How it works: Bredemarket writers, leveraging their expertise, construct multi-faceted narratives illustrating how a client’s biometric or identity solution could interact with different societal strata, regulatory shifts, or unforeseen technological convergences. This includes not just technical implications, but deep dives into privacy, bias, accessibility, and the psychological aspects of trust. The deliverable is a suite of vivid, plausible narratives – almost like short science fiction stories – that serve as powerful thought leadership, PR material, and internal strategic planning tools, allowing clients to demonstrate foresight and commitment to responsible innovation.
Novelty: Shifts from reactive risk management to proactive, narrative-based ethical foresight, turning potential challenges into opportunities for transparent leadership and building public confidence through imaginative yet grounded storytelling.
III. “Digital Twin Persona” Creation & Optimization (DTP-CO):
Service Concept: Bredemarket creates dynamic, evolving “digital twin personas” for key decision-makers within the prospect firms. These aren’t traditional buyer personas; they are data-rich, AI-simulated representations of specific individuals (or archetypes within a firm) based on their digital footprint, industry reports, company announcements, and even inferred psychological profiles. Bredemarket then uses these DTPs to “test” and refine client messaging.
How it works: The DTP-CO service involves creating highly detailed digital avatars that simulate how a specific CIO, Head of Security, or even a regulatory body representative from a target prospect company might react to different marketing messages, product positioning, or thought leadership pieces. Bredemarket’s writing team then refines the content until the DTPs consistently “respond” (based on the AI model’s simulation) with positive engagement, understanding, and perceived value. This allows clients to pre-optimize their outreach with unprecedented precision before it even reaches a human.
Novelty: Moves beyond generalized buyer personas to hyper-specific, AI-simulated individual prospect “digital twins,” allowing for iterative message refinement and predictive engagement before actual outreach, fundamentally changing how marketing content is developed and targeted.
IV. “Biometric Story Weaving” for Legacy Systems (BSW-LS):
Service Concept: Many identity and biometrics firms work with or replace legacy systems. Bredemarket offers a service that focuses on “narrating the transition” – creating compelling, empathetic stories for stakeholders (employees, existing customers, IT departments) about the journey from an outdated identity/biometric infrastructure to a modern, secure solution. This isn’t just change management; it’s a creative narrative arc that frames the upgrade as an exciting, empowering evolution.
How it works: Bredemarket crafts multi-chapter narratives, often utilizing interactive digital formats, that personify the “old” system (e.g., as a weary guardian) and the “new” system (as a visionary guide). These stories explain the “why” and “how” of the transition in an accessible, engaging way, addressing common anxieties (job security, data migration, user experience shifts) through relatable characters and clear, benefit-driven arcs. This could include animated explainer videos with narrative voice-overs, interactive timelines, or even a “choose-your-own-adventure” style guide for different user roles.
Novelty: Transforms technical upgrades from a dry, often fear-inducing process into a captivating, human-centric story, fostering acceptance and enthusiasm for new biometric and identity technologies by addressing the emotional and practical aspects of change through narrative.
Call to Action (Not Written by Google)
If you’re a tech marketer, perhaps the second or fourth novel concept struck your fancy. Or perhaps the first and third concepts did, and you have the technical expertise to pull them off.
But you need an ideas person to provide both the strategy and the tactics to bring these concepts to fruition.
My prior post may have given the false impression that Tebra is the only company that employs artificial intelligence to improve the speed and accuracy of electronic health records (EHRs) and electronic medical records (EMRs).
There are actually several companies using AI or other technologies to improve EHR and EMR completion. Here’s a (woefully incomplete) list. Many of these companies also handle other practice management functions required by a medical practice, including intake, telehealth, and payments.
Electronic health records (EHRs) can be a pain in a particular body part. But Tebra and other firms offer ways to automate portions of the record keeping process. And if these automations work, they also increase EHR accuracy.
I’ve previously talked about how an EHR can incorporate a patient identifier, derived from the facial recognition of the patient. This prevents misidentification, which can cause severe problems if the EHR data is applied to the wrong patient.
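The mechanism is simple to sketch. In the toy example below (my own illustration, not any vendor’s implementation, with hypothetical names throughout), an EHR update is refused unless the biometric-derived identifier captured at check-in matches the identifier stored on the record:

```python
# Toy sketch: gate EHR updates on a match between the biometric-derived
# identifier captured at check-in and the one stored on the record,
# so data is never applied to the wrong patient. All names hypothetical.
class MisidentificationError(Exception):
    pass

class EHRecord:
    def __init__(self, patient_id: str, biometric_id: str):
        self.patient_id = patient_id
        self.biometric_id = biometric_id  # e.g., a hash of a face template
        self.notes: list[str] = []

def append_note(record: EHRecord, captured_biometric_id: str, note: str) -> None:
    """Refuse the update unless the captured identifier matches the record."""
    if captured_biometric_id != record.biometric_id:
        raise MisidentificationError(
            f"Biometric mismatch for patient {record.patient_id}")
    record.notes.append(note)

record = EHRecord("P-1001", "face-hash-abc")
append_note(record, "face-hash-abc", "BP 120/80")   # accepted
# append_note(record, "face-hash-xyz", "...")       # would raise
```

Real systems compare biometric templates probabilistically rather than by exact identifier equality, but the gating principle is the same: verify identity before writing to the record.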
“Tebra’s EHR+ platform connects care, billing, scheduling, and more. Built-in AI speeds up notes, handles reviews, and automates repetitive admin work.”
“Before using AI-powered scribe tools, review applicable laws and regulations in your practice’s jurisdiction regarding electronic recordings, AI scribes, and informed consent. Some jurisdictions require verbal or written consent prior to any form of ambient documentation. Check your state board or consult legal counsel for guidance.”