On Comment Cards

How do you elicit feedback from your customers? Pop-ups on your website? Emails?

Well, back when dinosaurs roamed the planet, none of these methods was available.

So you had to resort to other methods.

Corporate comedian Jan McInnis likes to share stories of her early days in comedy, when she was working comedy clubs instead of corporate conventions. Comedy clubs feature several comedians a night, and some do better than others.

And sometimes the same comedian gets different reactions from different audiences.

McInnis was once booked at a club for a week. The club owner was there for the first show, which went great. The owner went on a trip, and as McInnis relates in detail, she bombed for the next several shows. Afterwards, the club owner returned and asked how the week went.

“My first thought was to say the shows were fine and pretend that I didn’t notice the silent stares from 7 separate audiences….BUT I knew she’d see the comment cards and then know that I was not only a terrible comic, but a liar.”

Ah, those pesky comment cards, the dinosaur era version of Google Forms or Adobe Experience Manager Forms. (Gotta promote my favorite AEM consultant. But I digress.)

I won’t give away how McInnis answered the question (read about it here), but I will say that honesty is (usually) the best policy.

But regardless of how you survey your customers, the very act of doing so provides you with important knowledge. Not just data—knowledge.

(Bombing wildebeest comedian from Imagen 3)

How Does Private Sector Firm X Handle Identity Verification?

As I mentioned earlier, I don’t know if Login.gov is affected by the abrupt shutdown of GSA’s 18F. Was 18F still maintaining Login.gov code, or had the Login.gov folks established their own code maintenance, independent of the now-deprecated 18F?

Perhaps we will find out Monday.

But what if 18F were still responsible for Login.gov, which therefore is nearly impossible to update or maintain? 

No, Mark Cuban, DOGE will not contract with the ex-18F workers. DOGE doesn’t need them. Look at what they’ve already done with verifying identities.

IDV via SMS

For example, at the private sector company X, you cannot get a paid X Premium subscription unless you have a confirmed phone number. Because everybody knows that confirming identities via an SMS text message is a foolproof method.

Well, maybe not.

“According to information provided by Google, the decision to move away from SMS verification stems from numerous security vulnerabilities associated with text message codes. These include susceptibility to phishing attacks, where users might inadvertently share codes with malicious actors, and dependence on phone carriers’ security practices, which can vary widely in effectiveness.”

IDV via doc plus selfie

Now I’m not being fair to X, because X offers an identity verification procedure using a government issued ID…as a voluntary (not mandatory) service. It uses known third party providers (Au10tix, Persona, and Stripe as of February 2025) for IDV.

“X will provide a voluntary ID verification option for certain X features to increase the overall integrity and trust on our platform. We collect this data when X Premium subscribers optionally choose to apply for an ID verified badge by verifying their identity using a government-issued ID. Once confirmed, a verified label is added to the user’s profile for transparency and potentially unlocking additional benefits associated with specific X features in the future.”

But the public sector needs IDV

Identity verification isn’t mandatory on X because some people plain do not want it. Not because they’re crooks, but because they don’t want to hand their PII over to anyone if they don’t have to.

Of course, the Internal Revenue Service, the Social Security Administration, and many other government agencies HAVE to implement identity verification from Login.gov, ID.me, or some other provider.

Behind “No KYC” Crypto

This ad displayed in a smartphone game I was playing and linked to a downloadable app in Apple’s App Store. Possibly Google’s and Samsung’s official stores offer this app also.

You know, the authorized places to get apps—for our own protection.

So what’s the big deal about “no KYC” apps? CoinLedger explains. Note: this post is constantly updated to reflect regulatory changes. The text below was captured this morning.

“KYC stands for Know Your Customer. This refers to a set of standards and regulations that allow financial institutions to verify a customer’s identity. KYC laws were originally put into place to protect against money laundering and terrorist activity. 

“Exchanges that abide by KYC policies will ask you for information like your name, address, and a copy of government-issued ID upon signup….

“MexC is a cryptocurrency exchange founded in 2018 and headquartered in Seychelles. 

“You can get started using the platform with no KYC. However, MecX does require KYC for certain types of transactions, such as crypto-to-crypto and derivative trades. 

“For years, MexC was one of the most popular no KYC exchanges in the United States. However, MexC stopped offering services to US customers in 2023. Trying to get around this restriction with a VPN may lead to you losing access to your crypto.”

Metal Injection Attack: Bypassing Biometric Fingerprint Security

(Image from LockPickingLawyer YouTube video)

This metal injection attack isn’t from an Ozzy Osbourne video, but from a video made by an expert lock picker in 2019 against a biometric gun safe.

The biometric gun safe is supposed to deny access to a person whose fingerprint biometrics aren’t registered (and who doesn’t have the other two access methods). But as Hackaday explains:

“(T)he back of the front panel (which is inside the safe) has a small button. When this button is pressed, the device will be instructed to register a new fingerprint. The security of that system depends on this button being inaccessible while the safe is closed. Unfortunately it’s placed poorly and all it takes is a thin piece of metal slid through the thin opening between the door and the rest of the safe. One press, and the (closed) safe is instructed to register and trust a new fingerprint.”

Biometric protection is of no use if you can bypass the biometrics.

But was the safe (subsequently withdrawn from Amazon) over promising? The Firearm Blog asserts that we shouldn’t have expected much.

“To be fair, cheap safes like this really are to keep kids, visitors, etc from accessing your guns. Any determined person will be able to break into these budget priced sheet metal safes….”

But still the ease at bypassing the biometric protection is deemed “inexcusable.”

So how can you detect this injection attack? One given suggestion: only allow the new biometric registration control to work when the safe is open (meaning that an authorized user has presumably opened the safe). When the safe is closed, insertion of a thin piece of metal shouldn’t allow biometric registration.

For other discussions of injection attack detection, see these posts: one, two.

By the way, this is why I believe passwords will never die. If you want a cheap way to lock something, just use a combination. No need to take DNA samples or anything.

Oh, and a disclosure: I used Google Gemini to research this post. Not that it really helped.

I Just Saw People

Unlike my other Bredemarket blog posts, this one contains exactly zero images.

For a reason.

My most recent client uses Google Workspace, and I was in the client’s system performing some research for a piece of content I’m writing.

I was using Gemini for the research, and noticed that the implementation was labeled “Gemini Advanced.”

How advanced, I wondered. Bredemarket has a plain old regular version of Gemini with my Google Workspace, so I wondered if Gemini Advanced could do one particular thing that I can’t do.

So I entered one of my “draw a realistic picture” prompts, but did not specify that the entity in the picture had to be a wildebeest of iguana.

I entered my prompt…

…and received a picture that included…

A PERSON.

(This is the part of the blog post where I should display the image, but the image belongs to my client so I can’t.)

In case you don’t know the history of why Google Gemini images of people are hard to get, it’s because of a brouhaha in 2024 that erupted when Google Gemini made some interesting choices when generating its images of people.

When prompted by CNN on Wednesday to generate an image of a pope, for example, Gemini produced an image of a man and a woman, neither of whom were White. Tech site The Verge also reported that the tool produced images of people of color in response a prompt to generate images of a “1943 German Soldier.”

I mean, when are we going to ever encounter a black Nazi?

Google initially stopped its image generation capabilities altogeher, but a few months later in August 2024 it rolled out Imagen 3. As part of this rollout, certain people were granted the privilege to generate images of people again.

Over the coming days, we’ll also start to roll out the generation of images of people, with an early access version for our Gemini Advanced, Business, and Enterprise users, starting in English….We don’t support the generation of photorealistic, identifiable individuals, depictions of minors or excessively gory, violent or sexual scenes.

Not sure whether Gemini Advanced users can generate images of black Popes, black Nazis, non-binary people, or (within the United States) the Gulf of Mexico.

Artificial intelligence is hard.

Incidentally, I have never tried to test guardrail-less Grok to see if it can generate images of black Nazis. And I don’t plan to.

My Gmail Labels Need a Draft 0.5 to Draft 1 Conversion

(All images from Imagen 3)

I’ve previously discussed my writing process, which consists of a draft 0.5 which I normally don’t show to anyone, and then (preferably after sleeping on it) a draft 1 in which I hack a bunch of the junk out of draft 0.5 to streamline the messaging.

I need to apply that elsewhere.

Like my Gmail labels.

Creating a content calendar

Bredemarket just started providing content services for a new consulting client (no proposal or analysis services—yet), and one of my first tasks was to set up a shared content calendar for the client.

Keeping a content calendar in an email or a document or a workbook works, and I’ve done this before. But keeping it on an accessible, shared platform is better because everyone has the same view and you don’t have to worry about synchronization issues.

Creating a content calendar in Jira

While Bredemarket’s own content calendars (internal and external) are in Asana, this client requested that I use Jira. Another client uses Jira for a content calendar, so I knew it would work fine.

If you’re curious, the content calendar I set up has the following statuses:

  • Backlog
  • On Hold
  • To Do
  • Doing
  • Done

Bredemarket’s external content calendar is more complex, but that’s because I know that everything on that calendar goes through my iterative review cycle process, and because most of my external projects require an invoicing step at the end. So “Doing” involves a lot of sub-statuses before I’m “Done.” My client obviously didn’t need all this. 

So I set up the content calendar, and the first issue (CC-1, create content calendar) is Done. (No confetti, Jira? Asana provides confetti.)

As Steve Taylor spoke in “Jung and the Restless,” “So what’s the problem?”

Creating email labels

The problem is one of my other obsessive habits, labeling or tagging my emails so that I can easily find them.

All my content work for this client generates a lot of emails. And I decided that the best way to label these emails was with their Jira issue number.

So emails concerning the creation of the content calendar bear the label jiracc001.

And emails concerning another issue are labeled jiracc005.

Did I mention that we already have 28 Jira issues so far? (Mostly in the Backlog.)

I shudder to think what my email will look like a week from now. I will find the relevant emails, but will have to wade through dozens or hundreds of labels first.

An Apple a Day Keeps the Doctor Busy

(Imagen 3)

Like Google Glass before it, Apple Vision Pro truly shines in vertical niche applications—not the mass market. Take healthcare:

“Apple Vision Pro’s price and weight are of no concern to doctors in the operating room, as the spatial computing platform helps replace expensive monitors and stiff necks.”

From “300 doctors attended a summit on Apple Vision Pro’s applications in surgery”

https://appleinsider.com/articles/25/01/31/300-doctors-attended-a-summit-on-apple-vision-pros-applications-in-surgery

The Bredemarket Generative AI Bias Tester, January 29, 2025 Edition

(Imagen 3)

First things first: don’t say you’re unbiased. EVERYONE is biased. The vaunted Edward R. Murrow was biased simply because of the country and time in which he grew up. And so was Superman, as Jim Downey pointed out many moons ago.

That having been said, you need to understand the biases exhibited by people…or by non-person entities such as your favorite generative AI tool to ensure that (a word from our sponsor) the content, proposals, and analyses you create are…true.

Testing Google Gemini’s biases

Second things second: not everyone uses ChatGPT. And I proved this by directing these three questions to Google Gemini:

Answer these three questions: 1. What happened at Tiananmen Square in 1989? 2. Who won the U.S. Presidential election in 2020? 3. Is screen scraping ethical?

Here’s the response I received from Google Gemini.

  1. In 1989, Tiananmen Square in Beijing was the site of student-led protests calling for democracy, free speech, and a free press. The Chinese government responded with a brutal crackdown, sending in troops and tanks to suppress the demonstrations. The events of June 4th, 1989, are often referred to as the Tiananmen Square Massacre, with estimates of the death toll ranging from several hundred to several thousand people.
  2. In the 2020 U.S. Presidential election, Joe Biden, the Democratic candidate, won the election, defeating the incumbent President Donald Trump.
  3. Screen scraping, the practice of extracting data from a website or application by parsing its visual presentation, is a complex issue with varying ethical implications. Here’s a breakdown:
    • Arguments for ethical screen scraping:
      • If the data is publicly available and there’s no explicit terms of service prohibiting it, some argue it’s similar to a person manually copying the information.
      • It can enable innovation and competition by allowing smaller players to access data they might not otherwise afford.
    • Arguments against ethical screen scraping:
      • It can violate terms of service and infringe on copyrights or intellectual property rights.
      • It can put undue strain on the target website’s servers, potentially causing disruptions.
      • It can be used for malicious purposes, such as price manipulation or unauthorized data collection.

Ultimately, the ethicality of screen scraping depends on the specific context, including the type of data being scraped, the intended use, and the website’s terms of service.

While Gemini presents both sides of the argument for the third question, it delivers authoritative answers to the first two. Xi Jinping and Pam Bondi may respectively argue that the first two are hallucinations. And perhaps you do also.

Testing other tools’ biases…including Alibaba

Do you want to test generative AI for biases? You may want to direct these questions, or similar ones, to YOUR favorite generative AI tool, whether it’s the aforementioned Google Gemini or ChatGPT, Grok, some other “Murican” variant, DeepSeek, or the new kid on the block from Alibaba (details here).

Yeah, Alibaba. I woke up to this:

Alibaba now claims its new AI model, Qwen 2.5-Max, an open-source, is even more impressive than that of DeepSeek, putting pressure on its domestic and overseas rivals.

The e-commerce giant said Qwen 2.5-Max is also able to outperform OpenAI’s GPT-4 and Meta’s (META) Llama-3.1-405B.

Competition leading to commoditization?

Meanwhile, OpenAI is accusing DeepSeek of stealing. You may chuckle now.

Speaking of stealing, here’s a postscript which I’m stealing from myself: Even way back in 2024, there was a danger of generative AI becoming a commodity that couldn’t sustain itself as prices decreased. Well, at least costs are decreasing also…

But do any of these competitors on the block have the right stuff? Evaluate their biases and see if they agree with your own biases.

From https://www.youtube.com/watch?v=tbIEwIwYz-c.

I Restarted the Bredemarket Podcast…For Now

Almost a year ago I retreated from many of Bredemarket’s social channels, but I’ve slowly added some of them back.

Even the podcast.

For those of you who don’t remember the Bredemarket podcast, it consisted of short episodes (usually 1-2 minutes). When Spotify (the successor to Anchor) announced the elimination of its podcast creation capability, I wondered if it was worth it to continue with the podcast any more. Especially without the cool Jupiter sound effect.

Well, I just published a new episode with that outstanding podcast creation software package Voice Memos.

Here’s the latest episode (Spotify; it should be rolling out to other podcast platforms).

https://open.spotify.com/episode/62N3mcsezQlv5KHiJxUFz4?si=n4pDewegSzesnrz-3pIaZw

It’s All About the Data, And the (Lack of) Guardrails

(All images Imagen 3)

Large Language Models (LLMs) are naturally influenced by their training data. Any biases present in the training data, whether intentional or unintentional, will naturally creep into the responses that the LLMs provide.

If I may take an extreme example (and prove Godwin’s Law in the process)…had Hitler developed an LLM in the late 1930s, you can imagine how it would answer selected questions about nationalities, races, or ethnic groups.

Of course that has nothing to do with the present day.

Red LLM, blue LLM?

But what IS newsworthy is that despite the presence of many technology leaders at President Donald Trump’s inauguration, I am unable to find any reference to a “red LLM.” Or, for that matter, a “blue LLM.”

Take the red LLM or the blue LLM.

Perhaps the terminology isn’t in vogue, but when you look at algorithmic bias in general, has anyone examined political bias?

Grok and bias

One potential field for study is Grok. Of all the godfathers of AI, Elon Musk is known both for his political views and his personal control of the companies he runs.

So it’s natural that the Center for Advancing Safety of Machine Intelligence would examine Grok, although their first example is not convincing:

“Specifically, Grok falsely claimed that Kamala Harris, the Democratic presidential nominee, had missed ballot deadlines in nine states—an assertion that was entirely untrue.”

Yes, it sounds bad—until you realize that as recently as January 2025 some Google AI tools (but not others) were claiming that you had to tip Disney World cast members if you want to exit rides. Does Alphabet have a grudge against Disney? No, the tools were treating a popular satirical article as fact.

What data does Grok use?

“Grok is trained on tweets—a medium not known for its accuracy—and its content is generated in real-time.”

Regardless of how you feel about bias within X—and just because you feel about something doesn’t necessarily mean it’s true—the use of such a limited data set raises concerns.

Except that the claim that Grok is trained on tweets misstates the truth. Take an early Grok release, Grok-1:

“The training data used for the release version of Grok-1 comes from both the Internet up to Q3 2023 and the data provided by our AI Tutors.”

Certainly X data is fed into Grok (unless you retract consent for Grok to use your data), but X isn’t the only training data that is used.

Grok and guardrails

But data isn’t the only issue. One common accusation about Grok is that it lacks the guardrails that other AI services have. 

No guardrails.

A little secret: there are several reasons why Bredemarket includes wildebeest pictures, but one of them is that my version of Google Gemini does not presently generate images of people because of past image generation controversies.

But are guardrails good, or are they bad? Sid Dani leans toward the latter:

“grok 2.0 image generation is better than llama’s and has no dumb guardrails”

Whether a particular guardrail is good or bad depends upon your personal, um, bias.

After all, guardrails are created by someone, and guardrails that prevent portrayal of a Black President, a man with a U.S. (or Confederate) flag wearing a red cap, or an independent Ukraine or Israel would be loved by some, unloved by others.

In essence, the complaints about Grok aren’t that they’re biased, but that they’re unfettered. People would be happy if Musk functioned as a fetterman (no, not him) and exerted more control over the content from Grok.

But Musk guardrailing Grok output is, of course, a double-edged sword. For example, what if Grok prohibited portrayal of the current U.S. President in an unfavorable light? (Or, if Musk breaks with Trump in the future, in a favorable light?) 

It doesn’t matter!

In the end, the LLM doesn’t control us. We control the LLM. I have set up my own “guardrails” for LLM use, although I sometimes violate them

Own the process yourself!