How Expositor Syndrome Helps Your Firm

What is the opposite of impostor syndrome?

  • The Dunning-Kruger Effect?
  • A delusion of grandeur?

Etymologically, the opposite of impostor syndrome would be expositor syndrome. I asked my buddy Google Gemini to hallucinate a definition, and this is what I got:

“Expositor Syndrome is a hypothetical, non-clinical psychological pattern characterized by an overwhelming and often compulsive urge to explain, clarify, or elaborate upon concepts, ideas, or events, even when such detailed exposition is unsolicited, unnecessary, or redundant. Individuals exhibiting Expositor Syndrome experience a profound discomfort or anxiety if they perceive a potential for misunderstanding or an unstated implication, feeling an internal pressure to “lay bare” all facets of a topic.

“Note: This is a fictional construct, not a recognized medical or psychological condition.”

Gemini actually said a lot more, but I chose not to elaborate.

This, rather than a delusion of grandeur, is considered the opposite of impostor syndrome because an impostor HIDES their true talented self, whereas an expositor ELABORATES and goes on and on about their knowledge. Until their friends become former friends and stop speaking to them.

But can someone exhibit both expositor syndrome and a delusion of grandeur?

Perhaps such a person—if they exist—can still make positive contributions to society.

Such as the Bredemarket 2800 Medium Writing Service: approximately 2800 to 3200 words that (a) answer the WHY/HOW/WHAT questions about you, (b) advance your GOAL, (c) communicate your BENEFITS, and (d) speak to your TARGET AUDIENCE.

If you need someone to write roughly 3000 words about your identity/biometric or technology firm, request information at https://bredemarket.com/bredemarket-2800-medium-writing-service/

How (and Why) to Avoid Unreliable Business Partners

(Imagen 4)

Is there an easy way to detect potential business partners to avoid?

There are reliable business partners, and unreliable ones. 

I’d like to share my thoughts on how to gravitate toward the former and away from the latter, and why this is critically important to your business.

What is a reliable business partner?

We all work with a variety of business partners: clients, prospects, vendors, agents, evangelists, and the like.

Some business partners are really good at:

  • Paying you. On time, or even early; Bredemarket has enjoyed working with 2 clients with “net 0” terms.
  • Setting expectations. While Bredemarket has its own system for kicking off a project, things work even more smoothly when a client answers your questions before you ask them.
  • Communicating with you. Outside of a project in work, partners who take the time to communicate with you are valuable. Maybe the communication is “check back in a few months.” Or perhaps the communication is “Sorry, but we have no need for Bredemarket’s services.” Even the latter is valuable.
  • Keeping in touch with you. Some partners go above and beyond the minimum. For example, an executive at one of my clients would check in with me and ask, “Is everything OK? Are we paying you on time?”

When business partners pay you, set expectations, communicate with you, and keep in touch with you, you know that you can rely on them.

What is an unreliable business partner?

On the other hand, some business partners are really “good” at:

  • Not paying you. This one’s obvious.
  • Not setting expectations. Have you ever had a client who was vague about what they wanted, saying, “I’ll know it when I see it”? And then…they don’t see it.
  • Ghosting you. In the consulting world, you often get a prospect’s urgent and seemingly important request for services. When you respond to the prospect’s questions, you never hear from them again. Apparently that urgent request was not that important after all.

After you pull these shenanigans on me, I quietly brand you as unprofessional…and unreliable.

How to avoid unreliable business partners

So how do you avoid the unreliable business partners? Here are three tips:

  1. Communicate your expectations. Take the Bredemarket 400 Short Writing Service as an example. I clearly communicate that I offer only 2 review cycles, that you respond with comments within 3 calendar days, and that you pay me $500 (as of June 2025) within 15 calendar days. (And some consultants insist that I should collect money up front.) And if you don’t meet my expectations, I gently let you know.
  2. Expect communications. If someone says they will get back to you by a certain date, follow up. Maybe not on the exact date, but remind them what they said. But after a couple of these reminders with no response…
  3. Don’t pursue lost causes. Many of us hold out hope for too long, reliving the movie quote “so you’re telling me there’s a chance.” Bredemarket has hundreds of contacts in its CRM, but the majority are flagged as inactive because there’s no longer a chance. If they subsequently reach out to me…we’ll see.

Why to avoid unreliable business partners

Obviously you don’t want to deal with unreliable people, but why should you be so proactive that the unreliable people avoid you altogether?

At Bredemarket, I continuously return to the topic of focus (ubiquity or docking or whatever). And if I focus on attracting reliable business partners, and convey that the unreliable ones should stay away, then Bredemarket’s reputation as a quality provider will be enhanced.

And the people that want me to halve my prices can go to Fiverr…or ChatGPT.

Wanna Know a “Why” Secret About Bredemarket’s TPRM Content?

(The picture is only from Imagen 3. I’ve been using it since January, as you will see.)

Here’s a “why” question: why does Bredemarket write the things it writes about?

Several reasons:

  • To promote Bredemarket’s services so that you meet with me and buy them.
  • To educate about Bredemarket’s target industries of identity/biometrics, technology, and Inland Empire business.
  • To dive into specific topics that interest me, such as deepfakes, HiveLLM, identity assurance levels, IMEI uniqueness, and Leonardo Garcia Venegas (the guy with the REAL ID that was real).
  • Because I feel like it.

And then there are really specific reasons such as this one.

In late January I first wrote about third-party risk management (TPRM) and have continued to do so since.

Why?

TPRM firm 1

Because at that time, a TPRM firm had a need for content marketing and product marketing services, and Bredemarket started consulting for the firm.

I was very busy for 2 1/2 months, and the firm was happy with my work. And I got to dive into TPRM issues in great detail:

  • The incredibly large number of third parties that a vendor deals with…possibly numbering into the hundreds. If hundreds of third parties have YOUR data, and just ONE of those third parties is breached, bad things can happen.
  • The delicate balance between automated and manual work. News flash: if you look at my prior employers, you will see that I’ve dealt with this issue for over 30 years.
  • Organizational process maturity. News flash: I used to work for Motorola.
  • All the NIST standards related to TPRM, including NIST’s discussion of FARM (Frame, Assess, Respond, and Monitor). News flash: I’ve known NIST standards for many years.
  • Other relevant standards such as SOC 2. News flash: identity verification firms deal with SOC 2 also.
  • Fourth-party, fifth-party, and other risks. News flash: anyone who was around when AIDS emerged already knows about nth-party risk. (See the sketch after this list.)
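
To illustrate why nth-party risk adds up so quickly, here is a minimal Python sketch. The vendor names and data-sharing relationships are invented for illustration; the point is that once you trace where your data flows, a breach at any party in the chain, however far downstream, can expose it.

```python
# Hypothetical sketch: nth-party risk as graph reachability.
# All vendor names and relationships below are invented.
from collections import deque

# Each key shares data with the vendors in its list. Your third parties
# share with their own vendors, creating fourth- and fifth-party exposure.
data_sharing = {
    "YourFirm": ["PayrollCo", "CloudCRM", "PrintShop"],
    "PayrollCo": ["TaxFiler", "BenefitsHub"],
    "CloudCRM": ["EmailBlaster"],
    "EmailBlaster": ["AdTracker"],
}

def exposed_parties(root: str) -> set:
    """Return every party that could hold the root firm's data,
    at any depth (third, fourth, fifth party, and so on)."""
    seen, queue = set(), deque([root])
    while queue:
        party = queue.popleft()
        for downstream in data_sharing.get(party, []):
            if downstream not in seen:
                seen.add(downstream)
                queue.append(downstream)
    return seen

# A breach at ANY of these parties can expose YourFirm's data.
print(exposed_parties("YourFirm"))
```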

But for internal reasons that I can’t disclose (NDA, you know), the firm had to end my contract.

Never mind, I thought. I had amassed an incredible 75 days of TPRM experience—or about the same time that it takes for a BAD TPRM vendor to complete an assessment. 

But how could I use this?

TPRM firm 2

Why not put my vast experience to use with another TPRM firm? (Honoring the first firm’s NDA, of course.)

So I applied for a product marketing position with another TPRM firm, highlighting my TPRM consulting experience.

The company decided to move forward with other candidates.

The firm had another product marketing opening, so I applied again.

The company decided to move forward with other candidates.

Even if this company had a third position, I couldn’t apply for it because of its “maximum 2 applications in 60 days” rule.

TPRM firm 3

Luckily for me, another TPRM firm had a product marketing opening. TPRM hiring is active; the identity/biometrics industry isn’t hiring anywhere near this many product marketers.

  • So I applied on Monday, June 2 and received an email confirmation.
  • And received a detailed email on Tuesday, June 3 outlining the firm’s hiring process.
  • And received a third email on Wednesday, June 4:

“Thank you for your application for the Senior Product Marketing Manager position at REDACTED. We really appreciate your interest in joining our company and we want to thank you for the time and energy you invested in your application to us.

“We received a large number of applications, and after carefully reviewing all of them, unfortunately, we have to inform you that this time we won’t be able to invite you to the next round of our hiring process.

“Due to the high number of applications, we are unfortunately not able to provide individual feedback to your application at this early stage of the process.

“Again, we really appreciated your application and we would welcome you to apply to REDACTED in the future. Be sure to keep up to date with future roles at REDACTED by following us on LinkedIn and our other social channels. 

“We wish you all the best in your job search.”

Unfortunately, I apparently did not have “impressive credentials.” Oh well.

TPRM firm 4?

What now?

If nothing else, I will continue to write about TPRM and the issues I listed above.

Well, if any TPRM firm wants to contract with Bredemarket, schedule a meeting: https://bredemarket.com/cpa/

And if any TPRM firm wants to use my technology experience and hire me as a full-time product marketer, contact my personal LinkedIn account: https://www.linkedin.com/in/jbredehoft

I’m motivated to help your firm succeed, and make your competitors regret passing on me.

Sadly, despite my delusions of grandeur and expositor syndrome (to be addressed in a future Bredemarket blog post), I don’t think any TPRM CMOs are quaking in their boots and fearfully crying, “We missed out on Bredehoft, and now he’s going to work for the enemy and crush us!”

But I could be wrong.

Deepfakes Slipping Through the Silos

(Imagen 4)

Sometimes common sense isn’t enough to stop deepfake fraud. Marc Ricker of iVALT asserts that a unified response also helps.

“Too often, network teams focus on availability, while security teams chase threats after the fact. That separation creates gaps — gaps that attackers exploit.”

Ricker’s solution:

“iVALT unifies remote access and identity security through:

Instant, passwordless biometric authentication

AI-resistant technology that stops deepfake and synthetic identity fraud”

iVALT trumpets the use of 5 factors: device ID, biometrics, geolocation, time window, and “app code.” 
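
As a thought experiment, here is a minimal Python sketch of how five factors like these might be combined into a single allow/deny decision. The factor names mirror iVALT’s public list, but the checks themselves (the trusted device list, the geofence, the time window, and the app code) are invented stand-ins, not iVALT’s actual implementation.

```python
# Hypothetical sketch: combining five authentication factors into one decision.
# The factor names mirror iVALT's public list; every check below is an
# invented stand-in, not iVALT's actual implementation.
from dataclasses import dataclass
from datetime import datetime, time

@dataclass
class AuthRequest:
    device_id: str
    biometric_match: bool   # verdict from whatever biometric engine is in play
    latitude: float
    longitude: float
    requested_at: datetime
    app_code: str           # identifies the specific action being authorized

TRUSTED_DEVICES = {"device-1234"}
ALLOWED_APP_CODES = {"EXPENSE-APPROVAL"}
ALLOWED_WINDOW = (time(8, 0), time(18, 0))  # 8 AM to 6 PM

def within_geofence(lat: float, lon: float) -> bool:
    # Toy geofence: a rough bounding box around one office location.
    return 33.0 <= lat <= 34.5 and -118.5 <= lon <= -117.0

def authorize(req: AuthRequest) -> bool:
    checks = [
        req.device_id in TRUSTED_DEVICES,                                   # device ID
        req.biometric_match,                                                # biometrics
        within_geofence(req.latitude, req.longitude),                       # geolocation
        ALLOWED_WINDOW[0] <= req.requested_at.time() <= ALLOWED_WINDOW[1],  # time window
        req.app_code in ALLOWED_APP_CODES,                                  # app code
    ]
    return all(checks)  # every factor must pass
```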

  • I was curious which biometric modalities and vendors iVALT supported, so I looked it up. 
  • iVALT appears to use PingOne DaVinci, which orchestrates everything. 
  • The only biometrics specifically mentioned by iVALT are those captured on a mobile phone.
  • But it’s unclear to me whether these are the biometrics captured by the phone’s operating system (for example, Touch ID or Face ID on iOS), third-party biometrics, or all of the above.

Of course, most people don’t care about the minutiae of supported biometric modalities. 

But some do…because all biometric algorithms do NOT provide the same accuracy or performance.

When HiveLLM Pitches an Anti-Fraud Professional

I received a suspicious email from “Sara Romano,” a “scout” with HiveLLM who wanted me to bid on a biometric content calendar with a budget of “75000” (no currency specified).

HiveLLM has no corporate address, no LinkedIn presence, a website only a couple of months old, and an advertised business model in which you can ask a question for 10 cents.

Oh, and “Sara Romano” also cold-emailed Danie Wylie, who likewise found the pitch sketchy: https://m.facebook.com/story.php?story_fbid=pfbid0nvmhyuLpn3jwMv8K8sbK5EXfS4kcpjfWHicgj4BJhdFLMme87P5fvPSYf9CwjRH7l&id=100001380243595&mibextid=wwXIfr

A clear case of the need for Know Your Business (KYB).

And as you can see, HiveLLM failed a rudimentary KYB check.

But let’s ask some questions anyway.

“Sara, to confirm that HiveLLM is not a fraudulent entity, please provide your corporate address, registration information, and the identities of your owner(s) and corporate officers.”

UPDATE. At midnight Pacific Time, “Sara” sent a long response. Buried toward the end: “I’m unable to provide corporate registration or ownership details.”

The Monk Skin Tone Scale

(Part of the biometric product marketing expert series)

Now that I’ve dispensed with the first paragraph of Google’s page on the Monk Skin Tone Scale, let’s look at the meat of the page.

I believe we all agree on the problem: the need to measure the accuracy of facial analysis and facial recognition algorithms for different populations. For purposes of this post we will concentrate on a proxy for race, a person’s skin tone.

Why skin tone? Because we have hypothesized (I believe correctly) that the performance of facial algorithms is influenced by the skin tone of the person, not by whether or not they are Asian or Latino or whatever. Don’t forget that the designated races have a variety of skin tones within them.

But how many skin tones should one use?

40-point makeup skin tone scale

The beauty industry has identified over 40 different skin tones for makeup, but an approach that granular would overwhelm a machine learning evaluation:

[L]arger scales like these can be challenging for ML use cases, because of the difficulty of applying that many tones consistently across a wide variety of content, while maintaining statistical significance in evaluations. For example, it can become difficult for human annotators to differentiate subtle variation in skin tone in images captured in poor lighting conditions.

6-point Fitzpatrick skin tone scale

The tech industry’s long-standing default for categorizing skin tones was the Fitzpatrick scale.

To date, the de-facto tech industry standard for categorizing skin tone has been the 6-point Fitzpatrick Scale. Developed in 1975 by Harvard dermatologist Thomas Fitzpatrick, the Fitzpatrick Scale was originally designed to assess UV sensitivity of different skin types for dermatological purposes.

However, using this skin tone scale led to…(drumroll)…bias.

[T]he scale skews towards lighter tones, which tend to be more UV-sensitive. While this scale may work for dermatological use cases, relying on the Fitzpatrick Scale for ML development has resulted in unintended bias that excludes darker tones.

10-point Monk Skin Tone (MST) Scale

Enter Dr. Ellis Monk, whose biography could be ripped from today’s headlines.

Dr. Ellis Monk—an Associate Professor of Sociology at Harvard University whose research focuses on social inequalities with respect to race and ethnicity—set out to address these biases.

If you’re still reading this and haven’t collapsed in a rage of fury, here’s what Dr. Monk did.

Dr. Monk’s research resulted in the Monk Skin Tone (MST) Scale—a more inclusive 10-tone scale explicitly designed to represent a broader range of communities. The MST Scale is used by the National Institutes of Health (NIH) and the University of Chicago’s National Opinion Research Center, and is now available to the entire ML community.

From https://skintone.google/the-scale.
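
To make the ML use case concrete, here is a minimal Python sketch that breaks a face verification error rate out by MST tone. The records are invented placeholders; in a real evaluation, each record would come from a labeled test set, and you would also report false match rates and confidence intervals per tone.

```python
# Hypothetical sketch: per-tone error rates using the 10-point MST Scale.
# The records are invented; a real test set would be far larger.
from collections import defaultdict

# Each record: (mst_tone 1-10, is_genuine_pair, algorithm_said_match)
records = [
    (2, True, True), (2, True, False), (5, True, True),
    (8, True, False), (8, True, True), (9, True, True),
]

attempts = defaultdict(int)
misses = defaultdict(int)
for tone, genuine, matched in records:
    if genuine:               # only genuine pairs count toward the FNMR
        attempts[tone] += 1
        if not matched:
            misses[tone] += 1

for tone in sorted(attempts):
    fnmr = misses[tone] / attempts[tone]
    print(f"MST {tone}: false non-match rate {fnmr:.2f}")
```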

Where is the MST Scale used?

According to Biometric Update, iBeta has developed a demographic bias test based upon ISO/IEC 19795-10, which itself incorporates the Monk Skin Tone Scale.

At least for now. Biometric Update notes that other skin tone measurements are under development, including the “Colorimetric Skin Tone (CST)” and INESC TEC/Fraunhofer Institute research that uses “ethnicity labels as a continuous variable instead of a discrete value.”

But will there be enough data for variable 8.675309?

What “Gender Shades” Was Not

Mr. Owl, how many licks does it take to get to the Tootsie Roll center of a Tootsie Pop?

A good question. Let’s find out. One, two, three…(bites) three.

From YouTube.

If you think Mr. Owl’s conclusion was flawed, let’s look at Google.

One, two, three…three

I was researching the Monk Skin Tone Scale for a future Bredemarket blog post, but before I share that post I have to respond to an inaccurate statement from Google.

Google began its page “Developing the Monk Skin Tone Scale” with the following statement:

In 2018, the pioneering Gender Shades study demonstrated that commercial, facial-analysis APIs perform substantially worse on images of people of color and women.

Um…no, it didn’t.

I will give Google props for using the phrase “facial-analysis,” which clarifies that Gender Shades was an exercise in categorization, not individualization.

But to say that Gender Shades “demonstrated that commercial, facial-analysis APIs perform substantially worse” in certain situations is an ever-so-slight exaggeration.

Kind of like saying that a bad experience at a Mexican restaurant in Lusk, Wyoming, demonstrates that all Mexican restaurants are bad.

How? I’ve said this before:

The Gender Shades study evaluated only three algorithms: one from IBM, one from Microsoft, and one from Face++. It did not evaluate the hundreds of other facial recognition algorithms that existed in 2018 when the study was released.

So the conclusion that all facial classification algorithms perform substantially worse cannot be supported…because in 2018 the other algorithms weren’t tested.

One, two, three…one hundred and eighty nine

In 2019, NIST tested 189 software algorithms from 99 developers for demographic bias, and has continued to test for demographic bias since.

In these tests, vendors volunteer to have NIST test their algorithms for demographic bias.

Guess which three vendors have NOT submitted their algorithms to NIST for testing?

You guessed it: IBM, Microsoft, and Face++.

Anyway, more on the Monk Skin Tone Scale here, but I had to share this.

Today’s Large Multimodal Model (LMM) is FLUX.1 Kontext

Do you remember when I explained what a Large Multimodal Model (LMM) is, and why an LMM is crucial for correctly rendering text in generative AI-created images?

Well, Black Forest Labs (with an Impressum…in Delaware) announced a new LMM last Thursday:

“FLUX.1 Kontext marks a significant expansion of classic text-to-image models by unifying instant text-based image editing and text-to-image generation. As a multimodal flow model, it combines state-of-the-art character consistency, context understanding and local editing capabilities with strong text-to-image synthesis.”

FLUX.1 Kontext has also received TechCrunch coverage.

And yes, the company does have a German presence.

(And no, the picture is obviously not from FLUX.1 Kontext. It’s from Imagen 4.)

Presentation Attack Injection, Injection Attack Detection, and Deepfakes on LinkedIn and Substack

Just letting my Bredemarket blog readers know of two items I wrote on other platforms.

  • “Presentation Attack Injection, Injection Attack Detection, and Deepfakes.” This LinkedIn article, part of The Wildebeest Speaks newsletter series, is directed toward people who already have some familiarity with deepfake attacks.
  • “Presentation Attack Injection, Injection Attack Detection, and Deepfakes (version 2).” This Substack post does NOT assume any deepfake attack background.

Okta Talks About Evil Twins

Public Wi-Fi can be fun, especially when you don’t realize which networks were legitimately set up by the business.

And it’s really fun when someone pulls the “evil twin” trick, described by Okta.

“A hacker looks for a location with free, popular WiFi. The hacker takes note of the Service Set Identifier (SSID) name. Then, the hacker uses a tool like a WiFi Pineapple to set up a new account with the same SSID. Connected devices can’t differentiate between legitimate connections and fake versions.”

The next steps are to trick users into providing the authentication details for the “good” network, lure people into logging in to the “evil” network, then steal any unencrypted data.

Of course you don’t have to go to those extremes. If the business fails to publicize what the “good” network is called, just set up a network called “ReelOffishelWiFi” and see how many suckers you get.
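
On the defensive side, here is a minimal Python sketch, assuming you already have Wi-Fi scan results as (SSID, BSSID) pairs from whatever scanning tool you use. It flags SSIDs advertised by more than one access point, a crude evil-twin heuristic; legitimate multi-AP deployments will also trigger it, so treat a hit as a prompt for a closer look rather than proof of an attack.

```python
# Hypothetical sketch: flag SSIDs that more than one access point advertises.
# The scan results are invented; plug in output from your own scanning tool.
from collections import defaultdict

scan_results = [
    ("CoffeeShopWiFi", "aa:bb:cc:11:22:33"),
    ("CoffeeShopWiFi", "de:ad:be:ef:00:01"),  # second AP using the same name
    ("ReelOffishelWiFi", "ca:fe:ca:fe:00:02"),
]

ssid_to_bssids = defaultdict(set)
for ssid, bssid in scan_results:
    ssid_to_bssids[ssid].add(bssid)

for ssid, bssids in ssid_to_bssids.items():
    if len(bssids) > 1:
        print(f"'{ssid}' is advertised by {len(bssids)} access points; take a closer look")
```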

(Imagen 4)