Multi-accounting: Not For Bean Counters

I just ran across a phrase I had never seen before: “multi-accounting.” But it has nothing to do with “cooking the books.”

Incognia used the phrase in its report “The State of Fraud in the Gig Economy” (available here), and the term refers to people fraudulently creating multiple accounts to evade bans. If Henry Kissinger is banned from creating an account at the Ho Chi Minh website, perhaps “Kenry Hissinger” will sign up for an account.

One clear pattern emerges from the report: multi-accounting and ban evasion are a key part of the engine behind many of the fraud concerns Incognia describes. Abuse at scale—whether it’s stacking promos, exploiting refunds, or coordinating scams—typically depends on the ability to create and recycle accounts without getting caught. Collusion and cancellation abuse can rely on the same cycle.

Incognia recommends that gig economy firms examine their upstream processes to “close the gaps that enable account recycling.”

However, some device ID, tamper detection, and location intelligence anti-fraud tools are flawed and easily circumvented.
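To see why a device-ID signal alone can fall short, here is a toy sketch (not any vendor’s actual API; the fields and fingerprints are invented) of flagging a new signup whose device matches a previously banned account. A fraudster who resets or spoofs the fingerprint sails right through, which is why Incognia points upstream.

```python
from dataclasses import dataclass

@dataclass
class Signup:
    account_name: str
    device_fingerprint: str  # hypothetical hash of device attributes

# Fingerprints previously tied to banned accounts (toy data).
banned_fingerprints = {"a1b2c3"}

def is_likely_recycled(signup: Signup) -> bool:
    """Return True if the signup reuses a device tied to a banned account."""
    return signup.device_fingerprint in banned_fingerprints

print(is_likely_recycled(Signup("Kenry Hissinger", "a1b2c3")))    # True: flagged
print(is_likely_recycled(Signup("Kenry Hissinger", "reset999")))  # False: check evaded
```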

I’m sure Incognia would be more than happy to help you find an anti-fraud solution. Its solution for ride-sharing firms is described here.

Eight is Enough: Eight Reasons This Substack “Compromised Firmware” Post Sounded Like A Hack

Last night I saw a Substack post from one of my subscriptions, but I immediately distrusted the post.

The post was purportedly from Kathy Kristof from SideHusl.com. Now Kristof herself is legitimate, and her SideHusl website evaluates…well, side hustles.

But this message didn’t sound like Kathy, and my spidey sense started tingling.

First part of scam post.
Second part of scam post.

Let me count the ways.

  1. “We.” Normally if an entity suffers a breach, the entity uses its name.
  2. “Your device”…“the firmware level.” Substack posts can be viewed on a variety of devices. So this supposed breach affected all of them?
  3. “If you are receiving this email.” While Substack subscribers can receive emails of posts, they also appear on the Substack website. I happened to be on the Substack website when I saw the post. I was not reading an email.
  4. “Take immediate action…by updating your firmware.” The typical scam sense of urgency, coupled with a nonsensical request (see 2).
  5. “The FBI has been notified.” Such a report should probably go to a different agency.
  6. “support@trezor.io.” Trezor is a legitimate company that secures crypto assets…which has nothing to do with SideHusl or Substack. And by the way…
  7. “Substack” (not). In the same way that the post does not explicitly mention SideHusl, it doesn’t explicitly mention Substack either.
  8. “Access Dashboard button.” The reader is asked to click this button, supposedly to update their firmware (see 2).

My immediate reaction?

“I ain’t clicking that Access Dashboard button.”

My note restacking the scam post.

And:

“Suspicious message, purportedly from Kathy Kristof at Sidehusl.com, asking you to click a button.

“No way.”

Independent note with screenshots of the original scam post.

Be careful out there.

Deepfake Voices Have Been Around Since the 1980s

(Part of the biometric product marketing expert series)

Inland Empire locals know why THIS infamous song is stuck in my head today.

“Blame It On The Rain,” (not) sung by Milli Vanilli.

For those who don’t know the story, Rob Pilatus and Fab Morvan performed as the band Milli Vanilli and released an extremely successful album produced by Frank Farian. The title? “Girl You Know It’s True.”

But while we were listening to and watching Pilatus and Morvan sing, we were actually hearing the voices of Charles Shaw, John Davis, and Brad Howell. So technically this wasn’t a modern deepfake: rather than imitating the voice of a known person, Shaw et al. were providing the voices of unknown people. But the purpose was still deception.

Anyway, the ruse was revealed, Pilatus and Morvan were sacrificed, and things got worse.

“Pilatus, in particular, found it hard to cope, battling substance abuse and legal troubles. His tragic death in 1998 from a suspected overdose marked a sad epilogue to the Milli Vanilli saga.”

But there were certainly other examples of voice deepfakes in the 20th century…take the impressionist Rich Little.

So deepfake voices aren’t a new problem. It’s just that they’re a lot easier to create today…which means that a lot of fraudsters can use them easily.

And if you are an identity/biometric marketing leader who needs Bredemarket’s help to market your anti-deepfake product, schedule a free meeting with me at https://bredemarket.com/mark/.

Using Grok For Evil: Deepfake Celebrity Endorsement

Using Grok for evil: a deepfake celebrity endorsement of Bredemarket?

Although in the video the fake Taylor Swift ends up looking a little like a fake Drew Barrymore.

Needless to say, I’m taking great care to fully disclose that this is a deepfake.

But some people don’t.

What is Truth? (What you see may not be true.)

I just posted the latest edition of my LinkedIn newsletter, “The Wildebeest Speaks.” It examines the history of deepfakes / likenesses, including the Émile Cohl animated cartoon Fantasmagorie, my own deepfake / likeness creations, and the deepfake / likeness of Sam Altman committing a burglary, authorized by Altman himself. Unfortunately, some deepfakes are NOT authorized, and that’s a problem.

Read my article here: https://www.linkedin.com/pulse/what-truth-bredemarket-jetmc/


In the PLoS One Voice Deepfake Detection Test, the Key Word is “Participants”

(Part of the biometric product marketing expert series)

A recent PYMNTS article entitled “AI Voices Are Now Indistinguishable From Humans, Experts Say” includes the following about voice deepfakes:

“A new PLoS One study found that artificial intelligence has reached a point where cloned voices are indistinguishable from genuine ones. In the experiment, participants were asked to tell human voices from AI-generated ones across 80 samples. Cloned voices were mistaken for real in 58% of cases, while human voices were correctly identified only 62% of the time.”

What the study didn’t measure

Since you already read the title of this post, you know that I’m concentrating on the word “participants.”

The PLoS One experiment used PEOPLE to try to distinguish real voices from deepfake ones.

And people aren’t all that accurate. Never have been.
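Running the arithmetic on the quoted numbers (and assuming the cloned and real samples were roughly balanced, which the summary doesn’t state) shows how close to a coin flip the participants were:

```python
# Arithmetic on the reported results; assumes roughly balanced samples
# of cloned and real voices, which the quoted summary does not state.
cloned_mistaken_for_real = 0.58                          # reported
cloned_correctly_flagged = 1 - cloned_mistaken_for_real  # 0.42
real_correctly_identified = 0.62                         # reported

balanced_accuracy = (cloned_correctly_flagged + real_correctly_identified) / 2
print(f"Balanced accuracy: {balanced_accuracy:.0%}")  # 52%, barely above chance
```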

Picture from Google Gemini.

Before you decide that people can’t detect fake voices…

…why not have an ALGORITHM give it a try?

What the study did measure

But to be fair, that wasn’t the goal of the PLoS One study, which specifically focused on human perception.

“Recently, an intriguing effect was reported in AI-generated faces, where such face images were perceived as more human than images of real humans – a “hyperrealism effect.” Here, we tested whether a “hyperrealism effect” also exists for AI-generated voices.”

For the record, the researchers did NOT discover a hyperrealism effect in AI-generated voices.

Do you offer a solution?

But if future deepfake voices sound realer than real, then we will REALLY need the algorithms to spot the fakes.

And if your company has a voice deepfake detection solution, I could have talked about it right now in this post.

Or on your website.

Or on your social media.

Where your prospects can see it…and purchase it.

And money in your pocket is realer than real.

Let’s talk. https://bredemarket.com/mark/

Picture from Google Gemini.

Proof of IAL3

I was up bright and early to attend a Liminal Demo Day, and the second presenter was Proof. Lauren Furey and Kurt Ernst presented, with Lauren assuming the role of the agent verifying Kurt’s identity.

The mechanism to verify the identity was a video session. In this case, Agent Lauren used three methods:

  • Examining Kurt’s ID, which he presented on screen.
  • Examining Kurt’s face (selfie).
  • Examining a credit card presented by Kurt.

One important note: Agent Lauren had complete control over whether to verify Kurt’s identity or not. She was not a mere “human in the loop.” Even if Kurt passed all the checks, Lauren could fail the identity check if she suspected something was wrong (such as a potential fraudster prompting Kurt what to do).
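Here’s a minimal sketch of that decision flow as I understood it from the demo. The function and parameter names are mine, not Proof’s; the point is that the agent’s judgment is a hard gate on top of the automated checks.

```python
def identity_verified(id_ok: bool, selfie_ok: bool, card_ok: bool,
                      agent_approves: bool) -> bool:
    """All three checks must pass AND the agent must sign off."""
    return id_ok and selfie_ok and card_ok and agent_approves

# Kurt passes every check, but Agent Lauren suspects off-camera coaching:
print(identity_verified(True, True, True, agent_approves=False))  # False
```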

If you’ve been following my recent posts on identity assurance level, you know what happened next. Yes, I asked THE question:

“Another question for Proof: does your solution meet the requirements for supervised remote identity proofing (IAL3)?”

Lauren responded in the affirmative.

It’s important to note that Proof’s face authentication solution incorporates liveness detection, so there is reasonable assurance that the person’s face is not a spoof or a synthetic identity.

So I guess I’m right: we’re seeing more and more IAL3 implementations, even if they don’t have the super-duper Kantara Initiative certification that NextgenID has.

Why Do We Trust SMS?

I hate to use the overused t word (trust), but in this case it’s justified.

“Scammers are aware that people are more likely to open and read a text message rather than an email. The open rates for text messages are more than 90% while the open rates for emails is less than 30%. In addition, many email providers have filters that are able to identify and filter out phishing emails while the filtering capabilities on text messages is much less. Additionally, people tend to trust text messages more than emails. Text message also may prompt a quick response before the targeted victim can critically consider the legitimacy of the text message.”

From Scamicide, https://scamicide.com/2025/09/18/scam-of-the-day-september-19-2025-treasury-refund-text-smishing-scam/

What I can’t figure out is WHY text messages have such a high level of t[REDACTED]. Does SMS feel more personal?

Unlocking High-Value Financial Transactions: The Critical Role of Identity Assurance Level 3 (IAL3)

(Picture designed by Freepik.)

I’ve previously discussed the difference between Identity Assurance Level 2 (IAL2) and Identity Assurance Level 3 (IAL3). The key differentiator is that IAL3 requires either (1) in-person identity proofing or (2) remote supervised identity proofing.

Who and how to use IAL3

Who can provide remote supervised identity proofing?

“NextgenID Trusted Services Solution provides Supervised Remote Identity Proofing identity stations to collect, review, validate, proof, and package IAL-3 identity evidence and enrollment data for CSPs operating at IAL-3.”

And there are others who can provide the equivalent of IAL3, as we will see later.

How do you supervise a remote identity proofing session?

“The camera(s) a CSP [Credential Service Provider] employs to monitor the actions taken by a remote applicant during the identity proofing session should be positioned in such a way that the upper body, hands, and face of the applicant are visible at all times.”

But that’s not my focus right now. What matters to me is WHEN we need remote supervised identity proofing sessions.

Mitek Systems’ Adam Bacia provides one use case:

“IAL3 is reserved for high-risk environments such as sensitive government services.”

So that’s one use case.

But there is another.

When to use IAL3 for financial transactions

Governments aren’t the only entities that need to definitively know identities in critically important situations.

What about banks and other financial institutions, which are required by law to know their customers?

Now, consider the days when one of my Bredemarket clients paid me by paper check. Rather than deposit it at a teller window (in person) or at an ATM (remote supervised), I would deposit the check with my smartphone app (remote unsupervised).

The bank assumed a level of risk by doing this, especially since the deposited check would not be in the bank’s physical possession after the deposit was completed.

But guess what? The risk was acceptable for my transactions. I’m disclosing Bredemarket company secrets, but that client never wrote me a million dollar check. Actually, none of my clients has ever written me a million dollar check. (Perhaps I should raise my rates. It’s been a while. If I charge an hourly rate of $100,000, I will get those million dollar checks!)
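If I were sketching how a bank might route transactions to identity proofing levels under that kind of risk calculus, it might look something like the following. The dollar thresholds and the IAL mapping are mine, purely for illustration; real institutions set their own risk policies.

```python
# Hypothetical risk-routing sketch; thresholds and mappings are invented.
def required_proofing(amount_usd: float, high_risk_country: bool) -> str:
    if amount_usd >= 1_000_000 or high_risk_country:
        return "IAL3: in-person or supervised remote identity proofing"
    if amount_usd >= 10_000:
        return "IAL2: unsupervised remote proofing (document plus selfie)"
    return "Existing credential (e.g., mobile check deposit)"

print(required_proofing(1_500, high_risk_country=False))
print(required_proofing(2_000_000, high_risk_country=False))
```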

So how do financial institutions implement the two types of IAL3?

In-person

Regarding IAL3 and banks, in-person transactions are supported in certain cases, even with the banks’ moves to close branches.

“If you need to initiate a funds transfer payment, an authorized signer for your account may also initiate funds (wire) transfers at any Chase branch.”

Note the use of the word “may.” However, if you don’t want to go to a branch to make a wire transfer, you have to set up an alternate method in advance.

Remote supervised

What about remote supervised transactions at financial institutions, where you are not physically present, but someone at the bank remotely sees you and everything you do? Every breath you take? And every move you make? Etcetera.

It turns out that identity verification providers support video sessions between businesses (such as banks) and their customers. For example, Incode’s Developer Hub includes several references to a video conference capability.

To my knowledge, Incode has not publicly stated whether any of its financial identity customers are employing this video conference capability, but it’s certainly possible. And when done correctly, this can support the IAL3 specifications.

Why to use IAL3 for financial transactions

For high-risk transactions, such as those with high values or those involving particular countries, IAL3 protects both financial institutions and their customers. It lessens the fraud risk and the possible harm to both parties.

Some customers may see IAL3 as an unnecessary bureaucratic hurdle…but they would feel differently if THEY were the ones getting ripped off.

This is why both financial institutions and identity verification vendors need to explain the benefits of IAL3 procedures for riskier transactions. And do it in such a way that the end customers DEMAND IAL3.

To create the content to influence customer perception, you need to answer the critically important questions, including why, how, and benefits. (There are others.)

And if your firm needs help creating that content, Underdog is here.

I mean Bredemarket is here.

Visit https://bredemarket.com/mark/ and schedule a time to talk to me—for free. I won’t remotely verify your identity during our videoconference, but I will help you plan the content your firm needs.

Know Your Recruiter, Tuesday 9/16/2025 Edition

A supposed recruiter on LinkedIn with two names (Adriana and Linda) and only two connections (whoops, now three) tried to scam a friend of mine.

But my friend smelled a rat.

Another employment scammer.

Know your recruiter!

(Hiring rat picture from Imagen 4)
