My Continuing (Positive) Experiences With Wildebeest Bank

(CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=245337.)

The names and identification numbers have been changed to protect my PII.

Early this morning, I received an email from my bank, Wildebeest Bank.

Your Wildebeest Bank debit card transaction was declined.

A transaction on your debit card ending in 1234 was declined.

So I went to my Wildebeest Bank app to see what company had tried to charge my card, and how much they tried to charge. But I found nothing.

Then I realized that my debit card does NOT end in 1234, but in 5678. I’ve had that debit card since…well, since May 2024, when “enron*publications us” fraudulently charged $8.28 to my card that DID end in 1234.

Wildebeest Bank cancelled that card immediately and issued a new one.

And no one tried to use that old card until today.

And Wildebeest Bank just laughed at the attempt, not even bothering to inform me of the details.

Why is Morph Detection Important?

We’re all familiar with the morphing of faces from subject 1 to subject 2, in which there is an intermediate subject 1.5 that combines the features of both of them. But did you know that this simple trick can form the basis for fraudulent activity?

Back in the 20th century, morphing was primarily used for entertainment purposes. Nothing that would make you cry, even though there were shades of gray in the black or white representations of the morphed people.

Godley and Creme, “Cry.”
Michael Jackson, “Black or White.” (The full version with the grabbing.) The morphing begins about 5 1/2 minutes into the video.

But Godley, Creme, and Jackson weren’t trying to commit fraud. As I’ve previously noted, a morphed picture can be used for fraudulent activity. Let me illustrate this with a visual example. Take a look at the guy below.

From NISTIR 8584.

Does this guy look familiar to you? Some of you may think he kinda sorta looks like one person, while others may think he kinda sorta looks like a different person.

The truth is, the person above does not exist. This is actually a face morph of two different people.

From NISTIR 8584.

Now imagine a scenario in which a security camera is monitoring the entrance to the Bush ranch in Crawford, Texas. But instead of Bush’s facial image being in the database, someone has tampered with the database and inserted the “Obushama” image…and that image is similar enough to Barack Obama to allow Obama to fraudulently enter Bush’s ranch.

Or alternatively, the “Obushama” image is used to create a new synthetic identity, unconnected to either of the two.

But what if you could detect that a particular facial image is not a true image of a person, but some type of morph attempt? NIST has a report on this:

“To address this issue, the National Institute of Standards and Technology (NIST) has released guidelines that can help organizations deploy and use modern detection methods designed to catch morph attacks before they succeed.”

The report, “NIST Interagency Report NISTIR 8584, Face Analysis Technology Evaluation (FATE) MORPH Part 4B: Considerations for Implementing Morph Detection in Operations,” is available in PDF form at https://doi.org/10.6028/NIST.IR.8584.
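
If you’re wondering what a morph detector actually looks for, here’s a minimal sketch of one underlying idea: a morph is built to resemble both contributing subjects, so an enrolled template that matches live captures of two different people is suspect. The embeddings are assumed to come from some upstream face recognizer, and the 0.6 threshold is my placeholder; neither is specified in NISTIR 8584.

```python
# Minimal sketch of a morph red flag (illustration only): a morphed
# enrollment image tends to match live captures of BOTH contributors.
# Assumption: embeddings come from an upstream face recognizer, and the
# 0.6 threshold is a placeholder; neither comes from NISTIR 8584.
import numpy as np

MATCH_THRESHOLD = 0.6  # placeholder similarity threshold

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def matches_multiple_subjects(enrolled: np.ndarray,
                              probes_by_subject: dict[str, np.ndarray]) -> bool:
    """Flag an enrolled template that matches two or more *different*
    live subjects above the threshold -- a classic morph symptom."""
    hits = [
        subject for subject, probe in probes_by_subject.items()
        if cosine_similarity(enrolled, probe) >= MATCH_THRESHOLD
    ]
    return len(hits) >= 2
```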

And a personal aside to anyone who worked for Safran in the early 2010s: we’re talking about MORPH detection, not MORPHO detection. I kept on mistyping the name as I wrote this.

Fake Support (this was NOT Intuit)

Know your business, today’s edition.

I knew I was asking for trouble when I answered a simple question about whether I used Quickbooks.

Sure enough, I subsequently received a call from the Quickbooks Support Department.

After wasting his time for a few minutes, I asked for his Intuit email address.

He didn’t have one. Just a Quickbooks Support email address.

So I just blocked a number from the 207 area code. Which is in Maine, the hotbed of Intuit activity.

Perhaps instead of his Intuit email address, I should have asked him to consent to a biometric scan that matches against Intuit employee records.

What is the Proper Identity Assurance Level (IAL) for Employer Identification Number (EIN) Assignment?

(Imagen 4)

In the latest Know Your Business brouhaha, the Treasury Inspector General for Tax Administration (TIGTA) has flagged some potential gaps in the assignment of an Employer Identification Number, or EIN.

It seems that some so-called “businesses” are using an EIN as a facade for illegal activity…and insufficient identity assurance is preventing the fraudsters from being caught.

Obtaining Employer Identification Numbers to commit tax fraud

What is an EIN? In the same way that U.S. citizens have Social Security Numbers, U.S. businesses have Employer Identification Numbers. It’s not a rigorous process to get an EIN; heck, Bredemarket has one.

But maybe it needs to be a little more rigorous, according to TIGTA.

“EINs are targeted and used by unscrupulous individuals to commit fraud. In July 2021, we reported that there were hundreds of potentially fraudulent claims for employer tax credits….Further, in April 2024, our Office of Investigations announced that it helped prevent $3.5 billion from potentially being paid to fraudsters. Our special agents identified a scheme where individuals obtained an EIN for the sole purpose of filing business tax returns to improperly claim pandemic-related tax credits.”

Yes, that’s $3.5 billion with a B. That’s a lot of fraud.

The pandemic may have come and gone, but the temptation to file fraudulent business tax returns with an improperly obtained EIN continues.

Facade.

Enter the Identity Assurance Level

So how does the Internal Revenue Service (IRS) gatekeep the assignment of EINs?

By specifying an Identity Assurance Level (IAL) before assigning an EIN.

Specifically, Identity Assurance Level 1.

“In December 2024, the IRS completed the annual reassessment of the Mod IEIN system. The IRS rated the identity proofing and authentication requirements at Level 1 (the same level as the initial assessment in January 2020).”

IAL1 doesn’t “assure” anything…except continued tax fraud

If you’ve read the Bredemarket blog or other biometric publications, you know that IAL1 is, if I may use a technical term, a “nothingburger.” The National Institute of Standards and Technology (NIST) says this about IAL1:

“There is no requirement to link the applicant to a specific real-life identity. Any attributes provided in conjunction with the subject’s activities are self-asserted or should be treated as self-asserted (including attributes a CSP asserts to an RP). Self-asserted attributes are neither validated nor verified.”

If that isn’t a shady way to identify a business, I don’t know what is.

Would IAL2 or IAL3 be better for EIN assignment?

These days it’s probably unreasonable to require every business to use Identity Assurance Level 3 (discussed in the Bredemarket post “Identity Assurance Level 3 (IAL3): When Identity Assurance Level 2 (IAL2) Isn’t Good Enough”) to obtain an EIN. As a reminder, IAL3 requires either in-person or supervised remote identity proofing.

But I agree with TIGTA’s assertion that Identity Assurance Level 2, with actual evidence of the real-world identity, should be the minimum.
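
Enforcing that minimum would be the easy part. Here’s a minimal sketch of what the policy gate might look like, assuming a hypothetical may_assign_ein() check rather than anything in the IRS’s actual Mod IEIN system; only the ordering of the levels comes from NIST SP 800-63A.

```python
# Minimal sketch of an IAL gate for EIN assignment (hypothetical policy,
# not the IRS Mod IEIN system). The level ordering follows NIST SP 800-63A.
from enum import IntEnum

class IAL(IntEnum):
    IAL1 = 1  # self-asserted attributes; no link to a real-life identity
    IAL2 = 2  # remote or in-person proofing against identity evidence
    IAL3 = 3  # in-person or supervised remote identity proofing

MINIMUM_IAL_FOR_EIN = IAL.IAL2  # TIGTA's recommended floor, per the post

def may_assign_ein(applicant_ial: IAL) -> bool:
    """Allow EIN assignment only when the applicant was proofed at or
    above the minimum assurance level."""
    return applicant_ial >= MINIMUM_IAL_FOR_EIN

assert not may_assign_ein(IAL.IAL1)  # today's effective level would be rejected
assert may_assign_ein(IAL.IAL2)      # TIGTA's recommended minimum passes
```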

Does your firm offer an IAL2/IAL3 product?

And if your identity/biometric firm offers a product that conforms to IAL2 or IAL3, and you need assistance creating product marketing content, talk to Bredemarket.

GeoComply, Geolocation, and First-Party Fraud

(Imagen 4)

As you may know, I am a fan of including geolocation as a factor of identity verification and authentication.

So I was delighted to learn that last Wednesday’s Liminal Demo Day on First-Party Fraud started with a demonstration from GeoComply.

How does GeoComply use geolocation to reduce first-party fraud?

1. Collect data from a user’s device: GPS, GSM, WiFi, plus IP addresses.

2. Verify location accuracy. GeoComply’s rules engine runs hundreds of location data, device integrity, and identity fraud checks on every geolocation transaction to detect suspicious activity.

3. Combine real-time and historical data to detect and flag patterns of location fraud. GeoComply’s models are constantly updated using machine learning and human intelligence (see the sketch below).
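
Here’s a rough sketch of how steps 2 and 3 might combine: compare where chargebacks were filed from against the account’s registered address, and weigh both the volume of requests and any geographic mismatch. This is my illustration, not GeoComply’s rules engine; the distance and count thresholds below are arbitrary assumptions.

```python
# Rough sketch of combining chargeback volume with geolocation history.
# My illustration, not GeoComply's engine; the 50 km radius and the
# 3-chargeback threshold are arbitrary assumptions.
from dataclasses import dataclass
from math import asin, cos, radians, sin, sqrt

@dataclass
class Chargeback:
    lat: float
    lon: float

def haversine_km(lat1, lon1, lat2, lon2) -> float:
    """Great-circle distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def flag_chargeback_pattern(home_lat: float, home_lon: float,
                            chargebacks: list[Chargeback],
                            max_km: float = 50, max_count: int = 3) -> dict:
    """Flag accounts that (a) file too many chargebacks and/or (b) file
    them far from the registered mailing address."""
    far_away = sum(
        1 for c in chargebacks
        if haversine_km(home_lat, home_lon, c.lat, c.lon) > max_km
    )
    return {
        "too_many_chargebacks": len(chargebacks) > max_count,
        "filed_far_from_home": far_away > 0,
        "far_count": far_away,
    }
```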

In his demonstration, Matthew Boland showed an example of someone who had filed numerous chargeback requests in a short period. That’s a red flag in itself.

But when Boland combined the real-time and historical data to analyze the geolocations of the chargeback requests, he found that many of the requests were filed from the same location as the person’s mailing address. So at least that was legit, and the chargeback requests weren’t being filed from China.

In addition to first-party fraud, GeoComply handles geofencing for gambling operations. To see an example of attempted Super Bowl 2024 gambling transactions in Kansas (where the bets were legal) and Missouri (where they were not), watch this video.

Kansas City (KS, MO) activity on Super Bowl Sunday.
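
For the geofencing case, the core test is simply whether the bettor’s verified location falls inside a jurisdiction where the wager is legal, i.e. a point-in-polygon check. The sketch below uses a crude bounding box as a stand-in for the Kansas boundary; real geofences use far more precise polygons, and this is not GeoComply’s implementation.

```python
# Minimal geofencing sketch: ray-casting point-in-polygon test.
# The "Kansas" polygon below is a rough placeholder rectangle, not the
# legal state boundary, and this is not GeoComply's implementation.

def point_in_polygon(lat: float, lon: float,
                     polygon: list[tuple[float, float]]) -> bool:
    """Return True if (lat, lon) falls inside the polygon (ray casting)."""
    inside = False
    n = len(polygon)
    for i in range(n):
        lat1, lon1 = polygon[i]
        lat2, lon2 = polygon[(i + 1) % n]
        crosses = (lon1 > lon) != (lon2 > lon)
        if crosses and lat < (lat2 - lat1) * (lon - lon1) / (lon2 - lon1) + lat1:
            inside = not inside
    return inside

# Placeholder bounding box for Kansas (approximate, illustration only).
KANSAS_APPROX = [(37.0, -102.05), (40.0, -102.05), (40.0, -94.6), (37.0, -94.6)]

def allow_wager(lat: float, lon: float) -> bool:
    """Accept the bet only if the verified location is inside the geofence."""
    return point_in_polygon(lat, lon, KANSAS_APPROX)
```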

When Social Platforms Convert Users Into Identity Verification Salespeople

(Imagen 4)

(Author’s preface: I was originally going to schedule this post for the middle of next week. But by the time I wrote it, the end of the post referenced a current event of astronomical proportions. Since said current event may be forgotten by the middle of next week, I am publishing it now.)

As a proponent of identity verification and a biometric product marketing expert, I should like this…but I don’t.

I got the message and the message is clear

You get a message on a platform from someone you don’t know. The message may look something like this:

“John,

“I hope this message finds you well. I came across your profile and was truly impressed by your background. While I’m not a recruiter, I’m assisting in connecting talented professionals with a startup that is working on a unique initiative.

“Given your experience, I believe you could be a fantastic fit for their senior consultant role. If you’re open to exploring this opportunity, I’d be happy to share more details and introduce you to the team directly. Please let me know if you’re interested!”

Let’s count the red flags in this message, which is one I actually received on May 30 from someone named David Joseph:

  • The author was truly impressed by my background, but didn’t cite any specifics about my background that impressed them. This exact same message could be sent to a biometric product marketing expert, a nuclear physicist, or a store cashier.
  • The author is not a recruiter, but a connector who will presumably pass me on to someone else. Why doesn’t the “someone else” contact me directly?
  • The whole “unidentified startup working on a unique initiative” story. Yes, some companies operate as stealth firms before revealing their corporate identity. Amway. Primerica. Countless MLMs with bad reputations. Trust me, these initiatives are not unique.
  • That senior consultant title. Not junior consultant. Senior consultant. To make that envelope stuffing role even more prestigious.

I got the note and the note is even clearer

But I wasn’t really concerned with the message. I get these messages all the time.

So what concerned me?

The note attached to the message by the platform that hosted the message.

“Don’t know David? Ask David to verify their profile information before responding for added security.”

The platform, if you haven’t already guessed, is LinkedIn, the message a LinkedIn InMail.

Let’s follow the trail.

  • LinkedIn let “David” use the platform without verifying his identity or verifying that Randstad is truly his employer as his profile states.
  • LinkedIn sold “David” a bunch of InMail credits so that he could privately share this unique opportunity.
  • Now LinkedIn wants me to do its dirty work and say, “Hey David, why don’t you verify your profile?”

Now the one thing in LinkedIn’s favor is that LinkedIn—unlike Meta—lets its users verify their profiles for free. Meta charges you for this.

But again, why should I do LinkedIn’s dirty work?

Why doesn’t LinkedIn prevent users from sending InMails unless their profiles are verified?

The answer: LinkedIn makes a ton of money selling InMails to people without verified profiles. And thus makes money off questionable businesspeople and outright scammers.

Instead of locking down the platform and preventing scammers from joining in the first place.

It’s like LinkedIn openly embraces scammers.

And everyone knows it.

Imagen 4.

Oh Heck, I Look Like a Scammer

Scamicide recently talked about a “free piano scam” in which the scammer offers the victim a free piano, provided that the victim pays delivery costs north of $600 in advance. Guess what never gets delivered?

The post goes on to say:

“A big indication that this is a scam is that the moving company asks for payment by Zelle or cryptocurrencies.  No legitimate business asks for payment by Zelle or cryptocurrencies, but scammers often do because of the anonymity for these types of payments and the difficulty in tracing or reversing payments made in this manner.”

Well, Bredemarket doesn’t REQUIRE Zelle…but I take it. (No crypto.)

Employment Fraudster Lack Of Differentiation

While the fraud-fighting companies don’t differentiate themselves, it turns out the fraudsters aren’t differentiating themselves either.

“Gibson Karen.”

Take Gibson Karen, who commented that I should connect with a particular person in Gibson’s network.

  • Except that Gibson has no network: 0 connections, 0 followers, and 0 recommendations despite nearly 2 decades in the industry.
  • Gibson’s location? “United States.”
  • The odd use of a first name as a last name, one that doesn’t match Gibson’s perceived sex.
  • The request to contact someone else, not Gibson.
  • The email address of Gibson’s contact? gregory.hopkins@allegisgroupjobs.com. The real domain is allegisgroup, not allegisgroupjobs.
Um…

Don’t they even try any more?

You don’t need 30 years of identity experience to recognize employment fraud when you see it.
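
And if you did want to automate that last red flag, a lookalike-domain check takes only a few lines. The allowlist below is my own example, not an authoritative list of Allegis Group domains.

```python
# Minimal sketch: flag contact email domains that merely *resemble* a
# legitimate corporate domain. The allowlist is my example, not an
# authoritative registry of Allegis Group domains.
KNOWN_GOOD_DOMAINS = {"allegisgroup.com"}

def domain_of(email: str) -> str:
    """Extract the lowercased domain portion of an email address."""
    return email.rsplit("@", 1)[-1].lower()

def is_lookalike(email: str) -> bool:
    """True when the domain is not on the allowlist but contains (or is
    contained in) a known-good domain's name, e.g. 'allegisgroupjobs.com'."""
    domain = domain_of(email)
    if domain in KNOWN_GOOD_DOMAINS:
        return False
    return any(
        good.split(".")[0] in domain or domain.split(".")[0] in good
        for good in KNOWN_GOOD_DOMAINS
    )

print(is_lookalike("gregory.hopkins@allegisgroupjobs.com"))  # True
print(is_lookalike("someone@allegisgroup.com"))              # False
```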