I just posted the latest edition of my LinkedIn newsletter, “The Wildebeest Speaks.” It examines the history of deepfakes / likenesses, including the Émile Cohl animated cartoon Fantasmagorie, my own deepfake / likeness creations, and the deepfake / likeness of Sam Altman committing a burglary, authorized by Altman himself. Unfortunately, some deepfakes are NOT authorized, and that’s a problem.
A recent PYMNTS article entitled “AI Voices Are Now Indistinguishable From Humans, Experts Say” includes the following about voice deepfakes:
“A new PLoS One study found that artificial intelligence has reached a point where cloned voices are indistinguishable from genuine ones. In the experiment, participants were asked to tell human voices from AI-generated ones across 80 samples. Cloned voices were mistaken for real in 58% of cases, while human voices were correctly identified only 62% of the time.”
What the study didn’t measure
Since you already read the title of this post, you know that I’m concentrating on the word “participants.”
The PLoS One experiment used PEOPLE to try to distinguish real voices from deepfake ones.
And people aren’t all that accurate. Never have been.
Before you decide that people can’t detect fake voices…
…why not have an ALGORITHM give it a try?
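What might that look like? Here is a minimal sketch of the idea, assuming you have a corpus of labeled human and cloned clips; the file names, features, and classifier are my own illustrative choices, not the methodology of the PLoS One study or of any commercial detector.

```python
# A toy real-vs-cloned voice classifier. File names, features, and the model
# are illustrative only; this is NOT the PLoS One methodology, just a sketch
# of how an algorithm (rather than a human listener) could attempt the task.
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def voice_features(path: str) -> np.ndarray:
    """Summarize a clip as the mean and std of its MFCCs."""
    audio, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=20)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# (wav path, label) pairs, where 1 = AI-cloned and 0 = human.
# The file names are placeholders; you would supply your own labeled corpus.
samples = [
    ("clips/human_001.wav", 0),
    ("clips/cloned_001.wav", 1),
    # ... many more labeled clips ...
]

X = np.array([voice_features(path) for path, _ in samples])
y = np.array([label for _, label in samples])

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("Accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

Real detection products use far richer features and trained deep models, but the point stands: this is a job you can hand to software rather than to listeners.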
What the study did measure
But to be fair, that wasn’t the goal of the PLoS One study, which specifically focused on human perception.
“Recently, an intriguing effect was reported in AI-generated faces, where such face images were perceived as more human than images of real humans – a “hyperrealism effect.” Here, we tested whether a “hyperrealism effect” also exists for AI-generated voices.”
For the record, the researchers did NOT discover a hyperrealism effect in AI-generated voices.
Do you offer a solution?
But if future deepfake voices sound realer than real, then we will REALLY need the algorithms to spot the fakes.
And if your company has a voice deepfake detection solution, I could have been talking about it right now in this post.
I was up bright and early to attend a Liminal Demo Day, and the second presenter was Proof. Lauren Furey and Kurt Ernst presented, with Lauren assuming the role of the agent verifying Kurt’s identity.
The mechanism to verify the identity was a video session. In this case, Agent Lauren used three methods:
Examining Kurt’s ID, which he presented on screen.
Examining Kurt’s face (selfie).
Examining a credit card presented by Kurt.
One important note: Agent Lauren had complete control over whether to verify Kurt’s identity or not. She was not a mere “human in the loop.” Even if Kurt passed all the checks, Lauren could fail the identity check if she suspected something was wrong (such as a potential fraudster prompting Kurt what to do).
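To make that division of labor concrete, here is a small sketch of how an agent-supervised decision could be modeled. The check names mirror the demo (ID document, selfie, credit card), but the code itself is my own illustration, not Proof’s implementation.

```python
# Illustrative model of an agent-supervised identity verification session.
# My own sketch, not Proof's implementation: the point is that the human
# agent's judgment can override passing automated checks.
from dataclasses import dataclass, field

@dataclass
class VerificationSession:
    automated_checks: dict = field(default_factory=dict)
    agent_approved: bool = False
    agent_notes: str = ""

    def record_check(self, name: str, passed: bool) -> None:
        self.automated_checks[name] = passed

    def decide(self) -> bool:
        # All automated checks must pass AND the agent must approve.
        # Even a "perfect" automated result fails if the agent suspects coaching.
        return all(self.automated_checks.values()) and self.agent_approved

session = VerificationSession()
session.record_check("id_document", True)   # Kurt's ID shown on screen
session.record_check("selfie_match", True)  # face compared to the ID
session.record_check("credit_card", True)   # card presented on camera
session.agent_approved = False              # Lauren suspects off-camera prompting
session.agent_notes = "Applicant appeared to be coached; declined."
print("Identity verified:", session.decide())  # -> False
```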
“Another question for Proof: does your solution meet the requirements for supervised remote identity proofing (IAL3)?”
Lauren responded in the affirmative.
It’s important to note that Proof’s face authentication solution incorporates liveness detection, so there is reasonable assurance that the person’s face is not a spoof or a synthetic identity.
I hate to use the overused t word (trust), but in this case it’s justified.
“Scammers are aware that people are more likely to open and read a text message rather than an email. The open rates for text messages are more than 90% while the open rates for emails is less than 30%. In addition, many email providers have filters that are able to identify and filter out phishing emails while the filtering capabilities on text messages is much less. Additionally, people tend to trust text messages more than emails. Text message also may prompt a quick response before the targeted victim can critically consider the legitimacy of the text message.”
Who can provide remote supervised identity proofing?
“NextgenID Trusted Services Solution provides Supervised Remote Identity Proofing identity stations to collect, review, validate, proof, and package IAL-3 identity evidence and enrollment data for CSPs operating at IAL-3.”
And there are others who can provide the equivalent of IAL3, as we will see later.
How do you supervise a remote identity proofing session?
“The camera(s) a CSP [Credential Service Provider] employs to monitor the actions taken by a remote applicant during the identity proofing session should be positioned in such a way that the upper body, hands, and face of the applicant are visible at all times.”
But that doesn’t matter to me right now. What matters to me is WHEN we need remote identity proofing sessions.
Governments aren’t the only entities that need to definitively know identities in critically important situations.
What about banks and other financial institutions, which are required by law to know their customers?
Consider how things worked when one of my Bredemarket clients used to pay me by paper check. Rather than go to the bank and deposit it at a teller window (in person) or at an ATM (remote supervised), I would deposit the check with my smartphone app (remote unsupervised).
Now the bank assumed a level of risk by doing this, especially since the deposited check would not be in the bank’s physical possession after the deposit was completed.
But guess what? The risk was acceptable for my transactions. I’m disclosing Bredemarket company secrets, but that client never wrote me a million dollar check. Actually, none of my clients has ever written me a million dollar check. (Perhaps I should raise my rates. It’s been a while. If I charge an hourly rate of $100,000, I will get those million dollar checks!)
So how do financial institutions implement the two types of IAL3?
“If you need to initiate a funds transfer payment, an authorized signer for your account may also initiate funds (wire) transfers at any Chase branch.”
Note the use of the word “may.” However, if you don’t want to go to a branch to make a wire transfer, you have to set up an alternate method in advance.
Remote supervised
What about remote supervised transactions at financial institutions, where you are not physically present, but someone at the bank remotely sees you and everything you do? Every breath you take? And every move you make? Etcetera.
It turns out that identity verification providers support video sessions between businesses (such as banks) and their customers. For example, Incode’s Developer Hub includes several references to a video conference capability.
To my knowledge, Incode has not publicly stated whether any of its financial identity customers are employing this video conference capability, but it’s certainly possible. And when done correctly, this can support the IAL3 specifications.
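I’m not going to reproduce anyone’s documented API here, but as a generic sketch of what requesting a supervised video session might look like from a bank’s back end, with the endpoint, field names, and assurance tag all hypothetical:

```python
# Entirely hypothetical endpoint and payload names: a generic sketch of how a
# bank's back end might request a supervised video verification session from
# an identity verification provider. This is NOT Incode's documented API.
import requests

API_BASE = "https://idv.example.com/v1"   # hypothetical provider URL
API_KEY = "replace-with-your-api-key"     # hypothetical credential

def start_supervised_session(customer_id: str, transaction_ref: str) -> dict:
    """Request a video session in which a trained agent supervises the applicant."""
    response = requests.post(
        f"{API_BASE}/video-sessions",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "customerId": customer_id,
            "transactionRef": transaction_ref,
            "supervision": "live_agent",    # an agent watches the entire session
            "targetAssuranceLevel": "IAL3", # the level the session is meant to support
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()  # e.g. {"sessionId": "...", "joinUrl": "..."}
```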
Why use IAL3 for financial transactions
For high-risk transactions, such as those involving high values or particular countries, IAL3 protects both the financial institutions and their customers. It lessens the fraud risk and the potential harm to both parties.
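One way to express that risk-based thinking is a step-up rule that only demands IAL3 when the transaction warrants it. The dollar threshold and country list below are invented for illustration; a real institution would set them from its own risk model.

```python
# Illustrative risk-tiering rule for choosing an identity assurance level.
# The threshold and the high-risk country list are invented for this sketch.
HIGH_RISK_COUNTRIES = {"XX", "YY"}     # placeholder ISO country codes
HIGH_VALUE_THRESHOLD_USD = 50_000      # placeholder step-up threshold

def required_ial(amount_usd: float, destination_country: str) -> int:
    """Return the minimum IAL to require for a funds transfer."""
    if amount_usd >= HIGH_VALUE_THRESHOLD_USD or destination_country in HIGH_RISK_COUNTRIES:
        return 3   # supervised (in-person or remote supervised) proofing
    return 2       # standard evidence-based remote proofing

print(required_ial(1_000_000, "US"))  # -> 3: a million dollar wire gets IAL3
print(required_ial(500, "US"))        # -> 2: routine transfer
```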
Some customers may see IAL3 as an unnecessary bureaucratic hurdle…but they would feel differently if THEY were the ones getting ripped off.
This is why both financial institutions and identity verification vendors need to explain the benefits of IAL3 procedures for riskier transactions. And do it in such a way that the end customers DEMAND IAL3.
To create the content to influence customer perception, you need to answer the critically important questions, including why, how, and benefits. (There are others.)
And if your firm needs help creating that content, Underdog is here.
Visit https://bredemarket.com/mark/ and schedule a time to talk to me—for free. I won’t remotely verify your identity during our videoconference, but I will help you plan the content your firm needs.
We’re all familiar with the morphing of faces from subject 1 to subject 2, in which there is an intermediate subject 1.5 that combines the features of both of them. But did you know that this simple trick can form the basis for fraudulent activity?
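Before we get to the fraud, consider how simple the underlying trick is. At its crudest, the “intermediate subject 1.5” is just a weighted average of two aligned face images. Here is a minimal sketch; real morphing tools also warp facial landmarks, which this skips, and the file names are placeholders.

```python
# Crude illustration of the "subject 1.5" idea: a 50/50 pixel blend of two
# pre-aligned face images. Real morphing tools also warp facial landmarks;
# this sketch only does the cross-dissolve step. File names are placeholders.
from PIL import Image

subject_1 = Image.open("subject_1.png").convert("RGB")
subject_2 = Image.open("subject_2.png").convert("RGB").resize(subject_1.size)

# alpha = 0.5 gives the halfway "subject 1.5"; other values slide toward either face.
subject_1_5 = Image.blend(subject_1, subject_2, alpha=0.5)
subject_1_5.save("subject_1_5.png")
```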
Back in the 20th century, morphing was primarily used for entertainment purposes. Nothing that would make you cry, even though there were shades of gray in the black or white representations of the morphed people.
Godley and Creme, “Cry.”
Michael Jackson, “Black or White.” (The full version with the grabbing.) The morphing begins about 5 1/2 minutes into the video.
But Godley, Creme, and Jackson weren’t trying to commit fraud. As I’ve previously noted, a morphed picture can be used for fraudulent activity. Let me illustrate this with a visual example. Take a look at the guy below.
From NISTIR 8584.
Does this guy look familiar to you? Some of you may think he kinda sorta looks like one person, while others may think he kinda sorta looks like a different person.
The truth is, the person above does not exist. This is actually a face morph of two different people.
From NISTIR 8584.
Now imagine a scenario in which a security camera is monitoring the entrance to the Bush ranch in Crawford, Texas. But instead of having Bush’s facial image in the database, someone has tampered with the database and inserted the “Obushama” image instead…and that image is similar enough to Barack Obama to allow Obama to fraudulently enter Bush’s ranch.
Or alternatively, the “Obushama” image is used to create a new synthetic identity, unconnected to either of the two real people.
But what if you could detect that a particular facial image is not a true image of a person, but some type of morph attempt? NIST has a report on this:
“To address this issue, the National Institute of Standards and Technology (NIST) has released guidelines that can help organizations deploy and use modern detection methods designed to catch morph attacks before they succeed.”
The report, “NIST Interagency Report NISTIR 8584, Face Analysis Technology Evaluation (FATE) MORPH Part 4B: Considerations for Implementing Morph Detection in Operations,” is available in PDF form at https://doi.org/10.6028/NIST.IR.8584.
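The NIST report covers operational considerations rather than prescribing a single algorithm. As a very rough sketch of one common family of approaches (differential morph detection, in which a trusted live capture is compared against the suspect enrolled photo), with the embedding function and thresholds purely hypothetical:

```python
# Very rough sketch of differential morph attack detection: compare a trusted
# live capture against the suspect enrolled image. The embedding function and
# decision thresholds are hypothetical; production systems use trained
# morph-detection models of the kind evaluated in NIST FATE MORPH.
import numpy as np

def face_embedding(image_path: str) -> np.ndarray:
    """Placeholder for a real face recognition embedding (e.g., a 512-d vector)."""
    raise NotImplementedError("plug in your face recognition model here")

def morph_suspicion(enrolled_path: str, live_path: str,
                    match_threshold: float = 0.60,
                    strong_match: float = 0.85) -> str:
    enrolled = face_embedding(enrolled_path)
    live = face_embedding(live_path)
    similarity = float(np.dot(enrolled, live) /
                       (np.linalg.norm(enrolled) * np.linalg.norm(live)))
    if similarity >= strong_match:
        return "likely genuine"           # enrolled photo strongly matches the live face
    if similarity >= match_threshold:
        return "review: possible morph"   # matches, but weakly -- a morph can do this
    return "no match"
```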
And a personal aside to anyone who worked for Safran in the early 2010s: we’re talking about MORPH detection, not MORPHO detection. I kept on mistyping the name as I wrote this.
It seems that some so-called “businesses” are using an EIN as a facade for illegal activity…and insufficient identity assurance is preventing the fraudsters from being caught.
Obtaining Employer Identification Numbers to commit tax fraud
What is an EIN? In the same way that U.S. citizens have Social Security Numbers, U.S. businesses have Employer Identification Numbers. It’s not a rigorous process to get an EIN; heck, Bredemarket has one.
But maybe it needs to be a little more rigorous, according to TIGTA.
“EINs are targeted and used by unscrupulous individuals to commit fraud. In July 2021, we reported that there were hundreds of potentially fraudulent claims for employer tax credits….Further, in April 2024, our Office of Investigations announced that it helped prevent $3.5 billion from potentially being paid to fraudsters. Our special agents identified a scheme where individuals obtained an EIN for the sole purpose of filing business tax returns to improperly claim pandemic-related tax credits.”
Yes, that’s $3.5 billion with a B. That’s a lot of fraud.
Perhaps the pandemic has come and gone, but the temptation to file fraudulent business tax returns with an improperly-obtained EIN continues.
Enter the Identity Assurance Level
So how does the Internal Revenue Service (IRS) gatekeep the assignment of EINs?
By specifying an Identity Assurance Level (IAL) before assigning an EIN.
Specifically, Identity Assurance Level 1.
“In December 2024, the IRS completed the annual reassessment of the Mod IEIN system. The IRS rated the identity proofing and authentication requirements at Level 1 (the same level as the initial assessment in January 2020).”
IAL1 doesn’t “assure” anything…except continued tax fraud
If you’ve read the Bredemarket blog or other biometric publications, you know that IAL1 is, if I may use a technical term, a “nothingburger.” The National Institute of Standards and Technology (NIST) says this about IAL1:
“There is no requirement to link the applicant to a specific real-life identity. Any attributes provided in conjunction with the subject’s activities are self-asserted or should be treated as self-asserted (including attributes a CSP asserts to an RP). Self-asserted attributes are neither validated nor verified.”
If that isn’t a shady way to identify a business, I don’t know what is.
So I agree with TIGTA’s assertion that Identity Assurance Level 2, with actual evidence of a real-world identity, should be the minimum.
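To make that recommendation concrete, here is a sketch of what an IAL-aware gate in front of EIN issuance could look like. The assurance levels follow NIST SP 800-63A; the applicant structure and the gate itself are my own invention.

```python
# Hypothetical gate illustrating TIGTA's recommendation: don't issue an EIN on
# self-asserted (IAL1) data alone. Enum values follow NIST SP 800-63A's
# assurance levels; the applicant structure and the gate are my own sketch.
from dataclasses import dataclass
from enum import IntEnum

class IAL(IntEnum):
    IAL1 = 1   # self-asserted attributes, neither validated nor verified
    IAL2 = 2   # remote or in-person proofing against identity evidence
    IAL3 = 3   # supervised (in-person or supervised remote) proofing

@dataclass
class EINApplicant:
    responsible_party: str
    proofed_at: IAL

def may_issue_ein(applicant: EINApplicant, minimum: IAL = IAL.IAL2) -> bool:
    """Only issue an EIN if the responsible party was proofed at or above the minimum."""
    return applicant.proofed_at >= minimum

print(may_issue_ein(EINApplicant("J. Doe", IAL.IAL1)))  # -> False: self-asserted only
print(may_issue_ein(EINApplicant("J. Doe", IAL.IAL2)))  # -> True
```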
Does your firm offer an IAL2/IAL3 product?
And if your identity/biometric firm offers a product that conforms to IAL2 or IAL3, and you need assistance creating product marketing content, talk to Bredemarket.