CBS News recently reported on the attempts of Meta and others to remove advertisements for “nudify” apps from their platforms. The intent of these apps is to take pictures of existing people—for example, “Scarlett Johansson and Anne Hathaway”—and create deepfake nudes based on the source material.
Two versions of “what does this app do”
But the apps may present their purposes differently when applying for Apple App Store and Google Play Store approval.
“The problem with apps is that they have this dual-use front where they present on the app store as a fun way to face swap, but then they are marketing on Meta as their primary purpose being nudification. So when these apps come up for review on the Apple or Google store, they don’t necessarily have the wherewithal to ban them.”
How old are you? If you say so
And there’s another problem. While the apps are marketed to adult men, their users extend beyond that.
“CBS News’ 60 Minutes reported on the lack of age verification on one of the most popular sites using artificial intelligence to generate fake nude photos of real people.
“Despite visitors being told that they must be 18 or older to use the site…60 Minutes was able to immediately gain access to uploading photos once the user clicked “accept” on the age warning prompt, with no other age verification necessary.”
There is a lot of discussion about identity verification for people working in certain jobs: police officers, teachers, financial professionals, and the like.
With one exception.
One job that isn’t frequently discussed in the identity verification world is that of a sex worker. That’s primarily because sex workers usually don’t undergo identity verification for employment; when they encounter identity checks, it’s usually in criminal proceedings. Nevada’s legal brothels are the exception.
Applicants are fingerprinted and are also required to submit a recent photo.
Applicants must provide their birth name and all subsequent “names or aliases used.”
Three years of residence addresses and employment information.
The applicant’s criminal record, “except minor traffic violations.”
“A waiver of release of medical information,” since the nature of the work involves the possibility of transmission of communicable diseases. And you thought being a nuclear power plant worker was dangerous!
Presumably the fingerprints are searched against law enforcement databases, just like the fingerprints of school teachers and workers in the other, newer regulated professions.
Why?
“The chief of police shall investigate, through all available means, the accuracy of all information supplied by the prostitute on the registration form.”
Included in the investigation:
Controlled substance criminal convictions.
Felony convictions.
Embezzlement, theft, or shoplifting convictions.
Age verification; you have to be 21.
As you can see, the identity verification requirements for sex workers are adapted to meet the needs of that particular position.
But…it takes two to tango.
Brothel clients need to be at least 18 years old.
But I don’t know if Nevada requires client age verification, or if age estimation is acceptable.
Why do we have both age verification and age estimation? And how do we overcome the restrictions that force us to choose one over the other?
Why age verification?
As I’ve mentioned before, there are certain products and services that are ONLY provided to people who have attained a certain age. These include alcohol, tobacco, firearms, cannabis, driver’s licenses, gambling, “mature” adult content, and car rentals.
There’s also social media access, which I’ll get to in a minute.
So how do you know that someone purchasing one of these controlled products or services has attained the required age?
One way is to ask the purchaser to provide their government identification (driver’s license, passport, whatever) with their birthdate to prove their age.
This is known as age verification. Provided that the ID was issued by a legitimate government authority, and provided that the ID is not fraudulent, this ID provides ironclad assurance that you are 18 years old or 21 years old or whatever the requirement is.
But let’s return to social media.
Why age estimation?
If you’re Australian, sit down for a moment before I share the following fact.
There are jurisdictions in the world that allow kids as young as 13 years old to access social media.
However, these wild uncontrolled jurisdictions face a problem when trying to determine the ages of their social media users. As I noted almost two years ago:
How many 13-year-olds do you know who have driver’s licenses? Probably none.
How many 13-year-olds do you know who have government-issued REAL IDs? Probably very few.
How many 13-year-olds do you know who have passports? Maybe a few more (especially after 9/11), but not that many.
So how can you figure out whether Bobby or Julie is old enough to open that social media account?
One way to do so is by using a technique called age estimation, which looks at facial features and classifies people by their estimated ages.
The only problem is that while age verification is accurate (assuming the ID is legitimate), age estimation is not:
So if a minor does not have a government ID, and the social media firm has to use age estimation to determine a minor’s age for purposes of the New York Child Data Protection Act, the following two scenarios are possible:
An 11-year-old may be incorrectly allowed to give informed consent for purposes of the Act.
A 14-year-old may be incorrectly denied the ability to give informed consent for purposes of the Act.
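The two scenarios above can be sketched in a few lines of code. This is a minimal illustration of the logic, not any vendor’s algorithm; the two-year estimation errors are assumed for the sake of the example, and the `passes_age_gate` function is hypothetical.

```python
# Illustrative sketch: an age gate can only act on the ESTIMATED age,
# not the true age, so estimation error near the threshold misclassifies.

THRESHOLD = 13  # minimum age in the hypothetical policy


def passes_age_gate(estimated_age: float) -> bool:
    """The gate sees only the estimate produced by the face model."""
    return estimated_age >= THRESHOLD


# Scenario 1: an 11-year-old whose face is estimated two years too old.
print(passes_age_gate(13.0))  # True — incorrectly allowed

# Scenario 2: a 14-year-old whose face is estimated two years too young.
print(passes_age_gate(12.0))  # False — incorrectly denied
```

The point: no matter how the threshold is tuned, an estimator with a nonzero error margin will produce both kinds of mistake for true ages close to the cutoff.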
“OK,” you may say, “but so what? Anybody can print a card that says anything they want, like Alabama’s John Wahl did. Why should anyone accept the CitizenCard?”
CitizenCard is the only non-profit, UK-wide issuer of police-approved proof of age & ID cards….
CitizenCard was founded in 1999 and is governed by representatives from the National Lottery operator Allwyn, the Co-op, Ladbrokes & Coral owner Entain and the TMA.
CitizenCard…is the longest-established and the largest issuer of Home Office-endorsed PASS-hologram ID cards in the UK with more than 2.5 million issued.
[CitizenCard] is audited by members of the Age Check Certification Scheme on behalf of PASS to ensure that the highest standards of UK data protection, privacy and security are upheld and rigorous identity verification is carried out.
So one could argue that you don’t need age estimation in the UK, because there is a well-established way to VERIFY ages in the UK.
However, there are other benefits to age estimation, including the fact that estimation is frictionless and doesn’t require you to pull out a card (or a smartphone) at all.
I was encouraged to check out k-ID, a firm that tracks age compliance laws on a global basis. It also lets companies ensure that their users comply with these laws.
“Age assurance refers to a range of methods and technologies used to estimate, verify or confirm someone’s age. Different countries allow different methods, but here are a few commonly used by k-ID…”
The following methods are then listed:
Face Scan: Your age is estimated by completing a video selfie
ID Scan: Your age is confirmed by scanning a government-issued ID
Credit Card: Confirm you’re an adult by using a valid credit card
Note that k-ID’s age assurance methods include age estimation (via your face), age verification (via your government ID), and age who-knows-what (IMHO, possession of a credit card proves nothing, especially if it’s someone else’s).
But if k-ID truly applies the appropriate laws to age assurance, it’s a step in the right direction. Because keeping track of laws in hundreds of thousands of jurisdictions can be a…um…challenge.
When marketing digital identity products secured by biometrics, emphasize that they are MORE secure and more private than their physical counterparts.
When you hand your physical driver’s license over to a sleazy bartender, they find out EVERYTHING about you, including your name, your birthdate, your driver’s license number, and even where you live.
When you use a digital mobile driver’s license, bartenders ONLY learn what they NEED to know—that you are over 21.
The Prism Project’s home page at https://www.the-prism-project.com/, illustrating the Biometric Digital Identity Prism as of March 2024. From Acuity Market Intelligence and FindBiometrics.
With over 100 firms in the biometric industry, their offerings are going to naturally differ—even if all the firms are TRYING to copy each other and offer “me too” solutions.
I’ve worked for over a dozen biometric firms as an employee or independent contractor, and I’ve analyzed over 80 biometric firms in competitive intelligence exercises, so I’m well aware of the vast implementation differences between the biometric offerings.
Some of the implementation differences provoke vehement disagreements between biometric firms regarding which choice is correct. Yes, we FIGHT.
Let’s look at three (out of many) of these implementation differences and see how they affect YOUR company’s content marketing efforts—whether you’re engaging in identity blog post writing, or some other content marketing activity.
The three biometric implementation choices
Firms that develop biometric solutions make (or should make) the following choices when implementing their solutions.
Presentation attack detection. Assuming the solution incorporates presentation attack detection (liveness detection), or a way of detecting whether the presented biometric is real or a spoof, the firm must decide whether to use active or passive liveness detection.
Age assurance. When choosing age assurance solutions that determine whether a person is old enough to access a product or service, the firm must decide whether or not age estimation is acceptable.
Biometric modality. Finally, the firm must choose which biometric modalities to support. While there are a number of modality wars involving all the biometric modalities, this post is going to limit itself to the question of whether or not voice biometrics are acceptable.
I will address each of these questions in turn, highlighting the pros and cons of each implementation choice. After that, we’ll see how this affects your firm’s content marketing.
(I)nstead of capturing a true biometric from a person, the biometric sensor is fooled into capturing a fake biometric: an artificial finger, a face with a mask on it, or a face on a video screen (rather than a face of a live person).
This tomfoolery is called a “presentation attack” (because you’re attacking security with a fake presentation).
And an organization called iBeta is one of the testing facilities authorized to test in accordance with the standard and to determine whether a biometric reader can detect the “liveness” of a biometric sample.
(Friends, I’m not going to get into passive liveness and active liveness. That’s best saved for another day.)
Now I could cite a firm using active liveness detection to say why it’s great, or I could cite a firm using passive liveness detection to say why it’s great. But perhaps the most balanced assessment comes from facia, which offers both types of liveness detection. How does facia define the two types of liveness detection?
Active liveness detection, as the name suggests, requires some sort of activity from the user. If a system is unable to detect liveness, it will ask the user to perform some specific actions such as nodding, blinking or any other facial movement. This allows the system to detect natural movements and separate it from a system trying to mimic a human being….
Passive liveness detection operates discreetly in the background, requiring no explicit action from the user. The system’s artificial intelligence continuously analyses facial movements, depth, texture, and other biometric indicators to detect an individual’s liveness.
Pros and cons
Briefly, the pros and cons of the two methods are as follows:
While active liveness detection offers robust protection, requires clear consent, and acts as a deterrent, it is hard to use, complex, and slow.
Passive liveness detection offers an enhanced user experience via ease of use and speed and is easier to integrate with other solutions, but it raises privacy concerns (passive liveness detection can be implemented without the user’s knowledge) and may not be suitable for high-risk situations.
So in truth the choice is up to each firm. I’ve worked with firms that used both liveness detection methods, and while I’ve spent most of my time with passive implementations, the active ones can work also.
A perfect wishy-washy statement that will get BOTH sides angry at me. (Except perhaps for companies like facia that use both.)
If you need to know a person’s age, you can ask them. Because people never lie.
Well, maybe they do. There are two better age assurance methods:
Age verification, where you obtain a person’s government-issued identity document with a confirmed birthdate, confirm that the identity document truly belongs to the person, and then simply check the date of birth on the identity document and determine whether the person is old enough to access the product or service.
Age estimation, where you don’t use a government-issued identity document and instead examine the face and estimate the person’s age.
I changed my mind on age estimation
I’ve gone back and forth on this. As I previously mentioned, my employment history includes time with a firm that produces driver’s licenses for the majority of U.S. states. And back when that firm was providing my paycheck, I was financially incentivized to champion age verification based upon the driver’s licenses that my company (or occasionally some inferior company) produced.
But as age assurance applications moved into other areas such as social media use, a problem arose: 13-year-olds usually don’t have government IDs. A few of them may have passports or other government documents, but virtually none of them have driver’s licenses.
But does age estimation work? I’m not sure if ANYONE has posted a non-biased view, so I’ll try to do so myself.
The pros of age estimation include its applicability to all ages including young people, its protection of privacy since it requires no information about the individual’s identity, and its ease of use since you don’t have to dig for your physical driver’s license or your mobile driver’s license—your face is already there.
The huge con of age estimation is that it is by definition an estimate. If I show a bartender my driver’s license before buying a beer, they will know whether I am 20 years and 364 days old and ineligible to purchase alcohol, or whether I am 21 years and 0 days old and eligible. Estimates aren’t that precise.
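The bartender’s day-precise check is easy to show in code. This sketch uses only the Python standard library; the dates are invented for the example, but the completed-years arithmetic is the standard way to compute age from a date of birth.

```python
# Why age VERIFICATION is day-precise: computing exact age from the
# date of birth printed on a government ID.
from datetime import date


def age_in_years(dob: date, today: date) -> int:
    """Completed years; subtract one if this year's birthday hasn't arrived yet."""
    return today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))


today = date(2024, 6, 15)

# 21st birthday is tomorrow: still 20, no beer.
print(age_in_years(date(2003, 6, 16), today))  # 20

# 21 years and 0 days old: eligible.
print(age_in_years(date(2003, 6, 15), today))  # 21
```

An estimator that is off by even a year in either direction cannot make this one-day distinction, which is exactly the “huge con” described above.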
Fingerprints, palm prints, faces, irises, and everything up to gait. (And behavioral biometrics.) There are a lot of biometric modalities out there, and one that has been around for years is the voice biometric.
I’ve discussed this topic before, and the partial title of the post (“We’ll Survive Voice Spoofing”) gives away how I feel about the matter, but I’ll present both sides of the issue.
No one can deny that voice spoofing exists and is effective, but many of the examples cited by the popular press are cases in which a HUMAN (rather than an ALGORITHM) was fooled by a deepfake voice. But voice recognition software can also be fooled.
Take a study from the University of Waterloo, summarized here, that proclaims: “Computer scientists at the University of Waterloo have discovered a method of attack that can successfully bypass voice authentication security systems with up to a 99% success rate after only six tries.”
If you re-read that sentence, you will notice that it includes the words “up to.” Those words are significant if you actually read the article.
In a recent test against Amazon Connect’s voice authentication system, they achieved a 10 per cent success rate in one four-second attack, with this rate rising to over 40 per cent in less than thirty seconds. With some of the less sophisticated voice authentication systems they targeted, they achieved a 99 per cent success rate after six attempts.
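The gap between the headline number and the detailed results comes down to how per-attempt success compounds over retries. Here is my own back-of-envelope arithmetic (not the study’s), assuming each attempt succeeds independently with probability p:

```python
# If each spoof attempt succeeds independently with probability p, the
# chance of AT LEAST ONE success in n attempts is 1 - (1 - p)**n.


def cumulative_success(p: float, n: int) -> float:
    """Probability that at least one of n independent attempts succeeds."""
    return 1 - (1 - p) ** n


# A 40% per-attempt rate compounds past 95% within six attempts...
print(round(cumulative_success(0.40, 6), 3))  # 0.953

# ...while a 10% per-attempt rate stays below 50% over the same six tries.
print(round(cumulative_success(0.10, 6), 3))  # 0.469
```

So “99% after six tries” requires a very high per-attempt rate, which the study only achieved against the less sophisticated systems; against Amazon Connect, the compounded numbers are much lower.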
Other voice spoofing studies
Similar to Gender Shades, the University of Waterloo study does not appear to have tested hundreds of voice recognition algorithms. But there are other studies.
The 2021 NIST Speaker Recognition Evaluation (PDF here) tested results from 15 teams, but this test was not specific to spoofing.
A test that was specific to spoofing was the ASVspoof 2021 test with 54 team participants, but the ASVspoof 2021 results are only accessible in abstract form, with no detailed results.
Another test, this one with results, is the SASV2022 challenge, with 23 valid submissions. Here are the top 10 performers and their error rates.
You’ll note that the top performers don’t have error rates anywhere near the University of Waterloo’s 99 percent.
So some firms will argue that voice recognition can be spoofed and thus cannot be trusted, while other firms will argue that the best voice recognition algorithms are rarely fooled.
What does this mean for your company?
Obviously, different firms are going to respond to the three questions above in different ways.
For example, a firm that offers face biometrics but not voice biometrics will convey how voice is not a secure modality due to the ease of spoofing. “Do you want to lose tens of millions of dollars?”
A firm that offers voice biometrics but not face biometrics will emphasize its spoof detection capabilities (and cast shade on face spoofing). “We tested our algorithm against that voice fake that was in the news, and we detected the voice as a deepfake!”
There is no universal truth here, and the message your firm conveys depends upon your firm’s unique characteristics.
And those characteristics can change.
Once when I was working for a client, this firm had made a particular choice with one of these three questions. Therefore, when I was writing for the client, I wrote in a way that argued the client’s position.
After I stopped working for this particular client, the client’s position changed and the firm adopted the opposite view of the question.
Therefore I had to message the client and say, “Hey, remember that piece I wrote for you that said this? Well, you’d better edit it, now that you’ve changed your mind on the question…”
Bear this in mind as you create your blog, white paper, case study, or other identity/biometric content, or have someone like the biometric content marketing expert Bredemarket work with you to create your content. There are people who sincerely hold the opposite belief of your firm…but your firm needs to argue that those people are, um, misinformed.
The so-called experts say that a piece of content should only have one topic and one call to action. Well, it’s Sunday so hopefully the so-called experts are taking a break and will never see the paragraphs below.
This is my endorsement for Cultivated Cool. Its URL is https://cultivated.cool/, which I hope you can remember.
Cultivated Cool self-identifies as “(y)our weekly guide to the newest, coolest products you didn’t know you needed.” Concentrating on the direct-to-consumer (DTC or D2C) space, Cultivated Cool works with companies to “transform (their) email marketing from a chore into a revenue generator.” And to prove the effectiveness of email, it offers its own weekly email that highlights various eye-catching products. But not trendy ones:
Trends come and go but cool never goes out of style.
Bredemarket isn’t a prospect for Cultivated Cool’s first service—my written content creation is not continuously cool. (Although it’s definitely not trendy either). But I am a consumer of Cultivated Cool’s weekly emails, and you should subscribe to its weekly emails also. Enter your email and click the “Subscribe” button on Cultivated Cool’s webpage.
And Cultivated Cool’s weekly emails lead me to the point of this post.
The day that Stella sculpted air
Today’s weekly newsletter issue from Cultivated Cool is entitled “Dig It.” But this has nothing to do with the Beatles or with Abba. Instead it has to do with gardening, and the issue tells the story of Stella, in five parts. The first part is entitled “Snip it in the Bud,” and begins as follows.
Stella felt a shiver go down her spine the first time the pruner blades closed. She wasn’t just cutting branches; she was sculpting air.
The pruner blades featured in Cultivated Cool are sold by Niwaki, an English company that offers Japanese-inspired products. As I type this, Niwaki offers 18 different types of secateurs (pruning shears), including large hand, small hand, right-handed, and left-handed varieties. You won’t get these at your dollar store; prices (excluding VAT) range from US$45.50 to US$280.50 (Tobisho Hiryu Secateurs).
But regardless of price, all the secateurs sold by Niwaki have one thing in common: an age restriction on purchases. Not that Niwaki truly enforces this restriction.
Please note: By law, we are not permitted to sell a knife or blade to any person under the age of 18. By placing an order for one of these items you are declaring that you are 18 years of age or over. These items must be used responsibly and appropriately.
I hope you’re sitting down as I reveal this to you: underage people can bypass the age assurance scheme on alcohol websites by inputting any year of birth that they wish. Just like anyone, even a small child, can make any declaration of age that they want, as long as their credit card is valid.
Now I have no idea whether Ofcom’s UK Online Safety Act consultations will eventually govern Niwaki’s sales of adult-controlled physical products. But if Niwaki finds itself under the UK Online Safety Act, or some other act in the United Kingdom or any country where Niwaki conducts business, then a simple assurance that the purchaser is old enough to buy “a knife or blade” will not be sufficient.
Niwaki’s website would then need to adopt some form of age assurance for purchasers, either by using a government-issued identification document (age verification) or examining the face to algorithmically surmise the customer’s age (age estimation).
Age verification. For example, the purchaser would need to provide their government-issued identity document so that the seller can verify the purchaser’s age. Ideally, this would be coupled with live face capture so that the seller can compare the live face to the face on the ID, ensuring that a kid didn’t steal mommy’s or daddy’s driver’s license (licence) or passport.
Age estimation. For example, the purchaser would need to provide their live face so that the seller can estimate the purchaser’s age. In this case (and in the age verification case if a live face is captured), the seller would need to use liveness detection to ensure that the face is truly a live face and is not a presentation attack or other deepfake.
And then the seller would need to explain why it was doing all of this.
How can a company explain its age assurance solution in a way that its prospects will understand…and how can the company reassure its prospects that its age assurance method protects their privacy?
Companies other than identity companies must explain their identity solutions
Which brings me to the TRUE call to action in this post. (Sorry Mark and Lindsey. You’re still cool.)
I’ve stated ad nauseam that identity companies need to explain their identity solutions: why they developed them, how they work, what they do, and several other things.
In the same way, firms that incorporate solutions from identity companies got some splainin’ to do.
This applies to a financial institution that requires customers to use an identity verification solution before opening an account, just like it applies to an online gardening implement website that uses an age assurance method to check the age of pruning shear purchasers.
So how can such companies explain their identity and biometrics features in a way their end customers can understand?
The Georgia bill explicitly mentions Identity Assurance Level 2.
Under the bill, the age verification methods would have to meet or exceed the National Institute of Standards and Technology’s Identity Assurance Level 2 standard.
So if you think you can use Login.gov to access a porn website, think again.
There’s also a mention of mobile driver’s licenses, albeit without a corresponding mention of ISO/IEC 18013-5:2021, the mobile driver’s license standard.
Specifically mentioned in the bill text is “digitized identification cards,” described as “a data file available on a mobile device with connectivity to the internet that contains all of the data elements visible on the face and back of a driver’s license or identification card.”
So digital identity is becoming more important for online access, as long as certain standards are met.
The Digital Trust & Safety Partnership (DTSP) consists of “leading technology companies,” including Apple, Google, Meta (parent of Facebook, Instagram, and WhatsApp), Microsoft (and its LinkedIn subsidiary), TikTok, and others.
DTSP appreciates and shares Ofcom’s view that there is no one-size-fits-all approach to trust and safety and to protecting people online. We agree that size is not the only factor that should be considered, and our assessment methodology, the Safe Framework, uses a tailoring framework that combines objective measures of organizational size and scale for the product or service in scope of assessment, as well as risk factors.
We’ll get to the “Safe Framework” later. DTSP continues:
Overly prescriptive codes may have unintended effects: Although there is significant overlap between the content of the DTSP Best Practices Framework and the proposed Illegal Content Codes of Practice, the level of prescription in the codes, their status as a safe harbor, and the burden of documenting alternative approaches will discourage services from using other measures that might be more effective. Our framework allows companies to use whatever combination of practices most effectively fulfills their overarching commitments to product development, governance, enforcement, improvement, and transparency. This helps ensure that our practices can evolve in the face of new risks and new technologies.
But remember that the UK’s neighbors in the EU recently prescribed that USB-C cables are the way to go. This not only forced DTSP member Apple to abandon the Lightning cable worldwide, but it also affects Google and others, because there will be no efforts to come up with better cables. Who wants to fight the bureaucratic battle with Brussels? Or alternatively we will have the advanced “world” versions of cables and the deprecated “EU” standards-compliant cables.
So forget Ofcom’s so-called overbearing approach and just adopt the Safe Framework. Big tech will take care of everything, including all those age assurance issues.
Incorporating each characteristic comes with trade-offs, and there is no one-size-fits-all solution. Highly accurate age assurance methods may depend on collection of new personal data such as facial imagery or government-issued ID. Some methods that may be economical may have the consequence of creating inequities among the user base. And each service and even feature may present a different risk profile for younger users; for example, features that are designed to facilitate users meeting in real life pose a very different set of risks than services that provide access to different types of content….
Instead of a single approach, we acknowledge that appropriate age assurance will vary among services, based on an assessment of the risks and benefits of a given context. A single service may also use different approaches for different aspects or features of the service, taking a multi-layered approach.