The Pros and Cons of Age Estimation

By NikosLikomitros – Own work, CC0, https://commons.wikimedia.org/w/index.php?curid=136366736

I just published the latest edition of “The Wildebeest Speaks,” Bredemarket’s monthly LinkedIn newsletter.

To be honest, “The Pros and Cons of Age Estimation” repurposes some content I’ve already published in the Bredemarket blog, namely:

The net result? An article explaining both the advantages and disadvantages of age estimation.

Take a moment to read the article, published on LinkedIn’s Bredemarket account. And if you’re a LinkedIn member, subscribe to the newsletter.

Reasonable Minds Vehemently Disagree On Three Biometric Implementation Choices

(Part of the biometric product marketing expert series)

There are a LOT of biometric companies out there.

The Prism Project’s home page at https://www.the-prism-project.com/, illustrating the Biometric Digital Identity Prism as of March 2024. From Acuity Market Intelligence and FindBiometrics.

With over 100 firms in the biometric industry, their offerings are going to naturally differ—even if all the firms are TRYING to copy each other and offer “me too” solutions.

Will Ferrell and Chad Smith, or maybe vice versa. Fair use. From https://www.billboard.com/music/music-news/will-ferrell-chad-smith-red-hot-benefit-chili-peppers-6898348/, originally from NBC.

I’ve worked for over a dozen biometric firms as an employee or independent contractor, and I’ve analyzed over 80 biometric firms in competitive intelligence exercises, so I’m well aware of the vast implementation differences between the biometric offerings.

Some of the implementation differences provoke vehement disagreements between biometric firms regarding which choice is correct. Yes, we FIGHT.

MMA stands for Messy Multibiometric Authentication. Public Domain, https://commons.wikimedia.org/w/index.php?curid=607428

Let’s look at three (out of many) of these implementation differences and see how they affect YOUR company’s content marketing efforts—whether you’re engaging in identity blog post writing, or some other content marketing activity.

The three biometric implementation choices

Firms that develop biometric solutions make (or should make) the following choices when implementing their solutions.

  1. Presentation attack detection. Assuming the solution incorporates presentation attack detection (liveness detection), or a way of detecting whether the presented biometric is real or a spoof, the firm must decide whether to use active or passive liveness detection.
  2. Age assurance. When choosing age assurance solutions that determine whether a person is old enough to access a product or service, the firm must decide whether or not age estimation is acceptable.
  3. Biometric modality. Finally, the firm must choose which biometric modalities to support. While there are a number of modality wars involving all the biometric modalities, this post is going to limit itself to the question of whether or not voice biometrics are acceptable.

I will address each of these questions in turn, highlighting the pros and cons of each implementation choice. After that, we’ll see how this affects your firm’s content marketing.

Choice 1: Active or passive liveness detection?

Back in June 2023 I defined what a “presentation attack” is.

(I)nstead of capturing a true biometric from a person, the biometric sensor is fooled into capturing a fake biometric: an artificial finger, a face with a mask on it, or a face on a video screen (rather than a face of a live person).

This tomfoolery is called a “presentation attack” (because you’re attacking security with a fake presentation).

Then I talked about standards and testing.

But the standards folks have developed ISO/IEC 30107-3:2023, Information technology — Biometric presentation attack detection — Part 3: Testing and reporting.

And an organization called iBeta is one of the testing facilities authorized to test in accordance with the standard and to determine whether a biometric reader can detect the “liveness” of a biometric sample.

(Friends, I’m not going to get into passive liveness and active liveness. That’s best saved for another day.)

Well…that day is today.

A balanced assessment

Now I could cite a firm using active liveness detection to say why it’s great, or I could cite a firm using passive liveness detection to say why it’s great. But perhaps the most balanced assessment comes from facia, which offers both types of liveness detection. How does facia define the two types of liveness detection?

Active liveness detection, as the name suggests, requires some sort of activity from the user. If a system is unable to detect liveness, it will ask the user to perform some specific actions such as nodding, blinking or any other facial movement. This allows the system to detect natural movements and separate it from a system trying to mimic a human being….

Passive liveness detection operates discreetly in the background, requiring no explicit action from the user. The system’s artificial intelligence continuously analyses facial movements, depth, texture, and other biometric indicators to detect an individual’s liveness.

Pros and cons

Briefly, the pros and cons of the two methods are as follows:

  • While active liveness detection offers robust protection, requires clear consent, and acts as a deterrent, it is hard to use, complex, and slow.
  • Passive liveness detection offers an enhanced user experience via ease of use and speed and is easier to integrate with other solutions, but it raises privacy concerns (passive liveness detection can be implemented without the user’s knowledge) and may be unsuitable for high-risk situations.
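To make the trade-off concrete, here is a minimal sketch of how the two modes differ in flow. The challenge list, score thresholds, and function names are illustrative assumptions, not any vendor’s actual API.

```python
import random

# Hypothetical sketch contrasting the two liveness-detection styles
# described above. Thresholds and field names are illustrative.

ACTIVE_CHALLENGES = ["blink", "nod", "turn_left", "smile"]

def run_active_check(session):
    """Active liveness: issue a random challenge and verify that the
    user performed it. Robust and consent-based, but adds UX friction."""
    challenge = random.choice(ACTIVE_CHALLENGES)
    return challenge in session["movements"]  # movements observed on camera

def run_passive_check(session):
    """Passive liveness: score background cues (texture, depth) with no
    user action. Fast and frictionless, but can run without the user's
    knowledge -- the privacy concern noted above."""
    return session["texture_score"] > 0.8 and session["depth_score"] > 0.8

def check_liveness(session, mode="passive"):
    if mode == "active":
        return run_active_check(session)
    return run_passive_check(session)

# A live user who performed the requested movement and shows natural cues:
live_session = {"movements": ACTIVE_CHALLENGES,
                "texture_score": 0.95, "depth_score": 0.90}
# A photo held up to the camera: no movement, flat texture and depth.
spoof_session = {"movements": [],
                 "texture_score": 0.30, "depth_score": 0.10}
```

Note how the active path inserts an extra user-facing step (the challenge prompt), which is exactly where the “slow and complex” criticism comes from, while the passive path runs entirely on captured frames.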

So in truth the choice is up to each firm. I’ve worked with firms that used both liveness detection methods, and while I’ve spent most of my time with passive implementations, the active ones can work also.

A perfect wishy-washy statement that will get BOTH sides angry at me. (Except perhaps for companies like facia that use both.)

Choice 2: Age estimation, or no age estimation?

Designed by Freepik.

There are a lot of applications for age assurance, or knowing how old a person is. These include smoking tobacco or marijuana, buying firearms, driving a car, drinking alcohol, gambling, viewing adult content, using social media, or buying garden implements.

If you need to know a person’s age, you can ask them. Because people never lie.

Well, maybe they do. There are two better age assurance methods:

  • Age verification, where you obtain a person’s government-issued identity document with a confirmed birthdate, confirm that the identity document truly belongs to the person, and then simply check the date of birth on the identity document and determine whether the person is old enough to access the product or service.
  • Age estimation, where you don’t use a government-issued identity document and instead examine the face and estimate the person’s age.

I changed my mind on age estimation

I’ve gone back and forth on this. As I previously mentioned, my employment history includes time with a firm that produces driver’s licenses for the majority of U.S. states. And back when that firm was providing my paycheck, I was financially incentivized to champion age verification based upon the driver’s licenses that my company (or occasionally some inferior company) produced.

But as age assurance applications moved into other areas such as social media use, a problem arose: 13-year-olds usually don’t have government IDs. A few may have passports or other government IDs, but essentially none have driver’s licenses.

By Adrian Pingstone – Transferred from en.wikipedia, Public Domain, https://commons.wikimedia.org/w/index.php?curid=112727.

Pros and cons

But does age estimation work? I’m not sure if ANYONE has posted an unbiased view, so I’ll try to do so myself.

  • The pros of age estimation include its applicability to all ages including young people, its protection of privacy since it requires no information about the individual identity, and its ease of use since you don’t have to dig for your physical driver’s license or your mobile driver’s license—your face is already there.
  • The huge con of age estimation is that it is by definition an estimate. If I show a bartender my driver’s license before buying a beer, they will know whether I am 20 years and 364 days old and ineligible to purchase alcohol, or whether I am 21 years and 0 days old and eligible. Estimates aren’t that precise.
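The precision gap in that second bullet can be made concrete. Verification compares an exact birthdate against the threshold; estimation returns a number with an error margin, so borderline cases can only be routed to a fallback. This is a minimal sketch; the 2-year margin is an illustrative assumption, not a benchmark figure.

```python
from datetime import date

LEGAL_AGE = 21

def verify_age(birthdate, on):
    """Age verification: an ID's date of birth gives a day-precise answer."""
    years = on.year - birthdate.year - (
        (on.month, on.day) < (birthdate.month, birthdate.day))
    return years >= LEGAL_AGE

def estimation_decision(estimated_age, margin=2.0):
    """Age estimation: the algorithm returns an estimate with an error
    margin, so near-threshold cases can only be flagged for a fallback
    (such as an ID check), not decided outright."""
    if estimated_age - margin >= LEGAL_AGE:
        return "allow"
    if estimated_age + margin < LEGAL_AGE:
        return "deny"
    return "fall back to an ID check"

# Verification resolves the one-day difference the bartender cares about:
one_day_short = verify_age(date(2003, 5, 2), on=date(2024, 5, 1))  # day before 21st birthday
exactly_21    = verify_age(date(2003, 5, 1), on=date(2024, 5, 1))  # 21st birthday
```

An estimate of 21.5 with a ±2-year margin lands in the ambiguous zone, which is exactly why estimate-only deployments tend to pair with a document-based fallback.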

How precise is age estimation? We’ll find out soon, once NIST releases the results of its Face Analysis Technology Evaluation (FATE) Age Estimation & Verification test. The release of results is expected in early May.

Choice 3: Is voice an acceptable biometric modality?

From Sandeep Kumar, A. Sony, Rahul Hooda, Yashpal Singh, in Journal of Advances and Scholarly Researches in Allied Education | Multidisciplinary Academic Research, “Multimodal Biometric Authentication System for Automatic Certificate Generation.”

Fingerprints, palm prints, faces, irises, and everything up to gait. (And behavioral biometrics.) There are a lot of biometric modalities out there, and one that has been around for years is the voice biometric.

I’ve discussed this topic before, and the partial title of the post (“We’ll Survive Voice Spoofing”) gives away how I feel about the matter, but I’ll present both sides of the issue.

White House photo by Kimberlee Hewitt – whitehouse.gov, President George W. Bush and comedian Steve Bridges, Public Domain, https://commons.wikimedia.org/w/index.php?curid=3052515

No one can deny that voice spoofing exists and is effective, but many of the examples cited by the popular press are cases in which a HUMAN (rather than an ALGORITHM) was fooled by a deepfake voice. But voice recognition software can also be fooled.

(Incidentally, there is a difference between voice recognition and speech recognition. Voice recognition attempts to determine who a person is. Speech recognition attempts to determine what a person says.)

Finally facing my Waterloo

Take a study from the University of Waterloo, summarized here, that proclaims: “Computer scientists at the University of Waterloo have discovered a method of attack that can successfully bypass voice authentication security systems with up to a 99% success rate after only six tries.”

If you re-read that sentence, you will notice that it includes the words “up to.” Those words are significant if you actually read the article.

In a recent test against Amazon Connect’s voice authentication system, they achieved a 10 per cent success rate in one four-second attack, with this rate rising to over 40 per cent in less than thirty seconds. With some of the less sophisticated voice authentication systems they targeted, they achieved a 99 per cent success rate after six attempts.

Other voice spoofing studies

Similar to Gender Shades, the University of Waterloo study does not appear to have tested hundreds of voice recognition algorithms. But there are other studies.

  • The 2021 NIST Speaker Recognition Evaluation (PDF here) tested results from 15 teams, but this test was not specific to spoofing.
  • A test that was specific to spoofing was the ASVspoof 2021 test with 54 team participants, but the ASVspoof 2021 results are only accessible in abstract form, with no detailed results.
  • Another test, this one with results, is the SASV2022 challenge, with 23 valid submissions. Here are the top 10 performers and their error rates.

You’ll note that the top performers don’t have error rates anywhere near the University of Waterloo’s 99 percent.

So some firms will argue that voice recognition can be spoofed and thus cannot be trusted, while other firms will argue that the best voice recognition algorithms are rarely fooled.

What does this mean for your company?

Obviously, different firms are going to respond to the three questions above in different ways.

  • For example, a firm that offers face biometrics but not voice biometrics will convey how voice is not a secure modality due to the ease of spoofing. “Do you want to lose tens of millions of dollars?”
  • A firm that offers voice biometrics but not face biometrics will emphasize its spoof detection capabilities (and cast shade on face spoofing). “We tested our algorithm against that voice fake that was in the news, and we detected the voice as a deepfake!”

There is no universal truth here, and the message your firm conveys depends upon your firm’s unique characteristics.

And those characteristics can change.

  • Once when I was working for a client, this firm had made a particular choice with one of these three questions. Therefore, when I was writing for the client, I wrote in a way that argued the client’s position.
  • After I stopped working for this particular client, the client’s position changed and the firm adopted the opposite view of the question.
  • Therefore I had to message the client and say, “Hey, remember that piece I wrote for you that said this? Well, you’d better edit it, now that you’ve changed your mind on the question…”

Bear this in mind as you create your blog, white paper, case study, or other identity/biometric content, or have someone like the biometric content marketing expert Bredemarket work with you to create your content. There are people who sincerely hold the opposite belief of your firm…but your firm needs to argue that those people are, um, misinformed.

And as a postscript I’ll provide two videos that feature voices. The first is for those who detected my reference to the ABBA song “Waterloo.”

From https://www.youtube.com/watch?v=4XJBNJ2wq0Y.

The second features the late Steve Bridges as President George W. Bush at the White House Correspondents Dinner.

From https://www.youtube.com/watch?v=u5DpKjlgoP4.

Does Your Gardening Implement Company Require Age Assurance?

Age assurance shows that a customer meets the minimum age for buying a product or service.

I thought I knew every possible use case for age assurance—smoking tobacco or marijuana, buying firearms, driving a car, drinking alcohol, gambling, viewing adult content, or using social media.

But after investigating a product featured in Cultivated Cool, I realized that I had missed one use case. Turns out that there’s another type of company that needs age assurance…and a way to explain the age assurance method the company adopts.

Off on a tangent: what is Cultivated Cool?

Psst…don’t tell anyone what you’re about to read.

The so-called experts say that a piece of content should only have one topic and one call to action. Well, it’s Sunday so hopefully the so-called experts are taking a break and will never see the paragraphs below.

This is my endorsement for Cultivated Cool. Its URL is https://cultivated.cool/, which I hope you can remember.

Cultivated Cool self-identifies as “(y)our weekly guide to the newest, coolest products you didn’t know you needed.” Concentrating on the direct-to-consumer (DTC or D2C) space, Cultivated Cool works with companies to “transform (their) email marketing from a chore into a revenue generator.” And to prove the effectiveness of email, it offers its own weekly email that highlights various eye-catching products. But not trendy ones:

Trends come and go but cool never goes out of style.

From https://cultivated.cool/.

Bredemarket isn’t a prospect for Cultivated Cool’s first service—my written content creation is not continuously cool. (Although it’s definitely not trendy either). But I am a consumer of Cultivated Cool’s weekly emails, and you should subscribe to its weekly emails also. Enter your email and click the “Subscribe” button on Cultivated Cool’s webpage.

And Cultivated Cool’s weekly emails lead me to the point of this post.

The day that Stella sculpted air

Today’s weekly newsletter issue from Cultivated Cool is entitled “Dig It.” But this has nothing to do with the Beatles or with ABBA. Instead it has to do with gardening, and the issue tells the story of Stella, in five parts. The first part is entitled “Snip it in the Bud,” and begins as follows.

Stella felt a shiver go down her spine the first time the pruner blades closed. She wasn’t just cutting branches; she was sculpting air.

From https://cultivated.cool/dig-it/.

The pruner blades featured in Cultivated Cool are sold by Niwaki, an English company that offers Japanese-inspired products. As I type this, Niwaki offers 18 different types of secateurs (pruning shears), including large hand, small hand, right-handed, and left-handed varieties. You won’t get these at your dollar store; prices (excluding VAT) range from US$45.50 to US$280.50 (Tobisho Hiryu Secateurs).

Stella, how old are you?

But regardless of price, all the secateurs sold by Niwaki have one thing in common: an age restriction on purchases. Not that Niwaki truly enforces this restriction.

Please note: By law, we are not permitted to sell a knife or blade to any person under the age of 18. By placing an order for one of these items you are declaring that you are 18 years of age or over. These items must be used responsibly and appropriately.

From https://www.niwaki.com/tobisho-hiryu-secateurs/#P00313-1.

That’s the functional equivalent of the so-called age verification scheme used on some alcohol websites.

I hope you’re sitting down as I reveal this to you: underage people can bypass the age assurance scheme on alcohol websites by inputting any year of birth that they wish. Just like anyone, even a small child, can make any declaration of age that they want, as long as their credit card is valid.
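In code, this style of “age assurance” amounts to trusting whatever the visitor types, as in this deliberately bare sketch (the function and constant names are hypothetical):

```python
from datetime import date

MIN_AGE = 18

def declared_age_gate(claimed_birth_year, today=None):
    """The declaration-style 'age check': trust whatever birth year the
    visitor enters. Nothing ties the claim to a real person, so any
    visitor passes by typing a sufficiently old year."""
    today = today or date.today()
    return today.year - claimed_birth_year >= MIN_AGE

# A twelve-year-old simply claims a 1980 birth year and sails through:
bypassed = declared_age_gate(1980, today=date(2024, 5, 1))
```

That single comparison is the entire control, which is why declaration-only gates are considered the weakest form of age assurance.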

By Adrian Pingstone – Transferred from en.wikipedia, Public Domain, https://commons.wikimedia.org/w/index.php?curid=112727.

Now I have no idea whether Ofcom’s UK Online Safety Act consultations will eventually govern Niwaki’s sales of adult-controlled physical products. But if Niwaki finds itself under the UK Online Safety Act, or some other act in the United Kingdom or any country where Niwaki conducts business, then a simple assurance that the purchaser is old enough to buy “a knife or blade” will not be sufficient.

Niwaki’s website would then need to adopt some form of age assurance for purchasers, either by using a government-issued identification document (age verification) or examining the face to algorithmically surmise the customer’s age (age estimation).

  • Age verification. For example, the purchaser would need to provide their government-issued identity document so that the seller can verify the purchaser’s age. Ideally, this would be coupled with live face capture so that the seller can compare the live face to the face on the ID, ensuring that a kid didn’t steal mommy’s or daddy’s driver’s license (licence) or passport.
  • Age estimation. For example, the purchaser would need to provide their live face so that the seller can estimate the purchaser’s age. In this case (and in the age verification case if a live face is captured), the seller would need to use liveness detection to ensure that the face is truly a live face and is not a presentation attack or other deepfake.
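Wiring the two bullets together, the seller’s decision logic might look like the following sketch. Every field and function name here is a hypothetical placeholder for whichever vendor components the site integrates, not a real SDK.

```python
MIN_AGE = 18  # e.g. the UK minimum age for buying a knife or blade

def assure_age(order, min_age=MIN_AGE):
    """Decide whether a purchaser may buy an age-restricted item.

    Both paths begin with a live face capture plus liveness detection,
    per the bullets above. The verification path also matches the live
    face against a government ID and reads the ID's date of birth; the
    estimation path infers age from the face alone.
    """
    if not order["liveness_passed"]:          # photo, replay, or deepfake
        return "deny"
    if order["method"] == "verification":
        if not order["face_matches_id"]:      # e.g. a kid using a parent's licence
            return "deny"
        return "allow" if order["age_from_id"] >= min_age else "deny"
    # Estimation path: allow only clear cases; route borderline
    # estimates to the stronger verification flow.
    if order["estimated_age"] >= min_age + 2:
        return "allow"
    return "escalate to age verification"
```

The key design choice is that estimation never issues a hard deny near the threshold; it escalates, since an estimate alone cannot resolve a one-year difference.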

And then the seller would need to explain why it was doing all of this.

How can a company explain its age assurance solution in a way that its prospects will understand…and how can the company reassure its prospects that its age assurance method protects their privacy?

Companies other than identity companies must explain their identity solutions

Which brings me to the TRUE call to action in this post. (Sorry Mark and Lindsey. You’re still cool.)

I’ve stated ad nauseam that identity companies need to explain their identity solutions: why they developed them, how they work, what they do, and several other things.

In the same way, firms that incorporate solutions from identity companies got some splainin’ to do.

This applies to a financial institution that requires customers to use an identity verification solution before opening an account, just like it applies to an online gardening implement website that uses an age assurance method to check the age of pruning shear purchasers.

So how can such companies explain their identity and biometrics features in a way their end customers can understand?

Bredemarket can help.

Age Assurance Meets Identity Assurance (Level 2)

I’ve talked about age verification and age estimation here and elsewhere. And I’ve also talked about Identity Assurance Level 2. But I’ve never discussed both simultaneously until now.

I belatedly read this March 2024 article that describes Georgia’s proposed bill to regulate access to material deemed harmful to minors.

A minor in Georgia (named Jimmy Carter) in the 1920s, before computers allowed access to adult material. From National Park Service, https://www.nps.gov/jica/learn/historyculture/early-life.htm.

The Georgia bill explicitly mentions Identity Assurance Level 2.

Under the bill, the age verification methods would have to meet or exceed the National Institute of Standards and Technology’s Identity Assurance Level 2 standard.

So if you think you can use Login.gov to access a porn website, think again.

There’s also a mention of mobile driver’s licenses, albeit without a corresponding mention of ISO/IEC 18013-5:2021.

Specifically mentioned in the bill text is “digitized identification cards,” described as “a data file available on a mobile device with connectivity to the internet that contains all of the data elements visible on the face and back of a driver’s license or identification card.”

So digital identity is becoming more important for online access, as long as certain standards are met.

Ofcom and the Digital Trust & Safety Partnership

The Digital Trust & Safety Partnership (DTSP) consists of “leading technology companies,” including Apple, Google, Meta (parent of Facebook, Instagram, and WhatsApp), Microsoft (and its LinkedIn subsidiary), TikTok, and others.

The DTSP obviously has its views on Ofcom’s enforcement of the UK Online Safety Act.

Which, as Biometric Update notes, boils down to “the industry can regulate itself.”

Here’s how the DTSP stated this in its submission to Ofcom:

DTSP appreciates and shares Ofcom’s view that there is no one-size-fits-all approach to trust and safety and to protecting people online. We agree that size is not the only factor that should be considered, and our assessment methodology, the Safe Framework, uses a tailoring framework that combines objective measures of organizational size and scale for the product or service in scope of assessment, as well as risk factors.

From https://dtspartnership.org/press-releases/dtsp-submission-to-the-uk-ofcom-consultation-on-illegal-harms-online/.

We’ll get to the “Safe Framework” later. DTSP continues:

Overly prescriptive codes may have unintended effects: Although there is significant overlap between the content of the DTSP Best Practices Framework and the proposed Illegal Content Codes of Practice, the level of prescription in the codes, their status as a safe harbor, and the burden of documenting alternative approaches will discourage services from using other measures that might be more effective. Our framework allows companies to use whatever combination of practices most effectively fulfills their overarching commitments to product development, governance, enforcement, improvement, and transparency. This helps ensure that our practices can evolve in the face of new risks and new technologies.

From https://dtspartnership.org/press-releases/dtsp-submission-to-the-uk-ofcom-consultation-on-illegal-harms-online/.

But remember that the UK’s neighbors in the EU recently prescribed that USB-C cables are the way to go. This not only forced DTSP member Apple to abandon the Lightning cable worldwide, but it also affects Google and others, because there will be little incentive to develop better connectors. Who wants to fight a bureaucratic battle with Brussels? Alternatively, we may end up with advanced “world” versions of cables and deprecated “EU” standards-compliant cables.

So forget Ofcom’s so-called overbearing approach and just adopt the Safe Framework. Big tech will take care of everything, including all those age assurance issues.

DTSP’s September 2023 paper on age assurance documents a “not overly prescriptive” approach, with a lot of “it depends” discussion.

Incorporating each characteristic comes with trade-offs, and there is no one-size-fits-all solution. Highly accurate age assurance methods may depend on collection of new personal data such as facial imagery or government-issued ID. Some methods that may be economical may have the consequence of creating inequities among the user base. And each service and even feature may present a different risk profile for younger users; for example, features that are designed to facilitate users meeting in real life pose a very different set of risks than services that provide access to different types of content….

Instead of a single approach, we acknowledge that appropriate age assurance will vary among services, based on an assessment of the risks and benefits of a given context. A single service may also use different approaches for different aspects or features of the service, taking a multi-layered approach.

From https://dtspartnership.org/wp-content/uploads/2023/09/DTSP_Age-Assurance-Best-Practices.pdf.

So will Ofcom heed the DTSP’s advice and say “Never mind. You figure it out”?

Um, maybe not.

U.S. Sports Betting Tax Revenue

On Tuesday, February 13, Adam Grundy (supervisory statistician in the U.S. Census Bureau’s Economic Management Division) published an article entitled “Quarterly Survey of State and Local Tax Revenue Shows Which States Collected the Most Revenue from Legalized Sports Betting.”

According to Grundy:

New York was the state with the largest share of the nation’s tax revenue in the (third) quarter of 2023: $188.53 million or more than 37% of total tax revenue and gross receipts from sports betting in the United States. Indiana ($38.6 million) and Ohio ($32.9 million) followed.

From https://www.census.gov/library/stories/2024/02/legal-sports-betting.html.

Are you wondering why populous states such as California and Texas don’t appear on the list? That’s because sports betting is only legal in 38 states and the District of Columbia.

Sports betting in any form is currently illegal in California, Texas, Idaho, Utah, Minnesota, Missouri, Alabama, Georgia, South Carolina, Oklahoma, Alaska and Hawaii.

From https://www.forbes.com/betting/legal/states-where-sports-betting-is-legal/#states_where_sports_betting_is_illegal_section.

Sports betting was not legal in Florida during the 3rd quarter of 2023, but was subsequently legalized.

Which returns us to California and Texas, opposites in many ways, which nevertheless agree that sports betting is undesirable.

But the remaining states that allow sports betting need to ensure that the gamblers meet age verification requirements. (Even though they have a powerful incentive to let underage people gamble so that they receive more tax revenue.)

“Looks like the over-under for the NBA All-Star Game is 400, Mikey.” By Adrian Pingstone – Transferred from en.wikipedia, Public Domain, https://commons.wikimedia.org/w/index.php?curid=112727

If your identity/biometric firm offers an age verification solution, and you need content to publicize your solution, contact Bredemarket.

Friday Deployment, Brittany Pietsch, and Marketing to “Thirsty People”

As you may know, I dislike the phrase “target audience” and am actively seeking an alternative.

By Christian Gidlöf – Photo taken by Christian Gidlöf, Public Domain, https://commons.wikimedia.org/w/index.php?curid=2065930

So far the best alternative to “target audience” that I’ve found is “hungry people,” which not only focuses on people rather than an abstraction, but also focuses on those who are ready to purchase your product or service.

But I just found an instance in which “thirsty people” may be better than “hungry people.” Specifically, for the Colorado spirits company Friday Deployment, which engages in product marketing in a very…um…targeted way. Including the use of a micro-influencer who is well-known to Friday Deployment’s thirsty people.

Heads up for regular Bredemarket blog readers: the “why” and “how” questions are coming.

Why are Friday Deployment’s “thirsty people” technologists?

Why does Friday Deployment aim its product marketing at technologists?

The website doesn’t elaborate on this, but according to LinkedIn, company owner Rishi Malik is also the VP of Engineering for Varo Bank (an active user of identity verification), and Malik’s history includes two decades of engineering experience. That’s enough to drive anyone to drink, on Fridays or any other day.

Presumably because of this background, Friday Deployment’s product marketing is filled with tech references. Here’s a sample from Friday Deployment’s web page (as of Friday, February 2, 2024).

It was inevitable. The tree is out of date, the history is a mess, and you just want to start your weekend. Maybe you just do a quick little git push --force? Maybe someone already did, and you now get to figure out the correct commit history?

From https://fridaydeployment.co/.

But that isn’t the only way that Friday Deployment markets to its “thirsty people.”

How does Friday Deployment’s marketing resonate with its thirsty people?

How else does Friday Deployment address a technologist audience?

Those of you who are familiar with LinkedIn’s tempests in a teapot realize that LinkedIn users don’t spend all of their time talking about green banners or vaping during remote interviews.

We also spend a lot of time talking about Brittany Pietsch.

TL;DR:

  • Pietsch was an account executive with Cloudflare.
  • Well, she was until one day when she and about 40 others were terminated.
  • Pietsch was terminated by two people whom she didn’t know and who could not tell her why she was terminated.
  • This story would have been swept under the rug…except that Pietsch knew that people were losing their jobs, so when she was invited to a meeting, she video-recorded the first part of the termination and shared it on the tubes.
  • The video went viral and launched a ton of discussion both for and against what Pietsch did. I lean toward the “for,” if you’re wondering.
  • And even Cloudflare admitted it screwed up in how the terminations were handled.

Since Friday Deployment’s “thirsty people” were probably familiar with the Brittany Pietsch story, the company worked with her to re-create her termination video…with a twist. (Not literally, since Pietsch drank the gin straight.)

Not every day is a good day at work. But every day is a good day for gin. Check out fridaydeployment.co.

From https://www.tiktok.com/@brittanypeachhh/video/7330646930009410862.

Well, the product marketing ploy worked, since I clicked on the website of a spirits company that was new to me, and now I’m on their mailing list.

But let’s talk alcohol age verification

The Friday Deployment product marketing partnership with Brittany Pietsch worked…mostly. Except that I have one word of advice for company owner Rishi Malik.

With your Varo Bank engineering experience, you of all people should realize that Friday Deployment’s age verification system is hopelessly inadequate. A robust age verification system, an age estimation system, or even a simple question asking for your date of birth would be better.

Bredemarket can’t create a viral video for your tech firm, but…

But enough about Friday Deployment. Let’s talk about YOUR technology firm.

How can your company market to your thirsty (or hungry) people? Bredemarket can’t create funny videos with micro-influencers, but Bredemarket can craft the words that speak to your audience.

To learn more about Bredemarket’s marketing and writing services for technology firms, click on the image below.

Sugar Pie Honey Bunch

Sorry, but all this discussion about Friday…well, I can’t help myself.

From https://www.youtube.com/watch?v=kfVsfOSbJY0.

And Rebecca Black, who actually has a very fine voice and sounds great when she’s singing non-inane lyrics, has engaged in a number of marketing opportunities herself. See if you can spot her in this ad.

Time for the FIRST Iteration of Your Firm’s UK Online Safety Act Story

By Adrian Pingstone – Transferred from en.wikipedia, Public Domain, https://commons.wikimedia.org/w/index.php?curid=112727

A couple of weeks ago, I asked this question:

Is your firm affected by the UK Online Safety Act, and the future implementation of the Act by Ofcom?

From https://bredemarket.com/2023/10/30/uk-online-safety-act-story/

Why did I mention the “future implementation” of the UK Online Safety Act? Because the passage of the UK Online Safety Act is just the FIRST step in a long process. Ofcom still has to figure out how to implement the Act.

Ofcom started to work on this on November 9, but it’s going to take many months to finalize—I mean finalise things. This is the UK Online Safety Act, after all.

This is the first of four major consultations that Ofcom, as regulator of the new Online Safety Act, will publish as part of our work to establish the new regulations over the next 18 months.

It focuses on our proposals for how internet services that enable the sharing of user-generated content (‘user-to-user services’) and search services should approach their new duties relating to illegal content.

From https://www.ofcom.org.uk/consultations-and-statements/category-1/protecting-people-from-illegal-content-online

On November 9 Ofcom published a slew of summary and detailed documents. Here’s a brief excerpt from the overview.

Mae’r ddogfen hon yn rhoi crynodeb lefel uchel o bob pennod o’n hymgynghoriad ar niwed anghyfreithlon i helpu rhanddeiliaid i ddarllen a defnyddio ein dogfen ymgynghori. Mae manylion llawn ein cynigion a’r sail resymegol sylfaenol, yn ogystal â chwestiynau ymgynghori manwl, wedi’u nodi yn y ddogfen lawn. Dyma’r cyntaf o nifer o ymgyngoriadau y byddwn yn eu cyhoeddi o dan y Ddeddf Diogelwch Ar-lein. Mae ein strategaeth a’n map rheoleiddio llawn ar gael ar ein gwefan.

From https://www.ofcom.org.uk/__data/assets/pdf_file/0021/271416/CYM-illegal-harms-consultation-chapter-summaries.pdf

Oops, I seem to have quoted from the Welsh version. Maybe you’ll have better luck reading the English version.

This document sets out a high-level summary of each chapter of our illegal harms consultation to help stakeholders navigate and engage with our consultation document. The full detail of our proposals and the underlying rationale, as well as detailed consultation questions, are set out in the full document. This is the first of several consultations we will be publishing under the Online Safety Act. Our full regulatory roadmap and strategy is available on our website.

From https://www.ofcom.org.uk/__data/assets/pdf_file/0030/270948/illegal-harms-consultation-chapter-summaries.pdf

If you want to peruse everything, go to https://www.ofcom.org.uk/consultations-and-statements/category-1/protecting-people-from-illegal-content-online.

And if you need help telling your firm’s UK Online Safety Act story, Bredemarket can help. (Unless the final content needs to be in Welsh.) Click below!

What Is Your Firm’s UK Online Safety Act Story?

It’s time to revisit my August post entitled “Can There Be Too Much Encryption and Age Verification Regulation?” because the United Kingdom’s Online Safety Bill is now the Online Safety ACT.

Having passed, eventually, through the UK’s two houses of Parliament, the bill received royal assent (October 26)….

[A]dded in (to the Act) is a highly divisive requirement for messaging platforms to scan users’ messages for illegal material, such as child sexual abuse material, which tech companies and privacy campaigners say is an unwarranted attack on encryption.

From Wired.
By Adrian Pingstone – Transferred from en.wikipedia, Public Domain, https://commons.wikimedia.org/w/index.php?curid=112727

This not only opens up issues regarding encryption and privacy, but also specific identity technologies such as age verification and age estimation.

This post looks at three types of firms that are affected by the UK Online Safety Act, the stories they are telling, and the stories they may need to tell in the future. What is YOUR firm’s Online Safety Act-related story?

What three types of firms are affected by the UK Online Safety Act?

As of now I have been unable to locate a full version of the final Act as passed, but presumably the provisions from this July 2023 version (PDF) have only undergone minor tweaks.

Among other things, this version discusses “User identity verification” in section 65, “Category 1 service” in section 96(10)(a), “United Kingdom user” in section 228(1), and a multitude of other terms that affect how companies will conduct business under the Act.

I am focusing on three different types of companies:

  • Technology services (such as Yoti) that provide identity verification, including but not limited to age verification and age estimation.
  • User-to-user services (such as WhatsApp) that provide encrypted messages.
  • User-to-user services (such as Wikipedia) that allow users (including United Kingdom users) to contribute content.

What types of stories will these firms have to tell, now that the Act is law?

Stories from identity verification services

From Yoti.

For ALL services, the story will vary as Ofcom decides how to implement the Act, but we are already seeing the stories from identity verification services. Here is what Yoti stated after the Act became law:

We have a range of age assurance solutions which allow platforms to know the age of users, without collecting vast amounts of personal information. These include:

  • Age estimation: a user’s age is estimated from a live facial image. They do not need to use identity documents or share any personal information. As soon as their age is estimated, their image is deleted – protecting their privacy at all times. Facial age estimation is 99% accurate and works fairly across all skin tones and ages.
  • Digital ID app: a free app which allows users to verify their age and identity using a government-issued identity document. Once verified, users can use the app to share specific information – they could just share their age or an ‘over 18’ proof of age.
From Yoti.

Stories from encrypted message services

From WhatsApp.

Not surprisingly, message encryption services are telling a different story.

MailOnline has approached WhatsApp’s parent company Meta for comment now that the Bill has received Royal Assent, but the firm has so far refused to comment.

Will Cathcart, Meta’s head of WhatsApp, said earlier this year that the Online Safety Act was the most concerning piece of legislation being discussed in the western world….

[T]o comply with the new law, the platform says it would be forced to weaken its security, which would not only undermine the privacy of WhatsApp messages in the UK but also for every user worldwide. 

‘Ninety-eight per cent of our users are outside the UK. They do not want us to lower the security of the product, and just as a straightforward matter, it would be an odd choice for us to choose to lower the security of the product in a way that would affect those 98 per cent of users,’ Mr Cathcart has previously said.

From Daily Mail.

Stories from services with contributed content

From Wikipedia.

And contributed content services are also telling their own story.

Companies, from Big Tech down to smaller platforms and messaging apps, will need to comply with a long list of new requirements, starting with age verification for their users. (Wikipedia, the eighth-most-visited website in the UK, has said it won’t be able to comply with the rule because it violates the Wikimedia Foundation’s principles on collecting data about its users.)

From Wired.

What is YOUR firm’s story?

All of these firms have shared their stories either before or after the Act became law, and those stories will change depending upon what Ofcom decides.

But what about YOUR firm?

Is your firm affected by the UK Online Safety Act, and the future implementation of the Act by Ofcom?

Do you have a story that you need to tell to achieve your firm’s goals?

Do you need an extra, experienced hand to help out?

Learn how Bredemarket can create content that drives results for your firm.

Click the image below.

The Imperfect Way to Enforce New York’s Child Data Protection Act

It’s often good to use emotion in your marketing.

For example, when biometric companies want to justify the use of their technology, they have found that it is very effective to position biometrics as a way to combat sex trafficking.

Similarly, moves to rein in social media are positioned as a way to preserve mental health.

By Marc NL at English Wikipedia – Transferred from en.wikipedia to Commons., Public Domain, https://commons.wikimedia.org/w/index.php?curid=2747237

Now that’s a not-so-pretty picture, but it effectively speaks to emotions.

“If poor vulnerable children are exposed to addictive, uncontrolled social media, YOUR child may end up in a straitjacket!”

In New York state, four government officials have declared that the ONLY way to preserve the mental health of underage social media users is via two bills, one of which is the “New York Child Data Protection Act.”

But there is a challenge in enforcing ALL of the bill’s provisions…and only one way to solve it. An imperfect way—age estimation.

This post only briefly addresses the alleged mental health issues of social media before plunging into one of the two proposed bills to solve the problem. It then examines a potentially unenforceable part of the bill and a possible solution.

Does social media make children sick?

Letitia “Tish” James is the 67th Attorney General for the state of New York. From https://ag.ny.gov/about/meet-letitia-james

On October 11, a host of New York State government officials, led by New York State Attorney General Letitia James, jointly issued a release with the title “Attorney General James, Governor Hochul, Senator Gounardes, and Assemblymember Rozic Take Action to Protect Children Online.”

Because they want to protect the poor vulnerable children.

By Paolo Monti – Available in the BEIC digital library and uploaded in partnership with BEIC Foundation.The image comes from the Fondo Paolo Monti, owned by BEIC and located in the Civico Archivio Fotografico of Milan., CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=48057924

And because the major U.S. social media companies are headquartered in California. But I digress.

So why do they say that children need protection?

Recent research has shown devastating mental health effects associated with children and young adults’ social media use, including increased rates of depression, anxiety, suicidal ideation, and self-harm. The advent of dangerous, viral ‘challenges’ being promoted through social media has further endangered children and young adults.

From https://ag.ny.gov/child-online-safety

Of course one can also argue that social media is harmful to adults, but the New Yorkers aren’t going to go that far.

So they are just going to protect the poor vulnerable children.

CC BY-SA 4.0.

This post isn’t going to deeply analyze one of the two bills the quartet have championed, but I will briefly mention that bill now.

  • The “Stop Addictive Feeds Exploitation (SAFE) for Kids Act” (S7694/A8148) defines “addictive feeds” as those that are arranged by a social media platform’s algorithm to maximize the platform’s use.
  • Those of us who are flat-out elderly vaguely recall that the algorithmic feed replaced the former “chronological feed,” in which the most recent content appeared first and you had to scroll down to see that really cool post from two days ago. New York wants the chronological feed to be the default for social media users under 18.
  • The bill also proposes to limit under 18 access to social media without parental consent, especially between midnight and 6:00 am.
  • And those who love Illinois BIPA will be pleased to know that the bill allows parents (and their lawyers) to sue for damages.

Previous efforts to control underage use of social media have faced legal scrutiny, but since Attorney General James has sworn to uphold the U.S. Constitution, presumably she has thought about all this.

Enough about SAFE for Kids. Let’s look at the other bill.

The New York Child Data Protection Act

The second bill, and the one that concerns me, is the “New York Child Data Protection Act” (S7695/A8149). Here is how the quartet describes how this bill will protect the poor vulnerable children.

CC BY-SA 4.0.

With few privacy protections in place for minors online, children are vulnerable to having their location and other personal data tracked and shared with third parties. To protect children’s privacy, the New York Child Data Protection Act will prohibit all online sites from collecting, using, sharing, or selling personal data of anyone under the age of 18 for the purposes of advertising, unless they receive informed consent or unless doing so is strictly necessary for the purpose of the website. For users under 13, this informed consent must come from a parent.

From https://ag.ny.gov/child-online-safety

And again, this bill provides a BIPA-like mechanism for parents or guardians (and their lawyers) to sue for damages.

But let’s dig into the details. With apologies to the New York State Assembly, I’m going to dig into the Senate version of the bill (S7695). Bear in mind that this bill could be amended after I post this, and some of the portions that I cite could change.

The “definitions” section of the bill includes the following:

“MINOR” SHALL MEAN A NATURAL PERSON UNDER THE AGE OF EIGHTEEN.

From https://www.nysenate.gov/legislation/bills/2023/S7695, § 899-EE, 2.

This only applies to natural persons. So the bots are safe, regardless of age.

Speaking of age, the age of 18 isn’t the only age referenced in the bill. Here’s a part of the “privacy protection by default” section:

§ 899-FF. PRIVACY PROTECTION BY DEFAULT.

1. EXCEPT AS PROVIDED FOR IN SUBDIVISION SIX OF THIS SECTION AND SECTION EIGHT HUNDRED NINETY-NINE-JJ OF THIS ARTICLE, AN OPERATOR SHALL NOT PROCESS, OR ALLOW A THIRD PARTY TO PROCESS, THE PERSONAL DATA OF A COVERED USER COLLECTED THROUGH THE USE OF A WEBSITE, ONLINE SERVICE, ONLINE APPLICATION, MOBILE APPLICATION, OR CONNECTED DEVICE UNLESS AND TO THE EXTENT:

(A) THE COVERED USER IS TWELVE YEARS OF AGE OR YOUNGER AND PROCESSING IS PERMITTED UNDER 15 U.S.C. § 6502 AND ITS IMPLEMENTING REGULATIONS; OR

(B) THE COVERED USER IS THIRTEEN YEARS OF AGE OR OLDER AND PROCESSING IS STRICTLY NECESSARY FOR AN ACTIVITY SET FORTH IN SUBDIVISION TWO OF THIS SECTION, OR INFORMED CONSENT HAS BEEN OBTAINED AS SET FORTH IN SUBDIVISION THREE OF THIS SECTION.

From https://www.nysenate.gov/legislation/bills/2023/S7695

So a lot of this bill depends upon whether a person is over or under the age of eighteen, or over or under the age of thirteen.

And that’s a problem.
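As I read S7695, the Act’s consent regime branches on exactly those two thresholds. A minimal sketch of that branching follows; the function name and return strings are my own shorthand, not language from the bill.

```python
def consent_rule(age: int) -> str:
    """Which consent regime applies to a covered user, per my reading
    of S7695. Labels are my own shorthand, not statutory text."""
    if age <= 12:
        # Processing only as permitted under COPPA (15 U.S.C. § 6502);
        # informed consent must come from a parent.
        return "parental consent (COPPA)"
    elif age < 18:
        # Ages 13-17: strictly necessary processing, or the minor's
        # own informed consent.
        return "minor's informed consent or strictly necessary"
    else:
        return "adult (outside the bill's minor provisions)"
```

The branching itself is trivial; the hard part, as the next section discusses, is knowing the user’s age in the first place.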

How old are you?

The bill needs to know whether or not a person is 18 years old. And I don’t think the quartet will be satisfied with the way that alcohol websites determine whether someone is 21 years old.

This age verification method is…not that robust.

Attorney General James and the others would presumably prefer that the social media companies verify ages with a government-issued ID such as a state driver’s license, a state identification card, or a national passport. This is how most entities verify ages when they have to satisfy legal requirements.

For some people, even some minors, this is not that much of a problem. Anyone who wants to drive in New York State must have a driver’s license, and you must be at least 16 years old to get one. Admittedly some people in the city never bother to get a driver’s license, but at some point these people will probably get a state ID card.

You don’t need a driver’s license to ride the New York City subway, but if the guitarist wants to open a bank account for his cash it would help him prove his financial identity. By David Shankbone – Own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=2639495

  • However, there are going to be some 17 year olds who don’t have a driver’s license, government ID or passport.
  • And some 16 year olds.
  • And once you look at younger people—15 year olds, 14 year olds, 13 year olds, 12 year olds—the chances of them having a government-issued identification document are much lower.

What are these people supposed to do? Provide a birth certificate? And how will the social media companies know if the birth certificate is legitimate?

But there’s another way to determine ages—age estimation.

How old are you, part 2

As long-time readers of the Bredemarket blog know, I have struggled with the issue of age verification, especially for people who do not have driver’s licenses or other government identification. Age estimation in the absence of a government ID is still an inexact science, as even Yoti has stated.

Our technology is accurate for 6 to 12 year olds, with a mean absolute error (MAE) of 1.3 years, and of 1.4 years for 13 to 17 year olds. These are the two age ranges regulators focus upon to ensure that under 13s and 18s do not have access to age restricted goods and services.

From https://www.yoti.com/wp-content/uploads/Yoti-Age-Estimation-White-Paper-March-2023.pdf

So if a minor does not have a government ID, and the social media firm has to use age estimation to determine a minor’s age for purposes of the New York Child Data Protection Act, the following two scenarios are possible:

  • An 11 year old may be incorrectly allowed to give informed consent for purposes of the Act.
  • A 14 year old may be incorrectly denied the ability to give informed consent for purposes of the Act.
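To make those two scenarios concrete, here is a toy simulation. It models estimation error as Laplace noise whose mean absolute error matches the roughly 1.3 years Yoti reports; this is purely my own illustrative model, not how any vendor’s estimator actually behaves, and the resulting percentages are illustrative only.

```python
import random

def estimate_age(true_age: float, mae: float = 1.3,
                 rng: random.Random = None) -> float:
    """Toy age estimator: true age plus Laplace noise.
    For a Laplace distribution, the MAE equals the scale parameter,
    and Laplace noise is a random sign times an exponential draw."""
    rng = rng or random.Random()
    return true_age + rng.choice([-1, 1]) * rng.expovariate(1 / mae)

rng = random.Random(42)  # fixed seed so the toy run is repeatable
trials = 10_000
# Scenario 1: an 11 year old estimated as 13 or older.
over_13_for_11yo = sum(estimate_age(11, rng=rng) >= 13 for _ in range(trials))
# Scenario 2: a 14 year old estimated as under 13.
under_13_for_14yo = sum(estimate_age(14, rng=rng) < 13 for _ in range(trials))
print(f"11-year-olds passed as 13+: {over_13_for_11yo / trials:.1%}")
print(f"14-year-olds flagged as under 13: {under_13_for_14yo / trials:.1%}")
```

Even with this crude model, a meaningful fraction of each group lands on the wrong side of the age-13 threshold, which is exactly the problem the two bullet points describe.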

Is age estimation “good enough for government work”?