Why Did Apple Implement iPhone/iPad Age Verification in the United Kingdom?

There has been ongoing debate on whether age verification should be implemented at the website level or at the operating system level…or not at all.

In the United Kingdom, Apple is opting for OS-level age verification, according to the BBC.

“Apple is rolling out age checks for iPhone and iPad users in the UK that will ask them to verify if they are adults to access “certain services” such as 18-plus apps.

“After customers accept the latest iOS 26.4 software update, they will be asked to verify their age, which they can do by providing a credit card or scanning their ID, according to an Apple support page.

“Those who do not confirm how old they are or are underage will have web content filters turned on automatically.”

Specifically, according to Apple:

“When creating a new Apple Account or using Apple services, you may see a prompt asking you to confirm that you’re an adult. This is required by law in some countries and regions.”

Regarding that last sentence, is OS-level age verification actually REQUIRED? Silkie Carlo of Big Brother Watch says no:

“Carlo told the BBC she believed Apple had “crossed the Rubicon” with its new software update which she described as “more like ransomware”, and which she said essentially left millions of Brits owning a “child’s device”, unless they complied with the age checks.

“And she said while she believed children’s online safety was vital, it required more thoughtful tech responsibility and not “sweeping, draconian shock demands by foreign companies for all of our IDs and credit cards”.”

Note the appeal to resist the “foreign” company, a framing that also raises the question of whether Apple’s collection of this information would violate United Kingdom privacy laws if the data is sent to Cupertino.

For the record, Ofcom currently only requires age verification for pornographic sites, not for everything.

So why did Apple do it if UK law doesn’t require it?

Two reasons:

  • Future proofing. While the UK and other jurisdictions do not require age verification at the OS level now, they may require it at some point. If so, Apple has already implemented it in the UK (for iPhones and iPads) and can implement it elsewhere.
  • CYA. A jury in California awarded damages after finding that Meta and Google were responsible for a woman’s anxiety and depression, suffered because of her social media use as a child. Apple doesn’t want to face a similar lawsuit.

Incidentally, it’s interesting to note that these and other stories pair “Meta” and “Google.” Does no one refer to “Alphabet” (Google’s parent company) any more?

Data Labelers Gonna Label, and Class Action Lawyers Gonna Lawyer

On Wednesday, I described how Meta’s Kenyan data labelers ended up watching explicit videos from people who presumably didn’t know that smart glasses were recording their activity.

To no one’s surprise, class action lawyers are now involved.

“In the newly filed complaint, plaintiffs Gina Bartone of New Jersey and Mateo Canu of California, represented by the public interest-focused Clarkson Law Firm, allege that Meta violated privacy laws and engaged in false advertising.

“The complaint alleges that the Meta AI smart glasses are advertised using promises like “designed for privacy, controlled by you,” and “built for your privacy,” which might not lead customers to assume their glasses’ footage, including intimate moments, was being watched by overseas workers. The plaintiffs believed Meta’s marketing and said they saw no disclaimer or information that contradicted the advertised privacy protections.”

So what does Meta say?

“Clear, easy device and app settings help you manage your information, giving you control over what content you choose to share with others, and when.”

Except that according to Clarkson, people can’t opt out of the data labeling process.

This could get very revealing.

Data Labelers Gonna Label

Before diving in, I should note that this is not just a Ray-Ban Meta AI glasses issue.

This is an issue with ANY video feed that requires AI processing.

Because AI can’t do its job on its own.

To ensure that the AI is trained properly, an army of humans reviews the same data and labels it by hand, classifying what each piece of footage contains.

We allow this when we sign those Terms of Service. And I personally believe it’s a good thing, since it helps correct errors from uncontrolled AI.
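To make the labeling step concrete, here is a minimal sketch (in Python, with hypothetical names; no actual pipeline is implied) of how a human-review labeling task might work: several annotators see the same clip, and a majority vote becomes the label fed back into training.

```python
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class LabelingTask:
    """One clip pulled from a video feed, awaiting human review."""
    clip_id: str
    labels: list = field(default_factory=list)  # one (annotator, label) pair per reviewer

    def add_label(self, annotator: str, label: str) -> None:
        self.labels.append((annotator, label))

    def consensus(self) -> str:
        """Majority vote across annotators -- the label fed back to training."""
        counts = Counter(label for _, label in self.labels)
        return counts.most_common(1)[0][0]

# Three annotators review the same clip; the majority label wins.
task = LabelingTask(clip_id="clip-0001")
task.add_label("annotator-a", "contains_sensitive_content")
task.add_label("annotator-b", "contains_sensitive_content")
task.add_label("annotator-c", "safe")
print(task.consensus())  # -> contains_sensitive_content
```

The majority-vote aggregation is one common design choice; real labeling pipelines vary, but the key point stands: humans see the raw footage.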

But Futurism notes the types of video feeds that the human data labelers have to label.

“I saw a video where a man puts the glasses on the bedside table and leaves the room,” one data annotator told the newspapers. “Shortly afterwards his wife comes in and changes her clothes.”

Grok.

Basically, we record more than we should. One example cited: a bank card.

But regardless of whether data labelers are present or not, assume that any recording device will record anything, and potentially distribute it.

Jane Says…Nothing

Remember Jane, my Instagram AI influencer?

Well, I received this notification on Instagram:

“Your AI JaneCPAInfluencer is now private because it goes against our AI Studio policies. Please edit it and submit again.”

Naturally I wondered what the violation was. I was directed to the policies at https://aistudio.instagram.com/policies/.

Which part of the policy does Jane violate? That’s a secret…yet another example of “you violated our terms, but we won’t tell you the specifics; YOU figure it out.”

So, since I can still access Jane myself, I asked her. AI is supposed to help you, after all.

“What portion of the Meta AI Studio Policies do you violate, Jane?”

Her response:

“I can’t respond because one or more of my details goes against the AI Studio policies.”

That answer caused me to wonder if Jane would respond to anything.

“Who is Bredemarket?”

“I can’t respond because one or more of my details goes against the AI Studio policies.”

So is it critically important that I spend a lot of time figuring out what the violation is? Um…no.

But I’m curious how this interaction will affect the ads that Meta will present to me later this year.

Messing Up “Meta Data” via the Meta Challenge

I confess that Meta AI’s cluelessness often amuses me. I need to start collecting examples, but it is often off the, um, mark.

But if you REALLY want to confuse Meta AI, participate in Bredemarket’s “Meta Challenge”:

Meta Challenge: at least once per day in October and November, go to Facebook and/or Instagram and ask Meta AI the most inane questions you can think of.

And feel free to ask these inane questions of Bredemarket’s own two Instagram bots.

Because we all want to know who is the best Osmond brother.

And Mark Zuckerberg’s shoe size.

Conversation with one of my Instagram bots.

Why?

Now since Bredemarket’s readers are of above average intelligence (and also have extremely magnetic personalities), you are probably asking why I am promoting this activity.

Simple reason: the data we feed to Meta AI in October and November will be used in December, according to PYMNTS.

“Meta will begin using people’s conversations with its artificial intelligence to create personalized ads and content.

“The change is set to go into effect Dec. 16, the tech giant announced Wednesday (Oct. 1).”

If you are concerned about the Really Big Bunch knowing too much about you, feed them false information just to confuse them.

And maybe you’ll get some wild entertaining ads in return.

And if they complain that you’re intentionally messing up their algorithms, tell the Really Big Bunch that you’d be more than happy to provide the REAL data.

For a price.

No Strategy, Tactics, or Content?

I just created a new reel for my Meta social channels, but in the process invented the Bredemarket t-shirt.

If I didn’t insist on shirts with pockets I’d consider printing some.

No strategy, tactics, or content? Contact Bredemarket. bredemarket.com/mark

This is Only a Test

Just trying to figure out what I would do if Meta lowered the boom on Bredemarket and I couldn’t post audio-enhanced content via its platforms.

“For a Meaningful Apocryphal Animation.” Details here.

Thankfully it’s not auto playing. I don’t want to go back to the 1990s again.

And this also covers me if my Spotify-hosted podcasting empire is reduced to rubble.

Using Personal Devices at Work: Meta AI Smart Glasses at a CBP Raid?

Although the boundaries inevitably blur, there is often a line between devices used at home and devices used at work.

  • For example, if you work in an old-fashioned work office, you shouldn’t use the company photocopier to run personal copies of invitations to your wedding.
  • Similarly, if you have a personal generative AI account, you may cause problems if you use that personal account for work-related research…especially if you feed confidential information to the account. (Don’t do this.)
Not work related. Imagen 4.

The line between personal use and work use of devices may have been crossed by a Customs and Border Protection agent on June 30 in Los Angeles, according to 404 Media.

“A Customs and Border Protection (CBP) agent wore Meta’s AI smart glasses to a June 30 immigration raid outside a Home Depot in Cypress Park, Los Angeles, according to photos and videos of the agent verified by 404 Media.”

If you visit the 404 Media story, you can see zoomed-in pictures of the agent’s glasses showing the telltale signs that these aren’t your average spectacles.

Now 404 Media doesn’t take this single photo as evidence that CBP has formally adopted Meta AI glasses for its work. In fact, a more likely explanation is that these were the agent’s personal glasses, and he chose to wear them to work that day.

And 404 Media also points out that current Meta AI glasses do NOT have built-in facial recognition capabilities.

But even with those caveats, the mere act of wearing the glasses creates potential problems for the agent, for Customs and Border Protection, and for Meta.

Take Grandma, who uses Meta to find those cute Facebook stories about that hunk Ozzy Osbourne (who appeals to an older demographic). If she finds out that her friend Marky Mark Zuckerberg is letting the Government use Meta technology on those poor hardworking workers who just want a better life, well, Grandma may stop buying those trinkets from Facebook Marketplace.

(Unauthorized) Homeland Security Fashion Show. AI-generated by Imagen 4. And no, I don’t know what a “palienza” is.

So the lesson learned? Don’t use personal devices at work. Especially if they’re controversial.

Meta Mutters

The Meta properties are great for driving engagement, but Meta’s odd and untimely application of its rules can be maddening.

I was checking my personal Facebook account this afternoon when I noticed a “Profile has some issues” message and clicked on the “View details” button to see why my profile had a gold restricted minus sign.

Profile has some issues.

When I clicked on the button I found a list of 11 issues encompassing my personal profile, the Bredemarket page, and the Bredemarket groups.

None later than April 17.

Lovely spam, wonderful spam.

I discussed THAT encounter with the Metabot in my Bredemarket post “Defeating the Metabot to Share Whistic’s Survey Results.” As far as I can tell, my grievous violation was this parenthetical statement:

“(And one more key finding. Read the article.)”

I got flagged because Facebook said my content could “trick people to visit…a website.”

We removed your post. You figure out what happened.

But even after removing the parenthetical comment I got flagged again.

Eventually I just posted a link with no text on Facebook, and since that time have studiously avoided posting calls to action on Facebook posts.

But this past issue remains a present issue because my account is restricted…and I’m supposed to do something about it. But without a DeLorean I’m not sure what. I can’t remove the offending posts since Facebook already did so.

It turns out that Wendy Wilkes wrote about this in late July.

“Many users are seeing this today — it’s caused by old posts flagged by Facebook’s system, not recent activity….

You’re not alone — it’s happening to many!

#FacebookIssue #ContentCreator #StayCalm”

From Wendy Wilkes.

So I guess I will just hang tight and see if it auto-clears.

And remind myself again that Facebook is not a dependable platform. That’s the message we’re supposed to get from this…right?