A person in Upland, California posted this on the local Nextdoor. While anecdotal rather than statistical, it shows how the geolocation capabilities of a device (in this case AirPods) identified someone in possession of a stolen vehicle.
Back in August 2023, the U.S. General Services Administration published a blog post that included the following statement:
Login.gov is on a path to providing an IAL2-compliant identity verification service to its customers in a responsible, equitable way. Building on the strong evidence-based identity verification that Login.gov already offers, Login.gov is on a path to providing IAL2-compliant identity verification that ensures both strong security and broad and equitable access.
Login.gov is a secure sign in service used by the public to sign in to participating government agencies. Participating agencies will ask you to create a Login.gov account to securely access your information on their website or application.
You can use the same username and password to access any agency that partners with Login.gov. This streamlines your process and eliminates the need to remember multiple usernames and passwords.
Why would agencies implement Login.gov? Because the agencies want to protect their constituents’ information. If fraudsters capture the personally identifiable information (PII) of someone applying for government services, the breached government agency will face severe repercussions. Login.gov is supposed to protect its partner agencies from these nightmares.
How does Login.gov do this?
Sometimes you might use two-factor authentication consisting of a password and a second factor such as an SMS code or an authentication app.
In more critical cases, Login.gov requests a more reliable method of identification, such as a government-issued photo ID (driver’s license, passport, etc.).
The U.S. National Institute of Standards and Technology, in its publication NIST SP 800-63A, has defined “identity assurance levels” (IALs) that can be used when dealing with digital identities. It’s helpful to review how NIST has defined the IALs. (I’ll define the other acronyms as we go along.)
Assurance in a subscriber’s identity is described using one of three IALs:
IAL1: There is no requirement to link the applicant to a specific real-life identity. Any attributes provided in conjunction with the subject’s activities are self-asserted or should be treated as self-asserted (including attributes a Credential Service Provider [CSP] asserts to a Relying Party [RP]). Self-asserted attributes are neither validated nor verified.
IAL2: Evidence supports the real-world existence of the claimed identity and verifies that the applicant is appropriately associated with this real-world identity. IAL2 introduces the need for either remote or physically-present identity proofing. Attributes could be asserted by CSPs to RPs in support of pseudonymous identity with verified attributes. A CSP that supports IAL2 can support IAL1 transactions if the user consents.
IAL3: Physical presence is required for identity proofing. Identifying attributes must be verified by an authorized and trained CSP representative. As with IAL2, attributes could be asserted by CSPs to RPs in support of pseudonymous identity with verified attributes. A CSP that supports IAL3 can support IAL1 and IAL2 identity attributes if the user consents.
So in its simplest terms, IAL2 requires evidence of a verified credential so that an online person can be linked to a real-life identity. If someone says they’re “John Bredehoft” and fills in an online application to receive government services, IAL2 compliance helps to ensure that the person filling out the online application truly IS John Bredehoft, and not Bernie Madoff.
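The three IAL definitions above can be captured as a small data structure. This is an illustrative paraphrase of the NIST SP 800-63A levels, not official NIST code; the field names and helper function are my own:

```python
# Illustrative summary of the three NIST SP 800-63A identity assurance
# levels; the comments paraphrase the definitions quoted above.
from enum import IntEnum

class IAL(IntEnum):
    IAL1 = 1  # self-asserted attributes; no link to a real-life identity
    IAL2 = 2  # remote or in-person proofing against identity evidence
    IAL3 = 3  # in-person proofing by a trained CSP representative

# Only IAL3 mandates physical presence for identity proofing.
REQUIRES_PHYSICAL_PRESENCE = {IAL.IAL1: False, IAL.IAL2: False, IAL.IAL3: True}

def supportable_levels(csp_level: IAL) -> list[IAL]:
    """A CSP at a given IAL can also support lower-level transactions
    if the user consents (per the definitions quoted above)."""
    return [lvl for lvl in IAL if lvl <= csp_level]

# An IAL2 CSP can serve both IAL1 and IAL2 transactions.
print(supportable_levels(IAL.IAL2))
```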
As more and more of us conduct business—including government business—online, IAL2 compliance is essential to reduce fraud.
One more thing about IAL2 compliance. The mere possession of a valid government issued photo ID is NOT sufficient for IAL2 compliance. After all, Bernie Madoff may be using John Bredehoft’s driver’s license. To make sure that it’s John Bredehoft using John Bredehoft’s driver’s license, an additional check is needed.
This has been explained by ID.me, a private company that happens to compete with Login.gov to provide identity proofing services to government agencies.
Biometric comparison (e.g., selfie with liveness detection or fingerprint) of the strongest piece of evidence to the applicant
So you basically take the photo on a driver’s license and perform a 1:1 facial recognition comparison against the person presenting the license, ideally using liveness detection, to make sure that the presented person is not a fake.
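The decision logic just described can be sketched in a few lines. This is a minimal illustration, not any vendor’s implementation: the match and liveness scores would come from a real face matching and presentation attack detection engine, and the thresholds below are made-up placeholders:

```python
# Hypothetical sketch of a 1:1 verification decision. The scores would
# come from a vendor's face matcher and liveness (PAD) engine; the
# threshold values here are illustrative, not from any real system.
from dataclasses import dataclass

@dataclass
class VerificationResult:
    matched: bool  # selfie matches the photo on the identity evidence
    live: bool     # selfie comes from a live person, not a spoof

    @property
    def verified(self) -> bool:
        # Both checks must pass: a high match score alone could mean
        # Bernie Madoff is holding up a photo of John Bredehoft.
        return self.matched and self.live

def verify_applicant(match_score: float, liveness_score: float,
                     match_threshold: float = 0.90,
                     liveness_threshold: float = 0.80) -> VerificationResult:
    """Compare a selfie to the strongest piece of evidence (e.g. a
    driver's license photo) and require liveness before accepting."""
    return VerificationResult(
        matched=match_score >= match_threshold,
        live=liveness_score >= liveness_threshold,
    )

print(verify_applicant(0.97, 0.35).verified)  # False: failed liveness
print(verify_applicant(0.97, 0.95).verified)  # True: matched and live
```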
As part of an investigation that has run since last April (2022), GSA’s Office of the Inspector General found that GSA was billing partner agencies for IAL2-compliant services, even though Login.gov did not meet Identity Assurance Level 2 (IAL2) standards.
GSA knowingly billed over $10 million for services provided through contracts with other federal agencies, even though Login.gov is not IAL2 compliant, according to the watchdog.
When talking about marketing tools, two words that don’t seem to go together are “marketing” and “Excel” (the Microsoft spreadsheet product). Because I’m in marketing, I encounter images like this all the time.
It’s true that marketing analytics requires a ton of Excel work. I’m not going to talk about marketing analytics here, but if you have an interest in using Excel for marketing analytics, you may want to investigate HubSpot Academy’s free Excel crash course.
As I write this, Bredemarket is neck-deep in a research project for a client. A SECRET research project.
While I won’t reveal the name of the client or the specifics about the research project, I can say that the project requires me to track the following information:
Organization name.
Organization type (based upon fairly common classifications).
Organization geographic location.
Vendor providing services to the organization.
Information about the contract between the vendor and the organization.
A multitude of information sources about the organization, the vendor, and the relationship between the two.
To attack the data capture for this project, I did what I’ve done for a number of similar projects for Bredemarket, Incode, IDEMIA, MorphoTrak, et al.
I threw all the data into a worksheet in an Excel workbook.
I can then sort and filter it to my heart’s content. For example, if I want to view only the rows for which I have contract information, I can filter down to just those rows.
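The same sort-and-filter workflow can be sketched in pandas. The column names and rows below are hypothetical stand-ins, since the real worksheet’s fields are confidential:

```python
# A minimal pandas sketch of filtering rows that have contract
# information. Data and column names are hypothetical placeholders.
import pandas as pd

rows = pd.DataFrame({
    "organization": ["Acme County", "Beta City", "Gamma State"],
    "vendor": ["Vendor A", "Vendor B", "Vendor A"],
    "contract_info": ["5-year term", None, "2-year renewal"],
})

# Keep only the rows where contract information has been captured.
with_contracts = rows[rows["contract_info"].notna()]
print(len(with_contracts))  # 2
```

Excel’s built-in filters do the same thing interactively; a script like this is just handy when the dataset needs to feed other tooling.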
Bredemarket as an identity/biometric research service
For one organization I created a number of different worksheets within a single workbook, in which the worksheet data all fed into a summary worksheet. This allowed my clients to view data either at the detailed level or at the summary level.
For another organization I collected the data from an external source, opened it in Excel, performed some massaging, and then pivoted the data into a new view so that it could then be exported out of Excel and into a super-secret document that I cannot discuss here.
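The massage-and-pivot step described above looks roughly like this in pandas. Again, the data and fields are hypothetical placeholders for the confidential originals:

```python
# Hypothetical sketch of the "massage and pivot" step: raw rows of
# (organization, vendor) pairs pivoted into a count of organizations
# per vendor, ready for export elsewhere.
import pandas as pd

raw = pd.DataFrame({
    "organization": ["Acme County", "Beta City", "Gamma State", "Delta Town"],
    "vendor": ["Vendor A", "Vendor B", "Vendor A", "Vendor A"],
})

# Pivot into one row per vendor, counting the organizations served.
pivot = raw.pivot_table(index="vendor", values="organization", aggfunc="count")
print(pivot.loc["Vendor A", "organization"])  # 3
```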
Now none of this (well, except maybe for the pivot) is fancy stuff, and most of it (except for the formulas linking the summary and detailed worksheets) isn’t all that hard to do. But it turns out that Excel is an excellent tool for dealing with this data in certain cases.
Which brings me to YOUR research needs.
After all, Bredemarket doesn’t just write stuff.
Sometimes it researches stuff, especially in the core area of biometrics and identity.
I offer 29 years of experience in this area, and I draw on that experience to get answers to your questions.
Unlike the better-bounded projects that require only a single blog post or a single white paper, I quote research projects at an hourly rate or on retainer (where I’m embedded with you).
Why did I mention the “future implementation” of the UK Online Safety Act? Because the passage of the UK Online Safety Act is just the FIRST step in a long process. Ofcom still has to figure out how to implement the Act.
Ofcom started to work on this on November 9, but it’s going to take many months to finalize—I mean finalise things. This is the UK Online Safety Act, after all.
This is the first of four major consultations that Ofcom, as regulator of the new Online Safety Act, will publish as part of our work to establish the new regulations over the next 18 months.
It focuses on our proposals for how internet services that enable the sharing of user-generated content (‘user-to-user services’) and search services should approach their new duties relating to illegal content.
On November 9 Ofcom published a slew of summary and detailed documents. Here’s a brief excerpt from the overview.
Mae’r ddogfen hon yn rhoi crynodeb lefel uchel o bob pennod o’n hymgynghoriad ar niwed anghyfreithlon i helpu rhanddeiliaid i ddarllen a defnyddio ein dogfen ymgynghori. Mae manylion llawn ein cynigion a’r sail resymegol sylfaenol, yn ogystal â chwestiynau ymgynghori manwl, wedi’u nodi yn y ddogfen lawn. Dyma’r cyntaf o nifer o ymgyngoriadau y byddwn yn eu cyhoeddi o dan y Ddeddf Diogelwch Ar-lein. Mae ein strategaeth a’n map rheoleiddio llawn ar gael ar ein gwefan.
Oops, I seem to have quoted from the Welsh version. Maybe you’ll have better luck reading the English version.
This document sets out a high-level summary of each chapter of our illegal harms consultation to help stakeholders navigate and engage with our consultation document. The full detail of our proposals and the underlying rationale, as well as detailed consultation questions, are set out in the full document. This is the first of several consultations we will be publishing under the Online Safety Act. Our full regulatory roadmap and strategy is available on our website.
And if you need help telling your firm’s UK Online Safety Act story, Bredemarket can help. (Unless the final content needs to be in Welsh.) Click below!
My belief that everything on the Internet is true has been irrevocably shattered, all because of what an entertainment executive ordered in his spare time. But the Casey Bloys / “Kelly Shepherd” story is just a tiny bit of what is going on with synthetic identities. And X isn’t the only platform plagued by them, as my LinkedIn experience attests.
By the way, this blog post contains pictures of a lot of people. Casey Bloys is real. Some of the others, not so much.
Casey Bloys is the Chairman and CEO of HBO and Max Content. Bloys had to start a recent 2024 schedule presentation with an apology, according to Variety. After explaining how passionate he is about his programming, he went back in time a couple of years to a period that we all remember.
So when you think of that mindset, and then think of 2020 and 2021, I’m home, working from home and spending an unhealthy amount of scrolling through Twitter. And I come up with a very, very dumb idea to vent my frustration.
So why did Bloys have to apologize on Thursday? Because of an article that Rolling Stone published on Wednesday. The article led off with this juicy showbiz tidbit about Bloys’ idea for responding to a critic.
“Maybe a Twitter user should tweet that that’s a pretty blithe response to what soldiers legitimately go through on [the] battlefield,” he texted. “Do you have a secret handle? Couldn’t we say especially given that it’s D-Day to dismiss a soldier’s experience like that seems pretty disrespectful … this must be answered!”
(A note to my younger readers: Twitter used to be a popular social media service that no longer exists. It was replaced by X.)
Eventually Bloys found someone to create the “secret handle.” Sully Temori is now alleging wrongful termination by HBO (which is why we’re learning about these juicy tidbits, via court filings). But in 2021 he was an executive assistant who wanted to get ahead by pleasing his bosses.
Ms. Shepherd seems like a nice woman. A mom, a Texan, an herbalist and aromatherapist, and a vegan. (The cows love that last part.)
Most critically, Shepherd is a normal person, not one of those Hollywood showbiz folks. Although Shepherd, who never posted anything on her own, seems to have a distinct motivation to respond to critics of HBO shows. Take her first reply to a critic from (checks notes) Rolling Stone. (Two years later, Rolling Stone would gleefully report on this story. Watch out who you anger.)
Kelly’s other three replies were along the same lines.
All were short one-sentence blurbs.
Most were completely in lower case, because that’s how regular non-Hollywood folk tweet.
All were critical of those who were critical of HBO, accusing them of “shitting on a show about women,” getting their “panties in a bunch,” and being “busy virtue signaling.”
Hey, if I couldn’t eat hamburgers and my home was filled with weird herbs and aromas, I’d be a little mad too.
And then, a little over a week later, it was over, and Kelly Shepherd never tweeted again. Although Temori apparently performed other activities against HBO critics via other methods. Well, until he was terminated.
Did Kelly Shepherd open a LinkedIn account?
But as part of the plan to satisfy Casey Bloys’ angry whims, Kelly Shepherd acquired a social media account, which she could use as a possible proof of identity.
Even though we now know she doesn’t exist.
But X isn’t the only platform plagued with synthetic identities, and some synthetic identities can do much more than anger an entertainment reviewer.
Many of us on LinkedIn regularly receive InMails and connection requests (in my case, from profiles with pictures of beautiful women) whose senders say that LinkedIn constantly recommends us, tell us how impressive our profiles are, and want to contact us outside of the LinkedIn platform via text message or WhatsApp.
Now perhaps some of these messages are from real people, but I seriously doubt that so many of the employees at John Q Wine & Liquor Winery in New York happen to have the last name “Walter.” And the exact same job title.
Ms. Walter is a pretty busy freelance general manager / director / content partnerships manager.
As for her colleague Ms. Alice Walter, she has more experience (having started in 2018) but also has an extensive biography that begins:
The United States is a country with innovative challenges, and there is more room for development in the wine industry at John Q Wine & Liquor Winery. I am motivated and love to learn, and like to be exposed to more different cultures, and hope to develop more careers in my future life.
And you can check out Maria Walter’s profile if you’re so inclined. Or at least check out “her” picture.
Now none of the Walters women tried to contact me, but another “employee” (or maybe it was a “freelancer,” I forget) of this company tried to do so, which led my curious nature to discover yet another hive of fake LinkedIn profiles.
Sadly, one person from this company is a second-degree connection, which means that one of my connections accepted “her” connection request.
Synthetic identities are harmless…right?
Who knows what Karina, Alice, and Maria will do with their LinkedIn profiles?
Will they connect with other professionals?
Will they ask said professionals to move the conversation to SMS or WhatsApp, for whatever reason?
Will they apply for new jobs, using their impressive work history? A 98.8% customer satisfaction rate while managing 1,800 sub-partnerships is remarkable.
Will they apply for bank accounts…or loans?
The fraud possibilities from fake LinkedIn accounts are endless, and could be very costly for any company that falls for a synthetic identity. In fact, FiVerity reports that “in 2020, an estimated $20 billion was lost to SIF” (synthetic identity fraud). Which means that LinkedIn account holders and Partnerships Managers Karina, Alice, and Maria Walter could make a LOT of money.
Now banks and other financial institutions have safeguards to verify the identities of people who open accounts and apply for loans, because fraud reduction is critically important to financial institutions.
Social media companies? Identity is only “important” to them.
They don’t even care about uniqueness (unlike Worldcoin, which does), as evidenced by the fact that I have more than two X accounts (but none in which I portray a female Texas mom and vegan).
So if someone comes up to you on X or LinkedIn, remember that all may not be as it seems.
Having passed, eventually, through the UK’s two houses of Parliament, the bill received royal assent (October 26)….
[A]dded in (to the Act) is a highly divisive requirement for messaging platforms to scan users’ messages for illegal material, such as child sexual abuse material, which tech companies and privacy campaigners say is an unwarranted attack on encryption.
This not only opens up issues regarding encryption and privacy, but also specific identity technologies such as age verification and age estimation.
This post looks at three types of firms that are affected by the UK Online Safety Act, the stories they are telling, and the stories they may need to tell in the future. What is YOUR firm’s Online Safety Act-related story?
What three types of firms are affected by the UK Online Safety Act?
As of now I have been unable to locate a full version of the final Act, but presumably the provisions of this July 2023 version (PDF) have only undergone minor tweaks.
Among other things, this version discusses “User identity verification” in section 65, “Category 1 service” in section 96(10)(a), “United Kingdom user” in section 228(1), and a multitude of other terms that affect how companies will conduct business under the Act.
I am focusing on three different types of companies:
Technology services (such as Yoti) that provide identity verification, including but not limited to age verification and age estimation.
User-to-user services (such as WhatsApp) that provide encrypted messages.
User-to-user services (such as Wikipedia) that allow users (including United Kingdom users) to contribute content.
What types of stories will these firms have to tell, now that the Act is law?
For ALL services, the story will vary as Ofcom decides how to implement the Act, but we are already seeing the stories from identity verification services. Here is what Yoti stated after the Act became law:
We have a range of age assurance solutions which allow platforms to know the age of users, without collecting vast amounts of personal information. These include:
Age estimation: a user’s age is estimated from a live facial image. They do not need to use identity documents or share any personal information. As soon as their age is estimated, their image is deleted – protecting their privacy at all times. Facial age estimation is 99% accurate and works fairly across all skin tones and ages.
Digital ID app: a free app which allows users to verify their age and identity using a government-issued identity document. Once verified, users can use the app to share specific information – they could just share their age or an ‘over 18’ proof of age.
MailOnline has approached WhatsApp’s parent company Meta for comment now that the Bill has received Royal Assent, but the firm has so far refused to comment.
[T]o comply with the new law, the platform says it would be forced to weaken its security, which would not only undermine the privacy of WhatsApp messages in the UK but also for every user worldwide.
‘Ninety-eight per cent of our users are outside the UK. They do not want us to lower the security of the product, and just as a straightforward matter, it would be an odd choice for us to choose to lower the security of the product in a way that would affect those 98 per cent of users,’ Mr Cathcart has previously said.
Companies, from Big Tech down to smaller platforms and messaging apps, will need to comply with a long list of new requirements, starting with age verification for their users. (Wikipedia, the eighth-most-visited website in the UK, has said it won’t be able to comply with the rule because it violates the Wikimedia Foundation’s principles on collecting data about its users.)
All of these firms have shared their stories either before or after the Act became law, and those stories will change depending upon what Ofcom decides.
Money 20/20 is taking place in Las Vegas, Nevada, USA from Sunday, October 22 to Wednesday, October 25.
While I am not in Las Vegas, Bredemarket will monitor the goings-on and share relevant news on Facebook (Bredemarket Identity Firm Services group), Instagram (Bredemarket), LinkedIn (Bredemarket Identity Firm Services page), bredemarket.com, and elsewhere.
For example, when biometric companies want to justify the use of their technology, they have found that it is very effective to position biometrics as a way to combat sex trafficking.
Similarly, moves to rein in social media are positioned as a way to preserve mental health.
Now that’s a not-so-pretty picture, but it effectively speaks to emotions.
“If poor vulnerable children are exposed to addictive, uncontrolled social media, YOUR child may end up in a straitjacket!”
In New York state, four government officials have declared that the ONLY way to preserve the mental health of underage social media users is via two bills, one of which is the “New York Child Data Protection Act.”
But there is a challenge to enforce ALL of the bill’s provisions…and only one way to solve it. An imperfect way—age estimation.
Because they want to protect the poor vulnerable children.
And because the major U.S. social media companies are headquartered in California. But I digress.
So why do they say that children need protection?
Recent research has shown devastating mental health effects associated with children and young adults’ social media use, including increased rates of depression, anxiety, suicidal ideation, and self-harm. The advent of dangerous, viral ‘challenges’ being promoted through social media has further endangered children and young adults.
Of course one can also argue that social media is harmful to adults, but the New Yorkers aren’t going to go that far.
So they are just going to protect the poor vulnerable children.
This post isn’t going to deeply analyze one of the two bills the quartet has championed, but I will briefly mention that bill now.
The “Stop Addictive Feeds Exploitation (SAFE) for Kids Act” (S7694/A8148) defines “addictive feeds” as those that are arranged by a social media platform’s algorithm to maximize the platform’s use.
Those of us who are flat-out elderly vaguely recall that this replaced the former “chronological feed” in which the most recent content appeared first, and you had to scroll down to see that really cool post from two days ago. New York wants the chronological feed to be the default for social media users under 18.
The bill also proposes to limit under 18 access to social media without parental consent, especially between midnight and 6:00 am.
And those who love Illinois BIPA will be pleased to know that the bill allows parents (and their lawyers) to sue for damages.
Previous efforts to control underage use of social media have faced legal scrutiny, but since Attorney General James has sworn to uphold the U.S. Constitution, presumably she has thought about all this.
Enough about SAFE for Kids. Let’s look at the other bill.
The New York Child Data Protection Act
The second bill, and the one that concerns me, is the “New York Child Data Protection Act” (S7695/A8149). Here is how the quartet describes how this bill will protect the poor vulnerable children.
With few privacy protections in place for minors online, children are vulnerable to having their location and other personal data tracked and shared with third parties. To protect children’s privacy, the New York Child Data Protection Act will prohibit all online sites from collecting, using, sharing, or selling personal data of anyone under the age of 18 for the purposes of advertising, unless they receive informed consent or unless doing so is strictly necessary for the purpose of the website. For users under 13, this informed consent must come from a parent.
And again, this bill provides a BIPA-like mechanism for parents or guardians (and their lawyers) to sue for damages.
But let’s dig into the details. With apologies to the New York State Assembly, I’m going to dig into the Senate version of the bill (S7695). Bear in mind that this bill could be amended after I post this, and some of the portions that I cite could change.
This only applies to natural persons. So the bots are safe, regardless of age.
Speaking of age, the age of 18 isn’t the only age referenced in the bill. Here’s a part of the “privacy protection by default” section:
§ 899-FF. PRIVACY PROTECTION BY DEFAULT.
1. EXCEPT AS PROVIDED FOR IN SUBDIVISION SIX OF THIS SECTION AND SECTION EIGHT HUNDRED NINETY-NINE-JJ OF THIS ARTICLE, AN OPERATOR SHALL NOT PROCESS, OR ALLOW A THIRD PARTY TO PROCESS, THE PERSONAL DATA OF A COVERED USER COLLECTED THROUGH THE USE OF A WEBSITE, ONLINE SERVICE, ONLINE APPLICATION, MOBILE APPLICATION, OR CONNECTED DEVICE UNLESS AND TO THE EXTENT:
(A) THE COVERED USER IS TWELVE YEARS OF AGE OR YOUNGER AND PROCESSING IS PERMITTED UNDER 15 U.S.C. § 6502 AND ITS IMPLEMENTING REGULATIONS; OR
(B) THE COVERED USER IS THIRTEEN YEARS OF AGE OR OLDER AND PROCESSING IS STRICTLY NECESSARY FOR AN ACTIVITY SET FORTH IN SUBDIVISION TWO OF THIS SECTION, OR INFORMED CONSENT HAS BEEN OBTAINED AS SET FORTH IN SUBDIVISION THREE OF THIS SECTION.
So a lot of this bill depends upon whether a person is over or under the age of eighteen, or over or under the age of thirteen.
And that’s a problem.
How old are you?
To enforce the bill, an operator needs to know whether a person is under 13, between 13 and 17, or 18 or older. And I don’t think the quartet will be satisfied with the way that alcohol websites determine whether someone is 21 years old.
Attorney General James and the others would presumably prefer that the social media companies verify ages with a government-issued ID such as a state driver’s license, a state identification card, or a national passport. This is how most entities verify ages when they have to satisfy legal requirements.
For some people, even some minors, this is not that much of a problem. Anyone who wants to drive in New York State must have a driver’s license, and you have to be at least 16 years old to get a driver’s license. Admittedly some people in the city never bother to get a driver’s license, but at some point these people will probably get a state ID card.
However, there are going to be some 17 year olds who don’t have a driver’s license, government ID, or passport.
And some 16 year olds.
And once you look at younger people—15 year olds, 14 year olds, 13 year olds, 12 year olds—the chances of them having a government-issued identification document are much less.
What are these people supposed to do? Provide a birth certificate? And how will the social media companies know if the birth certificate is legitimate?
But there’s another way to determine ages—age estimation.
How old are you, part 2
As long-time readers of the Bredemarket blog know, I have struggled with the issue of age verification, especially for people who do not have driver’s licenses or other government identification. Age estimation in the absence of a government ID is still an inexact science, as even Yoti has stated.
Our technology is accurate for 6 to 12 year olds, with a mean absolute error (MAE) of 1.3 years, and of 1.4 years for 13 to 17 year olds. These are the two age ranges regulators focus upon to ensure that under 13s and 18s do not have access to age restricted goods and services.
So if a minor does not have a government ID, and the social media firm has to use age estimation to determine a minor’s age for purposes of the New York Child Data Protection Act, the following two scenarios are possible:
An 11 year old may be incorrectly allowed to give informed consent for purposes of the Act.
A 14 year old may be incorrectly denied the ability to give informed consent for purposes of the Act.
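The two failure scenarios above can be illustrated with a trivial sketch. This is not any vendor’s algorithm; it simply shows how an estimator whose error is on the order of the reported ~1.3-year MAE can misclassify users near the age-13 consent boundary:

```python
# Illustrative sketch (not a real estimator) of how age estimation
# error near the age-13 boundary produces the two scenarios above.
def can_self_consent(estimated_age: float, threshold: int = 13) -> bool:
    """Under the bill, 13-to-17-year-olds can give informed consent
    themselves; under-13s need parental consent (per 15 U.S.C. § 6502).
    The decision here uses the ESTIMATED age, not the true age."""
    return estimated_age >= threshold

# An 11-year-old over-estimated by ~2 years is wrongly allowed to consent:
print(can_self_consent(11 + 2.1))   # True  (incorrect: the user is 11)

# A 14-year-old under-estimated by ~2 years is wrongly denied:
print(can_self_consent(14 - 2.1))   # False (incorrect: the user is 14)
```

The MAE is an average, so many estimates land closer to the true age than this; but for users whose true age sits within a year or two of a statutory threshold, some misclassification is unavoidable.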
Is age estimation “good enough for government work”?
It’s the end of an era for a once-critical pandemic document: The ubiquitous white COVID-19 vaccination cards are being phased out.
Now that COVID-19 vaccines are not being distributed by the federal government, the U.S. Centers for Disease Control and Prevention has stopped printing new cards.
This doesn’t affect the validity of current cards. It just means that if you get a COVID vaccine, or any future vaccine, and you need to prove you received it, you will have to contact the medical facility that administered it.
Or, in selected states (because in the U.S. health is generally a state and not a federal responsibility), you can access the state’s digital health information. For example, the state of Washington offers MyIRmobile, as do the states of Arizona, Louisiana, Maryland, Mississippi, North Dakota, and West Virginia.
Sign up for MyIR Mobile by going to myirmobile.com and follow the registration instructions. Your registration information will be used to match your records with the state immunization registry. You will be sent a verification code on your phone to finalize the process. Once registration is complete, you’ll be able to view your immunization records, Certificate of Immunization Status (CIS) and access your COVID-19 vaccination certificate.