Personally Protected: PII vs. PHI

(Part of the biometric product marketing expert series)

Before you can fully understand the difference between personally identifiable information (PII) and protected health information (PHI), you need to understand the difference between biometrics and…biometrics. (You know sometimes words have two meanings.)

Designed by Google Gemini.

The definitions of biometrics

To address the difference between biometrics and biometrics, I’ll refer to something I wrote over two years ago, in late 2021. In that post, I quoted two paragraphs from the International Biometric Society that illustrated the difference.

Since the IBS has altered these paragraphs in the intervening years, I will quote from the latest version.

The terms “Biometrics” and “Biometry” have been used since early in the 20th century to refer to the field of development of statistical and mathematical methods applicable to data analysis problems in the biological sciences.

Statistical methods for the analysis of data from agricultural field experiments to compare the yields of different varieties of wheat, for the analysis of data from human clinical trials evaluating the relative effectiveness of competing therapies for disease, or for the analysis of data from environmental studies on the effects of air or water pollution on the appearance of human disease in a region or country are all examples of problems that would fall under the umbrella of “Biometrics” as the term has been historically used….

The term “Biometrics” has also been used to refer to the field of technology devoted to the identification of individuals using biological traits, such as those based on retinal or iris scanning, fingerprints, or face recognition. Neither the journal “Biometrics” nor the International Biometric Society is engaged in research, marketing, or reporting related to this technology. Likewise, the editors and staff of the journal are not knowledgeable in this area. 

From https://www.biometricsociety.org/about/what-is-biometry.

In brief, what I call “broad biometrics” refers to analyzing biological sciences data, ranging from crop yields to heart rates. Contrast this with what I call “narrow biometrics,” which (usually) refers only to human beings, and only to those characteristics that identify human beings, such as the ridges on a fingerprint.

The definition of “personally identifiable information” (PII)

Now let’s examine an issue related to narrow biometrics (and other things): personally identifiable information, or PII. (Some also render it as “personal identifiable information.”) I’ll use a definition provided by the U.S. National Institute of Standards and Technology, or NIST.

Information that can be used to distinguish or trace an individual’s identity, either alone or when combined with other information that is linked or linkable to a specific individual.

From https://csrc.nist.gov/glossary/term/PII.

Note the key words “alone or when combined.” The ten digits “909 867 5309” are not sufficient on their own to identify an individual, but they can identify someone when combined with information from another source, such as a telephone book.

Yes, a telephone book. Deal with it.

Photograph by Tomasz Sienicki (own work, © 2010; image intentionally scaled down), CC BY 3.0, https://commons.wikimedia.org/w/index.php?curid=10330603
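If you like to think in code, here’s a minimal sketch of that linkage. The directory contents are invented, of course; the point is that a simple join turns a meaningless number into an identity.

```python
# A minimal sketch of data linkage: a phone number that means nothing on its
# own becomes identifying when joined against an external directory.
# The directory contents below are invented for illustration.

directory = {
    "909 867 5309": ("Jenny Q. Example", "123 Main St, Ontario, CA"),
    "909 555 0100": ("Pat Placeholder", "456 Elm St, Upland, CA"),
}

def reidentify(phone_number: str) -> str:
    """Link a bare phone number to an identity, if the directory has one."""
    match = directory.get(phone_number)
    if match is None:
        return f"{phone_number}: still anonymous (nothing linkable)"
    name, address = match
    return f"{phone_number}: linked to {name} at {address}"

# The number alone is not PII; the number plus the directory is.
print(reidentify("909 867 5309"))
```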

What types of information can be combined to identify a person? The U.S. Department of Defense’s Privacy, Civil Liberties, and Freedom of Information Directorate provides multifarious examples of PII, including:

  • Social Security Number.
  • Passport number.
  • Driver’s license number.
  • Taxpayer identification number.
  • Patient identification number.
  • Financial account number.
  • Credit card number.
  • Personal address.
  • Personal telephone number.
  • Photographic image of a face.
  • X-rays.
  • Fingerprints.
  • Retina scan.
  • Voice signature.
  • Facial geometry.
  • Date of birth.
  • Place of birth.
  • Race.
  • Religion.
  • Geographical indicators.
  • Employment information.
  • Medical information.
  • Education information.
  • Financial information.

Now you may ask yourself, “How can I identify someone by a non-unique birthdate? A lot of people were born on the same day!”

But the combination of information is powerful, as researchers discovered in a 2015 study cited by the New York Times.

In the study, titled “Unique in the Shopping Mall: On the Reidentifiability of Credit Card Metadata,” a group of data scientists analyzed credit card transactions made by 1.1 million people in 10,000 stores over a three-month period. The data set contained details including the date of each transaction, amount charged and name of the store.

Although the information had been “anonymized” by removing personal details like names and account numbers, the uniqueness of people’s behavior made it easy to single them out.

In fact, knowing just four random pieces of information was enough to reidentify 90 percent of the shoppers as unique individuals and to uncover their records, researchers calculated. And that uniqueness of behavior — or “unicity,” as the researchers termed it — combined with publicly available information, like Instagram or Twitter posts, could make it possible to reidentify people’s records by name.

From https://archive.nytimes.com/bits.blogs.nytimes.com/2015/01/29/with-a-few-bits-of-data-researchers-identify-anonymous-people/.

So much for anonymization. And privacy.
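To make “unicity” concrete, here’s a toy sketch with invented data (three records instead of the study’s 1.1 million people). The attacker knows a few (store, day) points about a target and checks how many anonymized records fit:

```python
# A toy illustration of "unicity" with invented data: how often do a few
# known (store, day) points pin down exactly one "anonymized" record?

records = {
    "record_1": {("cafe", 1), ("grocer", 2), ("bookshop", 4), ("cafe", 5)},
    "record_2": {("cafe", 1), ("grocer", 3), ("cinema", 4), ("bakery", 5)},
    "record_3": {("gym", 1), ("grocer", 2), ("cinema", 4), ("cafe", 6)},
}

def matching_records(known_points: set) -> list:
    """Return the anonymized records consistent with everything we know."""
    return [rid for rid, points in records.items() if known_points <= points]

# An attacker learns two points from, say, a public Instagram post.
known = {("bookshop", 4), ("cafe", 5)}
matches = matching_records(known)
print(matches)  # ['record_1']: two points were enough to single someone out
```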

Now biometrics form only part of the multifarious list of data cited above, but clearly biometric data can be combined with other data to identify someone. An easy example: take security camera footage of the face of a person walking into a store, and combine it with the same face from a database of driver’s license holders. In some jurisdictions, some entities are legally permitted to combine this data, while others are legally prohibited from doing so. (A few do it anyway. But I digress.)

Because narrow biometric data used for identification, such as fingerprint ridges, can be combined with other data to personally identify an individual, organizations that process biometric data must implement strict safeguards to protect that data. If personally identifiable information (PII) is not adequately guarded, people could be subject to fraud and other harms.
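What might such a safeguard look like? Here’s one hedged example, encrypting stored templates at rest so a leaked database file doesn’t directly expose PII. It uses the real Python cryptography package; the template contents are invented.

```python
# One example safeguard: encrypting stored templates at rest, so a leaked
# database file does not directly expose PII. This uses the real
# `cryptography` package (pip install cryptography); the template is invented.
from cryptography.fernet import Fernet

key = Fernet.generate_key()     # in practice, kept in a key management service
vault = Fernet(key)

fingerprint_template = b"minutiae:(12,34,110),(56,78,45),(90,12,300)"
stored = vault.encrypt(fingerprint_template)   # this is what hits the disk
recovered = vault.decrypt(stored)              # only possible with the key

assert recovered == fingerprint_template
print("ciphertext prefix:", stored[:16])
```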

The definition of “protected health information” (PHI)

In this case, I’ll refer to information published by the U.S. Department of Health and Human Services.

Protected Health Information. The Privacy Rule protects all “individually identifiable health information” held or transmitted by a covered entity or its business associate, in any form or media, whether electronic, paper, or oral. The Privacy Rule calls this information “protected health information (PHI).”

“Individually identifiable health information” is information, including demographic data, that relates to:

  • the individual’s past, present or future physical or mental health or condition,
  • the provision of health care to the individual, or
  • the past, present, or future payment for the provision of health care to the individual,

and that identifies the individual or for which there is a reasonable basis to believe it can be used to identify the individual. Individually identifiable health information includes many common identifiers (e.g., name, address, birth date, Social Security Number).

The Privacy Rule excludes from protected health information employment records that a covered entity maintains in its capacity as an employer and education and certain other records subject to, or defined in, the Family Educational Rights and Privacy Act, 20 U.S.C. §1232g.

From https://www.hhs.gov/hipaa/for-professionals/privacy/laws-regulations/index.html

Now there’s obviously an overlap between personally identifiable information (PII) and protected health information (PHI). For example, names, dates of birth, and Social Security Numbers fall into both categories. But I want to highlight two things that are explicitly mentioned as PHI but aren’t usually cited as PII.

  • Physical or mental health data. This could include information that a medical professional captures from a patient, including biometric (broad biometric) information such as heart rate or blood pressure.
  • Health care provided to an individual. This not only includes written information such as prescriptions, but oral information (“take two aspirin and call my chatbot in the morning”). Yes, chatbot. Deal with it. Dr. Marcus Welby and his staff retired a long time ago.
Robert Young (“Marcus Welby”) and Jane Wyatt (“Margaret Anderson” on a different show). By ABC Television, uploaded by We hope at en.wikipedia and transferred by SreeBot. Public Domain, https://commons.wikimedia.org/w/index.php?curid=16472486

Because broad biometric data used for analysis, such as heart rates, can be combined with other data to personally identify an individual, organizations that process biometric data must implement strict safeguards to protect that data. If protected health information (PHI) is not adequately guarded, people could be subject to fraud and other harms.

Simple, isn’t it?

Actually, the parallels between identity/biometrics and healthcare have fascinated me for decades, since the dedicated hardware to capture identity/biometric data is often similar to the dedicated hardware to capture health data. And now that we’re moving away from dedicated hardware to multi-purpose hardware such as smartphones, the parallels are even more fascinating.

Designed by Google Gemini.

Clean Data is the New Oxygen, and Dirty Data is the New Carbon Monoxide

I have three questions for you, but don’t sweat it; I’m giving you the answers.

  1. How long can you survive without pizza? Years (although your existence will be hellish).
  2. OK, how long can you survive without water? From 3 days to 7 days.
  3. OK, how long can you survive without oxygen? Only 10 minutes.

This post asks how long a 21st century firm can survive without data, and what can happen if the data is “dirty.”

How does Mika survive?

Have you heard of Mika? Here’s her LinkedIn profile.

From Mika’s LinkedIn profile at https://www.linkedin.com/in/mika-ai-ceo/

Yes, you already know that I don’t like LinkedIn profiles that don’t belong to real people. But this one is a bit different.

Mika is the Chief Executive Officer of Dictador, a Polish-Colombian spirits firm, and is responsible for “data insight, strategic provocation and DAO community liaison.” Regarding data insight, Mika described her approach in an interview with Inside Edition:

My decision making process relies on extensive data analysis and aligning with the company’s strategic objectives. It’s devoid of personal bias ensuring unbiased and strategic choices that prioritize the organization’s best interests.

From the transcript to https://www.youtube.com/watch?v=8BQEyQ2-awc

Mika was brought to my attention by accomplished product marketer/artist Danuta (Dana) Debogorska. (She’s appeared in the Bredemarket blog before, though not by name.) Dana is also Polish (but not Colombian) and clearly takes pride in the artificial intelligence accomplishments of this Polish-headquartered company. You can read her LinkedIn post to see her thoughts, one of which was as follows:

Data is the new oxygen, and we all know that we need clean data to innovate and sustain business models.

From Dana Debogorska’s LinkedIn post.

Dana succinctly made two points:

  1. Data is the new oxygen.
  2. We need clean data.

Point one: data is the new oxygen

There’s a reference to oxygen again, but it’s certainly appropriate. Just as people cannot survive without oxygen, Generative AI cannot survive without data.

But the need for data predates AI models. From 2017:

Reliance Industries Chairman Mukesh Ambani said India is poised to grow…but to make that happen the country’s telecoms and IT industry would need to play a foundational role and create the necessary digital infrastructure.

Calling data the “oxygen” of the digital economy, Ambani said the telecom industry had the urgent task of empowering 1.3 billion Indians with the tools needed to flourish in the digital marketplace.

From India Times.

And we can go back centuries in history and find examples when a lack of data led to catastrophe. Like the time in 1776 when the Hessians didn’t know that George Washington and his troops had crossed the Delaware.

Point two: we need clean data

Of course, the presence or absence of data alone is not enough. As Debogorska notes, we don’t just need any data; we need CLEAN data, without error and without bias. Dirty data is like carbon monoxide, and as you know carbon monoxide is harmful…well, most of the time.

That’s been the challenge not only with artificial intelligence, but with ALL aspects of data gathering.
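For the code-minded, here’s a minimal sketch of what an automated “dirty data” audit might catch. The field names, records, and plausibility thresholds are all invented, and bias detection is a much bigger topic than these simple error checks.

```python
# A minimal sketch of a "dirty data" audit; field names, records, and the
# plausible-age range are all invented for illustration.

records = [
    {"name": "A. Example", "age": 34, "hired": True},
    {"name": "A. Example", "age": 34, "hired": True},        # duplicate
    {"name": "B. Placeholder", "age": None, "hired": False}, # missing value
    {"name": "C. Sample", "age": 208, "hired": False},       # implausible
]

def audit(rows: list) -> list:
    """Flag duplicates, missing fields, and out-of-range values."""
    seen, problems = set(), []
    for i, row in enumerate(rows):
        key = tuple(sorted(row.items(), key=lambda kv: kv[0]))
        if key in seen:
            problems.append(f"row {i}: duplicate record")
        seen.add(key)
        if row["age"] is None:
            problems.append(f"row {i}: missing age")
        elif not 0 <= row["age"] <= 120:
            problems.append(f"row {i}: implausible age {row['age']}")
    return problems

for problem in audit(records):
    print(problem)
```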

The all-male board of directors of a fertilizer company in 1960. Fair use. From the New York Times.

Consider three cases: Amazon’s AI recruitment tool, Enron’s financial reporting, and facial recognition algorithms. In each case, someone (Amazon, Enron’s shareholders, or NIST) asked questions about the cleanliness of the data, and then set out to answer those questions.

  • In the case of Amazon’s recruitment tool and the company Enron, the answers caused Amazon to abandon the tool and Enron to abandon its existence.
  • Despite the entreaties of so-called privacy advocates (who prefer the privacy nightmare of physical driver’s licenses to the privacy-preserving features of mobile driver’s licenses), we have not abandoned facial recognition, but we’re definitely monitoring it in a statistical (not an anecdotal) sense.

The cleanliness of the data will continue to be the challenge as we apply artificial intelligence to new applications.

Clean room of a semiconductor manufacturing facility. Uploaded by Duk; from NASA, http://www.grc.nasa.gov/WWW/ictd/content/labmicrofab.html (original) and https://images.nasa.gov/details/GRC-1998-C-01261 (high resolution). Public Domain, https://commons.wikimedia.org/w/index.php?curid=60825

Point three: if you’re not saying things, then you’re not selling

(Yes, this is the surprise point.)

Dictador is talking about Mika.

Are you talking about your product, or are you keeping mum about it?

I have more to…um…say about this. Follow this link.

Time for the FIRST Iteration of Your Firm’s UK Online Safety Act Story

By Adrian Pingstone – Transferred from en.wikipedia, Public Domain, https://commons.wikimedia.org/w/index.php?curid=112727

A couple of weeks ago, I asked this question:

Is your firm affected by the UK Online Safety Act, and the future implementation of the Act by Ofcom?

From https://bredemarket.com/2023/10/30/uk-online-safety-act-story/

Why did I mention the “future implementation” of the UK Online Safety Act? Because the passage of the UK Online Safety Act is just the FIRST step in a long process. Ofcom still has to figure out how to implement the Act.

Ofcom started to work on this on November 9, but it’s going to take many months to finalize—I mean finalise things. This is the UK Online Safety Act, after all.

This is the first of four major consultations that Ofcom, as regulator of the new Online Safety Act, will publish as part of our work to establish the new regulations over the next 18 months.

It focuses on our proposals for how internet services that enable the sharing of user-generated content (‘user-to-user services’) and search services should approach their new duties relating to illegal content.

From https://www.ofcom.org.uk/consultations-and-statements/category-1/protecting-people-from-illegal-content-online

On November 9 Ofcom published a slew of summary and detailed documents. Here’s a brief excerpt from the overview.

Mae’r ddogfen hon yn rhoi crynodeb lefel uchel o bob pennod o’n hymgynghoriad ar niwed anghyfreithlon i helpu rhanddeiliaid i ddarllen a defnyddio ein dogfen ymgynghori. Mae manylion llawn ein cynigion a’r sail resymegol sylfaenol, yn ogystal â chwestiynau ymgynghori manwl, wedi’u nodi yn y ddogfen lawn. Dyma’r cyntaf o nifer o ymgyngoriadau y byddwn yn eu cyhoeddi o dan y Ddeddf Diogelwch Ar-lein. Mae ein strategaeth a’n map rheoleiddio llawn ar gael ar ein gwefan.

From https://www.ofcom.org.uk/__data/assets/pdf_file/0021/271416/CYM-illegal-harms-consultation-chapter-summaries.pdf

Oops, I seem to have quoted from the Welsh version. Maybe you’ll have better luck reading the English version.

This document sets out a high-level summary of each chapter of our illegal harms consultation to help stakeholders navigate and engage with our consultation document. The full detail of our proposals and the underlying rationale, as well as detailed consultation questions, are set out in the full document. This is the first of several consultations we will be publishing under the Online Safety Act. Our full regulatory roadmap and strategy is available on our website.

From https://www.ofcom.org.uk/__data/assets/pdf_file/0030/270948/illegal-harms-consultation-chapter-summaries.pdf

If you want to peruse everything, go to https://www.ofcom.org.uk/consultations-and-statements/category-1/protecting-people-from-illegal-content-online.

And if you need help telling your firm’s UK Online Safety Act story, Bredemarket can help. (Unless the final content needs to be in Welsh.) Click below!

What Is Your Firm’s UK Online Safety Act Story?

It’s time to revisit my August post entitled “Can There Be Too Much Encryption and Age Verification Regulation?” because the United Kingdom’s Online Safety Bill is now the Online Safety ACT.

Having passed, eventually, through the UK’s two houses of Parliament, the bill received royal assent (October 26)….

[A]dded in (to the Act) is a highly divisive requirement for messaging platforms to scan users’ messages for illegal material, such as child sexual abuse material, which tech companies and privacy campaigners say is an unwarranted attack on encryption.

From Wired.
By Adrian Pingstone – Transferred from en.wikipedia, Public Domain, https://commons.wikimedia.org/w/index.php?curid=112727

This not only opens up issues regarding encryption and privacy, but also specific identity technologies such as age verification and age estimation.

This post looks at three types of firms that are affected by the UK Online Safety Act, the stories they are telling, and the stories they may need to tell in the future. What is YOUR firm’s Online Safety Act-related story?

What three types of firms are affected by the UK Online Safety Act?

As of now I have been unable to locate a full version of the final Act, but presumably the provisions from this July 2023 version (PDF) have only undergone minor tweaks.

Among other things, this version discusses “User identity verification” in section 65, “Category 1 service” in section 96(10)(a), “United Kingdom user” in section 228(1), and a multitude of other terms that affect how companies will conduct business under the Act.

I am focusing on three different types of companies:

  • Technology services (such as Yoti) that provide identity verification, including but not limited to age verification and age estimation.
  • User-to-user services (such as WhatsApp) that provide encrypted messages.
  • User-to-user services (such as Wikipedia) that allow users (including United Kingdom users) to contribute content.

What types of stories will these firms have to tell, now that the Act is law?

Stories from identity verification services

From Yoti.

For ALL services, the story will vary as Ofcom decides how to implement the Act, but we are already seeing the stories from identity verification services. Here is what Yoti stated after the Act became law:

We have a range of age assurance solutions which allow platforms to know the age of users, without collecting vast amounts of personal information. These include:

  • Age estimation: a user’s age is estimated from a live facial image. They do not need to use identity documents or share any personal information. As soon as their age is estimated, their image is deleted – protecting their privacy at all times. Facial age estimation is 99% accurate and works fairly across all skin tones and ages.
  • Digital ID app: a free app which allows users to verify their age and identity using a government-issued identity document. Once verified, users can use the app to share specific information – they could just share their age or an ‘over 18’ proof of age.
From Yoti.
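As a side note for the technically inclined, here’s a toy sketch of the selective-disclosure idea behind that “over 18” proof of age. Everything here is invented for illustration; real digital ID apps use signed, verifiable credentials rather than plain dictionaries.

```python
# A toy sketch of selective disclosure: the relying party learns a derived
# "over 18" claim, not the birth date itself.
from datetime import date

credential = {"name": "D. Example", "birth_date": date(1990, 6, 15)}

def over_18_proof(cred: dict, today: date) -> dict:
    """Derive the minimal claim a website needs from the full credential."""
    bd = cred["birth_date"]
    age = today.year - bd.year - ((today.month, today.day) < (bd.month, bd.day))
    return {"over_18": age >= 18}   # the name and birth date stay private

print(over_18_proof(credential, date(2023, 11, 15)))  # {'over_18': True}
```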

Stories from encrypted message services

From WhatsApp.

Not surprisingly, message encryption services are telling a different story.

MailOnline has approached WhatsApp’s parent company Meta for comment now that the Bill has received Royal Assent, but the firm has so far refused to comment.

Will Cathcart, Meta’s head of WhatsApp, said earlier this year that the Online Safety Act was the most concerning piece of legislation being discussed in the western world….

[T]o comply with the new law, the platform says it would be forced to weaken its security, which would not only undermine the privacy of WhatsApp messages in the UK but also for every user worldwide. 

‘Ninety-eight per cent of our users are outside the UK. They do not want us to lower the security of the product, and just as a straightforward matter, it would be an odd choice for us to choose to lower the security of the product in a way that would affect those 98 per cent of users,’ Mr Cathcart has previously said.

From Daily Mail.

Stories from services with contributed content

From Wikipedia.

And contributed content services are also telling their own story.

Companies, from Big Tech down to smaller platforms and messaging apps, will need to comply with a long list of new requirements, starting with age verification for their users. (Wikipedia, the eighth-most-visited website in the UK, has said it won’t be able to comply with the rule because it violates the Wikimedia Foundation’s principles on collecting data about its users.)

From Wired.

What is YOUR firm’s story?

All of these firms have shared their stories either before or after the Act became law, and those stories will change depending upon what Ofcom decides.

But what about YOUR firm?

Is your firm affected by the UK Online Safety Act, and the future implementation of the Act by Ofcom?

Do you have a story that you need to tell to achieve your firm’s goals?

Do you need an extra, experienced hand to help out?

Learn how Bredemarket can create content that drives results for your firm.

Click the image below.

Can There Be Too Much Encryption and Age Verification Regulation?

Designed by Freepik.

Approximately 2,700 years ago, the Greek poet Hesiod is recorded as saying “moderation is best in all things.” This applies to government regulations, including encryption and age verification regulations. As the United Kingdom’s House of Lords works through drafts of its Online Safety Bill, interested parties are seeking to influence the level of regulation.

The July 2023 draft of the Online Safety Bill

On July 25, 2023, Richard Allan of Regulate.Tech provided his assessment of the (then) latest draft of the Online Safety Bill that is going through the House of Lords.

In Allan’s assessment, he wondered whether the mandated encryption and age verification regulations would apply to all services, or just critical services.

Allan considered a number of services, but I’m just going to home in on two of them: WhatsApp and Wikipedia.

The Online Safety Bill and WhatsApp

WhatsApp is owned by a large American company called Meta, which causes two problems for regulators in the United Kingdom (and in Europe):

  • Meta is a large company.
  • Meta is an American company.

WhatsApp itself causes another problem for UK regulators:

  • WhatsApp encrypts messages.

Because of these three truths, UK regulators are not necessarily inclined to play nice with WhatsApp, which may affect whether WhatsApp will be required to comply with the Online Safety Bill’s regulations.

Allan explains the issue:

One of the powers the Bill gives to OFCOM (the UK Office of Communications) is the ability to order services to deploy specific technologies to detect terrorist and child sexual exploitation and abuse content….

But there may be cases where a provider believes that the technology it is being ordered to deploy would break essential functionality of its service and so would prefer to leave the UK rather than accept compliance with the order as a condition of remaining….

If OFCOM does issue this kind of order then we should expect to see some encrypted services leave the UK market, potentially including very popular ones like WhatsApp and iMessage.

From https://www.regulate.tech/online-safety-bill-some-futures-25th-july-2023/

And this isn’t just speculation on Allan’s part. Will Cathcart has been complaining about the provisions of the draft bill for months, especially since it appears that WhatsApp encryption would need to be “dumbed down” for everybody to comply with regulations in the United Kingdom.

Speaking during a UK visit in which he will meet legislators to discuss the government’s flagship internet regulation, Will Cathcart, Meta’s head of WhatsApp, described the bill as the most concerning piece of legislation currently being discussed in the western world.

He said: “It’s a remarkable thing to think about. There isn’t a way to change it in just one part of the world. Some countries have chosen to block it: that’s the reality of shipping a secure product. We’ve recently been blocked in Iran, for example. But we’ve never seen a liberal democracy do that.

“The reality is, our users all around the world want security,” said Cathcart. “Ninety-eight per cent of our users are outside the UK. They do not want us to lower the security of the product, and just as a straightforward matter, it would be an odd choice for us to choose to lower the security of the product in a way that would affect those 98% of users.”

From https://www.theguardian.com/technology/2023/mar/09/whatsapp-end-to-end-encryption-online-safety-bill

In passing, the March Guardian article noted that WhatsApp requires UK users to be 16 years old. This doesn’t appear to be an issue for Meta, but could be an issue for another very popular online service.

The Online Safety Bill and Wikipedia

So how does the Online Safety Bill affect Wikipedia?

Wikipedia article about the Online Safety Bill as of August 1, 2023. https://en.wikipedia.org/wiki/Online_Safety_Bill

It depends on how the Online Safety Bill is implemented via the rulemaking process.

As in other countries, the true effects of legislation aren’t apparent until the government writes the rules that implement the legislation. It’s possible that the rulemaking will carve out an exemption allowing Wikipedia to NOT enforce age verification. Or it’s possible that Wikipedia will be mandated to enforce age verification for its writers.

Let’s return to Richard Allan.

If they do not (carve out exemptions) then there could be real challenges for the continued operation of some valuable services in the UK given what we know about the requirements in the Bill and the operating principles of services like Wikipedia.

For example, it would be entirely inconsistent with Wikipedia’s privacy principles to start collecting additional data about the age of their users and yet this is what will be expected from regulated services more generally.

From https://www.regulate.tech/online-safety-bill-some-futures-25th-july-2023/

Left unsaid is the same issue that affects encryption: age verification for Wikipedia may be required in the United Kingdom, but may not be required for other countries.

It’s no surprise that Jimmy Wales of Wikipedia has a number of problems with the Online Safety Bill. Here’s just one of them.

(Wales) used the example of Wikipedia, in which none of its 700 staff or contractors plays a role in content or in moderation.

Instead, the organisation relies on its global community to make democratic decisions on content moderation, and have contentious discussions in public.

By contrast, the “feudal” approach sees major platforms make decisions centrally, erratically, inconsistently, often using automation, and in secret.

By regulating all social media under the assumption that it’s all exactly like Facebook and Twitter, Wales said that authorities would impose rules on upstart competitors that force them into that same model.

From https://www.itpro.com/business-strategy/startups/370036/jimmy-wales-online-safety-bill-could-devastate-small-businesses

And the potential regulations that could be imposed on that “global community” would be anathema to Wikipedia.

Wikipedia will not comply with any age checks required under the Online Safety Bill, its foundation says.

Rebecca MacKinnon, of the Wikimedia Foundation, which supports the website, says it would “violate our commitment to collect minimal data about readers and contributors”.

From https://www.bbc.com/news/technology-65388255

Regulation vs. Privacy

One common thread between these two cases is that implementation of the regulations results in a privacy threat to the affected individuals.

  • For WhatsApp users, the privacy threat is obvious. If WhatsApp is forced to fully or partially disable encryption, or is forced to use an encryption scheme that the UK Government could break, then the privacy of every message (including messages between people outside the UK) would be threatened.
  • For Wikipedia users, anyone contributing to the site would need to undergo substantial identity verification so that the UK Government would know the ages of Wikipedia contributors.

This is yet another example of different government agencies working at cross purposes with each other, as the “catch the pornographers” bureaucrats battle with the “preserve privacy” advocates.

Meta, Wikipedia, and other firms would like the legislation to explicitly carve out exemptions for their firms and services. Opponents say that legislative carve outs aren’t necessary, because no one would ever want to regulate Wikipedia.

Yeah, and the U.S. Social Security Number isn’t an identification number either. (Not true.)

A second “biometrics is evil” post (Amazon One)

This is a follow-up to something I wrote a couple of weeks ago. I concluded that earlier post by noting that when you say that something needs to be replaced because it is bad, you need to evaluate the replacement to see if it is any better…or worse.

First, the recap

Before moving forward, let me briefly recap my points from the earlier post. If you like, you can read the entire post here.

  • Amazon is incentivizing customers ($10) to sign up for its Amazon One palm print program.
  • Amazon is not the first company to use biometrics to speed retail purchases. Pay By Touch and the University of Maryland Dining Hall have already done this, as has every single store that lets you use Apple Pay, Google Pay, or Samsung Pay.
  • Amazon One is not only being connected in the public eye to unrelated services such as Amazon Rekognition, and to unrelated studies such as Gender Shades (which dealt with classification, not recognition), but has also been accused of “asking people to sell their bodies.” Yet companies that offer similar services are not being demonized in the same way.
  • If you don’t use Amazon One to pay for your purchases, that doesn’t necessarily mean that you are protected from surveillance. I’ll dive into that in this post.

Now that we’re caught up, let’s look at the latest player to enter the Amazon One controversy.

Yes, U.S. Senators can be bipartisan

If you listen to the “opinion” news services, you get the feeling that the United States Senate has devolved into two warring factions that can’t get anything done. But Senators have always worked together (see Edward Kennedy and Dan Quayle), and they continue to work together today.

Specifically, three Senators are working together to ask Amazon a few questions: Bill Cassidy, M.D. (R-LA), Amy Klobuchar (D-MN), and Jon Ossoff (D-GA).

And naturally they issued a press release about it.

Now arguments can be made about whether Congressional press releases and hearings merely constitute grandstanding, or whether they are serious attempts to better the nation. Of course, anything that I oppose is obviously grandstanding, and anything I support is obviously a serious effort.

But for the moment let’s assume that the Senators have serious concerns about the privacy of American consumers, and that the nation demands answers to these questions from Amazon.

Here are the Senators’ questions, from the press release:

  1. Does Amazon have plans to expand Amazon One to additional Whole Foods, Amazon Go, and other Amazon store locations, and if so, on what timetable? 
  2. How many third-party customers has Amazon sold (or licensed) Amazon One to? What privacy protections are in place for those third parties and their customers?
  3. How many users have signed up for Amazon One? 
  4. Please describe all the ways you use data collected through Amazon One, including from third-party customers. Do you plan to use data collected through Amazon One devices to personalize advertisements, offers, or product recommendations to users? 
  5. Is Amazon One user data, including the Amazon One ID, ever paired with biometric data from facial recognition systems? 
  6. What information do you provide to consumers about how their data is being used? How will you ensure users understand and consent to Amazon One’s data collection, storage, and use practices when they link their Amazon One and Amazon account information?
  7. What actions have you taken to ensure the security of user data collected through Amazon One?

So when will we investigate other privacy-threatening technologies?

In a sense, the work of these three Senators should be commended, because if Amazon One is not implemented properly, serious privacy breaches could happen which could adversely impact American citizens. And this is the reason why many states and municipalities have moved to restrict the use of biometrics by private businesses.

And we know that Amazon is evil, because Slate said so back in January 2020.

The online bookseller has evolved into a giant of retail, resale, meal delivery, video streaming, cloud computing, fancy produce, original entertainment, cheap human labor, smart home tech, surveillance tech, and surveillance tech for smart homes….The company’s “last mile” shipping operation has led to burnout, injuries, and deaths, all connected to a warehouse operation that, while paying a decent minimum wage, is so efficient in part because it treats its human workers like robots who sometimes get bathroom breaks.

But why stop with Amazon? After all, Slate’s list included 29 other companies (while Amazon tops the list, other “top”-ranked companies include Facebook, Alphabet, Palantir Technologies, and Uber), to say nothing of entire industries that are capable of massive privacy violations.

Privacy breaches are not just tied to biometric systems, but can be tied to any system that stores private data. Restricting or banning biometric systems won’t solve anything, since all of these abuses could potentially occur on other systems.

  • When will the Senators ask these same questions to Apple, Google (part of the aforementioned Alphabet), and Samsung to find out when these companies will expand their “Pay” services? They won’t even have to ask all seven questions, because we already know the answer to question 5.
  • Oh, and while we’re at it, what about Mastercard, Visa, American Express, Discover, and similar credit card services that are often tied to information from our bank accounts? How do these firms personalize their offerings? Who can buy all that data?
  • And while we’re looking at credit cards, what about the debit cards issued by the banks, which are even more vulnerable to abuse? Let’s have the banks publicly reveal all the ways in which they protect user data.
  • You know, you have to watch out for those money orders also. How often do money order issuers ask consumers to show their government ID? What happens to that data?
  • Oh, and what about those gift cards that stores issue? What happens to the location and purchase data that is collected for those gift cards?
  • When people use cash to pay for goods, what is the resolution of the surveillance cameras that are trained on the cash registers? Can those surveillance cameras read the serial numbers on the bills that are exchanged? What assurances can the stores give that they are not tracking those serial numbers as they flow through the economy?

If you think that it’s silly to shut down every single payment system that could result in a privacy violation…you’re right.

Obviously if Amazon is breaking federal law, it should be prosecuted accordingly.

And if Amazon is breaking state law (such as the Illinois Biometric Information Privacy Act, or BIPA), then…well, that’s not the Senators’ business, that’s the business of class action lawyers.

But now the ball is in Amazon’s court, and Amazon will either provide thousands of pages of documents, a few short answers, a response indicating that the Senators are asking for confidential information on future product plans, or (unlikely with Amazon, but possible with other companies) a reply stating that the Senators can go pound sand.

Either way, the “Amazon is evil” campaign will continue.

Today’s “biometrics is evil” post (Amazon One)

I can’t recall who recorded it, but there’s a radio commercial heard in Southern California (and probably nationwide) that intentionally ridicules people who willingly give up their own personally identifiable information (PII) for short-term gain. In the commercial, both the husband and the wife willingly give away all sorts of PII, including I believe their birth certificates.

While voluntary surrender of PII happens all the time (when was the last time you put your business card in a drawing bowl at a restaurant?), people REALLY freak out when the information that is provided is biometric in nature. But are the non-biometric alternatives any better?

TechCrunch, Amazon One, and Ten Dollars

TechCrunch recently posted “Amazon will pay you $10 in credit for your palm print biometrics.”

If you think that the article details an insanely great way to make some easy money from Amazon, then you haven’t been paying attention to the media these last few years.

The article begins with a question:

How much is your palm print worth?

The article then describes how Amazon’s brick-and-mortar stores in several states have incorporated a new palm print scanner technology called “Amazon One.” This technology reads both friction ridge and vein information from a shopper’s palms. The palm data is then associated with a pre-filed credit card, allowing the shopper to simply wave a palm to buy the items in the shopping cart.
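In rough outline, the flow looks something like the hedged sketch below. This is NOT Amazon’s actual design; the feature representation, similarity measure, matching threshold, and token names are all invented.

```python
# A hedged sketch of the general palm-payment flow described above:
# enroll a palm template, link it to a payment token, match at checkout.
from dataclasses import dataclass

@dataclass
class Enrollment:
    template: tuple    # stand-in for friction ridge + vein features
    card_token: str    # tokenized reference to the pre-filed credit card

enrollments = []

def enroll(palm_features: tuple, card_token: str) -> None:
    """Store a palm template linked to a payment token."""
    enrollments.append(Enrollment(palm_features, card_token))

def similarity(a: tuple, b: tuple) -> float:
    """Toy similarity score: the fraction of feature values that agree."""
    return sum(x == y for x, y in zip(a, b)) / max(len(a), 1)

def pay(palm_features: tuple, amount: float, threshold: float = 0.9) -> str:
    """Charge the best-matching enrollee's card if the match is strong enough."""
    best = max(enrollments, key=lambda e: similarity(palm_features, e.template))
    if similarity(palm_features, best.template) >= threshold:
        return f"charged ${amount:.2f} via {best.card_token}"
    return "no confident match; please pay another way"

enroll((1, 2, 3, 4, 5, 6, 7, 8, 9, 0), "card_token_123")
print(pay((1, 2, 3, 4, 5, 6, 7, 8, 9, 0), 42.00))
```

The interesting design decision in any such system is the threshold: set it too low and you risk charging the wrong card; set it too high and shoppers wave their palms in vain.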

There is nothing new under the sun

Amazon One is the latest take on processes that have been implemented several times before. I’ll cite three examples.

Pay By Touch. The first one that comes to my mind is Pay By Touch. While the management of the company was extremely sketchy, the technology (provided by Cogent, now part of Thales) was not. In many ways the business idea was ahead of its time, and it had to deal with challenging environmental conditions: the fingerprint readers used for purchases were positioned near the entrances/exits to grocery stores, which could get really cold in the winter. Couple this with the elderly population that used the devices, and it was sometimes difficult to read the fingers themselves. Yet, this relatively ancient implementation is somewhat similar to what Amazon is doing today.

University of Maryland Dining Hall. The second example occurred to me because it came from my former employer (MorphoTrak, then part of Safran and now part of IDEMIA), and was featured at a company user conference for which I coordinated speakers. There’s a video of this solution, but sadly it is not public. I did find an article describing the solution:

With the new system students will no longer need a UMD ID card to access their own meals…

Instead of pulling out a card, the students just wave their hand through a MorphoWave device. And this allows the students to pay for their meals QUICKLY. Good thing when you’re hungry.

This Pay and That Pay. But the most common example that everyone uses is Apple Pay, Google Pay, Samsung Pay, or whatever “pay” system is supported on your smartphone. Again, you don’t have to pull out a credit card or ID card. You just have to look at your phone or swipe your finger on the phone, and payment happens.

Amazon One is the downfall of civilization

I don’t know if TechCrunch editorialized against Pay By Touch or [insert phone vendor here] Pay, and it probably never heard of the MorphoWave implementation at the University of Maryland. But Amazon clearly makes TechCrunch queasy.

While the idea of contactlessly scanning your palm print to pay for goods during a pandemic might seem like a novel idea, it’s one to be met with caution and skepticism given Amazon’s past efforts in developing biometric technology. Amazon’s controversial facial recognition technology, which it historically sold to police and law enforcement, was the subject of lawsuits that allege the company violated state laws that bar the use of personal biometric data without permission.

Oh well, at least TechCrunch didn’t say that Amazon was racist. (If you haven’t already read it, please read the Security Industry Association’s “What Science Really Says About Facial Recognition Accuracy and Bias Concerns.” Unless you don’t like science.)

OK, back to Amazon and Amazon One. TechCrunch also quotes Albert Fox Cahn of the Surveillance Technology Oversight Project.

People Leaving the Cities, photo art by Zbigniew Libera, imagines a dystopian future in which people have to leave dying metropolises. By Zbigniew Libera – https://artmuseum.pl/pl/kolekcja/praca/libera-zbigniew-wyjscie-ludzi-z-miast, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=66055122.

“The dystopian future of science fiction is now. It’s horrifying that Amazon is asking people to sell their bodies, but it’s even worse that people are doing it for such a low price.”

“Sell their bodies.” Isn’t it even MORE dystopian when people “give their bodies away for free” when they sign up for Apple Pay, Google Pay, or Samsung Pay? While the Surveillance Technology Oversight Project (acronym STOP) expresses concern about digital wallets, there is a significant lack of horror in its description of them.

Digital wallets and contactless payment systems like smart chips have been around for years. The introduction of Apple Pay, Amazon Pay, and Google Pay have all contributed to the e-commerce movement, as have fast payment tools like Venmo and online budgeting applications. In response to COVID-19, the public is increasingly looking for ways to reduce or eliminate physical contact. With so many options already available, contactless payments will inevitably gain momentum….

Without strong federal laws regulating the use of our data, we’re left to rely on private companies that have consistently failed to protect our information. To prevent long-term surveillance, we need to limit the data collected and shared with the government to only what is needed. Any sort of monitoring must be secure, transparent, proportionate, temporary, and must allow for a consumer to find out about or be alerted to implications for their data. If we address these challenges now, at a time when we will be generating more and more electronic payment records, we can ensure our privacy is safeguarded.

So STOP isn’t calling for the complete elimination of Amazon Pay. But apparently it wants to eliminate Amazon One.

Is a world without Amazon One a world with less surveillance?

Whenever you propose to eliminate something, you need to look at the replacement and see if it is any better.

In 1998, Fox fired Bill Russell as the manager of the Los Angeles Dodgers. He had a win-loss percentage of .538. His replacement, Glenn Hoffman, lasted less than a season and had a percentage of .534. Hoffman’s replacement, true baseball man Davey Johnson, compiled a percentage of .503 over the next two seasons before he was fired. Should have stuck with Russell.

Anyone who decides (despite the science) that facial recognition is racist is going to have to rely on other methods to identify criminals, such as witness identification. Witness identification has documented inaccuracies.

And if you think that elimination of Amazon One from Amazon’s brick-and-mortar stores will lead to a privacy nirvana, think again. If you don’t use your palm to pay for things, you’re going to have to use a credit card, and that data will certainly be scanned by the FBI and the CIA and the BBC, B. B. King, and Doris Day. (And Matt Busby, of course.) And even if you use cash, the only way that you’ll preserve any semblance of your privacy is to pay anonymously and NOT tie the transaction to your Amazon account.

And if you’re going to do that, you might as well skip Whole Foods and go straight to Dollar General. Or maybe not, since Dollar General has its own app. And no one calls Dollar General dystopian. Wait, they do: “They tend to cluster, like scavengers feasting on the carcasses of the dead.”

I seem to have strayed from the original point of this post.

But let me sum up. It appears that biometrics is evil, Amazon is evil, and Amazon biometrics are Double Secret Evil.

Maryland will soon deal with privacy stakeholders (and they CAN’T care about the GYRO method)

Just last week, I mentioned that the state of Utah appointed the Department of Government Operations’ first privacy officer. Now Maryland is getting into the act, and it’s worth taking a semi-deep dive into what Maryland is doing, and how it affects (or doesn’t affect) public safety.

By François Jouffroy – Christophe MOUSTIER (1994), Attribution, https://commons.wikimedia.org/w/index.php?curid=727606

According to Government Technology, the state of Maryland has created two new state information technology positions, one of which is the State Chief Privacy Officer. Because government, I will refer to this as the SCPO throughout the remainder of this post. If you are referring to this new position in verbal conversation, you can refer to the “Maryland skip-oh.” Or the “crab skip-oh.”

From https://teeherivar.com/product/maryland-is-for-crabs/. Fair use. Buy it if you like it. Virginians understand the origins of the phrase.

Governor Hogan announced the creation of the SCPO position via an Executive Order, a PDF of which can be found here.

Let me call out a few provisions in this executive order.

  • A.2. defines “personally identifiable information,” consisting of a person’s name in conjunction with other information, including but not limited to “[b]iometric information including an individual’s physiological or biological characteristics, including an individual’s deoxyribonucleic acid.” (Yes, that’s DNA.) Oh, and driver’s license numbers also.
  • At the same time, A.2 excludes “information collected, processed, or shared for the purposes of…public safety.”
  • But on the other hand, A.5 lists specific “state units” covered by certain provisions of the law, including both The Department of Public Safety and Correctional Services and the Department of State Police.
  • The state units are listed because every one of them will need to appoint “an agency privacy official” (C.2) who works with the SCPO.

There are other provisions, including the need for agency justification for the collection of personally identifiable information (PII), and the need to provide individuals with access to their collected PII along with the ability to correct or amend it.

But for law enforcement agencies in Maryland, the “public safety” exemption pretty much limits the applicability of THIS executive order (although other laws to correct public safety data would still apply).

Therefore, if some Maryland sheriff’s department releases an automated fingerprint identification system Request for Proposal (RFP) next month, you probably WON’T see a privacy advocate on the evaluation committee.

But what about an RFP released in 2022? Or an RFP released in a different state?

Be sure to keep up with relevant privacy legislation BEFORE it affects you.

You will soon deal with privacy stakeholders (and they won’t care about the GYRO method)

(Part of the biometric product marketing expert series)

I’ve written about the various stakeholders at government agencies who have an interest in biometrics procurements, not only in this post, but also in a post that is available to Bredemarket Premium subscribers. One of the stakeholders that appeared on my list was this one.

The privacy advocate who needs to ensure that the biometric data complies with state and national privacy laws.

Broken Liberty: Istanbul Archaeology Museum. By © Nevit Dilmen, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=1115936

If you haven’t encountered a privacy advocate in your marketing or proposal efforts…you will.

Utah Gov. Spencer Cox has appointed Christopher Bramwell as the Department of Government Operations’ first privacy officer….As privacy officer, Bramwell will be responsible for surveying and compiling information about state agencies’ privacy practices to discern which poses a risk to individual privacy. He will also work with the personal privacy oversight commission and state privacy officer to provide government privacy practice reports and recommendations.

Obviously this affects companies that work with government agencies on projects such as digital identity platforms. After all, mobile driver’s licenses contain a wealth of personally identifiable information (PII), and a privacy advocate will naturally be concerned about who has access to this PII.

But what about law enforcement? Do subjects in law enforcement databases have privacy rights that need to be respected? After all, law enforcement agencies legally share PII all the time.

However, there are limitations on what law enforcement agencies can share.

  • First off, remember that not everyone in a law enforcement database is an arrested individual. For example, agencies may maintain exclusion databases of police officers and crime victims. When biometric evidence is found at a crime scene, agencies may compare the evidence against the exclusion database to ensure that the evidence does not belong to someone who is NOT a suspect. (This can become an issue in DNA mixtures, by the way. A toy sketch of exclusion screening appears after this list.)
  • Second off, even arrested individuals have rights that need to be respected. While arrested individuals lose some privacy rights (for example, prisoners’ cells can be searched and prisoners’ mail can be opened), a privacy advocate should ensure that any system does not deny prisoners protections to which they are entitled.
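Here’s the toy exclusion-screening sketch promised above, with invented identifiers:

```python
# A toy sketch of exclusion screening with invented identifiers: remove
# crime-scene samples that belong to known non-suspects (officers, victims)
# before searching the remainder against suspect databases.
exclusion_db = {"officer_17_profile", "victim_profile"}
crime_scene_samples = {
    "officer_17_profile", "unknown_profile_1", "unknown_profile_2",
}

# Only what survives the exclusion check is worth a suspect search.
to_search = crime_scene_samples - exclusion_db
print(sorted(to_search))  # ['unknown_profile_1', 'unknown_profile_2']
```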

So expect to see a raised concern about privacy rights when dealing with law enforcement agencies. This concern will vary from jurisdiction to jurisdiction based upon the privacy (and biometric) laws that apply in each jurisdiction, but vendors that do business with government agencies need to stay abreast of privacy issues.

A little more about stakeholders, or actors, or whoever

Whether you’re talking about stakeholders in a government agency, stakeholders at a vendor, or external stakeholders, it’s important to identify all of the relevant stakeholders.

Or whatever you call them. I’ve been using the term “stakeholders” to refer to these people in this post and the prior posts, but there are other common terms that could be used. People who construct use cases refer to “actors.” Marketers will refer to “personas.”

Whatever term you use, it’s important to distinguish between these stakeholders/actors/personas/whatever. They have different motivations and need to be addressed in different ways.

When talking with Bredemarket clients, I often need to distinguish between the various stakeholders, because this can influence my messaging significantly. For example, if a key decision-maker is a privacy officer, and I’m communicating about a fingerprint identification system, I’m not going to waste a lot of time talking about the GYRO method.

My time wouldn’t be wasted if I were talking to a forensic examiner, but a privacy advocate just wouldn’t care. They would just sit in silence, internally musing about the chances that a single latent examiner’s “green” determination could somehow expose a private citizen to fraud or doxxing or something.

This is why I work with my clients to make sure that the messaging is appropriate for the stakeholder…and when necessary, the client and I jointly develop multiple messages for multiple stakeholders.

If you need such messaging help, please contact Bredemarket for advice and assistance. I can collaborate with you to ensure that the right messages go to the right stakeholders.

The business TikTok post that I couldn’t share with you

I had a really good post planned for today.

While I’m not a big creator of video content, I can certainly appreciate good content, and I planned to share some excellent video content with you.

There is a mobile car washing service in my hometown of Ontario, California. Now videos of mobile car washing are more exciting than videos of…well, videos of writers writing, but not by much. So if you want to grab someone’s attention, you have to put entertaining content into a mobile car washing video.

(No, not that.)

So this local mobile car washing service posted a video on TikTok that began with the service washing…a kid-size vehicle.

Completely cute and entertaining, so I decided to share it from the TikTok app to one of my Facebook groups, and then decided that I wanted to write a blog post about it.

So I went to share the video from the TikTok web page to this blog, and was told the video was not available. I investigated further, and found this on the account page.

Yes, you read that right – a COMPANY’S TikTok account is PRIVATE.

I went back to my TikTok app, navigated to the account, and confirmed that the video was still there (for those of us who were logged in and following the account) and that hundreds of people had seen it.

But I can’t share it with you, nor can I share any of the company’s other videos, which are restricted to “Followers only.”

But trust me, it was a really cute video.