Offboarding: What Happens When You Stop Doing Business with Bredemarket?

Consulting firms (and other firms) make a big deal about the amazing processes we use when we onboard clients. (In Bredemarket’s case, I ask questions.)

But often we don’t talk about what we do when we OFFBOARD clients. And that’s equally important.

So let’s go inside the wildebeest habitat and see how Bredemarket handles client offboarding.

“Hey guys, a client jumped ship.” By Danijel Mihajlovic – https://thenextcrossing.com/wildebeest-migration-kenya, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=96024366.

This is the end, beautiful friend

Yes, offboarding happens.

In 2023 I signed a contract with a client in which I would bill them at an hourly rate. This was a short-term contract, but it was subsequently renewed.

Recently the client chose not to renew the contract for another extended period.

woodleywonderworks, CC BY 2.0, https://www.flickr.com/photos/wwworks/2248069430.

On the surface, that would appear to be the end of it. I had completed all projects assigned to me, and I had been paid for all of them.

So what could go wrong?

(Don’t) Tell all the people

Plenty could go wrong.

During the course of my engagement with the client, I had enjoyed access to:

  • Confidential information FROM the client.
  • Confidential information that I sent TO the client as part of the work-for-hire arrangement.
  • Client systems. (In this particular instance I only had access to a single system with non-confidential information, but other clients have granted me access to storage systems and even software.)

And all of this data was sitting in MY systems, including three storage systems, one CRM system, and one email system.

By an unnamed photographer for the U.S. Office of War Information, via the Library of Congress website (converted from TIFF to JPG and border cropped before upload to Wikimedia Commons), Public Domain, https://commons.wikimedia.org/w/index.php?curid=8989847.

Now of course I had signed a non-disclosure agreement with the client, so I legally could not use any of that data even if I wanted to do so.

But the data was still sitting there, and I had to do something about it.

Take It As It Comes

But I already knew what I had to do, because I had done this before.

Long-time readers of the Bredemarket blog will recall an announcement that I made on April 22, 2022, in which I stated that I would no longer “accept client work for solutions that identify individuals using (a) friction ridges (including fingerprints and palm prints) and/or (b) faces.” (I also stopped accepting work for solutions involving driver’s licenses and passports.)

I didn’t say WHY I was refusing this work; I saved that tidbit for a mailing to my mailing list.

So, why am I making these changes at Bredemarket?

I have accepted a full-time position as a Senior Product Marketing Manager with an identity company. (I’ll post the details later on my personal LinkedIn account…)…

If you are a current Bredemarket customer with a friction ridge/face identification solution, then I already sent a communication to you with details on wrapping up our business. Thank you for your support over the last 21 months. I’ll probably see you at the conferences that my employer-to-be attends. 

That communication to then-current Bredemarket customers detailed, among other things, how I was going to deal with the confidential information I held from them.

So I dusted off the pertinent parts of that communication and repurposed it to send to my 2023-2024 client. I’ve reproduced non-redacted portions of that communication below. Although I don’t explicitly name my information storage systems in this public post, as I noted above these include three storage systems, one CRM system, and one email system.

Bredemarket will follow these procedures to protect your confidential information.

  1. Bredemarket will delete confidential information provided to Bredemarket by your company by (REDACTED). This includes information presently stored on (REDACTED).
  2. Bredemarket will delete draft and final documents created by Bredemarket that include company confidential information by (REDACTED). This includes information presently stored on (REDACTED).
  3. If your company has provided Bredemarket with access to your company OneDrive, Outlook, or Sites, Bredemarket will delete the ability to access these company properties by (REDACTED). This includes deletion from my laptop computer, my mobile phone, and my web browser. Bredemarket further recommends that you revoke Bredemarket’s access to these systems.
  4. If your company has provided Bredemarket with access to all or part of your company Google Drive, Bredemarket recommends that you revoke Bredemarket’s access to this system.

I will inform you when this process is complete.

So I executed the offboarding process for my former client, ensuring that the client’s confidential information remains protected.

Love Me Two Times

Of course, I hope the client comes back to Bredemarket someday, in some capacity.

But perhaps you can take advantage of the opportunity: since your competitor no longer contracts with Bredemarket, YOU can.

To learn WHY you should work with Bredemarket, click the image below and read about my CPA (Content-Proposal-Analysis) expertise.

Bredemarket’s “CPA.”

Postscript

No, I’m not going to post videos of the relevant Doors songs here. Jim’s Oedipal complex isn’t business-friendly.

Were You Affected by the National Public Data Breach?

(Part of the biometric product marketing expert series)

Fiona Jackson of TechRepublic shared this two days ago.

In August, a hacker dumped 2.7 billion data records, including social security numbers, on a dark web forum, in one of the biggest breaches in history.

The data may have been stolen from background-checking service National Public Data at least four months ago. Each record has a person’s name, mailing address, and SSN, but some also contain other sensitive information, such as names of relatives…

Note that 2.7 billion data records does not equal 2.7 billion people, since a person may have multiple data records.
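The record-vs-person distinction is easy to see with a toy example (fabricated records, not actual breach data): deduplicating on a stable identifier collapses multiple records into one person.

```python
# Toy illustration (fabricated records, NOT breach data): multiple
# records can describe the same person, so record count != person count.
records = [
    {"name": "J. Doe", "address": "1 Main St", "ssn": "000-00-0001"},
    {"name": "J. Doe", "address": "2 Oak Ave", "ssn": "000-00-0001"},  # same person, new address
    {"name": "R. Roe", "address": "3 Elm Rd",  "ssn": "000-00-0002"},
]

# Deduplicate on the stable identifier to count distinct people.
people = {r["ssn"] for r in records}

print(len(records), "records,", len(people), "people")  # 3 records, 2 people
```

The same logic, at scale, is why 2.7 billion leaked records describe considerably fewer than 2.7 billion individuals.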

Was your data leaked?

Rich DeMuro posted a link to see if your data was leaked. If you want to check, go to https://npd.pentester.com/, enter the requested information (you will NOT be asked for your Social Security Number), and the site will display a masked list of the matching information in the breach.

One lesson from the National Public Data breach should have been obvious long ago: anyone who relies on a Social Security Number as a form of positive identification is a fool.

Biometric Product Marketers, BIPA Remains Unaltered

(Part of the biometric product marketing expert series)

You may remember the May hoopla regarding amendments to Illinois’ Biometric Information Privacy Act (BIPA). These amendments do not eliminate the long-standing law, but they do lessen the damages it imposes on offending companies.

Back on May 29, Fox Rothschild explained the timeline:

The General Assembly is expected to send the bill to Illinois Governor JB Pritzker within 30 days. Gov. Pritzker will then have 60 days to sign it into law. It will be immediately effective.

According to the Illinois General Assembly website, the Senate sent the bill to the Governor on June 14.

While the BIPA amendment has passed the Illinois House and Senate and was sent to the Governor, there is no indication that he has signed the bill into law within the 60-day timeframe.

So BIPA 1.0 is still in effect.

As Photomyne found out:

A proposed class action claims Photomyne, the developer of several photo-editing apps, has violated an Illinois privacy law by collecting, storing and using residents’ facial scans without authorization….

The lawsuit contends that the app developer has breached the BIPA’s clear requirements by failing to notify Illinois users of its biometric data collection practices and inform them how long and for what purpose the information will be stored and used.

In addition, the suit claims the company has unlawfully failed to establish public guidelines that detail its data retention and destruction policies.

From https://www.instagram.com/p/C7ZWA9NxUur/.

Marketing Identity Product Privacy

When marketing digital identity products secured by biometrics, emphasize that they are MORE secure and more private than their physical counterparts.

When you hand your physical driver’s license over to a sleazy bartender, they find out EVERYTHING about you, including your name, your birthdate, your driver’s license number, and even where you live.

When you use a digital mobile driver’s license, bartenders ONLY learn what they NEED to know—that you are over 21.

Image source: GET Group NA, https://apps.apple.com/us/app/get-mobile-verify/id1501552424
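That bartender scenario can be sketched in a few lines of code. This is a simplified model with a hypothetical license record, not the actual data structures of any real mDL standard (such as ISO/IEC 18013-5): the physical card hands over every field, while the mobile license answers only the one predicate the verifier needs.

```python
from datetime import date

# Hypothetical license record -- what a physical card discloses in full.
license_record = {
    "name": "Pat Example",
    "date_of_birth": date(1990, 6, 15),
    "license_number": "D1234567",
    "address": "1 Main St, Anytown",
}

def physical_check(record):
    """Handing over the physical card: the bartender sees everything."""
    return record

def mobile_check(record, today):
    """Selective disclosure: answer only the 'over 21' question."""
    dob = record["date_of_birth"]
    twenty_first_birthday = date(dob.year + 21, dob.month, dob.day)
    return {"age_over_21": today >= twenty_first_birthday}

print(mobile_check(license_record, date(2024, 8, 1)))  # {'age_over_21': True}
```

The bartender learns a single boolean; the name, address, and license number never leave the phone.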

What is Your Biometric Firm’s BIPA Product Marketing Story?

(Part of the biometric product marketing expert series)

If your biometric firm conducts business in the United States, then your biometric firm probably conducts business in Illinois.

(With some exceptions.)

Your firm and your customers are impacted by Illinois’ Biometric Information Privacy Act, or BIPA.

Including requirements for consumer consent for use of biometrics.

And heavy fines (currently VERY heavy fines) if you don’t obtain that consent.

What is your firm telling your customers about BIPA?

Bredemarket has mentioned BIPA several times in the Bredemarket blog.

But what has YOUR firm said about BIPA?

And if your firm has said nothing about BIPA, why not?

Perhaps the biometric product marketing expert can ensure that your product is marketed properly in Illinois.

Contact Bredemarket before it’s too late.

From https://www.instagram.com/p/C7ZWA9NxUur/.

If You’re Using ChatGPT Commercially, Are You Violating Reddit’s Terms?

How to give a privacy advocate a coronary? Have OpenAI and Reddit reach an agreement.

Keeping the internet open is crucial, and part of being open means Reddit content needs to be accessible to those fostering human learning and researching ways to build community, belonging, and empowerment online. Reddit is a uniquely large and vibrant community that has long been an important space for conversation on the internet. Additionally, using LLMs, ML, and AI allow Reddit to improve the user experience for everyone.

In line with this, Reddit and OpenAI today announced a partnership to benefit both the Reddit and OpenAI user communities…

Perhaps some members of the Reddit user community may not feel the benefits when OpenAI is training on their data.

While people who joined Reddit presumably understood that anyone could view their data, they never imagined that a third party would then process that data for its own purposes.

Oh, but wait a minute. Reddit clarifies things:

This partnership…does not change Reddit’s Data API Terms or Developer Terms, which state content accessed through Reddit’s Data API cannot be used for commercial purposes without Reddit’s approval. API access remains free for non-commercial usage under our published threshold.

And, of course, OpenAI’s “primary fiduciary duty is to humanity,” so of course it is NOT using the Reddit data for commercial purposes.

And EVERY ONE of the people who accesses Reddit data through OpenAI’s offerings would NEVER use the data for commercial…

…um…

…we’ll get back to you on that.

BIPA Remains a Four-Letter Word

(Part of the biometric product marketing expert series)

If you’re a biometric product marketing expert, or even if you’re not, you’re presumably analyzing the possible effects to your identity/biometric product from the proposed changes to the Biometric Information Privacy Act (BIPA).

From ilga.gov. Link.

As of May 16, the Illinois General Assembly (House and Senate) passed a bill (SB2979) to amend BIPA. It awaits the Governor’s signature.

What is the amendment? Other than defining an “electronic signature,” the main purpose of the bill is to limit damages under BIPA. The new text regarding the “Right of action” codifies the concept of a “single violation.”

From ilga.gov. Link.
(b) For purposes of subsection (b) of Section 15, a private entity that, in more than one instance, collects, captures, purchases, receives through trade, or otherwise obtains the same biometric identifier or biometric information from the same person using the same method of collection in violation of subsection (b) of Section 15 has committed a single violation of subsection (b) of Section 15 for which the aggrieved person is entitled to, at most, one recovery under this Section.

(c) For purposes of subsection (d) of Section 15, a private entity that, in more than one instance, discloses, rediscloses, or otherwise disseminates the same biometric identifier or biometric information from the same person to the same recipient using the same method of collection in violation of subsection (d) of Section 15 has committed a single violation of subsection (d) of Section 15 for which the aggrieved person is entitled to, at most, one recovery under this Section regardless of the number of times the private entity disclosed, redisclosed, or otherwise disseminated the same biometric identifier or biometric information of the same person to the same recipient.
From ilga.gov. Link. Emphasis mine.

So does this mean that Google Nest Cam’s “familiar face alert” feature will now be available in Illinois?

Probably not. As Doug “BIPAbuzz” OGorden has noted:

(T)he amended law DOES NOT CHANGE “Private Right of Action” so BIPA LIVES!

Companies who violate the strict requirements of BIPA aren’t off the hook. It’s just that the trial lawyers—whoops, I mean the affected consumers make a lot less money.

LMM vs. LMM (Acronyms Are Funner)

Do you recall my October 2023 post “LLM vs. LMM (Acronyms Are Fun)“?

It discussed both large language models and large multimodal models. In this case “multimodal” is used in a way that I normally DON’T use it, namely to refer to the different modes in which humans interact (text, images, sounds, videos). Of course, I gravitated to a discussion in which an image of a person’s face was one of the modes.

Document processing with GPT-4V. The model’s mistake is highlighted in red. From https://huyenchip.com/2023/10/10/multimodal.html?utm_source=tldrai.

In this post I will look at LMMs…and I will also look at LMMs. There’s a difference. And a ton of power when LMMs and LMMs work together for the common good.

Revisiting the Large Multimodal Model (LMM)

Since I wrote that piece last year, large multimodal models continue to be discussed. Harry Guinness just wrote a piece for Zapier in March.

When Google announced its Gemini series of AI models, it made a big deal about how they were “natively multimodal.” Instead of having different modules tacked on to give the appearance of multimodality, they were apparently trained from the start to be able to handle text, images, audio, video, and more. 

Other AI models are starting to function in a TRULY multimodal way, rather than using separate models to handle the different modes.

So now that we know that LMMs are large multimodal models, we need to…

…um, wait a minute…

Introducing the Large Medical Model (LMM)

It turns out that the health people have a DIFFERENT definition of the acronym LMM. Rather than using it to refer to a large multimodal model, they refer to a large MEDICAL model.

As you can probably guess, the GenHealth.AI model is trained for medical purposes.

Our first of a kind Large Medical Model or LMM for short is a type of machine learning model that is specifically designed for healthcare and medical purposes. It is trained on a large dataset of medical records, claims, and other healthcare information including ICD, CPT, RxNorm, Claim Approvals/Denials, price and cost information, etc.

I don’t think I’m going out on a limb if I state that medical records cannot be classified as “natural” language. So the GenHealth.AI model is trained specifically on those attributes found in medical records, and not on people hemming and hawing and asking what a Pekingese dog looks like.

But there is still more work to do.

What about the LMM that is also an LMM?

Unless I’m missing something, the Large Medical Model described above is designed to work with only one mode of data, textual data.

But what if the Large Medical Model were also a Large Multimodal Model?

By Piotr Bodzek, MD – Uploaded from http://www.ginbytom.slam.katowice.pl/25.html with author permission., CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=372117
  • Rather than converting a medical professional’s voice notes to text, the LMM-LMM would work directly with the voice data. This could lead to increased accuracy: compare the tone of voice of an offhand comment “This doesn’t look good” with the tone of voice of a shocked comment “This doesn’t look good.” They appear the same when reduced to text format, but the original voice data conveys significant differences.
  • Rather than just using the textual codes associated with an X-ray, the LMM-LMM would read the X-ray itself. If the image model has adequate training, it will again pick up subtleties in the X-ray data that are not present when the data is reduced to a single medical code.
  • In short, the LMM-LMM (large medical model-large multimodal model) would accept ALL the medical outputs: text, voice, image, video, biometric readings, and everything else. And the LMM-LMM would deal with all of it natively, increasing the speed and accuracy of healthcare by removing the need to convert everything to textual codes.

A tall order, but imagine how healthcare would be revolutionized if you didn’t have to convert everything into text format to get things done. And if you could use the actual image, video, audio, or other data rather than someone’s textual summation of it.
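No such LMM-LMM exists today, but the idea in the bullets above can be sketched as a dispatcher (entirely hypothetical handler stubs standing in for real voice, image, and text models): route each input to a modality-native model instead of flattening everything to text first.

```python
# Hypothetical sketch: route each input to a modality-native handler
# instead of transcribing everything to text codes first. The handlers
# below are stubs standing in for real voice/image/text models.
def handle_text(payload):
    return {"modality": "text", "finding": f"parsed note: {payload}"}

def handle_voice(payload):
    # A native voice model could also weigh tone, not just the words.
    return {"modality": "voice", "finding": f"words + tone: {payload}"}

def handle_image(payload):
    # A native image model reads the X-ray itself, not just its code.
    return {"modality": "image", "finding": f"read directly: {payload}"}

HANDLERS = {"text": handle_text, "voice": handle_voice, "image": handle_image}

def lmm_lmm(inputs):
    """Dispatch every (modality, payload) pair to its native handler."""
    return [HANDLERS[modality](payload) for modality, payload in inputs]

results = lmm_lmm([
    ("voice", "this doesn't look good"),
    ("image", "chest x-ray"),
])
print([r["modality"] for r in results])  # ['voice', 'image']
```

The dispatch is trivial; the hard (and unbuilt) part is making each handler a genuinely native model, trained under PHI constraints.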

Obviously you’d need a ton of training data to develop an LMM-LMM that could perform all these tasks. And you’d have to obtain the training data in a way that conforms to privacy requirements: in this case protected health information (PHI) requirements such as HIPAA requirements.

But if someone successfully pulls this off, the benefits are enormous.

You’ve come a long way, baby.

Robert Young (“Marcus Welby”) and Jane Wyatt (“Margaret Anderson” on a different show). By ABC Television, uploaded by We hope at en.wikipedia, transferred from en.wikipedia by SreeBot, Public Domain, https://commons.wikimedia.org/w/index.php?curid=16472486.

Ofcom and the Digital Trust & Safety Partnership

The Digital Trust & Safety Partnership (DTSP) consists of “leading technology companies,” including Apple, Google, Meta (parent of Facebook, Instagram, and WhatsApp), Microsoft (and its LinkedIn subsidiary), TikTok, and others.

The DTSP obviously has its views on Ofcom’s enforcement of the UK Online Safety Act.

Which, as Biometric Update notes, boils down to “the industry can regulate itself.”

Here’s how the DTSP stated this in its submission to Ofcom:

DTSP appreciates and shares Ofcom’s view that there is no one-size-fits-all approach to trust and safety and to protecting people online. We agree that size is not the only factor that should be considered, and our assessment methodology, the Safe Framework, uses a tailoring framework that combines objective measures of organizational size and scale for the product or service in scope of assessment, as well as risk factors.

From https://dtspartnership.org/press-releases/dtsp-submission-to-the-uk-ofcom-consultation-on-illegal-harms-online/.

We’ll get to the “Safe Framework” later. DTSP continues:

Overly prescriptive codes may have unintended effects: Although there is significant overlap between the content of the DTSP Best Practices Framework and the proposed Illegal Content Codes of Practice, the level of prescription in the codes, their status as a safe harbor, and the burden of documenting alternative approaches will discourage services from using other measures that might be more effective. Our framework allows companies to use whatever combination of practices most effectively fulfills their overarching commitments to product development, governance, enforcement, improvement, and transparency. This helps ensure that our practices can evolve in the face of new risks and new technologies.

From https://dtspartnership.org/press-releases/dtsp-submission-to-the-uk-ofcom-consultation-on-illegal-harms-online/.

But remember that the UK’s neighbors in the EU recently prescribed that USB-C cables are the way to go. This not only forced DTSP member Apple to abandon the Lightning cable worldwide, but it also affects Google and others, because there will be little incentive to come up with better cables. Who wants to fight the bureaucratic battle with Brussels? Or alternatively, we will have the advanced “world” versions of cables and the deprecated “EU” standards-compliant cables.

So forget Ofcom’s so-called overbearing approach and just adopt the Safe Framework. Big tech will take care of everything, including all those age assurance issues.

DTSP’s September 2023 paper on age assurance documents a “not overly prescriptive” approach, with a lot of “it depends” discussion.

Incorporating each characteristic comes with trade-offs, and there is no one-size-fits-all solution. Highly accurate age assurance methods may depend on collection of new personal data such as facial imagery or government-issued ID. Some methods that may be economical may have the consequence of creating inequities among the user base. And each service and even feature may present a different risk profile for younger users; for example, features that are designed to facilitate users meeting in real life pose a very different set of risks than services that provide access to different types of content….

Instead of a single approach, we acknowledge that appropriate age assurance will vary among services, based on an assessment of the risks and benefits of a given context. A single service may also use different approaches for different aspects or features of the service, taking a multi-layered approach.

From https://dtspartnership.org/wp-content/uploads/2023/09/DTSP_Age-Assurance-Best-Practices.pdf.

So will Ofcom heed the DTSP’s advice and say “Never mind. You figure it out”?

Um, maybe not.

Personally Protected: PII vs. PHI

(Part of the biometric product marketing expert series)

Before you can fully understand the difference between personally identifiable information (PII) and protected health information (PHI), you need to understand the difference between biometrics and…biometrics. (You know sometimes words have two meanings.)

Designed by Google Gemini.

The definitions of biometrics

To address the difference between biometrics and biometrics, I’ll refer to something I wrote over two years ago, in late 2021. In that post, I quoted two paragraphs from the International Biometric Society that illustrated the difference.

Since the IBS has altered these paragraphs in the intervening years, I will quote from the latest version.

The terms “Biometrics” and “Biometry” have been used since early in the 20th century to refer to the field of development of statistical and mathematical methods applicable to data analysis problems in the biological sciences.

Statistical methods for the analysis of data from agricultural field experiments to compare the yields of different varieties of wheat, for the analysis of data from human clinical trials evaluating the relative effectiveness of competing therapies for disease, or for the analysis of data from environmental studies on the effects of air or water pollution on the appearance of human disease in a region or country are all examples of problems that would fall under the umbrella of “Biometrics” as the term has been historically used….

The term “Biometrics” has also been used to refer to the field of technology devoted to the identification of individuals using biological traits, such as those based on retinal or iris scanning, fingerprints, or face recognition. Neither the journal “Biometrics” nor the International Biometric Society is engaged in research, marketing, or reporting related to this technology. Likewise, the editors and staff of the journal are not knowledgeable in this area. 

From https://www.biometricsociety.org/about/what-is-biometry.

In brief, what I call “broad biometrics” refers to analyzing biological sciences data, ranging from crop yields to heart rates. Contrast this with what I call “narrow biometrics,” which (usually) refers only to human beings, and only to those characteristics that identify human beings, such as the ridges on a fingerprint.

The definition of “personally identifiable information” (PII)

Now let’s examine an issue related to narrow biometrics (and other things), personally identifiable information, or PII. (It’s also represented as personal identifiable information by some.) I’ll use a definition provided by the U.S. National Institute of Standards and Technology, or NIST.

Information that can be used to distinguish or trace an individual’s identity, either alone or when combined with other information that is linked or linkable to a specific individual.

From https://csrc.nist.gov/glossary/term/PII.

Note the key words “alone or when combined.” The ten digits “909 867 5309” are not sufficient to identify an individual alone, but they can identify someone when combined with information from another source, such as a telephone book.

Yes, a telephone book. Deal with it.

By Tomasz Sienicki (own work; image intentionally scaled down), CC BY 3.0, https://commons.wikimedia.org/w/index.php?curid=10330603
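NIST’s “alone or when combined” language is the whole game. Here is a minimal sketch (with an entirely made-up phone book, and the song’s famous number) of how a value that identifies nobody on its own becomes identifying the moment it is joined to another source:

```python
# A bare phone number identifies nobody by itself...
number = "909-867-5309"

# ...until it is joined to another source. This phone book is entirely
# fictional, standing in for any external directory.
phone_book = {
    "909-867-5309": {"name": "Jenny Example", "address": "1 Main St"},
    "909-555-0100": {"name": "Pat Example",   "address": "2 Oak Ave"},
}

identified = phone_book.get(number)
print(identified["name"])  # Jenny Example
```

Swap the phone book for a driver’s license database and the phone number for a face image, and you have the security-camera scenario discussed below.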

What types of information can be combined to identify a person? The U.S. Department of Defense’s Privacy, Civil Liberties, and Freedom of Information Directorate provides multifarious examples of PII, including:

  • Social Security Number.
  • Passport number.
  • Driver’s license number.
  • Taxpayer identification number.
  • Patient identification number.
  • Financial account number.
  • Credit card number.
  • Personal address.
  • Personal telephone number.
  • Photographic image of a face.
  • X-rays.
  • Fingerprints.
  • Retina scan.
  • Voice signature.
  • Facial geometry.
  • Date of birth.
  • Place of birth.
  • Race.
  • Religion.
  • Geographical indicators.
  • Employment information.
  • Medical information.
  • Education information.
  • Financial information.

Now you may ask yourself, “How can I identify someone by a non-unique birthdate? A lot of people were born on the same day!”

But the combination of information is powerful, as researchers discovered in a 2015 study cited by the New York Times.

In the study, titled “Unique in the Shopping Mall: On the Reidentifiability of Credit Card Metadata,” a group of data scientists analyzed credit card transactions made by 1.1 million people in 10,000 stores over a three-month period. The data set contained details including the date of each transaction, amount charged and name of the store.

Although the information had been “anonymized” by removing personal details like names and account numbers, the uniqueness of people’s behavior made it easy to single them out.

In fact, knowing just four random pieces of information was enough to reidentify 90 percent of the shoppers as unique individuals and to uncover their records, researchers calculated. And that uniqueness of behavior — or “unicity,” as the researchers termed it — combined with publicly available information, like Instagram or Twitter posts, could make it possible to reidentify people’s records by name.

From https://archive.nytimes.com/bits.blogs.nytimes.com/2015/01/29/with-a-few-bits-of-data-researchers-identify-anonymous-people/.

So much for anonymization. And privacy.
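The researchers’ “unicity” idea can be sketched computationally (toy data here; the real study used 1.1 million people): given k known (store, day) points about a person, how often do those points match exactly one person in the dataset?

```python
from collections import defaultdict
from itertools import islice

# Toy "anonymized" transaction log: (person, store, day). The real
# study's records also carried amounts; store/day pairs suffice here.
records = [
    ("alice", "bakery", 1), ("alice", "cafe", 2), ("alice", "bookshop", 5),
    ("bob",   "bakery", 1), ("bob",   "cafe", 3), ("bob",   "bookshop", 6),
    ("carol", "bakery", 2), ("carol", "cafe", 2), ("carol", "bookshop", 5),
]

def points_by_person(recs):
    d = defaultdict(set)
    for person, store, day in recs:
        d[person].add((store, day))
    return d

def unicity(recs, k):
    """Fraction of people pinned down uniquely by k of their own
    (store, day) points -- the study's 'unicity' notion, simplified."""
    d = points_by_person(recs)
    unique = 0
    for person, pts in d.items():
        known = set(islice(sorted(pts), k))  # k known points about this person
        matches = [p for p, other in d.items() if known <= other]
        if matches == [person]:
            unique += 1
    return unique / len(d)

print(unicity(records, 2))  # 1.0 -- two points single out everyone here
```

Even in this tiny log, two transaction points are enough to single out every person, which is the study’s point: removing names does not remove uniqueness of behavior.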

Now biometrics only form part of the multifarious list of data cited above, but clearly biometric data can be combined with other data to identify someone. An easy example is taking security camera footage of the face of a person walking into a store, and combining that data with the same face taken from a database of driver’s license holders. In some jurisdictions, some entities are legally permitted to combine this data, while others are legally prohibited from doing so. (A few do it anyway. But I digress.)

Because narrow biometric data used for identification, such as fingerprint ridges, can be combined with other data to personally identify an individual, organizations that process biometric data must undertake strict safeguards to protect that data. If personally identifiable information (PII) is not adequately guarded, people could be subject to fraud and other harms.

The definition of “protected health information” (PHI)

In this case, I’ll refer to information published by the U.S. Department of Health and Human Services.

Protected Health Information. The Privacy Rule protects all “individually identifiable health information” held or transmitted by a covered entity or its business associate, in any form or media, whether electronic, paper, or oral. The Privacy Rule calls this information “protected health information (PHI).”

“Individually identifiable health information” is information, including demographic data, that relates to:

the individual’s past, present or future physical or mental health or condition,

the provision of health care to the individual, or

the past, present, or future payment for the provision of health care to the individual,

and that identifies the individual or for which there is a reasonable basis to believe it can be used to identify the individual. Individually identifiable health information includes many common identifiers (e.g., name, address, birth date, Social Security Number).

The Privacy Rule excludes from protected health information employment records that a covered entity maintains in its capacity as an employer and education and certain other records subject to, or defined in, the Family Educational Rights and Privacy Act, 20 U.S.C. §1232g.

From https://www.hhs.gov/hipaa/for-professionals/privacy/laws-regulations/index.html

Now there’s obviously an overlap between personally identifiable information (PII) and protected health information (PHI). For example, names, dates of birth, and Social Security Numbers fall into both categories. But I want to highlight two things that are explicitly mentioned as PHI but aren’t usually cited as PII.

  • Physical or mental health data. This could include information that a medical professional captures from a patient, including biometric (broad biometric) information such as heart rate or blood pressure.
  • Health care provided to an individual. This not only includes written information such as prescriptions, but oral information (“take two aspirin and call my chatbot in the morning”). Yes, chatbot. Deal with it. Dr. Marcus Welby and his staff retired a long time ago.
Robert Young (“Marcus Welby”) and Jane Wyatt (“Margaret Anderson” on a different show). By ABC Television, uploaded by We hope at en.wikipedia, transferred from en.wikipedia by SreeBot, Public Domain, https://commons.wikimedia.org/w/index.php?curid=16472486

Because broad biometric data used for analysis, such as heart rates, can be combined with other data to personally identify an individual, organizations that process biometric data must undertake strict safeguards to protect that data. If protected health information (PHI) is not adequately guarded, people could be subject to fraud and other harms.

Simple, isn’t it?

Actually, the parallels between identity/biometrics and healthcare have fascinated me for decades, since the dedicated hardware to capture identity/biometric data is often similar to the dedicated hardware to capture health data. And now that we’re moving away from dedicated hardware to multi-purpose hardware such as smartphones, the parallels are even more fascinating.

Designed by Google Gemini.