Thinking about “de plane” used in the Fantasy Island television series (image CC BY-SA 3.0) makes me think about travel. Mr. Roarke’s and Tattoo’s guests didn’t have to worry about identifying themselves to disembark from the plane and enter the island. But WE certainly do…and different countries and entities need to adopt standards to facilitate this.
I’ve previously observed that standards often don’t emerge, like Athena, from ivory towers. They emerge when a very powerful entity or person (for example, Microsoft or Taylor Swift) says that THIS is the standard, and waits for the world to comply.
Of course, there can be issues when MULTIPLE powerful entities or people try to champion competing standards.
SITA, the global leader in air transport technology, and IDEMIA Public Security, a world leader in digital technologies, biometrics, and security, have announced a collaboration to advance interoperability, trust, and data security through a globally recognized Digital Travel Ecosystem.
Add Indico to the partnership, and the parties may be on to something.
The goal is to create “an open, secure, and interoperable framework that ensures a traveler’s digital identity is trusted globally, without the need for direct integrations between issuers and verifiers.” It is intentionally decentralized, giving the traveler control over their identity.
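One way to picture “without the need for direct integrations between issuers and verifiers” is a shared trust registry: a verifier can check any credential from any registered issuer without ever contacting that issuer. The sketch below is a simplifying assumption, not the actual SITA/IDEMIA design; every name is hypothetical, and real ecosystems use public-key signatures rather than the shared-key HMAC shown here.

```python
import hashlib
import hmac
import json

# Hypothetical trust registry standing in for direct issuer-verifier
# integrations. A real ecosystem would publish issuer PUBLIC keys;
# the shared HMAC key here is an illustration-only shortcut.
TRUST_REGISTRY = {"issuer-utopia": b"utopia-signing-key"}  # made-up issuer

def issue(issuer_id: str, claims: dict) -> dict:
    """Issuer signs a set of claims about the traveler."""
    key = TRUST_REGISTRY[issuer_id]
    body = json.dumps(claims, sort_keys=True).encode()
    sig = hmac.new(key, body, hashlib.sha256).hexdigest()
    return {"issuer": issuer_id, "claims": claims, "sig": sig}

def verify(credential: dict) -> bool:
    """Any verifier holding the registry can check the credential
    without contacting the issuer at all."""
    key = TRUST_REGISTRY.get(credential["issuer"])
    if key is None:
        return False  # unknown issuer: not trusted
    body = json.dumps(credential["claims"], sort_keys=True).encode()
    expected = hmac.new(key, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credential["sig"])

cred = issue("issuer-utopia", {"passport_valid": True})
print(verify(cred))  # the verifier trusts the registry, not the issuer directly
```

The design point is that trust flows through the registry, so adding a new verifier (a border post, an airline) requires no new point-to-point integration with each issuer.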
Perhaps it’s a fantasy to think that others will buy in. Will they?
Consulting firms (and other firms) make a big deal about the amazing processes we use when we onboard clients. (In Bredemarket’s case, I ask questions.)
But often we don’t talk about what we do when we OFFBOARD clients. And that’s equally important.
So let’s go inside the wildebeest habitat and see how Bredemarket handles client offboarding.
In 2023 I signed a contract with a client in which I would bill them at an hourly rate. This was a short-term contract, but it was subsequently renewed.
Recently the client chose not to renew the contract for another extended period.
On the surface, that would appear to be the end of it. I had completed all projects assigned to me, and I had been paid for all projects assigned to me.
So what could go wrong?
(Don’t) Tell all the people
Plenty could go wrong.
During the course of my engagement with the client, I had enjoyed access to:
Confidential information FROM the client.
Confidential information that I sent TO the client, as part of the work for hire arrangement.
Client systems. (In this particular instance I only had access to a single system with non-confidential information, but other clients have granted me access to storage systems and even software.)
And all of this data was sitting in MY systems, including three storage systems, one CRM system, and one email system.
Unnamed photographer for the U.S. Office of War Information, via the Library of Congress. Public Domain, https://commons.wikimedia.org/w/index.php?curid=8989847.
Now of course I had signed a non-disclosure agreement with the client, so I legally could not use any of that data even if I wanted to do so.
But the data was still sitting there, and I had to do something about it.
Take It As It Comes
But I already knew what I had to do, because I had done this before.
Long-time readers of the Bredemarket blog will recall an announcement that I made on April 22, 2022, in which I stated that I would no longer “accept client work for solutions that identify individuals using (a) friction ridges (including fingerprints and palm prints) and/or (b) faces.” (I also stopped accepting work for solutions involving driver’s licenses and passports.)
I didn’t say WHY I was refusing this work; I saved that tidbit for a mailing to my mailing list.
So, why am I making these changes at Bredemarket?
I have accepted a full-time position as a Senior Product Marketing Manager with an identity company. (I’ll post the details later on my personal LinkedIn account…)…
If you are a current Bredemarket customer with a friction ridge/face identification solution, then I already sent a communication to you with details on wrapping up our business. Thank you for your support over the last 21 months. I’ll probably see you at the conferences that my employer-to-be attends.
That communication to then-current Bredemarket customers detailed, among other things, how I was going to deal with the confidential information I held from them.
So I dusted off the pertinent parts of that communication and repurposed it to send to my 2023-2024 client. I’ve reproduced non-redacted portions of that communication below. Although I don’t explicitly name my information storage systems in this public post, as I noted above these include three storage systems, one CRM system, and one email system.
Bredemarket will follow these procedures to protect your confidential information.
Bredemarket will delete confidential information provided to Bredemarket by your company by (REDACTED). This includes information presently stored on (REDACTED).
Bredemarket will delete draft and final documents created by Bredemarket that include company confidential information by (REDACTED). This includes information presently stored on (REDACTED).
If your company has provided Bredemarket with access to your company OneDrive, Outlook, or Sites, Bredemarket will delete the ability to access these company properties by (REDACTED). This includes deletion from my laptop computer, my mobile phone, and my web browser. Bredemarket further recommends that you revoke Bredemarket’s access to these systems.
If your company has provided Bredemarket with access to all or part of your company Google Drive, Bredemarket recommends that you revoke Bredemarket’s access to this system.
I will inform you when this process is complete.
So I executed the offboarding process for my former client, ensuring that the client’s confidential information remains protected.
Love Me Two Times
Of course, I hope the client comes back to Bredemarket someday, in some capacity.
But perhaps you can take advantage of the opportunity. Since your competitor no longer contracts with Bredemarket, perhaps YOU can.
To learn WHY you should work with Bredemarket, click the image below and read about my CPA (Content-Proposal-Analysis) expertise.
Bredemarket’s “CPA.”
Postscript
No, I’m not going to post videos of the relevant Doors songs on here. Jim’s Oedipal complex isn’t business-friendly.
In August, a hacker dumped 2.7 billion data records, including social security numbers, on a dark web forum, in one of the biggest breaches in history.
The data may have been stolen from background-checking service National Public Data at least four months ago. Each record has a person’s name, mailing address, and SSN, but some also contain other sensitive information, such as names of relatives…
Note that 2.7 billion data records does not equal 2.7 billion people, since a person may have multiple data records.
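The records-versus-people distinction can be shown with a toy deduplication. The records and field names below are entirely hypothetical, not drawn from the actual breach; the point is only that counting records overstates the number of affected people.

```python
# Toy illustration: record count vs. distinct-person count.
# All data below is made up for illustration.
records = [
    {"name": "Jane Doe", "address": "1 Main St", "ssn": "123-45-6789"},
    {"name": "Jane Doe", "address": "9 Oak Ave", "ssn": "123-45-6789"},  # old address
    {"name": "John Roe", "address": "2 Elm St", "ssn": "987-65-4321"},
]

# One person can appear once per address, employer, or data source,
# so we count distinct SSNs rather than rows.
distinct_people = {r["ssn"] for r in records}

print(len(records))          # 3 records...
print(len(distinct_people))  # ...but only 2 people
```

Applied at breach scale, the same logic is why 2.7 billion rows may represent far fewer than 2.7 billion individuals.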
Was your data leaked?
Rich DeMuro posted a link to see if your data was leaked. If you want to check, go to https://npd.pentester.com/, enter the requested information (you will NOT be asked for your Social Security Number), and the site will display a masked list of the matching information in the breach.
One lesson from the National Public Data breach should have been obvious long ago: anyone who relies on a Social Security Number as a form of positive identification is a fool.
You may remember the May hoopla regarding amendments to Illinois’ Biometric Information Privacy Act (BIPA). These amendments do not eliminate the long-standing law, but lessen its damage to offending companies.
The General Assembly is expected to send the bill to Illinois Governor JB Pritzker within 30 days. Gov. Pritzker will then have 60 days to sign it into law. It will be immediately effective.
While the BIPA amendment has passed the Illinois House and Senate and was sent to the Governor, there is no indication that he has signed the bill into law within the 60-day timeframe.
A proposed class action claims Photomyne, the developer of several photo-editing apps, has violated an Illinois privacy law by collecting, storing and using residents’ facial scans without authorization….
The lawsuit contends that the app developer has breached the BIPA’s clear requirements by failing to notify Illinois users of its biometric data collection practices and inform them how long and for what purpose the information will be stored and used.
In addition, the suit claims the company has unlawfully failed to establish public guidelines that detail its data retention and destruction policies.
When marketing digital identity products secured by biometrics, emphasize that they are MORE secure and more private than their physical counterparts.
When you hand your physical driver’s license over to a sleazy bartender, they find out EVERYTHING about you, including your name, your birthdate, your driver’s license number, and even where you live.
When you use a digital mobile driver’s license, bartenders ONLY learn what they NEED to know—that you are over 21.
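The mobile driver’s license behavior described above is called selective disclosure; the ISO/IEC 18013-5 mDL standard supports it with attributes such as “age over 21.” The sketch below is a minimal illustration of the concept, not a real mDL implementation: the field names and the `respond_to_request` function are assumptions for this example.

```python
# Hypothetical selective-disclosure sketch, loosely modeled on the
# ISO/IEC 18013-5 mDL idea of an age_over_21 attribute.
# Field names and this API are illustrative, not a real mDL library.
FULL_LICENSE = {
    "name": "Pat Example",
    "birth_date": "1990-05-01",
    "license_number": "D1234567",
    "address": "42 Anywhere Lane",
    "age_over_21": True,
}

def respond_to_request(credential: dict, requested_fields: list[str]) -> dict:
    """Return only the attributes the verifier asked for."""
    return {k: credential[k] for k in requested_fields if k in credential}

# Handing over a physical license discloses everything:
print(respond_to_request(FULL_LICENSE, list(FULL_LICENSE)))

# A digital mDL age check discloses a single boolean:
print(respond_to_request(FULL_LICENSE, ["age_over_21"]))  # {'age_over_21': True}
```

The bartender never sees the birth date itself, only the derived yes/no answer, which is exactly the privacy argument the marketing should emphasize.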
Keeping the internet open is crucial, and part of being open means Reddit content needs to be accessible to those fostering human learning and researching ways to build community, belonging, and empowerment online. Reddit is a uniquely large and vibrant community that has long been an important space for conversation on the internet. Additionally, using LLMs, ML, and AI allow Reddit to improve the user experience for everyone.
In line with this, Reddit and OpenAI today announced a partnership to benefit both the Reddit and OpenAI user communities…
Perhaps some members of the Reddit user community may not feel the benefits when OpenAI is training on their data.
While people who joined Reddit presumably understood that anyone could view their data, they never imagined that a third party would then process their data for its own purposes.
Oh, but wait a minute. Reddit clarifies things:
This partnership…does not change Reddit’s Data API Terms or Developer Terms, which state content accessed through Reddit’s Data API cannot be used for commercial purposes without Reddit’s approval. API access remains free for non-commercial usage under our published threshold.
If you’re a biometric product marketing expert, or even if you’re not, you’re presumably analyzing the possible effects on your identity/biometric product from the proposed changes to the Biometric Information Privacy Act (BIPA).
As of May 16, the Illinois General Assembly (House and Senate) passed a bill (SB2979) to amend BIPA. It awaits the Governor’s signature.
What is the amendment? Other than defining an “electronic signature,” the main purpose of the bill is to limit damages under BIPA. The new text regarding the “Right of action” codifies the concept of a “single violation.”
(T)he amended law DOES NOT CHANGE “Private Right of Action” so BIPA LIVES!
Companies who violate the strict requirements of BIPA aren’t off the hook. It’s just that the trial lawyers—whoops, I mean the affected consumers make a lot less money.
It discussed both large language models and large multimodal models. In this case “multimodal” is used in a way that I normally DON’T use it, namely to refer to the different modes in which humans interact (text, images, sounds, videos). Of course, I gravitated to a discussion in which an image of a person’s face was one of the modes.
In this post I will look at LMMs…and I will also look at LMMs. There’s a difference. And a ton of power when LMMs and LMMs work together for the common good.
When Google announced its Gemini series of AI models, it made a big deal about how they were “natively multimodal.” Instead of having different modules tacked on to give the appearance of multimodality, they were apparently trained from the start to be able to handle text, images, audio, video, and more.
Other AI models are starting to function in a TRULY multimodal way, rather than using separate models to handle the different modes.
So now that we know that LMMs are large multimodal models, we need to…
…um, wait a minute…
Introducing the Large Medical Model (LMM)
It turns out that the health people have a DIFFERENT definition of the acronym LMM. Rather than using it to refer to a large multimodal model, they refer to a large MEDICAL model.
Our first of a kind Large Medical Model or LMM for short is a type of machine learning model that is specifically designed for healthcare and medical purposes. It is trained on a large dataset of medical records, claims, and other healthcare information including ICD, CPT, RxNorm, Claim Approvals/Denials, price and cost information, etc.
I don’t think I’m stepping out on a limb if I state that medical records cannot be classified as “natural” language. So the GenHealth.AI model is trained specifically on those attributes found in medical records, and not on people hemming and hawing and asking what a Pekingese dog looks like.
But there is still more work to do.
What about the LMM that is also an LMM?
Unless I’m missing something, the Large Medical Model described above is designed to work with only one mode of data, textual data.
But what if the Large Medical Model were also a Large Multimodal Model?
Rather than converting a medical professional’s voice notes to text, the LMM-LMM would work directly with the voice data. This could lead to increased accuracy: compare the tone of voice of an offhand comment “This doesn’t look good” with the tone of voice of a shocked comment “This doesn’t look good.” They appear the same when reduced to text format, but the original voice data conveys significant differences.
Rather than just using the textual codes associated with an X-ray, the LMM-LMM would read the X-ray itself. If the image model has adequate training, it will again pick up subtleties in the X-ray data that are not present when the data is reduced to a single medical code.
In short, the LMM-LMM (large medical model-large multimodal model) would accept ALL the medical outputs: text, voice, image, video, biometric readings, and everything else. And the LMM-LMM would deal with all of it natively, increasing the speed and accuracy of healthcare by removing the need to convert everything to textual codes.
A tall order, but imagine how healthcare would be revolutionized if you didn’t have to convert everything into text format to get things done. And if you could use the actual image, video, audio, or other data rather than someone’s textual summation of it.
Obviously you’d need a ton of training data to develop an LMM-LMM that could perform all these tasks. And you’d have to obtain the training data in a way that conforms to privacy requirements: in this case protected health information (PHI) requirements such as HIPAA requirements.
But if someone successfully pulls this off, the benefits are enormous.
You’ve come a long way, baby.
Robert Young (“Marcus Welby”) and Jane Wyatt (“Margaret Anderson” on a different show). ABC Television photo, Public Domain, via Wikimedia Commons, https://commons.wikimedia.org/w/index.php?curid=16472486.
The Digital Trust & Safety Partnership (DTSP) consists of “leading technology companies,” including Apple, Google, Meta (parent of Facebook, Instagram, and WhatsApp), Microsoft (and its LinkedIn subsidiary), TikTok, and others.
DTSP appreciates and shares Ofcom’s view that there is no one-size-fits-all approach to trust and safety and to protecting people online. We agree that size is not the only factor that should be considered, and our assessment methodology, the Safe Framework, uses a tailoring framework that combines objective measures of organizational size and scale for the product or service in scope of assessment, as well as risk factors.
We’ll get to the “Safe Framework” later. DTSP continues:
Overly prescriptive codes may have unintended effects: Although there is significant overlap between the content of the DTSP Best Practices Framework and the proposed Illegal Content Codes of Practice, the level of prescription in the codes, their status as a safe harbor, and the burden of documenting alternative approaches will discourage services from using other measures that might be more effective. Our framework allows companies to use whatever combination of practices most effectively fulfills their overarching commitments to product development, governance, enforcement, improvement, and transparency. This helps ensure that our practices can evolve in the face of new risks and new technologies.
But remember that the UK’s neighbors in the EU recently prescribed that USB-C cables are the way to go. This not only forced DTSP member Apple to abandon the Lightning cable worldwide, but it affects Google and others because there will be no efforts to come up with better cables. Who wants to fight the bureaucratic battle with Brussels? Or alternatively we will have the advanced “world” versions of cables and the deprecated “EU” standards-compliant cables.
So forget Ofcom’s so-called overbearing approach and just adopt the Safe Framework. Big tech will take care of everything, including all those age assurance issues.
Incorporating each characteristic comes with trade-offs, and there is no one-size-fits-all solution. Highly accurate age assurance methods may depend on collection of new personal data such as facial imagery or government-issued ID. Some methods that may be economical may have the consequence of creating inequities among the user base. And each service and even feature may present a different risk profile for younger users; for example, features that are designed to facilitate users meeting in real life pose a very different set of risks than services that provide access to different types of content….
Instead of a single approach, we acknowledge that appropriate age assurance will vary among services, based on an assessment of the risks and benefits of a given context. A single service may also use different approaches for different aspects or features of the service, taking a multi-layered approach.