Keeping the internet open is crucial, and part of being open means Reddit content needs to be accessible to those fostering human learning and researching ways to build community, belonging, and empowerment online. Reddit is a uniquely large and vibrant community that has long been an important space for conversation on the internet. Additionally, using LLMs, ML, and AI allow Reddit to improve the user experience for everyone.
In line with this, Reddit and OpenAI today announced a partnership to benefit both the Reddit and OpenAI user communities…
Some members of the Reddit user community may not feel the benefits when OpenAI is training on their data.
While people who joined Reddit presumably understood that anyone could view their data, they never imagined that a third party would then process their data for its own purposes.
Oh, but wait a minute. Reddit clarifies things:
This partnership…does not change Reddit’s Data API Terms or Developer Terms, which state content accessed through Reddit’s Data API cannot be used for commercial purposes without Reddit’s approval. API access remains free for non-commercial usage under our published threshold.
It discussed both large language models and large multimodal models. In this case “multimodal” is used in a way that I normally DON’T use it, namely to refer to the different modes in which humans interact (text, images, sounds, videos). Of course, I gravitated to a discussion in which an image of a person’s face was one of the modes.
In this post I will look at LMMs…and I will also look at LMMs. There’s a difference. And a ton of power when LMMs and LMMs work together for the common good.
When Google announced its Gemini series of AI models, it made a big deal about how they were “natively multimodal.” Instead of having different modules tacked on to give the appearance of multimodality, they were apparently trained from the start to be able to handle text, images, audio, video, and more.
Other AI models are starting to function in a TRULY multimodal way, rather than using separate models to handle the different modes.
So now that we know that LLMs are large multimodal models, we need to…
…um, wait a minute…
Introducing the Large Medical Model (LMM)
It turns out that the health people have a DIFFERENT definition of the acronym LMM. Rather than using it to refer to a large multimodal model, they use it to refer to a large MEDICAL model.
Our first of a kind Large Medical Model or LMM for short is a type of machine learning model that is specifically designed for healthcare and medical purposes. It is trained on a large dataset of medical records, claims, and other healthcare information including ICD, CPT, RxNorm, Claim Approvals/Denials, price and cost information, etc.
I don’t think I’m stepping out on a limb if I state that medical records cannot be classified as “natural” language. So the GenHealth.AI model is trained specifically on those attributes found in medical records, and not on people hemming and hawing and asking what a Pekingese dog looks like.
But there is still more work to do.
What about the LMM that is also an LMM?
Unless I’m missing something, the Large Medical Model described above is designed to work with only one mode of data, textual data.
But what if the Large Medical Model were also a Large Multimodal Model?
Rather than converting a medical professional’s voice notes to text, the LMM-LMM would work directly with the voice data. This could lead to increased accuracy: compare the tone of voice of an offhand comment “This doesn’t look good” with the tone of voice of a shocked comment “This doesn’t look good.” They appear the same when reduced to text format, but the original voice data conveys significant differences.
Rather than just using the textual codes associated with an X-ray, the LMM-LMM would read the X-ray itself. If the image model has adequate training, it will again pick up subtleties in the X-ray data that are not present when the data is reduced to a single medical code.
In short, the LMM-LMM (large medical model-large multimodal model) would accept ALL the medical outputs: text, voice, image, video, biometric readings, and everything else. And the LMM-LMM would deal with all of it natively, increasing the speed and accuracy of healthcare by removing the need to convert everything to textual codes.
A tall order, but imagine how healthcare would be revolutionized if you didn’t have to convert everything into text format to get things done. And if you could use the actual image, video, audio, or other data rather than someone’s textual summation of it.
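To make the concept a bit more concrete, here is a minimal sketch of what a single multimodal medical record might look like as a data structure. Everything in it (the class, the field names, the example values) is hypothetical and of my own invention; the point is simply that the original audio, image, and signal data travel alongside any derived codes instead of being flattened into them.

```python
# Hypothetical sketch of a multimodal medical record: the raw modalities are
# kept alongside (not replaced by) any derived codes, so an LMM-LMM could
# consume the original voice, image, and signal data directly.
from dataclasses import dataclass, field

@dataclass
class MultimodalMedicalRecord:
    patient_id: str
    clinician_note_text: str                    # transcription, if one exists
    clinician_note_audio: bytes | None = None   # original voice note (e.g., WAV bytes)
    imaging: dict[str, bytes] = field(default_factory=dict)          # e.g., {"chest_xray": DICOM bytes}
    vitals_waveforms: dict[str, list[float]] = field(default_factory=dict)  # e.g., heart rate samples
    derived_codes: list[str] = field(default_factory=list)           # ICD/CPT codes, kept but not relied on

record = MultimodalMedicalRecord(
    patient_id="example-001",
    clinician_note_text="This doesn't look good.",
    derived_codes=["R91.8"],  # illustrative code only
)
# An LMM-LMM would be fed record.clinician_note_audio and record.imaging
# directly, rather than only the text and the codes.
```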
Obviously you’d need a ton of training data to develop an LMM-LMM that could perform all these tasks. And you’d have to obtain the training data in a way that conforms to privacy requirements: in this case, protected health information (PHI) requirements such as those imposed by HIPAA.
But if someone successfully pulls this off, the benefits are enormous.
You’ve come a long way, baby.
Robert Young (“Marcus Welby”) and Jane Wyatt (“Margaret Anderson” on a different show). By ABC Television, uploaded by We hope at en.wikipedia (eBay item photo information), transferred from en.wikipedia by SreeBot. Public Domain, https://commons.wikimedia.org/w/index.php?curid=16472486.
I’m going to describe one example of how Bredemarket has helped its customers, based upon one of my client projects from several years ago.
Stupid Word Tricks. Tell your brother, your sister and your mama too. See below.
I’ve told this story before, but I wanted to take a fresh look at the problem the firm had, and the solution Bredemarket provided. I’m not identifying the firm, but perhaps YOUR firm has a similar problem that I can solve for you. And your firm is the one that matters.
The problem
This happened several years ago, but was one of Bredemarket’s first successes.
The firm that asked for my help is one that focuses on one particular biometric modality, and provides a high-end solution for biometric identification.
In addition, the firm’s solution has multiple applications, crime solving and disaster victim identification being two of them.
The firm needed a way to perform initial prospect outreach via budgetary quotations, targeted to the application that mattered to the prospect. A simple proposal problem to be solved…or so it seemed.
Why the obvious proposal solution didn’t work
I had encountered similar problems while employed at Printrak and MorphoTrak and while consulting here at Bredemarket, so the solution was painfully obvious.
Qvidian is one proposal automation software package that I have used. But there are a LOT of proposal automation software packages out there, including some new ones that incorporate artificial intelligence. From https://uplandsoftware.com/qvidian/.
Have your proposal writers create relevant material in their proposal automation software that could target each of the audiences.
So when your salesperson wants to approach a medical examiner involved in disaster victim identification, the proposal writer could just run the proposal automation software, create the targeted budgetary quotation, populate it with the prospect’s contact information, and give the completed quotation to the salesperson.
Unfortunately for the firm, the painfully obvious solution was truly painful, for two reasons:
This firm had no proposal automation software. Well, maybe some other division of the firm had such software, but this division didn’t have access to it. So the whole idea of adding proposal text to an existing software solution, and programming the solution to generate the appropriate budgetary quotation, wasn’t going to fly.
In addition, this firm had no proposal writers. The salespeople were doing this on their own. The only proposal writer they had was the contractor from Bredemarket. And they weren’t going to want to pay for me to generate every budgetary quotation they needed.
In this case, the firm needed a way for the salespeople to generate the necessary budgetary quotations as easily as possible, WITHOUT relying on proposal automation software or proposal writers.
Bredemarket’s solution
To solve the firm’s problem, I resorted to Stupid Word Tricks.
I created two similar budgetary quotation templates: one for crime solving, and one for disaster victim identification. (Actually I created more than two.) That way the salesperson could simply choose the budgetary quotation they wanted.
The letters were similar in format, but had little tweaks depending upon the audience.
Using document properties to create easy-to-use budgetary quotations.
The Stupid Word Tricks came into play when I used Word document property features to allow the salesperson to enter the specific information for each prospect, which then rippled throughout the document, providing a customized budgetary quotation to the prospect.
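For the curious, here is roughly how the trick works in code form. This is a minimal sketch using the python-docx library, which is my own assumption for illustration; the actual templates were built entirely inside Word itself, using custom document properties and DOCPROPERTY fields with no programming at all.

```python
# Minimal sketch (hypothetical): pre-filling a budgetary quotation template's
# document properties with python-docx. Word fields that reference those
# properties (e.g., { DOCPROPERTY "Title" } or { DOCPROPERTY "Subject" })
# pick up the new values when the fields are updated (Ctrl+A, then F9).
from docx import Document

def prepare_quotation(template_path: str, output_path: str,
                      prospect_name: str, application: str) -> None:
    doc = Document(template_path)

    # Built-in document properties; the template's fields reference them.
    doc.core_properties.title = f"Budgetary Quotation for {prospect_name}"
    doc.core_properties.subject = application  # e.g., "Disaster Victim Identification"

    doc.save(output_path)

# Example usage (hypothetical file and prospect names):
# prepare_quotation("dvi_template.docx", "acme_quotation.docx",
#                   "Acme County Medical Examiner", "Disaster Victim Identification")
```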
The result
The firm’s salespeople used Bredemarket’s templates to generate initial outreach budgetary quotations to their clients.
And the salespeople were happy.
I’ve used this testimonial quote before, but it doesn’t hurt to use it again.
“I just wanted to truly say thank you for putting these templates together. I worked on this…last week and it was extremely simple to use and I thought really provided a professional advantage and tool to give the customer….TRULY THANK YOU!”
Comment from one of the client’s employees who used the standard proposal text
While I actively consulted for the firm I maintained the templates, updating them as needed as the firm achieved additional certifications.
Why am I telling this story again?
I just want to remind people that Bredemarket doesn’t just write posts, articles, and other collateral. I can also create collateral such as these proposal templates that you can re-use.
(T)here were 7,887 nurses who recently ended their healthcare careers between 2018 and 2021….39% of respondents said their decision to leave healthcare was due to a planned retirement. However, 26% of respondents cited burnout or emotional exhaustion, and 21% cited insufficient staffing.
And this is ALL nurses. Not just the forensic nurses who have to deal with upsetting examinations that (literally) probe into sexual assault and child abuse. All nurses have it tough.
At Artisight we are committed to reversing this trend through AI-driven technology that is bringing the joy back to medicine!!
Can artificial intelligence bots truly relieve the exhaustion of overworked health professionals? Let’s look at two AI solutions from 3M and Artisight and see whether they truly benefit medical staff.
3M, a former competitor to MorphoTrak until 3M sold its biometric offerings (as did MorphoTrak’s parent Safran), has invested heavily in healthcare artificial intelligence solutions. This includes a solution that addresses the bane of medical professionals everywhere—keeping up with the paperwork (and checking for potentially catastrophic errors).
Our solutions use artificial intelligence (AI) to alleviate administrative burden and proactively identify gaps and inconsistencies within clinical documentation. Supporting completeness and accuracy every step of the way, from capture to code, means rework doesn’t end up on the physician’s plate before or even after discharge. That enables you to keep your focus where it needs to be – on the patient right in front of you.
But what about Artisight, whose assertion inspired this post in the first place?
A recent PYMNTS article interviewed Artisight President Stephanie Lahr to uncover Artisight’s approach.
The Artisight platform marries IoT sensors with machine learning and large language models. The overall goal in a hospital setting is to streamline safe patient care, including virtual nursing. Compliance with HIPAA, according to Lahr, has been an important part of the platform’s development, which includes computer vision, voice recognition, vital sign monitoring, indoor positioning capabilities and actionable analytics reports.
In more detail, a hospital patient room is equipped with AI-powered devices such as high-quality, two-way audio and video with multiple participants for virtual care. Ultra-wideband technology tracks the movement and flow of assets throughout the hospital. Remote nurses and observers monitor patient room activity off-site and interact virtually with patients and clinicians.
At a minimum, this reduces the need for nurses to run down the hall just to check things. At a maximum, tracking of asset flows and actionable analytics reports make the job of everyone in the hospital easier.
So how can 3M’s and Artisight’s artificial intelligence offerings benefit medical facilities?
Allow medical professionals to concentrate on care. Patients don’t need medical professionals who are buried in paperwork. Patients need medical professionals who are spending time with them. The circumstances that land a patient in a hospital are bad enough, and having caregivers who are forced to ignore patient needs makes it worse. Maybe some day we’ll even get back to Welbycare.
Free medical professionals from routine tasks. Assuming the solutions work as advertised, they eliminate the need to double-check a report for errors, or the need to walk down the hall to capture vital signs.
Save lives. Yeah, medical professionals do that. If the Marcus Welby AI bot spots an error in a report, or if the bot detects a negative change in vital signs while a nurse is occupied with another patient, the technology could very well save a life.
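About that last point: the “detects a negative change in vital signs” piece doesn’t have to be exotic. Here is a purely hypothetical, minimal sketch of a threshold-style check; it is emphatically not 3M’s or Artisight’s actual method, just an illustration of the kind of automated monitoring that frees a nurse from walking down the hall.

```python
# Hypothetical vital-sign alert sketch: flag readings that drift outside a
# simple safe range. Real systems use far more sophisticated models; this
# only illustrates the idea of automated monitoring.
SAFE_RANGES = {
    "heart_rate_bpm": (50, 110),
    "spo2_percent": (92, 100),
    "resp_rate_per_min": (10, 24),
}

def check_vitals(reading: dict[str, float]) -> list[str]:
    """Return a list of alert messages for out-of-range vitals."""
    alerts = []
    for name, value in reading.items():
        low, high = SAFE_RANGES.get(name, (float("-inf"), float("inf")))
        if not (low <= value <= high):
            alerts.append(f"ALERT: {name} = {value} outside safe range {low}-{high}")
    return alerts

print(check_vitals({"heart_rate_bpm": 128, "spo2_percent": 95}))
# ['ALERT: heart_rate_bpm = 128 outside safe range 50-110']
```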
I’ve talked ad nauseam about the need for a firm to differentiate itself from its competitors. If your firm engages in “me too” marketing, prospects have no reason to choose you.
But what about companies that DO differentiate themselves…and suddenly stop doing so?
There are four reasons why companies could stop differentiating themselves:
Sometimes companies gain a temporary competitive advantage that disappears as other firms catch up. But more often, the company only pursues the differentiator temporarily.
In 1985, amid anxiety about trade deficits and the loss of American manufacturing jobs, Walton launched a “Made in America” campaign that committed Wal-Mart to buying American-made products if suppliers could get within 5 percent of the price of a foreign competitor. This may have compromised the bottom line in the short term, but Walton understood the long-term benefit of convincing employees and customers that the company had a conscience as well as a calculator.
Now some of you may not remember Walmart’s “Made in America” banners, but I can assure you they were prevalent in many Walmarts in the 1980s and 1990s. Sam Walton’s autobiography even featured the phrase.
But as time passed, Walmart stocked fewer and fewer “Made in America” items as customers valued low prices over everything else. And some of the “Made in America” banners in Walmarts in the 1990s shouldn’t have been there:
“Dateline NBC” produced an exposé on the company’s sourcing practices. Although Wal-Mart’s “Made in America” campaign was still nominally in effect, “Dateline” showed that store-level associates had posted “Made in America” signs over merchandise actually produced in far away sweatshops. This sort of exposure was new to a company that had been a press darling for many years, and Wal-Mart’s stock immediately declined by 3 percent.
The Walmart domestic production episodes illustrate something else. If Walmart wanted to, it could have persevered and bought from domestic suppliers, even if the supplier price differential was greater than 5%.
But the buying customers didn’t really care.
Affordability was much more important to buyers than U.S. job creation.
So while labor leaders, politicians, and others may have complained about Walmart’s increasing reliance on Chinese goods, the company’s customers continued to do business with Walmart, bringing profitability to the company.
And before you decry the actions of consumers who act against their national self-interest…where was YOUR phone manufactured? China? Vietnam? Unless you own a Librem 5 USA, your phone isn’t from around here. We’re all Commies.
The market has changed
Sometimes the market changes and consumers look at things a little differently.
I’ve previously told the story of Mita, and its 1980s slogan “all we make are great copiers.” In essence, Mita had to adopt this slogan because, unlike its competitors, it did NOT have a diversified portfolio.
This worked for a while…until the “document solutions” industry (copiers and everything else) embraced digital technologies. Well, Fuji-Xerox, Ricoh and Konica did. Mita didn’t, and went bankrupt.
Before Walmart emphasized “Made in America” products, former (and present) stand-up comedian Steve Martin was dispensing tax advice.
“Steve.. how can I be a millionaire.. and never pay taxes?” First.. get a million dollars. Now.. you say, “Steve.. what do I say to the tax man when he comes to my door and says, ‘You.. have never paid taxes’?” Two simple words. Two simple words in the English language: “I forgot!”
While the IRS will not accept this defense, there are times when people, and companies, forget things.
I know of one company that had a clear differentiator over most of its competition: the fact that a key component of its solution was self-authored, rather than being sourced from a third party.
For a time, the company strongly emphasized this differentiator, casting fear, uncertainty, and doubt against its competitors who depended upon third parties for this key component.
But time passes, priorities change, and the company’s website now buries this differentiator on a back page…making the company sound like all its competitors.
But the company has an impressive array of features, so there’s that.
Restore your differentiators
If your differentiators have faded away, or your former differentiators are no longer important, perhaps it’s time to re-emphasize them so that your prospects have a reason to choose you.
Ask yourself questions about why your firm is great, why all the other firms suck, and what benefits (not features) your customers enjoy that the competition’s customers don’t. Only THEN can you create content (or have your content creator do it for you).
A little postscript: originally I was only going to list three items in this post, but Hana LaRock counsels against this because bots default to three-item lists (see her item 4).
Um, thanks but no thanks. When the first sentence doesn’t even bother to define the acronym “B2B,” you know the content isn’t useful to explain the topic “what is B2B writing.”
Before I explain what B2B writing is, maybe I’d better explain what “B2B” is. And two related acronyms.
B2B stands for business to business. Bredemarket, for example, is a business that sells to other businesses. In my case, marketing and writing services.
B2G stands for business to government. Kinda sorta like B2B, but government folks are a little different. For example, these folks mourned the death of Mike Causey. (I lived outside of Washington DC early in Causey’s career. He was a big deal.) A B2G company, for example, could sell driver’s license products and services to state motor vehicle agencies.
B2C stands for business to consumer. Many businesses create products and services that are intended for consumers and marketed directly to them, not to intermediate businesses. Promotion of a fast food sandwich is an example of a B2C marketing effort.
I included the “B2G” acronym because most of my years in identity and biometrics were devoted to local, state, federal, and international government sales. My B2G experience is much deeper than my B2B experience, and way deeper than my B2C expertise.
Let’s NOT make this complicated
I’m sure that Ubersuggest could spin out a whole bunch of long-winded paragraphs that explain the critical differences between the three marketing efforts above. But let’s keep it simple and limit ourselves to two truths and no lies.
TRUTH ONE: When you market B2B or B2G products or services, you have FEWER customers than when you market B2C products or services.
That’s pretty much it in terms of differences. I’ll give you an example.
If Bredemarket promoted its marketing and writing services to all of the identity verification companies, I would target fewer than 200 customers.
If IDEMIA or Thales or GET Group or CBN promoted their driver’s license products and services to all of the state, provincial, and territorial motor vehicle agencies in the United States and Canada, they would target fewer than 100 customers.
If McDonald’s resurrected and promoted its McRib sandwich, it would target hundreds of millions of customers in the United States alone.
The sheer scale of B2C marketing vs. B2B/B2G marketing is tremendous and affects how the company markets its products and services.
But one thing is similar among all three types of writing.
TRUTH TWO: B2B writing, B2G writing, and B2C writing are all addressed to PEOPLE.
Well, until we program the bots to read stuff for us.
This is something we often forget. We think that we are addressing a blog post or a proposal to an impersonal “company.” Um, who works in companies? People.
(Again, until we program the bots.)
Whether you’re marketing a business blog post writing service, a government software system, or a pseudo rib sandwich, you’re pitching it to a person. A person with problems and needs that you can potentially solve.
So solve their needs.
Don’t make it complex.
But what IS B2B writing?
Let’s return to the original question. Sorry, I got off on a bit of a tangent. (But at least I didn’t trail off into musings about “the dynamic and competitive world.”)
When I write something for a business:
I must focus on that business and not myself (customer focus). The business doesn’t want to hear me talk about myself. The business wants to hear what I can do for it.
I must acknowledge the business’s needs and explain the benefits of my solution to meet those needs. A feature list without any benefits is just a list of cool things; you still have to explain how the cool things will benefit the business by solving its problem.
My writing must address one, or more, different types of people who are hungry for my solution to their problem. (This is what Ubersuggest and others call a “target audience,” because I guess Ubersuggest aims lasers at the assembled anonymous crowd.)
She delivered a Thursday presentation entitled “Customizing generative AI applications for your business using your data.” The tool that Tanke uses for customization is Amazon Bedrock, which supports Retrieval-Augmented Generation, or RAG.
Retrieval-Augmented Generation (RAG) is the process of optimizing the output of a large language model, so it references an authoritative knowledge base outside of its training data sources before generating a response. Large Language Models (LLMs) are trained on vast volumes of data and use billions of parameters to generate original output for tasks like answering questions, translating languages, and completing sentences. RAG extends the already powerful capabilities of LLMs to specific domains or an organization’s internal knowledge base, all without the need to retrain the model. It is a cost-effective approach to improving LLM output so it remains relevant, accurate, and useful in various contexts.
Because Amazon has obviously referred to my seven questions—OK, maybe they didn’t—the RAG page devotes time to the “why” question and the “benefits” question.
So what happens when you use LLMs WITHOUT retrieval-augmented generation?
You can think of the Large Language Model as an over-enthusiastic new employee who refuses to stay informed with current events but will always answer every question with absolute confidence.
How does RAG solve these problems? It “redirects the LLM to retrieve relevant information from authoritative, pre-determined knowledge sources.” RAG allows you to introduce more current information to the LLM, which reduces cost, increases accuracy (and attributes sources), and supports better testing and improvements.
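To make the retrieve-then-generate flow concrete, here is a minimal sketch. The tiny knowledge base, the word-overlap retriever, and the prompt assembly are all placeholders of my own invention (a real deployment would use Amazon Bedrock, a proper vector store, and an actual model call); the sketch only shows where the retrieved, authoritative content slots into the prompt before generation.

```python
# Minimal RAG sketch (hypothetical components, not Amazon Bedrock's actual API):
# retrieve relevant passages from an authoritative knowledge base, then
# prepend them to the prompt so the model answers from that context.

KNOWLEDGE_BASE = [
    "Bredemarket provides marketing and writing services to identity firms.",
    "Budgetary quotations give prospects an early, non-binding price estimate.",
    "RAG retrieves authoritative content before the model generates a response.",
]

def retrieve(question: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Toy retriever: rank documents by word overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:top_k]

def build_prompt(question: str, context: list[str]) -> str:
    """Assemble the augmented prompt; the LLM call (e.g., via Bedrock) would follow."""
    context_block = "\n".join(f"- {c}" for c in context)
    return (
        "Answer using only the context below. If the answer is not there, say so.\n"
        f"Context:\n{context_block}\n\nQuestion: {question}\nAnswer:"
    )

if __name__ == "__main__":
    q = "What does RAG do before the model generates a response?"
    print(build_prompt(q, retrieve(q, KNOWLEDGE_BASE)))
```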
Years ago, Steve Martin had a routine in which he encouraged his audience to say, in unison, that they promise to be different and they promise to be unique.
No, repeating the canned phrase about standing out from the crowd does NOT make you stand out from the crowd.
But wait. It gets worse.
The authenticity bot
When I reshared Rodriguez’s post, I wanted to illustrate it with an image that showed how many people use the phrase “stand out from the crowd.”
But while I couldn’t get that exact number on my smartphone search (a subsequent laptop search revealed 477 million search results), I got something else: Google Gemini’s experimental generative AI response to the question, bereft of irony just like everything else we’ve encountered in this exercise.
You see, according to Gemini, one way to stand out from the crowd is to “be authentic.”
Yes, Google Gemini really said that.
Google search results, including generative AI results.
Now I don’t know about a bot telling me to “be authentic.”
Rodriguez addresses “how” and “why”
Going back to Taylor “Taz” Rodriguez’s post, he had a better suggestion for marketers. Instead of using canned phrases, we should instead create original answers to these two questions:
HOW do you help your clients stand apart from the competition?
WHY have your past & current clientele chosen to work with you?
BOTH questions are important, both need to be addressed, and it really doesn’t matter which one you address first.
In fact, there are some very good reasons to start with the “how” question in this case. It’s wonderful for the marketer to focus on the question of how they stand apart from the competition.
Apologies in advance, but if you’re NOT interested in fingerprints, you’ll want to skip over this Bredemarket identity/biometrics post, my THIRD one about fingerprint uniqueness and/or similarity or whatever because the difference between uniqueness and similarity really isn’t important, is it?
Yes, one more post about the study whose principal author was Gabe Guo, the self-styled “inventor of cross-fingerprint recognition.”
I also wrote something only on LinkedIn (and Facebook) that cited a CNN article that quoted Christophe Champod and Simon Cole. (Interestingly enough, my last post on Cole concerned how words matter, which is appropriate in this discussion.) Unfortunately, the person who wrote the CNN headline (“Are fingerprints unique? Not really, AI-based study finds”) didn’t pay attention to a word that Champod and Cole said.
But don’t miss this
Well, two other people have weighed in on the paper: Glenn Langenburg and Eric Ray, co-presenters on the Double Loop Podcast. (“Double loop” is a fingerprint thing.)
So who are Langenburg and Ray? You can read their full biographies here, but both of them are certified latent print examiners. This certification, administered by the International Association for Identification, is designed to ensure that the certified person is knowledgeable about both latent (crime scene) fingerprints and known fingerprints, and how to determine whether or not two prints come from the same person. If someone is going to testify in court about fingerprint comparison, this certification is recognized as a way to designate someone as an expert on the subject, as opposed to a college undergraduate. (The list of IAI certified latent print examiners, current as of December 2023, can be found here in PDF form.)
Podcast episode 264 dives into the Columbia study in detail, including what the study said, what it didn’t say, and what the publicity for the study said that doesn’t match the study.
Eric and Glenn respond to the recent allegations that a computer science undergraduate at Columbia University, using Artificial Intelligence, has “proven that fingerprints aren’t unique” or at least…that’s how the media is mischaracterizing a new published paper by Guo, et al. The guys dissect the actual publication (“Unveiling intra-person fingerprint similarity via deep contrastive learning” in Science Advances, 2024 by Gabe Guo, et al.). They state very clearly what the paper actually does show, which is a far cry from the headlines and even public dissemination originating from Columbia University and the author. The guys talk about some of the important limitations of the study and how limited the application is to real forensic investigations. They then explore some of the media and social media outlets that have clearly misunderstood this paper and seem to have little understanding of forensic science. Finally, Eric and Glenn look at some quotes and comments from knowledgeable sources who also have recognized the flaws in the paper, the authors’ exaggerations, and lack of understanding of the value of their findings.
Yes, the episode is over an hour long, but if you want to hear a good discussion of the paper that goes beyond the headlines, I strongly recommend that you listen to it.
TL;DR
If you’re in a TL;DR frame of mind, I’ll just offer one tidbit: “uniqueness” and “similarity” are not identical. Frankly, they’re not even similar.
Let’s say that your identity/biometric firm has decided that silence ISN’T golden, and that perhaps your firm needs to talk about its products and services.
So you turn to your favorite generative AI tool to write something that will represent your company in front of everyone. What could go wrong?
Battling synthetic identities requires a multi-pronged approach. Layering advanced technology is key: robust identity verification using government-issued IDs and biometrics to confirm a person’s existence, data enrichment and validation from diverse sources to check for inconsistencies, and machine learning algorithms to identify suspicious patterns and red flags. Collaboration is crucial too, from financial institutions sharing watchlists to governments strengthening regulations and consumers practicing good cyber hygiene. Ultimately, vigilance and a layered defense are the best weapons against these ever-evolving digital phantoms.
From Google Bard.
Great. You’re done, and you saved a lot of money by NOT hiring an identity blog writing expert. The text makes a lot of important points, so I’m sure that your prospects will be inspired by it.
Bot-speak is not an optimal communication strategy either. Generated at craiyon.com.
Well…
…until your prospects ask what YOU do and how you are better than every other identity firm out there. If you’re the same as all the other “me too” solutions, then your prospects will just go with the lowest price provider.
So how do you go about intelligently writing about biometrics?
Intelligently writing about biometrics doesn’t only require some critical words such as “validation.”
Intelligently writing about biometrics doesn’t only require that you KNOW what those words mean, and that you’re conversant in basic biometric topics. (If you want to know five topics a biometric content marketing expert needs to understand, read my post on that subject.)
Intelligently writing about biometrics requires that you put all of this information together AND effectively communicate your message…
…including why your identity/biometrics firm is great and why all the other identity/biometric firms are NOT great.
If you’re doing this on your own, be sure to ask yourself a lot of questions so that you get started on the right track.
If you’re asking Bredemarket to help you create your identity/biometric content by intelligently writing about biometrics, I’ll take care of the questions.
Oh, and one more thing: if you noted my use of the word “no siree” earlier in this post, it was taken from the Talking Heads song “The Big Country.” Here’s an independent video of that song, especially recommended for people outside of North America who may not realize that the United States and Canada are…well, big countries.