AI Articles in Ten (Not Five) Minutes—But I Can’t Tell You Why

More on the “human vs. AI vs. both” debate on content generation, and another alternative—the Scalenut tool.

The five-minute turnaround

I’ve been concerned about my own obsolescence for over a year now.

I haven’t seen a lot of discussion of one aspect of #generativeai:

Its ability to write something in about a minute.

(OK, maybe five minutes if you try a few prompts.)

Now I consider myself capable of cranking out a draft relatively quickly, but even my fastest work takes a lot longer than five minutes to write.

“Who cares, John? No one is demanding a five-minute turnaround.”

Not yet.

Because it was never possible before (unless you had proposal automation software, but even that couldn’t create NEW text).

What happens to us writers when a five-minute turnaround becomes the norm?

The five-minute requirement

I returned to the topic in January, with a comment on the quality of generative AI text.

Never mind that the resulting generative AI content was wordy, crappy, and possibly incorrect. For some people the fact that the content was THERE was good enough.

OK, Writer.com (with a private dataset) claims to do a better job, but many of the publicly available free generative AI tools are substandard.

Then I noted that sometimes I will HAVE to get that content out without proper reflection. I outlined two measures to do this:

  1. Don’t sleep on the content.
  2. Let full-grown ideas spring out of your head.

But I still prefer to take my time brewing my content. I’ve spent way more than five minutes on this post alone, and I don’t even know how I’m going to end it yet. And I still haven’t selected the critically important image to accompany the post.

Am I a nut for doing things manually?

You’ve gone from idea to 2500+ word articles in 10 minutes.

Now that I’ve set the context, let’s see what Kieran MacRae (quoted above) has to say about Scalenut. But first, let’s see Kieran’s comments about the state of the industry:

Sure, once upon a time, AI writing tools would write about as well as a 4-year-old.

So what does Scalenut do?

With Scalenut, you will reduce your content creation time by 75% and become a content machine. 

The content gets written in your tone of voice, and the only changes I made were adding personal anecdotes and a little Kieran charm.

But…why?

Why is Scalenut better?

Kieran doesn’t say.

And if Scalenut explains WHY its technology is so great, the description is hidden behind an array of features, benefits, and statistics.

Maybe it’s me, but Scalenut could improve its differentiation here, as outlined in my video.

Differentiation, by Bredemarket.

What Scalenut does…and doesn’t do

I should clarify that copywriting is but one part of Scalenut’s arsenal.

Scalenut is a one-stop-shop AI-powered SEO writing tool that will see you through keyword selection, research, and content production. Plus, you get full access to their copywriting tool, which can create more specific short-form content like product descriptions.

You optimize SEO content by adding NLP keywords, which are the words that Google uses to decide what an article is about.
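What counts as an “NLP keyword” depends on the tool, and Scalenut doesn’t publish its algorithm. As a toy illustration only, here is a minimal frequency-based keyword extractor in Python; the stop-word list and ranking are my own simplifications, not Scalenut’s method:

```python
import re
from collections import Counter

def candidate_keywords(text: str, top_n: int = 5) -> list[str]:
    """Rank candidate keywords by raw frequency, skipping common stop words.

    A deliberately simple stand-in for the kind of keyword analysis an
    SEO tool performs; real tools use far more signals than frequency.
    """
    stop_words = {"the", "a", "an", "and", "or", "of", "to", "in",
                  "is", "are", "that", "it", "for", "on", "with"}
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(w for w in words if w not in stop_words)
    return [word for word, _ in counts.most_common(top_n)]

sample = ("Biometric identification compares a biometric sample "
          "against enrolled biometric records for identification.")
print(candidate_keywords(sample, 3))  # ['biometric', 'identification', 'compares']
```

The idea is the same at any scale: the terms that dominate an article are the terms a search engine will associate with it.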

MacRae cautions that it’s not for “individuals whose writing is their brand,” and Scalenut’s price point means that it’s not for people who only need a few pieces a month.

But if you need a lot of content, and you’re not Stephen King or Dave Barry or John Bredehoft (not in terms of popularity, but of distinctness), then perhaps Scalenut may help you.

I can’t tell you why, though.

(And an apology for those who watch the video; like “The Long Run” album itself, it takes forever to get to the song.)

From https://www.youtube.com/watch?v=Odcn6qk94bs.

Text Generation in Images? Use an LMM.

I use both text generators (sparingly) and image generators (less sparingly) to artificially create text and images. But I encounter one image challenge that you’ve probably encountered also: bizarre misspellings.

This post includes an example, created in Google Gemini using the following prompt:

Create a square image of a library bookshelf devoted to the works authored by Dave Barry.

Now in the ideal world, Gemini would research Barry’s published titles, and the resulting image would include those book titles (such as Dave Barry Slept Here, one of the greatest history books of all time, maybe, or maybe not).

In the mediocre world, at least the book spines would include the words “Dave Barry.”

Gemini gave me nothing of the sort.

From Google Gemini.

The bookshelf may as well contain books by Charles Dikkens, the well-known Dutch author.

Why can’t your image generator spell words properly?

It always mystified me that AI-generated images had so many weird words, to the point where I wondered whether the AI was specifically programmed to misspell.

It wasn’t…but it wasn’t programmed to spell either.

TechCrunch recently published an article in which the title was so good you didn’t have to read the article itself. The title? “Why is AI so bad at spelling? Because image generators aren’t actually reading text.”

This is something that I pretty much forgot.

  • When I use an AI-powered text generator, it has been trained to respond to my textual prompts and create text.
  • When I use an AI-powered image generator, it has been trained to respond to my textual prompts and create images.

Two very different tasks, as noted by Asmelash Teka Hadgu, co-founder of Lesan and a fellow at the DAIR Institute.

“The diffusion models, the latest kind of algorithms used for image generation, are reconstructing a given input,” Hadgu told TechCrunch. “We can assume writings on an image are a very, very tiny part, so the image generator learns the patterns that cover more of these pixels.”

The algorithms are incentivized to recreate something that looks like what it’s seen in its training data, but it doesn’t natively know the rules that we take for granted — that “hello” is not spelled “heeelllooo,” and that human hands usually have five fingers.
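Hadgu’s point about lettering being a tiny part of the signal is easy to quantify with a back-of-the-envelope sketch. The image and label dimensions below are made-up, illustrative numbers, not measurements of any real training set:

```python
# Back-of-the-envelope: what fraction of a training image's pixels
# actually form lettering? (Dimensions are illustrative, not measured.)

IMAGE_SIDE = 512                     # a hypothetical square training image
LABEL_WIDTH, LABEL_HEIGHT = 120, 18  # a hypothetical book-spine label

total_pixels = IMAGE_SIDE * IMAGE_SIDE
text_pixels = LABEL_WIDTH * LABEL_HEIGHT
fraction = text_pixels / total_pixels

print(f"Lettering covers {fraction:.2%} of the image")  # well under 1%
```

If spelling contributes less than one percent of the pixels the model is rewarded for reconstructing, it’s unsurprising that the model gets the bookshelf right and “Dave Barry” wrong.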

So what’s the solution?

We need LMM image-text generators

The solution is something I’ve talked about before: large multimodal models. Permit me to repeat myself (it’s called repurposing) and quote from Chip Huyen again.

For a long time, each ML (machine learning) model operated in one data mode – text (translation, language modeling), image (object detection, image classification), or audio (speech recognition).

However, natural intelligence is not limited to just a single modality. Humans can read and write text. We can see images and watch videos. We listen to music to relax and watch out for strange noises to detect danger. Being able to work with multimodal data is essential for us or any AI to operate in the real world.

So if we ask an image generator to create an image of a library bookshelf with Dave Barry works, it would actually display book spines with Barry’s actual titles.

So why doesn’t my Google Gemini already provide this capability? It has a text generator and it has an image generator: why not provide both simultaneously?

Because that’s EXPENSIVE.

I don’t know whether Google’s Vertex AI provides the multimodal capabilities I seek, where text in images is spelled correctly.

And even with $300 in credits, I’m not going to spend the money to find out. See Vertex AI’s generative AI pricing here.

Who You Are, Plus What You Have, Equals What You Are

(Part of the biometric product marketing expert series)

Yes, I know the differences between the various factors of authentication.

Let me focus on two of the factors.

  • Something You Are. This is the factor that identifies people. It includes biometrics modalities (finger, face, iris, DNA, voice, vein, etc.). It also includes behavioral biometrics, provided that they are truly behavioral and relatively static.
  • Something You Have. While this is used to identify people, in truth this is the factor that identifies things. It includes driver’s licenses and hardware or software tokens.

There’s a very clear distinction between these two factors of authentication: “something you are” for people, and “something you have” for things.

But what happens when we treat the things as beings?

Who, or what, possesses identity?

License Plate Recognition

I’ve spent a decade working with automatic license plate recognition (ALPR), sometimes known as automatic number plate recognition (ANPR).

Actually more than a decade, since my car’s picture was taken in Montclair, California a couple of decades ago doing something it shouldn’t have been doing. I ended up in traffic school for that one.

But my traffic school didn’t have a music soundtrack. From https://www.imdb.com/title/tt0088847/mediaviewer/rm1290438144/?ref_=tt_md_2.

Now license plate recognition isn’t that reliable an identifier, since within a minute I can remove a license plate from a vehicle and substitute another one in its place. However, it’s deemed reliable enough that it is used to identify who a car is.

Note my intentional use of the word “who” in the sentence above.

  • Because when my car made a left turn against a red light all those years ago, the police didn’t haul MY CAR into court.
  • Using then-current technology, the police identified the car, looked up the registered owner, and hauled ME into court.

These days, it’s theoretically possible (where legally allowed) to identify the license plate of the car AND identify the face of the person driving the car.

But you still have this strange merger of who and what in which the non-human characteristics of an entity are used to identify the entity.

What you are.

But that’s nothing compared to what’s emerged over the past few years.

We Are The Robots

When the predecessors to today’s Internet were conceived in the 1960s, they were intended as a way for people to communicate with each other electronically.

And for decades the Internet continued to operate this way.

Until the Internet of Things (IoT) became more and more prominent.

How prominent? The Hacker News explains:

Application programming interfaces (APIs) are the connective tissue behind digital modernization, helping applications and databases exchange data more effectively. The State of API Security in 2024 Report from Imperva, a Thales company, found that the majority of internet traffic (71%) in 2023 was API calls.

Couple this with the increasing use of chatbots and other artificial intelligence bots to generate content, and the result is that when you are communicating with someone on the Internet, there is often no “who.” There’s a “what.”

What you are.

Between the cars and the bots, there’s a lot going on.

What does this mean?

There are numerous legal and technical ramifications, but I want to concentrate on the higher meaning of all this. I’ve spent 29 years professionally devoted to the identification of who people are, but this focus on people is undergoing a seismic change.

KITT. By Tabercil – Own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=14927883.

The science fiction stories of the past, including TV shows such as Knight Rider and its car KITT, are becoming the present as we interact with automobiles, refrigerators, and other things. None of them have true sentience, but it doesn’t matter because they have the power to do things.

The late Dr. Frank Poole died in 2001. From https://cinemorgue.fandom.com/wiki/Gary_Lockwood.

In the meantime, the identification industry not only has to identify people, but also identify things.

And it’s becoming more crucial that we do so, and do it accurately.

If You’re Using ChatGPT Commercially, Are You Violating Reddit’s Terms?

How to give a privacy advocate a coronary? Have OpenAI and Reddit reach an agreement.

Keeping the internet open is crucial, and part of being open means Reddit content needs to be accessible to those fostering human learning and researching ways to build community, belonging, and empowerment online. Reddit is a uniquely large and vibrant community that has long been an important space for conversation on the internet. Additionally, using LLMs, ML, and AI allow Reddit to improve the user experience for everyone.

In line with this, Reddit and OpenAI today announced a partnership to benefit both the Reddit and OpenAI user communities…

Perhaps some members of the Reddit user community may not feel the benefits when OpenAI is training on their data.

While people who joined Reddit presumably understood that anyone could view their data, they never imagined that a third party would then process their data for its own purposes.

Oh, but wait a minute. Reddit clarifies things:

This partnership…does not change Reddit’s Data API Terms or Developer Terms, which state content accessed through Reddit’s Data API cannot be used for commercial purposes without Reddit’s approval. API access remains free for non-commercial usage under our published threshold.

And, of course, OpenAI’s “primary fiduciary duty is to humanity,” so of course it is NOT using the Reddit data for commercial purposes.

And EVERY ONE of the people who accesses Reddit data through OpenAI’s offerings would NEVER use the data for commercial…

…um…

…we’ll get back to you on that.

LMM vs. LMM (Acronyms Are Funner)

Do you recall my October 2023 post “LLM vs. LMM (Acronyms Are Fun)”?

It discussed both large language models and large multimodal models. In this case “multimodal” is used in a way that I normally DON’T use it, namely to refer to the different modes in which humans interact (text, images, sounds, videos). Of course, I gravitated to a discussion in which an image of a person’s face was one of the modes.

Document processing with GPT-4V. The model’s mistake is highlighted in red. From https://huyenchip.com/2023/10/10/multimodal.html?utm_source=tldrai.

In this post I will look at LMMs…and I will also look at LMMs. There’s a difference. And a ton of power when LMMs and LMMs work together for the common good.

Revisiting the Large Multimodal Model (LMM)

Since I wrote that piece last year, large multimodal models continue to be discussed. Harry Guinness just wrote a piece for Zapier in March.

When Google announced its Gemini series of AI models, it made a big deal about how they were “natively multimodal.” Instead of having different modules tacked on to give the appearance of multimodality, they were apparently trained from the start to be able to handle text, images, audio, video, and more. 

Other AI models are starting to function in a TRULY multimodal way, rather than using separate models to handle the different modes.

So now that we know that LMMs are large multimodal models, we need to…

…um, wait a minute…

Introducing the Large Medical Model (LMM)

It turns out that the health people have a DIFFERENT definition of the acronym LMM. Rather than using it to refer to a large multimodal model, they refer to a large MEDICAL model.

As you can probably guess, the GenHealth.AI model is trained for medical purposes.

Our first of a kind Large Medical Model or LMM for short is a type of machine learning model that is specifically designed for healthcare and medical purposes. It is trained on a large dataset of medical records, claims, and other healthcare information including ICD, CPT, RxNorm, Claim Approvals/Denials, price and cost information, etc.

I don’t think I’m going out on a limb if I state that medical records cannot be classified as “natural” language. So the GenHealth.AI model is trained specifically on those attributes found in medical records, and not on people hemming and hawing and asking what a Pekingese dog looks like.

But there is still more work to do.

What about the LMM that is also an LMM?

Unless I’m missing something, the Large Medical Model described above is designed to work with only one mode of data, textual data.

But what if the Large Medical Model were also a Large Multimodal Model?

By Piotr Bodzek, MD – Uploaded from http://www.ginbytom.slam.katowice.pl/25.html with author permission., CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=372117
  • Rather than converting a medical professional’s voice notes to text, the LMM-LMM would work directly with the voice data. This could lead to increased accuracy: compare the tone of voice of an offhand comment “This doesn’t look good” with the tone of voice of a shocked comment “This doesn’t look good.” They appear the same when reduced to text format, but the original voice data conveys significant differences.
  • Rather than just using the textual codes associated with an X-ray, the LMM-LMM would read the X-ray itself. If the image model has adequate training, it will again pick up subtleties in the X-ray data that are not present when the data is reduced to a single medical code.
  • In short, the LMM-LMM (large medical model-large multimodal model) would accept ALL the medical outputs: text, voice, image, video, biometric readings, and everything else. And the LMM-LMM would deal with all of it natively, increasing the speed and accuracy of healthcare by removing the need to convert everything to textual codes.
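To make the idea concrete, here is a purely hypothetical Python sketch of an LMM-LMM front end: each input stays in its native modality and is routed to a modality-specific encoder, instead of being flattened to a textual code first. The encoder names are placeholders I invented, not any real product’s API.

```python
from dataclasses import dataclass

@dataclass
class MedicalInput:
    modality: str   # "text", "audio", "image", "video", or "biometric"
    payload: bytes  # raw data kept in its native format

# Placeholder encoder names; a real system would load actual models here.
ENCODERS = {
    "text": "clinical-text-encoder",
    "audio": "voice-note-encoder",        # tone of voice preserved
    "image": "radiology-image-encoder",   # reads the X-ray itself
    "video": "procedure-video-encoder",
    "biometric": "vital-signs-encoder",
}

def route(item: MedicalInput) -> str:
    """Select the encoder for an input's native modality.

    The point of the sketch: nothing is converted to text before encoding.
    """
    if item.modality not in ENCODERS:
        raise ValueError(f"unsupported modality: {item.modality}")
    return ENCODERS[item.modality]

print(route(MedicalInput("audio", b"offhand vs. shocked tone")))
```

The routing itself is trivial; the hard (and expensive) part is training encoders good enough that the voice note and the X-ray carry their subtleties all the way through.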

A tall order, but imagine how healthcare would be revolutionized if you didn’t have to convert everything into text format to get things done. And if you could use the actual image, video, audio, or other data rather than someone’s textual summation of it.

Obviously you’d need a ton of training data to develop an LMM-LMM that could perform all these tasks. And you’d have to obtain the training data in a way that conforms to privacy requirements: in this case protected health information (PHI) requirements such as HIPAA requirements.

But if someone successfully pulls this off, the benefits are enormous.

You’ve come a long way, baby.

Robert Young (“Marcus Welby”) and Jane Wyatt (“Margaret Anderson” on a different show). By ABC Television, uploaded by We hope at en.wikipedia (eBay item photo information), transferred from en.wikipedia by SreeBot. Public Domain, https://commons.wikimedia.org/w/index.php?curid=16472486.

How Bredemarket Helps in Early Proposal Engagement

Man, I’ve been negative lately.

I figure that it is time to become more positive.

I’m going to describe one example of how Bredemarket has helped its customers, based upon one of my client projects from several years ago.

Stupid Word Tricks. Tell your brother, your sister and your mama too. See below.

I’ve told this story before, but I wanted to take a fresh look at the problem the firm had, and the solution Bredemarket provided. I’m not identifying the firm, but perhaps YOUR firm has a similar problem that I can solve for you. And your firm is the one that matters.

The problem

This happened several years ago, but was one of Bredemarket’s first successes.

From Sandeep Kumar, A. Sony, Rahul Hooda, Yashpal Singh, in Journal of Advances and Scholarly Researches in Allied Education | Multidisciplinary Academic Research, “Multimodal Biometric Authentication System for Automatic Certificate Generation.”

I should preface this by noting that there are a lot of different biometric modalities, including some that aren’t even listed in the image above.

The firm that asked for my help is one that focuses on one particular biometric modality, and provides a high-end solution for biometric identification.

In addition, the firm’s solution has multiple applications, crime solving and disaster victim identification being two of them.

The firm needed a way to perform initial prospect outreach via budgetary quotations, targeted to the application that mattered to the prospect. A simple proposal problem to be solved…or so it seemed.

Why the obvious proposal solution didn’t work

I had encountered similar problems while employed at Printrak and MorphoTrak and while consulting here at Bredemarket, so the solution was painfully obvious.

Qvidian, one proposal automation software package that I have used. But there are a LOT of proposal automation software packages out there, including some new ones that incorporate artificial intelligence. From https://uplandsoftware.com/qvidian/.

Have your proposal writers create relevant material in their proposal automation software that could target each of the audiences.

So when your salesperson wants to approach a medical examiner involved in disaster victim identification, the proposal writer could just run the proposal automation software, create the targeted budgetary quotation, populate it with the prospect’s contact information, and give the completed quotation to the salesperson.

Unfortunately for the firm, the painfully obvious solution was truly painful, for two reasons:

  • This firm had no proposal automation software. Well, maybe some other division of the firm had such software, but this division didn’t have access to it. So the whole idea of adding proposal text to an existing software solution, and programming the solution to generate the appropriate budgetary quotation, wasn’t going to fly.
  • In addition, this firm had no proposal writers. The salespeople were doing this on their own. The only proposal writer they had was the contractor from Bredemarket. And they weren’t going to want to pay for me to generate every budgetary quotation they needed.

In this case, the firm needed a way for the salespeople to generate the necessary budgetary quotations as easily as possible, WITHOUT relying on proposal automation software or proposal writers.

Bredemarket’s solution

To solve the firm’s problem, I resorted to Stupid Word Tricks.

(Microsoft Word, not Cameo.)

I created two similar budgetary quotation templates: one for crime solving, and one for disaster victim identification. (Actually I created more than two.) That way the salesperson could simply choose the budgetary quotation they wanted.

The letters were similar in format, but had little tweaks depending upon the audience.

Using document properties to create easy-to-use budgetary quotations.

The Stupid Word Tricks came into play when I used Word document property features to allow the salesperson to enter the specific information for each prospect, which then rippled throughout the document, providing a customized budgetary quotation to the prospect.
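Word implements this with DOCPROPERTY fields that all reference a single set of document properties. As a language-neutral sketch of the same “enter once, ripple everywhere” behavior (the prospect details below are invented, not from any real quotation):

```python
from string import Template

# Mimics Word's DOCPROPERTY behavior: prospect-specific values live in
# one place and ripple through every spot in the quotation that uses them.
quotation_template = Template(
    "Budgetary Quotation for $prospect\n"
    "Prepared for $contact, $prospect\n"
    "Application: $application\n"
)

# Hypothetical prospect data; a salesperson would change only this block.
properties = {
    "prospect": "Example County Coroner",
    "contact": "Dr. Jane Roe",
    "application": "disaster victim identification",
}

print(quotation_template.substitute(properties))
```

Change one value in `properties` and every occurrence in the output changes with it, which is exactly the convenience the salespeople needed.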

The result

The firm’s salespeople used Bredemarket’s templates to generate initial outreach budgetary quotations to their clients.

And the salespeople were happy.

I’ve used this testimonial quote before, but it doesn’t hurt to use it again.

“I just wanted to truly say thank you for putting these templates together. I worked on this…last week and it was extremely simple to use and I thought really provided a professional advantage and tool to give the customer….TRULY THANK YOU!”

Comment from one of the client’s employees who used the standard proposal text

While I actively consulted for the firm I maintained the templates, updating as needed as the firm achieved additional certifications.

Why am I telling this story again?

I just want to remind people that Bredemarket doesn’t just write posts, articles, and other collateral. I can also create collateral such as these proposal templates that you can re-use.

So if you have a need that can’t be met by the painfully obvious solutions, talk to me. Perhaps we can develop our own solution.

Can Artificial Intelligence Reduce Healthcare Burnout?

Burnout in the healthcare industry is real—but can targeted artificial intelligence solutions reduce burnout?

In a LinkedIn post, healthcare company Artisight references an Advisory Board article with the following statistics:

(T)here were 7,887 nurses who recently ended their healthcare careers between 2018 and 2021….39% of respondents said their decision to leave healthcare was due to a planned retirement. However, 26% of respondents cited burnout or emotional exhaustion, and 21% cited insufficient staffing.

And this is ALL nurses. Not just the forensic nurses who have to deal with upsetting examinations that (literally) probe into sexual assault and child abuse. All nurses have it tough.

But the Artisight LinkedIn post continues with the following assertion:

At Artisight we are committed to reversing this trend through AI-driven technology that is bringing the joy back to medicine!!

Can artificial intelligence bots truly relieve the exhaustion of overworked health professionals? Let’s look at two AI solutions from 3M and Artisight and see whether they truly benefit medical staff.

3M and documentation solutions

3M. From mining and manufacturing to note-taking, biometrics, and artificial intelligence. By McGhiever – Own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=51409624

3M, a former competitor to MorphoTrak until 3M sold its biometric offerings (as did MorphoTrak’s parent Safran), has invested heavily into healthcare artificial intelligence solutions. This includes a solution that addresses the bane of medical professionals everywhere—keeping up with the paperwork (and checking for potentially catastrophic errors).

Our solutions use artificial intelligence (AI) to alleviate administrative burden and proactively identify gaps and inconsistencies within clinical documentation. Supporting completeness and accuracy every step of the way, from capture to code, means rework doesn’t end up on the physician’s plate before or even after discharge. That enables you to keep your focus where it needs to be – on the patient right in front of you.

Artisight and “smart hospitals”

But what about Artisight, whose assertion inspired this post in the first place?

A recent PYMNTS article interviewed Artisight President Stephanie Lahr to uncover Artisight’s approach.

The Artisight platform marries IoT sensors with machine learning and large language models. The overall goal in a hospital setting is to streamline safe patient care, including virtual nursing. Compliance with HIPAA, according to Lahr, has been an important part of the platform’s development, which includes computer vision, voice recognition, vital sign monitoring, indoor positioning capabilities and actionable analytics reports.

In more detail, a hospital patient room is equipped with AI-powered devices such as high-quality, two-way audio and video with multiple participants for virtual care. Ultra-wideband technology tracks the movement and flow of assets throughout the hospital. Remote nurses and observers monitor patient room activity off-site and interact virtually with patients and clinicians.

At a minimum, this reduces the need for nurses to run down the hall just to check things. At a maximum, tracking of asset flows and actionable analytics reports make the job of everyone in the hospital easier.

What about the benefits?

As Bredemarket blog readers have heard ad nauseam, simply saying that your health solution uses features such as artificial intelligence makes no difference to the medical facility. The facility doesn’t care about your features or your product—it only cares about what benefits them. (Cool feature? So what?)

By Mindaugas Danys from Vilnius, Lithuania, Lithuania – scream and shout, CC BY 2.0, https://commons.wikimedia.org/w/index.php?curid=44907034.

So how can 3M’s and Artisight’s artificial intelligence offerings benefit medical facilities?

  • Allow medical professionals to concentrate on care. Patients don’t need medical professionals who are buried in paperwork. Patients need medical professionals who are spending time with them. The circumstances that land a patient in a hospital are bad enough, and to have people who are forced to ignore patient needs makes it worse. Maybe some day we’ll even get back to Welbycare.
  • Free medical professionals from routine tasks. Assuming the solutions work as advertised, they eliminate the need to double-check a report for errors, or the need to walk down the hall to capture vital signs.
  • Save lives. Yeah, medical professionals do that. If the Marcus Welby AI bot spots an error in a report, or if the bot detects a negative change in vital signs while a nurse is occupied with another patient, the technology could very well save a life.

I’m old enough to remember Welbycare. Robert Young (“Marcus Welby”) and Jane Wyatt (“Margaret Anderson” on a different show). By ABC Television. Public Domain, https://commons.wikimedia.org/w/index.php?curid=16472486.

Now I am not a doctor and cannot evaluate whether these artificial intelligence solutions actually work (unlike some other so-called artificial intelligence solutions that were in reality powered manually). But if the solutions truly work, wonderful.

What’s YOUR healthcare story? And who can tell your story?

Four Reasons Why Differentiators Fade Away

I’ve talked ad nauseam about the need for a firm to differentiate itself from its competitors. If your firm engages in “me too” marketing, prospects have no reason to choose you.

But what about companies that DO differentiate themselves…and suddenly stop doing so?

There are four reasons why companies could stop differentiating themselves:

  1. The differentiator no longer exists.
  2. The differentiator is no longer important to prospects.
  3. The market has changed and the differentiator is no longer applicable.
  4. The differentiator still exists, but the company forgot about it.

Let’s look at these in turn.

The differentiator no longer exists

Sometimes companies gain a temporary competitive advantage that disappears as other firms catch up. But more often, the company only pursues the differentiator temporarily.

 In 1985, amid anxiety about trade deficits and the loss of American manufacturing jobs, Walton launched a “Made in America” campaign that committed Wal-Mart to buying American-made products if suppliers could get within 5 percent of the price of a foreign competitor. This may have compromised the bottom line in the short term, but Walton understood the long-term benefit of convincing employees and customers that the company had a conscience as well as a calculator. 

From https://reclaimdemocracy.org/brief-history-of-walmart/.

Now some of you may not remember Walmart’s “Made in America” banners, but I can assure you they were prevalent in many Walmarts in the 1980s and 1990s. Sam Walton’s autobiography even featured the phrase.

But as time passed, Walmart stocked fewer and fewer “Made in America” items as customers valued low prices over everything else. And some of the “Made in America” banners in Walmarts in the 1990s shouldn’t have been there:

“Dateline NBC” produced an exposé on the company’s sourcing practices. Although Wal-Mart’s “Made in America” campaign was still nominally in effect, “Dateline” showed that store-level associates had posted “Made in America” signs over merchandise actually produced in far away sweatshops. This sort of exposure was new to a company that had been a press darling for many years, and Wal-Mart’s stock immediately declined by 3 percent. 

From https://reclaimdemocracy.org/brief-history-of-walmart/.

The decline was only temporary as Walmart stock bounced back. And 20 years later, the cycle would repeat as Walmart launched a similar “Made in USA” campaign in 2013, only to run into Federal Trade Commission (FTC) enforcement actions two years later.

The differentiator is no longer important

The Walmart domestic production episodes illustrate something else. If Walmart wanted to, it could have persevered and bought from domestic suppliers, even if the supplier price differential was greater than 5%.

But the buying customers didn’t really care.

Affordability was much more important to buyers than U.S. job creation.

So while labor leaders, politicians, and others may have complained about Walmart’s increasing reliance on Chinese goods, the company’s customers continued to do business with Walmart, bringing profitability to the company.

And before you decry the actions of consumers who act against their national self-interest…where was YOUR phone manufactured? China? Vietnam? Unless you own a Librem 5 USA, your phone isn’t from around here. We’re all Commies.

The market has changed

Sometimes the market changes and consumers look at things a little differently.

I’ve previously told the story of Mita, and its 1980s slogan “all we make are great copiers.” In essence, Mita had to adopt this slogan because, unlike its competitors, it did NOT have a diversified portfolio.

This worked for a while…until the “document solutions” industry (copiers and everything else) embraced digital technologies. Well, Fuji-Xerox, Ricoh, and Konica did. Mita didn’t, and went bankrupt.

The former Mita is now part of Kyocera Document Solutions.

And the company doesn’t even offer stand-alone copiers anymore.

The company forgot

Before Walmart emphasized “Made in America” products, former (and present) stand-up comedian Steve Martin was dispensing tax advice.

“Steve.. how can I be a millionaire.. and never pay taxes?” First.. get a million dollars. Now.. you say, “Steve.. what do I say to the tax man when he comes to my door and says, ‘You.. have never paid taxes’?” Two simple words. Two simple words in the English language: “I forgot!”

From https://tonynovak.com/how-to-be-a-millionaire-and-not-pay-any-taxes/.

While the IRS will not accept this defense, there are times when people, and companies, forget things.

  • I know of one company that had a clear differentiator over most of its competition: the fact that a key component of its solution was self-authored, rather than being sourced from a third party.
  • For a time, the company strongly emphasized this differentiator, casting fear, uncertainty, and doubt against its competitors who depended upon third parties for this key component.
  • But time passes, priorities change, and the company’s website now buries this differentiator on a back page…making the company sound like all its competitors.

But the company has an impressive array of features, so there’s that.

Restore your differentiators

If your differentiators have faded away, perhaps it’s time to re-emphasize them. And if your former differentiators are no longer important, perhaps it’s time to find new ones. Either way, your prospects need a reason to choose you.

Ask yourself questions about why your firm is great, why all the other firms suck, and what benefits (not features) your customers enjoy that the competition’s customers don’t. Only THEN can you create content (or have your content creator do it for you).

A little postscript: originally I was only going to list three items in this post, but Hana LaRock counsels against this because bots default to three-item lists (see her item 4).

What is B2B Writing?

Business-to-business (B2B) writing isn’t as complex as some people say it is. It may be hard, but it’s not complex.

Why do I care about what B2B writing is?

Neil Patel (or, more accurately, his Ubersuggest service), um, suggested that I say something about B2B writing.

And then he (or it) suggested that I use generative artificial intelligence (AI) to write the piece.

I had a feeling the result was going to suck, but I clicked the “Write For Me” button anyway.

Um, thanks but no thanks. When the first sentence doesn’t even bother to define the acronym “B2B,” you know the content isn’t useful for explaining the topic “what is B2B writing.”

And this, my friends, is why I never let generative AI write the first draft of a piece.

So, what IS B2B writing?

Before I explain what B2B writing is, maybe I’d better explain what “B2B” is. And two related acronyms.

  • B2B stands for business to business. Bredemarket, for example, is a business that sells to other businesses. In my case, marketing and writing services.
  • B2G stands for business to government. Kinda sorta like B2B, but government folks are a little different. For example, these folks mourned the death of Mike Causey. (I lived outside of Washington DC early in Causey’s career. He was a big deal.) A B2G company, for example, could sell driver’s license products and services to state motor vehicle agencies.
  • B2C stands for business to consumer. Many businesses create products and services that are intended for consumers and marketed directly to them, not to intermediate businesses. Promotion of a fast food sandwich is an example of a B2C marketing effort.

I included the “B2G” acronym because most of my years in identity and biometrics were devoted to local, state, federal, and international government sales. My B2G experience is much deeper than my B2B experience, and way deeper than my B2C expertise.

Let’s NOT make this complicated

I’m sure that Ubersuggest could spin out a whole bunch of long-winded paragraphs that explain the critical differences between the three marketing efforts above. But let’s keep it simple and limit ourselves to two truths and no lies.

TRUTH ONE: When you market B2B or B2G products or services, you have FEWER customers than when you market B2C products or services.

That’s pretty much it in terms of differences. I’ll give you an example.

  • If Bredemarket promoted its marketing and writing services to all of the identity verification companies, I would target fewer than 200 customers.
  • If IDEMIA or Thales or GET Group or CBN promoted their driver’s license products and services to all of the state, provincial, and territorial motor vehicle agencies in the United States and Canada, they would target fewer than 100 customers.
  • If McDonald’s resurrected and promoted its McRib sandwich, it would target hundreds of millions of customers in the United States alone.

The difference in scale between B2C marketing and B2B/B2G marketing is tremendous, and it affects how a company markets its products and services.

But one thing is similar among all three types of writing.

TRUTH TWO: B2B writing, B2G writing, and B2C writing are all addressed to PEOPLE.

Well, until we program the bots to read stuff for us.

This is something we often forget. We think that we are addressing a blog post or a proposal to an impersonal “company.” Um, who works in companies? People.

(Again, until we program the bots.)

Whether you’re marketing a business blog post writing service, a government software system, or a pseudo rib sandwich, you’re pitching it to a person. A person with problems and needs that you can potentially solve.

So solve their needs.

Don’t make it complex.

But what IS B2B writing?

Let’s return to the original question. Sorry, I got off on a bit of a tangent. (But at least I didn’t trail off into musings about “the dynamic and competitive world.”)

When I write something for a business:

  • I must focus on that business and not myself (customer focus). The business doesn’t want to hear me talk about myself. The business wants to hear what I can do for it.
  • I must acknowledge the business’s needs and explain the benefits of my solution in meeting those needs. A feature list without any benefits is just a list of cool things; you still have to explain how the cool things will benefit the business by solving its problem.
  • My writing must address one, or more, different types of people who are hungry for my solution to their problem. (This is what Ubersuggest and others call a “target audience,” because I guess Ubersuggest aims lasers at the assembled anonymous crowd.)

Again, this is hard, but not complex.

It’s possible to make this MUCH MORE complex and create a 96-step plan to author B2B content.

But why?

So now I’ve answered the question “What is B2B writing?”

Can Bredemarket write for your business? If so, contact me.