Google Docs format sharing non-standards

In the past, I’ve mentioned that blog posts are often transitory. I hope this one is transitory and that I can come up with a technical or a procedural solution, because as of now I don’t have one.

Most of my writing experience over the past…many years has been with various versions of Microsoft Word, with the exception of a brief period in which I used FullWrite Professional. (The then-current version of Microsoft Word was NOT Mac-like, and FullWrite Professional was reputed to have a more graphical interface. It did…but it was slower than molasses, so we switched back to Word.)

Google Docs. Can’t you tell?

However, two of my recent projects used Google Docs instead of Microsoft Word, primarily because of Google Docs’ excellent collaboration features, including:

  • The ability to share a document with multiple collaborators, even outside of your organization.
  • The ability to restrict collaboration permissions to only allow commenting, or only allow viewing.
  • The ability to assign items to collaborators, and have the collaborators resolve them.
  • The ability for multiple people to edit the document at the same time. (This could be a drawback, but in my experience it was a plus.)

Everything worked well in my first project, a fairly simple one in which my collaborator took final ownership at the end and performed the final formatting.

It was a different story with my second, much more complex project.

From my experience with Microsoft Word, I knew that a person could modify the existing styles, define new styles, and ensure that anyone who edited the document used the defined styles.

Google Docs is…a little different, as I found out the hard way.

First, Google Docs doesn’t allow you to define custom styles. So if I’m doing work for WidgetCorp, I can’t define styles like WidgetCorpAnswer or WidgetCorpHeading1.

Second, even Google Docs’ standard set of styles is limited.

Normal, Title/Subtitle, and six levels of Headings. That’s it. Forget about special styles for captions, table contents, or anything else.

Third, while Google Docs lets you modify these nine standard styles, there’s no good way to let other people use your modified styles. Here’s how I shared the issue in a Google community:

How can style customizations be shared between collaborators to avoid this issue?

Person A and Person B are collaborating on a Google Docs document.

Person A defines custom styles for Heading 1 and Heading 2.

Person B moves text from Heading 2 to Heading 1.

Person B’s heading style (not Person A’s heading style) is applied.

Person A has to reformat the text in Person A’s custom Heading 1 style. 

This happened on a recent project. I was Person B.

I was hoping that there was some easy way for “Person A” to share style definitions with “Person B” without having those become Person B’s defaults. (Because, after all, Person B may eventually collaborate with Person C, who has different style preferences.)

Sadly, my hopes were dashed:

I feel your pain. Sadly, there currently isn’t any way around this, as editors have free rein. The only solution would be to change their access from “Editor” to “Commenter.”

In other words, only one person can perform the final formatting of a Google Docs document. While this restriction may be acceptable in some business workflows, it isn’t acceptable in all of them.

So is there a solution that allows multiple people to format a Google Docs document, and use common formatting styles?

(And before you say “Use Word,” I should also add that real-time collaboration is essential.)
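One partial workaround, which I haven’t battle-tested, uses Google Apps Script’s DocumentApp service: keep a “style master” document that holds Person A’s definitions, and run a small script that copies the nine built-in style definitions from the master into the shared document whenever they drift. (The document IDs below are placeholders, and this sketch assumes you have edit access to both documents.)

```javascript
// Sketch: copy the nine built-in paragraph styles (Normal, Title,
// Subtitle, Heading 1-6) from a "style master" document into a shared
// document, so the shared document's headings match the master's
// definitions. Run from script.google.com.
function copyHeadingStyles(masterId, targetId) {
  var master = DocumentApp.openById(masterId);
  var target = DocumentApp.openById(targetId);
  var headings = [
    DocumentApp.ParagraphHeading.NORMAL,
    DocumentApp.ParagraphHeading.TITLE,
    DocumentApp.ParagraphHeading.SUBTITLE,
    DocumentApp.ParagraphHeading.HEADING1,
    DocumentApp.ParagraphHeading.HEADING2,
    DocumentApp.ParagraphHeading.HEADING3,
    DocumentApp.ParagraphHeading.HEADING4,
    DocumentApp.ParagraphHeading.HEADING5,
    DocumentApp.ParagraphHeading.HEADING6
  ];
  // For each style, read the master's attributes and apply them to
  // the target document's corresponding style definition.
  headings.forEach(function (h) {
    target.setHeadingAttributes(h, master.getHeadingAttributes(h));
  });
}
```

This doesn’t stop a collaborator’s personal defaults from being applied in the first place, but rerunning the script (or putting it on a time-driven trigger) would snap the shared document back to the master’s definitions.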

As I said, I hope this post is transitory and I can come up with an acceptable solution.

(But even then, there are other drawbacks in Google Docs, including the inability to automatically number figure and table captions…)

Behind the scenes postscript: I was getting ready to write another post that referenced this post, and then I discovered that I had never actually finalized or shared this post. So now I’ve shared it, primarily so that I can reference it in the future without confusing everyone.

Today’s “biometrics is evil” post (Amazon One)

I can’t recall who recorded it, but there’s a radio commercial heard in Southern California (and probably nationwide) that intentionally ridicules people who willingly give up their own personally identifiable information (PII) for short-term gain. In the commercial, both the husband and the wife willingly give away all sorts of PII, including I believe their birth certificates.

While voluntary surrender of PII happens all the time (when was the last time you put your business card in a drawing bowl at a restaurant?), people REALLY freak out when the information that is provided is biometric in nature. But are the non-biometric alternatives any better?

TechCrunch, Amazon One, and Ten Dollars

TechCrunch recently posted “Amazon will pay you $10 in credit for your palm print biometrics.”

If you think that the article details an insanely great way to make some easy money from Amazon, then you haven’t been paying attention to the media these last few years.

The article begins with a question:

How much is your palm print worth?

The article then describes how Amazon’s brick-and-mortar stores in several states have incorporated a new palm print scanner technology called “Amazon One.” This technology reads both friction ridge and vein information from a shopper’s palm. The palm data is then associated with a pre-filed credit card, allowing the shopper to simply wave a palm to buy the items in the shopping cart.

There is nothing new under the sun

Amazon One is the latest take on processes that have been implemented several times before. I’ll cite three examples.

Pay By Touch. The first one that comes to my mind is Pay By Touch. While the management of the company was extremely sketchy, the technology (provided by Cogent, now part of Thales) was not. In many ways the business idea was ahead of its time, and it had to deal with challenging environmental conditions: the fingerprint readers used for purchases were positioned near the entrances/exits to grocery stores, which could get really cold in the winter. Couple this with the elderly population that used the devices, and it was sometimes difficult to read the fingers themselves. Yet, this relatively ancient implementation is somewhat similar to what Amazon is doing today.

University of Maryland Dining Hall. The second example occurred to me because it came from my former employer (MorphoTrak, then part of Safran and now part of IDEMIA), and was featured at a company user conference for which I coordinated speakers. There’s a video of this solution, but sadly it is not public. I did find an article describing the solution:

With the new system students will no longer need a UMD ID card to access their own meals…

Instead of pulling out a card, the students just wave their hand through a MorphoWave device. And this allows the students to pay for their meals QUICKLY. Good thing when you’re hungry.

This Pay and That Pay. But the most common example that everyone uses is Apple Pay, Google Pay, Samsung Pay, or whatever “pay” system is supported on your smartphone. Again, you don’t have to pull out a credit card or ID card. You just have to look at your phone or swipe your finger on the phone, and payment happens.

Amazon One is the downfall of civilization

I don’t know if TechCrunch editorialized against Pay By Touch or [insert phone vendor here] Pay, and it probably never heard of the MorphoWave implementation at the University of Maryland. But Amazon clearly makes TechCrunch queasy.

While the idea of contactlessly scanning your palm print to pay for goods during a pandemic might seem like a novel idea, it’s one to be met with caution and skepticism given Amazon’s past efforts in developing biometric technology. Amazon’s controversial facial recognition technology, which it historically sold to police and law enforcement, was the subject of lawsuits that allege the company violated state laws that bar the use of personal biometric data without permission.

Oh well, at least TechCrunch didn’t say that Amazon was racist. (If you haven’t already read it, please read the Security Industry Association’s “What Science Really Says About Facial Recognition Accuracy and Bias Concerns.” Unless you don’t like science.)

OK, back to Amazon and Amazon One. TechCrunch also quotes Albert Fox Cahn of the Surveillance Technology Oversight Project.

People Leaving the Cities, photo art by Zbigniew Libera, imagines a dystopian future in which people have to leave dying metropolises. By Zbigniew Libera – https://artmuseum.pl/pl/kolekcja/praca/libera-zbigniew-wyjscie-ludzi-z-miast, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=66055122.

“The dystopian future of science fiction is now. It’s horrifying that Amazon is asking people to sell their bodies, but it’s even worse that people are doing it for such a low price.”

“Sell their bodies.” Isn’t it even MORE dystopian when people “give their bodies away for free” when they sign up for Apple Pay, Google Pay, or Samsung Pay? While the Surveillance Technology Oversight Project (acronym STOP) expresses concern about digital wallets, there is a significant lack of horror in its description of them.

Digital wallets and contactless payment systems like smart chips have been around for years. The introduction of Apple Pay, Amazon Pay, and Google Pay have all contributed to the e-commerce movement, as have fast payment tools like Venmo and online budgeting applications. In response to COVID-19, the public is increasingly looking for ways to reduce or eliminate physical contact. With so many options already available, contactless payments will inevitably gain momentum….

Without strong federal laws regulating the use of our data, we’re left to rely on private companies that have consistently failed to protect our information. To prevent long-term surveillance, we need to limit the data collected and shared with the government to only what is needed. Any sort of monitoring must be secure, transparent, proportionate, temporary, and must allow for a consumer to find out about or be alerted to implications for their data. If we address these challenges now, at a time when we will be generating more and more electronic payment records, we can ensure our privacy is safeguarded.

So STOP isn’t calling for the complete elimination of Amazon Pay. But apparently it wants to eliminate Amazon One.

Is a world without Amazon One a world with less surveillance?

Whenever you propose to eliminate something, you need to look at the replacement and see if it is any better.

In 1998, Fox fired Bill Russell as the manager of the Los Angeles Dodgers. He had a win-loss percentage of .538. His replacement, Glenn Hoffman, lasted less than a season and had a percentage of .534. Hoffman’s replacement, true baseball man Davey Johnson, compiled a percentage of .503 over the next two seasons before he was fired. Should have stuck with Russell.

Anyone who decides (despite the science) that facial recognition is racist is going to have to rely on other methods to identify criminals, such as witness identification. Witness identification has documented inaccuracies.

And if you think that elimination of Amazon One from Amazon’s brick-and-mortar stores will lead to a privacy nirvana, think again. If you don’t use your palm to pay for things, you’re going to have to use a credit card, and that data will certainly be scanned by the FBI and the CIA and the BBC, B. B. King, and Doris Day. (And Matt Busby, of course.) And even if you use cash, the only way that you’ll preserve any semblance of your privacy is to pay anonymously and NOT tie the transaction to your Amazon account.

And if you’re going to do that, you might as well skip Whole Foods and go straight to Dollar General. Or maybe not, since Dollar General has its own app. And no one calls Dollar General dystopian. Wait, they do: “They tend to cluster, like scavengers feasting on the carcasses of the dead.”

I seem to have strayed from the original point of this post.

But let me sum up. It appears that biometrics is evil, Amazon is evil, and Amazon biometrics are Double Secret Evil.

Is your home your castle when you use consumer doorbell facial recognition?

For purposes of this post, I will define three entities that can employ facial recognition:

  • Public organizations such as governments.
  • Private organizations such as businesses.
  • Individuals.

Some people are very concerned about facial recognition use by the first two categories of entities.

But what about the third category, individuals?

Can individuals assert a Constitutional right to use facial recognition in their own homes? And what if said individuals live in Peoria?

Concerns about ANY use of facial recognition

Let’s start with an ACLU article from 2018 regarding “Amazon’s Disturbing Plan to Add Face Surveillance to Your Front Door.”

Let me go out on a limb and guess that the ACLU opposes the practice.

The article was prompted by an Amazon 2018 patent application which involved both its Rekognition facial recognition service and its Ring cameras.

One of the figures in Amazon’s patent application, courtesy the ACLU. https://www.aclunc.org/docs/Amazon_Patent.pdf

While the main thrust of the ACLU article concerns acquisition of front door face surveillance (and other biometric) information by the government, it also briefly addresses the entity that is initially performing the face surveillance: namely, the individual.

Likewise, homeowners can also add photos of “suspicious” people into the system and then the doorbell’s facial recognition program will scan anyone passing their home.

I should note in passing that ACLU author Jacob Snow is describing a “deny list,” which flags people who should NOT be granted access, such as that pesky solar power salesperson. In most cases, consumer products tout the use of an “allow list,” which flags people who SHOULD be granted access, such as family members.

Regardless of whether you’re discussing a deny list or an allow list, the thrust of the ACLU article isn’t that governments shouldn’t use facial recognition. The thrust of the article is that facial recognition shouldn’t be used at all.

The ACLU and other civil rights groups have repeatedly warned that face surveillance poses an unprecedented threat to civil liberties and civil rights that must be stopped before it becomes widespread.

Again, not face surveillance by governments, but face surveillance period. People should not have the, um, “civil liberties” to use the technology.

But how does the tech world approach this?

The reason that I cited that particular ACLU article was that it was subsequently referenced in a CNET article from May 2021. This article bore the title “The best facial recognition security cameras of 2021.”

Let me go out on a limb and guess that CNET supports the practice.

The last part of author Megan Wollerton’s article delves into some of the issues regarding facial recognition use, including those raised by the ACLU. But the bulk of the article talks about really cool tech.

As I stated above, Wollerton notes that the intended use case for home facial recognition security systems involves the creation of an “allow list”:

Some home security cameras have facial recognition, an advanced option that lets you make a database of people who visit your house regularly. Then, when the camera sees a face, it determines whether or not it belongs to someone in your list of known faces. If the recognition system does not know who is at the door, it can alert you to an unknown person on your property.

Obviously you could repurpose such a system for anything you want, provided that you can obtain a clear picture of the face of the pesky solar power salesperson.

Before posting her reviews of various security systems, and after a brief mention (expanded later in the article) about possible governmental misuse of facial recognition, Wollerton redirects the conversation.

But let’s step back a bit to the consumer realm. Your home is your castle, and the option of having surveillance cameras with facial recognition software is still compelling for those who want to be on the cutting edge of smart home innovation.

“Your home is your castle” may be a distinctly American concept, but it certainly applies here as organizations such as, um, the ACLU defend a person’s right against unreasonable actions by governments.

Obviously, there are limits to ANY Constitutional right. I cannot exercise my Fourth Amendment right to be secure in my house, couple that with my First Amendment right to freely exercise my religion, and conclude that I have the unrestricted right to perform ritual child sacrifices in my home. (Although I guess if I have a home theater and only my family members are present, I can probably yell “Fire!” all I want.)

So perhaps I could mount an argument that I can use facial recognition at my house any time I want, if the government agrees that this right is “reasonable.”

But it turns out that other people are involved.

You knew I was going to mention Illinois in this post

OK, it’s BIPA time.

As I previously explained in a January 2021 post about the Kami Doorbell Camera, “BIPA” is Illinois’ Biometric Information Privacy Act. This act imposes constraints on a private entity’s use of biometrics. (Governments are excluded under BIPA.) And here’s how BIPA defines the term “private entity”:

“Private entity” means any individual, partnership, corporation, limited liability company, association, or other group, however organized. A private entity does not include a State or local government agency. A private entity does not include any court of Illinois, a clerk of the court, or a judge or justice thereof.

Did you see the term “individual” in that definition?

So BIPA not only affects a company’s use of biometrics, such as use of biometrics by Google or by a theme park or by a fitness center. It also affects the use of biometrics by an individual such as Harry or Harriet Homeowner.

As I previously noted, Google does not sell its Nest Cam “familiar face alert” feature in Illinois. But I guess it’s possible (via location spoofing if necessary) for someone to buy Nest Cam familiar face alerts in Indiana, and then sneak the feature across the border and implement it in the Land of Lincoln. But while this may (or may not) get Google off the hook, the individual is in a heap of trouble (should a trial lawyer decide to sue the individual).

Let’s face it. The average user of Nest Cam’s familiar face alerts, or the Kami Doorbell Camera, or any other home security camera with facial recognition (note that Amazon currently is not using facial recognition in its consumer products), is probably NOT complying with BIPA.

A private entity in possession of biometric identifiers or biometric information must develop a written policy, made available to the public, establishing a retention schedule and guidelines for permanently destroying biometric identifiers and biometric information when the initial purpose for collecting or obtaining such identifiers or information has been satisfied or within 3 years of the individual’s last interaction with the private entity, whichever occurs first.

I mean it’s hard enough for Harry and Harriet to get their teenage son to acknowledge receipt of the Homeowner family’s written policy for the use of the family doorbell camera. And you can forget about getting the pesky solar power salesperson to acknowledge receipt.

So from a legal perspective, it appears that any individual homeowner who installs a facial recognition security system can be hauled into civil court under BIPA.

But will these court cases be filed from a practical perspective?

Probably not.

When a social media company violates BIPA, the violation conceivably affects millions of individuals and can result in millions or billions of dollars in civil damages.

When the pesky solar power salesperson discovers that Harry and Harriet Homeowner have violated BIPA, the damages would be limited to $1,000 or $5,000 per violation, plus relevant legal fees.

It’s not worth pursuing, any more than it’s worth pursuing the Illinois driver who is speeding down the expressway at 66 miles per hour.

Will the Kami Doorbell Camera sell in Illinois?

There was a recent press release that I missed until Biometric Update started talking about it two days later. The January 19 press release from Kami was entitled “Kami Releases Smart Video Doorbell With Facial Recognition Capabilities.” The subhead announced, “The device also offers user privacy controls.”

And while reading that Kami press release, I noticed a potential issue that wasn’t fully addressed in the press release, or (so far) in the media coverage of the press release. That issue relates to that four-letter word “BIPA.”

This post explains what BIPA is and why it’s important.

  • But it starts by looking at smart video doorbells.
  • Next, it looks at this particular press release about a smart video doorbell.
  • Then we’ll look at a competitor’s smart video doorbell, and a particular decision that the competitor made because of BIPA.
  • Only then will we dive into BIPA.
  • Finally, we’ll circle back to Kami, and how it may be affected by BIPA. (Caution: I’m not a lawyer.)

What is a smart video doorbell?

Many of us can figure out what a smart video doorbell would do, since Kami isn’t the first company to offer such a product. (I’ll talk about another company in a little bit.)

The basic concept is that the owner of the video doorbell (whom I’ll refer to as the “user,” to be consistent with Kami’s terminology) manages a small database of faces that could be recognized by the video doorbell. For example, if I owned such a device, I would definitely want to enroll my face and the face of my wife, and I would probably want to enroll the faces of other relatives and close friends. Doing this would create an allowlist of people who are known to the smart video doorbell system.

However, because technology itself is neutral, I need to point out two things about a standard smart video doorbell implementation:

  • Depending upon the design, you can enroll a person into the system without the person knowing it. If the user of the system controls the enrollment, then the user has complete control over the people that are enrolled into the system. All I need is a picture of the person, and I can use that picture to enroll the person into my smart video doorbell. I can grab a picture that I took from New Year’s Eve, or I could even grab a picture from the Internet. After all, if President Joe Biden walked up to my front door, I’d definitely want to know about it. Now there are technological solutions to this; for example, liveness detection could be used to ensure that the person who is enrolling in the system is a live person and not a picture. But I’m not aware of any system that requires liveness detection for this particular use case.
  • You can enroll a person into the system for ANY reason. Usually consumer smart video doorbells are presented as a way to let you know when friends and family come to the door. But the technology has no way of detecting whether you are actually enrolling a “friend.” Perhaps you want to know when your ex-girlfriend comes to the door. Or perhaps you have a really good picture of the guy who’s been breaking into homes in your neighborhood. Now enterprise and government systems account for this by supporting separate allowlists and blocklists, but frankly you can put anyone onto any list for any reason.
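To make the two points above concrete, here’s a toy sketch (my own illustration, not any vendor’s actual algorithm) of how such a system might decide whether a face at the door is “familiar.” Real systems compare “embeddings,” numeric vectors produced by a face recognition model; here the embeddings, names, and threshold are all made up:

```javascript
// Toy illustration of allowlist/blocklist matching on face embeddings.
// A real system would obtain embeddings from a face recognition model;
// here they are just short made-up vectors.
function distance(a, b) {
  // Euclidean distance between two embedding vectors.
  var sum = 0;
  for (var i = 0; i < a.length; i++) {
    var d = a[i] - b[i];
    sum += d * d;
  }
  return Math.sqrt(sum);
}

function matchFace(embedding, lists, threshold) {
  // Check every enrolled entry on every list; return the closest
  // entry within the threshold, along with which list it came from.
  var best = null;
  ['allow', 'block'].forEach(function (listName) {
    (lists[listName] || []).forEach(function (entry) {
      var d = distance(embedding, entry.embedding);
      if (d <= threshold && (best === null || d < best.distance)) {
        best = { name: entry.name, list: listName, distance: d };
      }
    });
  });
  return best; // null means "unknown person at the door"
}
```

Note that nothing in the matching logic knows or cares why a face was enrolled; the allowlist and the blocklist are structurally identical, which is exactly the point.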

So with that introduction, let’s see what Kami is offering, and why it’s different.

The Kami Doorbell Camera

Let’s return to the Kami press release. It, as well as the description of the item in Kami’s online store, parallels a lot of the features that you can find in any smart video doorbell.

Know exactly who’s at your door. Save the faces of friends and family in your Kami or YI Home App, allowing you to get notified if the person outside your front door is a familiar face or a stranger.

And it has other features, such as an IP65 rating, meaning that the camera will continue to work outdoors in challenging weather conditions.

However, Yamin Durrani, Kami’s CEO, emphasized a particular point in the press release:

“The Kami Doorbell Camera was inspired by a greater need for safety and peace of mind as people spend more time at home and consumers’ increasing desire to reside in smart homes,” said Yamin Durrani, CEO of Kami. “However, we noticed one gaping hole in the smart doorbell market — it was lacking an extremely advanced security solution that also puts the user in complete control of their privacy. In designing our video doorbell camera we considered all the ways people live in their homes to elegantly combine accelerated intelligence with a level of customization and privacy that is unmatched in today’s market. The result is a solution that provides comfort, safety and peace of mind.”

Privacy for the user(s) makes sense, because you don’t want someone hacking into the system and stealing the pictures and other stored information. As described, Kami lets the user(s) control their own data, and the system has presumably been designed from the ground up to support this.

But Kami isn’t the only product out there.

One of Kami’s competitors has an interesting footnote in its product description

There’s this company called Google. You may have heard of it. And Google offers a product called Nest Aware. This product is a subscription service that works with Nest cameras and provides various types of alerts for activities within the range of the cameras.

And Nest even has a feature that sounds, um, familiar to Kami users. Nest refers to the feature as “familiar face detection.”

Nest speakers and displays listen for unusual sounds. Nest cameras can spot a familiar face.4 And they all send intelligent alerts that matter.

So it sounds like Nest Aware has the same type of “allowlist” feature that allows the Nest Aware user to enroll friends and family (or whoever) into the system, so that they can be automatically recognized and so you can receive relevant information.

Hmm…did you note that there is a footnote next to the mention of “familiar face”? Let’s see what that footnote says.

4. Familiar face alerts not available on Nest Cams used in Illinois.

To the average consumer, that footnote probably looks a little odd. Why would this feature not be available in Illinois, but available in all the other states?

Or perhaps the average consumer may recall another Google app from three years ago, the Google Art & Culture app. That app became all the rage when it introduced a feature that let you compare your face to the faces on famous works of art. Well, it let you perform that comparison…unless you lived in Illinois or Texas.

So what’s the big deal about Illinois?

Those of us who are active in the facial recognition industry, or people who are active in the privacy industry, are well aware of the Illinois Biometric Information Privacy Act, or BIPA. This Act, which was passed in 2008, gives Illinois residents control over the use of their biometric data. And if a company violates that control, the resident is permitted to sue the offending company. And class action lawsuits are allowed, thus increasing the possible damages to the offending company.

And there are plenty of lawyers that are willing to help residents exercise their rights under BIPA.

One early example of a BIPA lawsuit was filed against L.A. Tan. This firm offered memberships, and rather than requiring the member to present a membership card, the member simply placed his or her fingerprint onto a scanner to verify membership. But under BIPA, that could be a problem:

The plaintiffs in the L.A. Tan case alleged that the company, which used customers’ fingerprint scans in lieu of key fobs for tanning membership ID purposes, violated the BIPA by failing to obtain the customers’ written consent to use the fingerprint data and by not disclosing to customers the company’s plans for storing the data or destroying it in the event a tanning customer terminated her salon membership or a franchise closed. The plaintiffs did not claim L.A. Tan illegally sold or lost customers’ fingerprint data, just that it did not handle the data as carefully as the BIPA requires.

L.A. Tan ended up settling the case for over a million dollars, but Illinois Policy wondered:

This outcome is reassuring for anyone concerned about the handling of private information like facial-recognition data and fingerprints, but it also could signal a flood of similar lawsuits to come.

And there certainly was a flood of lawsuits. I was working in strategic marketing at the time, and I would duly note the second lawsuit filed under BIPA, and then the third lawsuit, and the fourth…Eventually I stopped counting.

As of June 2019, 324 such lawsuits had been filed in total, including 161 in the first six months of 2019 alone. And some big names have been sued under BIPA.

Facebook settled for $650 million.

Google was sued in October 2019 over Google Photos, again in February 2020 over Google Photos, again in April 2020 over its G Suite for Education, again in July 2020 over its use of IBM’s Diversity in Faces algorithm, and probably several other times besides.

So you can understand why Google is a little reluctant to sell Nest Aware’s familiar face detection feature in Illinois.

So where does that leave Kami?

Here’s where the problem may lie. Based upon the other lawsuits, it appears that lawyers are alleging that before an Illinois resident’s biometric features are stored in a database, the person has to give consent for the biometric to be stored, and the person has to be informed of his or her rights under BIPA.

So such explicit permission has to be given for every biometric database physically within the state of Illinois?

Yes…and then some. Remember that Facebook and Google’s databases aren’t necessarily physically located within the state of Illinois, but those companies have been sued under BIPA. I’m not a lawyer, but conceivably an Illinois resident could sue a Swiss company, with its databases in Switzerland, for violating BIPA.

Now when someone sets up a Kami system, does the Kami user ensure that every Illinois resident has received the proper BIPA notices? And if the Kami user doesn’t do that, is Kami legally liable?

For all I know, the Kami enrollment feature may include explicit BIPA questions, such as “Is the person in this picture a resident of Illinois?” Then again, it may not.

Again, I’m not a lawyer, but it’s interesting to note that Google, who does have access to a bunch of lawyers, decided to dodge the issue by not selling familiar face detection to Illinois residents.

Which doesn’t answer the question of an Iowa Nest Aware familiar face detection user who enrolls an Illinois resident…

If your marketing channels lack content, your potential customers may not know that you exist

[Update, January 27, 2021: a July 2020 study from Demand Gen Report explains WHY up-to-date content is important. I addressed that study in this post.]

One of Bredemarket’s most popular services is the Short Writing Service. It can help small (or large) businesses solve the content problem.

You know what the content problem is. Your business has established one or more marketing channels: a website, blog, email list, Google My Business site, Facebook, Instagram, LinkedIn, Snapchat, Tumblr, Twitter…or many others.

But the marketing channels are useless IF THEY HAVE NO CONTENT.

Or old content.

Or poorly-written content.

Maybe the information on the marketing channel is six months old, or a year old, or nine years old. (Trust me, this happens.) Or maybe there’s content on one marketing channel, but it’s never cross-posted to the other marketing channels for your business.

What are the ramifications of this? If your channels lack content, your potential customers may forget about you. And that’s NOT good for business.

I’ll use myself as a BAD example. In addition to my business blogs at Bredemarket (https://bredemarket.com/blog/) and JEBredCal (https://jebredcal.wordpress.com/blog/), I maintain several personal blogs. One of those personal blogs is Empoprise-NTN (https://empoprise-ntn.blogspot.com/), and that blog is obviously the ugly stepchild of the bunch. Between 2016 and 2019 I authored exactly ZERO posts on that blog. So if someone is looking for authoritative commentary on NTN Buzztime games, they’re obviously NOT going to look to me.

The obvious solution to the content problem is to CREATE CONTENT. Some people have no problem creating content, but others may need some help. They may not have the time (https://bredemarket.com/2020/09/25/when-you-dont-have-the-time-to-craft-your-own-text/), or they may need some help in selecting the right words to say.

Bredemarket can help you solve the content problem, one post at a time. The Bredemarket 400 Short Writing Service (https://bredemarket.com/bredemarket-400-short-writing-service/) uses a collaborative process, in which you and Bredemarket agree on a topic, Bredemarket provides a draft of the text, and the text goes through two review cycles. At the end of the process, you have the text, you own the text (this is a “work for hire”), and you can post the text on your blog or Facebook or wherever you please. Your content problem is solved! And if the post includes a call for action, your potential customers can ACT, potentially providing you with new business.

Speaking of a call for action…

If you would like to talk to Bredemarket about ways to solve your business’ content problem, contact me!
