At least in the United States, the mobile driver’s license world is fragmented.
Because driver’s license issuance in the U.S. is a state and not a federal responsibility, each state has to develop its own mobile driver’s license implementation. Subject to federal and international standards, of course.
To date there have been two parties helping the states with this:
mDL vendors such as Envoc and IDEMIA, who work with the states to create mDLs.
Operating system vendors such as Apple and Google, who work with the states to incorporate mDLs in smartphone wallets.
But because the Android ecosystem is more fragmented than the iOS ecosystem, we now have a third party that is involved in mDLs. In addition to mDL vendors and operating system vendors, we also have really large smartphone providers.
Samsung Electronics America today announced it is bringing mobile driver’s licenses and state IDs to Samsung Wallet. Arizona and Iowa will be the first states to offer a mobile version of its driver’s license to their residents. The update expands the Samsung Wallet experience by adding a convenient and secure way to use state-issued IDs and driver’s licenses
In this particular case Samsung is working with IDEMIA (the mDL provider for Arizona and Iowa), but Samsung announced that it is working with other states and with the Transportation Security Administration (TSA).
On a personal note, I’m still working on validating my driver’s license for California’s pilot mDL program. It probably didn’t help that I renewed my physical driver’s license right in the middle of the mDL validation process.
I just made an exception to my “no bots on the first draft” approach. (The Product Marketing Alliance made me do it.)
The setting
I recently attended the Product Marketing Alliance’s “Masters of Product Marketing” webinar, and the Wednesday morning (my time) workshop was led by Elliott Rayner of ARION.
The title? “Workshop – AI-powered storytelling: the definitive guide.”
Red flags. The “temperamental writer” side of me always reacts negatively to the suggestion that a person turn the writing process over to a bot.
Here is the generic prompt that Rayner shared with the workshop participants (note that this should be Christopher Booker, not Charles Brooker, and “pain point,” not “pain paint”; generative AI figured things out anyway):
“Using Charles Brooker’s plot framework “Overcoming the monster” can you write a product campaign narrative for the brand BLANK who are on a mission to BLANK. Please place focus on the customer overcoming the pain paint of BLANK within the market segment of BLANK”
From the Product Marketing Alliance workshop
My prompt
So I took the prompt above and modified it for Bredemarket.
“Using Charles Brooker’s plot framework “Overcoming the monster” can you write a product campaign narrative for the brand Bredemarket who are on a mission to equip customers with superior marketing content. Please place focus on the customer overcoming the pain paint of addressing customer needs within the market segment of financial services”
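Under the hood, the workshop's template is just string substitution, so the blank-filling can be scripted. Here is a minimal sketch in Python; the placeholder names (`brand`, `mission`, `pain_point`, `segment`) are my own, the workshop just used literal BLANKs, and I've corrected the author's name and the “pain paint” typo along the way.

```python
# Minimal sketch: filling in the workshop's generic prompt template.
# Placeholder names are mine; the workshop slide used literal "BLANK"s.
TEMPLATE = (
    'Using Christopher Booker\'s plot framework "Overcoming the monster", '
    "can you write a product campaign narrative for the brand {brand}, "
    "who are on a mission to {mission}? Please place focus on the customer "
    "overcoming the pain point of {pain_point} within the market segment "
    "of {segment}."
)

def build_prompt(brand, mission, pain_point, segment):
    """Return the filled-in prompt, ready to paste into ChatGPT or Bard."""
    return TEMPLATE.format(brand=brand, mission=mission,
                           pain_point=pain_point, segment=segment)

# The Bredemarket version of the prompt.
prompt = build_prompt(
    brand="Bredemarket",
    mission="equip customers with superior marketing content",
    pain_point="addressing customer needs",
    segment="financial services",
)
print(prompt)
```

The point of scripting it is repeatability: you can iterate on the segment or pain point (say, swapping in “financial identity verification”) without retyping the whole prompt each time.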
While Elliott used ChatGPT for his generative AI engine, I used Bard. The next section contains Bard’s response to the prompt above.
Bredemarket: Overcoming the Monster of Marketing
Plot:
Once upon a time, in the competitive and ever-changing world of financial services, there lived a company called Bredemarket. Bredemarket’s mission was to equip customers with superior marketing content, but they faced a daunting challenge: how to address the pain point of addressing customer needs within a complex and ever-evolving market segment?
Bredemarket knew that the key to overcoming this monster was to create marketing content that was both informative and engaging. They needed to create content that would educate customers about their products and services, while also inspiring them to take action.
To do this, Bredemarket assembled a team of experienced and talented marketers who were experts in the financial services industry. Together, they developed a comprehensive content strategy that included a variety of formats, such as blog posts, infographics, e-books, and videos.
Bredemarket’s content was also highly targeted. They took the time to understand the needs of their target audience and created content that was specifically relevant to them. This approach helped Bredemarket to build trust and credibility with their customers.
Over time, Bredemarket’s content marketing efforts began to pay off. They saw an increase in website traffic, leads, and sales. Their customers were also more engaged and satisfied than ever before.
Conclusion:
Bredemarket’s story is a testament to the power of content marketing. By creating informative and engaging content that addresses customer needs, Bredemarket was able to overcome the monster of marketing and achieve success.
How Bredemarket can help financial services companies overcome the monster of marketing:
Bredemarket understands the unique challenges that financial services companies face when it comes to marketing. That’s why they offer a variety of services that can help companies overcome these challenges and achieve their marketing goals.
Bredemarket’s services include:
Content strategy development
Content creation
Content distribution
Content marketing analytics
Bredemarket also offers a variety of industry-specific content packages that are specifically designed for financial services companies. These packages include content that covers a wide range of topics, such as:
Financial planning
Retirement planning
Investing
Insurance
Mortgages
If you’re a financial services company that’s struggling to address customer needs with your marketing content, Bredemarket can help. With their expertise and experience in the financial services industry, they can help you create content that is both informative and engaging, and that will help you achieve your marketing goals.
My observations on the result above
So here’s what I think of Bard’s work.
If I had taken time to improve the prompt, I probably could have received a better result. I’ve previously said that you may have to go through five iterations of a generative AI prompt to get satisfactory results, which led to my conclusion that AI yields results in five minutes (and that humans will therefore be pressured to yield quick results also). So if I had tweaked the prompt, rather than just going with the first result I received, I’d have gotten a better story. For example, rather than using the term “financial services,” perhaps I should have used “financial identity verification” to home in on my interest in financial identity.
Hallucination is rampant in the text. When generative AI doesn’t know something, it loudly asserts what it doesn’t know. Bard obviously doesn’t know a lot about Bredemarket, but it loudly proclaimed that I provide “retirement planning.” (If I knew anything about retirement planning, I’d retire by now.) And the idea of the “team of experienced and talented marketers” is kinda sorta inaccurate. You just have me.
The tone of voice is all wrong. One reason that I would never use this result for real is because it is not in Bredemarket’s conversational tone of voice. And it would be unusual for me to tell an odyssey. I’ll leave that to John Sculley. To get Bard to write like me, perhaps I can design a prompt that includes the words “mention wildebeests a lot in the response.”
Despite these drawbacks, the exercise was helpful as a brainstorming tool. It provides a framework that would allow me to write a REAL post about how Bredemarket can help financial firms (and vendors to such firms) communicate a customer-focused message about financial identity.
So in the end, it was a worthwhile exercise.
Postscript
This isn’t the first time that I’ve written about the song “The Girl and the Robot.” Roughly a decade ago, I wrote a piece for the online MungBeing Magazine entitled “Robots Dot Txt.” This wasn’t about the official video for the song, but another video documenting a “live” performance of the song.
So in the Senkveld performance, Robyn and Röyksopp (and Davide Rossi and Anneli Drecker, not present on stage but present nevertheless) make me happy by becoming flesh-and-blood robots themselves, capably performing a variety of often complex human tasks that were programmed in a recording studio several months previously.
And one of those records was so unmemorable that it was memorable.
The album, recorded in the early to mid 1960s, trumpeted the fact that the group that recorded the album was extremely versatile. You see, the record not only included surf songs, but also included car songs!
The only problem? The album was NOT by the Beach Boys.
Instead, the album was from some otherwise unknown band that was trying to achieve success by doing what the competition did. (In this case, the Beach Boys.)
I can’t remember the name of the band, and I bet no one else can either.
“Me too” in computing and lawn care
Sadly, this tactic of Xeroxing (or Mitaing) the competition is not confined to popular music. Have you noticed that so many recipes for marketing success involve copying what your competitors do?
Semrush: “Analyze your competitors’ keywords that you are not ranking for to discover gaps in your SEO strategy.”
iSpionage: “If you can emulate your competitors but do things slightly better you have a good chance of being successful.”
Someone who shall remain nameless: “Look at this piece of collateral that one of our competitors did. We should do something just like that.”
And of course the tactic of slavishly copying competitors has been proven to work. For example, remember when Apple Computer adopted the slogan “Think The Same” as the company dressed in blue, ensured all its computers could run MS-DOS, and otherwise imitated everything that IBM did?
“But John,” you are saying. “That’s unfair. Not everyone can be Apple.”
My point exactly. Everyone can’t be Apple because they’re so busy trying to imitate someone else—either a competitor or some other really popular company.
Personally, I’m waiting for some company to claim to be “the Bredemarket of satellite television.” (Which would simply mean that the company would have a lot of shows about wildebeests.) But I’ll probably have to wait a while for some company to be the Bredemarket of anything.
(An aside: while talking with a friend, I compared the British phrase “eating your pudding” to the American phrase “eating your own dog food,” although I noted that “I like to say ‘eating your own wildebeest food‘ just to stand out.” Let’s see ChatGPT do THAT.)
“Me too” in identity verification
Now I’ll tread into more dangerous territory.
Here’s an example from the identity/biometric world. Since I self-identity (heh) as the identity content marketing expert, I’m supremely qualified to cite this example.
I spent a year embedded in the identity verification industry, and got to see the messaging from my own company and by the competition.
After a while, I realized that most of the firms in the industry were saying the same thing. Here are a few examples. See if you can spot the one word that EVERY company is using:
(Company I) “Reimagine trust.”
(Company J) “To protect against fraud and financial crime, businesses online need to know and trust that their customers are who they claim to be — and that these customers continue to be trustworthy.”
(Company M) “Trust is the core of any successful business relationship. As the digital revolution continues to push businesses and financial industries towards digital-first services, gaining digital trust with consumers will be of utmost importance for survival.”
(Company O) “Create trust at onboarding and beyond with a complete, AI-powered digital identity solution built to help you know your customers online.”
(Company P) “Trust that users are who they say they are, and gain their trust by humanizing the identity experience.”
(Company V) “Stop fraud. Build trust. Identity verification made simple.”
Yes, these companies, and many others, prominently feature the t-word in their messaging.
Now perhaps some of you would argue that trust is essential to identity verification in the same way that water is essential to an ocean, and that therefore EVERYBODY HAS to use the t-word in their communications.
After all, if I was going to create content for this prospect, I had to ensure that the content stood out from its competitors.
Without revealing confidential information, I can say that I asked the firm why they were better than every other firm out there, and why all the other firms sucked. And the firm provided me with a compelling answer to that question. I can’t reveal that answer, but you can probably guess that the word “trust” was not involved.
A final thought
So let me ask you:
Why is YOUR firm better than every other firm out there, and why do all of YOUR competitors suck?
Your firm’s survival may depend upon communicating that answer.
The vast majority of people who visit the Bredemarket website arrive via Google. Others arrive via Bing, DuckDuckGo, Facebook, Feedspot, Instagram, LinkedIn, Meltwater, Twitter (WordPress’ Stats page didn’t get the memo from Elon), WordPress itself, and other sites.
Yes, people are using ChatGPT and other generative AI tools as search engines.
Patel was curious about why ChatGPT recommended Neil Patel Digital, and he started to investigate. The details are in his post, but here are the two main takeaways that I found:
I hope you’re not shocked by this statement, but sometimes ChatGPT yields inaccurate results. One example: Patel asked ChatGPT to recommend ad agencies who could provide SEO help, and received two inaccurate recommendations. “2 of the top 4 results… Moz and HubSpot are software companies and not ad agencies. They don’t really offer services.”
After a lot of experimentation and number-crunching, Patel identified six specific factors that correlated with ChatGPT’s recommendation of a particular brand: brand mentions, reviews, relevancy, age, recommendations, and authority.
For a detailed discussion of these six factors, see Patel’s post. Let’s look at one of those factors, brand mentions, that has a relatively high (0.87) correlation.
How do you increase brand mentions?
So, how do you increase brand mentions across the web to rank higher on ChatGPT, other generative AI platforms, and search engines ranging from the traditional (Google) to the nontraditional (TikTok)?
By mentioning your brand. (And getting relevant authorities to mention your brand also.)
Yes, this is a Captain Obvious statement, but it’s important. There are at least ten places where you can mention your brand.
Articles
Blog Posts
Briefs/Data/Literature Sheets
Case Studies
Proposals
Scientific Book Chapters
Smartphone Application Content
Social Media (Facebook, Instagram, LinkedIn, Threads, TikTok, Twitter)
Web Page Content
White Papers and E-Books
If this list appears familiar to some of you, that’s intentional. It’s a list of the 10 types of external content that I have created for others, and can create for you. See my post “The 22 (or more) Types of Content That Product Marketers Create” for more information on both external and internal content types that I can create.
Because the more you look at my brand content, the more my brand will be recognized by generative AI.
I was curious myself, so I tested this. ChatGPT was at capacity when I wrote this post, so I asked Bard “Who is the biometric content marketing expert?”
From Google Bard, September 26, 2023.
John Bredehoft, of the company Bredemarket, is a biometric content marketing expert. He has over 25 years of experience in biometrics and other identity modalities, including friction ridge, face, iris, voice, and DNA. He has created content for Bredemarket clients and his previous employers, including case studies, white papers, and blog posts, that have positioned his firms as biometric authorities.
Bredehoft’s expertise in biometrics and content marketing allows him to create content that is both informative and engaging for his target audience. He understands the issues behind biometric technologies and can explain them in a clear and concise way. He also knows how to write content that is optimized for search engines and social media.
If you are looking for a biometric content marketing expert, John Bredehoft is a great choice. He has the knowledge and experience to create content that will help you achieve your marketing goals.
Now I don’t know yet if ChatGPT will yield a similar response, and if not I have some more work to do.
But enough about me.
How can you increase YOUR brand mentions?
Let’s talk about you, your content marketing needs, and your need for prospects and customers to know about your brand.
Whether you want to rank in a traditional search engine or generative AI, the key is the creation of content. When you work with Bredemarket as your content creation partner, I start by discussing your goals and other critical information that is important to you. I do this before I start writing your blog post, social media post, case study, white paper, or other piece of content (car show posters, anyone?).
Let’s hold that (complimentary) discussion to see if Bredemarket’s services are a fit for your needs. Book a meeting with me at calendly.com/bredemarket. Be sure to fill out the information form so I can best help you.
Machine learning models need training data to improve their accuracy—something I know from my many years in biometrics.
And it’s difficult to get that training data—something else I know from my many years in biometrics. Consider the acronyms GDPR, CRPA, and especially BIPA. It’s very hard to get data to train biometric algorithms, so they are trained on relatively limited data sets.
At the same time that biometric algorithm training data is limited, Kevin Indig believes that generative AI large language models are ALSO going to encounter limited accessibility to training data. Actually, they are already.
The lawsuits have already begun
A few months ago, generative AI models like ChatGPT were going to solve all of humanity’s problems and allow us to lead lives of leisure as the bots did all our work for us. Or potentially the bots would get us all fired. Or something.
But then people began to ask HOW these large language models work…and where they get their training data.
Just like biometric training models that just grab images and associated data from the web without asking permission (you know the example that I’m talking about), some are alleging that LLMs are training their models on copyrighted content in violation of the law.
I am not a lawyer and cannot meaningfully discuss what is “fair use” and what is not, but suffice it to say that alleged victims are filing court cases.
Comedian and author Sarah Silverman, as well as authors Christopher Golden and Richard Kadrey — are suing OpenAI and Meta each in a US District Court over dual claims of copyright infringement.
The suits allege, among other things, that OpenAI’s ChatGPT and Meta’s LLaMA were trained on illegally-acquired datasets containing their works, which they say were acquired from “shadow library” websites like Bibliotik, Library Genesis, Z-Library, and others, noting the books are “available in bulk via torrent systems.”
This could be a big mess, especially since copyright laws vary from country to country. This description of copyright law LLM implications, for example, is focused upon United Kingdom law. Laws in other countries differ.
Systems that get data from the web, such as Google, Bing, and (relevant to us) ChatGPT, use “crawlers” to gather the information from the web for their use. ChatGPT, for example, has its own crawler.
But that only includes the sites that blocked the crawler when Originality AI performed its analysis.
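For site owners, the blocking mechanism itself is nothing exotic: it’s the decades-old robots.txt file. OpenAI publishes “GPTBot” as the user-agent for its crawler, so a site that wants to opt out of LLM training can add a rule like this (this blocks GPTBot site-wide while leaving other crawlers, such as Googlebot, untouched):

```
# robots.txt at the site root.
# "GPTBot" is the user-agent OpenAI publishes for its web crawler.
# This rule blocks it from the entire site; other crawlers are unaffected.
User-agent: GPTBot
Disallow: /
```

Of course, robots.txt is a request, not an enforcement mechanism; a crawler that ignores the convention can still read the pages.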
More sites will block the LLM crawlers
Indig believes that in the future, the share of the top 1,000 sites that block ChatGPT’s crawler will rise significantly…to 84%. His belief is based on analyzing the business models of the sites that already block ChatGPT and assuming that other sites with the same business models will also find it in their interest to block ChatGPT.
The business models that won’t block ChatGPT are assumed to include governments, universities, and search engines. Such sites are friendly to the sharing of information, and thus would have no reason to block ChatGPT or any other LLM crawler.
The business models that would block ChatGPT are assumed to include publishers, marketplaces, and many others. Entities using these business models are not just going to turn their data over to an LLM for free.
One possibility is that LLMs will run into the same training issues as biometric algorithms.
In biometrics, the same people that loudly exclaim that biometric algorithms are racist would be horrified at the purely technical solution that would solve all inaccuracy problems—let the biometric algorithms train on ALL available biometric data. In the activists’ view (and in the view of many), unrestricted access to biometric data for algorithmic training would be a privacy nightmare.
Similarly, those who complain that LLMs are woefully inaccurate would be horrified if the LLM accuracy problem were solved by a purely technical solution: let the algorithms train themselves on ALL available data.
Could LLMs buy training data?
Of course, there’s another solution to the problem: have the companies SELL their data to the LLMs.
In theory, this could provide the data holders with a nice revenue stream while allowing the LLMs to be extremely accurate. (Of course the users who actually contribute the data to the data holders would probably be shut out of any revenue, but them’s the breaks.)
But that’s only in theory. Based upon past experience with data holders, the people who want to use the data are probably not going to pay the data holders sufficiently.
Google and Meta to Canada: Drop dead / Mourir
Even today, Google and Meta (Facebook et al) are greeting Canada’s government-mandated Bill C-18 with resistance. Here’s what Google is saying:
Bill C-18 requires two companies (including Google) to pay for simply showing links to Canadian news publications, something that everyone else does for free. The unprecedented decision to put a price on links (a so-called “link tax”) breaks the way the web and search engines work, and exposes us to uncapped financial liability simply for facilitating access to news from Canadian publications….
As a result, we have informed them that we have made the difficult decision that, when the law takes effect, we will be removing links to Canadian news publications from our Search, News, and Discover products.
Google News Showcase is the program that gives money to news organizations in Canada. Meta has a similar program. Peter Menzies notes that these programs give tens of millions of (Canadian) dollars to news organizations, but that could end, despite government threats.
The federal and Quebec governments pulled their advertising spends, but those moves amount to less money than Meta will save by ending its $18 million in existing journalism funding.
Bearing in mind that Big Tech is reluctant to give journalistic data holders money even when a government ORDERS that they do so…
…what is the likelihood that generative AI algorithm authors (including Big Tech companies like Google and Microsoft) will VOLUNTARILY pay funds to data holders for algorithm training?
If Kevin Indig is right, LLM training data will become extremely limited, adversely affecting the algorithms’ use.
What does AdvoLogix say about using AI in the workplace?
AdvoLogix’s post is clear in its intent. It is entitled “9 Ways to Use AI in the Workplace.” The introduction to the post explains AdvoLogix’s position on the use of artificial intelligence.
Rather than replacing human professionals, AI applications take a complementary role in the workplace and improve overall efficiency. Here are nine actionable ways to use artificial intelligence, no matter your industry.
I won’t list ALL nine of the ways—I want you to go read the post, after all. But let me highlight one of them—not the first one, but the eighth one.
Individual entrepreneurs can also benefit from AI-driven technologies. Entrepreneurship requires great financial and personal risk, especially when starting a new business. Entrepreneurs must often invest in essential resources and engage with potential customers to build a brand from scratch. With AI tools, entrepreneurs can greatly limit risk by improving their organization and efficiency.
The AdvoLogix post then goes on to recommend specific ways that entrepreneurs can use artificial intelligence, including:
AI shopping
Use AI Chatbots for Customer Engagement
Regardless of how you feel about the use of AI in these areas, you should at least consider them as possible options.
Why did AdvoLogix write the post?
Obviously the company had a reason for writing the post, and for sharing the post with people like me (and like you).
AdvoLogix provides law firms, legal offices, and public agencies with advanced, cloud-based legal software solutions that address their actual needs.
Thanks to AI tools like Caster, AdvoLogix can provide your office with effective automation of data entry, invoicing, and other essential but time-consuming processes. Contact AdvoLogix to request a free demo of the industry’s best AI tools for law offices like yours.
So I’m not even going to provide a Bredemarket call to action, since AdvoLogix already provided its own. Good for AdvoLogix.
But what about Steven Schwartz?
The AdvoLogix post did not specifically reference Steven Schwartz, although the company stated that you should control the process yourself and not cede control to your artificial intelligence tool.
Roberto Mata sued Avianca airlines for injuries he says he sustained from a serving cart while on the airline in 2019, claiming negligence by an employee. Steven Schwartz, an attorney with Levidow, Levidow & Oberman and licensed in New York for over three decades, handled Mata’s representation.
But at least six of the submitted cases by Schwartz as research for a brief “appear to be bogus judicial decisions with bogus quotes and bogus internal citations,” said Judge Kevin Castel of the Southern District of New York in an order….
In late April, Avianca’s lawyers from Condon & Forsyth penned a letter to Castel questioning the authenticity of the cases….
Among the purported cases: Varghese v. China South Airlines, Martinez v. Delta Airlines, Shaboon v. EgyptAir, Petersen v. Iran Air, Miller v. United Airlines, and Estate of Durden v. KLM Royal Dutch Airlines, all of which did not appear to exist to either the judge or defense, the filing said.
Schwartz, in an affidavit, said that he had never used ChatGPT as a legal research source prior to this case and, therefore, “was unaware of the possibility that its content could be false.” He accepted responsibility for not confirming the chatbot’s sources.
Schwartz is now facing a sanctions hearing on June 8.
Inland Empire companies are asking the same question that other companies are asking: “Should my business use ChatGPT (or another generative AI tool) to quickly generate content?”
Bear in mind that I have a vested interest in the answer to this question, because Bredemarket’s business is content generation for other businesses. So I don’t have a natural inclination to tell you not to buy my services and use ChatGPT or Bard or whatever instead.
Nevertheless, I’ll outline the advantages and drawbacks of using automation (in this case generative AI tools) to create written content. Then I’ll look at the three options for your business’ content generation needs. (Surprise! One of those options is to contract with Bredemarket. You’ve been warned that a call to action is coming.)
An “exchange” of questions
As I mentioned in my post about creating and populating Facebook business pages, yesterday afternoon was the first time in years that I had attended a monthly Ontario IDEA Exchange meeting. For those who aren’t familiar with these meetings, they allow small business owners from the Ontario, California area to meet and…um…exchange ideas.
One of the attendees asked me what I thought of ChatGPT.
If you’ve read the Bredemarket blog or the Bredemarket LinkedIn page, you know that I have some thoughts on this. (If you haven’t seen my thoughts, I’ll share them later in this post.)
It turns out that the attendee had a reason for asking the question that went well beyond generative artificial intelligence (generative AI), and touched on automation in general.
Should we use ChatGPT, Bard, or another generative AI tool to generate text content, or should we contact a content marketing expert?
Similarly, should we use a generative AI tool to generate image content, or should we contact a graphic design expert?
Going beyond generative AI, should we use tax preparation software, or should we contact a tax preparation expert?
You can ask this same question about a variety of business services. Should you opt for the (sometimes) cheaper automated solution, or should you get a real person to perform the service?
This can be a loaded question. After all, I can’t tell you that you should always use a person for these tasks when I personally use Freepik and a well-known tax preparation software package.
So, returning to content, let’s take a (probably non-objective) look at the pros and cons of using generative AI tools for content creation. After all, some of my best friends use generative AI…and they’re still my friends (and obviously not my customers).
Generative AI content advantages
While generative AI isn’t as trendy a topic as it was a few months ago, people are still talking about using generative AI to create text content. And there are clearly some advantages to it.
It’s very fast. No, you won’t create your text in a minute. But with properly written prompts (commands to the generative AI tool) and by using a few iterations, you can probably get generative AI text content in five minutes. You won’t get text content of any length from Bredemarket in five minutes.
It’s (usually) free. In most cases, you can obtain your text at no cost to you. Again, Bredemarket doesn’t work for free.
It’s easy. You don’t need to know how to write. Sure, you need to know how to write the prompt, but you don’t need to worry about the mechanics of writing the text itself, and when to use “its” or “it’s”. You don’t want to sound like a looser, after all. (That was wrong.)
It’s “good enough.” Under most circumstances, the text created by a generative AI tool gets the job done, and is obviously better than no text at all.
Generative AI content drawbacks
Perhaps I’m biased, but I think that we can all agree on the drawbacks that I’ve listed below.
Potential accuracy issues. Generative AI content is sourced from the Internet, and you know how accurate the Internet can be at times. For example, when I was working for my former employer, I asked one popular generative AI tool to tell me the weaknesses of my employer’s product. The tool returned two responses, one of which was plain wrong.
More potential accuracy issues. The way generative AI tools work, most of them provide their answers as if they were most certainly true, with no attempt to provide the probability that the response is correct or incorrect. In essence, Mr. Gen AI is an automated know-it-all with complete certainty in its answers, even when they are obviously wrong.
Potential relevancy issues. Depending upon your generative AI tool, you may not have access to the latest data and may instead rely on older data that was available when the generative AI tool was trained. At one point, several of the popular generative AI tools were trained on data from 2021 or earlier, which meant that they knew nothing about more recent events.
Potential privacy issues. When you feed prompts to a generative AI tool, the prompts are stored by the tool. What if you’re using a public tool, and you’re feeding confidential data from your company to the tool? What’s going to happen to your company’s private data? (This is especially critical when the data crosses international borders, something that gave Samsung serious concern.)
Potential tone of voice issues. If your company feeds a prompt to a generative AI tool, and your competitor feeds the same prompt to the same tool, you will get similar or identical responses. If both companies use these responses, then the companies will sound like bland “me-too” providers, with no differentiation.
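On the privacy drawback above: one common mitigation is to scrub obvious identifiers from a draft before it ever reaches a public tool. Here is a hedged sketch in Python; the regular expressions are my own illustration, catch only obvious patterns, and are no substitute for a real data-loss-prevention policy:

```python
import re

# Illustrative patterns only: real confidential data takes many forms,
# and simple regexes like these only catch the obvious cases.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b")

def redact(text: str) -> str:
    """Replace obvious identifiers before the text leaves the company."""
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text

draft = "Contact Jane Doe at jane.doe@example.com or 909-555-1212."
safe = redact(draft)
# safe == "Contact Jane Doe at [EMAIL] or [PHONE]."
```

Even with a scrubbing step like this, the safest assumption (as San Jose’s IT department advises later in this post) is that anything you enter into a public tool may become public.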
I’ve touched on a number of these issues in more detail in pieces that I’ve written on the Bredemarket blog (and in one case on LinkedIn). These explain when I use generative AI, and when I do not.
Depending upon your business needs, your budget, and your marketing direction, you have three choices.
(UPDATE OCTOBER 23, 2023: “SIX QUESTIONS YOUR CONTENT CREATOR SHOULD ASK YOU” IS SO 2022. DOWNLOAD THE NEWER “SEVEN QUESTIONS YOUR CONTENT CREATOR SHOULD ASK YOU” HERE.)
Use a generative AI tool to write your content. It’s fast, it’s free, it’s easy. It may be good enough.
Use a low-cost writer to write your content. If you want to spend a little money, you can find writers who will work for pennies per word. That costs more than a generative AI tool, but you have a better chance of getting the content you want.
Use Bredemarket. I cost more than pennies per word (the cost for my standard 400-600 word writing package is in the brochure on my Bredemarket 400 Short Writing Service page), but Bredemarket’s collaborative kickoff process, including the opening six questions that I ask you, helps to ensure that your firm will get the content that you need.
Authorize Bredemarket, Ontario California’s content marketing expert, to help your firm produce words that return results.
Bonus: here’s what Google Bard says on the topic (written by Google, not by me)
Sure, here are three advantages and three drawbacks of using generative AI to create written content:
Advantages:
Speed: Generative AI can create content much faster than humans. This can be a major advantage for businesses that need to produce a lot of content quickly, such as content marketing agencies or news organizations.
Cost-effectiveness: Generative AI can be a more cost-effective way to create content than hiring human writers. This is especially true for tasks that are repetitive or time-consuming, such as writing product descriptions or blog posts.
Accuracy: Generative AI can be very accurate in generating content that is factually correct. This is because it is trained on large datasets of text and code.
Drawbacks:
Creativity: Generative AI is not as creative as humans. It can only generate content that is based on the data it has been trained on. This means that it may not be able to come up with new ideas or insights.
Bias: Generative AI can be biased, depending on the data it is trained on. This is a concern because it could lead to the creation of content that is inaccurate or harmful.
Plagiarism: Generative AI can sometimes plagiarize content from other sources. This is because it is trained on a variety of text data, including some that may not be properly cited.
Overall, generative AI can be a valuable tool for creating written content. However, it is important to be aware of its limitations and to use it responsibly.
Here are some additional things to keep in mind when using generative AI to create written content:
Always fact-check the content before publishing it. Generative AI can sometimes make mistakes, so it is important to double-check the information.
Be clear about the purpose of the content. Are you trying to inform, persuade, or entertain? The purpose of the content will help you determine the tone and style of the writing.
Be aware of the audience. Who are you writing for? What are their interests and needs? Tailor the content to your audience.
Use generative AI as a tool, not a replacement for human creativity. Generative AI can be a great way to get started with a writing project, but it is important to put your own unique spin on the content.
MCLEAN, Va., May 2, 2023 /PRNewswire/ — The West Virginia University Research Corporation (WVURC) and Pangiam, a leading trade and travel technology company, announced a new partnership to conduct research and develop new, cutting-edge artificial intelligence, machine learning and computer vision technologies for commercial and government applications.
Pangiam and WVURC will work together to launch Pangiam Bridge, a cutting-edge artificial intelligence driven solution for customs authorities worldwide. Pangiam Bridge will allow customs officials to automate portions of the customs inspection process for baggage and cargo. Jim McLaughlin, Pangiam Chief Technology Officer, said, “we are excited to grow Pangiam’s artificial intelligence work in partnership with West Virginia University and continued development of Pangiam Bridge for customs authorities.”
Pangiam Bridge is obviously not ready for prime time yet; it’s not even mentioned on Pangiam’s Products and Services page, or anywhere else on Pangiam’s website. The only mention of Pangiam Bridge is in this press release, which isn’t surprising considering that this is a research effort. But if the research pans out, then many of the manual processes used by customs agents may be significantly reduced or eliminated entirely.
Project DARTMOUTH is the collaboration between Pangiam and Google Cloud, named after the 1956 Dartmouth Summer Research Project on Artificial Intelligence. Project DARTMOUTH utilizes AI and pattern analysis technologies to digest and analyze vast amounts of data in real-time and identify potential prohibited items in carry-on baggage, checked baggage, airline cargo and shipments.
Years ago, I joked that Printrak/Motorola/MorphoTrak/IDEMIA’s research and development group shouldn’t be researching fingerprint algorithms, but should REALLY be researching human cloning. This would allow the company to have multiple “Peter Lo” entities researching and developing new fingerprint algorithms.
Today the jokes are becoming reality. Not for biometric scientists (yet), but for online gamers.
What (or who) is Kwebbelkop AI?
The Publish Press’ most recent daily newsletter begins with the article “Kwebbelkop Turns to VTubing.” I can forgive you if two of the four words in that title are puzzling; I hadn’t heard of Kwebbelkop or “VTubing” either.
(Before I proceed to explain Kwebbelkop and VTubing, I’d like to encourage you to subscribe to The Publish Press yourself, via my link. You get an informative daily newsletter focused on creators, while I get stickers and stuff.)
“The Digital Kwebbelkop,” as the new character is known, was developed at Van Den Bussche’s company, JVDBStudios. The gamer and his team have revealed several new tools: They are training VTubers to mimic specific creators (Kwebbelkop, in this case) and using AI technology to streamline the video production process. The result, at least in the first Digital Kwebbelkop video, is a character who looks like Van Den Bussche, sounds like him, and plays Minecraft like him.
It’s important to differentiate between VTubers, where a person controls the animated appearance of the avatar (a la our old friend Max Headroom), and an AI VTuber, where AI controls all the actions of the avatar.
Kwebbelkop AI is not the first AI VTuber. In fact, this is the second AI VTuber that Kwebbelkop has created himself; his first one is Bloo.
Another AI VTuber is Neuro-sama. This Live3D article discusses the “AI” part of Neuro-sama:
Deep Learning: Deep learning endows AI avatar with the ability to understand viewers chats in stream. It uses large text database training to enable AI vtuber to have the ability to understand and answer questions.
It is a very tough problem to create an AI VTuber who can appear to play video games in a realistic manner, and even converse with people watching it (them) play.
It’s an even tougher problem to create an AI entity that can develop accurate biometric algorithms.
The Bredemarket blog has previously considered how private companies like Samsung and Adobe use generative AI. Government use is similar, yet differs in some ways. Let’s see how San Jose, California approaches it.
Privacy is also a concern, and IT advises generative AI users to assume any information entered will be exposed to the public. Materials unready for publication shouldn’t be entered, nor should private emails. Employees looking for help drafting emails should avoid copy-pasting messages into generative AI, instead prompting the tools to write a generic message they can fact-check or augment with personalized details. The guidelines advise users to fact-check with multiple credible sources, including peer-reviewed journals and official documents.
This is a big concern for private companies, also.
But there are also issues that governments need to consider that private companies may not need to address.
One consideration is that government writing requires a particular style. Senate bills, for example, are written with a certain structure and formality. The city also uses gender-neutral language and the term “resident” rather than “citizen.”
Of course private companies have their own writing styles, but the world won’t come to an end if the IBM memorandum includes the word “gnarly.” But the wrong word in a Senate bill, or the use of the term “citizen” in a blue state, could be catastrophic.
One thing is clear: San Jose Chief Information Officer Khaled Tawfik doesn’t think that general-purpose generative AI will cut it.
San Jose has talked with several vendors about the possibility of AI trained on data from government, potentially restricted to San Jose data only.
As I noted in my post about Writer.com, this also allows implementation of privacy restrictions that could help avert problems if an employee inputs confidential information into the tool.
For the moment, San Jose is asking employees and contractors to log all use of generative AI. This will be referenced as the city develops its guidelines and policies in the future. As the city says:
Generative Artificial Intelligence (AI) is a new branch of AI technology that can generate content—such as stories, poetry, images, voice, and music— at the request of a user. Many organizations have banned Generative AI, while others allow unrestricted usage. The City recognizes the opportunity for a controlled and responsible approach that acknowledges the benefits to efficiency while minimizing the risks around AI bias, privacy, and cybersecurity.
This is the first step in a collaborative process to develop the City’s overall AI policy. Registered users will be invited to join the Information Technology Department in a working group to share their experience and co-develop the City’s AI policies.