Robert Young (“Marcus Welby”) and Jane Wyatt (“Margaret Anderson” on a different show). ABC Television, public domain, via Wikimedia Commons: https://commons.wikimedia.org/w/index.php?curid=16472486
We’ve come a long way since the days of Marcus Welby, M.D. (a fictional character).
Back then, we trusted the doctor as the sole provider of medical information. Doctor knows best!
Later, we learned about health by searching the Internet ourselves, using sources of varying trustworthiness such as pharmaceutical company commercials.
Now, we don’t even conduct the searches ourselves, but let an artificial intelligence healthcare bot search for us, even though the bot hallucinates sometimes.
A “hallucination” occurs when generative AI is convinced that its answer is correct, even when it is wrong. These hallucinations could be a problem—in healthcare, literally a matter of life or death.
For example, a counselor may tell a patient with a substance use disorder to use an app in order to track cravings, states of mind, and other information helpful in treating addiction. The app may recommend certain therapeutic actions in case the counselor cannot be reached. Setting aside preemption issues raised by Food and Drug Administration regulation of these apps, important questions in tort law arise. If these therapeutic actions are contraindicated and result in harm to the patient or others, is the app to blame? Or does the doctor who prescribed the app bear the blame?
That’s right. WHO is going to ensure that these bots can be trusted.
A World Health Organization publication…
…underscores the critical need to ensure the safety and efficacy of AI systems, accelerating their availability to those in need and encouraging collaboration among various stakeholders, including developers, regulators, manufacturers, healthcare professionals, and patients.
According to WHO, its document proposes six areas of artificial intelligence regulation for health.
To foster trust, the publication stresses the importance of transparency and documentation, such as through documenting the entire product lifecycle and tracking development processes.
For risk management, issues like ‘intended use’, ‘continuous learning’, human interventions, training models and cybersecurity threats must all be comprehensively addressed, with models made as simple as possible.
Externally validating data and being clear about the intended use of AI helps assure safety and facilitate regulation.
A commitment to data quality, such as through rigorously evaluating systems pre-release, is vital to ensuring systems do not amplify biases and errors.
The challenges posed by important, complex regulations – such as the General Data Protection Regulation (GDPR) in Europe and the Health Insurance Portability and Accountability Act (HIPAA) in the United States of America – are addressed with an emphasis on understanding the scope of jurisdiction and consent requirements, in service of privacy and data protection.
Fostering collaboration between regulatory bodies, patients, healthcare professionals, industry representatives, and government partners, can help ensure products and services stay compliant with regulation throughout their lifecycles.
My decision making process relies on extensive data analysis and aligning with the company’s strategic objectives. It’s devoid of personal bias ensuring unbiased and strategic choices that prioritize the organization’s best interests.
Mika was brought to my attention by accomplished product marketer/artist Danuta (Dana) Debogorska. (She’s appeared in the Bredemarket blog before, though not by name.) Dana is also Polish (but not Colombian) and clearly takes pride in the artificial intelligence accomplishments of this Polish-headquartered company. You can read her LinkedIn post to see her thoughts, one of which was as follows:
Data is the new oxygen, and we all know that we need clean data to innovate and sustain business models.
There’s a reference to oxygen again, but it’s certainly appropriate. Just as people cannot survive without oxygen, Generative AI cannot survive without data.
But the need for data predates AI models. From 2017:
Reliance Industries Chairman Mukesh Ambani said India is poised to grow…but to make that happen the country’s telecoms and IT industry would need to play a foundational role and create the necessary digital infrastructure.
Calling data the “oxygen” of the digital economy, Ambani said the telecom industry had the urgent task of empowering 1.3 billion Indians with the tools needed to flourish in the digital marketplace.
Of course, the presence or absence of data alone is not enough. As Debogorska notes, we don’t just need any data; we need CLEAN data, without error and without bias. Dirty data is like carbon monoxide, and as you know carbon monoxide is harmful…well, most of the time.
That’s been the challenge not only with artificial intelligence, but with ALL aspects of data gathering.
The all-male board of directors of a fertilizer company in 1960. Fair use. From the New York Times.
In all of these cases, someone (Amazon, Enron’s shareholders, or NIST) asked questions about the cleanliness of the data, and then set out to answer those questions.
In the case of Amazon’s recruitment tool and the company Enron, the answers caused Amazon to abandon the tool and Enron to abandon its existence.
Despite the entreaties of so-called privacy advocates (who prefer the privacy nightmare of physical driver’s licenses to the privacy-preserving features of mobile driver’s licenses), we have not abandoned facial recognition, but we’re definitely monitoring it in a statistical (not an anecdotal) sense.
The cleanliness of the data will continue to be the challenge as we apply artificial intelligence to new applications.
Things change. Pangiam, a company that didn’t even exist a few years ago, and that started off by acquiring a one-off project from a local government agency, is now itself a friendly acquisition target (pending stockholder and regulatory approvals).
From MWAA to Pangiam
Back when I worked for IDEMIA and helped to market its border control solutions, one of our competitors for airport business was an airport itself—specifically, the Metropolitan Washington Airports Authority. Rather than buying a biometric exit solution from someone else, the MWAA developed its own, called veriScan.
2021 image from the former airportveriscan website.
ALEXANDRIA, Va., March 19, 2021 /PRNewswire/ — Pangiam, a technology-based security and travel services provider, announced today that it has acquired veriScan, an integrated biometric facial recognition system for airports and airlines, from the Metropolitan Washington Airports Authority (“Airports Authority”). Terms of the transaction were not disclosed.
So what will Pangiam work on next? Where will it expand? What will it acquire?
Nothing.
Enter BigBear.ai
Pangiam itself is now an acquisition target.
COLUMBIA, MD.— November 6, 2023 — BigBear.ai (NYSE: BBAI), a leading provider of AI-enabled business intelligence solutions, today announced a definitive merger agreement to acquire Pangiam Intermediate Holdings, LLC (Pangiam), a leader in Vision AI for the global trade, travel, and digital identity industries, for approximately $70 million in an all-stock transaction. The combined company will create one of the industry’s most comprehensive Vision AI portfolios, combining Pangiam’s facial recognition and advanced biometrics with BigBear.ai’s computer vision capabilities, positioning the company as a foundational leader in one of the fastest growing categories for the application of AI. The proposed acquisition is expected to close in the first quarter of 2024, subject to customary closing conditions, including approval by the holders of a majority of BigBear.ai’s outstanding common shares and receipt of regulatory approval.
Yet another example of how biometrics is now just a minor part of general artificial intelligence efforts. Identify a face or a grenade, it’s all the same.
Anyway, let’s check back in a few months. Because of the technology involved, this proposed acquisition will DEFINITELY merit government review.
For a long time, each ML (machine learning) model operated in one data mode – text (translation, language modeling), image (object detection, image classification), or audio (speech recognition).
However, natural intelligence is not limited to just a single modality. Humans can read and write text. We can see images and watch videos. We listen to music to relax and watch out for strange noises to detect danger. Being able to work with multimodal data is essential for us or any AI to operate in the real world.
As you can see from the title, Huyen uses an acronym “LMM” that is very similar to another generative AI acronym, “LLM” (large language model).
So what’s the difference?
Not all multimodal systems are LMMs. For example, text-to-image models like Midjourney, Stable Diffusion, and Dall-E are multimodal but don’t have a language model component.
If you’re interested in delving into the topic, Huyen’s long three-part post covers the context for multimodality, the fundamentals of a multimodal system, and active research areas.
At least in the United States, the mobile driver’s license world is fragmented.
Because driver’s license issuance in the U.S. is a state and not a federal responsibility, each state has to develop its own mobile driver’s license implementation. Subject to federal and international standards, of course.
To date there have been two parties helping the states with this:
mDL vendors such as Envoc and IDEMIA, who work with the states to create mDLs.
Operating system vendors such as Apple and Google, who work with the states to incorporate mDLs in smartphone wallets.
But because the Android ecosystem is more fragmented than the iOS ecosystem, we now have a third party that is involved in mDLs. In addition to mDL vendors and operating system vendors, we also have really large smartphone providers.
Samsung Electronics America today announced it is bringing mobile driver’s licenses and state IDs to Samsung Wallet. Arizona and Iowa will be the first states to offer a mobile version of its driver’s license to their residents. The update expands the Samsung Wallet experience by adding a convenient and secure way to use state-issued IDs and driver’s licenses
In this particular case Samsung is working with IDEMIA (the mDL provider for Arizona and Iowa), but Samsung announced that it is working with other states and with the Transportation Security Administration (TSA).
On a personal note, I’m still working on validating my driver’s license for California’s pilot mDL program. It probably didn’t help that I renewed my physical driver’s license right in the middle of the mDL validation process.
I just made an exception to my “no bots on the first draft” approach. (The Product Marketing Alliance made me do it.)
The setting
I recently attended the Product Marketing Alliance’s “Masters of Product Marketing” webinar, and the Wednesday morning (my time) workshop was led by Elliott Rayner of ARION.
The title? “Workshop – AI-powered storytelling: the definitive guide.”
Red flags. The “temperamental writer” side of me always reacts negatively to the suggestion that a person turn the writing process over to a bot.
Here is the generic prompt that Rayner shared with the workshop participants (note that this should be Christopher Booker, not Charles Brooker; generative AI figured things out anyway):
“Using Charles Brooker’s plot framework “Overcoming the monster” can you write a product campaign narrative for the brand BLANK who are on a mission to BLANK. Please place focus on the customer overcoming the pain paint of BLANK within the market segment of BLANK”
From the Product Marketing Alliance workshop
My prompt
So I took the prompt above and modified it for Bredemarket.
“Using Charles Brooker’s plot framework “Overcoming the monster” can you write a product campaign narrative for the brand Bredemarket who are on a mission to equip customers with superior marketing content. Please place focus on the customer overcoming the pain paint of addressing customer needs within the market segment of financial services”
While Elliott used ChatGPT for his generative AI engine, I used Bard. The next section contains Bard’s response to the prompt above.
Bredemarket: Overcoming the Monster of Marketing
Plot:
Once upon a time, in the competitive and ever-changing world of financial services, there lived a company called Bredemarket. Bredemarket’s mission was to equip customers with superior marketing content, but they faced a daunting challenge: how to address the pain point of addressing customer needs within a complex and ever-evolving market segment?
Bredemarket knew that the key to overcoming this monster was to create marketing content that was both informative and engaging. They needed to create content that would educate customers about their products and services, while also inspiring them to take action.
To do this, Bredemarket assembled a team of experienced and talented marketers who were experts in the financial services industry. Together, they developed a comprehensive content strategy that included a variety of formats, such as blog posts, infographics, e-books, and videos.
Bredemarket’s content was also highly targeted. They took the time to understand the needs of their target audience and created content that was specifically relevant to them. This approach helped Bredemarket to build trust and credibility with their customers.
Over time, Bredemarket’s content marketing efforts began to pay off. They saw an increase in website traffic, leads, and sales. Their customers were also more engaged and satisfied than ever before.
Conclusion:
Bredemarket’s story is a testament to the power of content marketing. By creating informative and engaging content that addresses customer needs, Bredemarket was able to overcome the monster of marketing and achieve success.
How Bredemarket can help financial services companies overcome the monster of marketing:
Bredemarket understands the unique challenges that financial services companies face when it comes to marketing. That’s why they offer a variety of services that can help companies overcome these challenges and achieve their marketing goals.
Bredemarket’s services include:
Content strategy development
Content creation
Content distribution
Content marketing analytics
Bredemarket also offers a variety of industry-specific content packages that are specifically designed for financial services companies. These packages include content that covers a wide range of topics, such as:
Financial planning
Retirement planning
Investing
Insurance
Mortgages
If you’re a financial services company that’s struggling to address customer needs with your marketing content, Bredemarket can help. With their expertise and experience in the financial services industry, they can help you create content that is both informative and engaging, and that will help you achieve your marketing goals.
My observations on the result above
So here’s what I think of Bard’s work.
If I had taken time to improve the prompt, I probably would have received a better result. I’ve previously said that you may have to go through five iterations of a generative AI prompt to get satisfactory results, which led to my conclusion that AI yields results in five minutes (and that humans will therefore be pressured to yield quick results also). So if I had tweaked the prompt, rather than just going with the first result I received, I’d have gotten a better story. For example, rather than using the term “financial services,” perhaps I should have used “financial identity verification” to home in on my interest in financial identity.
Hallucination is rampant in the text. When generative AI doesn’t know something, it loudly asserts what it doesn’t know. Bard obviously doesn’t know a lot about Bredemarket, but it loudly proclaimed that I provide “retirement planning.” (If I knew anything about retirement planning, I’d retire by now.) And the idea of the “team of experienced and talented marketers” is kinda sorta inaccurate. You just have me.
The tone of voice is all wrong. One reason that I would never use this result for real is because it is not in Bredemarket’s conversational tone of voice. And it would be unusual for me to tell an odyssey. I’ll leave that to John Sculley. To get Bard to write like me, perhaps I can design a prompt that includes the words “mention wildebeests a lot in the response.”
Despite these drawbacks, the exercise was helpful as a brainstorming tool. It provides a framework that would allow me to write a REAL post about how Bredemarket can help financial firms (and vendors to such firms) communicate a customer-focused message about financial identity.
So in the end, it was a worthwhile exercise.
Postscript
This isn’t the first time that I’ve written about the song “The Girl and the Robot.” Roughly a decade ago, I wrote a piece for the online MungBeing Magazine entitled “Robots Dot Txt.” This wasn’t about the official video for the song, but another video documenting a “live” performance of the song.
So in the Senkveld performance, Robyn and Röyksopp (and Davide Rossi and Anneli Drecker, not present on stage but present nevertheless) make me happy by becoming flesh-and-blood robots themselves, capably performing a variety of often complex human tasks that were programmed in a recording studio several months previously.
And one of those records was so unmemorable that it was memorable.
The album, recorded in the early to mid 1960s, trumpeted the fact that the group that recorded the album was extremely versatile. You see, the record not only included surf songs, but also included car songs!
The only problem? The album was NOT by the Beach Boys.
Instead, the album was from some otherwise unknown band that was trying to achieve success by doing what the competition did. (In this case, the Beach Boys.)
I can’t remember the name of the band, and I bet no one else can either.
“Me too” in computing and lawn care
Sadly, this tactic of Xeroxing (or Mitaing) the competition is not confined to popular music. Have you noticed that so many recipes for marketing success involve copying what your competitors do?
Semrush: “Analyze your competitors’ keywords that you are not ranking for to discover gaps in your SEO strategy.”
iSpionage: “If you can emulate your competitors but do things slightly better you have a good chance of being successful.”
Someone who shall remain nameless: “Look at this piece of collateral that one of our competitors did. We should do something just like that.”
And of course the tactic of slavishly copying competitors has been proven to work. For example, remember when Apple Computer adopted the slogan “Think The Same” as the company dressed in blue, ensured all its computers could run MS-DOS, and otherwise imitated everything that IBM did?
“But John,” you are saying. “That’s unfair. Not everyone can be Apple.”
My point exactly. Everyone can’t be Apple because they’re so busy trying to imitate someone else—either a competitor or some other really popular company.
Personally, I’m waiting for some company to claim to be “the Bredemarket of satellite television.” (Which would simply mean that the company would have a lot of shows about wildebeests.) But I’ll probably have to wait a while for some company to be the Bredemarket of anything.
(An aside: while talking with a friend, I compared the British phrase “eating your pudding” to the American phrase “eating your own dog food,” although I noted that “I like to say ‘eating your own wildebeest food‘ just to stand out.” Let’s see ChatGPT do THAT.)
“Me too” in identity verification
Now I’ll tread into more dangerous territory.
Here’s an example from the identity/biometric world. Since I self-identity (heh) as the identity content marketing expert, I’m supremely qualified to cite this example.
I spent a year embedded in the identity verification industry, and got to see the messaging from my own company and by the competition.
After a while, I realized that most of the firms in the industry were saying the same thing. Here are a few examples. See if you can spot the one word that EVERY company is using:
(Company I) “Reimagine trust.”
(Company J) “To protect against fraud and financial crime, businesses online need to know and trust that their customers are who they claim to be — and that these customers continue to be trustworthy.”
(Company M) “Trust is the core of any successful business relationship. As the digital revolution continues to push businesses and financial industries towards digital-first services, gaining digital trust with consumers will be of utmost importance for survival.”
(Company O) “Create trust at onboarding and beyond with a complete, AI-powered digital identity solution built to help you know your customers online.”
(Company P) “Trust that users are who they say they are, and gain their trust by humanizing the identity experience.”
(Company V) “Stop fraud. Build trust. Identity verification made simple.”
Yes, these companies, and many others, prominently feature the t-word in their messaging.
Now perhaps some of you would argue that trust is essential to identity verification in the same way that water is essential to an ocean, and that therefore EVERYBODY HAS to use the t-word in their communications.
After all, if I were going to create content for this prospect, I had to ensure that the content stood out from that of its competitors.
Without revealing confidential information, I can say that I asked the firm why they were better than every other firm out there, and why all the other firms sucked. And the firm provided me with a compelling answer to that question. I can’t reveal that answer, but you can probably guess that the word “trust” was not involved.
A final thought
So let me ask you:
Why is YOUR firm better than every other firm out there, and why do all of YOUR competitors suck?
Your firm’s survival may depend upon communicating that answer.
The vast majority of people who visit the Bredemarket website arrive via Google. Others arrive via Bing, DuckDuckGo, Facebook, Feedspot, Instagram, LinkedIn, Meltwater, Twitter (WordPress’ Stats page didn’t get the memo from Elon), WordPress itself, and other sites.
Yes, people are using ChatGPT and other generative AI tools as search engines.
Patel was curious about why ChatGPT recommended Neil Patel Digital, and he started to investigate. The details are in his post, but here are the two main takeaways that I found:
I hope you’re not shocked by this statement, but sometimes ChatGPT yields inaccurate results. One example: Patel asked ChatGPT to recommend ad agencies who could provide SEO help, and received two inaccurate recommendations. “2 of the top 4 results… Moz and HubSpot are software companies and not ad agencies. They don’t really offer services.”
After a lot of experimentation and number-crunching, Patel identified six specific factors that correlated with ChatGPT’s recommendation of a particular brand: brand mentions, reviews, relevancy, age, recommendations, and authority.
For a detailed discussion of these six factors, see Patel’s post. Let’s look at one of those factors, brand mentions, which has a relatively high correlation (0.87).
How do you increase brand mentions?
So, how do you increase brand mentions across the web to rank higher on ChatGPT, on other generative AI platforms, and on search engines from Google to TikTok?
By mentioning your brand. (And getting relevant authorities to mention your brand also.)
Yes, this is a Captain Obvious statement, but it’s important. There are at least ten places where you can mention your brand.
Articles
Blog Posts
Briefs/Data/Literature Sheets
Case Studies
Proposals
Scientific Book Chapters
Smartphone Application Content
Social Media (Facebook, Instagram, LinkedIn, Threads, TikTok, Twitter)
Web Page Content
White Papers and E-Books
If this list appears familiar to some of you, that’s intentional. It’s a list of the 10 types of external content that I have created for others, and can create for you. See my post “The 22 (or more) Types of Content That Product Marketers Create” for more information on both external and internal content types that I can create.
Because the more you look at my brand content, the more my brand will be recognized by generative AI.
I was curious myself, so I tested this. ChatGPT was at capacity when I wrote this post, so I asked Bard “Who is the biometric content marketing expert?”
From Google Bard, September 26, 2023.
John Bredehoft, of the company Bredemarket, is a biometric content marketing expert. He has over 25 years of experience in biometrics and other identity modalities, including friction ridge, face, iris, voice, and DNA. He has created content for Bredemarket clients and his previous employers, including case studies, white papers, and blog posts, that have positioned his firms as biometric authorities.
Bredehoft’s expertise in biometrics and content marketing allows him to create content that is both informative and engaging for his target audience. He understands the issues behind biometric technologies and can explain them in a clear and concise way. He also knows how to write content that is optimized for search engines and social media.
If you are looking for a biometric content marketing expert, John Bredehoft is a great choice. He has the knowledge and experience to create content that will help you achieve your marketing goals.
Now I don’t know yet if ChatGPT will yield a similar response, and if not I have some more work to do.
But enough about me.
How can you increase YOUR brand mentions?
Let’s talk about you, your content marketing needs, and your need for prospects and customers to know about your brand.
Whether you want to rank in a traditional search engine or generative AI, the key is the creation of content. When you work with Bredemarket as your content creation partner, we start by discussing your goals and other critical information that is important to you. We do this before I start writing your blog post, social media post, case study, white paper, or other piece of content (car show posters, anyone?).
Let’s hold that (complimentary) discussion to see if Bredemarket’s services are a fit for your needs. Book a meeting with me at calendly.com/bredemarket. Be sure to fill out the information form so I can best help you.
Machine learning models need training data to improve their accuracy—something I know from my many years in biometrics.
And it’s difficult to get that training data—something else I know from my many years in biometrics. Consider the acronyms GDPR, CPRA, and especially BIPA. It’s very hard to get data to train biometric algorithms, so they are trained on relatively limited data sets.
At the same time that biometric algorithm training data is limited, Kevin Indig believes that generative AI large language models are ALSO going to encounter limited accessibility to training data. Actually, they are already.
The lawsuits have already begun
A few months ago, generative AI models like ChatGPT were going to solve all of humanity’s problems and allow us to lead lives of leisure as the bots did all our work for us. Or potentially the bots would get us all fired. Or something.
But then people began to ask HOW these large language models work…and where they get their training data.
Just like biometric training models that just grab images and associated data from the web without asking permission (you know the example that I’m talking about), some are alleging that LLMs are training their models on copyrighted content in violation of the law.
I am not a lawyer and cannot meaningfully discuss what is “fair use” and what is not, but suffice it to say that alleged victims are filing court cases.
Comedian and author Sarah Silverman, as well as authors Christopher Golden and Richard Kadrey — are suing OpenAI and Meta each in a US District Court over dual claims of copyright infringement.
The suits allege, among other things, that OpenAI’s ChatGPT and Meta’s LLaMA were trained on illegally-acquired datasets containing their works, which they say were acquired from “shadow library” websites like Bibliotik, Library Genesis, Z-Library, and others, noting the books are “available in bulk via torrent systems.”
This could be a big mess, especially since copyright laws vary from country to country. This description of copyright law LLM implications, for example, is focused upon United Kingdom law. Laws in other countries differ.
Systems that get data from the web, such as Google, Bing, and (relevant to us) ChatGPT, use “crawlers” to gather the information from the web for their use. ChatGPT, for example, has its own crawler.
But that only includes the sites that blocked the crawler when Originality AI performed its analysis.
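Sites block a crawler through their robots.txt file; OpenAI’s crawler identifies itself as GPTBot. The sketch below, using Python’s standard urllib.robotparser module and a hypothetical robots.txt (the rules and example.com URL are illustrative assumptions, not any real site’s policy), shows how such a block works in practice.

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt that blocks OpenAI's GPTBot crawler while
# allowing all other crawlers. Real sites' policies vary.
robots_txt = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# GPTBot is barred from the whole site; other crawlers are not.
print(parser.can_fetch("GPTBot", "https://example.com/article"))     # False
print(parser.can_fetch("Googlebot", "https://example.com/article"))  # True
```

Note that robots.txt is honored voluntarily by well-behaved crawlers; it is a request, not an enforcement mechanism.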
More sites will block the LLM crawlers
Indig believes that the share of the top 1,000 sites blocking ChatGPT’s crawler will rise significantly…to 84%. His belief is based on analyzing the business models of the sites that already block ChatGPT and assuming that other sites with the same business models will also find it in their interest to block ChatGPT.
The business models that won’t block ChatGPT are assumed to include governments, universities, and search engines. Such sites are friendly to the sharing of information, and thus would have no reason to block ChatGPT or any other LLM crawler.
The business models that would block ChatGPT are assumed to include publishers, marketplaces, and many others. Entities using these business models are not just going to turn their data over to an LLM for free.
One possibility is that LLMs will run into the same training issues as biometric algorithms.
In biometrics, the same people that loudly exclaim that biometric algorithms are racist would be horrified at the purely technical solution that would solve all inaccuracy problems—let the biometric algorithms train on ALL available biometric data. In the activists’ view (and in the view of many), unrestricted access to biometric data for algorithmic training would be a privacy nightmare.
Similarly, those who complain that LLMs are woefully inaccurate would be horrified if the LLM accuracy problem were solved by a purely technical solution: let the algorithms train themselves on ALL available data.
Could LLMs buy training data?
Of course, there’s another solution to the problem: have the companies SELL their data to the LLMs.
In theory, this could provide the data holders with a nice revenue stream while allowing the LLMs to be extremely accurate. (Of course the users who actually contribute the data to the data holders would probably be shut out of any revenue, but them’s the breaks.)
But that’s only in theory. Based upon past experience with data holders, the people who want to use the data are probably not going to pay the data holders sufficiently.
Google and Meta to Canada: Drop dead / Mourir
Flag of Canada. Illegitimate Barrister (SVG rewrite by MapGrid), public domain, via Wikimedia Commons: https://commons.wikimedia.org/w/index.php?curid=32276527
Even today, Google and Meta (Facebook et al) are greeting Canada’s government-mandated Bill C-18 with resistance. Here’s what Google is saying:
Bill C-18 requires two companies (including Google) to pay for simply showing links to Canadian news publications, something that everyone else does for free. The unprecedented decision to put a price on links (a so-called “link tax”) breaks the way the web and search engines work, and exposes us to uncapped financial liability simply for facilitating access to news from Canadian publications….
As a result, we have informed them that we have made the difficult decision that, when the law takes effect, we will be removing links to Canadian news publications from our Search, News, and Discover products.
Google News Showcase is the program through which Google gives money to news organizations in Canada; Meta has a similar program. Peter Menzies notes that these programs provide tens of millions of Canadian dollars to news organizations, but that funding could end despite government threats.
The federal and Quebec governments pulled their advertising spends, but those moves amount to less money than Meta will save by ending its $18 million in existing journalism funding.
Bearing in mind that Big Tech is reluctant to give journalistic data holders money even when a government ORDERS that they do so…
…what is the likelihood that generative AI algorithm authors (including Big Tech companies like Google and Microsoft) will VOLUNTARILY pay funds to data holders for algorithm training?
If Kevin Indig is right, LLM training data will become extremely limited, adversely affecting the algorithms’ usefulness.
What does AdvoLogix say about using AI in the workplace?
AdvoLogix’s post is clear in its intent. It is entitled “9 Ways to Use AI in the Workplace.” The introduction to the post explains AdvoLogix’s position on the use of artificial intelligence.
Rather than replacing human professionals, AI applications take a complementary role in the workplace and improve overall efficiency. Here are nine actionable ways to use artificial intelligence, no matter your industry.
I won’t list ALL nine of the ways—I want you to go read the post, after all. But let me highlight one of them—not the first one, but the eighth one.
Individual entrepreneurs can also benefit from AI-driven technologies. Entrepreneurship requires great financial and personal risk, especially when starting a new business. Entrepreneurs must often invest in essential resources and engage with potential customers to build a brand from scratch. With AI tools, entrepreneurs can greatly limit risk by improving their organization and efficiency.
The AdvoLogix post then goes on to recommend specific ways that entrepreneurs can use artificial intelligence, including:
AI shopping
AI chatbots for customer engagement
Regardless of how you feel about the use of AI in these areas, you should at least consider them as possible options.
Why did AdvoLogix write the post?
Obviously the company had a reason for writing the post, and for sharing the post with people like me (and like you).
AdvoLogix provides law firms, legal offices, and public agencies with advanced, cloud-based legal software solutions that address their actual needs.
Thanks to AI tools like Caster, AdvoLogix can provide your office with effective automation of data entry, invoicing, and other essential but time-consuming processes. Contact AdvoLogix to request a free demo of the industry’s best AI tools for law offices like yours.
So I’m not even going to provide a Bredemarket call to action, since AdvoLogix already provided its own. Good for AdvoLogix.
But what about Steven Schwartz?
The AdvoLogix post did not specifically reference Steven Schwartz, although the company did advise that you should remain in control of the process rather than ceding it to your artificial intelligence tool.
Roberto Mata sued Avianca airlines for injuries he says he sustained from a serving cart while on the airline in 2019, claiming negligence by an employee. Steven Schwartz, an attorney with Levidow, Levidow & Oberman and licensed in New York for over three decades, handled Mata’s representation.
But at least six of the submitted cases by Schwartz as research for a brief “appear to be bogus judicial decisions with bogus quotes and bogus internal citations,” said Judge Kevin Castel of the Southern District of New York in an order….
In late April, Avianca’s lawyers from Condon & Forsyth penned a letter to Castel questioning the authenticity of the cases….
Among the purported cases: Varghese v. China South Airlines, Martinez v. Delta Airlines, Shaboon v. EgyptAir, Petersen v. Iran Air, Miller v. United Airlines, and Estate of Durden v. KLM Royal Dutch Airlines, all of which did not appear to exist to either the judge or defense, the filing said.
Schwartz, in an affidavit, said that he had never used ChatGPT as a legal research source prior to this case and, therefore, “was unaware of the possibility that its content could be false.” He accepted responsibility for not confirming the chatbot’s sources.
Schwartz is now facing a sanctions hearing on June 8.