The “Repurposing a Blog Post On YouTube via NotebookLM” Experiment

This is definitely an experiment. When I started, I had no idea how it would turn out. In the end I’m fairly satisfied with how NotebookLM repurposed my blog post as a YouTube video, but there were definitely some lessons learned to apply in future repurposing.

Ahrefs’ best way to get your product listed on LLMs

As we all know, there has been a partial shift from search engine optimization to answer engine optimization. The short version is that content performs well when it answers a question that someone poses to a large language model (LLM) such as Google Gemini or ChatGPT.

So how do we optimize our content for LLMs?

Yes, I know I could have asked an LLM that question, but I still do some old school things and attended a webinar instead.

I live-blogged Wednesday’s webinar, hosted by the Content Marketing Institute and sponsored by Ahrefs. The speaker was Ryan Law, Ahrefs’ Director of Content Marketing. As is usual with such affairs, the webinar provided some helpful information…which is even more helpful if you use Ahrefs’ tools. (Funny how that always happens. The same thing happens with Bredemarket’s white papers.)

One of the many topics Law addressed was the TYPE of content that resonates most with LLM inquirers. Law’s slide 20 answered this question.

“LLMs LOVE YOUTUBE”

Law then threw some statistics at us.

“YouTube has fast-become the most cited domain in AI search:

#1 in AI Overviews

#1 in AI Mode

#2 in ChatGPT

#2 in Gemini

#2 in Copilot

#2 in Perplexity”

So even if it isn’t number 1 on some of the engines themselves, it’s obviously high, and very attractive to inquirers.

But what of people like me who prefer the portability of text? It’s easier to quote from text than it is to take a short snippet of a video.

YouTube covers that also, since it automatically creates a transcript of every word spoken in a YouTube video.

But…

Bredemarket’s problem

…most of the videos that Bredemarket has created have zero or few spoken words, which kinda sorta makes it tough to create a transcript.

For example, the “Landscape (Biometric Product Marketing Expert)” video that I frequently share on the Bredemarket blog is, for some odd reason, not only on WordPress but also on YouTube. However, it has zero spoken words, and therefore no transcript.

This video (actually a short) DOES have a transcript.

“Yo, I’m the outlaw of this country sound, dropping rhymes that shake the ground.”

But I do have some YouTube videos with more extensive transcripts. And one of them suggests a possible solution to my desire to provide YouTube videos to LLMs.

Using Google’s NotebookLM to create videos from non-copyrighted material

A still from Bredemarket’s movie “Inside the EBTS.” Are you jealous, Stefan Gladbach?

Last November, I uploaded material to Google’s NotebookLM and asked the service to create a movie from it.

The material wasn’t authored by me, but by the U.S. Federal Bureau of Investigation. (Which meant that it wasn’t copyrighted.)

What was it?

Version 11.3 of the Electronic Biometric Transmission Specification (EBTS).

A few of you are already laughing.

For those who aren’t, the EBTS is a fairly detailed standard dictating how biometric and biographic data is exchanged between the FBI’s Next Generation Identification (NGI) system and other federal, state, and local automated biometric identification systems.

As a standard, it’s not as riveting as a Stephen King novel.

But NotebookLM made a movie out of it anyway.

Inside the FBI’s EBTS.

And once I uploaded the movie to YouTube, YouTube created a transcript.

First 21 seconds of the YouTube transcript of the video above.

So this potentially helps Bredemarket to be visible.

And if I want to follow Ryan Law’s advice and repurpose my content for YouTube, NotebookLM provides a method to do it.

Using Google’s NotebookLM to create videos from MY copyrighted material

Time for an experiment, the goal of which is to convert a Bredemarket blog post into a video with a minimum amount of effort.

NotebookLM can use either files or web links as source material for videos, so the easiest method to create my video is as follows:

  • Paste the web link to the blog post into NotebookLM.
  • Instruct NotebookLM to create a “Video Overview” from this single piece of content.

My blog post of choice is my post from earlier today, “Government Anti-Fraud Efforts: They’re Still Siloed.” The post contains a Gemini AI image of silos on the National Mall in Washington DC (inaccurately displaying the U.S. Capitol as NORTH of the Washington Monument). It also includes my text as well as links to outside sources such as the initial draft of H.R. 7270.

So let’s see what NotebookLM gives us.

The first thing that we get is Google’s summary of the content at the link.

NotebookLM’s summary of the web link I provided.

“1 source

The provided text examines the fragmented nature of U.S. anti-fraud efforts, arguing that current strategies lack a unified, government-wide approach. While the Stop Identity Fraud and Identity Theft Bill seeks to modernize identity verification through Treasury Department grants to states, the author contends that the proposal is too narrow. The legislative focus on mobile driver’s licenses and digital infrastructure fails to consolidate the many overlapping responsibilities held by the FBI, Secret Service, and FTC. This bureaucratic separation creates siloed operations where different agencies and state governments often work independently rather than as a cohesive front. Ultimately, the source suggests that despite new legislative attempts, the American response to identity theft remains decentralized and inefficient.”

But that’s just text. Time to create the video overview.

After I clicked the “Video Overview” button in the upper right.

A while later…

Even with relatively minimal content, video creation isn’t immediate. It has to shape the content into a narrative video, after all. Sadly I forgot to time the result, but I received this video, “A Tale of Two Threats,” within a half hour.

A Tale of Two Threats (WordPress version). Created by Google NotebookLM based upon “Government Anti-Fraud Efforts: They’re Still Siloed.”

As is par for the course with NotebookLM, the narration is self-generated in a viewer-friendly form (“let’s pop the hood”). And it used my source material as a basis to narrate the tale of the U.S. Government’s responses to the “two threats” of terrorism and fraud. As my original blog post noted, the two responses have been quite different.

The video then takes portions of the blog post, including the list of agencies that are NOT part of H.R. 7270, as well as my example of what could happen if the Secret Service’s mission is compromised because of what some other agency is doing.

But it DOESN’T take other portions of my blog post, such as the potential shuttering of the Consumer Financial Protection Bureau, my reference to “evil Commie Chinese facial recognition algorithms,” or my graphic of silos on the Mall. NotebookLM generated its own cartoon graphics instead.

This image didn’t make the video, even though Google created it.

The final step

The first place where I uploaded the video was WordPress, so I could include it in this blog post. I’ll probably upload it to other places, but the second target is YouTube.

A Tale of Two Threats (YouTube version). Created by Google NotebookLM based upon “Government Anti-Fraud Efforts: They’re Still Siloed.”

And yes, there is a transcript, although it took a few minutes to generate. So now the bot’s text is out there for the LLMs to find.

First 24 seconds of the YouTube transcript of the video above.

Grading the experiment

I’ll give the experiment a B. It’s not really MY video, but it encapsulates some of my views.

NotebookLM users need to remember that when it creates audio and video content, it doesn’t simply parrot the source, but reshapes it. You may remember the NotebookLM 20-minute “Career Detective” podcast of my resume, in which a male and female bot talked about how great I am. My blog post was processed similarly.

If I want something that better promotes Bredemarket to LLM users, I need to shape the blog post to do the following:

  • Address some question that the LLM user asks.
  • Include text that promotes Bredemarket as the solution to the inquirer’s problems.

Anyway, I’ll keep these tips in mind when writing…and repurposing…future blog posts.

Not Only Amazon Stale (not Fresh), But Also Amazon Zero (not One)

With all the news about Amazon Fresh closing and more Amazon layoffs taking place, I missed a bit of news about the Amazon One palm-vein technology. But first a bit of history.

Amazon One in 2021

I believe I first wrote about Amazon One back in 2021, in a “biometrics is evil” post.

2021 TechCrunch article.

In that year, TechCrunch loudly proclaimed:

“While the idea of contactlessly scanning your palm print to pay for goods during a pandemic might seem like a novel idea, it’s one to be met with caution and skepticism given Amazon’s past efforts in developing biometric technology. Amazon’s controversial facial recognition technology, which it historically sold to police and law enforcement, was the subject of lawsuits that allege the company violated state laws that bar the use of personal biometric data without permission.”

Yes, Amazon was regarded as part of the evil fascist regime even when Donald Trump WASN’T in office.

Amazon One in 2025

Enrolling.

Which brings us to 2025, when Trump had returned to office and I enrolled in Amazon One myself to better buy things at the Upland, California Amazon Fresh. But the line was too long so I went to Whole Foods, where my palm and vein may or may not have worked.

Amazon One in 2026

From https://amazonone.aws.com/help as of January 29, 2026.

And pretty soon we’ll ALL be going to Whole Foods since Amazon Fresh is rebranding or closing all its locations.

And when we get there, we won’t be using Amazon One.

“Amazon One palm authentication services will be discontinued at retail businesses on June 3, 2026. Amazon One user data, including palm data, will be deleted after this date.”

You know the question I asked. Why?

“In response to limited customer adoption…”

Of course, in Amazon’s case, “limited” may merely mean that billions and billions of people didn’t sign up, so it jettisoned the technology in the same way it jettisoned dozens of stores and thousands of employees.

The June date may or may not apply to healthcare, but who knows how long that will last.

So what now?

In my 2021 post I mentioned three other systems that used biometrics for purchases.

There was the notorious Pay By Touch (not notorious because of its technology, but the way the business was run).

There was the niche MorphoWave.

But the third system dwarfs them all.

“But the most common example that everyone uses is Apple Pay, Google Pay, Samsung Pay, or whatever ‘pay’ system is supported on your smartphone. Again, you don’t have to pull out a credit card or ID card. You just have to look at your phone or swipe your finger on the phone, and payment happens.”

And they’re so entrenched that even Amazon can’t beat them.

Or as I said after the latest round of Amazon layoffs:

“This, combined with its rebranding or closure of all Amazon Fresh stores, clearly indicates that Amazon is in deep financial trouble.

“Bezos did say that Amazon would fail some day, but I didn’t expect the company to fall apart this quickly.”

Groupthink From Bots

I participate in several public and private AI communities, and one fun exercise is to take another creator’s image generation prompt, run it yourself (using the same AI tool or a different tool), and see what happens. But certain tools can yield similar results, for explicable reasons.

On Saturday morning in a private community Zayne Harbison shared his Nano Banana prompt (which I cannot share here) and the resulting output. So I ran his prompt in Nano Banana and other tools, including Microsoft Copilot and OpenAI ChatGPT.

The outputs from those two generative AI engines were remarkably similar.

Copilot.
ChatGPT.

Not surprising, given the history of Microsoft and OpenAI. (It got more tangled later.)

But Harbison’s prompt was relatively simple. What if I provided a much more detailed prompt to both engines?

Create a realistic photograph of a coworking space in San Francisco in which coffee and hash brownies are available to the guests. A wildebeest, who is only partaking in a green bottle of sparkling water, is sitting at a laptop. A book next to the wildebeest is entitled “AI Image Generation Platforms.” There is a Grateful Dead poster on the brick wall behind the wildebeest, next to the hash brownies.

So here’s what I got from the Copilot and ChatGPT platforms.

Copilot.
ChatGPT.

For comparison, here is Google Gemini’s output for the same prompt.

Gemini.

So while there are more differences when using the more detailed prompt (see ChatGPT’s brownie placement), the Copilot and ChatGPT results still show similarities, most notably in the Grateful Dead logo and the color used in the book.

So what have we learned, Johnny? Not much, since Copilot and ChatGPT can perform many tasks other than image generation. There may be more differentiation when they perform SWOT analyses or other operations. As any good researcher would say, more funding is needed for further research.

But I will hazard two lessons learned:

  • More detailed prompts are better.

AFOID With an Expanded A: If You Pay the Money, Who Needs REAL ID Anyway?

I’ve vented about this for years. Some people have vented about it for decades.

But before I launch into my rant, let me define the acronym of the day: AFOID. It stands for “acceptable form of identification.”

And for years (decades), we’ve been told that the ONLY acceptable form of identification to board a plane is a REAL ID, U.S. passport, or a similar form of identity. A REAL ID does not prove citizenship, but it does prove that you are who you say you are.

USA.GOV put it best:

“If you do not have a REAL ID-compliant driver’s license or state-issued ID, you will not be able to use it to:

“Access federal government facilities or military installations

“Board federally regulated commercial aircraft

“Enter nuclear power plants”

Pretty straightforward. Get a REAL ID (or other acceptable document such as a passport), or there are some things that YOU WILL NOT BE ABLE TO DO.

So you needed that AFOID by May 2025…

Whoops, I mean May 2027, because TSA is allowing exceptions for a couple of years.

Whoops, I mean probably never.

If you pay some bucks, you can use a MODERNIZED system. Biometric Update alerted me to this new item in the Federal Register.

“The Transportation Security Administration (TSA) is launching a modernized alternative identity verification program for individuals who present at the TSA checkpoint without the required acceptable form of identification (AFOID), such as a REAL ID or passport. This modernized program provides an alternative that may allow these individuals to gain access to the sterile area of an airport if TSA is able to establish their identity. To address the government-incurred costs, individuals who choose to use TSA’s modernized alternative identity verification program will be required to pay an $18 fee. Participation in the modernized alternative identity verification program is optional and does not guarantee an individual will be granted access to the sterile area of an airport.”

I’d love to see details of what “modernized” means. In today’s corporate environment, that means WE USE AI.

And AI can be embarrassingly inaccurate.

And if you want to know how seedy this all sounds, I asked Google Gemini to create a picture of a man waving money at a TSA agent. Google refused the request.

“I cannot fulfill this request. My purpose is to be helpful and harmless, and that includes refusing to generate images that promote harmful stereotypes, illegal activities, or depict bribery of public officials.”

So I had to tone the request down.

Differentiation…Again

I provided the background for this picture in a post in the Bredemarket Picture Clubhouse on Facebook.

And elsewhere. As usual, I enjoy repurposing.

But the point here is that those who differentiate truly stand out.

Credit to Zayne Harbison for the original generative AI prompt that I adapted. Because you should reference your sources.

Google Gemini.

So Much For Fake IDs

So someone used generative AI to create a “European Union – United Kingdom” identity card. And if that itself wasn’t a clear enough indication of fakery, they included a watermark saying it was generated.

So I tried something similar.

But Google Gemini blocked my attempt.

“I cannot create images of identification documents, including driver’s licenses, or include text that identifies the image as fake. I am also unable to generate images that depict an impossible or future date of birth, as requested.”

As did Grok.

“I’m sorry, but I can’t create or generate any image that replicates or imitates an official government-issued ID (even with “FAKE” written on it). This includes California REAL ID driver’s licenses or any other state/federal identification document.”

So I had to make it a little less real.

A lot less real.

Google Gemini.

Revisiting the Coalition for Content Provenance and Authenticity

Earlier this morning I was on LinkedIn sharing the wildebeest picture below on the Bredemarket page.

Google Gemini.

But then I noticed that LinkedIn added a symbol in the upper left corner of the picture.

LinkedIn.

When I clicked on the symbol, I obtained additional information about the picture.

LinkedIn.

Content credentials

Source or history information is available for this image.

Learn more.

AI was used to generate all of this image

App or device used: Google C2PA Core Generator Library

Content Credentials issued by: Google LLC

Content Credentials issued date: Nov 20, 2025

Of course, I already knew that I had used generative AI (Google Gemini) to create the picture. And now, thanks to the Coalition for Content Provenance and Authenticity, you do also.

How VentureBeat Addressed Verbalized Sampling (and how Google Gemini addressed it a second time)

I’m obviously not the only person to talk about verbalized sampling, the technique intended to improve the quality of your large language model (LLM) prompt responses.

And Carl Franzen of VentureBeat (or his editor) addressed it with an attractive article title: “Researchers find adding this one simple sentence to prompts makes AI models way more creative.”

The one simple sentence?

“Generate 5 responses with their corresponding probabilities, sampled from the full distribution.”

Which led me to thinking: what would the “way more” absolute BEST title be for a verbalized sampling article?

And I think I know how to get it.

Using Verbalized Sampling, Take Two

Google Gemini.

After all, I used you-know-what to come up with the title for MY post. But let’s try again, using a slightly simplified adaptation of the REAL prompt in the original article.

Generate 5 responses to the user query “Generate five AEO-friendly titles for a blog post about using Verbalized Sampling to generate better LLM responses.” Each response must include text and a numeric probability. Please sample at random from the tails of the distribution, such that the probability of each response is less than 0.10.

And here’s what I got:

That’s an interesting request! To sample from the tails of the distribution (where probability $P < 0.10$), I’ll generate five distinct AEO (Answer Engine Optimization)-friendly titles related to using Verbalized Sampling for better LLM (Large Language Model) responses, each with an assigned low probability.

Here are five responses:

  • Response 1: Unlock LLM Brilliance: The Verbalized Sampling Technique You Need Now (Probability: 0.085)
  • Response 2: Beyond the Prompt: How Verbalized Sampling Turbocharges Your LLM Outputs (Probability: 0.052)
  • Response 3: AEO Secret Weapon: Enhancing LLM Accuracy with Verbalized Sampling (Probability: 0.091)
  • Response 4: From Good to Great: Optimizing Large Language Models Using Verbalized Sampling (Probability: 0.039)
  • Response 5: Maximizing Output Quality: A Guide to Verbalized Sampling in LLM Generation (Probability: 0.076)
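For what it’s worth, responses in this “text plus probability” format are easy to post-process. Here is a minimal Python sketch; the prompt wrapper wording and the parsing regex are my own illustrative assumptions, not anything Gemini or the original verbalized sampling paper prescribes:

```python
import re

def build_vs_prompt(query, n=5, p_max=0.10):
    """Wrap a user query in a verbalized-sampling instruction.

    Mirrors the prompt format used above; the exact wording is an
    illustrative assumption, not an official template."""
    return (
        f'Generate {n} responses to the user query "{query}". '
        "Each response must include text and a numeric probability. "
        "Please sample at random from the tails of the distribution, "
        f"such that the probability of each response is less than {p_max}."
    )

# Matches lines like: "Response 1: Some title (Probability: 0.085)"
_RESPONSE_RE = re.compile(
    r"Response \d+:\s*(.+?)\s*\(Probability:\s*([0-9.]+)\)"
)

def parse_vs_responses(text):
    """Return (title, probability) pairs from a verbalized-sampling reply."""
    return [(m.group(1), float(m.group(2)))
            for m in _RESPONSE_RE.finditer(text)]

sample_reply = """\
Response 1: Unlock LLM Brilliance (Probability: 0.085)
Response 2: Beyond the Prompt (Probability: 0.052)"""

for title, prob in parse_vs_responses(sample_reply):
    print(f"{prob:.3f}  {title}")
```

With something like this you could, say, automatically pick the lowest-probability (most off-the-beaten-path) title from each batch.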

But Google Gemini, as it is wont to do, encourages me to explore further:

Would you like to explore a specific aspect of Verbalized Sampling or AEO further, or would you like me to generate a blog post outline using one of these titles?

Ah, why not? I know I’m stealing Bredebot’s thunder, but what the heck.

Generate a blog post outline for Response 1

Google Gemini.

And here’s what Gemini came up with for me. Perhaps in a future post I’ll explain why the content below terrifies Jay Clouse. Or you can just watch what he said.

Blog Post Outline: Unlock LLM Brilliance: The Verbalized Sampling Technique You Need Now

Target Audience: LLM power users, prompt engineers, AI developers, content creators, and researchers looking for advanced prompting techniques.

Goal: Introduce Verbalized Sampling as a superior, actionable method for generating diverse, creative, and high-quality LLM outputs, moving beyond standard CoT (Chain-of-Thought).

I. Introduction (The Hook)

A. The Problem with “Good Enough” LLMs (50-75 words)

  • Hook: You use powerful LLMs, but the output often feels… generic, dull, or predictable.
  • Mention the common issue: Models often stick to the highest-probability, safest answers, leading to “average” content.
  • Acknowledge the current best practice: Everyone uses Chain-of-Thought (CoT), but we can do better.

B. Introducing the Solution: Verbalized Sampling (25-50 words)

  • Thesis: Verbalized Sampling (VS) is the next evolution of prompt engineering—a simple technique that forces the model to explore new, more creative, and nuanced solutions.
  • Promise: This technique is the key to unlocking “brilliance” and maximizing your LLM’s potential.

II. What is Verbalized Sampling (VS)?

A. Simple Definition (75 words)

  • Define VS: It’s the process of explicitly instructing the LLM to generate multiple internal “drafts” or “reasoning pathways” before producing its final answer.
  • Contrast with standard CoT: CoT asks for one path; VS asks for many paths and a critical self-review.

B. The Core Mechanism: Overcoming Probability Mass (100 words)

  • Explanation: LLMs sample the next token based on probability. High-quality answers sometimes exist in the “tails” of the distribution.
  • How VS helps: By forcing the model to verbalize several options, you encourage it to explore lower-probability (and often more insightful or creative) tokens, expanding its internal search space.
  • Analogy: It’s like asking an employee for three different business proposals instead of just the safest one.

III. Why VS Works: The Cognitive Advantage

A. Enhanced Self-Correction and Evaluation (75 words)

  • When the model lists options 1, 2, and 3, it must then select the best one, forcing a reflective, meta-cognitive step.
  • This built-in self-criticism dramatically reduces factual errors and logical inconsistencies in the final output.

B. Greater Diversity in Output (75 words)

  • Demonstrate that VS prevents the model from getting stuck in a local optimum (the most obvious answer).
  • Result: Responses are more original, less canned, and better suited for competitive content creation or complex problem-solving.

IV. The Practical Guide: Implementing Verbalized Sampling

A. The Standard Prompt Template (Actionable Code/Text)

  • Provide a clear, copy-pastable template for the user. Example Template:
    • Prompt (Instruction): [Insert your main request here]
    • VS Instruction: “Before providing the final answer, please generate three distinct internal reasoning paths (Path A, Path B, and Path C) for solving the prompt. For each path, evaluate its pros and cons relative to the others. Finally, state your chosen path and provide only the final, complete answer based on that best path.”

B. Real-World Use Case Example

  • Scenario: Requesting a creative marketing headline.
  • VS Output Showcase: Show the internal thoughts (Path A: aggressive, Path B: emotional, Path C: unique benefit) and the final, brilliant choice.
  • Key Takeaway: Emphasize that the user should only use the final output, treating the “paths” as internal scaffolding.

V. Conclusion: Unlock Your LLM’s Full Potential

A. Summary of Benefits (50 words)

  • Quick recap: VS leads to higher quality, better diversity, and superior self-correction.
  • Reinforce the message: This is a small prompt change for a massive output gain.

B. Call to Action (CTA)

  • Challenge readers to try VS on their next complex task.
  • Ask a question to encourage comments: “What is the most complex task you’ve tried Verbalized Sampling on? Share your results!”
Google Gemini, with a prompt from Terry Jones.

Choosing Your Tone of Voice in Your LLM Prompts

Under my Bredemarket banner, I have extensive freedom that my clients do not have.

  • I can use AI-generated images, which many companies are legally or ethically constrained from doing.
  • I have freedom in how I present Bredemarket: sometimes as a sage, sometimes a rebel, sometimes both. Most companies need to home in on a particular archetype, although I know of one company that flipped from one archetype to another and back again.
  • I have freedom in the tone of voice that Bredemarket uses, including the use of words that I would never use for my client work.

For example, I was performing a competitive analysis for Bredemarket’s internal use this morning, and I fed a prompt to Google Gemini that I would probably never feed when working for one of my clients. (Or maybe they’d like it.) With certain information redacted, here’s one of the prompts I fed to Google Gemini this morning. (I’ll have more to say about the more detailed verbalized sampling prompt format in a future post.) But for now, read the last sentence and ask yourself if anyone in your company uses the word “sucks” in public-facing presentations.

Generate 3 responses to the user query “Identify the top 3 companies providing [SOLUTIONS], including their company names, their products, and their corporate strengths.” Each response must include the requested text and a numeric probability. Please sample at random from the tails of the distribution, such that the probability of each response is less than 0.10. If the company [COMPANY] is not one of the three top companies, explain why [COMPANY’S] solution sucks compared to the top 3 solutions.

And Google Gemini, ever the mimic, cued into my “sucks” language and responded in kind. The word “sucks” was italicized in the original.

[COMPANY’S] solution sucks in this context because its focus is heavily on the services layer (managed review, consulting) over its own proprietary platform technology. The top three are either platform pioneers or deeply integrated [REDACTED] leaders.

I guess we should all be thankful that I said “sucks” instead of “blows monkey chunks” or “flies a plane overhead and”…well, I won’t go there.

By the way, my subsequent prompt requesting a SWOT analysis was more restrained.
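If the “tails of the distribution” instruction in these prompts seems abstract, a toy numeric sketch may help. The candidate responses and probabilities below are made up for illustration; the point is simply that filtering to responses with probability under 0.10 and renormalizing leaves only the less obvious options:

```python
import random

# Toy next-response distribution: response -> probability (made-up values)
dist = {
    "safe, obvious answer": 0.55,
    "common alternative": 0.25,
    "niche framing": 0.09,
    "contrarian take": 0.07,
    "creative long shot": 0.04,
}

# Keep only the tail: responses with probability below the 0.10 cutoff
tail = {r: p for r, p in dist.items() if p < 0.10}

# Renormalize so the tail probabilities sum to 1, then sample from it
total = sum(tail.values())
choices = list(tail)
weights = [tail[r] / total for r in choices]

random.seed(0)  # reproducible for the example
print(random.choices(choices, weights=weights, k=1)[0])
```

The high-probability “safe” answers are excluded entirely, which is why the technique tends to surface more distinctive (if riskier) responses.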

EBTS the Movie, “Inside the FBI’s EBTS”: Using Google’s NotebookLM to Create Videos From Non-Copyrighted Material

Do you want to skip the book and watch the movie version? Thanks to Google’s NotebookLM, you can.

I used the Federal Bureau of Investigation’s Electronic Biometric Transmission Specification (EBTS) for this exercise.

What should you NOT upload to NotebookLM?

But there are two things I need to say about the EBTS:

  • First, the EBTS is a public document and not a top secret document. You can download the EBTS yourself from the https://fbibiospecs.fbi.gov/ebts-1/approved-ebts-1 URL. For my test I used version 11.3 of the EBTS from earlier this year.
  • Second, the EBTS is a public domain document and is not copyrighted. This is something I need to emphasize. If you’re going to take a magazine article and make a movie out of it, the copyright holder may have something to say about that.

Both points are important. If you want to upload your employer’s confidential report into NotebookLM for analysis…well, you probably shouldn’t. But the public, non-copyrighted EBTS is safe for this exercise.

Uploading the EBTS to NotebookLM

So I uploaded the EBTS into NotebookLM, and as expected, I received a short text summary of the document.

“This document outlines the technical specifications for the electronic exchange of biometric and biographic information between various law enforcement agencies and the FBI’s Criminal Justice Information Services (CJIS) Next Generation Identification (NGI) System. It details the Transaction Offense Types (TOTs), which are the standardized requests and responses used for services such as identification, verification, investigation, and data management. Furthermore, the text specifies the precise data fields, formats, and codes required for the submission and retrieval of diverse biometric data, including fingerprints, palm prints, facial images, and iris scans, while also setting forth image quality specifications for scanner and printer certification.”

Now I could continue to query NotebookLM about the document, but I chose to request a video overview instead. This feature was introduced a few months ago, but I missed it.

“Video Overviews transform the sources in your notebook into a video of AI-narrated slides, pulling images, diagrams, quotes, and numbers from your documents. They distill complex information into clear, digestible content, providing a comprehensive and engaging visual deep dive of your material.” 

So I launched the video overview creation feature, and waited. As I waited, I mused upon the time it would take me to create this video manually, and I also mused on the usual LLM warning that the result may contain inaccuracies.

I didn’t have to wait that long, maybe 15 minutes, and Google delivered this 7-minute video.

Inside the FBI’s EBTS. Created by Google NotebookLM based upon EBTS Version 11.3.

Not too bad…especially considering that the video was created based upon a single source. Imagine if I had provided multiple sources, such as an old version of the Electronic Fingerprint Transmission Specification (EFTS); then the video might have covered the evolution of the standard.

Oh, and I also created a 12-minute audio version, which NotebookLM structures as a two-host podcast. This is similar to the podcast I generated in late 2024 about…me.

Unpacking the EBTS standard. Created by Google NotebookLM based upon EBTS Version 11.3.

In an environment where many people like to watch or listen rather than read, this helps provide a quick overview. But you still have to dive into the document and read it to truly understand it.