Lyria…and Fingerprint Matching

Google Gemini has a new capability: artificial intelligence music generation.

It’s called Lyria.

“Since launching the Gemini app, we’ve built tools to encourage creative expression through images and video. Today, we’re taking the next step: custom music generation. Lyria 3, Google DeepMind’s latest generative music model, is rolling out today in beta in the Gemini app.”

I used this prompt to generate a Lyria tune:

“Create music for a wildebeest using a laptop to compare a latent fingerprint to a tenprint fingerprint”

Google’s response:

“I’ve created a unique atmospheric track for you that blends electronic data-processing sounds with the deep, resonant textures of a wildebeest’s world. You can play the music directly here in our chat!”

Or I could download it and share it with you.

Lyria.

Something Went Wrong: Last Night’s Partial YouTube Outage

On Tuesday afternoon around 5pm Pacific time I had wrapped up laptop work and was going to watch YouTube on my phone.

As some of you know, I couldn’t.

“Something went wrong.”

I saw other posts on LinkedIn that also reported on the outage, as well as other online accounts from foreign and domestic sources. I shared some of these on my Bredemarket Technology Firm Services LinkedIn page.

However, the only news from these sources was that hundreds of thousands of people were reporting the issue to Downdetector.

Downdetector, 5:15 pm PST yesterday.

However, the site wasn’t COMPLETELY down. If you had a direct link to a YouTube video, you could still watch it. I confirmed this by watching the YouTube version of one of my Bredemarket promotional videos.

Remember Tactical Goal 2C?

Then, a little over an hour later, YouTube was fully operational again.

After the fact, Google revealed the cause:

“An issue with our recommendations system prevented videos from appearing across surfaces on YouTube (including the homepage, the YouTube app, YouTube Music and YouTube Kids).”

Since recommendations appear almost everywhere, just about everything was affected. Because YouTube, like most other social services, can’t just show “the site”; it has to show what it thinks you should see.

Think about it. What would YouTube look like if it couldn’t recommend anything?

Now we know.

With no recommendations.

Privacy, by Google Gemini

Google’s concept:

“Abstract 3D render of a human silhouette made of shimmering frosted glass, iridescent light refracting through, symbolizing secure data encryption and zero-knowledge proofs, elegant and high-end.”

Personally I think it’s TOO abstract, but perhaps that’s just me.

I didn’t create a musical version of this on Instagram because stuff, but there’s a Facebook version here. Sadly non-embeddable…but that’s why you should join my Facebook Bredemarket Identity Firm Services group.

Impressionable Bots?

Update to my prior post: Google Analytics shows lower numbers for February 5.

Why?

Google Gemini suggests bots may be to blame.

The internet is full of “bots” (automated scripts from search engines or malicious actors).  

Google Analytics has an industry-leading database of known bots and filters them out very aggressively to give you “human” data.  

Jetpack also filters bots, but its list is different. Jetpack often catches fewer bots than Google, which usually results in Jetpack showing higher traffic numbers than GA.

Still unanswered: why did the bots swarm on that particular day?

Looks like disregarding the traffic is the correct choice.

The “Repurposing a Blog Post On YouTube via NotebookLM” Experiment

This is definitely an experiment. When I started, I had no idea how it would turn out. In the end I’m fairly satisfied with how NotebookLM repurposed my blog post as a YouTube video, but there were definitely some lessons learned to apply in future repurposing.

Ahrefs’ best way to get your product listed on LLMs

As we all know, there has been a partial shift from search engine optimization to answer engine optimization. The short version is that content performs well when it answers a question that someone poses to a large language model (LLM) such as Google Gemini or ChatGPT.

So how do we optimize our content for LLMs?

Yes, I know I could have asked an LLM that question, but I still do some old school things and attended a webinar instead.

I live-blogged Wednesday’s webinar, hosted by the Content Marketing Institute and sponsored by Ahrefs. The speaker was Ahrefs’ Ryan Law, the company’s Director of Content Marketing. As is usual with such affairs, the webinar provided some helpful information…which is even more helpful if you use Ahrefs’ tools. (Funny how that always happens. The same thing happens with Bredemarket’s white papers.)

One of the many topics Law addressed was the TYPE of content that resonates most with LLM inquirers. Law’s slide 20 answered this question.

“LLMs LOVE YOUTUBE”

Law then threw some statistics at us.

“YouTube has fast-become the most cited domain in AI search:

#1 in AI Overviews

#1 in AI Mode

#2 in ChatGPT

#2 in Gemini

#2 in Copilot

#2 in Perplexity”

So even if it isn’t number 1 on some of the engines themselves, it’s obviously high, and very attractive to inquirers.

But what of people like me who prefer the portability of text? It’s easier to quote from text than it is to take a short snippet of a video.

YouTube covers that also, since it automatically creates a transcript of every word spoken in a YouTube video.
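
If you ever want to pull that transcript text out programmatically (say, to quote it or to hand it to another tool), a short script will do it. Here is a minimal sketch, assuming the third-party youtube-transcript-api Python package, which isn’t mentioned anywhere in this post; the video ID is a placeholder.

```python
# Minimal sketch (not from the original post): fetch a video's auto-generated transcript
# as plain text, using the third-party youtube-transcript-api package
# (pip install youtube-transcript-api). This uses the classic get_transcript() call;
# newer 1.x releases of the package moved to an instance-based fetch() method.
from youtube_transcript_api import YouTubeTranscriptApi


def transcript_text(video_id: str) -> str:
    """Return the spoken-word transcript as one string, or '' if the video has none."""
    try:
        # Each entry is a dict like {'text': '...', 'start': 0.0, 'duration': 3.2}.
        entries = YouTubeTranscriptApi.get_transcript(video_id)
    except Exception:
        # Videos with no spoken words (like most Bredemarket videos) have no transcript.
        return ""
    return " ".join(entry["text"] for entry in entries)


if __name__ == "__main__":
    # Placeholder video ID; substitute a real one.
    print(transcript_text("VIDEO_ID_GOES_HERE")[:200])
```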

But…

Bredemarket’s problem

…most of the videos that Bredemarket has created have zero or few spoken words, which kinda sorta makes it tough to create a transcript.

For example, the “Landscape (Biometric Product Marketing Expert)” video that I frequently share on the Bredemarket blog for some odd reason is not only on WordPress, but also on YouTube. However, it has zero spoken words, and therefore no transcript.

This video (actually a short) DOES have a transcript.

“Yo, I’m the outlaw of this country sound, dropping rhymes that shake the ground.”

But I do have some YouTube videos with more extensive transcripts. And one of them suggests a possible solution to my desire to provide YouTube videos to LLMs.

Using Google’s NotebookLM to create videos from non-copyrighted material

A still from Bredemarket’s movie “Inside the EBTS.” Are you jealous, Stefan Gladbach?

Last November, I uploaded material to Google’s NotebookLM and asked the service to create a movie from it.

The material wasn’t authored by me, but by the U.S. Federal Bureau of Investigation. (Which meant that it wasn’t copyrighted.)

What was it?

Version 11.3 of the Electronic Biometric Transmission Specification (EBTS).

A few of you are already laughing.

For those who aren’t, the EBTS is a fairly detailed standard dictating how biometric and biographic data is exchanged between the FBI’s Next Generation Identification (NGI) system and other federal, state, and local automated biometric identification systems.

As a standard, it’s not as riveting as a Stephen King novel.

But NotebookLM made a movie out of it anyway.

Inside the FBI’s EBTS.

And once I uploaded the movie to YouTube, YouTube created a transcript.

First 21 seconds of the YouTube transcript of the video above.

So this potentially helps Bredemarket to be visible.

And if I want to follow Ryan Law’s advice and repurpose my content for YouTube, NotebookLM provides a method to do it.

Using Google’s NotebookLM to create videos from MY copyrighted material

Time for an experiment, the goal of which is to convert a Bredemarket blog post into a video with a minimum amount of effort.

NotebookLM can use either files or web links as source material for videos, so the easiest method to create my video is as follows:

  • Paste the web link to the blog post into NotebookLM.
  • Instruct NotebookLM to create a “Video Overview” from this single piece of content.

My blog post of choice is my post from earlier today, “Government Anti-Fraud Efforts: They’re Still Siloed.” The post contains a Gemini AI image of silos on the National Mall in Washington DC (inaccurately displaying the U.S. Capitol as NORTH of the Washington Monument). It also includes my text as well as links to outside sources such as the initial draft of H.R. 7270.

So let’s see what NotebookLM gives us.

The first thing that we get is Google’s summary of the content at the link.

NotebookLM’s summary of the web link I provided.

“1 source

The provided text examines the fragmented nature of U.S. anti-fraud efforts, arguing that current strategies lack a unified, government-wide approach. While the Stop Identity Fraud and Identity Theft Bill seeks to modernize identity verification through Treasury Department grants to states, the author contends that the proposal is too narrow. The legislative focus on mobile driver’s licenses and digital infrastructure fails to consolidate the many overlapping responsibilities held by the FBI, Secret Service, and FTC. This bureaucratic separation creates siloed operations where different agencies and state governments often work independently rather than as a cohesive front. Ultimately, the source suggests that despite new legislative attempts, the American response to identity theft remains decentralized and inefficient.”

But that’s just text. Time to create the video overview.

After I clicked the “Video Overview” button in the upper right.

A while later…

Even with relatively minimal content, video creation isn’t immediate. It has to shape the content into a narrative video, after all. Sadly, I forgot to time the process, but I received this video, “A Tale of Two Threats,” within a half hour.

A Tale of Two Threats (WordPress version). Created by Google NotebookLM based upon “Government Anti-Fraud Efforts: They’re Still Siloed.”

As is par for the course with NotebookLM, the narration is self-generated in a viewer-friendly form (“let’s pop the hood”). And it used my source material as a basis to narrate the tale of the U.S. Government’s responses to the “two threats” of terrorism and fraud. As my original blog post noted, the two responses have been quite different.

The video then takes portions of the blog post, including the list of agencies that are NOT part of H.R. 7270, as well as my example of what could happen if the Secret Service’s mission is compromised because of what some other agency is doing.

But it DOESN’T take other portions of my blog post, such as the potential shuttering of the Consumer Financial Protection Bureau, my reference to “evil Commie Chinese facial recognition algorithms,” or my graphic of silos on the Mall. NotebookLM generated its own cartoon graphics instead.

This image didn’t make the video, even though Google created it.

The final step

The first place where I uploaded the video was WordPress, so I could include it in this blog post. I’ll probably upload it to other places, but the second target is YouTube.

A Tale of Two Threats (YouTube version). Created by Google NotebookLM based upon “Government Anti-Fraud Efforts: They’re Still Siloed.”

And yes, there is a transcript, although it took a few minutes to generate. So now the bot’s text is out there for the LLMs to find.

First 24 seconds of the YouTube transcript of the video above.

Grading the experiment

I’ll give the experiment a B. It’s not really MY video, but it encapsulates some of my views.

NotebookLM users need to remember that when it creates audio and video content, it doesn’t simply parrot the source, but reshapes it. You may remember the NotebookLM 20-minute “Career Detective” podcast of my resume, in which a male and female bot talked about how great I am. My blog post was processed similarly.

If I want something that better promotes Bredemarket to LLM users, I need to shape the blog post to do the following:

  • Address some question that the LLM user asks.
  • Include text that promotes Bredemarket as the solution to the inquirer’s problems.

Anyway, I’ll keep these tips in mind when writing…and repurposing…future blog posts.

Not Only Amazon Stale (not Fresh), But Also Amazon Zero (not One)

With all the news about Amazon Fresh closing and more Amazon layoffs taking place, I missed a bit of news about the Amazon One palm-vein technology. But first a bit of history.

Amazon One in 2021

I believe I first wrote about Amazon One back in 2021, in a “biometrics is evil” post.

2021 TechCrunch article.

In that year, TechCrunch loudly proclaimed:

“While the idea of contactlessly scanning your palm print to pay for goods during a pandemic might seem like a novel idea, it’s one to be met with caution and skepticism given Amazon’s past efforts in developing biometric technology. Amazon’s controversial facial recognition technology, which it historically sold to police and law enforcement, was the subject of lawsuits that allege the company violated state laws that bar the use of personal biometric data without permission.”

Yes, Amazon was regarded as part of the evil fascist regime even when Donald Trump WASN’T in office.

Amazon One in 2025

Enrolling.

Which brings us to 2025, when Trump had returned to office and I enrolled in Amazon One myself to better buy things at the Upland, California Amazon Fresh. But the line was too long, so I went to Whole Foods, where my palm and vein may or may not have worked.

Amazon One in 2026

From https://amazonone.aws.com/help as of January 29, 2026.

And pretty soon we’ll ALL be going to Whole Foods since Amazon Fresh is rebranding or closing all its locations.

And when we get there, we won’t be using Amazon One.

“Amazon One palm authentication services will be discontinued at retail businesses on June 3, 2026. Amazon One user data, including palm data, will be deleted after this date.”

You know the question I asked. Why?

“In response to limited customer adoption…”

Of course, in Amazon’s case, “limited” may merely mean that billions and billions of people didn’t sign up, so it jettisoned the technology in the same way it jettisoned dozens of stores and thousands of employees.

The June date may or may not apply to healthcare, but who knows how long that will last.

So what now?

In my 2021 post I mentioned three other systems that used biometrics for purchases.

There was the notorious Pay By Touch (not notorious because of its technology, but the way the business was run).

There was the niche MorphoWave.

But the third system dwarfs them all.

“But the most common example that everyone uses is Apple Pay, Google Pay, Samsung Pay, or whatever ‘pay’ system is supported on your smartphone. Again, you don’t have to pull out a credit card or ID card. You just have to look at your phone or swipe your finger on the phone, and payment happens.”

And they’re so entrenched that even Amazon can’t beat them.

Or as I said after the latest round of Amazon layoffs:

“This, combined with its rebranding or closure of all Amazon Fresh stores, clearly indicates that Amazon is in deep financial trouble.

“Bezos did say that Amazon would fail some day, but I didn’t expect the company to fall apart this quickly.”

Groupthink From Bots

I participate in several public and private AI communities, and one fun exercise is to take another creator’s image generation prompt, run it yourself (using the same AI tool or a different tool), and see what happens. But certain tools can yield similar results, for explicable reasons.

On Saturday morning in a private community Zayne Harbison shared his Nano Banana prompt (which I cannot share here) and the resulting output. So I ran his prompt in Nano Banana and other tools, including Microsoft Copilot and OpenAI ChatGPT.

The outputs from those two generative AI engines were remarkably similar.

Copilot.
ChatGPT.

Not surprising, given the history of Microsoft and OpenAI. (It got more tangled later.)

But Harbison’s prompt was relatively simple. What if I provided a much more detailed prompt to both engines?

Create a realistic photograph of a coworking space in San Francisco in which coffee and hash brownies are available to the guests. A wildebeest, who is only partaking in a green bottle of sparkling water, is sitting at a laptop. A book next to the wildebeest is entitled “AI Image Generation Platforms.” There is a Grateful Dead poster on the brick wall behind the wildebeest, next to the hash brownies.

So here’s what I got from the Copilot and ChatGPT platforms.

Copilot.
ChatGPT.

For comparison, here is Google Gemini’s output for the same prompt.

Gemini.

So while there are more differences when using the more detailed prompt (see ChatGPT’s brownie placement), the Copilot and ChatGPT results still show similarities, most notably in the Grateful Dead logo and the color used in the book.

So what have we learned, Johnny? Not much, since Copilot and ChatGPT can perform many tasks other than image generation. There may be more differentiation when they perform SWOT analyses or other operations. As any good researcher would say, more funding is needed for further research.

But I will hazard two lessons learned:

  • More detailed prompts are better.
  • If you want visibly different outputs, use engines from different lineages; Copilot and ChatGPT tend to resemble each other.

AFOID With an Expanded A: If You Pay the Money, Who Needs REAL ID Anyway?

I’ve vented about this for years. Some people have vented about it, and debated it, for decades.

But before I launch into my rant, let me define the acronym of the day: AFOID. It stands for “acceptable form of identification.”

And for years (decades), we’ve been told that the ONLY acceptable form of identification to board a plane is a REAL ID, U.S. passport, or a similar form of identification. A REAL ID does not prove citizenship, but it does prove that you are who you say you are.

USA.GOV put it best:

“If you do not have a REAL ID-compliant driver’s license or state-issued ID, you will not be able to use it to:

“Access federal government facilities or military installations

“Board federally regulated commercial aircraft

“Enter nuclear power plants”

Pretty straightforward. Get a REAL ID (or other acceptable document such as a passport), or there are some things that YOU WILL NOT BE ABLE TO DO.

So you needed that AFOID by May 2025…

Whoops, I mean May 2027, because TSA is allowing exceptions for a couple of years.

Whoops, I mean probably never.

If you pay some bucks, you can use a MODERNIZED system. Biometric Update alerted me to this new item in the Federal Register.

“The Transportation Security Administration (TSA) is launching a modernized alternative identity verification program for individuals who present at the TSA checkpoint without the required acceptable form of identification (AFOID), such as a REAL ID or passport. This modernized program provides an alternative that may allow these individuals to gain access to the sterile area of an airport if TSA is able to establish their identity. To address the government-incurred costs, individuals who choose to use TSA’s modernized alternative identity verification program will be required to pay an $18 fee. Participation in the modernized alternative identity verification program is optional and does not guarantee an individual will be granted access to the sterile area of an airport.”

I’d love to see details of what “modernized” means. In today’s corporate environment, that means WE USE AI.

And AI can be embarrassingly inaccurate.

And if you want to know how seedy this all sounds, I asked Google Gemini to create a picture of a man waving money at a TSA agent. Google refused the request.

“I cannot fulfill this request. My purpose is to be helpful and harmless, and that includes refusing to generate images that promote harmful stereotypes, illegal activities, or depict bribery of public officials.”

So I had to tone the request down.

Differentiation…Again

I provided the background for this picture in a post in the Bredemarket Picture Clubhouse on Facebook.

And elsewhere. As usual, I enjoy repurposing.

But the point here is that those who differentiate truly stand out.

Credit to Zayne Harbison for the original generative AI prompt that I adapted. Because you should reference your sources.

Google Gemini.

So Much For Fake IDs

So someone used generative AI to create a “European Union – United Kingdom” identity card. And if that itself wasn’t a clear enough indication of fakery, they included a watermark saying it was generated.

So I tried something similar.

But Google Gemini blocked my attempt.

“I cannot create images of identification documents, including driver’s licenses, or include text that identifies the image as fake. I am also unable to generate images that depict an impossible or future date of birth, as requested.”

As did Grok.

“I’m sorry, but I can’t create or generate any image that replicates or imitates an official government-issued ID (even with “FAKE” written on it). This includes California REAL ID driver’s licenses or any other state/federal identification document.”

So I had to make it a little less real.

A lot less real.

Google Gemini.