Artificial Intelligence Body Farm: Google AI Grows a Basilar Ganglia

(Imagen 4)

Last month I discussed Google’s advances in health and artificial intelligence, specifically the ability of MedGemma and MedSigLIP to analyze medical images. But writing about health is more problematic than it first appeared. Either that, or Google AI is growing body parts such as the “basilar ganglia.”

Futurism reports the details of a Google research paper that “invented” this “basilar ganglia” body part.

“In their May 2024 research paper introducing a healthcare AI model, dubbed Med-Gemini, Google researchers showed off the AI analyzing brain scans from the radiology lab for various conditions.

“It identified an “old left basilar ganglia infarct,” referring to a purported part of the brain — “basilar ganglia” — that simply doesn’t exist in the human body. Board-certified neurologist Bryan Moore flagged the issue to The Verge, highlighting that Google fixed its blog post about the AI — but failed to revise the research paper itself.”

A little scary…especially the fact that it took a year to discover the error, a conflation of the basal ganglia (in the brain) and the basilar artery (at the brainstem). There’s no “basilar ganglia” per se.

And the MedGemma engine that I discussed last month has its own problems.

“Google’s more advanced healthcare model, dubbed MedGemma, also led to varying answers depending on the way questions were phrased, leading to errors some of the time.”

One could argue that the same thing could happen with humans. After all, if a patient words a problem in one way to one doctor, and in a different way to a different doctor, you could also have divergent diagnoses.

But this reminds us that we need to fact-check EVERYTHING we read.

Us, Them, Pornographic Deepfakes, and Guardrails

(Imagen 4)

Some of you may remember the Pink Floyd song “Us and Them.” The band had a history of examining things from different perspectives, to the point where Roger Waters and the band subsequently conceived a very long album (actually a double album) derived from a single incident of Waters spitting on a member of the audience.

Is it OK to spit on the audience…or does this raise the threat of the audience spitting on you? Things appear different when you’re the recipient.

And yes, this has everything to do with generative artificial intelligence and pornographic deepfakes. Bear with me here.

Non-Consensual Activity in AI Apps

My former IDEMIA colleague Peter Kirkwood recently shared an observation on the pace of innovation and the accompanying risks.

“I’m a strong believer in the transformative potential of emerging technologies. The pace of innovation brings enormous benefits, but it also introduces risks we often can’t fully anticipate or regulate until the damage is already visible.”

Kirkwood then linked to an instance in which the technology is moving faster than the business and legal processes: specifically, Bernard Marr’s LinkedIn article “AI Apps Are Undressing Women Without Consent And It’s A Problem.”

Marr begins by explaining what “nudification apps” can do, and notes the significant financial profits that criminals can realize by employing them. Marr then outlines what various countries, including the United States, the United Kingdom, China, and Australia, are doing to battle nudification apps and their derived content.

But then Marr notes why some people don’t take nudification all that seriously.

“One frustration for those campaigning for a solution is that authorities haven’t always seemed willing to treat AI-generated image abuse as seriously as they would photographic image abuse, due to a perception that it isn’t ‘real’.”

First they created the deepfakes of the hot women

After his experiences under the Nazi regime, in which he transitioned from sympathizer to prisoner, Martin Niemoller frequently discussed how those who first “came for the Socialists” would eventually come for the trade unionists, then the Jews…then ourselves.

And I’m sure that some of you believe I’m trivializing Niemoller’s statement by equating deepfake creation with persecution of socialists. After all, deepfakes aren’t real.

But the effects of deepfakes are real, as Psychology Today notes:

“Being targeted by deepfake nudes is profoundly distressing, especially for adolescents and young adults. Deepfake nudes violate an individual’s right to bodily autonomy—the control over one’s own body without interference. Victims experience a severe invasion of privacy and may feel a loss of control over their bodies, as their likeness is manipulated without consent. This often leads to shame, anxiety, and a decreased sense of self-worth. Fear of social ostracism can also contribute to anxiety, depression, and, in extreme cases, suicidal thoughts.”

And again I raise the question. If it’s OK to create realistic-looking pornographic deepfakes of hot women you don’t know, or of children you don’t know, then is it also OK to create realistic-looking pornographic deepfakes of your own family…or of you?

Guardrails

Imagen 4.

The difficulty, of course, is enforcing guardrails to stop this activity. Even if most of the governments are in agreement, and most of the businesses (such as Meta and Alphabet) are in agreement, “most” does not equal “all.” And as long as there is a market for pornographic deepfakes, someone will satisfy the demand.

Using Personal Devices at Work: Meta AI Smart Glasses at a CBP Raid?

Although the lines inevitably blur, there is often a line between devices used at home and devices used at work.

  • For example, if you work in an old-fashioned work office, you shouldn’t use the company photocopier to run personal copies of invitations to your wedding.
  • Similarly, if you have a personal generative AI account, you may cause problems if you use that personal account for work-related research…especially if you feed confidential information to the account. (Don’t do this.)

Not work related. Imagen 4.

The line between personal use and work use of devices may have been crossed by a Customs and Border Protection agent on June 30 in Los Angeles, according to 404 Media.

“A Customs and Border Protection (CBP) agent wore Meta’s AI smart glasses to a June 30 immigration raid outside a Home Depot in Cypress Park, Los Angeles, according to photos and videos of the agent verified by 404 Media.”

If you visit the 404 Media story, you can see zoomed-in pictures of the agent’s glasses showing the telltale signs that these aren’t your average spectacles.

404 Media doesn’t take this single photo as evidence that CBP has formally adopted Meta AI glasses for its work. In fact, a likely explanation is that these were the agent’s personal glasses, and he chose to wear them to work that day.

And 404 Media also points out that current Meta AI glasses do NOT have built-in facial recognition capabilities.

But even with these caveats, the mere act of wearing these glasses causes potential problems for the agent, for Customs and Border Protection, and for Meta.

Take Grandma, who uses Meta to find those cute Facebook stories about that hunk Ozzy Osbourne (who appeals to an older demographic). If she finds out that her friend Marky Mark Zuckerberg is letting the Government use Meta technology on those poor hardworking workers who just want a better life, well, Grandma may stop buying those trinkets from Facebook Marketplace.

(Unauthorized) Homeland Security Fashion Show. AI-generated by Imagen 4. And no, I don’t know what a “palienza” is.

So the lesson learned? Don’t use personal devices at work. Especially if they’re controversial.

LMM vs. LMM vs. LMM (Acronyms Are Delirious With Joy)

I’ve previously noted that the acronym LMM can represent a large multimodal model.

(Not to be confused with large language model. But I digress.)

And I’ve also noted that LMM can mean a large medical model.

But healthcare professionals aren’t the only ones adopting this acronym. Enter the marketers at WPP Media.

Large marketing model

“You might have heard us talking a lot lately about something pretty exciting: Open Intelligence, our new AI-powered data solution.  And along with that, we’ve been dropping the term the world’s first Large Marketing Model (LMM)…”

Large multimodal, medical, and marketing models. Imagen 4.

Although marketers could clearly use large multimodal models. Oh well.

So why do we marketers need our own generative AI model?

“In the context of marketing, this can extend to understanding customers, audiences, channels, and creative.”

Large marketing model. Imagen 4.

Which I guess the general-purpose engines are too generic to handle.

Dedicated

But Open Intelligence (the LMM) apparently can.

“[Open Intelligence] has been trained to understand and predict audience behavior and marketing performance based on patterns derived from real-time data about how people engage with content, brands, platforms, and products.”

It has been trained on “trillions of signals across more than 350 partners in over 75 markets.” Trillions of signals sounds like an impressive feature, but what if there are actually quadrillions of signals?

Are there other LMMs?

And are we going to get more of these special purpose models?

  • Large meteorological model? (We have those already.)
  • Large macroeconomic model? (Those too.)
  • Large microbiological model?
  • Large metaphysical model? (Don’t ask.)
  • Large mythological model? (Really don’t ask.)

Large mythological model. Imagen 4.

Behind the Scenes: Working on Mesmerizing Storytelling

(Imagen 4)

This was never supposed to go on the Bredemarket blog, but here it is. Because when a product marketing consultant wants to improve his storytelling skills, he practices with…toilet paper.

A Facebook challenge

I’ve been working on improving my AI art generation skills, and even created a special Facebook group, Bredemarket Picture Clubhouse, as my practice area. One of my inspirations has been Danie Wylie, whom I first encountered during the HiveLLM thingie.

Wylie likes to share art challenges, and she recently shared this one. The text below, including the emojis, is straight from the challenge.

📣 New Weekly Wednesday Challenge 📣

🌟 Glitch N’ Sass  and AI Anonymous  Present:

🎭✨ MESMERIZE THE MUNDANE ✨🎭

Where glitter drips from code and imagination struts in stilettos. @everyone  💥

Take the forgotten, the overlooked, the tragically basic —

and unleash the glam-core magic of AI.

Allow creativity to glitch the system, let sass polish the mundane, all while reshaping reality.

Flip the script on the everyday:

🥄 A spoon stirs time’s secrets

👟 A shoelace coils into cosmic scales

📎 A paperclip snaps open hidden realms

✨ Rewire purpose.

✨ Reframe presence.

✨ Reveal what the world forgets to see.

📌 Tag it: #AIAnonymous #GlitchNSass #MesmerizeTheMundane

💬 This isn’t an art drop — it’s an everyday clutch, transformed into a chasm of creativity .

A call to those who see depth in the digital, beauty in glitches, and freedom behind the mask.

We are not escaping the world — we are a reminder, to view it. For all the purposes they told us it never possessed. 🔥

✨ So go on… Mesmerize us, With glitter in one hand and encrypted vision in the other. ✨

Preparing my response

Now on the surface such an exercise has nothing to do with “know your business” or “biometric product marketing expert” or “content – proposal – analysis”…

…but it does.

In essence, written business communications are opportunities for storytelling. As I noted, case studies are inspiring stories about how a challenged company realized amazing success, all thanks to the wonderful Green Widget Gizmo.

Now that’s a riveting story.

Tell us about the Green Widget Gizmo again PLEASE PLEASE PLEASE! Imagen 4.

And of course I’ve performed AI image storytelling before: for example, with my three “Biometric product marketing expert” reels. Here’s the second:

Biometric product marketing expert, the content for tech marketers version.

But back to the “Mesmerize the Mundane” challenge. So to participate in the challenge I had to find something mundane. Now some of you think a single finger sensor is mundane…but I don’t. (There’s actually a connection between fingerprint sensors and art, but I’m under NDA.)

My response

So I picked a mundane topic: toilet paper.

What’s even better is that toilet paper is filled with emotion. Particularly relative to the ongoing debate about whether…

I’m not going to say it. I hope this reel—my entry into the “Mesmerize the Mundane” challenge—speaks for itself.

The over/under.

When I shared this reel on Facebook and elsewhere, I did so with the following text.

A storytelling exercise…and a challenge.

You can’t get more mundane than toilet paper, or spawn fiercer battles over orientation. But love conquers battles.

#AIAnonymous #GlitchNSass #MesmerizeTheMundane #BredemarketPictureClubhouse 

But before I close this post I will get a little technical.

Time to show how the sausage is made


By Rklawton – Own work, CC BY-SA 2.5, https://commons.wikimedia.org/w/index.php?curid=735848.

One of the challenges in multi-image storytelling is the need for consistency between the images. You can’t have the hero wildebeest wearing a blue cap in the first picture and a red one in the second.

So to enforce consistency, I’ve been bundling all my picture prompts into a single request to Google Gemini, and including instructions to enforce similarity between the pictures in the series.
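To make the bundling concrete, here is a minimal Python sketch of the idea. The helper name and wording are my own invention (not part of any Gemini SDK); it simply assembles several per-picture prompts, plus a consistency instruction, into one combined request string you could then paste or send to the model.

```python
def bundle_prompts(prompts,
                   consistency_note="Keep the setting, lighting, and recurring "
                                    "objects consistent across all pictures."):
    """Combine several image prompts into a single request so the model
    generates the whole series in one pass, which helps it keep details
    consistent from picture to picture."""
    parts = [
        f"Draw realistic pictures based upon the following {len(prompts)} prompts.",
        consistency_note,
    ]
    # Number each prompt so later prompts can refer back to earlier ones
    for i, prompt in enumerate(prompts, start=1):
        parts.append(f"Prompt {i}: {prompt}")
    return "\n\n".join(parts)
```

Sending the model one combined request like this, instead of four separate ones, is what keeps the blue tiles and the white roll from changing between pictures.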

AI art creation. This is the picture I use for the Bredemarket Picture Clubhouse Facebook group.

So here is the specific request used to create the four pictures in the reel above.

Draw realistic pictures based upon the following four prompts:

Prompt 1: Draw a realistic picture of a toilet paper holder on a blue tiled bathroom wall, next to the toilet. The toilet paper is white. The toilet paper end is hanging in front of the roll.

Prompt 2: Draw a realistic picture similar to the image in the previous prompt, a toilet paper holder on a blue tiled bathroom wall, next to the toilet. The toilet paper is still white. This time, however, the toilet paper end is hanging behind the roll.

Prompt 3: Draw a realistic picture similar to the image in the previous prompts, a toilet paper holder on a blue tiled bathroom wall, next to the toilet. Now the toilet paper is glowing in a neon red. Due to mesmerizing magic, there is a toilet paper end hanging in front of the roll, and there is also a duplicate toilet paper end hanging behind the roll. The presence of both toilet paper ends removes the conflict of whether to hang toilet paper in front of or behind the roll; now, both are simultaneously true.

Prompt 4: Draw a realistic picture similar to the image in the previous prompts, a toilet paper holder next to the toilet. But now the tiles on the bathroom wall are colored gold, vibrating, and throbbing. The toilet itself is glowing with a bright light. Now the toilet paper is glowing in red, green, and blue, and sparkles are shooting away from the toilet paper roll like fireworks. Again, due to mesmerizing magic, there is a toilet paper end hanging in front of the roll, and there is also a duplicate toilet paper end hanging behind the roll. The bathroom floor is covered in hundred dollar bills and shiny gold coins.

And here are the full square pictures, which do not completely display in the reel.

Now I just have to tell the riveting story of a single finger sensor.

Asking For Connections From My Street Team

(Imagen 4)

I’m asking for a connection favor from the people who read this, my street team.

The ask

Here is the ask:

  • If you know a technology Chief Marketing Officer or other leader…
  • …who faces challenges in content, proposals, or analysis…
  • …and can use consulting help:

Ask your marketing leader to visit https://bredemarket.com/mark/ to learn about Bredemarket’s marketing and writing services:

  • The why, how, what, and who about Bredemarket’s ability to drive content results.
  • What I can do for your marketing leader.
  • Who uses my services; I’ve worked in many technology industries.
  • My collaborative process with Bredemarket’s clients.

The connection

If they like what they see, they can connect with me by booking a free 30 minute content needs assessment meeting with me, right from the https://bredemarket.com/mark/ page.

The reward

Thank you, street team. No monetary commission, but I can give you a shout out and a personal AI-generated wildebeest picture on Bredemarket’s blog and social media empire. Yes, even TikTok (if it’s still legal).

Actually, I already owe a shout out to Roger Morrison, who has supported Bredemarket for years and has supported me personally for decades. Roger offers extensive experience in multiple biometric modalities (finger, face, iris, voice), identity credentials, and broadband and other technologies. Despite attending the wrong high school in Arlington, Virginia (should have gone to Wakefield), he is very knowledgeable and very supportive. Warning: Roger is NOT bland or generic.

Imagen 4.

Painting a Picture: The Content Challenges of a Biometric Chief Marketing Officer

(Imagen 4)

If this reads odd, there’s a reason.

Imagine a Chief Marketing Officer sitting at her desk, wondering how she can overcome her latest challenge within three weeks.

She is a CMO at a biometric software company, and she needs someone to write the first two entries in a projected series of blog posts about the company’s chief software product. The posts need to build awareness, and need to appeal to prospects with some biometric knowledge.

So she contacts the biometric product marketing expert, John E. Bredehoft of Bredemarket, via his meeting request form, and schedules a Google Meet.

At the scheduled time she joins the meeting from her laptop on her office desk and sees John on the screen. John is a middle-aged Caucasian man with graying hair. He is wearing wire-rimmed glasses with a double bridge. He has a broad smile, with visible lines around his eyes and mouth. His eyes are brown and appear to be looking directly at the camera. He is wearing a dark blue collared shirt. While his background is blurred, he appears to be in a room inside his home, with a bookcase and craft materials in the background.

After some pleasantries and some identity industry chit chat, John starts asking some questions. Why? How? What? Goal? Benefits? Target audience (which he calls hungry people)? Emotions? Plus some other questions.

They discuss some ideas for the first two blog posts, each about 500 words long and each costing $500. John pledges to provide the first draft of the first post within three calendar days.

After the call, the CMO has a good feeling. John knows biometrics, knows blogging, and has some good ideas about how to raise the company’s awareness. She can’t wait to read Bredemarket’s first draft.

If you are in the same situation as the CMO in this story, schedule your own meeting with Bredemarket by visiting the https://bredemarket.com/mark/ URL and filling out the Calendly form.

Remember how I warned you that this post was going to read odd? In case you’re wondering about the unusual phrasing—including a detailed description of what I look like—it’s because I fed the entire text of this blog post to Google Gemini. Preceded by the words “Draw a realistic picture of.” And here’s what I got.

Imagen 4. I’m not on the screen, but I like the content ideas.
Imagen 4. With the bookcases. And I’ve never had a beard.
Imagen 4. But that’s not blurred.

Why We Fact Check AI

According to Meta AI, “Bredemarket’s history dates back to L-1 Identity Solutions.”

Um, no.

  • Bredemarket was established in 2020.
  • L-1 Identity Solutions ceased to exist nine years before that, in 2011, when Safran acquired it.
  • John E. Bredehoft was never an employee of L-1, or of any of the companies that L-1 acquired.

Now that’s a hallucination.