What if this deepfake video DIDN’T have the “Just kidding, I’m not real” disclaimers?
Tag Archives: artificial intelligence
Fictional “The Amazing Computer” TV Show From 1975
Imagen 4 tried to generate this picture, but even with my second prompt attempt (below) it didn’t understand what an FBI tenprint card was.
I couldn’t get Walter Cronkite in there either, so I settled for a generic newsman.
My prompt:
Please generate a realistic picture of a 1975 television show called The Amazing Computer. The picture shows an FBI fingerprint card with ten rolled inked prints and four slap prints sitting on a gargantuan flatbed scanner. A newsman is talking about the technology.
For the real (not fictional) story, read what Dorothy Bullard wrote.
Veo 3 and Deepfakes
(Not a video, but a still image from Imagen 4)
My Google Gemini account does not include access to Google’s new video generation tool Veo 3. But I’m learning about its capabilities from sources such as TIME magazine.
Which claims to be worried.
“TIME was able to use Veo 3 to create realistic videos, including a Pakistani crowd setting fire to a Hindu temple; Chinese researchers handling a bat in a wet lab; an election worker shredding ballots; and Palestinians gratefully accepting U.S. aid in Gaza. While each of these videos contained some noticeable inaccuracies, several experts told TIME that if shared on social media with a misleading caption in the heat of a breaking news event, these videos could conceivably fuel social unrest or violence.”
However, TIME notes that the ability to create fake videos has existed for years. So why worry now?
“Veo 3 videos can include dialogue, soundtracks and sound effects. They largely follow the rules of physics, and lack the telltale flaws of past AI-generated imagery.”
Some of this could be sensationalism. After all, simple text can communicate misinformation.
And you can use common sense to detect deepfakes…sometimes.
Mom’s spaghetti
Then again, some of the Veo 3 deepfakes look pretty good. Take this example of Will Smith slapping down some pasta at Eminem’s restaurant. The first part of the short was generated with old technology, the last part with Veo 3.
Now I am certain that Google will attempt to put guardrails on Veo 3, as it has attempted to do with other products.
But what will happen if a guardrail-lacking Grok video generator is released?
Or if someone creates a non-SaaS video generator that a user can run on their own with all guardrails disabled?
Increase the impact of your deepfake detection technology
In that case, deepfake detection technology will become even more critical.
Does your firm offer deepfake detection technology?
Do you want your prospects to know how your technology benefits them?
Here’s how Bredemarket can help you help your prospects: https://bredemarket.com/cpa/
Expanding My Generative AI Picture Prompts
I’m experimenting with more detailed prompts for generative AI.
If you haven’t noticed, I use a ton of AI-generated images in Bredemarket blog posts and social media posts. They primarily feature wildebeests, wombats, and iguanas, although sometimes they feature other things.
My prompts for these images are usually fairly short, no more than two sentences.
But when I saw some examples of prompts written by Danie Wylie (yes, the same Danie Wylie who wrote the Facebook post earlier this year at the https://m.facebook.com/story.php?story_fbid=pfbid0nvmhyuLpn3jwMv8K8sbK5EXfS4kcpjfWHicgj4BJhdFLMme87P5fvPSYf9CwjRH7l&id=100001380243595&mibextid=wwXIfr URL), I realized that I could include a lot more detail in my own image prompts.
If you read Wylie’s Facebook post, or my own subsequent post at the https://bredemarket.com/2025/06/03/when-hivellm-pitches-an-anti-fraud-professional/ URL, then you know exactly what the picture depicts.
Plus some other stuff buried in the details.
By the way, here is my prompt, which Google Gemini (Imagen 4) stored as “Eerie Scene: Sara’s Fake Bills.”
“Draw a realistic picture of a ghost-like woman wearing a t-shirt with the name “Sara.” She is holding out a large stack of dollar bills that is obviously fake because the picture on the bill is a picture of a clown with orange face makeup wearing a blue suit and a red tie. Next to Sara is a dead tree with a beehive hanging from it. Bees buzz around the beehive. A laptop with the word “HiveLLM” on the screen sits on the rocky ground beneath the tree. It is night time, and the full moon casts an eerie glow over the landscape.”
I didn’t get exactly what I wanted—the bills are two-faced—but close enough. And the accident of two-faced bills is a GOOD thing.
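One way to think about these longer prompts is as a set of labeled scene components assembled into one string. Here is a minimal sketch (my own toy helper, not any official Gemini or Imagen API) that builds a detailed prompt in the spirit of the "Sara's Fake Bills" prompt above; all of the field names are my own invention.

```python
def build_prompt(style, subject, props, setting, lighting):
    """Join labeled scene components into one detailed image prompt."""
    parts = [f"Draw a {style} picture of {subject}."]
    parts.extend(props)    # concrete objects and their details
    parts.append(setting)  # where the scene takes place
    parts.append(lighting) # time of day, mood

    return " ".join(parts)

prompt = build_prompt(
    style="realistic",
    subject='a ghost-like woman wearing a t-shirt with the name "Sara"',
    props=[
        "She is holding out a large stack of obviously fake dollar bills.",
        "Next to her is a dead tree with a beehive hanging from it.",
    ],
    setting="A laptop sits on the rocky ground beneath the tree.",
    lighting="It is night, and a full moon casts an eerie glow over the landscape.",
)
print(prompt)
```

The point of the structure is simply that each component (subject, props, setting, lighting) forces you to specify details you would otherwise leave to the model's imagination.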
How detailed are your picture prompts?
LiveView Technologies and Agentic AI-powered Contextual Detection and Behavioral Deterrence
Government Technology shared an article entitled “Talking Agentic AI Cameras: Can They Prevent Crime?” In the article, Nikki Davidson spoke with Steve Lindsey of LiveView Technologies about the surveillance company’s newest capability:
“The technology analyzes footage to detect activity and determine a best course of action. This can include directly speaking to individuals with personalized, AI-generated voice warnings, without human intervention….
“Lindsey explained the newest update with the technology uses contextual detection as well as generative AI behavioral deterrence. He said the new tech doesn’t just automate tasks; it gives AI agents the ability to make smart decisions based on evolving situations — such [as] how to react to different scenarios.”
But a video is worth 10,000 words, so watch the video.
Lindsey clarifies that the intent of the agentic technology is to handle low-priority situations (such as trespassing on private property), while leaving high-priority situations in the hands of human security personnel.
I wonder whether LiveView Technologies’ object recognition capabilities can detect guns, as other video analytics programs do.
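LiveView Technologies hasn't published how its routing works, but the low-priority/high-priority split Lindsey describes can be sketched generically. In this hypothetical example, the class names, confidence threshold, and detection format are all my assumptions, not the company's actual implementation.

```python
# Hypothetical routing of object-detection results by priority.
HIGH_PRIORITY = {"gun", "knife"}       # escalate to human security personnel
LOW_PRIORITY = {"person", "vehicle"}   # candidates for automated deterrence

def route_detections(detections, min_confidence=0.6):
    """Split (label, confidence) detections into escalated and automated lists."""
    escalate, automate = [], []
    for label, confidence in detections:
        if confidence < min_confidence:
            continue  # too uncertain to act on at all
        if label in HIGH_PRIORITY:
            escalate.append(label)
        elif label in LOW_PRIORITY:
            automate.append(label)
    return escalate, automate

escalate, automate = route_detections(
    [("person", 0.91), ("gun", 0.77), ("vehicle", 0.40)]
)
print(escalate, automate)  # the gun is escalated; the trespassing person
                           # is handled by the agent; the low-confidence
                           # vehicle detection is ignored
```

The interesting design question, of course, is what belongs in each set, and who decides.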
Don’t Learn to Code 2
(Imagen 4)
As a follow-up to my first post on this topic, look at the Guardian’s summary article, “Will AI wipe out the first rung of the career ladder?”
The Guardian cites several sources:
- Anthropic states (possibly in self-interest) that unemployment could hit 20% in five years.
- One quarter of all programming jobs have already vanished in the last two years.
- A LinkedIn executive echoed the pessimism about the future (while LinkedIn hypes its own AI capabilities to secure the dwindling number of jobs remaining).
- The Federal Reserve cited high rates of unemployment (5.8%) and underemployment (41.2%) among recent college graduates.
Read the entire article here.
Today’s Large Multimodal Model (LMM) is FLUX.1 Kontext
Do you remember when I explained what a Large Multimodal Model (LMM) is, and why an LMM is crucial to correctly render text in generative AI-created images?
Well, Black Forest Labs (with an Impressum…in Delaware) announced a new LMM last Thursday:
“FLUX.1 Kontext marks a significant expansion of classic text-to-image models by unifying instant text-based image editing and text-to-image generation. As a multimodal flow model, it combines state-of-the-art character consistency, context understanding and local editing capabilities with strong text-to-image synthesis.”
FLUX.1 Kontext has also received TechCrunch coverage.
And yes, the company does have a German presence.
(And no, the picture is obviously not from FLUX.1 Kontext. It’s from Imagen 4.)
Don’t Learn to Code
(Imagen 4)
Some of you may remember the 2010s, when learning to code would solve all your problems forever and ever.
There was even an “Hour of Code” in 2014:
“The White House also announced Monday that seven of the nation’s largest school districts are joining more than 50 others to start offering introductory computer science courses.”
But people on the other side of the aisle endorsed the advice too, albeit less charitably:
“On its own, telling a laid-off journalist to “learn to code” is a profoundly annoying bit of “advice,” a nugget of condescension and antipathy. It’s also a line many of us may have already heard from relatives who pretend to be well-meaning, and who question an idealistic, unstable, and impecunious career choice.”
But the sentiment was the same: get out of dying industries and do something meaningful that will set you up for life.
Well, that’s what they thought in the 2010s.
Where are the “learn to code” advocates in 2025?
They’re talking to non-person entities, not people:
“Microsoft CTO Kevin Scott expects the next half-decade to see more AI-generated code than ever — but that doesn’t mean human beings will be cut out of the programming process.
“”95% is going to be AI-generated,” Scott said when asked about code within the next five years on an episode of the 20VC podcast. “Very little is going to be — line by line — is going to be human-written code.””
So the 2010s “learn to code” movement has been replaced by the 2020s “let AI code” movement. While there are valid questions about whether AI can actually code, it’s clear that companies would prefer not to hire human coders, whom they perceive to be as useless as human journalists.
Identity-Bound Non-Person Entities
In my writings on non-person entities (NPEs), I have implicitly assumed that NPEs go their own way and do their own thing, separate from people. So while I (John Bredehoft) have one set of permissions, the bot N. P. E. Bredemarket has “his” own set of permissions.
Not necessarily.
Anonybit and SmartUp have challenged my assumption, saying that AI agents could be bound to human identities.
“Anonybit…announced the first-ever live implementation of agentic commerce secured by decentralized biometrics, marking a significant milestone in the evolution of enterprise AI.
“Through a strategic partnership with SmartUp, a no-code platform for deploying enterprise AI agents, Anonybit is powering authenticated, identity-bound agents in real-world order, payment, and supply chain workflows….
“Anonybit’s identity token management system enables agents to operate on behalf of users with precise, auditable authorization across any workflow—online, in-person, or automated.”
So—if you want to—all your bot buddies can be linked to you, and you bear the responsibility for their actions. Are you ready?
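To make the idea concrete, here is an illustrative toy model of an "identity-bound" agent: the bot can only act within the permissions of the human it is bound to, and every attempt is logged against that human. This is my own sketch, not Anonybit's or SmartUp's actual token system.

```python
from dataclasses import dataclass, field

@dataclass
class BoundAgent:
    name: str
    principal: str            # the human this agent is bound to
    permissions: frozenset    # actions the human has authorized
    audit_log: list = field(default_factory=list)

    def act(self, action):
        """Attempt an action; allow it only if the bound human authorized it."""
        allowed = action in self.permissions
        # Every attempt, allowed or not, is attributed to the human principal.
        self.audit_log.append((self.principal, self.name, action, allowed))
        return allowed

bot = BoundAgent(
    name="N. P. E. Bredemarket",
    principal="John Bredehoft",
    permissions=frozenset({"place_order", "check_inventory"}),
)
print(bot.act("place_order"))     # allowed: within the bound identity's scope
print(bot.act("transfer_funds"))  # denied, and logged against John
```

The uncomfortable part is the audit log: when the bot misbehaves, the trail leads straight back to the human it is bound to.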
(Imagen 4)
Make America Hallucinate Again
While some are concentrating on the political aspects of this story, I would like to focus on the technological aspects.
“[Dr. Katherine] Keyes is cited in a paper titled ‘Changes in mental health and substance use among US adolescents during the COVID-19 pandemic,’ which appears on page 52 of the MAHA report and lists JAMA Pediatrics as the journal. A representative for the journal confirmed to ABC News the paper does not exist.”
Quoted from https://abcnews.go.com/Politics/rfk-jrs-maha-report-contained-existent-studies/story?id=122321059
Anybody who has paid attention over the last two years knows EXACTLY what happened.
The word “hallucination” comes to mind.
Figure it out yet?
Someone took a shortcut in researching and/or writing the MAHA paper…something that all the generative AI companies are saying is a perfectly wonderful thing to do. After all, you won’t lose your job to AI…you will lose your job to someone who uses AI’s “help.” Until AI hallucinates and puts organic food dye-free egg whites on your face.
The continued inaccuracies in generative AI-authored writing are not limited to one political movement.
(Imagen 4)
