How to Find LinkedIn’s “Most Recent” Feed

It was Sunday afternoon, and I was reading my LinkedIn feed. (Yes, I know; the first step is admitting you have a problem.)

Except that I was seeing stuff that was weeks old. Posts about “upcoming” trade shows that already took place. News about the “upcoming” Prism Project deepfake report that was released long ago.

I don’t know why LinkedIn’s algorithm thinks I need to read ancient history. What’s next…reports that Enron may be a fraud?

The chronological feed

So I decided to bypass the algorithm and access the tried and true chronological feed. You know, the way things used to work before we supposedly got “smart.”

(As an aside, I remember when FriendFeed would AUTOMATICALLY update the chronological feed when new content was posted. The way that the pitchforks were raised, you would have thought the world ended. As it turned out, the world wouldn’t end until August 10, 2009…or April 10, 2015. But I digress.)

Anyway, I went to the feed to look for the switch to swap to chronological…but could find no such switch.

So I checked Google Gemini, and discovered that the “Most Recent” feed switch was buried in the Settings. For mobile LinkedIn users, it was in the “Account preferences” section, in the “Feed preferences.”

Except that it wasn’t.

Whack a Mole

“Feed preferences” only governed display or non-display of political content. The option below “Feed preferences,” “Preferred feed view,” was the one I wanted.

Preferred feed view.

Color me conspiratorial, but I think everyone in the Really Big Bunch—Microsoft (LinkedIn), Meta (Facebook), and the others—likes to play “Whack a Mole” with the location of the chronological feed setting so that we give up and stick with the algorithmic feed of The Things We Are Supposed To See.

So the instructions here, written on June 22, 2025, may be invalid on June 22, 2026. Or July 22, 2025. Or June 23, 2025.

But for this moment I have the chronological feed set on LinkedIn, and since it takes effort to change it back, I don’t know when I will.

Update

When I returned to LinkedIn to share a LinkedIn version of this post, my preferred feed view had been reset to “most relevant.”

Would You Like Ads Embedded in Your Generative AI Responses? No? Too Bad.

(Imagen 4)

There is no such thing as a free lunch.  

Researching shoes

This applies to our incessant use of generative AI, as Micah Willbrand found out the hard way.

Willbrand, a product expert who has worked for multiple identity companies, started his story by saying that he uses Perplexity AI.

I realize that many of you just fell off your chairs in shock. Because the first rule of Generative AI is that you ALWAYS talk about ChatGPT. Well, there are other generative AI tools. Deal with it.

From https://www.perplexity.ai/.

Anyway, Willbrand was prompting Perplexity about shoes, and awaiting the responses. 

Which were unreadable.

“Every result forced inserted an Apple map with shoe stores onto the response page. It was 2/3rds the screen. Now as a text based app primarily this is super annoying because you can’t see … The …. Text.”

Monetization gone bad

Should we be surprised? No.

Now I don’t fault software vendors for trying to make money. I have no sympathy for those who complain that Threads should never ever have ads because Facebook makes a bajillion dollars. If Threads isn’t making money for Meta, then Meta will kill it.

Where I DO have a problem is when a software vendor’s monetization efforts interfere with my ability to use the software.

This applies to some smartphone games in which you play the game for 30 seconds before you’re locked into watching 60 seconds of ads.

And this also applies to what I fear will be the future format for generative AI responses.

“The best way to overcome a marketing challenge is to do something, rather than surrendering to paralysis. But before you begin…what would you do for a Klondike bar?”

Sadly I don’t make any money off this.

Repurposing

And yes, this blog post was repurposed from something I wrote on the Bredemarket Technology Firm Services LinkedIn page. Now I just need an idea for a video…

Why Generic Pablum is Critical for Your Company—Critically Bad

(Imagen 4)

I spend a lot of time on LinkedIn and therefore endure the regular assault from the so-called LinkedIn “experts.”

You know them. 

  • The people who get all bent out of shape over this character—because it’s certain proof that you use “ChatGPT” (because there is no other generative AI tool) because no human ever uses em dashes.
  • And then in the next breath the LinkedIn “experts” slam people who don’t use “ChatGPT” to increase productivity. For example, jobseekers should use “ChatGPT” to “beat the ATS,” automatically fine-tune their resumes for every individual application, and apply to thousands of positions.
  • Oh, but the LinkedIn “experts” say you shouldn’t spray and pray. Tap into the hidden job market via our members-only gated website.

But that’s not the worst thing they say.

Formulate Safe Generic Pablum

When they’re not commanding you to avoid the em dash, the LinkedIn “experts” remind us that LinkedIn is a professional network. And that our communications must be professional.

  • No cat pictures.
  • No “life sucks” posts.
  • Nothing that would cause anyone any offense.

The ideal personal communication is this: “I am thrilled and excited to announce my CJIS certification!” 

The ideal business communication is this:

Yes, the “experts” wish that businesses said nothing at all. But if they do say something, a statement like this optimizes outcomes: “WidgetCorp is dedicated to bettering the technology ecosystem.”

Such a statement is especially effective if all your competitors are saying the same thing. This unity of messaging positions you as an industry leader.

Which enables you to…argh, I can’t do this any more. I am hating myself more and more with each word I type. Can I throw up now? This is emotionally painful.

Derek Hughes just sent me an email that describes this generic pablum. It read, in part:

“Everything reads like it was written by a robot on decaf.

“Same recycled tips. Same recycled tone. Somehow, it’s all… grey.”

Obliterate Safe Generic Pablum

If your company wants conversions—and I assume that you do—avoid the generic pablum and say something. 

This will bring your hungry people (target audience) to you.

And for the prospects that despise humanness and glory in generic pablum…if their focus is elsewhere, your focus won’t impede. Let them roam in the distance.

In the distance.

Fictional “The Amazing Computer” TV Show From 1975

Imagen 4 tried to generate this picture, but even with my second prompt attempt (below) it didn’t understand what an FBI tenprint card was.

I couldn’t get Walter Cronkite in there either, so I settled for a generic newsman.

My prompt:

Please generate a realistic picture of a 1975 television show called The Amazing Computer. The picture shows an FBI fingerprint card with ten rolled inked prints and four slap prints sitting on a gargantuan flatbed scanner. A newsman is talking about the technology.

For the real (not fictional) story, read what Dorothy Bullard wrote.

Veo 3 and Deepfakes

(Not a video, but a still image from Imagen 4)

My Google Gemini account does not include access to Google’s new video generation tool Veo 3. But I’m learning about its capabilities from sources such as TIME magazine.

Which claims to be worried.

“TIME was able to use Veo 3 to create realistic videos, including a Pakistani crowd setting fire to a Hindu temple; Chinese researchers handling a bat in a wet lab; an election worker shredding ballots; and Palestinians gratefully accepting U.S. aid in Gaza. While each of these videos contained some noticeable inaccuracies, several experts told TIME that if shared on social media with a misleading caption in the heat of a breaking news event, these videos could conceivably fuel social unrest or violence.”

However, TIME notes that the ability to create fake videos has existed for years. So why worry now?

“Veo 3 videos can include dialogue, soundtracks and sound effects. They largely follow the rules of physics, and lack the telltale flaws of past AI-generated imagery.”

Some of this could be sensationalism. After all, simple text can communicate misinformation.

And you can use common sense to detect deepfakes…sometimes.

Mom’s spaghetti 

Then again, some of the Veo 3 deepfakes look pretty good. Take this example of Will Smith slapping down some pasta at Eminem’s restaurant. The first part of the short was generated with old technology, the last part with Veo 3.

Now I am certain that Google will attempt to put guardrails on Veo 3, as it has attempted to do with other products.

But what will happen if a guardrail-lacking Grok video generator is released?

Or if someone creates a non-SaaS video generator that a user can run on their own with all guardrails disabled?

Increase the impact of your deepfake detection technology

In that case, deepfake detection technology will become even more critical.

Does your firm offer deepfake detection technology?

Do you want your prospects to know how your technology benefits them?

Here’s how Bredemarket can help you help your prospects: https://bredemarket.com/cpa/

Expanding My Generative AI Picture Prompts

I’m experimenting with more detailed prompts for generative AI.

If you haven’t noticed, I use a ton of AI-generated images in Bredemarket blog posts and social media posts. They primarily feature wildebeests, wombats, and iguanas, although sometimes they feature other things.

My prompts for these images are usually fairly short, no more than two sentences.

But when I saw some examples of prompts written by Danie Wylie—yes, the same Danie Wylie who wrote the Facebook post earlier this year at https://m.facebook.com/story.php?story_fbid=pfbid0nvmhyuLpn3jwMv8K8sbK5EXfS4kcpjfWHicgj4BJhdFLMme87P5fvPSYf9CwjRH7l&id=100001380243595&mibextid=wwXIfr—I realized that I could include a lot more detail in my own image prompts.

If you read Wylie’s Facebook post, or my own subsequent post at https://bredemarket.com/2025/06/03/when-hivellm-pitches-an-anti-fraud-professional/, then you know exactly what the picture depicts.

Plus some other stuff buried in the details.

By the way, here is my prompt, which Google Gemini (Imagen 4) stored as “Eerie Scene: Sara’s Fake Bills.”

“Draw a realistic picture of a ghost-like woman wearing a t-shirt with the name “Sara.” She is holding out a large stack of dollar bills that is obviously fake because the picture on the bill is a picture of a clown with orange face makeup wearing a blue suit and a red tie. Next to Sara is a dead tree with a beehive hanging from it. Bees buzz around the beehive. A laptop with the word “HiveLLM” on the screen sits on the rocky ground beneath the tree. It is night time, and the full moon casts an eerie glow over the landscape.”

I didn’t get exactly what I wanted—the bills are two-faced—but close enough. And the accident of two-faced bills is a GOOD thing.

How detailed are your picture prompts?

Eerie.

LiveView Technologies and Agentic AI-powered Contextual Detection and Behavioral Deterrence

Government Technology shared an article entitled “Talking Agentic AI Cameras: Can They Prevent Crime?” In the article, Nikki Davidson spoke with Steve Lindsey of LiveView Technologies about the surveillance company’s newest capability:

“The technology analyzes footage to detect activity and determine a best course of action. This can include directly speaking to individuals with personalized, AI-generated voice warnings, without human intervention….

“Lindsey explained the newest update with the technology uses contextual detection as well as generative AI behavioral deterrence. He said the new tech doesn’t just automate tasks; it gives AI agents the ability to make smart decisions based on evolving situations — such as how to react to different scenarios.”

But a video is worth 10,000 words, so watch the video.

Lindsey clarifies that the intent of the agentic technology is to handle low-priority situations (such as trespassing on private property), while leaving high-priority situations in the hands of human security personnel.

I wonder if LiveView Technologies’ object recognition capabilities are able to detect guns as other video analytic programs do.

Don’t Learn to Code 2

(Imagen 4)

As a follow-up to my first post on this topic, look at the Guardian’s summary article, “Will AI wipe out the first rung of the career ladder?”

The Guardian cites several sources:

  • Anthropic states (possibly in self-interest) that unemployment could hit 20% in five years.
  • One quarter of all programming jobs have already vanished in the last two years.
  • A LinkedIn executive echoed the pessimism about the future (while LinkedIn hypes its own AI capabilities to secure the dwindling number of jobs remaining).
  • The Federal Reserve cited high rates of unemployment (5.8%) and underemployment (41.2%) among college graduates.

Read the entire article here.