In addition to becoming a biometric product marketing expert by studying the biometric modalities and non-biometric factors associated with a person…I’ve also studied the identification of non-person entities.
“[On June 18] the U.S. Department of the Treasury’s Financial Crimes Enforcement Network (FinCEN) held the 62nd semi-annual plenary meeting of the Bank Secrecy Act Advisory Group (BSAAG). Deputy Secretary of the Treasury Michael Faulkender delivered remarks at the event laying out guiding principles for BSA modernization.”
“The most eye-catching update is that the Treasury will attempt to “change the AML/CFT [Anti Money Laundering/Combating the Financing of Terrorism] status quo” so the BSA “explicitly permits financial institutions to de-prioritize risks” and direct resources towards higher-risk areas. The Treasury also intends to streamline reporting processes to minimize the SAR [Suspicious Activity Report] and CTR [Currency Transaction Report] burden on organizations.”
It was Sunday afternoon, and I was reading my LinkedIn feed. (Yes, I know; the first step is admitting you have a problem.)
Except that I was seeing stuff that was weeks old. Posts about “upcoming” trade shows that already took place. News about the “upcoming” Prism Project deepfake report that was released long ago.
I don’t know why LinkedIn’s algorithm thinks I need to read ancient history. What’s next…reports that Enron may be a fraud?
The chronological feed
So I decided to bypass the algorithm and access the tried and true chronological feed. You know, the way things used to work before we supposedly got “smart.”
(As an aside, I remember when FriendFeed would AUTOMATICALLY update the chronological feed when new content was posted. The way that the pitchforks were raised, you would have thought the world ended. As it turned out, the world wouldn’t end until August 10, 2009…or April 10, 2015. But I digress.)
Anyway, I went to the feed to look for the switch to swap to chronological…but could find no such switch.
So I checked Google Gemini, and discovered that the “Most Recent” feed switch was buried in the Settings. For mobile LinkedIn users, it was in the “Account preferences” section, in the “Feed preferences.”
Except that it wasn’t.
Whack a Mole
“Feed preferences” only governed display or non-display of political content. The option below “Feed preferences,” “Preferred feed view,” was the one I wanted.
Preferred feed view.
Color me conspiratorial, but I think everyone in the Really Big Bunch—Microsoft (LinkedIn), Meta (Facebook), and the others—likes to play “Whack a Mole” with the location of the chronological feed setting so that we give up and stick with the algorithmic feed of The Things We Are Supposed To See.
So the instructions here, written on June 22, 2025, may be invalid on June 22, 2026. Or July 22, 2025. Or June 23, 2025.
But for this moment I have the chronological feed set on LinkedIn, and since it takes effort to change it back, I don’t know when I will.
Update
When I returned to LinkedIn to share a LinkedIn version of this post, my preferred feed view had been reset to “most relevant.”
I even scheduled a Facebook event. Because Meta wants me to turn every Facebook post into an event, I set one up for Monday at 8 am (Pacific Daylight Time).
Nothing special at the event; I’m not even planning to go live. Just a time to check to see if the video is posted, and to spend 32 seconds watching it.
Are you having trouble finding an asset such as a digital identity or a commercial asset? If you are, there are ways to make things easier to find.
An example from the identity world
Identity Jedi David Lee recently shared his thoughts on “The Hidden Cost of Bad Identity Data (and How to Fix It).” Lee didn’t focus on the biometric data, but instead on the textual data that is associated with a digital identity.
“Let’s say you’re kicking off a new identity program. You know you need user location to drive access policies, governance rules, or onboarding flows. But your authoritative source has location data in five different formats—some say “NY,” others say “New York,” and some list office addresses with zip codes and floor numbers.
“You tell yourself: “We’ll clean it up later.”
“What you’ve really done is commit your future self to a much more expensive project.”
Garbage in, garbage out.
An example from the commerce world
Krassimir Boyanov of KBWEB Consult provides another example of a problem in his post “Why AEM Assets Smart Tagging Makes Your Marketing Work Easier.” Let’s say that you’re managing the images (the “assets”) that display on a company’s website. You have thousands if not millions of images to manage. How do you find a particular image?
One way to do this is to “tag” each image with descriptive information.
But if you do it wrong, there will be problems.
“Tagging is inconsistent. If 10 people are tagging the items, the tags will probably be inconsistent. While one person tags an item as a “car,” another may tag a similar item as an “automobile.” Although the two assets are similar, this is hidden because of inconsistent tag use.”
Again, garbage in, garbage out.
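One common fix for inconsistent tagging is to collapse synonyms to a single preferred tag at ingest time, so “car” and “automobile” no longer split your search results. Here’s a minimal sketch; the synonym table and `normalize_tags` helper are my illustrations, not anything from Boyanov’s post (real digital asset management systems typically use a controlled vocabulary for this):

```python
# Hypothetical sketch: collapse synonym tags to one preferred tag at ingest
# time so "car" and "automobile" don't split search results.
# The synonym table is illustrative only.
SYNONYMS = {
    "automobile": "car",
    "auto": "car",
    "vehicle": "car",
}

def normalize_tags(tags):
    """Map each tag to its preferred form and drop duplicates, preserving order."""
    seen = []
    for tag in tags:
        preferred = SYNONYMS.get(tag.lower(), tag.lower())
        if preferred not in seen:
            seen.append(preferred)
    return seen

print(normalize_tags(["Car", "automobile", "red"]))  # ['car', 'red']
```

Run against a mixed batch of tags, both “Car” and “automobile” come out as the single tag `car`, which is exactly the consistency the ten human taggers couldn’t deliver.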
An organizational solution from the identity world
Lee and Boyanov approach these similar problems from two perspectives.
Lee, as an Identity and Access Management (IAM) expert, approaches this as a business problem and offers the following recommendations (among others):
“Clean early, not late: Push for authoritative sources to normalize and codify the data before it hits the IAM system….
“Push accountability upstream: Don’t accept ownership of fixing problems you don’t control. Instead, elevate the data issue to the right stakeholder (hint: HR, IT, or Legal).”
While Lee can certainly speak to the technologies that can normalize and codify the data, he prefers in this post to concentrate on the organizational issues that cause dirty data, and on how to prevent these issues from recurring.
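The “normalize and codify” step Lee recommends can be sketched in a few lines. Everything below is my illustration, not Lee’s code: the alias table and the `canonicalize` helper simply show how five different representations of a location (“NY,” “New York,” a full office address) can be collapsed to one canonical code before the data ever hits the IAM system:

```python
# Hypothetical sketch: normalize messy location strings to canonical codes
# BEFORE they enter an IAM system. The alias table and canonicalize()
# helper are illustrative, not taken from Lee's post.
import re

CANONICAL = {
    "ny": "NY",
    "new york": "NY",
    "ca": "CA",
    "california": "CA",
}

def canonicalize(raw: str) -> str:
    """Return a canonical state code, or 'UNKNOWN' if no rule matches."""
    text = raw.strip().lower()
    # Direct alias hit ("NY", "New York")
    if text in CANONICAL:
        return CANONICAL[text]
    # Office addresses: look for a known alias as a whole word in the string
    for alias, code in CANONICAL.items():
        if re.search(rf"\b{re.escape(alias)}\b", text):
            return code
    return "UNKNOWN"

for raw in ["NY", "New York", "350 Fifth Ave, New York, NY 10118, Floor 21"]:
    print(raw, "->", canonicalize(raw))
```

The `UNKNOWN` bucket matters: anything that falls through the rules gets flagged for the upstream data owner (HR, IT, or Legal, per Lee) instead of silently polluting the access policies downstream.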
A technological solution from the commerce world
Boyanov can also speak to business and organizational issues as an Adobe Experience Manager consultant who has helped multiple organizations implement the Adobe product. But in this case he concentrates on a technological approach offered by Adobe:
A taxonomy is a system of organizing tags based on shared characteristics, usually structured hierarchically per organizational need. The structure can help you find a tag faster or impose a generalization. Example: there is a need to subcategorize stock imagery of cars. The taxonomy could look like:
Once the taxonomy is defined, assets can be tagged (preferably automatically) in accordance with the hierarchy.
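To make the idea concrete, here is a hypothetical stand-in for such a hierarchy (the category names are mine, not Boyanov’s), expressed as a nested Python structure, with a helper that expands a leaf tag into its full hierarchical path:

```python
# Hypothetical sketch of a hierarchical tag taxonomy for stock car imagery.
# The category names are illustrative, not taken from Boyanov's post.
TAXONOMY = {
    "vehicles": {
        "cars": {
            "sedan": {},
            "suv": {},
            "convertible": {},
        },
        "trucks": {},
    }
}

def tag_path(taxonomy, leaf, trail=()):
    """Return the full path to a tag, e.g. ('vehicles', 'cars', 'sedan')."""
    for name, children in taxonomy.items():
        path = trail + (name,)
        if name == leaf:
            return path
        found = tag_path(children, leaf, path)
        if found:
            return found
    return None

print(tag_path(TAXONOMY, "sedan"))  # ('vehicles', 'cars', 'sedan')
```

The payoff of the hierarchy: an asset tagged `sedan` is automatically findable under the broader `cars` and `vehicles` categories too, which is the “generalization” the taxonomy imposes.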
Presumably David Lee’s identity world can similarly come up with a method to standardize addresses BEFORE they are added to an IAM system.
As deep as any ocean
Whether you’re dealing with a digital identity or a commercial asset, you need to ensure that you can find this asset in the future. This requires planning beforehand.
And a content creation project also requires planning beforehand, such as asking questions before beginning the project.
If you are an identity/biometric or technology firm that requires content creation, or perhaps proposal or analysis services, Bredemarket can help. After all, content creation is science…and art.
“A deepfake is an artificial image or video (a series of images) generated by a special kind of machine learning called “deep” learning (hence the name).”
UVA then launches into a technical explanation.
“Deep learning is a special kind of machine learning that involves “hidden layers.” Typically, deep learning is executed by a special class of algorithm called a neural network….A hidden layer is a series of nodes within the network that performs mathematical transformations to convert input signals to output signals (in the case of deepfakes, to convert real images to really good fake images). The more hidden layers a neural network has, the “deeper” the network is.”
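The “hidden layers” UVA describes can be illustrated in a few lines. This toy forward pass (random weights, purely illustrative, and emphatically not a deepfake generator) just shows an input signal passing through two hidden layers of mathematical transformations, which is what makes the network “deeper” than a single-layer one:

```python
# Toy illustration of "hidden layers": each layer applies a linear transform
# (a matrix multiply) followed by a nonlinearity. Two hidden layers make this
# network "deeper" than one. Weights are random; illustrative only.
import numpy as np

rng = np.random.default_rng(0)

def layer(x, n_out):
    """One layer: linear transform + ReLU nonlinearity."""
    w = rng.standard_normal((x.shape[0], n_out))
    return np.maximum(0, w.T @ x)

x = rng.standard_normal(8)   # "input signal" (e.g., a flattened image patch)
h1 = layer(x, 16)            # hidden layer 1
h2 = layer(h1, 16)           # hidden layer 2 -- the network is now "deeper"
out = layer(h2, 8)           # "output signal"
print(out.shape)             # (8,)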
Why are shallowfakes shallow?
So if you don’t use a multi-level neural network to create your fake, then it is by definition shallow, though you most likely need to use cumbersome manual methods to create it.
To mount an injection attack, no fakery of the image is necessary. You can inject a real image of the person.
To defeat presentation attack detection (liveness detection, whether active or passive), you can dispense with the neural network and just use old-fashioned makeup.
From NIST.
Or a mask.
Imagen 4.
It’s all semantics
In truth, we commonly refer to all face, voice, and finger fakes as “deep” fakes even when they don’t originate in a neural network.
But if someone wants to refer to shallowfakes, it’s OK with me.
Last Friday I shared my beef with the so-called LinkedIn “experts” and their championing of generic pablum.
“The ideal personal communication is this: ‘I am thrilled and excited to announce my CJIS certification!’”
This drivel is rooted in the idea that LinkedIn is a business network…and anything else is just “Facebook.”
Oddly enough, my Bredemarket consulting blog gets much more traffic from Facebook than it does from LinkedIn.
Despite my emphasizing LinkedIn more than Facebook for Bredemarket social media.
And despite the fact that Bredemarket’s LinkedIn pages have many more followers than Bredemarket’s Facebook page and groups.
It appears that Facebook users are more willing to click on links (and leave the walled garden).
Perhaps that’s not “businesslike” on LinkedIn.
Therefore, despite my issues with the Metabot at times, I’m paying more attention to Facebook these days.
And if Facebook users pay more attention to Bredemarket than LinkedIn users…well, I won’t intrude on the LinkedIn users as they perform thrilling and exciting things.
In the distance.
By the way, I probably won’t post an anti-LinkedIn “experts” diatribe on the Bredemarket blog next Friday…