The Grok version of the Bredemarket blog post at https://bredemarket.com/2025/12/10/access-and-somewhat-you-why/
Access and “Somewhat You Why”
In case you missed it, I’ve been pushing a sixth factor of authentication called “Somewhat You Why.”
“As I refined my thinking, I came to the conclusion that “why” is a reasonable factor of authentication, and that this was separate from the other authentication factors (such as “something you do”).”
And now Identity Jedi Harvey Lee is also asking the “why” question, but specifically in terms of access control.
“[B]ecause we couldn’t determine why someone needed access, we built systems that tried to guess the answer for us….
“Roles were never about “least privilege.” Roles were our attempt to predict intent at scale. And like most predictions, especially in complex systems, they were right until they weren’t….
“Instead of front-loading permissions for every possible future scenario, we authorize the current scenario. Identity might still be the new perimeter — but intent is the new access key.”
Read “Intent Is the New Access Key.”
For example, if a dehydrated man wants to unlock a water tank, I have a pretty good idea of his intent.

Continuous Authentication HAS To Be Multi-Factor
If you authenticate a person at the beginning of a session and never authenticate them again, you have a huge security hole.
For example, you may authenticate an adult delivery person and then find a kid illegally making your delivery. 31,000 Brazilians already know how to do this.

That’s why more secure firms practice continuous authentication for high-risk transactions.
But continuous authentication can be intrusive.
How would you feel if you had to press your finger on a fingerprint reader every six seconds?
Enough of that and you’ll start using the middle finger to authenticate.
Even face authentication is intrusive, if it’s 3 am and you don’t feel like being on camera.
Now I’ve already said that Amazon doesn’t want to over-authenticate everything.
But Amazon does want to authenticate the critical transactions. Identity Week:
“Amazon treats authentication as a continuous process, not a one-time event. It starts with verifying who a user is at login, but risk is assessed throughout the entire session, watching for unusual behaviours or signals to ensure ongoing confidence in the user’s identity.”
That’s right: Amazon uses “somewhat you why” as an authentication factor.
I say they’re smart.
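The pattern Identity Week describes, assessing risk throughout the session rather than only at login, can be sketched in a few lines. The signal names, weights, and threshold below are my own illustrative assumptions, not Amazon's actual model:

```python
# Sketch of risk-based continuous authentication: a session accumulates
# risk signals, and crossing a threshold forces re-authentication.
# Signal names, weights, and the threshold are illustrative assumptions.

RISK_WEIGHTS = {
    "new_device": 0.3,
    "impossible_travel": 0.5,
    "unusual_purchase_amount": 0.2,
    "typing_cadence_mismatch": 0.25,
}

STEP_UP_THRESHOLD = 0.5  # arbitrary cutoff for this sketch

def session_risk(signals):
    """Sum the weights of the risk signals observed so far this session."""
    return sum(RISK_WEIGHTS.get(s, 0.0) for s in signals)

def needs_step_up(signals):
    """True when accumulated risk warrants re-authenticating the user."""
    return session_risk(signals) >= STEP_UP_THRESHOLD

# A mildly unusual session sails through; anomalies trigger a challenge.
print(needs_step_up(["unusual_purchase_amount"]))          # False
print(needs_step_up(["new_device", "impossible_travel"]))  # True
```

The point of the sketch: the user is challenged only when the signals justify it, so nobody presses a fingerprint reader every six seconds.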
“Somewhat You Why” and Geolocation Stalkerware
Geolocation and “somewhat you why” (my proposed sixth factor of identity verification and authentication) can not only be used to identify and authenticate people.
They can also be used to learn things about people already authenticated, via the objects they might have in their possession.
Stalkerware
404 Media recently wrote an article about “stalkerware” geolocation tools that vendors claim can secretly determine if your partner is cheating on you.
Before you get excited about them, 404 Media reveals that many of these tools are NOT secret.
“Immediately notifies anyone traveling with it.” (From a review)
Three use cases for geolocation tracking
But let’s get back to the tool, and the intent. Because I maintain that intent makes all the difference. Look at these three use cases for geolocation tracking of objects:
- Tracking an iPhone (held by a person). Many years ago, an iPhone user had to take a long walk from one location to another after dark. This iPhone user asked me to track their whereabouts while on that walk. Both of us consented to the arrangement.
- Tracking luggage. Recently, passengers have placed AirTags in their luggage before boarding a flight. This lets the passengers know where their luggage is at any given time. But some airlines were not fans of the practice:
“Lufthansa created all sorts of unnecessary confusion after it initially banned AirTags out of concern that they are powered by a lithium battery and could emit radio signals and potentially interfere with aircraft navigation.
“The FAA put an end to those baseless concerns saying, “Luggage tracking devices powered by lithium metal cells that have 0.3 grams or less of lithium can be used on checked baggage”. The Apple AirTag battery is a third of that size and poses no risk to aircraft operation.”
- Tracking an automobile. And then there’s the third case, raised by the 404 Media article. 404 Media found countless TikTok advertisements for geolocation trackers with pitches such as “men with cheating wives, you might wanna get one of these.” As mentioned above, the trackers claim to be undetectable, which reinforces the fact that the person whose car is being tracked did NOT consent.
From consent to stalkerware, and the privacy implications
Geolocation technologies are used in all three instances. But in the first case it's perfectly acceptable, because both parties consented, while it's less acceptable in the other two cases, where at least one affected party did not consent.
Banning geolocation tracking technology would be heavy-handed since it would prevent legitimate, consent-based uses of the technology.
So how do we set up the business and technical solutions that ensure that any tracking is authorized by all parties?
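One technical piece of the answer: a tracker that reports location only while every affected party's consent is on record. This is a minimal sketch; the party roles and data model are illustrative assumptions, not a real product's design:

```python
# Sketch: location is reported only while ALL affected parties have
# consented. Party roles and the data model are illustrative assumptions.

class TrackedObject:
    def __init__(self, name, required_parties):
        self.name = name
        self.required_parties = set(required_parties)
        self.consents = set()

    def record_consent(self, party):
        self.consents.add(party)

    def may_report_location(self):
        # Authorized only when the required parties are a subset of consenters.
        return self.required_parties <= self.consents

luggage = TrackedObject("suitcase", {"owner"})
luggage.record_consent("owner")
print(luggage.may_report_location())  # True

car = TrackedObject("spouse_car", {"owner", "driver"})
car.record_consent("owner")           # the driver never consented
print(car.may_report_location())      # False
```

The AirTag and iPhone cases pass this test; the stalkerware case, by design, cannot.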
Does your firm offer a solution that promotes privacy? Do you need Bredemarket’s help to tell prospects about your solution? Contact me.
How Many Authentication Factor Types Are There?
(Imagen 4)
An authentication factor is a discrete method of authenticating yourself. Each factor is a distinct category.
For example, authenticating with fingerprint biometrics and authenticating with facial image biometrics are both the same factor type, because they both involve “something you are.”
But how many factors are there?
Three factors of authentication
There are some people who argue that there are only really three authentication factors:
- Something you know, such as a password, or a personal identification number (PIN), or your mother’s maiden name.
- Something you have, such as a driver’s license, passport, or hardware or software token.
- Something you are, such as the aforementioned fingerprint and facial image, plus others such as iris, voice, vein, DNA, and behavioral biometrics such as gait.
Five factors of authentication, not three
I argue that there are more than three.
- Something you do, such as super-secret swiping patterns to unlock a device.
- Somewhere you are, or geolocation.
For some of us, these are the five standard authentication factors. And they can also function for identity verification.
Six factors of authentication, not five
But I’ve postulated that there is one more.
- Somewhat you why, or a measure of intent and reasonableness.
For example, take a person with a particular password, ID card, biometric, action, and geolocation (the five factors). Sometimes this person may deserve access, sometimes they may not.
- The person may deserve access if they are an employee and arrive at the location during working hours.
- That same person may deserve access if they were fired and are returning a company computer. (But wouldn’t their ID card and biometric access have already been revoked if they were fired? Sometimes…sometimes not.)
- That same person may NOT deserve access if they were fired and they’re heading straight for their former boss’ personal HR file.
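The fired-employee scenario above can be sketched as an access policy that checks the five standard factors and then applies a "somewhat you why" intent test on top. The field names and the intent allowlist are hypothetical, invented for illustration:

```python
from dataclasses import dataclass

# Hypothetical sketch: an access decision over the five standard factors,
# plus a "somewhat you why" intent test. Field names and the allowed-intent
# set are illustrative assumptions.

@dataclass
class AccessRequest:
    password_ok: bool    # something you know
    token_ok: bool       # something you have
    biometric_ok: bool   # something you are
    behavior_ok: bool    # something you do
    location_ok: bool    # somewhere you are
    stated_intent: str   # somewhat you why

REASONABLE_INTENTS = {"start_shift", "return_equipment"}

def grant_access(req: AccessRequest) -> bool:
    factors_ok = all([req.password_ok, req.token_ok, req.biometric_ok,
                      req.behavior_ok, req.location_ok])
    # Even with all five factors verified, unreasonable intent denies access.
    return factors_ok and req.stated_intent in REASONABLE_INTENTS

employee = AccessRequest(True, True, True, True, True, "start_shift")
fired = AccessRequest(True, True, True, True, True, "read_boss_hr_file")
print(grant_access(employee))  # True
print(grant_access(fired))     # False
```

Both requests present identical credentials; only the intent differs, and only the intent changes the outcome.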
Or maybe just five factors of authentication
Now not everyone agrees that this sixth factor of authentication is truly a factor. In fact, "not everyone" may mean no one; I may be the only person blabbering about it.
So while I still work on evangelizing the sixth factor, use the partially accepted notion that there are five factors.
Six Factors of Biometric Fame
The sixth factor of authentication?
Hey, I’m famous for something.
But if you want Bredemarket to write about your identity/biometric firm’s FIVE (not four) factors of authentication, contact me.
“Somewhat You Why” in Minnesota
Remember my earlier post “‘Somewhat You Why,’ and Whether Deepfakes are Evil or Good or Both”?
When I posted it, I said:
I debated whether or not I should publish this because it touches upon two controversial topics: U.S. politics, and my proposed sixth factor of authentication.
I eventually decided to share it on the Bredemarket blog but NOT link to it or quote it on my socials.
Well, I’m having the same debate with this post, which is ironic because I learned about the content via the socials. Not that I will identify the source, because it is from someone’s personal Facebook feed.

My earlier post analyzed my assumption that deepfakes are bad. It covered the end of National Science Foundation funding for deepfake research, apparently because deepfakes can be used as a form of First Amendment free speech.
Well, the same issue is appearing at the state level, according to the AP:
X Corp., the social media platform owned by Trump adviser Elon Musk, is challenging the constitutionality of a Minnesota ban on using deepfakes to influence elections and harm candidates, saying it violates First Amendment speech protections.
As I previously noted, this does NOT mean that X believes in a Constitutional right to financially defraud people.
- But do I have a Constitutional right to dummy up a driver’s license for X identity verification?
- Or do I have a Constitutional right to practice my freedom of religion by creating my own biometric-free voter identification card like John Wahl did?
Again, is it all about intent? Somewhat you why?
And if your firm provides facial recognition, how do you address such issues?
If you need help with your facial recognition product marketing, Bredemarket has an opening for a facial recognition client. I can offer
- compelling content creation
- winning proposal development
- actionable analysis
If Bredemarket can help your stretched staff, book a free meeting with me: https://bredemarket.com/cpa/
(Lincoln’s laptop from Imagen 3)
“Somewhat You Why,” and Whether Deepfakes are Evil or Good or Both
I debated whether or not I should publish this because it touches upon two controversial topics: U.S. politics, and my proposed sixth factor of authentication.
I eventually decided to share it on the Bredemarket blog but NOT link to it or quote it on my socials.
Although I could change my mind later.
Are deepfakes bad?
When I first discussed deepfakes in June 2023, I detailed two deepfake applications.
One deepfake was an audio-video creation purportedly showing Richard Nixon paying homage to the Apollo 11 astronauts who were stranded on the surface of the moon.
- Of course, no Apollo 11 astronauts were ever stranded on the surface of the moon; Neil Armstrong and Buzz Aldrin returned to Earth safely.
- So Nixon never had to pay homage to them, although William Safire wrote a speech as a contingency.
- This deepfake is not in itself bad, unless it is taught in a history course as true history about “the failure of the U.S. moon program.” (The Apollo program had a fatal catastrophe, but not involving Apollo 11.)
The other deepfake was more sinister.
In early 2020, a branch manager of a Japanese company in Hong Kong received a call from a man whose voice he recognized—the director of his parent business. The director had good news: the company was about to make an acquisition, so he needed to authorize some transfers to the tune of $35 million….The manager, believing everything appeared legitimate, began making the transfers.
Except that the director wasn’t the director, and the company had just been swindled to the tune of $35 million.
I think everyone knows now that deepfakes can be used for bad things. So we establish standards to determine “content provenance and authenticity,” which is a fancy way to say whether content is real or a deepfake.
In addition to establishing standards, we do a lot of research to counter deepfakes, because they are bad.
Or are they?
What the National Science Foundation won’t do
Multiple sources, including both Nextgov and Biometric Update, are reporting on the cancellation of approximately 430 grants from the National Science Foundation. Among these grants are ones for deepfake research.
Around 430 federally-funded research grants covering topics like deepfake detection, artificial intelligence advancement and the empowerment of marginalized groups in scientific fields were among several projects terminated in recent days following a major realignment in research priorities at the National Science Foundation.
As you can probably guess, the cancellation of these grants is driven by the Trump Administration and the so-called Department of Government Efficiency (DOGE).
Why?
Because freedom:
Per the Presidential Action announced January 20, 2025, NSF will not prioritize research proposals that engage in or facilitate any conduct that would unconstitutionally abridge the free speech of any American citizen. NSF will not support research with the goal of combating “misinformation,” “disinformation,” and “malinformation” that could be used to infringe on the constitutionally protected speech rights of American citizens across the United States in a manner that advances a preferred narrative about significant matters of public debate.
The NSF argues that a person’s First Amendment rights permit them, I mean permit him, to share content without having the government prevent its dissemination by tagging it as misinformation, disinformation, or malinformation.
And it’s not the responsibility of the U.S. Government to research creation of so-called misinformation content. Hence the end of funding for deepfake research.
So deepfakes are good because they’re protected by the First Amendment.
But wait a minute…
Just because the U.S. Government doesn’t like it when patriotic citizens are censored from distributing deepfake videos for political purposes, that doesn’t necessarily mean that the U.S. Government objects to ALL deepfakes.
For example, let's say that a Palm Beach, Florida golf course receives a video message from Tiger Woods reserving a tee time and paying a lot of money to reserve it. The golf course holds the tee time, turning away other golfers, and waits for Tiger's wire transfer to clear. After the fact, the golf course discovers that (a) the money was wired from a non-existent account, and (b) the person making the video call was not Tiger Woods, but a faked version of him.
I don’t think anyone in the U.S. Government or DOGE thinks that ripping off a Palm Beach, Florida golf course is a legitimate use of First Amendment free speech rights.
So deepfakes are bad because they lead to banking fraud and other forms of fraud.
This is not unique to deepfakes, but is also true of many other technologies. Nuclear technology can provide energy to homes, or it can kill people. Facial recognition (of real people) can find missing and abducted persons, or it can send Chinese Muslims to re-education camps.
Let’s go back to factors of authentication and liveness detection
Now let’s say that Tiger Woods’ face shows up on YOUR screen. You can use liveness detection and other technologies to determine whether it is truly Tiger Woods, and take action accordingly.
- If the interaction with Woods is trivial, you may NOT want to spend time and resources to perform a robust authentication.
- If the interaction with Woods is critical, you WILL want to perform a robust authentication.
It all boils down to something that I’ve previously called “somewhat you why.”
Why is Tiger Woods speaking?
- If Tiger Woods is performing First Amendment-protected activity such as political talk, then “somewhat you why” asserts that whether this is REALLY Woods or not doesn’t matter.
- If Tiger Woods is making a financial transaction with a Palm Beach, Florida golf course, then “somewhat you why” asserts that you MUST determine if this is really Woods.
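The two bullets above amount to scaling the authentication effort to the stakes of the interaction. Here is a minimal sketch of that idea; the tiers, dollar thresholds, and check names are illustrative assumptions, not any vendor's actual policy:

```python
# Sketch: scale authentication effort to the stakes, per "somewhat you why."
# Tiers, thresholds, and check names are illustrative assumptions.

CHECKS_BY_TIER = {
    "trivial":  [],  # political talk: it doesn't matter if it's really Woods
    "routine":  ["passive_liveness"],
    "critical": ["active_liveness", "document_match", "human_review"],
}

def required_checks(transaction_value_usd: float):
    """Map the value at risk to a tier, then to the checks that tier demands."""
    if transaction_value_usd == 0:
        tier = "trivial"
    elif transaction_value_usd < 10_000:
        tier = "routine"
    else:
        tier = "critical"
    return CHECKS_BY_TIER[tier]

print(required_checks(0))       # []
print(required_checks(50_000))  # ['active_liveness', 'document_match', 'human_review']
```

A zero-stakes interaction gets no robust authentication at all; the golf-course-sized transaction gets the full battery.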
It’s simple…right?
What about your deepfake solution?
Regardless of federal funding, companies are still going to offer deepfake detection products. Perhaps yours is one of them.
How will you market that product?
Do you have the resources to market your product, or are your resources already stretched thin?
If you need help with your facial recognition product marketing, Bredemarket has an opening for a facial recognition client. I can offer
- compelling content creation
- winning proposal development
- actionable analysis
If Bredemarket can help your stretched staff, book a free meeting with me: https://bredemarket.com/cpa/
(Lincoln’s laptop from Imagen 3)
You Can’t Prove that an International Mobile Equipment Identity (IMEI) Number is Unique
I’m admittedly fascinated by the parallels between people and non-person entities (NPEs), to the point where I asked at one point whether NPEs can use the factors of authentication. (All six. Long story.)
When I got to the “something you are” factor, which corresponds to biometrics in humans, here is what I wrote:
Something you are. For simplicity’s sake, I’ll stick to physical objects here, ranging from pocket calculators to hand-made ceramic plates. The major reason that we like to use “something you are” as a factor is the promise of uniqueness. We believe that fingerprints are unique (well, most of us), and that irises are unique, and that DNA is unique except for identical twins. But is a pocket calculator truly unique, given that the same assembly line manufactures many pocket calculators? Perhaps ceramic plates exhibit uniqueness, perhaps not.
But I missed one thing in that discussion, so I wanted to revisit it.
Understanding IMEI Numbers
Now this doesn’t apply to ceramic plates or pocket calculators, but there are some NPEs that assert uniqueness.
Our smartphones, each of which has an International Mobile Equipment Identity (IMEI) number.
Let’s start off with the high level explanation.
IMEI stands for International Mobile Equipment Identity. It's a unique identifier for mobile devices, much like a fingerprint for your phone.
Now some of you who are familiar with biometrics are saying, “Hold it right there.”
- Have we ever PROVEN that fingerprints are unique?
- And I’m not just talking about Columbia undergrads here.
- Can someone assert that there have NEVER been two people with the same fingerprint in all of human history?
But let’s stick to phones, Johnny.
Each IMEI number is a 15-digit code that’s assigned to every mobile phone during its production. This number helps in uniquely identifying a device regardless of the SIM card used.
This is an important point here. Even Americans understand that SIM cards are transient and can move from one phone to another, and therefore are not valid to uniquely identify phones.
What about IMEIs?
Are IMEIs unique?
I won’t go into the specifics of the 15-digit IMEI number format, which you can read about here. Suffice it to say that the format dictates that the number incorporate the make and model, a serial number, and a check digit.
- Therefore smartphones with different makes and models cannot have the same IMEI number by definition.
- And even within the make and model, by definition no two phones can have the same serial number.
Why not? Because everyone says so.
It’s even part of the law.
Changing an IMEI number is illegal in many countries due to the potential misuse, such as using a stolen phone. Tampering with the IMEI can lead to severe legal consequences, including fines and imprisonment. This regulation helps in maintaining the integrity of mobile device tracking and discourages the theft and illegal resale of devices.
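For the record, the check digit mentioned above is a standard Luhn digit computed over the first 14 digits. A minimal sketch, using the well-known sample IMEI body 49015420323751 (whose check digit works out to 8):

```python
def imei_check_digit(first14: str) -> int:
    """Compute the Luhn check digit for a 14-digit IMEI body."""
    total = 0
    for i, ch in enumerate(first14):
        d = int(ch)
        if i % 2 == 1:   # double every second digit from the left
            d *= 2
            if d > 9:
                d -= 9   # equivalent to summing the two digits
        total += d
    return (10 - total % 10) % 10

def imei_is_valid(imei: str) -> bool:
    """A 15-digit IMEI is well-formed when its last digit is the Luhn check digit."""
    return (len(imei) == 15 and imei.isdigit()
            and int(imei[-1]) == imei_check_digit(imei[:14]))

print(imei_is_valid("490154203237518"))  # True
```

Note what this does and doesn't prove: the check digit catches typos, but it says nothing about uniqueness. Anyone who can write 14 digits can compute a valid fifteenth, which is exactly why the cloning stories below are possible.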
IMEIs in India
To all of the evidence above about the uniqueness of IMEI numbers, I only have two words:
So what?
A dedicated person can create or modify multiple smartphones to have the exact same IMEI number if desired. Here’s a recent example:
The Indore Police Crime Branch has dismantled two major digital arrest fraud rackets operating in different parts of the country, seizing a massive database containing private details of 20,000 pensioners in Indore….
A dark room in the flat functioned as the nerve centre of the cyber fraud operation, which had been active since 2019. The group specialised in IMEI cloning and used thousands of SIM cards from select mobile networks.
IMEIs in Canada
“Oh, but that’s India,” you say. “That couldn’t happen in a First World country.”
A Calgary senior is warning others after he was scammed out of $1,000 after buying what he thought was a new iPhone 15 Pro Max.
“I didn’t have any doubt that it was real,” Boyd told Global News….
The seller even provided him with the “original” receipt showing the phone had been purchased down east back in October 2023. Boyd said he also checked the phone’s serial number and the International Mobile Equipment Identity (IMEI). All checked out fine.
Boyd said the first sign of a problem was when he tried to update the phone with his own information and it wouldn’t update. It was only after he took it to a representative at a local Apple retailer, that he realized he had been duped.
IMEIs in general
Even IMEICheck.net, which notes that the threat of stealing one’s phone information is overrated, admits that it is possible (albeit difficult) to clone an IMEI number.
In theory, hackers can clone a phone using its IMEI, but this requires significant effort. They need physical access to the device or SIM card to extract data, typically using specialized tools.
The cloning process involves copying the IMEI and other credentials necessary to create a functional duplicate of the phone. However, IMEI number security features in modern devices are designed to prevent unauthorized cloning. Even if cloning is successful, hackers cannot access personal data such as apps, messages, photos, or passwords. Cloning usually only affects network-related functions, such as making calls or sending messages from the cloned device.
Again, NOTHING provides 100.00000% security. Not even an IMEI number.
What this means for IMEI uniqueness claims
So if you are claiming uniqueness of your smartphone’s IMEI, be aware that there are proven examples to the contrary.
Perhaps the shortcomings of IMEI uniqueness don’t matter in your case, and using IMEIs for individualization is “good enough.”
But I wouldn’t discuss war plans on such a device.
(Imagen 3 image. Oddly enough, Google Gemini was unable, or unwilling, to generate an image of three smartphones displaying the exact same 15-digit string of numbers, or even a 2-digit string. I guess Google thought I was a fraudster.)
Oh, and since I mentioned pocket calculators…excuse me, calcolatrici tascabili…
KYV: Know Your (Healthcare) Visitor
Who is accessing healthcare assets and data?
Healthcare identity verification and authentication is often substandard, as I noted in a prior Bredemarket blog post entitled “Medical Fraudsters: Birthday Party People.” In too many cases, all you need to know is a patient’s name and birthdate to obtain fraudulent access to the patient’s protected health information (PHI).
But healthcare providers need to identify more than just patients. Providers need to identify their own workers, as well as other healthcare workers.
Know Your Visitor
Healthcare providers also need to identify visitors. When a patient is in a hospital, a rehabilitation facility, or a similar place, loved ones often desire to visit them. (So do hated ones, but we won’t go there now.)
I was recently visiting a loved one in a facility that required identification of visitors. The usual identification method was to present a driver’s license at the desk. The staffer would then print out a paper badge showing the visitor’s name and the validity date.
Like this…

So John “Bederhoft” (sic) enjoyed access that day. Whoops.
Oh, and I could have handed my badge to someone else after a shift change, and no one would have been the wiser.
Let’s apply “somewhat you why”
There’s a more critical question: WHY was John “Bederhoft” visiting (REDACTED PHI)? Was I a relative? A friend? A bill collector?
My proposed sixth factor of identity verification/authentication, “somewhat you why,” would genuinely help here.
Somewhat you why “applies a test of intent or reasonableness to any identification request.”
Maybe I should have said “and” instead of “or.”
- Visiting a relative shows intent AND reasonableness.
- Visiting a debtor shows intent but (IMHO) does NOT show reasonableness.
Do you need to analyze healthcare identity issues for your healthcare product or service? Or create go-to-market content for the same? Or proposals?
Contact me at Bredemarket’s “CPA” page.
