Bredemarket helps identity/biometric firms.
- Finger, face, iris, voice, DNA, ID documents, geolocation, and even knowledge.
- Content-Proposal-Analysis. (Bredemarket’s “CPA.”)
Don’t miss the boat.
Augment your team with Bredemarket.
Identity/biometrics/technology marketing and writing services
I’ve talked ad nauseam about the five factors of identity verification and authentication. In case you’ve forgotten, these factors are:
- Something you know, such as a password or PIN.
- Something you have, such as a phone or an ID document.
- Something you are, such as a fingerprint, face, iris, voice, or DNA.
- Something you do, such as a particular swiping pattern.
- Somewhere you are, established via geolocation.
I’ll leave “something you why” out of the discussion for now, but perhaps I’ll bring it back later.
These five (or six) factors are traditionally used to identify people.
But what happens when the entity you want to identify is not a person? I’ll give two examples:
There’s clearly a need to identify non-person entities. If I work for IBM and have a computer issued by IBM, the internal network needs to know that this is my computer, and not the computer of a North Korean hacker.
But I was curious. Can the five (or six) factors identify non-person entities?
Let’s consider factor applicability, going from the easiest to the hardest.
Those three were easy. Now it gets harder.
Something you know. This one is a conceptual challenge. What does an NPE “know”? For artificial intelligence creations such as Kwebbelkop AI, you can look at the training data used to create and maintain it. For a German musician’s (or an Oregon college student’s) pocket calculator, you can look at the code used in the device, from the little melody itself to the action to take when the user enters a 1, a plus sign, and another 1. But is this knowledge? I lean toward saying yes—I can teach a bot my mother’s maiden name just as easily as I learned it myself. But perhaps some would disagree.
Something you are. For simplicity’s sake, I’ll stick to physical objects here, ranging from pocket calculators to hand-made ceramic plates. The major reason that we like to use “something you are” as a factor is the promise of uniqueness. We believe that fingerprints are unique (well, most of us), and that irises are unique, and that DNA is unique except for identical twins. But is a pocket calculator truly unique, given that the same assembly line manufactures many pocket calculators? Perhaps ceramic plates exhibit uniqueness, perhaps not.
That’s all five factors, right?
Well, let’s look at the sixth one.
You know that I like the “why” question, and some time ago I tried to apply it to identity.
The first example is fundamental from an identity standpoint. It’s taken from real life, because I had never used any credit card in Atlantic City before. However, there was data that indicated that someone with my name (but not my REAL ID; they didn’t exist yet) flew to Atlantic City, so a reasonable person (or identity verification system) could conclude that I might want to eat while I was there.
But can you measure intent for an NPE?
I’m not sure.

Unlike the other rumors over the last few years, this is official.
From IDEMIA:
“IN Groupe and IDEMIA Group have entered into exclusive negotiations regarding the acquisition of IDEMIA Smart Identity, one of the three divisions of IDEMIA Group.”
But discussions are one thing, and government approvals are another. By the way, IN Groupe’s sole shareholder is the French state…
Plus IDEMIA, like Motorola before it, will have to figure out how the, um, bifurcated components will work with each other. After all, IDEMIA Smart Identity is intertwined with the other parts of IDEMIA.
Again, from IDEMIA:
“IDEMIA Smart Identity, a division of IDEMIA Group, is a leader in physical and digital identity solutions. We have fostered longstanding relationships with governments across the globe, based on the shared understanding that a secured legal identity enables citizens to access their fundamental rights in the physical and digital worlds.”
Regardless, this process will take some time.
And what will Advent International eventually do with the other parts of IDEMIA? That will take even more time to figure out.
I should properly open this post by stating any necessary disclosures…but I don’t have any. I know NOTHING about the goings-on reported in this post other than what I read in the papers.

However, I do know the history of Thales and mobile driver’s licenses. Which makes the recent announcements from Florida and Thales even more surprising.
Back when I worked for IDEMIA from 2017 to 2020, many states were performing some level of testing of mobile driver’s licenses. Rather than having to carry a physical driver’s license card, you would be able to carry a virtual one on your phone.
While Louisiana was the first state to release an operational mobile driver’s license (with Envoc’s “LA Wallet”), several states were working on pilot projects.
Some of these states were working with the company Gemalto to create pilots for mobile driver’s licenses. As early as 2016, Gemalto announced its participation in pilot mDL projects in Colorado, Idaho, Maryland, and Washington DC. As I recall, at the time Gemalto had more publicly-known pilots in process than any other vendor, and appeared to be leading the pack in the effort to transition driver’s licenses from the (physical) wallet to the smartphone.
By the time Gemalto was acquired by and absorbed into Thales, the company had won the opportunity to provide an operational (as opposed to pilot) mobile driver’s license. The Florida Smart ID app has been available to both iPhone and Android users since 2021.

This morning I woke up to a slew of articles (such as the LinkedIn post from PEAK IDV’s Steve Craig, and the Biometric Update post from Chris Burt) that indicated the situation had changed.
One of the most important pieces of new information was a revised set of Frequently Asked Questions (or “Question,” or “Statement”) on the “Florida Smart ID” section of the Florida Highway Safety and Motor Vehicles website.
The Florida Smart ID applications will be updated and improved by a new vendor. At this time, the Florida Department of Highway Safety and Motor Vehicles is removing the current Florida Smart ID application from the app store. Please email FloridaSmartID@flhsmv.gov to receive notification of future availability.
Um…that was abrupt.
But a second piece of information, a Thales statement shared by PC Mag, explained the abruptness…in part.
In a statement provided to PCMag, a Thales spokesperson said the company’s contract with the FLHSMV expired on June 30, 2024.
“The project has now entered a new phase in which the FLHSMV requirements have evolved, necessitating a retender,” Thales says. “Thales chose not to compete in this tender. However, we are pleased to have been a part of this pioneering solution and wishes it continued success.”
Now normally when a government project transitions from one vendor to another, the old vendor continues to provide the service until the date that the new vendor’s system is operational. This is true even in contentious cases, such as the North Carolina physical driver’s license transition from IDEMIA to CBN Secure Technologies.
But in the Florida case:
This third point is especially odd. I’ve known of situations where Company A lost a renewal bid to Company B, Company B was unable to deliver the new system on time, and Company A was all too happy to continue to provide service until Company B (or in some cases the government agency itself) got its act together.
Anyway, for whatever reason, those who had Florida mobile driver’s licenses have now lost them, and will presumably have to go through an entirely new process (with an as-yet unknown vendor) to get their mobile driver’s licenses again.
I’m not sure how much more we will learn publicly, and I don’t know how much is being whispered privately. Presumably the new vendor, whoever it is, has some insight, but they’re not talking.
(Part of the biometric product marketing expert series)
Yes, I know the differences between the various factors of authentication.
Let me focus on two of the factors.
There’s a very clear distinction between these two factors of authentication: “something you are” for people, and “something you have” for things.
But what happens when we treat the things as beings?
Who, or what, possesses identity?
I’ve spent a decade working with automatic license plate recognition (ALPR), sometimes known as automatic number plate recognition (ANPR).

Actually more than a decade, since my car’s picture was taken in Montclair, California a couple of decades ago doing something it shouldn’t have been doing. I ended up in traffic school for that one.

Now license plate recognition isn’t that reliable an identifier, since within a minute I can remove a license plate from a vehicle and substitute another one in its place. However, it’s deemed reliable enough that it is used to identify who a car is.
Note my intentional use of the word “who” in the sentence above.
These days, it’s theoretically possible (where legally allowed) to identify the license plate of the car AND identify the face of the person driving the car.
But you still have this strange merger of who and what in which the non-human characteristics of an entity are used to identify the entity.
What you are.
But that’s nothing compared to what’s emerged over the past few years.
When the predecessors to today’s Internet were conceived in the 1960s, they were intended as a way for people to communicate with each other electronically.
And for decades the Internet continued to operate this way.
Until the Internet of Things (IoT) became more and more prominent.

How prominent? The Hacker News explains:
Application programming interfaces (APIs) are the connective tissue behind digital modernization, helping applications and databases exchange data more effectively. The State of API Security in 2024 Report from Imperva, a Thales company, found that the majority of internet traffic (71%) in 2023 was API calls.
Couple this with the increasing use of chatbots and other artificial intelligence bots to generate content, and the result is that when you are communicating with someone on the Internet, there is often no “who.” There’s a “what.”
What you are.
Between the cars and the bots, there’s a lot going on.
There are numerous legal and technical ramifications, but I want to concentrate on the higher meaning of all this. I’ve spent 29 years professionally devoted to the identification of who people are, but this focus on people is undergoing a seismic change.

The science fiction stories of the past, including TV shows such as Knight Rider and its car KITT, are becoming the present as we interact with automobiles, refrigerators, and other things. None of them have true sentience, but it doesn’t matter because they have the power to do things.

In the meantime, the identification industry not only has to identify people, but also identify things.
And it’s becoming more crucial that we do so, and do it accurately.
(Part of the biometric product marketing expert series)

When marketing your facial recognition product (or any product), you need to pay attention to your positioning and messaging. This includes developing the answers to why, how, and what questions. But your positioning and your resulting messaging are deeply influenced by the characteristics of your product.
There are hundreds of facial recognition products on the market that are used for identity verification, authentication, crime solving (but ONLY as an investigative lead), and other purposes.
Some of these solutions ONLY use face as a biometric modality. Others use additional biometric modalities.

Your positioning depends upon whether your solution only uses face, or uses other factors such as voice.
Of course, if you initially only offer a face solution and then offer a second biometric, you’ll have to rewrite all your material. “You know how we said that face is great? Well, face and gait are even greater!”
It’s no secret that I am NOT a fan of the “passwords are dead” movement.

It seems that many of the people who are awaiting the long-delayed death of the password think that biometrics is the magic solution that will completely replace passwords.
For this reason, your company might have decided to use biometrics as your sole factor of identity verification and authentication.
Or perhaps your company took a different approach, and believes that multiple factors—perhaps all five factors—are required to truly verify and/or authenticate an individual. Use some combination of biometrics, secure documents such as driver’s licenses, geolocation, “something you do” such as a particular swiping pattern, and even (horrors!) knowledge-based authentication such as passwords or PINs.
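To make that combination concrete, here is a minimal Python sketch of a multi-factor policy that requires several of the five factors to pass. The factor names follow the five factors discussed in this post, but the k-of-n rule, the function names, and all values are my own illustrative assumptions, not any vendor’s actual API:

```python
# Hypothetical sketch of a multi-factor verification policy.
# The k-of-n rule and all names here are illustrative assumptions,
# not a real vendor API.

FACTORS = [
    "something_you_know",   # password or PIN
    "something_you_have",   # phone or secure document
    "something_you_are",    # biometric such as face or fingerprint
    "something_you_do",     # behavioral pattern such as a swipe
    "somewhere_you_are",    # geolocation
]

def verify(results: dict, required: int = 3) -> bool:
    """Accept the identity claim if at least `required` factors passed."""
    passed = sum(1 for factor in FACTORS if results.get(factor, False))
    return passed >= required

# Example: knowledge and biometric pass, geolocation fails, others unchecked.
print(verify({"something_you_know": True,
              "something_you_are": True,
              "somewhere_you_are": False}, required=2))  # True
```

A single-factor vendor would effectively run this with `required=1` and one entry; a defense-in-depth vendor raises `required`, which is exactly the positioning difference described above.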
This naturally shapes your positioning and messaging.
So position yourself however you need to position yourself. Again, be prepared to change if your single factor solution adopts a second factor.
Every company has its own way of approaching a problem, and your company is no different. As you prepare to market your products, survey your product, your customers, and your prospects and choose the correct positioning (and messaging) for your own circumstances.
And if you need help with biometric positioning and messaging, feel free to contact the biometric product marketing expert, John E. Bredehoft. (Full-time employment opportunities via LinkedIn, consulting opportunities via Bredemarket.)
In the meantime, take care of yourself, and each other.

Do you recall my October 2023 post “LLM vs. LMM (Acronyms Are Fun)”?
It discussed both large language models and large multimodal models. In this case “multimodal” is used in a way that I normally DON’T use it, namely to refer to the different modes in which humans interact (text, images, sounds, videos). Of course, I gravitated to a discussion in which an image of a person’s face was one of the modes.

In this post I will look at LMMs…and I will also look at LMMs. There’s a difference. And a ton of power when LMMs and LMMs work together for the common good.
Since I wrote that piece last year, large multimodal models continue to be discussed. Harry Guinness just wrote a piece for Zapier in March.
When Google announced its Gemini series of AI models, it made a big deal about how they were “natively multimodal.” Instead of having different modules tacked on to give the appearance of multimodality, they were apparently trained from the start to be able to handle text, images, audio, video, and more.
Other AI models are starting to function in a TRULY multimodal way, rather than using separate models to handle the different modes.
So now that we know that LMMs are large multimodal models, we need to…
…um, wait a minute…
It turns out that the health people have a DIFFERENT definition of the acronym LMM. Rather than using it to refer to a large multimodal model, they refer to a large MEDICAL model.
As you can probably guess, the GenHealth.AI model is trained for medical purposes.
Our first of a kind Large Medical Model or LMM for short is a type of machine learning model that is specifically designed for healthcare and medical purposes. It is trained on a large dataset of medical records, claims, and other healthcare information including ICD, CPT, RxNorm, Claim Approvals/Denials, price and cost information, etc.
I don’t think I’m going out on a limb if I state that medical records cannot be classified as “natural” language. So the GenHealth.AI model is trained specifically on those attributes found in medical records, and not on people hemming and hawing and asking what a Pekingese dog looks like.
But there is still more work to do.
Unless I’m missing something, the Large Medical Model described above is designed to work with only one mode of data, textual data.
But what if the Large Medical Model were also a Large Multimodal Model?

A tall order, but imagine how healthcare would be revolutionized if you didn’t have to convert everything into text format to get things done. And if you could use the actual image, video, audio, or other data rather than someone’s textual summation of it.
Obviously you’d need a ton of training data to develop an LMM-LMM that could perform all these tasks. And you’d have to obtain the training data in a way that conforms to privacy requirements: in this case protected health information (PHI) requirements such as HIPAA requirements.
But if someone successfully pulls this off, the benefits are enormous.
You’ve come a long way, baby.

I’ve talked about synthetic identity fraud a lot in the Bredemarket blog over the past several years. I’ll summarize a few examples in this post, talk about how to fight synthetic identity fraud, and wrap up by suggesting how to get the word out about your anti-synthetic identity solution.
But first let’s look at a few examples of synthetic identity.
As far back as December 2020, I discussed Kris Rides’ encounter with a synthetic employee from a company with a number of synthetic employees (many of whom were young females).

More recently, I discussed attempts to create synthetic identities using gummy fingers and fake/fraudulent voices. The topic of deepfakes continues to be hot across all biometric modalities.
I shared a video I created about synthetic identities and their use to create fraudulent financial identities.
I even discussed Kelly Shepherd, the fake vegan mom created by HBO executive Casey Bloys to respond to HBO critics.
And that’s just some of what Bredemarket has written about synthetic identity. You can find the complete list of my synthetic identity posts here.
It isn’t enough to talk about the fact that synthetic identities exist: sometimes for innocent reasons, sometimes for outright fraudulent reasons.
You need to communicate how to fight synthetic identities, especially if your firm offers an anti-fraud solution.

Here are four ways to fight synthetic identities:
If you conduct all four tests, then you have used multiple factors of authentication to confirm that the person is who they say they are. If the identity is synthetic, chances are the purported person will fail at least one of these tests.
If you fight synthetic identity fraud, you should let people know about your solution.
Perhaps you can use Bredemarket, the identity content marketing expert. I work with you (and I have worked with others) to ensure that your content meets your awareness, consideration, and/or conversion goals.
How can I work with you to communicate your firm’s anti-synthetic identity message? For example, I can apply my identity/biometric blog expert knowledge to create an identity blog post for your firm. Blog posts provide an immediate business impact to your firm, and are easy to reshare and repurpose. For B2B needs, LinkedIn articles provide similar benefits.
If Bredemarket can help your firm convey your message about synthetic identity, let’s talk.
(Part of the biometric product marketing expert series)
There are a LOT of biometric companies out there.

With over 100 firms in the biometric industry, offerings are naturally going to differ—even if all the firms are TRYING to copy each other and offer “me too” solutions.

I’ve worked for over a dozen biometric firms as an employee or independent contractor, and I’ve analyzed over 80 biometric firms in competitive intelligence exercises, so I’m well aware of the vast implementation differences between the biometric offerings.
Some of the implementation differences provoke vehement disagreements between biometric firms regarding which choice is correct. Yes, we FIGHT.

Let’s look at three (out of many) of these implementation differences and see how they affect YOUR company’s content marketing efforts—whether you’re engaging in identity blog post writing, or some other content marketing activity.
Firms that develop biometric solutions make (or should make) the following choices when implementing their solutions.
I will address each of these questions in turn, highlighting the pros and cons of each implementation choice. After that, we’ll see how this affects your firm’s content marketing.
Back in June 2023 I defined what a “presentation attack” is.
(I)nstead of capturing a true biometric from a person, the biometric sensor is fooled into capturing a fake biometric: an artificial finger, a face with a mask on it, or a face on a video screen (rather than a face of a live person).
This tomfoolery is called a “presentation attack” (because you’re attacking security with a fake presentation).
Then I talked about standards and testing.
But the standards folks have developed ISO/IEC 30107-3:2023, Information technology — Biometric presentation attack detection — Part 3: Testing and reporting.
And an organization called iBeta is one of the testing facilities authorized to test in accordance with the standard and to determine whether a biometric reader can detect the “liveness” of a biometric sample.
(Friends, I’m not going to get into passive liveness and active liveness. That’s best saved for another day.)
Well…that day is today.

Now I could cite a firm using active liveness detection to say why it’s great, or I could cite a firm using passive liveness detection to say why it’s great. But perhaps the most balanced assessment comes from facia, which offers both types of liveness detection. How does facia define the two types of liveness detection?
Active liveness detection, as the name suggests, requires some sort of activity from the user. If a system is unable to detect liveness, it will ask the user to perform some specific actions such as nodding, blinking or any other facial movement. This allows the system to detect natural movements and separate it from a system trying to mimic a human being….
Passive liveness detection operates discreetly in the background, requiring no explicit action from the user. The system’s artificial intelligence continuously analyses facial movements, depth, texture, and other biometric indicators to detect an individual’s liveness.
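To make the distinction concrete, here is a rough Python sketch of the two styles. The function names, challenge list, and threshold are my own illustrative assumptions, not facia’s (or anyone else’s) actual implementation:

```python
import random

# Illustrative sketch of the two liveness detection styles.
# All names, challenges, and thresholds are hypothetical assumptions.

CHALLENGES = ["blink", "nod", "turn_left", "smile"]

def active_liveness(detect_action) -> bool:
    """Active: issue a random challenge and check the user performed it."""
    challenge = random.choice(CHALLENGES)
    observed = detect_action(challenge)   # e.g., analyzed from the camera feed
    return observed == challenge

def passive_liveness(frame_scores: list, threshold: float = 0.8) -> bool:
    """Passive: score texture/depth/motion cues in the background,
    with no explicit user action required."""
    return sum(frame_scores) / len(frame_scores) >= threshold
```

The sketch shows the tradeoff in miniature: the active path needs user cooperation (friction, but an explicit challenge is hard to replay), while the passive path is frictionless but leans entirely on the quality of its background analysis.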
Briefly, the pros and cons of the two methods are as follows:
So in truth the choice is up to each firm. I’ve worked with firms that used both liveness detection methods, and while I’ve spent most of my time with passive implementations, the active ones can work also.
A perfect wishy-washy statement that will get BOTH sides angry at me. (Except perhaps for companies like facia that use both.)

There are a lot of applications for age assurance, or knowing how old a person is. These include smoking tobacco or marijuana, buying firearms, driving a car, drinking alcohol, gambling, viewing adult content, using social media, or buying garden implements.
If you need to know a person’s age, you can ask them. Because people never lie.
Well, maybe they do. There are two better age assurance methods:
I’ve gone back and forth on this. As I previously mentioned, my employment history includes time with a firm that produces driver’s licenses for the majority of U.S. states. And back when that firm was providing my paycheck, I was financially incentivized to champion age verification based upon the driver’s licenses that my company (or occasionally some inferior company) produced.
But as age assurance applications moved into other areas such as social media use, a problem arose, since 13-year-olds usually don’t have government IDs. A few of them may have passports or other government IDs, but none of them have driver’s licenses.

But does age estimation work? I’m not sure if ANYONE has posted a non-biased view, so I’ll try to do so myself.
How precise is age estimation? We’ll find out soon, once NIST releases the results of its Face Analysis Technology Evaluation (FATE) Age Estimation & Verification test. The release of results is expected in early May.

Fingerprints, palm prints, faces, irises, and everything up to gait. (And behavioral biometrics.) There are a lot of biometric modalities out there, and one that has been around for years is the voice biometric.
I’ve discussed this topic before, and the partial title of the post (“We’ll Survive Voice Spoofing”) gives away how I feel about the matter, but I’ll present both sides of the issue.

No one can deny that voice spoofing exists and is effective, but many of the examples cited by the popular press are cases in which a HUMAN (rather than an ALGORITHM) was fooled by a deepfake voice. But voice recognition software can also be fooled.
(Incidentally, there is a difference between voice recognition and speech recognition. Voice recognition attempts to determine who a person is. Speech recognition attempts to determine what a person says.)
Take a study from the University of Waterloo, summarized here, that proclaims: “Computer scientists at the University of Waterloo have discovered a method of attack that can successfully bypass voice authentication security systems with up to a 99% success rate after only six tries.”
If you re-read that sentence, you will notice that it includes the words “up to.” Those words are significant if you actually read the article.
In a recent test against Amazon Connect’s voice authentication system, they achieved a 10 per cent success rate in one four-second attack, with this rate rising to over 40 per cent in less than thirty seconds. With some of the less sophisticated voice authentication systems they targeted, they achieved a 99 per cent success rate after six attempts.
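One way to see how a per-attempt rate and an “after n tries” rate relate is simple compounding. Assuming, as a rough model (real attempts may not be independent, and this is not a claim about the Waterloo study’s actual methodology), that each attempt succeeds independently with probability p:

```python
# Rough model: probability of at least one success in n independent attempts.
# Purely illustrative; the study's attempts may not be independent.

def cumulative_success(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

# A 10% per-attempt rate over seven 4-second attempts (about 28 seconds)
# already exceeds 40%:
print(round(cumulative_success(0.10, 7), 3))  # 0.522
```

Under this model, reaching a 99% rate after only six tries would require a per-attempt success rate of roughly 54%, which underlines how much weaker the “less sophisticated” targeted systems must have been.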
Similar to Gender Shades, the University of Waterloo study does not appear to have tested hundreds of voice recognition algorithms. But there are other studies.
You’ll note that the top performers don’t have error rates anywhere near the University of Waterloo’s 99 percent.
So some firms will argue that voice recognition can be spoofed and thus cannot be trusted, while other firms will argue that the best voice recognition algorithms are rarely fooled.
Obviously, different firms are going to respond to the three questions above in different ways.
There is no universal truth here, and the message your firm conveys depends upon your firm’s unique characteristics.
And those characteristics can change.
Bear this in mind as you create your blog, white paper, case study, or other identity/biometric content, or have someone like the biometric content marketing expert Bredemarket work with you to create your content. There are people who sincerely hold the opposite belief of your firm…but your firm needs to argue that those people are, um, misinformed.
And as a postscript I’ll provide two videos that feature voices. The first is for those who detected my reference to the ABBA song “Waterloo.”
The second features the late Steve Bridges as President George W. Bush at the White House Correspondents Dinner.
As further proof that I am celebrating, rather than hiding, my “seasoned” experience—and you know what the code word “seasoned” means—I am entitling this blog post “Take Me to the Pilot.”
Although I’m thinking about a different type of “pilot”—a pilot to establish that Login.gov can satisfy Identity Assurance Level 2 (IAL2).
I just mentioned IAL2 in a blog post on Wednesday, with this seemingly throwaway sentence.
So if you think you can use Login.gov to access a porn website, think again.
From https://bredemarket.com/2024/04/10/age-assurance-meets-identity-assurance-level-2/.
The link in that sentence directs the kind reader to a post I wrote in November 2023, detailing the fact that the GSA Inspector General criticized…the GSA…for implying that Login.gov was IAL2-compliant when it was not. The November post references a GSA-authored August blog post which reads in part (in bold):
Login.gov is on a path to providing an IAL2-compliant identity verification service to its customers in a responsible, equitable way.
From https://www.gsa.gov/blog/2023/08/18/reducing-fraud-and-increasing-access-drives-record-adoption-and-usage-of-logingov.
Because it obviously wouldn’t be good to do it in an irresponsible, inequitable way.
But the GSA didn’t say how long that path would be. Would Login.gov be IAL2-compliant by the end of 2023? By mid 2024?
It turns out the answer is neither.
You would think that achieving IAL2 compliance would be a top priority. After all, the longer Login.gov doesn’t comply, the more government agencies will flock to IAL2-compliant ID.me.
Enter Steve Craig of PEAK.IDV and the weekly news summaries that he posts on LinkedIn. Today’s summary includes the following item:
4/ GSA’s Login.gov Pilots Enhanced Identity Verification
From https://www.linkedin.com/posts/stevenbcraig_digitalidentity-aml-compliance-activity-7184539504504930306-LVPF/.
Login.gov’s pilot will allow users to match a live selfie with the photo on a self-supplied form of photo ID, such as a driver’s license
Other interesting updates in the press release 👇
And here’s what GSA’s April 11 press release says.
Specifically, over the next few months, Login.gov will:
Pilot facial matching technology consistent with the National Institute of Standards and Technology’s Digital Identity Guidelines (800-63-3) to achieve evidence-based remote identity verification at the IAL2 level….
Using proven facial matching technology, Login.gov’s pilot will allow users to match a live selfie with the photo on a self-supplied form of photo ID, such as a driver’s license. Login.gov will not allow these images to be used for any purpose other than verifying identity, an approach which reflects Login.gov’s longstanding commitment to ensuring the privacy of its users. This pilot is slated to start in May with a handful of existing agency-partners who have expressed interest, with the pilot expanding to additional partners over the summer. GSA will simultaneously seek an independent third party assessment (Kantara) of IAL2 compliance, which GSA expects will be completed later this year.
From https://www.gsa.gov/about-us/newsroom/news-releases/general-services-administrations-logingov-pilot-04112024#.
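As a sketch of what “match a live selfie with the photo on a self-supplied form of photo ID” means under the hood, here is an illustrative comparison of two face embeddings. The vectors, the cosine measure, and the 0.85 threshold are all my own assumptions for illustration; GSA has not described Login.gov’s internals at this level:

```python
import math

# Illustrative sketch of selfie-to-document face matching. The embedding
# vectors and acceptance threshold are hypothetical assumptions; this is
# not Login.gov's (or any vendor's) actual algorithm.

def cosine_similarity(a, b) -> float:
    """Cosine of the angle between two face-embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def faces_match(selfie_vec, document_vec, threshold: float = 0.85) -> bool:
    """Accept only if the live selfie is close enough to the ID photo."""
    return cosine_similarity(selfie_vec, document_vec) >= threshold

# Identical embeddings match; orthogonal ones do not.
print(faces_match([0.6, 0.8], [0.6, 0.8]))  # True
print(faces_match([1.0, 0.0], [0.0, 1.0]))  # False
```

The evidence-based part of IAL2 is everything around this comparison: validating that the ID document itself is genuine, and (per the liveness discussion elsewhere on this blog) that the selfie comes from a live person rather than a photo of a photo.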
In short, GSA’s April 11 press release about the Login.gov pilot says that it expects to complete IAL2 compliance later this year. So it’s going to take more than a year for the GSA to repair the gap that its Inspector General identified.
Once I saw Steve’s update this morning, I felt it sufficiently important to share the news among Bredemarket’s various social channels.
With a picture.

For those of you who are not as “seasoned” as I am, the picture depicts the B-side of a 1970 vinyl 7″ single (not a compact disc) from Elton John, taken from the album that broke Elton in the United States. (Not literally; that would come a few years later.)
By the way, while the original orchestrated studio version is great, the November 1970 live version with just the Elton John – Dee Murray – Nigel Olsson trio is OUTSTANDING.
Back to Bredemarket social media. If you go to my Instagram post on this topic, I was able to incorporate an audio snippet from “Take Me to the Pilot” (studio version) into the post. (You may have to go to the Instagram post to actually hear the audio.)
Not that the song has anything to do with identity verification using government ID documents paired with facial recognition. Or maybe it does; Elton John doesn’t know what the song means, and even lyricist Bernie Taupin doesn’t know what the song means.
So from now on I’m going to say that “Take Me to the Pilot” documents future efforts toward IAL2 compliance. Although frankly the lyrics sound like they describe a successful iris spoofing attempt.
Through a glass eye, your throne
Is the one danger zone
From https://genius.com/Elton-john-take-me-to-the-pilot-lyrics.
For you young whippersnappers who don’t understand why the opening image mentioned “54 Years On,” this is a reference to another Elton John song.
And it’s no surprise that the live version is better.
Now I’m going to listen to this all day. Cue the Instagram post (if Instagram has access to the 17-11-70/11-17-70 version).