Do All 5 Identity Factors Apply to Non-Human Identities?

I’ve talked ad nauseam about the five factors of identity verification and authentication. In case you’ve forgotten, these factors are:

  • Something you know.
  • Something you have.
  • Something you are.
  • Something you do.
  • Somewhere you are.

I’ll leave “somewhat you why” out of the discussion for now, but perhaps I’ll bring it back later.

These five (or six) factors are traditionally used to identify people.

Identifying “Non-Person Entities”

But what happens when the entity you want to identify is not a person? I’ll give two examples:

Kwebbelkop AI? https://www.youtube.com/watch?v=3l4KCbTyXQ4.
  • Kwebbelkop AI, discussed in “Human Cloning Via Artificial Intelligence: It’s Starting,” is not a human. But is there a way to distinguish the “real” Kwebbelkop AI from a “fake” one?
  • In “On Attribute-Based Access Control,” I noted that NIST defined a subject as “a human user or NPE (Non-Person Entity), such as a device that issues access requests to perform operations on objects.” Again, there’s a need to determine that the NPE has the right attributes, and is not a fake, deep or shallow.

There’s clearly a need to identify non-person entities. If I work for IBM and have a computer issued by IBM, the internal network needs to know that this is my computer, and not the computer of a North Korean hacker.
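To make the “is this my IBM-issued computer?” question concrete, here is a minimal sketch of how an internal network might verify a non-person entity by checking its presented attributes against a registry of issued devices. The registry, attribute names, and certificate fingerprints below are all invented for illustration; this is not any vendor’s actual mechanism.

```python
# Hypothetical sketch: verifying a non-person entity (an issued laptop)
# by checking its presented attributes against a corporate device registry.
# The registry, attribute names, and fingerprints are all invented.

from dataclasses import dataclass

@dataclass
class IssuedDevice:
    serial_number: str
    cert_fingerprint: str   # "something you have": a hardware-bound certificate
    assigned_user: str      # the employee the device was issued to

# What an IT department might record at issuance time.
DEVICE_REGISTRY = {
    "PF3XYZ12": IssuedDevice("PF3XYZ12", "ab:cd:ef:01:23:45", "jbredehoft"),
}

def is_trusted_device(serial: str, fingerprint: str, user: str) -> bool:
    """True only if the device is on file, was issued to this user,
    and presents the certificate recorded at issuance."""
    record = DEVICE_REGISTRY.get(serial)
    if record is None:
        return False  # not an issued device at all
    return record.cert_fingerprint == fingerprint and record.assigned_user == user

# My laptop passes; a spoofed serial with the wrong certificate does not.
print(is_trusted_device("PF3XYZ12", "ab:cd:ef:01:23:45", "jbredehoft"))  # True
print(is_trusted_device("PF3XYZ12", "99:99:99:99:99:99", "jbredehoft"))  # False
```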

But I was curious. Can the five (or six) factors identify non-person entities?

Let’s consider factor applicability, going from the easiest to the hardest.

The easy factors

  • Somewhere you are. Not only is this extremely applicable to non-person entities, but in truth this factor doesn’t identify persons, but non-person entities. Think about it: a standard geolocation application doesn’t identify where YOU are. It identifies where YOUR SMARTPHONE is. Unless you have a chip implant, there is nothing on your body that can identify your location. So obviously “somewhere you are” applies to NPEs.
  • Something you have. Another no-brainer. If a person has “something,” that something is by definition an NPE. So “something you have” applies to NPEs.
  • Something you do. NPEs can do things. My favorite example is Kraftwerk’s pocket calculator. You will recall that “by pressing down this special key it plays a little melody.” I actually had a Casio pocket calculator that did exactly that, playing a tune that is associated with Casio. Later, Brian Eno composed a startup sound for Windows 95. So “something you do” applies to NPEs, as shown in the sketch after the videos below. (Although I’m forced to admit that an illegal clone computer and operating system could reproduce the Eno sound.)
Something you do, 1980s version. Advance to 1:49 to hear the little melody. https://www.youtube.com/watch?v=6ozWOe9WEU8.
Something you do, 1990s version. https://www.youtube.com/watch?v=miZHa7ZC6Z0.
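Those three factors lend themselves to straightforward programmatic checks. Here’s a minimal sketch, with entirely made-up thresholds, tokens, and melody notes, of what testing an NPE against “somewhere you are,” “something you have,” and “something you do” might look like.

```python
# Minimal sketch (illustrative only) of checking an NPE against the three
# "easy" factors. The geofence, token, and melody fingerprint are made up.

import math

def within_geofence(lat: float, lon: float,
                    center=(34.06, -117.75), radius_km: float = 5.0) -> bool:
    """'Somewhere you are': is the device reporting a location near where
    we expect it to be? (Crude flat-earth distance, good enough here.)"""
    dlat = math.radians(lat - center[0])
    dlon = math.radians(lon - center[1]) * math.cos(math.radians(center[0]))
    return 6371 * math.hypot(dlat, dlon) <= radius_km

def has_valid_token(presented: str) -> bool:
    """'Something you have': does the device hold a token we issued?"""
    return presented in {"TOKEN-42"}

def plays_expected_melody(notes) -> bool:
    """'Something you do': does the device behave as expected, like the
    pocket calculator's little melody? (These notes are invented.)"""
    return tuple(notes) == ("E", "F#", "G#")

checks = [
    within_geofence(34.05, -117.76),           # close to the expected spot
    has_valid_token("TOKEN-42"),               # the issued token
    plays_expected_melody(["E", "F#", "G#"]),  # the expected behavior
]
print("NPE passes the easy factors:", all(checks))
```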

Those three were easy. Now it gets harder.

The hard factors

Something you know. This one is a conceptual challenge. What does an NPE “know”? For artificial intelligence creations such as Kwebbelkop AI, you can look at the training data used to create it and maintain it. For a German musician’s (or an Oregon college student’s) pocket calculator, you can look at the code used in the device, from the little melody itself to the action to take when the user enters a 1, a plus sign, and another 1. But is this knowledge? I lean toward saying yes—I can teach a bot my mother’s maiden name just as easily as I can teach it to myself. But perhaps some would disagree.

Something you are. For simplicity’s sake, I’ll stick to physical objects here, ranging from pocket calculators to hand-made ceramic plates. The major reason that we like to use “something you are” as a factor is the promise of uniqueness. We believe that fingerprints are unique (well, most of us), and that irises are unique, and that DNA is unique except for identical twins. But is a pocket calculator truly unique, given that the same assembly line manufactures many pocket calculators? Perhaps ceramic plates exhibit uniqueness, perhaps not.

That’s all five factors, right?

Well, let’s look at the sixth one.

Somewhat you why

You know that I like the “why” question, and some time ago I tried to apply it to identity.

  • Why is a person using a credit card at a McDonald’s in Atlantic City? (Link) In other words, was the credit card stolen, or was it being used legitimately?
  • Why is a person boarding a bus? (Link) In other words, was the bus pass stolen, or was it being used legitimately?
  • Why is a person standing outside a corporate office with a laptop and monitor? (Link) In other words, is there a legitimate reason for an ex-employee to gain access to the corporate office?

The first example is fundamental from an identity standpoint. It’s taken from real life, because I had never used any credit card in Atlantic City before. However, there was data that indicated that someone with my name (but not my REAL ID; REAL IDs didn’t exist yet) flew to Atlantic City, so a reasonable person (or identity verification system) could conclude that I might want to eat while I was there.
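For what it’s worth, here is a rough sketch of the kind of rule a risk engine might apply to that Atlantic City transaction. The signals and weights are invented; the point is only that corroborating data (a flight under the same name) can lower the risk of an otherwise anomalous purchase.

```python
# Illustrative only: a toy risk rule for the Atlantic City card swipe.
# Signal names and weights are invented for this sketch.

def transaction_risk(card_used_in_city_before: bool,
                     travel_record_to_city: bool,
                     amount_usd: float) -> float:
    """Return a 0.0-1.0 risk score for a card-present transaction."""
    risk = 0.0
    if not card_used_in_city_before:
        risk += 0.6   # a brand-new city for this card is suspicious on its own
    if travel_record_to_city:
        risk -= 0.4   # but a flight under the same name explains the trip
    if amount_usd > 500:
        risk += 0.2   # large purchases raise the stakes
    return round(max(0.0, min(1.0, risk)), 2)

# A McDonald's meal in a city I just flew to: low risk, let it through.
print(transaction_risk(False, True, 12.50))   # 0.2
# The same swipe with no travel record: worth a closer look.
print(transaction_risk(False, False, 12.50))  # 0.6
```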

But can you measure intent for an NPE?

  • Does Kwebbelkop AI have a reason to perform a particular activity?
  • Does my pocket calculator have a reason to tell me that 1 plus 1 equals 3?
  • Does my ceramic plate have a reason to stay intact when I drop it ten meters?

I’m not sure.

By Bundesarchiv, Bild 102-13018 / CC-BY-SA 3.0, CC BY-SA 3.0 de, https://commons.wikimedia.org/w/index.php?curid=5480820.

Training the Bot (who was smarter than I thought)

Perhaps I was a little agitated or excited this afternoon.

  • Perhaps I was a little agitated because five (or maybe more; I lost count) scammers had sent me LinkedIn DMs when they gravitated to my #opentowork LinkedIn banner…like flies speeding to a carcass. (Which gave me an excuse to write about identity management on LinkedIn.)
  • Or perhaps I was a little excited because a Never Search Alone participant had shared an identity management product marketing position with me.

The bot says hello

After applying for said job, I was reading the company’s website to learn about its identity management product when the company’s bot kept on messaging me.

At 2:30 pm Chat Bot says: Hello, this is an AI Rep from [COMPANY]. If you need any assistance, feel free to ask.

At 2:30 pm Chat Bot says: Saw that you’re interested in our SaaS management and identity governance software. Any questions?

Being the curious type (after all, I worked for IDEMIA), I wondered how the chat bot would deal with what I REALLY wanted to ask about.

  • Not about a demo.
  • Not about the company’s recent funding round. (Series B, $35 million.)
  • Not about how they want to save the badgers or whatever.
  • But about my just-completed job application.

At 2:47 pm I say: With 30 years’ identity experience and 9 years as a PMM, I just applied for your Sr PMM Identity Management role. So the only question that I have is when you will interview me.

Yes, 30 years. Not “over 29” years. See my prior post.

And yes, my assumption that the company would immediately drop everything and interview me may have been a little overconfident. But it’s better than imposter syndrome.

And after all, this was just a bot. I assumed that the bot would be completely confused by my question, so no harm would come from overconfidence.

You know what they say about people who assume?

The bot exceeds expectations

At 2:47 pm Chat Bot says: Thanks for applying for the Sr PMM Identity Management role! Could you provide your email address so we can follow up on your application status?

One note here: the bot parroted my own wording back to me. The company’s job description does not include a job title with the “PMM” acronym.

Regardless, I was pleasantly surprised. The bot was trained to respond to job applicants. So I provided the email address that I used for my job application.

The bot does not meet expectations

That’s when I was UNPLEASANTLY surprised.

At 2:47 pm Chat Bot says: Thank you for your interest. Please provide a business email address so we can follow up on your application status.

Apparently the training wasn’t thorough enough. Think about it; if you’re an individual applying for a job, why would you apply with a “business email address”? Which one? From your current employer? THAT is not a good idea.

But as it turns out, I DO have a business email address: the one for Bredemarket. So I entered that, even though that email address does not appear anywhere on the job application that I had just submitted.

At 2:47 pm Chat Bot says: Thank you for providing your email. Our team will review the conversation and follow up if necessary. How else can I assist you today?

So it will be interesting to see whether the company contacts me at both of my email addresses.

(Post image generated at craiyon.com)

Postscript for those who don’t click on every link in a Bredemarket post

You missed Brian May’s collaboration with the Weebl folks.

From https://www.youtube.com/watch?v=EllYgcWmcAY.

This resonated with me because in mid-September I spent some Instagram time thinking about the word “flash” (as in how fast a human can create content).

BigBear.ai’s Digital Identity Products

One of my more popular posts during the past year (October 2023 to September 2024) was one that I wrote way back in 2021, “Pangiam, CLEAR, and others make a ‘sporting’ effort to deny (or allow) stadium access.”

A lot has happened since then. (The acquisition of Pangiam by BigBear.ai closed in March of this year.)

Here is how BigBear.ai describes its digital identity offerings in 2024:

  • Pangiam is BigBear.ai’s digital identity brand, harnessing facial recognition, image-based anomaly detection and advanced biometrics with computer vision and predictive analytics.
  • Trueface: Performs one of the fastest one-to-many (1:N) facial matches with real-time photos, delivering safe and efficient identity verification.
  • veriScan™: Securely captures and transmits real-time photos into a biometric matching service supporting access control and biometric boarding/bag tags.
  • Dartmouth: Delivers real-time image-based anomaly detection for enhanced 3D baggage screening.

All these products, including Dartmouth, were developed before the BigBear.ai acquisition. (Where is Pangiam Bridge?)

We’ll have to wait and see what happens next.

Rethinking (some of) My Go-to-market Concepts

My current two categories for go-to-market collateral, “external” and “internal,” are not sufficient. I want to fix that, but I’m still thinking through things, such as what to call the third category. As of this moment, my best option is “intrinsic,” based upon my conversations with my good buddy Gemini.

My current two categories for go-to-market collateral

Based upon go-to-market efforts that stretch back a decade (even before I formally became a Product Marketing Manager), I have traditionally divided go-to-market output into two categories.

22 types of content Bredemarket can create.
  • External content for your hungry people (target audience), such as articles, blog posts, social media, case studies, white papers, and proposals. This content goes to CEOs, marketers, engineers, IT staff, and many others at the companies that buy from you.
  • Internal content for the people in the company who talk to your hungry people, such as battlecards, sales playbooks, and scripts for trade show demonstrations. This content goes to salespeople, marketers, product managers, customer success folks, and many other employees and contractors.
  • Some content can be either external or internal, depending upon the audience.

Simple and straightforward, right?

My emerging third category

But during a current go-to-market effort for a client, I realized that these two categories are insufficient, since both the client and I are creating content that will never be seen by either the external audience or the internal audience.

I’ll give you two heavily redacted and anonymized examples for a #fakefakefake machine.

AI-generated by Google Gemini.
  • Bob wrote a Word document that listed the pain points that MegaCorp’s clients encounter when trying to put green round widgets on their end customers’ beanie caps.
  • Judy created a picture with stick figures that showed how clients can use the MegaCorp WidgetPlacer 2000 for their beanie cap production needs.

Bob and Judy (names from this song) are NEVER going to show that document or that picture to the customers, or even to the company’s own account managers.

So why create it?

Because they will be the initial guides as the go-to-market team creates all the other external and internal collateral for the effort.

These foundation documents are a critical part of the go-to-market effort. Without them, the subsequent go-to-market collateral will be a contradictory mishmash of stuff, and you’ll need to undertake a lot of effort to align all the content after the fact.

And the third category is…

…well, I don’t know what to call this third category yet.

  • “Internal internal content” seems a bit silly.
  • “Foundational content” seems a bit grandiose.
  • “Bob and Judy” content seems like an independent TV show.

Google Gemini, you’re my only hope.

Not THOSE Gemini. Metropolitan Museum of Art, New York, Public Domain, https://commons.wikimedia.org/w/index.php?curid=15044467.

My question:

If we call the first category “external” marketing content for customers, and we call the second category “internal” marketing content for employees, then what do we call the third category which is only for the people who create marketing content for customers and employees?

Gemini’s first batch of answers:

Internal Marketing Content for Marketing Teams

Marketing Team Communication

Marketing Team Resources

Internal Marketing Tools

Brevity is NOT a strength of generative AI. Even when I specifically instructed Gemini to provide single word responses, I got hyphenated words:

Intra-marketing (combining “intra” for within and “marketing”)

Meta-marketing (implying a higher-level perspective on marketing)

Team-marketing (emphasizing its focus on the marketing team)

The fatal flaw is that not all the people on a go-to-market team are marketers. (This is my fault because of the prompt I wrote.)

And if I lop off the last part of each compound phrase, two of my options are “team” (too generic) and “meta” (NO).

And “intra” is too close to “internal internal.” But perhaps it’s a start. So I asked Gemini for words similar to “intra-” and got this response:

The prefix “intra-” means “within,” “inside,” or “inner.” Here are 12 words that convey a similar meaning:

  1. Inward: Turning or moving toward the inside.
  2. Internal: Existing or occurring within something.
  3. Inner: Situated or located within.
  4. Intrinsic: Belonging to or inherent in the nature of something.
  5. Interior: The inside part of something.
  6. Intramural: Occurring within a school or institution.
  7. Intranet: A private computer network within an organization.
  8. Intravenous: Administered into a vein.
  9. Intracranial: Situated or occurring within the skull.
  10. Intracellular: Existing or occurring within a cell.
  11. Intralingual: Relating to or occurring within a language.
  12. Intrajugular: Situated or occurring within the jugular vein.

I kind of like “intrinsic,” but I’ll sleep on it.

In the meantime, what would YOU call the third category?

A (Non-Existent) Stake in the (Existent) Ground

For those who use the term…

What the heck IS a so-called “digital landscape”?

The word “landscape” suggests a physical environment, not a digital environment. Merriam-Webster specifically cites “natural inland scenery,” which even rules out the shoreline, much less a bunch of smartphone apps or SDKs jumbled together.

And how does a DIGITAL landscape evolve, rapidly or otherwise?

Now I’m not suggesting that you AVOID references to the “rapidly evolving digital landscape.” After all, if aspiring influencers and thought leaders use the term, your content needs to sound exactly like theirs. And this applies whether your thought leader is a person or an AI bot. Trust me on this.

Or perhaps you shouldn’t take my advice. Maybe the overuse of hack phrases is NOT a best-of-breed approach.

So why did I write this…

Because a particular respectable vendor began a Facebook post with the words “In today’s rapidly evolving digital landscape.”

And it shook me.

Was this a one-time slip up, or are readers EXPECTING companies to talk like this?

(Digital landscape image AI-generated by Google Gemini)

The KlarnaBots

Klarna CEO Sebastian Siemiatkowski has revealed a plan to eliminate his job and replace himself with an AI bot…

…wait, I misread that. Siemiatkowski has revealed a plan to replace OTHER people’s jobs at Klarna.

Funny how these rightsizing moves, with a few notable exceptions, always seem to spare the executives. The brunt will fall on others.

“With almost half the workforce at the company, Siemiatkowski plans to shift customer service and marketing duties to AI. ‘Not only can we do more with less, but we can do much more with less,’ he said…”

Mashable: https://mashable.com/article/klarna-ceo-plan-to-reduce-workforce-fifty-percent-replace-with-ai

I bet customer service and marketing morale is REALLY high at Klarna right now. If it weren’t for the not-so-good job market at the moment, Siemiatkowski might realize his rightsizing goals via attrition before his bots are ready.

But let me ask my good buddy Google Gemini. The material below was not written by me.

= =

While AI has made significant strides in customer service and marketing, it’s unlikely that Klarna, or any company, could completely eliminate human involvement.

Here’s why:

  •  Nuance and Empathy: Human agents are better equipped to handle complex customer inquiries, especially those involving emotional situations. AI, while improving, still struggles with understanding and responding to nuanced emotions.
  •  Problem-Solving: Certain customer issues require creative problem-solving or judgment calls that AI may not be able to handle effectively.
  •  Ethical Considerations: AI can perpetuate biases present in the data it’s trained on. Human oversight is essential to ensure ethical and fair treatment of customers.
  •  Unforeseen Circumstances: AI may not be able to adapt to unexpected situations or rapidly changing circumstances. Human agents can often address these challenges more effectively.
  • Customer Preference: Many customers still prefer to interact with human agents, especially for complex or sensitive issues.

However, AI can be a valuable tool in these areas. It can automate routine tasks, provide 24/7 support, and offer personalized recommendations. The most effective approach is likely a hybrid model, combining AI with human agents to provide the best possible customer experience.

AI Articles in Ten (Not Five) Minutes—But I Can’t Tell You Why

More on the “human vs. AI vs. both” debate on content generation, and another alternative—the Scalenut tool.

The five-minute turnaround

I’ve been concerned about my own obsolescence for over a year now.

I haven’t seen a lot of discussion of one aspect of #generativeai:

Its ability to write something in about a minute.

(OK, maybe five minutes if you try a few prompts.)

Now I consider myself capable of cranking out a draft relatively quickly, but even my fastest work takes a lot longer than five minutes to write.

“Who cares, John? No one is demanding a five minute turnaround.”

Not yet.

Because it was never possible before (unless you had proposal automation software, but even that couldn’t create NEW text).

What happens to us writers when a five-minute turnaround becomes the norm?

The five-minute requirement

I returned to the topic in January, with a comment on the quality of generative AI text.

Never mind that the resulting generative AI content was wordy, crappy, and possibly incorrect. For some people the fact that the content was THERE was good enough.

OK, Writer.com (with a private dataset) claims to do a better job, but many of the publicly available free generative AI tools are substandard.

Then I noted that sometimes I will HAVE to get that content out without proper reflection. I outlined two measures to do this:

  1. Don’t sleep on the content.
  2. Let full-grown ideas spring out of your head.

But I still prefer to take my time brewing my content. I’ve spent way more than five minutes on this post alone, and I don’t even know how I’m going to end it yet. And I still haven’t selected the critically important image to accompany the post.

Am I a nut for doing things manually?

You’ve gone from idea to 2500+ word articles in 10 minutes.

Now that I’ve set the context, let’s see what Kieran MacRae (quoted above) has to say about Scalenut. But first, here are Kieran’s comments about the state of the industry:

Sure, once upon a time, AI writing tools would write about as well as a 4-year-old.

So what does Scalenut do?

With Scalenut, you will reduce your content creation time by 75% and become a content machine. 

The content gets written in your tone of voice, and the only changes I made were adding personal anecdotes and a little Kieran charm.

But…why?

Why is Scalenut better?

Kieran doesn’t say.

And if Scalenut explains WHY its technology is so great, the description is hidden behind an array of features, benefits, and statistics.

Maybe it’s me, but Scalenut could improve its differentiation here, as outlined in my video.

Differentiation, by Bredemarket.

What Scalenut does…and doesn’t do

I should clarify that copywriting is but one part of Scalenut’s arsenal.

Scalenut is a one-stop-shop AI-powered SEO writing tool that will see you through keyword selection, research, and content production. Plus, you get full access to their copywriting tool, which can create more specific short-form content like product descriptions.

You optimize SEO content by adding NLP keywords, which are the words that Google uses to decide what an article is about.

MacRae cautions that it’s not for “individuals whose writing is their brand,” and Scalenut’s price point means that it’s not for people who only need a few pieces a month.

But if you need a lot of content, and you’re not Stephen King or Dave Barry or John Bredehoft (not in terms of popularity, but of distinctness), then perhaps Scalenut may help you.

I can’t tell you why, though.

(And an apology for those who watch the video; like “The Long Run” album itself, it takes forever to get to the song.)

From https://www.youtube.com/watch?v=Odcn6qk94bs.

Text Generation in Images? Use an LMM.

I use both text generators (sparingly) and image generators (less sparingly) to artificially create text and images. But I encounter one image challenge that you’ve probably encountered also: bizarre misspellings.

This post includes an example, generated in Google Gemini using the following prompt:

Create a square image of a library bookshelf devoted to the works authored by Dave Barry.

Now in the ideal world, Gemini would respond to my prompt by researching Barry’s published titles, and the resulting image would include those book titles (such as Dave Barry Slept Here, one of the greatest history books of all time, maybe or maybe not).

In the mediocre world, at least the book spines would include the words “Dave Barry.”

Gemini gave me nothing of the sort.

From Google Gemini.

The bookshelf may as well contain books by Charles Dikkens, the well-known Dutch author.

Why can’t your image generator spell words properly?

It always mystified me that AI-generated images had so many weird words, to the point where I wondered whether the AI was specifically programmed to misspell.

It wasn’t…but it wasn’t programmed to spell either.

TechCrunch recently published an article in which the title was so good you didn’t have to read the article itself. The title? “Why is AI so bad at spelling? Because image generators aren’t actually reading text.”

This is something that I pretty much forgot.

  • When I use an AI-powered text generator, it has been trained to respond to my textual prompts and create text.
  • When I use an AI-powered image generator, it has been trained to respond to my textual prompts and create images.

Two very different tasks, as noted by Asmelash Teka Hadgu, co-founder of Lesan and a fellow at the DAIR Institute.

“The diffusion models, the latest kind of algorithms used for image generation, are reconstructing a given input,” Hadgu told TechCrunch. “We can assume writings on an image are a very, very tiny part, so the image generator learns the patterns that cover more of these pixels.”

The algorithms are incentivized to recreate something that looks like what it’s seen in its training data, but it doesn’t natively know the rules that we take for granted — that “hello” is not spelled “heeelllooo,” and that human hands usually have five fingers.
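A quick back-of-the-envelope calculation (my numbers, not TechCrunch’s) shows just how tiny that part is: if a book title occupies roughly a 300 by 30 pixel strip in a 1024 by 1024 image, the lettering covers less than one percent of the pixels the model is trying to reconstruct.

```python
# Back-of-the-envelope illustration with made-up dimensions:
# how little of a generated image the lettering actually occupies.

image_pixels = 1024 * 1024   # a typical square generated image
title_pixels = 300 * 30      # a rough strip of text on one book spine

share = title_pixels / image_pixels
print(f"Text covers about {share:.2%} of the image")  # roughly 0.86%
```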

So what’s the solution?

We need LMM image-text generators

The solution is something I’ve talked about before: large multimodal models. Permit me to repeat myself (it’s called repurposing) and quote from Chip Huyen again.

For a long time, each ML (machine learning) model operated in one data mode – text (translation, language modeling), image (object detection, image classification), or audio (speech recognition).

However, natural intelligence is not limited to just a single modality. Humans can read and write text. We can see images and watch videos. We listen to music to relax and watch out for strange noises to detect danger. Being able to work with multimodal data is essential for us or any AI to operate in the real world.

So if we asked a large multimodal model to create an image of a library bookshelf with Dave Barry works, it could actually display book spines with Barry’s actual titles.

So why doesn’t my Google Gemini already provide this capability? It has a text generator and it has an image generator: why not provide both simultaneously?

Because that’s EXPENSIVE.

I don’t know whether Google’s Vertex AI provides the multimodal capabilities I seek, where text in images is spelled correctly.

And even with $300 in credits, I’m not going to spend the money to find out. See Vertex AI’s generative AI pricing here.

Who You Are, Plus What You Have, Equals What You Are

(Part of the biometric product marketing expert series)

Yes, I know the differences between the various factors of authentication.

Let me focus on two of the factors.

  • Something You Are. This is the factor that identifies people. It includes biometric modalities (finger, face, iris, DNA, voice, vein, etc.). It also includes behavioral biometrics, provided that they are truly behavioral and relatively static.
  • Something You Have. While this is used to identify people, in truth this is the factor that identifies things. It includes driver’s licenses and hardware or software tokens.

There’s a very clear distinction between these two factors of authentication: “something you are” for people, and “something you have” for things.

But what happens when we treat the things as beings?

Who, or what, possesses identity?

License Plate Recognition

I’ve spent a decade working with automatic license plate recognition (ALPR), sometimes known as automatic number plate recognition (ANPR).

Actually more than a decade, since my car’s picture was taken in Montclair, California a couple of decades ago doing something it shouldn’t have been doing. I ended up in traffic school for that one.

But my traffic school didn’t have a music soundtrack. From https://www.imdb.com/title/tt0088847/mediaviewer/rm1290438144/?ref_=tt_md_2.

Now license plate recognition isn’t that reliable an identifier, since within a minute I can remove a license plate from a vehicle and substitute another one in its place. However, it’s deemed to be reliable enough that it is used to identify who a car is.

Note my intentional use of the word “who” in the sentence above.

  • Because when my car made a left turn against a red light all those years ago, the police didn’t haul MY CAR into court.
  • Using then-current technology, they identified the car, looked up the registered owner, and hauled ME into court. (A minimal sketch of that plate-to-owner lookup appears below.)
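In software terms, that workflow boils down to a lookup from a thing’s identifier to a person. Here is a minimal sketch, with an invented registration table, of mapping an ALPR plate read (the “what”) to the registered owner (the “who”).

```python
# Illustrative sketch: mapping a recognized plate (the "what") to its
# registered owner (the "who"). The registration data is invented.

REGISTRATIONS = {
    "4ABC123": {"owner": "John Bredehoft", "state": "CA"},
    "8XYZ987": {"owner": "Somebody Else", "state": "CA"},
}

def cite_registered_owner(plate_read: str) -> str:
    """Given an ALPR plate read, return who receives the citation."""
    record = REGISTRATIONS.get(plate_read.upper())
    if record is None:
        return "Plate not on file; manual review required"
    # The camera saw the car, but the citation goes to a person.
    return f"Citation issued to {record['owner']} ({record['state']})"

print(cite_registered_owner("4abc123"))
```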

These days, it’s theoretically possible (where legally allowed) to identify the license plate of the car AND identify the face of the person driving the car.

But you still have this strange merger of who and what in which the non-human characteristics of an entity are used to identify the entity.

What you are.

But that’s nothing compared to what’s emerged over the past few years.

We Are The Robots

When the predecessors to today’s Internet were conceived in the 1960s, they were intended as a way for people to communicate with each other electronically.

And for decades the Internet continued to operate this way.

Until the Internet of Things (IoT) became more and more prominent.

From LINK REMOVED 2025-01-20

How prominent? The Hacker News explains:

Application programming interfaces (APIs) are the connective tissue behind digital modernization, helping applications and databases exchange data more effectively. The State of API Security in 2024 Report from Imperva, a Thales company, found that the majority of internet traffic (71%) in 2023 was API calls.

Couple this with the increasing use of chatbots and other artificial intelligence bots to generate content, and the result is that when you are communicating with someone on the Internet, there is often no “who.” There’s a “what.”

What you are.

Between the cars and the bots, there’s a lot going on.

What does this mean?

There are numerous legal and technical ramifications, but I want to concentrate on the higher meaning of all this. I’ve spent 29 years professionally devoted to the identification of who people are, but this focus on people is undergoing a seismic change.

KITT. By Tabercil – Own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=14927883.

The science fiction stories of the past, including TV shows such as Knight Rider and its car KITT, are becoming the present as we interact with automobiles, refrigerators, and other things. None of them have true sentience, but it doesn’t matter because they have the power to do things.

The late Dr. Frank Poole died in 2001. From https://cinemorgue.fandom.com/wiki/Gary_Lockwood.

In the meantime, the identification industry not only has to identify people, but also identify things.

And it’s becoming more crucial that we do so, and do it accurately.