Baby Steps Toward Order of Magnitude Increases in Fingerprint Resolution

(Part of the biometric product marketing expert series)

For many years, the baseline for high-quality capture of fingerprint and palm print images has been to use a resolution of 500 pixels per inch. Or maybe 512 pixels per inch. Whatever.

The crime scene (latent) folks weren’t always satisfied with this, so they pushed to capture latent fingerprint and latent palm print images at 1000 pixels per inch. Pardon me, 1024.

But beyond this, the resolution of captured prints hasn’t really changed in decades. I’m sure some people have been capturing prints at 2000 (2048) pixels per inch, but there aren’t massive automated biometric identification systems that fully support this resolution from end to end.

But that may be changing.

One important truth about infant fingerprints

For about as long as latent examiners have pursued 1000 ppi print capture, people outside of the criminal justice arena have been looking at fingerprints for a very different purpose.

Our normal civil fingerprint processes require us to identify people via fingerprints beginning at the age of 18, or perhaps at the age of 12.

But how do we identify people in those first 12 years?

More specifically, can we identify someone via their fingerprints at birth, and then authenticate them as an adult by comparing to those original prints?

It’s a dream, but many have pursued this dream. Dr. Anil Jain at Michigan State University has pursued this for years, and co-authored a 2014 paper on the topic.

Given that children, as well as the adults, in low income countries typically do not have any form of identification documents which can be used for this purpose [vaccination], we address the following question: can fingerprints be effectively used to recognize children from birth to 4 years? We have collected 1,600 fingerprint images (500 ppi) of 20 infants and toddlers captured over a 30-day period in East Lansing, Michigan and 420 fingerprints of 70 infants and toddlers at two different health clinics in Benin, West Africa.

At the time, it probably made sense to use 500 pixel per inch scanners to capture the prints, since developing countries don’t have a lot of money to throw around on expensive 1000 ppi scanners. But the use of regular scanners runs counter to a very important truth about infants and their fingerprints. Are you sitting down?

Because infants are smaller than adults, infant fingerprints are smaller than adult fingerprints.

Think about it. The standard FBI fingerprint card assumes that a rolled fingerprint occupies 1.6 inches x 1.5 inches of space. If you were to roll an infant fingerprint, it would occupy much less than that. Heck, I don’t even know if an infant’s entire FINGER is 1.6 inches long.

So the capture device is obtaining these teeny tiny ridges, and these teeny tiny ridge endings, and these teeny tiny bifurcations. Or trying to. And if those second-level details can’t be captured, then you’re not going to get the minutiae, and your fingerprint matching is going to fail.
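To put numbers on it, here's a quick back-of-the-envelope sketch (my own illustration, assuming the FBI card's 1.6 inch x 1.5 inch rolled print box as the capture area) of how pixel counts scale with resolution:

```python
# Pixel dimensions of a rolled fingerprint capture area at various resolutions.
# Assumption: the FBI card's 1.6 in x 1.5 in rolled print box as the capture area.

CAPTURE_WIDTH_IN = 1.6   # inches
CAPTURE_HEIGHT_IN = 1.5  # inches

def pixel_dimensions(ppi: int) -> tuple[int, int]:
    """Return (width, height) in pixels at a given pixels-per-inch resolution."""
    return (round(CAPTURE_WIDTH_IN * ppi), round(CAPTURE_HEIGHT_IN * ppi))

for ppi in (500, 1000, 5000):
    w, h = pixel_dimensions(ppi)
    print(f"{ppi} ppi: {w} x {h} pixels ({w * h:,} pixels total)")
```

Note that a tenfold jump in linear resolution (500 to 5,000 ppi) means a hundredfold jump in total pixels, and therefore in storage and transmission requirements.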

So a decade later, researchers today are adopting a newer approach, according to a Biometric Update summary of an ID4Africa webinar. (This particular portion is at the very end of the webinar, at around the 2 hour 40 minute mark.)

A video presentation from Judge Lidia Maejima of the Court of Justice of Parana, Brazil introduced the emerging legal framework for biometric identification of infants. Her representative Felipe Hay explained how researchers in Brazil developed 5,000 dpi scanners that accurately record the minutiae of infants’ fingerprints.

Did you capture that? We’re moving from five hundred pixels per inch to FIVE THOUSAND pixels per inch. (Or maybe 5120.) Whether even that resolution is capable of capturing infant fingerprint detail remains to be seen.

And as Dr. Joseph Atick noted, all this research is still in its…um…infancy. We won’t know for years whether the algorithms can truly match infant fingerprints to child or adult fingerprints.

By the way, when talking about digital images, Adobe notes that the correct term is pixels per inch, not dots per inch. DPI specifically refers to printer resolution, which is appropriate when you’re printing a fingerprint card but not when you’re displaying an image on a screen.

(Image from https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.500-290e3.pdf )

Amazon One and Palm/Vein Identity Scanning in Healthcare: Does It Work?

If you create your own test data, you’re more likely to pass the test. So what data was used for Amazon One palm/vein identity scanning accuracy testing?

(Part of the biometric product marketing expert series)

(Image from Imagen 3)

I’ve previously discussed Amazon’s biometric palm/vein identity scanning efforts. But according to Dr. Sai Balasubramanian, M.D., J.D. in Forbes, Amazon is entering a new market, healthcare.

“Amazon announced that it is partnering with NYU Langone to launch Amazon One, a contactless palm screening technology, throughout the health system.”

Which makes sense, as long as the medical professional isn’t wearing gloves. I don’t know if Amazon One can read veins through medical gloves.

As I reflected upon this further, I realized something:

  • NIST has tested fingerprint verification and identification.
  • NIST has tested facial recognition. (Not that Amazon participated.)
  • NIST has tested iris recognition.

But NIST has never conducted regular testing of palm identification in general, or palm/vein identity scanning in particular. Not for Amazon. Not for Fujitsu. Not for Imprivata. Not for Ingenico. Not for Pearson. Not for anybody.

So how do we know that Amazon One works?

Because Amazon said so.

“Amazon One is 100 times more accurate than scanning two irises. It raises the bar for biometric identification by combining palm and vein imagery, and after millions of interactions among hundreds of thousands of enrolled identities, we have not had a single false positive.”

Claims may dazzle some people, but (as of 2023) Jim Nash was not among them:

“The company claims it is 99.999 percent accurate but does not offer information supporting that statistic.”

And so far I haven’t found any either.

Since the company trains its algorithm on synthetically generated palms, I would like to make sure the company performs its palm/vein identity scanning accuracy testing on REAL palms. If you actually CREATE the data for any test, including an accuracy test, there’s a higher likelihood that you will pass.

I think many people would like to see public, substantiated Amazon One accuracy data. ZERO false positives is a…BOLD claim to make.

On Marketing Personas

(Imagen 3)

Marketing personas are like NIST biometric tests.

They’re not real.

Use them with caution.

Marketing personas.

This part isn’t in the video:

Yes, I know that marketing personas are representations of your hungry people (target audience) that wonderfully focus the mind on the people interested in your product or service. But if we’re being honest with ourselves, a software purchase is not greatly influenced by a non-person entity’s go-to coffee shop order.

Or whether the purchasing manager is 28 or 68.

So don’t go overboard in persona development.

That is all.

Except for the Bredemarket content-proposal-analysis promo.

https://bredemarket.com/cpa/

CPA

P.S. Dorothy Bullard’s article can be found here.

Clean Fast Contactless Biometrics

(Image from DW)

The COVID-19 pandemic may be a fading memory, but contactless biometrics remains popular.

Back in the 1980s, you had to touch something to get the then-new “livescan” machines to capture your fingerprints. While you no longer had messy ink-stained fingers, you still had to put your fingers on a surface that a bunch of other people had touched. What if they had the flu? Or AIDS (the health scare of that decade)?

As we began to see facial recognition in the 1990s and early 2000s, one advantage of that biometric modality was that it was CONTACTLESS. Unlike fingerprints, you didn’t have to press your face against a surface.

But then fingerprints also became contactless after someone asked an unusual question in 2004.

“Actually this effort launched before that, as there were efforts in 2004 and following years to capture a complete set of fingerprints within 15 seconds…”

This WAS an unusual question, considering that it took a minute or more to capture inked prints or livescan prints. And the government expected this to happen in 15 seconds?

A decade later several companies were pursuing this in conjunction with NIST. There were two types of solutions: dedicated kiosks such as MorphoWave from my then-employer MorphoTrak, and standard smartphone camera solutions such as SlapShot from Sciometrics and Integrated Biometrics.

The, um, upshot is that contactless fingerprint and face capture are now both a thing. Contactless capture provides speed, and even the once-impossible 15 second capture target was blown away.

Fingers and faces can be captured “on the move” in airports, border crossings, stadiums, and university lunchrooms and other educational facilities.

Perhaps iris and voice can also be considered contactless and fast.

But even “rapid” DNA isn’t that rapid.

TPRM

(Imagen 3)

A little (just a little) behind the scenes of why I write what I write.

What does TPRM mean?

I was prompted to write my WYSASOA post when I encountered a bunch of pages on a website that referred to TPRM, with no explanation.

Now if I had gone to the home page of that website, I would have seen text that said “Third Party Risk Management (TPRM).”

But I didn’t go to the home page. I entered the website via another page and therefore never saw the home page explanation of what the company meant by the acronym.

They meant Third Party Risk Management.

Unless you absolutely know that everybody in the world agrees on your acronym definition, always spell out the first instance of an acronym on a piece of content. So if you mention that acronym on 10 web pages, spell it out on all 10 of them.

That’s all I wanted to say…

How is NIST related to TPRM?

…I lied.

Because now I assume you want to know what Third Party Risk Management (TPRM) actually is.

Let’s go to my esteemed friends at the National Institute of Standards & Technology, or NIST.

What is TPRM?

But TPRM is implied in a NIST document entitled Best Practices in Cyber Supply Chain Risk Management (PDF), because there are a lot of “third parties” in the supply chain.

When companies began extensively outsourcing and globalizing the supply chain in the 1980’s and 1990’s, they did so without understanding the risks suppliers posed. Lack of supplier attention to quality management could compromise the brand. Lack of physical or cybersecurity at supplier sites could result in a breach of corporate data systems or product corruption. Over time, companies have begun implementing vendor management systems – ranging from basic, paper-based approaches to highly sophisticated software solutions and physical audits – to assess and mitigate vendor risks to the supply chain.

Because if MegaCorp is sharing data with WidgetCorp, and WidgetCorp is breached, MegaCorp is screwed. So MegaCorp has to reduce the risk that it’s dealing with breachable firms.

The TPRM problem

And it’s not just my fictional MegaCorp. Cybersecurity risks are obviously a problem. I only had to go back to January 26 to find a recent example.

Bank of America has confirmed a data breach involving a third-party software provider that led to the exposure of sensitive customer data.

What Happened: According to a filing earlier this month, an unidentified third-party software provider discovered unauthorized access to its systems in October. The breach did not directly impact Bank of America’s systems, but the data of at least 414 customers is now at risk.

The breach pertains to mortgage loans and the compromised data includes customers’ names, social security numbers, addresses, phone numbers, passport numbers, and loan numbers.

Note that the problem didn’t occur at Bank of America’s systems, but at the systems of some other company.

Manage your TPRM…now that you know what I mean by the acronym.

How Bredemarket Adopts Your Point of View

The video embedded in my “Where is ByteDance From?” blog post included an interesting frame:

“So depending upon your needs, you can argue that”

This frame was followed by three differing answers to the “Where is ByteDance From?” question.

But isn’t there only one answer to the question? How can there be three?

It all depends upon your needs.

Who is the best age estimation vendor?

I shared an illustrative example of this last year. When the National Institute of Standards and Technology (NIST) tested its first six age estimation algorithms, it published the results for everyone to see.

“Because NIST conducts so many different tests, a vendor can turn to any single test in which it placed first and declare it is the best vendor.

“So depending upon the test, the best age estimation vendor (based upon accuracy and/or resource usage) may be Dermalog, or Incode, or ROC (formerly Rank One Computing), or Unissey, or Yoti. Just look for that “(1)” superscript….

“Out of the 6 vendors, 5 are the best. And if you massage the data enough you can probably argue that Neurotechnology is the best also. 

“So if I were writing for one of these vendors, I’d argue that the vendor placed first in Subtest X, Subtest X is obviously the most important one in the entire test, and all the other ones are meaningless.”

Are you the best? Only if I’m writing for you

I will let you in on a little secret.

  • When I wrote things for IDEMIA, I always said that IDEMIA was the best.
  • When I wrote things for Incode, I always said that Incode was the best.
  • And when I write things for each of my Bredemarket clients, I always say that my client is the best.

I recently had to remind a prospect of this fact. This particular prospect has a very strong differentiator from its competitors. When the prospect asked for past writing samples, I included this caveat:

“I have never written about (TOPIC 1) or (TOPIC 2) from (PROSPECT’S) perspective, but here are some examples of my writing on both topics.”

I then shared four writing samples, including something I wrote for my former employer Incode about two years ago. I did this knowing that my prospect would disagree with my assertions that Incode’s product is so great…and greater than the prospect’s product. 

If this loses me the business, I can accept that. Anyone with any product marketing experience in the identity industry is guaranteed to have said SOMETHING offensive to most of the 80+ companies in the industry.

How do I write for YOU?

But let’s say that you’re an identity firm and you decide to contract with Bredemarket anyway, even though I’ve said nice things about your competitors in the past.

How do we work together to ensure that I say nice things about you?

That’s where my initial questions (seven, plus some more) come into play.

My first seven questions.

By the time we’re done, we have hopefully painted a hero picture of your company, describing why you are the preferred solution for your customers—better than IDEMIA, Incode, or anyone else.

(Unless of course IDEMIA or Incode contracts with Bredemarket, in which case I will edit the sentence above just a bit.)

So let’s talk

If you would like to work with Bredemarket for differentiated content, proposal, or analysis work, book a free meeting on my “CPA” page.

CPA

You Need FAT and SAT

On LinkedIn, I was just discussing the difference between a controlled study and a real-world test. Think of a NIST test vs. a benchmark.

Then I started talking about some of the post-contract signature tests in the automated biometric identification system world, including factory acceptance tests and site acceptance tests.

These tests are not unique to ABIS. Healthcare (the other biometric) conducts FAT and SAT also, as Powder Systems notes.

“When manufacturing complex machinery in industries such as pharmaceuticals or fine chemicals, extensive equipment testing must be carried out before commissioning.

“It requires thorough functional, performance, and safety tests of intricate systems. These may comprise many components and interdependencies. Challenging though it may be, these must be systematically assessed before they’re put into operation. This approach is broadly known as acceptance testing.

“There are many forms of acceptance testing. Two closely related approaches that often come in for confusion are Factory Acceptance Testing (FAT) and Site Acceptance Testing (SAT). Both are critical stages in the verification and validation of equipment and systems within industrial and manufacturing contexts. However, they differ significantly in terms of location, timing, purpose, scope, participants, outcomes, and testing environment.”

You should read the entire article to learn about the significant differences between the two test types. But let me highlight one point:

“Factory acceptance testing typically involves a more rigorous and comprehensive testing process. This testing procedure includes the detailed verification of system components to ensure they function correctly and meet design specifications.”

This is based on the fact that it’s less costly to fix problems early at the factory than to fix them later out in the field.

Whether you’re testing pharmaceutical machinery or ABIS, both factory and site acceptance tests are absolutely critical. Skipping one of the two tests does not save costs.

The NIST Test You Choose Matters

(Baby smoking image designed by Freepik)

As I’ve mentioned before, when the National Institute of Standards and Technology (NIST) tests biometric modalities such as finger and face, they conduct each test in a bunch of different ways.

One of the ramifications of this is that many entities can claim that they are “the best, according to NIST.”

For example, when NIST released its first version of the age estimation tests, 5 of the 6 participating vendors scored first in SOME category.

But NIST doesn’t do this just to make the vendors happy. NIST does this because biometrics are used in many, many ways.

Let’s look at recent age estimation testing, which currently tests 15 algorithms rather than the original 6.

Governments and private entities can estimate ages for people at the pub, people buying weed, or people gambling. And then there’s the use case that is getting a lot of attention these days—people accessing social media.

Child Online Safety, Ages 13-16 (in my country anyway)

When NIST conceived the age estimation tests, the social media providers generally required their users to be 13 years of age or older. For this reason, one of NIST’s age estimation tests focused upon whether age estimation algorithms could reliably identify those who were 13 years old vs. those who were not.

By Adrian Pingstone – Transferred from en.wikipedia, Public Domain, https://commons.wikimedia.org/w/index.php?curid=112727.

After NIST started age estimation testing, Australia passed a law last month requiring social media users to be 16 years old or older. Which, by the way, basically means that NIST’s 13-year threshold tests are useless in Australia.

Returning to America, NIST actually conducted several different tests for the 13 year old “child online safety” testing. I’m going to focus on one of them:

Age 8-12 – False Positive Rates (FPR) are proportions of subjects aged 8 to 12 but whose age is estimated from 13 to 16 (below 17).

This covers the case in which a social media provider requires people to be 13 years old or older, someone between 8 and 12 tries to sign up for the social media service anyway…AND SUCCESSFULLY DOES SO.

You want the “false positive rate” to be as low as possible in this case, so that’s what NIST measures.
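To make that definition concrete, here's a sketch (my own illustration, not NIST's evaluation code; the sample data is hypothetical) of how such a false positive rate could be computed from pairs of true and estimated ages:

```python
# Sketch of the "age 8-12" false positive rate described above:
# the proportion of subjects truly aged 8 to 12 whose estimated age
# falls in 13 to 16 (below 17), so they would pass a 13+ age gate.
# Illustrative only; not NIST's actual evaluation code.

def fpr_8_12(samples: list[tuple[int, int]]) -> float:
    """samples: (true_age, estimated_age) pairs."""
    in_group = [(t, e) for t, e in samples if 8 <= t <= 12]
    if not in_group:
        return 0.0
    false_positives = sum(1 for _, e in in_group if 13 <= e <= 16)
    return false_positives / len(in_group)

# Hypothetical data: four 8-12 year olds, one of whom is estimated as 14.
samples = [(9, 10), (11, 14), (12, 12), (8, 7), (25, 24)]
print(fpr_8_12(samples))  # 1 of the 4 in-group subjects slips through -> 0.25
```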

Results as of December 10, 2024

The image below was taken from the NIST Face Analysis Technology Evaluation (FATE) Age Estimation & Verification page on December 10, 2024. Because this is a continuous test, the actual results may be different by the time you read this, so be sure to check the latest results.

As of December 10, the best performing algorithm of the 15 tested had a false positive rate (FPR) of 0.0467. The second was close at 0.0542, with the third at 0.0828.

The 15th was a distant last at 0.2929.

But the worst-tested algorithm is much better on other tests

But before you conclude that the 15th algorithm in the “8-12” test is a dud, take a look at how that same algorithm performed on some of the OTHER age estimation tests.

  • For the age 17-22 test (“False Positive Rates (FPR) are proportions of subjects aged 17 to 22 but whose age is estimated from 13 to 16 (below 17)”), this algorithm was the second MOST accurate.
  • And the algorithm is pretty good at correctly classifying 13-16 year olds.
  • It also performs well in the “challenge 25” tests (addressing some of the use cases I mentioned above such as alcohol purchases).

I think they’re over 13. By Obscurasky – Own work, Public Domain, https://commons.wikimedia.org/w/index.php?curid=7776157.

So it looks like this particular algorithm doesn’t (currently) do well with kids, but it does VERY well with adults.

So before you use the NIST tests as a starting point to determine if an algorithm is good for you, make sure you evaluate the CORRECT test, including the CORRECT data.

Do All 5 Identity Factors Apply to Non-Human Identities?

I’ve talked ad nauseam about the five factors of identity verification and authentication. In case you’ve forgotten, these factors are:

  • Something you know.
  • Something you have.
  • Something you are.
  • Something you do.
  • Somewhere you are.

I’ll leave “somewhat you why” out of the discussion for now, but perhaps I’ll bring it back later.

These five (or six) factors are traditionally used to identify people.

Identifying “Non-Person Entities”

But what happens when the entity you want to identify is not a person? I’ll give two examples:

Kwebbelkop AI? https://www.youtube.com/watch?v=3l4KCbTyXQ4.
  • Kwebbelkop AI, discussed in “Human Cloning Via Artificial Intelligence: It’s Starting,” is not a human. But is there a way to identify the “real” Kwebbelkop AI from a “fake” one?
  • In “On Attribute-Based Access Control,” I noted that NIST defined a subject as “a human user or NPE (Non-Person Entity), such as a device that issues access requests to perform operations on objects.” Again, there’s a need to determine that the NPE has the right attributes, and is not a fake, deep or shallow.

There’s clearly a need to identify non-person entities. If I work for IBM and have a computer issued by IBM, the internal network needs to know that this is my computer, and not the computer of a North Korean hacker.

But I was curious. Can the five (or six) factors identify non-person entities?

Let’s consider factor applicability, going from the easiest to the hardest.

The easy factors

  • Somewhere you are. Not only is this extremely applicable to non-person entities, but in truth this factor doesn’t identify persons, but non-person entities. Think about it: a standard geolocation application doesn’t identify where YOU are. It identifies where YOUR SMARTPHONE is. Unless you have a chip implant, there is nothing on your body that can identify your location. So obviously “somewhere you are” applies to NPEs.
  • Something you have. Another no brainer. If a person has “something,” that something is by definition an NPE. So “something you have” applies to NPEs.
  • Something you do. NPEs can do things. My favorite example is Kraftwerk’s pocket calculator. You will recall that “by pressing down this special key it plays a little melody.” I actually had a Casio pocket calculator that did exactly that, playing a tune that is associated with Casio. Later, Brian Eno composed a startup sound for Windows 95. So “something you do” applies to NPEs. (Although I’m forced to admit that an illegal clone computer and operating system could reproduce the Eno sound.)
Something you do, 1980s version. Advance to 1:49 to hear the little melody. https://www.youtube.com/watch?v=6ozWOe9WEU8.
Something you do, 1990s version. https://www.youtube.com/watch?v=miZHa7ZC6Z0.

Those three were easy. Now it gets harder.

The hard factors

Something you know. This one is a conceptual challenge. What does an NPE “know”? For artificial intelligence creations such as Kwebbelkop AI, you can look at the training data used to create it and maintain it. For a German musician’s (or an Oregon college student’s) pocket calculator, you can look at the code used in the device, from the little melody itself to the action to take when the user enters a 1, a plus sign, and another 1. But is this knowledge? I lean toward saying yes—I can teach a bot my mother’s maiden name just as easily as I can learn it myself. But perhaps some would disagree.

Something you are. For simplicity’s sake, I’ll stick to physical objects here, ranging from pocket calculators to hand-made ceramic plates. The major reason that we like to use “something you are” as a factor is the promise of uniqueness. We believe that fingerprints are unique (well, most of us), and that irises are unique, and that DNA is unique except for identical twins. But is a pocket calculator truly unique, given that the same assembly line manufactures many pocket calculators? Perhaps ceramic plates exhibit uniqueness, perhaps not.

That’s all five factors, right?

Well, let’s look at the sixth one.

Somewhat you why

You know that I like the “why” question, and some time ago I tried to apply it to identity.

  • Why is a person using a credit card at a McDonald’s in Atlantic City? (Link) Or, was the credit card stolen, or was it being used legitimately?
  • Why is a person boarding a bus? (Link) Or, was the bus pass stolen, or was it being used legitimately?
  • Why is a person standing outside a corporate office with a laptop and monitor? (Link) Or, is there a legitimate reason for an ex-employee to gain access to the corporate office?

The first example is fundamental from an identity standpoint. It’s taken from real life, because I had never used any credit card in Atlantic City before. However, there was data that indicated that someone with my name (but not my REAL ID; those didn’t exist yet) flew to Atlantic City, so a reasonable person (or identity verification system) could conclude that I might want to eat while I was there.

But can you measure intent for an NPE?

  • Does Kwebbelkop AI have a reason to perform a particular activity?
  • Does my pocket calculator have a reason to tell me that 1 plus 1 equals 3?
  • Does my ceramic plate have a reason to stay intact when I drop it ten meters?

I’m not sure.

By Bundesarchiv, Bild 102-13018 / CC-BY-SA 3.0, CC BY-SA 3.0 de, https://commons.wikimedia.org/w/index.php?curid=5480820.

On Attribute-Based Access Control

In this post I’m going to delve more into attribute-based access control (ABAC), comparing it to role-based access control (RBAC, or what Printrak BIS used), and directing you to a separate source that examines ABAC’s implementation.

(Delve. Yes, I said it. I told you I was temperamental. I may say more about the “d” word in a subsequent post.)

But first I’m going to back up a bit.

Role-based access control

As I noted in a LinkedIn post yesterday:

Back when I managed the Omnitrak and Printrak BIS products (now part of IDEMIA’s MBIS), the cool kids used role-based access control.

My product management responsibilities included the data and application tiers, so user permissions fell upon me. Printrak BIS included hundreds of specific permissions that governed its use by latent, tenprint, IT, and other staff. But when a government law enforcement agency onboarded a new employee, it would take forever to assign the hundreds of necessary permissions to the new hire.

Enter roles, as a part of role-based access control (RBAC).

If we know, for example, that the person is a latent trainee, we can assign the necessary permissions to a “latent trainee” role.

  • The latent trainee would have permission to view records and perform primary latent verification.
  • The latent trainee would NOT have permission to delete records or perform secondary latent verification.

As the trainee advanced, their role could change from “latent trainee” to “latent examiner” and perhaps to “latent supervisor” some day. One simple change, and all the proper permissions are assigned.

But what of the tenprint examiner who expresses a desire to do latent work? That person can have two roles: “tenprint examiner” and “latent trainee.”
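The role mechanics above can be sketched in a few lines (the role and permission names are my own invented examples, not actual Printrak BIS permissions):

```python
# Minimal role-based access control sketch. The role and permission
# names are invented for illustration, not actual Printrak BIS values.

ROLE_PERMISSIONS = {
    "latent trainee": {"view_records", "primary_latent_verification"},
    "latent examiner": {"view_records", "primary_latent_verification",
                        "secondary_latent_verification"},
    "tenprint examiner": {"view_records", "tenprint_verification"},
}

def permissions_for(roles: set[str]) -> set[str]:
    """A user's effective permissions are the union of their roles' permissions."""
    perms: set[str] = set()
    for role in roles:
        perms |= ROLE_PERMISSIONS.get(role, set())
    return perms

# The tenprint examiner who wants to learn latent work simply holds two roles:
perms = permissions_for({"tenprint examiner", "latent trainee"})
print("primary_latent_verification" in perms)    # True
print("secondary_latent_verification" in perms)  # False (still just a trainee)
```

One change to the user's role set, and all the proper permissions follow.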

Role-based access control certainly eased the management process for Printrak BIS’ government customers.

But something new was brewing…

Attribute-based access control

As I noted in my LinkedIn post, the National Institute of Standards and Technology released guidance in 2014 (since revised). The document is NIST Special Publication 800-162, Guide to Attribute Based Access Control (ABAC) Definition and Considerations, and is available at https://doi.org/10.6028/NIST.SP.800-162.

Compared to role-based access control, attribute-based access control is a teeny bit more granular.

Attributes are characteristics of the subject, object, or environment conditions. Attributes contain information given by a name-value pair.

A subject is a human user or NPE, such as a device that issues access requests to perform operations on objects. Subjects are assigned one or more attributes. For the purpose of this document, assume that subject and user are synonymous.

An object is a system resource for which access is managed by the ABAC system, such as devices, files, records, tables, processes, programs, networks, or domains containing or receiving information. It can be the resource or requested entity, as well as anything upon which an operation may be performed by a subject including data, applications, services, devices, and networks.

An operation is the execution of a function at the request of a subject upon an object. Operations include read, write, edit, delete, copy, execute, and modify.

Policy is the representation of rules or relationships that makes it possible to determine if a requested access should be allowed, given the values of the attributes of the subject, object, and possibly environment conditions.

So before you can even start to use ABAC, you need to define your subjects and objects and everything else.

Frontegg provides some excellent examples of how ABAC is used in practical terms. Here’s a government example:

For example, a military officer may access classified documents only if they possess the necessary clearance, are currently assigned to a relevant project, and are accessing the information from a secure location.
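That officer example could be expressed as a policy check along these lines (a minimal sketch; the attribute names and values are my own assumptions, not NIST's schema or Frontegg's code):

```python
# Attribute-based access control sketch for the military officer example.
# Attribute names and values are illustrative assumptions only.

def can_access(subject: dict, obj: dict, environment: dict) -> bool:
    """Policy: access classified documents only with sufficient clearance,
    a relevant project assignment, and a secure access location."""
    return (
        subject["clearance"] >= obj["classification"]      # necessary clearance
        and obj["project"] in subject["projects"]          # assigned to the project
        and environment["location"] == "secure_facility"   # secure location
    )

officer = {"clearance": 3, "projects": {"PROJECT_X"}}
document = {"classification": 3, "project": "PROJECT_X"}

print(can_access(officer, document, {"location": "secure_facility"}))  # True
# Same subject and object, but accessed from an insecure location:
print(can_access(officer, document, {"location": "home_network"}))     # False
```

Note how the decision depends on the subject, the object, AND the environment, not just on who the user is.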

Madame Minna Craucher (right), a Finnish socialite and spy, with her chauffeur Boris Wolkowski (left) in the 1930s. By Anonymous – Iso-Markku & Kähkönen: Valoa ja varjoa: 90 kuvaa Suomesta, s. 32. (Helsinki 2007.), Public Domain, https://commons.wikimedia.org/w/index.php?curid=47587700.

While (in my completely biased opinion) Printrak BIS was the greatest automated fingerprint identification system of its era, it couldn’t do anything like THAT. A Printrak BIS user could have a “clearance” role, but Printrak BIS had no way of knowing whether a person is assigned to an appropriate project or case, and Printrak BIS’ location capabilities were rudimentary at best. (If I recall correctly, we had some capability to restrict operations to particular computer terminals.)

As you can see, ABAC goes far beyond whether a PERSON is allowed to do things. It recognizes that people may be allowed to do things, but only under certain circumstances.

Implementing attribute-based access control

As I noted, it takes a lot of front-end work to define an ABAC implementation. I’m not going to delve into that complexity, but Gabriel L. Manor did, touching upon topics such as:

  • Policy as Code
  • Unstructured vs. Structured Rules
  • Policy configuration using the Open Policy Administration Layer (OPAL)

You can read Manor’s thoughts here (“How to Implement Attribute-Based Access Control (ABAC) Authorization?”).

And there are probably ways to simplify some of this.