Metal Injection Attack: Bypassing Biometric Fingerprint Security

(Image from LockPickingLawyer YouTube video)

This metal injection attack isn’t from an Ozzy Osbourne video, but from a video made by an expert lock picker in 2019 against a biometric gun safe.

The biometric gun safe is supposed to deny access to a person whose fingerprint biometrics aren’t registered (and who doesn’t have the other two access methods). But as Hackaday explains:

“(T)he back of the front panel (which is inside the safe) has a small button. When this button is pressed, the device will be instructed to register a new fingerprint. The security of that system depends on this button being inaccessible while the safe is closed. Unfortunately it’s placed poorly and all it takes is a thin piece of metal slid through the thin opening between the door and the rest of the safe. One press, and the (closed) safe is instructed to register and trust a new fingerprint.”

Biometric protection is of no use if you can bypass the biometrics.

But was the safe (subsequently withdrawn from Amazon) overpromising? The Firearm Blog asserts that we shouldn’t have expected much.

“To be fair, cheap safes like this really are to keep kids, visitors, etc from accessing your guns. Any determined person will be able to break into these budget priced sheet metal safes….”

But still, the ease of bypassing the biometric protection is deemed “inexcusable.”

So how can you detect this injection attack? One suggestion: only allow the new biometric registration control to work when the safe is open (meaning that an authorized user has presumably opened the safe). When the safe is closed, inserting a thin piece of metal shouldn’t allow biometric registration.
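The suggested fix amounts to one guard condition in the safe’s firmware. Here’s a minimal sketch (hypothetical class and method names; a real implementation would live in the safe’s microcontroller, not Python):

```python
# Hypothetical sketch: only honor the enrollment button while the door is open.
class SafeController:
    def __init__(self):
        self.door_open = False
        self.enrolled_prints = []

    def on_enroll_button_pressed(self, fingerprint):
        # The fix: require an open door (i.e., an already-authorized user)
        # before trusting a new fingerprint. A press made by sliding metal
        # through the closed door's gap is simply ignored.
        if not self.door_open:
            return False
        self.enrolled_prints.append(fingerprint)
        return True
```

With this guard, the LockPickingLawyer’s thin-metal button press accomplishes nothing while the door is shut.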

For other discussions of injection attack detection, see these posts: one, two.

By the way, this is why I believe passwords will never die. If you want a cheap way to lock something, just use a combination. No need to take DNA samples or anything.

Oh, and a disclosure: I used Google Gemini to research this post. Not that it really helped.

Reel Customer Focus and Employee Focus

After creating my textual “Customer Focus and Employee Focus,” I used Facebook to repurpose the Imagen 3-created images as a short reel, “Do your prospects believe your claimed employee focus?”

See my original post for the answers to these and following questions:

  • Do J.P. Morgan Chase’s employees matter to Jamie Dimon?
  • Do Meta’s employees matter to Mark Zuckerberg?
  • Do federal employees matter to Elon Musk and Donald Trump?
  • Do Virgin employees matter to Richard Branson?

The song is Nick Gallant’s “Gonna Need A Little Help.”

Do your prospects believe your claimed employee focus?

Customer Focus and Employee Focus

(All images Imagen 3)

When you market to your prospects and customers, will they believe what you say? Or will you be exposed as a liar?

The Bredemarket blog has talked incessantly about customer focus from a marketing perspective, noting that an entity’s marketing materials need to speak to the needs of the customer or the prospect, not the selling entity.

But customer focus alone is not enough. When the customers sign up, they have to deal with someone.

Unless the customer is stuck in answer bot hell (another issue entirely), they will deal with an employee.

The expendables 

And some employees are not happy, because they feel they are expendable.

Steve Craig of PEAK IDV recently shared a long quote from J.P. Morgan Chase’s Jamie Dimon. Here’s a short excerpt:

“Every area should be looking to be 10% more efficient. If I was running a department with a hundred people, I guarantee you, if I wanted to, I could run it with 90 and be more efficient. I guarantee you, I could do it.”

So J.P. Morgan Chase is doing very well, Dimon is doing very well, but he’s implicitly saying that his people suck.

Another CEO, Meta’s Mark Zuckerberg, is more explicit about how much his people suck.

“This is going to be an intense year, and I want to make sure we have the best people on our teams. I’ve decided to raise the bar on performance management and move out low performers faster.”

You may have noticed my intentional use of the word “entity” at the beginning of this post. Because while businesses have attracted much attention in the current culture of “layoffs will continue until morale improves,” these businesses are themselves “low performers” in the shedding people category. Chief DOGE Elon Musk, fresh from reducing X’s headcount, is coordinating layoffs in the public sector.

“Federal agencies were ordered by Donald Trump to fire mostly probationary staff, with as many as 200,000 workers set to be affected and some made to rush off the premises.”

Zuckerberg could only dream of saying “you’re fired” to 200,000 people. That dream would certainly increase his masculine energy, but for now Musk has trumped Zuckerberg on that front.

  • Do J.P. Morgan Chase’s employees matter to Jamie Dimon?
  • Do Meta’s employees matter to Mark Zuckerberg?
  • Do federal employees matter to Elon Musk and Donald Trump?

Regardless of the answer (and one could assert that they like the “good” employees and don’t want them to be harmed by the bad apples), their views are not universal.

The other extreme

Richard Branson (reportedly) does not put his needs first at the Virgin companies he runs.

Nor does he prioritize investors.

Oh, and if you’re one of Virgin’s customers…your happiness isn’t critically important either.

Branson’s stance is famous, and (literally) sounds foreign to the Dimons and Zuckerbergs of the world.

“So, my philosophy has always been, if you can put staff first, your customer second and shareholders third, effectively, in the end, the shareholders do well, the customers do better, and yourself are happy.”

You could argue that this is a means to an end, and that employee focus CAUSES customer focus. What if employee focus is missing?

“If the person who’s working for your company is not given the right tools, is not looked after, is not appreciated, they’re not gonna do things with a smile and therefore the customer will be treated in a way where often they won’t want to come back for more.”

Think about this the next time you have a problem with your Facebook account or at a Chase Bank or with your tax return.

Whether back office issues matter to customers

Of course, I may be reading too much into this, because I have said that the customer doesn’t care about your company. If you solve their problems, they don’t care if you’re hiring 200,000 people or firing 200,000 people.

If you solve their problems.

I can’t cite the source or the company, but I heard a horror story about an unhappy customer. The company had heavily bought into the “layoffs will continue until morale improves” philosophy, resulting in turnover in the employees who dealt with customers. When the customer raised an issue with the company, it made a point of saying that employee John Jones (not the employee’s real name) could have solved the customer’s problem long ago if the company hadn’t removed Jones from the account.

What about your company’s marketing?

So think about this in your marketing. Before you brag about your best places to work award, make sure that your prospect will see evidence of this in the employees they encounter.

“Our 8th annual LinkedIn Top Companies list highlights the 50 best large workplaces to grow your career in the U.S. right now. Fueled by unique LinkedIn data, the methodology analyzes various facets of career progression like promotion rates, skill development and more among employees at each company.”

Number 1 on LinkedIn’s April 2024 list? J.P. Morgan Chase.

Number 2? Amazon.

Number 6? UnitedHealth Group.

Um, maybe not.

In the meantime, take care of yourself, and each other.


Jerry Springer. By Justin Hoch, CC BY 2.0, https://commons.wikimedia.org/w/index.php?curid=16673259.

I Just Saw People

Unlike my other Bredemarket blog posts, this one contains exactly zero images.

For a reason.

My most recent client uses Google Workspace, and I was in the client’s system performing some research for a piece of content I’m writing.

I was using Gemini for the research, and noticed that the implementation was labeled “Gemini Advanced.”

How advanced, I wondered. Bredemarket has a plain old regular version of Gemini with my Google Workspace, so I wondered if Gemini Advanced could do one particular thing that I can’t do.

So I entered one of my “draw a realistic picture” prompts, but did not specify that the entity in the picture had to be a wildebeest or iguana.

I entered my prompt…

…and received a picture that included…

A PERSON.

(This is the part of the blog post where I should display the image, but the image belongs to my client so I can’t.)

In case you don’t know the history of why Google Gemini images of people are hard to get, it’s because of a brouhaha in 2024 that erupted when Google Gemini made some interesting choices when generating its images of people.

When prompted by CNN on Wednesday to generate an image of a pope, for example, Gemini produced an image of a man and a woman, neither of whom were White. Tech site The Verge also reported that the tool produced images of people of color in response to a prompt to generate images of a “1943 German Soldier.”

I mean, when are we going to ever encounter a black Nazi?

Google initially stopped its image generation capabilities altogether, but a few months later in August 2024 it rolled out Imagen 3. As part of this rollout, certain people were granted the privilege to generate images of people again.

Over the coming days, we’ll also start to roll out the generation of images of people, with an early access version for our Gemini Advanced, Business, and Enterprise users, starting in English….We don’t support the generation of photorealistic, identifiable individuals, depictions of minors or excessively gory, violent or sexual scenes.

Not sure whether Gemini Advanced users can generate images of black Popes, black Nazis, non-binary people, or (within the United States) the Gulf of Mexico.

Artificial intelligence is hard.

Incidentally, I have never tried to test guardrail-less Grok to see if it can generate images of black Nazis. And I don’t plan to.

My Gmail Labels Need a Draft 0.5 to Draft 1 Conversion

(All images from Imagen 3)

I’ve previously discussed my writing process, which consists of a draft 0.5 which I normally don’t show to anyone, and then (preferably after sleeping on it) a draft 1 in which I hack a bunch of the junk out of draft 0.5 to streamline the messaging.

I need to apply that elsewhere.

Like my Gmail labels.

Creating a content calendar

Bredemarket just started providing content services for a new consulting client (no proposal or analysis services—yet), and one of my first tasks was to set up a shared content calendar for the client.

Keeping a content calendar in an email or a document or a workbook works, and I’ve done this before. But keeping it on an accessible, shared platform is better because everyone has the same view and you don’t have to worry about synchronization issues.

Creating a content calendar in Jira

While Bredemarket’s own content calendars (internal and external) are in Asana, this client requested that I use Jira. Another client uses Jira for a content calendar, so I knew it would work fine.

If you’re curious, the content calendar I set up has the following statuses:

  • Backlog
  • On Hold
  • To Do
  • Doing
  • Done

Bredemarket’s external content calendar is more complex, but that’s because I know that everything on that calendar goes through my iterative review cycle process, and because most of my external projects require an invoicing step at the end. So “Doing” involves a lot of sub-statuses before I’m “Done.” My client obviously didn’t need all this. 

So I set up the content calendar, and the first issue (CC-1, create content calendar) is Done. (No confetti, Jira? Asana provides confetti.)

As Steve Taylor sang in “Jung and the Restless,” “So what’s the problem?”

Creating email labels

The problem is one of my other obsessive habits, labeling or tagging my emails so that I can easily find them.

All my content work for this client generates a lot of emails. And I decided that the best way to label these emails was with their Jira issue number.

So emails concerning the creation of the content calendar bear the label jiracc001.

And emails concerning another issue are labeled jiracc005.
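The labeling scheme is just a mechanical transformation of the Jira issue key. A sketch of that transformation (this is only my naming convention; it doesn’t call any actual Gmail or Jira API):

```python
def jira_key_to_label(issue_key: str) -> str:
    # "CC-1" -> "jiracc001": prefix with "jira", lowercase the project key,
    # and zero-pad the issue number to three digits so labels sort cleanly.
    project, number = issue_key.split("-")
    return f"jira{project.lower()}{int(number):03d}"
```

So issue CC-1 yields the label jiracc001, and CC-5 yields jiracc005.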

Did I mention that we already have 28 Jira issues so far? (Mostly in the Backlog.)

I shudder to think what my email will look like a week from now. I will find the relevant emails, but will have to wade through dozens or hundreds of labels first.

How to Recognize People From Quite a Long Way Away

I can’t find it, and I failed to blog about it (because reasons), but several years ago there was a U.S. effort to recognize people from quite a long way away.

Recognize, not recognise.

From https://www.youtube.com/watch?v=ug8nHaelWtc.

The U.S. effort was not a juvenile undertaking, but from what I recall was seeking solutions to wartime use cases, in which the enemy (or a friend) might be quite a long way away.

I was reminded of this U.S. long-distance biometric effort when Biometric Update reported on efforts by Heriot-Watt University in Edinburgh, Scotland and other entities to use light detection and ranging (LiDAR) to capture and evaluate faces from as far as a kilometer away.

At 325 metres – the length of around three soccer pitches – researchers were able to 3D image the face of one of their co-authors in millimetre-scale detail.

The same system could be used to accurately detect faces and human activity at distances of up to one kilometre – equivalent to the length of 10 soccer pitches – the researchers say.

(I’m surprised they said “soccer.” Maybe it’s a Scots vs. English thing.)

More important than the distance is the fact that since they didn’t depend upon visible light, they could capture faces shrouded by the environment.

“The results of our research show the enormous potential of such a system to construct detailed high-resolution 3D images of scenes from long distances in daylight or darkness conditions.

“For example, if someone is standing behind camouflage netting, this system has the potential to determine whether they are on their mobile phone, holding something, or just standing there idle. So there are a number of potential applications from a security and defence perspective.”

So much for camouflage.

But this is still in the research stage. Among other things, the tested “superconducting nanowire single-photon detector (SNSPD)” only works at a temperature of 1 kelvin.

That’s cold.

More on Injection Attack Detection

(Injection attack syringe image from Imagen 3)

Not too long after I shared my February 7 post on injection attack detection, Biometric Update shared a post of its own, “Veridas introduces new injection attack detection feature for fraud prevention.”

I haven’t mentioned Veridas much in the Bredemarket blog, but it is one of the 40+ identity firms that are blogging. In Veridas’ case, in English and Spanish.

And of course I referenced Veridas in my February 7 post, where it defined the difference between presentation attack detection and injection attack detection.

Biometric Update played up this difference:

To stay ahead of the curve, Spanish biometrics company Veridas has introduced an advanced injection attack detection capability into its system, to combat the growing threat of synthetic identities and deepfakes…. 

Veridas says that standard fraud detection only focuses on what it sees or hears – for example, face or voice biometrics. So-called Presentation Attack Detection (PAD) looks for fake images, videos and voices. Deepfake detection searches for the telltale artifacts that give away the work of generative AI. 

Neither are monitoring where the feed comes from or whether the device is compromised. 

I can revisit the arguments about whether you should get PAD and…IAD?…from the same vendor, or whether you should get best-in-class solutions to address each issue separately.

But they need to be addressed.

Age By Gesture?

(Churchill image public domain)

And I thought tongue identification was weird.

Biometric Update reported that the Australian government is evaluating a solution that estimates age by gestures.

At first thought I didn’t get it. Holding two fingers up in the air could be a 1960s peace hand gesture or a 1940s victory hand gesture.

Obviously I needed to give this a second thought. So I went to Needemand’s page for BorderAge. This is what I found.

« L’internaute doit simplement effectuer 3 mouvements de la main et l’avant-bras devant la caméra de son écran (ordinateur, tablette, smartphone). En quelques secondes, il/elle vérifie son âge sans dévoiler son identité. »

Help me, Google Translate; you’re my only hope.

“The Internet user simply has to make 3 movements of the hand and forearm in front of the camera on their screen (computer, tablet, smartphone). In a few seconds, he/she verifies his/her age without revealing his/her identity.”

The method is derived from a 1994 scientific paper entitled “Rapid aimed limb movements: Age differences and practice effects in component submovements.” The abstract of the paper reads as follows:

“Two experiments are reported in which younger and older adults practiced rapid aimed limb movements toward a visible target region. Ss were instructed to make the movements as rapidly and as accurately as possible. Kinematic details of the movements were examined to assess the differences in component submovements between the 2 groups and to identify changes in the movements due to practice. The results revealed that older Ss produced initial ballistic submovements that had the same duration but traveled less far than those of younger Ss. Additionally, older Ss produced corrective secondary submovements that were longer in both duration and distance than those of the younger subjects. With practice, younger Ss modified their submovements, but older Ss did not modify theirs even after extensive practice on the task. The results show that the mechanisms underlying movements of older adults are qualitatively different from those in younger adults.”

So what does this mean? Needemand has a separate BorderAge website—thankfully in English—that illustrates the first part of the user instructions.

I don’t know what happens after that, but the process definitely has an “active liveness” vibe, except instead of proving you’re real, you’re proving you’re old, or old enough.

Now I’m not sure if the original 1994 study results were ever confirmed across worldwide populations. But it wouldn’t be the first scheme that is unproven. Do we KNOW that fingerprints are unique?

Another question I have regards the granularity of the age estimation solution. Depending upon your use case and jurisdiction, you may have to show that your age is 13, 16, 18, 21, or 25. Not sure if BorderAge gets this granular.

But if you want a way to estimate age and preserve anonymity (the solution blocks faces and has too low of a resolution to capture friction ridges), BorderAge may fit the bill.

Injection Attack Detection

(Injection attack syringe image from Imagen 3)

Having realized that I have never discussed injection attacks on the Bredemarket blog, I decided I should rectify this.

Types of attacks

When considering falsifying identity verification or authentication, it’s helpful to see how Veridas defines two different types of falsification:

  1. Presentation Attacks: These involve an attacker presenting falsified evidence directly to the capture device’s camera. Examples include using photocopies, screenshots, or other forms of impersonation to deceive the system.
  2. Injection Attacks: These are more sophisticated, where the attacker introduces false evidence directly into the system without using the camera. This often involves manipulating the data capture or communication channels.

To be honest, most of my personal experience involves presentation attacks, in which the identity verification/authentication system remains secure but the information, um, presented to it is altered in some way. See my posts on Vision Transformer (ViT) Models and NIST IR 8491.

By JamesHarrison – Own work, Public Domain, https://commons.wikimedia.org/w/index.php?curid=4873863.

Injection attacks and the havoc they wreak

In an injection attack, the identity verification/authentication system itself is compromised. For example, instead of taking its data from the camera, data from some other source is, um, injected so that it looks like it came from the camera.

Incidentally, injection attacks greatly differ from scraping attacks, in which content from legitimate blogs is stolen and injected into scummy blogs that merely rip off the original writers. Speaking for myself, that practice is not an honorable one.

Note that injection attacks don’t only affect identity systems, but can affect ANY computer system. SentinelOne digs into the different types of injection attacks, including manipulation of SQL queries, cross-site scripting (XSS), and other types. Here’s an example from the health world that is pertinent to Bredemarket readers:

In May 2024, Advocate Aurora Health, a healthcare system in Wisconsin and Illinois, reported a data breach exposing the personal information of 3 million patients. The breach was attributed to improper use of Meta Pixel on the websites of the provider. After the breach, Advocate Health was faced with hefty fines and legal battles resulting from the exposure of Protected Health Information (PHI).
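Of the flavors SentinelOne lists, SQL injection is the textbook case: a query built by string concatenation can be subverted, while a parameterized query treats the attacker’s input as data. A generic sketch using Python’s sqlite3 module (not tied to any incident above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")

attacker_input = "nobody' OR '1'='1"

# Vulnerable: the attacker's quote breaks out of the string literal,
# turning the WHERE clause into a tautology that matches every row.
unsafe = f"SELECT name FROM users WHERE name = '{attacker_input}'"
print(conn.execute(unsafe).fetchall())  # returns rows it never should have

# Safe: the ? placeholder keeps the input as data, never as SQL.
safe = "SELECT name FROM users WHERE name = ?"
print(conn.execute(safe, (attacker_input,)).fetchall())  # returns nothing
```

The same principle (never splice untrusted input into executable context) underlies the defenses against XSS and the other injection types SentinelOne describes.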

Returning to the identity sphere, Mitek Systems highlights a common injection.

Deepfakes utilize AI and machine learning to create lifelike videos of real people saying or doing things they never actually did. By injecting such videos into a system’s feed, fraudsters can mimic the appearance of a legitimate user, thus bypassing facial recognition security measures.

Again, this differs from someone with a mask getting in front of the system’s camera. Injections bypass the system’s camera.
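One conceptual countermeasure is to cryptographically bind each frame to a trusted capture device, so that injected frames fail verification. A simplified HMAC sketch with hypothetical function names (real systems rely on hardware-backed keys and device attestation, not a Python script):

```python
import hashlib
import hmac
import os

# Hypothetical secret provisioned into the trusted camera and shared
# with the verifying server.
DEVICE_KEY = os.urandom(32)

def sign_frame(frame_bytes: bytes) -> bytes:
    # The trusted camera tags each frame it actually captures.
    return hmac.new(DEVICE_KEY, frame_bytes, hashlib.sha256).digest()

def frame_is_authentic(frame_bytes: bytes, tag: bytes) -> bool:
    # The verifier rejects frames whose tag doesn't match -- e.g. a
    # deepfake video injected into the feed by someone without the key.
    expected = hmac.new(DEVICE_KEY, frame_bytes, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)
```

A deepfake injected downstream of the camera arrives without a valid tag, so the verifier can flag the feed as compromised, regardless of how lifelike the video looks.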

Fight back, even when David Horowitz isn’t helping you

So how do you detect that you aren’t getting data from the camera or capture device that is supposed to be providing it? Many vendors offer tactics to attack the attackers; here’s what ID R&D (part of Mitek Systems) proposes.

These steps include creating a comprehensive attack tree, implementing detectors that cover all the attack vectors, evaluating potential security loopholes, and setting up a continuous improvement process for the attack tree and associated mitigation measures.

And as long as I’m on a Mitek kick, here’s Chris Briggs telling Adam Bacia about how injection attacks relate to everything else.

From https://www.youtube.com/watch?v=ZXBHlzqtbdE.

As you can see, the tactics to fight injection attacks are far removed from the more forensic “liveness” procedures such as detecting whether a presented finger is from a living breathing human.

Presentation attack detection can only go so far.

Injection attack detection is also necessary.

So if you’re a company guarding against spoofing, you need someone who can create content, proposals, and analysis that can address both biometric and non-biometric factors.

Learn how Bredemarket can help.

CPA

Not that I’m David Horowitz, but I do what I can. As did David Horowitz’s producer when he was threatened with a gun. (A fake gun.)

From https://www.youtube.com/watch?v=ZXP43jlbH_o.