Maximizing Event ROI with the Bredemarket 2800 Medium Writing Service

Part of the IBM exhibit at CeBIT 2010. CC BY 2.0, https://commons.wikimedia.org/w/index.php?curid=10326025.

When your company attends events, you’ll want to maximize your event return on investment (ROI) by creating marketing content that you publish before, during, and after the event.

This is how you do it.

And I’ll spill a couple of secrets along the way.

The first secret (about events)

I’m going to share two secrets in this post. OK, maybe they’re not that secret, but you’d think they ARE secrets because no one acknowledges them.

The first one has to do with event attendance. You personally might be awed and amazed when you’re in the middle of an event, surrounded by hundreds, or thousands, or tens of thousands of people, all of whom are admiring your exhibit booth or listening to your CEO speak.

Technically not a CEO (Larry Ellison’s official title is Chief Technology Officer, and the CEO is Safra Catz), but you get the idea. By Oracle PR Hartmann Studios – CC BY 2.0, https://commons.wikimedia.org/w/index.php?curid=47277811.

But guess what?

Many, many more people are NOT at the event.

They can’t see your exhibit booth, and can’t hear your speaker. They’re on the outside, TRYING to look in.

CC BY 2.0.

And all the money you spent on booth space and travel and light-up pens does NOTHING for the people who aren’t there…

Unless you bring the event to them. Your online content can bring the event to people who were never there.

But you need to plan, create, and approve your content before, during, and after the event. Here’s how you do that.

Three keys to creating event-related content

Yes, you can just show up at an event, take some pictures, and call it a day. But if you want to maximize your event return on investment, you’ll be a bit more deliberate in executing your event content. Ideally you should be:

Planning your event content

Before the event begins, you need to plan your content. While you can certainly create some content on a whim as opportunity strikes, you need to have a basic idea of what content you plan to create.

Content I created before attending the APMP Western Chapter Training Day on October 29, 2021. From https://www.youtube.com/watch?v=9rS5Mc3w4Nk.
  • Before the event. Why should your prospects and customers care about the event? How will you get prospects and customers to attend the event? What will attendees and non-attendees learn from the event?
  • During the event. What event activities require content generation? Who will cover them? How will you share the content?
Some dude creating Morphoway-related content for Biometric Update at the (then) ConnectID Expo in 2015.
  • After the event. What lessons were learned? How will your prospects and customers benefit from the topics covered at the event? Why should your prospects buy the product you showcased at the event?

Creating your event content

Once you have planned what you want to do, you need to do it. Before, during, and after the event, you may want to create the following types of content:

  • Blog posts. These can announce your attendance at the event before it happens, significant goings-on at the event (such as your CEO’s keynote speech or the evening party launching your new product), or lessons learned from the event (what your CEO’s speech or your new product means for your prospects and clients). Blog posts can be created relatively quickly (though not as quickly as some social media posts), and definitely benefit your bottom line.
  • Social media. Social media such as Facebook, Instagram, and LinkedIn can also be used before, during, and after the event. Social media excels at capturing the atmosphere of the event, as well as significant activities. When done right, it lets people experience the event who were never there.
  • E-mails. Don’t forget about e-mails before, during, and after the event. I forgot about e-mails once and paid the price. I attended an event but neglected to tell my e-mail subscribers that I was going to be there. When I got to the event, I realized that hardly any of the attendees understood the product I was offering; they were not the people who were hungry for my product. If I had stocked the event with people from my e-mail list, the event would have been more productive for me.
  • Data sheets. Are you announcing a new product at an event? Have the data sheet ready.
  • Demonstration scripts. Are you demonstrating a new or existing product at the event? Script out your demonstration so that your demonstrators start with the same content and make the points YOU want them to make.
  • Case studies and white papers. While these usually come into play after the event, you may want to release an appropriate case study or white paper before or during the event, tied to the event topic. Are you introducing a new product at an industry conference? Time your product-related white paper for release during the conference. And promote the white paper with blog posts, social media, and e-mails.
  • Other types of content. There are many other types of content that you can release before, during, or after an event. Here’s a list of them.

Approving your event content

Make sure that your content approval process is geared for the fast-paced nature of events. I can’t share details, but:

  • If your content approval process requires 24 hours, then you can kiss on-site event coverage goodbye. What’s the point in covering your CEO’s Monday 10:00 keynote speech if the content doesn’t appear until 11:00…on Tuesday?
  • If your content approval process doesn’t have a timeline, then you can kiss ALL event coverage goodbye. There have been several times when I’ve written blog posts announcing my company’s attendance at an event…and the blog posts weren’t approved until AFTER the event was already over. I salvaged the blog posts via massive rewrites.

So how are you going to generate all this content? This brings us to my proposed solution…and the second secret.

The second secret (about Bredemarket’s service)

By Karl Thomas Moore – Own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=58968347.

The rest of this post talks about one of Bredemarket’s services, the Bredemarket 2800 Medium Writing Service. For those who haven’t heard about it, it’s a service where I provide between 2,800 and 3,200 words of written text.

“But John,” you’re asking, “how is a single block of 3,200 words of text going to help me with my event marketing?”

Time to reveal the second secret…

You can break up those 3,200 words any way you like.

For example, let’s say that you’re planning on attending an event. You could break the text up as follows:

  • One 500-word blog post announcing your attendance at the event.
  • Three 100-word social media posts before the event.
  • One 500-word blog post as the event begins.
  • One 300-word product data sheet prepared before the event and released on the second day of the event.
  • One 500-word blog post announcing the new product.
  • Three 100-word social media posts tied to the new product announcement.
  • One 500-word post-event blog post with lessons learned.
  • Three 100-word social media posts after the event.

For $2,000 (as of June 2024), you receive written text for complete event coverage, arranged in any way you need.

So how can you and your company receive these benefits?

Read about the Bredemarket 2800 Medium Writing Service

First, read the data sheet for the Bredemarket 2800 Medium Writing Service so you understand the offer and process.

Contact Bredemarket…now

Second, contact Bredemarket to get the content process started well BEFORE your event. Book a meeting with me at calendly.com/bredemarket. Be sure to fill out the information form so I can best help you.

But don’t wait. If your event is in September…don’t contact me in October.

Digital Identity and Public Benefits

Both the U.S. National Institute of Standards and Technology and the Digital Benefits Hub made important announcements this morning. I will quote portions of the latter announcement.

The National Institute of Standards and Technology (NIST), the Digital Benefits Network (DBN) at the Beeck Center for Social Impact + Innovation at Georgetown University, and the Center for Democracy and Technology (CDT) are collaborating on a two-year-long collaborative research and development project to adapt NIST’s digital identity guidelines to better support the implementation of public benefits policy and delivery while balancing security, privacy, equity, and usability….

In response to heightened fraud and related cybersecurity threats during the COVID-19 pandemic, some benefits-administering agencies began to integrate new safeguards such as individual digital accounts and identity verification, also known as identity proofing, into online applications. However, the use of certain approaches, like those reliant upon facial recognition or data brokers, has raised questions about privacy and data security, due process issues, and potential biases in systems that disproportionately impact communities of color and marginalized groups. Simultaneously, adoption of more effective, evidence-based methods of identity verification has lagged, despite recommendations from NIST (Question A4) and the Government Accountability Office

There’s a ton to digest here. This impacts a number of issues that I and others have been discussing for years.

NIST’s own press release, by the way, can be found here.

True Stories

Image CC BY 2.0.

“If you’re not careful, you might learn something before it’s done.”

(Quote from William H. Cosby, M.A., Ed.D., L.H.D. (resc), from the Fat Albert TV show theme song. From https://www.streetdirectory.com/lyricadvisor/song/upujwj/fat_albert/.)

When I write about space aliens, there’s a reason. And that reason may be to warn identity vendors that silence is NOT golden.

Fake LinkedIn stories

As a frequent reader and writer on LinkedIn, I’ve seen all the tips and tricks to drive engagement. One popular trick is to make up a story that will resonate with the LinkedIn audience.

For example, the writer (usually a self-proclaimed career expert who is ex-FAANG) will tell the entirely fictional story of a clueless hiring manager and an infinitely wise recruiter. The clueless hiring manager is shocked that a candidate accepted a competing job offer. “Didn’t she like us?” asks the hiring manager. The wise recruiter reminds the clueless hiring manager that the candidate had endured countless delays in numerous interviews with the company, allowing another company to express interest in and snatch her.

Job seekers have endured countless delays in their own employment searches. When they read the post, they hoot and holler for the candidate and boo the clueless hiring manager. Most importantly, readers like and love the writer’s post until it goes viral, making the author an ex-FAANG top recruiting voice.

Even though no sources are cited and the story is fictional, it is very powerful.

Well…until you’ve read the same story a dozen times from a dozen recruiters. Then it gets tiresome.

My improvement on fake stories

But those fake stories powerfully drive clicks on LinkedIn, so I wanted to get in on the action. But I was going to add two wrinkles to my fake story.

First, I would explicitly admit that my story is fake. Because authenticity. Sort of.

Second, my story would include space aliens to make it riveting. And to hammer the point that the story is fake.

Now I just had to write a fake story with space aliens.

Or did I?

A repurposed and adapted fake story with space aliens

It turned out that I had already written a fake story. It didn’t have space aliens, but I liked the story I had spun in the Bredemarket blog post “(Pizza Stories) Is Your Firm Hungry for Awareness?”

I just needed to make one of the characters a space alien, and since Jones was based on the striking Grace Jones, I went ahead and did it. Imagine Grace Jones with tentacles, two noses, and eight legs.

With a few additional edits, my fake space alien story was ready for the Sunday night LinkedIn audience.

The truth in the fake story

As the space alien’s tentacles quivered, I snuck something else into the LinkedIn story—some facts.

Kids who watched Fat Albert on TV not only enjoyed the antics, but also learned Important Life Lessons. Now I don’t have multiple advanced degrees like Cosby, but then again I never had multiple degrees rescinded either.

But my life lesson wasn’t to stay in school or pull your pants up. My life lesson was to blog. The lesson was in the form of a statement by Jones’ humanoid colleague Smith, taken verbatim from the Pizza Stories post.

“Take blogging,” replied Smith. “The average company that blogs generates 55% more website visitors. B2B marketers that use blogs get 67% more leads than those who do not. Marketers who have prioritized blogging are 13x more likely to enjoy positive ROI. And 92% of companies who blog multiple times per day have acquired a customer from their blog.”

The stats originally appeared in an earlier post, “How Identity and Biometrics Firms Can Use Blogging to Grow Their Business.”

Data source: Daily Infographic, https://www.dailyinfographic.com/state-of-blogging-industry.

And the fake story also talked about companies (unnamed, but real) who ignored these facts and remained silent on their blog and social channels.

A huge mistake, because their competitors ARE engaging with their prospects, with real stories.

Is your company making the same mistake?

Do you want to fix it?

Drive content results with Bredemarket Identity Firm Services.

I guess I should mention David Byrne. OK, I did.

Investigative Lead, Again

Image from the mid-2010s. “John, how do you use the CrowdCompass app for this Users Conference?” Well, let me tell you…

Because of my former involvement with the biometric user conference managed by IDEMIA, MorphoTrak, Sagem Morpho, Motorola, and older entities, I always like to peek and see what they’re doing these days. And it looks like they’re still prioritizing the educational element of the conference.

Although the 2024 Justice and Public Safety Conference won’t take place until September, the agenda is already online.

Subject to change, presumably.

This Joseph Courtesis session, scheduled for the afternoon of Thursday, September 12, caught my eye. It’s entitled “Ethical Use of Facial Recognition in Law Enforcement: Policy Before Technology.” Here’s an excerpt from the abstract:

This session will focus on post investigative image identification with the assistance of Facial Recognition Technology (FRT). It’s important to point out that FRT, by itself, does not produce Probable Cause to arrest.

Re-read that last sentence, then re-read it one more time. 100% of the wrongful arrest cases would be eliminated if everyone adopted this one practice. FRT is ONLY an investigative lead.

And Courtesis makes one related point:

Any image identification process that includes FRT should put policy before the technology.

Any technology that could deprive a person of their liberty needs a clear policy on its proper use.

September conference attendees will definitely receive a comprehensive education from an authority on the topic.

But now I’m having flashbacks, and visions of Excel session planning workbooks are dancing in my head. Maybe they plan with Asana today.

The Really Big Bunch and Facial Recognition in 2024

CC0, https://commons.wikimedia.org/wiki/File:AAAMM_Big_Tech.svg

Are the Big 3 ID vendors facing a threat from a member of the Really Big Bunch (a/k/a FAANG)? Maybe…maybe not.

Amazon Rekognition and HID Global

According to Biometric Update:

HID Global has teamed up with Amazon Web Services to enhance biometric face imaging capabilities by utilizing the Amazon Rekognition computer vision cloud service on its U.ARE.U camera system.

HID Global has previously used Paravision technology for this device. I don’t know how the Amazon agreement affects this.

And I also don’t know whether HID Global will be prevented from providing the U.ARE.U face product to law enforcement, given Amazon’s 2020-2021 ban on law enforcement use of Amazon Rekognition’s face capabilities.

Amazon Rekognition and the FBI

Especially since Fedscoop revealed in January that the FBI was in the “initiation” phase of using Amazon Rekognition. Neither Amazon nor the FBI would say whether facial recognition was part of the deal.

Why is this significant? Because, as I said before:

If Alphabet or Amazon reverse their current reluctance to market their biometric offerings to governments, the entire landscape could change again.

If they wished, Alphabet, Amazon, and the other tech powers could shut IDEMIA, NEC, and Thales completely out of the biometric business with a minimal (to them) investment. If you’re familiar with SWOT analyses, this definitely falls into the “threat” category.

But the Really Big Bunch still fears public reaction to any so-called “police state” involvement.

It’s Medicare Fraud Prevention Week

Signing the Medicare amendment (July 30, 1965). By White House Press Office. Public Domain, https://commons.wikimedia.org/w/index.php?curid=1394392.

The FBI and others are letting us know that June 3 through June 9 is Medicare Fraud Prevention Week. Pro Seniors:

Fraud costs Medicare an estimated $60 billion per year. It costs Medicare beneficiaries in time, stress, their medical identities, and potentially their health. It costs families, friends, and caregivers in worry and lost work when helping their loved ones recover from falling victim to Medicare fraud.

Of course my primary interest in the topic is ensuring that only the proper people can access Medicare data, preferably through a robust method of identity verification that uses multiple factors.

Not multiple instances of a single factor, especially well-known ones such as your Social Security Number and your mother’s maiden name.

Multiple factors, such as your government-issued driver’s license, your biometrics, and your geolocation.

For more information, see what these vendors are saying about using biometrics to counter healthcare fraud attempts.

It’s My Birthday Too, Yeah

Here’s what I said:

Basically, the difference between “recognition” and “analysis” in this context is that recognition identifies an individual, while analysis identifies a characteristic of an individual….The age of a person is another example of analysis. In and of itself an age cannot identify an individual, since around 385,000 people are born every day. Even with lower birth rates when YOU were born, there are tens or hundreds of thousands of people who share your birthday.

Here’s what ilovemyqa said on Instagram:

Enter your age. 17. User with this age already exists.
From https://www.instagram.com/p/C7qb5S9p8Tc/?igsh=MzRlODBiNWFlZA==.

The Why, How, and What on NIST Age Estimation Testing

(Part of the biometric product marketing expert series)

Normal people look forward to the latest album or movie. A biometric product marketing expert instead looks forward to an inaugural test report from the National Institute of Standards and Technology (NIST) on age estimation and verification using faces.

Waiting

I’ve been waiting for this report for months now (since I initially mentioned it in July 2023), and in April NIST announced it would be available in the next few weeks.

NIST news release

Yesterday I learned of the report’s public availability via a NIST news release.

A new study from the National Institute of Standards and Technology (NIST) evaluates the performance of software that estimates a person’s age based on the physical characteristics evident in a photo of their face. Such age estimation and verification (AEV) software might be used as a gatekeeper for activities that have an age restriction, such as purchasing alcohol or accessing mature content online….

The new study is NIST’s first foray into AEV evaluation in a decade and kicks off a new, long-term effort by the agency to perform frequent, regular tests of the technology. NIST last evaluated AEV software in 2014….

(The new test) asked the algorithms to specify whether the person in the photo was over the age of 21.

Well, sort of. We’ll get to that later.

Current AEV results

I was in the middle of a client project on Thursday and didn’t have time to read the detailed report, but I did have a second to look at the current results. As with other ongoing tests, NIST will update the age estimation and verification (AEV) results as these six vendors (and others) submit new algorithms.

From https://pages.nist.gov/frvt/html/frvt_age_estimation.html as of May 31, 2024. Subject to change.

This post looks at my three favorite questions:

Why NIST tests age estimation

Why does NIST test age estimation, or anything else?

The Information Technology Laboratory and its Information Access Division

NIST campus, Gaithersburg MD. From https://www.nist.gov/ofpm/historic-preservation-nist/gaithersburg-campus. I visited it once, when Safran’s acquisition of Motorola’s biometric business was awaiting government approval. I may or may not have spoken to a Sagem Morpho employee at this meeting, even though I wasn’t supposed to in case the deal fell through.

One of NIST’s six research laboratories is its Information Technology Laboratory (ITL), charged “to cultivate trust in information technology (IT) and metrology.” Since NIST is part of the U.S. Department of Commerce, Americans (and others) who rely on information technology need an unbiased source on the accuracy and validity of this technology. NIST cultivates trust through a myriad of independent tests.

Some of those tests are carried out by one of ITL’s six divisions, the Information Access Division (IAD). This division focuses on “human action, behavior, characteristics and communication.”

The difference between FRTE and FATE

While there is a lot of IAD “characteristics” work that excites biometric folks, including ANSI/NIST standard work, contactless fingerprint capture, the Fingerprint Vendor Technology Evaluation (ugh), and other topics, we’re going to focus on our new favorite acronyms, FRTE (Face Recognition Technology Evaluation) and FATE (Face Analysis Technology Evaluation). If these acronyms are new to you, I talked about them last August (and the deprecation of the old FRVT acronym).

Basically, the difference between “recognition” and “analysis” in this context is that recognition identifies an individual, while analysis identifies a characteristic of an individual. So the infamous “Gender Shades” study, which tested the performance of three algorithms in identifying people’s sex and race, is an example of analysis.

Age analysis

The age of a person is another example of analysis. In and of itself an age cannot identify an individual, since around 385,000 people are born every day. Even with lower birth rates when YOU were born, there are tens or hundreds of thousands of people who share your birthday.

They say it’s your birthday. It’s my birthday too, yeah. From https://www.youtube.com/watch?v=fkZ9sT-z13I. Paul’s original band never filmed a promotional video for this song.

And your age matters in the situations I mentioned above. Even when marijuana is legal in your state, you can’t sell it to a four-year-old. And that four-year-old can’t (or shouldn’t) sign up for Facebook either.

You can check a person’s ID, but that takes time and only works when a person has an ID. The only IDs that a four-year-old has are their passport (for the few who have one) and their birth certificate (which is non-standard from county to county and thus difficult to verify). And not even all adults have IDs, especially in third world countries.

Self-testing

So companies like Yoti developed age estimation solutions that didn’t rely on government-issued identity documents. The companies tested their performance and accuracy themselves (see the PDF of Yoti’s March 2023 white paper here). However, there are two drawbacks to this:

  • While I am certain that Yoti wouldn’t pull any shenanigans, results from a self-test always engender doubt. Is the tester truly honest about its testing? Does it (intentionally or unintentionally) gloss over things that should be tested? After all, the purpose of a white paper is for a vendor to present facts that lead a prospect to buy a vendor’s solution.
  • Even with Yoti’s self tests, it did not have the ability (or the legal permission) to test the accuracy of its age estimation competitors.

How NIST tests age estimation

Enter NIST, where the scientists took a break from metrological testing or whatever to conduct an independent test. NIST asked vendors to participate in a test in which NIST personnel would run the test on NIST’s computers, using NIST’s data. This prevented the vendors from skewing the results; they handed their algorithms to NIST and waited several months for NIST to tell them how they did.

I won’t go into it here, but it’s worth noting that a NIST test is just a test, and test results may not be the same when you implement a vendor’s age estimation solution on CUSTOMER computers with CUSTOMER data.

The NIST internal report I awaited

NOW let’s turn to the actual report, NIST IR 8525 “Face Analysis Technology Evaluation: Age Estimation and Verification.”

NIST needed a set of common data to test the vendor algorithms, so it used “around eleven million photos drawn from four operational repositories: immigration visas, arrest mugshots, border crossings, and immigration office photos.” (These were provided by the U.S. Departments of Homeland Security and Justice.) All of these photos include the actual ages of the persons (although mugshots only include the year of birth, not the date of birth), and some include sex and country-of-birth information.

For each algorithm and each dataset, NIST recorded the mean absolute error (MAE), which is the mean number of years between the algorithm’s estimated age and the actual age. NIST also recorded other error measurements, and for certain tests (such as a test of whether or not a person is 17 years old) the false positive rate (FPR).
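To make the MAE metric concrete, here’s a minimal sketch (my own illustration, not NIST’s scoring code) of how mean absolute error over a set of age estimates could be computed. The function name and sample values are hypothetical.

```python
def mean_absolute_error(estimated_ages, actual_ages):
    """Mean number of years between the algorithm's estimated ages and the actual ages."""
    assert len(estimated_ages) == len(actual_ages)
    # Average the absolute gap, in years, across all subjects
    return sum(abs(est - act) for est, act in zip(estimated_ages, actual_ages)) / len(estimated_ages)

# Hypothetical example: two subjects, off by 3 and 1.5 years respectively
print(mean_absolute_error([24.0, 31.5], [21, 30]))  # → 2.25
```

A lower MAE means the algorithm’s estimates cluster more tightly around the true ages, which is why it serves as the headline accuracy figure in the report.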

The challenge with the methodology

Many of the tests used a “Challenge-T” policy, such as “Challenge 25.” In other words, the test doesn’t estimate whether a person IS a particular age, but whether a person is WELL ABOVE a particular age. Here’s how NIST describes it:

For restricted-age applications such as alcohol purchase, a Challenge-T policy accepts people with age estimated at or above T but requires additional age assurance checks on anyone assessed to have age below T.

So if you have to be 21 to access a good or service, the algorithm doesn’t estimate if you are over 21. Instead, it estimates whether you are over 25. If the algorithm thinks you’re over 25, you’re good to go. If it thinks you’re 24, pull out your ID card.

And if you want to be more accurate, raise the challenge age from 25 to 28.

NIST admits that this procedure results in a “tradeoff between protecting young people and inconveniencing older subjects” (where “older” is someone who is above the legal age but below the challenge age).
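The Challenge-T decision logic described above can be sketched in a few lines (again, my own illustration rather than anything from the NIST report; the function name is hypothetical):

```python
def challenge_t_decision(estimated_age, challenge_age=25):
    """Challenge-T policy: accept if the estimated age is at or above T,
    otherwise require an additional age assurance check (e.g., an ID card)."""
    return "accept" if estimated_age >= challenge_age else "check ID"

print(challenge_t_decision(27))  # over the challenge age: good to go → accept
print(challenge_t_decision(24))  # above 21 but below 25: pull out your ID card → check ID
```

Raising `challenge_age` from 25 to 28 reduces the chance of underage acceptance at the cost of inconveniencing more of those legal-but-younger subjects.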

NIST also performed a variety of demographic tests that I won’t go into here.

What the NIST age estimation test says

OK, forget about all that. Let’s dig into the results.

Which algorithm is the best for age estimation?

It depends.

I’ve covered this before with regard to facial recognition. Because NIST conducts so many different tests, a vendor can turn to any single test in which it placed first and declare it is the best vendor.

So depending upon the test, the best age estimation vendor (based upon accuracy and/or resource usage) may be Dermalog, or Incode, or ROC (formerly Rank One Computing), or Unissey, or Yoti. Just look for that “(1)” superscript.

From https://pages.nist.gov/frvt/html/frvt_age_estimation.html as of May 31, 2024. Subject to change.

You read that right. Out of the 6 vendors, 5 are the best. And if you massage the data enough you can probably argue that Neurotechnology is the best also.

So if I were writing for one of these vendors, I’d argue that the vendor placed first in Subtest X, Subtest X is obviously the most important one in the entire test, and all the other ones are meaningless.

But the truth is what NIST said in its news release: there is no single standout algorithm. Different algorithms perform better based upon the sex or national origin of the people. Again, you can read the report for detailed results here.

What the report didn’t measure

NIST always clarifies what it did and didn’t test. In addition to the aforementioned caveat that this was a test environment that will differ from your operational environment, NIST provided some other comments.

The report excludes performance measured in interactive sessions, in which a person can cooperatively present and re-present to a camera. It does not measure accuracy effects related to disguises, cosmetics, or other presentation attacks. It does not address policy nor recommend AV thresholds as these differ across applications and jurisdictions.

Of course NIST is just starting this study, and could address some of these things in later studies. For example, its ongoing facial recognition accuracy tests never looked at the use case of people wearing masks until after COVID arrived and that test suddenly became important.

What about 22 year olds?

As noted above, the test used a Challenge 25 or Challenge 28 model, which measured whether a person who needed to be 21 appeared to be 25 or 28 years old. This makes sense when current age estimation technology measures MAE in years, not days. NIST calculated the “inconvenience” to 21-25 (or 28) year olds affected by this method.

What about 13 year olds?

While a lot of attention is paid to the use cases for 21 year olds (buying booze) and 18 year olds (viewing porn), states and localities have also paid a lot of attention to the use cases for 13 year olds (signing up for social media). In fact, some legislators are less concerned about a 20 year old buying a beer than a 12 year old receiving text messages from a Meta user.

By Adrian Pingstone – Transferred from en.wikipedia, Public Domain, https://commons.wikimedia.org/w/index.php?curid=112727.

NIST tests for these in the “child online safety” tests, particularly these two:

  • Age < 13 – False Positive Rates (FPR) are proportions of subjects aged below 13 but whose age is estimated from 13 to 16 (below 17).
  • Age ≥ 17 – False Positive Rates (FPR) are proportions of subjects aged 17 or older but whose age is estimated from 13 to 16.
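The two child online safety rates above can be sketched as simple proportions over (actual age, estimated age) pairs. This is my own illustration of the definitions, not NIST’s code; the function names and sample records are hypothetical.

```python
def fpr_under_13(records):
    """Proportion of subjects aged below 13 whose age is estimated from 13 to 16."""
    under_13 = [est for actual, est in records if actual < 13]
    return sum(1 for est in under_13 if 13 <= est <= 16) / len(under_13)

def fpr_17_and_over(records):
    """Proportion of subjects aged 17 or older whose age is estimated from 13 to 16."""
    adults = [est for actual, est in records if actual >= 17]
    return sum(1 for est in adults if 13 <= est <= 16) / len(adults)

# Hypothetical (actual_age, estimated_age) pairs
records = [(11, 14), (12, 10), (18, 15), (25, 26)]
print(fpr_under_13(records))     # 1 of 2 under-13 subjects estimated 13-16 → 0.5
print(fpr_17_and_over(records))  # 1 of 2 adult subjects estimated 13-16 → 0.5
```

The first rate captures children wrongly admitted to a teen-only service; the second captures adults wrongly admitted to it.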

However, the visa database is the only one that includes data of individuals with actual ages below age 13. The youngest ages in the other datasets are 14, or 18, or even 21, rendering them useless for the child online safety tests.

Why NIST researchers are great researchers

The mark of a great researcher is their ability to continue to get funding for their research, which is why so many scientific papers conclude with the statement “further study is needed.”

Here’s how NIST stated it:

Future work: The FATE AEV evaluation remains open, so we will continue to evaluate and report on newly submitted prototypes. In future reports we will: evaluate performance of implementations that can exploit having a prior known-age reference photo of a subject (see our API); consider whether video clips afford improved accuracy over still photographs; and extend demographic and quality analyses.

Translation: if Congress doesn’t continue to give NIST money, then high school students will get drunk or high, young teens will view porn, and kids will encounter fraudsters on Facebook. It’s up to you, Congress.

Don’t Misuse Facial Recognition Technology

From https://www.biometricupdate.com/202405/facewatch-met-police-face-lawsuits-after-facial-recognition-misidentification.

From Biometric Update:

Biometric security company Facewatch…is facing a lawsuit after its system wrongly flagged a 19-year-old girl as a shoplifter….(The girl) was shopping at Home Bargains in Manchester in February when staff confronted her and threw her out of the store….“I have never stolen in my life and so I was confused, upset and humiliated to be labeled as a criminal in front of a whole shop of people,” she said in a statement.

While Big Brother Watch and others are using this story to conclude that facial recognition is evil and no one should ever use it, the problem isn’t the technology. The problem is when the technology is misused.

  • Were the Home Bargains staff trained in forensic face examination, so that they could confirm that the customer was the shoplifter? I doubt it.
  • Even if they were forensically trained, did the Home Bargains staff follow accepted practices and use the face recognition results ONLY as an investigative lead, and seek other corroborating evidence to identify the girl as a shoplifter? I doubt it.

Again, the problem is NOT the technology. The problem is MISUSE of the technology—by this English store, by a certain chain of U.S. stores, and even by U.S. police agencies who fail to use facial recognition results solely as an investigative lead.

A prospect approached me some time ago to have Bredemarket help tell this story. However, the prospect has delayed moving forward with the project, and so their story has not yet been told.

Does YOUR firm have a story that you have failed to tell?