And one of those records was so unmemorable that it was memorable.
The album, recorded in the early to mid 1960s, trumpeted the group’s extreme versatility. You see, the record included not only surf songs, but also car songs!
The only problem? The album was NOT by the Beach Boys.
Instead, the album was from some otherwise unknown band that was trying to achieve success by doing what the competition did. (In this case, the Beach Boys.)
I can’t remember the name of the band, and I bet no one else can either.
“Me too” in computing and lawn care
Sadly, this tactic of Xeroxing (or Mitaing) the competition is not confined to popular music. Have you noticed that so many recipes for marketing success involve copying what your competitors do?
Semrush: “Analyze your competitors’ keywords that you are not ranking for to discover gaps in your SEO strategy.”
iSpionage: “If you can emulate your competitors but do things slightly better you have a good chance of being successful.”
Someone who shall remain nameless: “Look at this piece of collateral that one of our competitors did. We should do something just like that.”
And of course the tactic of slavishly copying competitors has been proven to work. For example, remember when Apple Computer adopted the slogan “Think The Same” as the company dressed in blue, ensured all its computers could run MS-DOS, and otherwise imitated everything that IBM did?
“But John,” you are saying. “That’s unfair. Not everyone can be Apple.”
My point exactly. Everyone can’t be Apple because they’re so busy trying to imitate someone else—either a competitor or some other really popular company.
Personally, I’m waiting for some company to claim to be “the Bredemarket of satellite television.” (Which would simply mean that the company would have a lot of shows about wildebeests.) But I’ll probably have to wait a while for some company to be the Bredemarket of anything.
(An aside: while talking with a friend, I compared the British phrase “eating your pudding” to the American phrase “eating your own dog food,” although I noted that “I like to say ‘eating your own wildebeest food’ just to stand out.” Let’s see ChatGPT do THAT.)
“Me too” in identity verification
Now I’ll tread into more dangerous territory.
Here’s an example from the identity/biometric world. Since I self-identity (heh) as the identity content marketing expert, I’m supremely qualified to cite this example.
I spent a year embedded in the identity verification industry, and got to see the messaging from my own company and by the competition.
After a while, I realized that most of the firms in the industry were saying the same thing. Here are a few examples. See if you can spot the one word that EVERY company is using:
(Company I) “Reimagine trust.”
(Company J) “To protect against fraud and financial crime, businesses online need to know and trust that their customers are who they claim to be — and that these customers continue to be trustworthy.”
(Company M) “Trust is the core of any successful business relationship. As the digital revolution continues to push businesses and financial industries towards digital-first services, gaining digital trust with consumers will be of utmost importance for survival.”
(Company O) “Create trust at onboarding and beyond with a complete, AI-powered digital identity solution built to help you know your customers online.”
(Company P) “Trust that users are who they say they are, and gain their trust by humanizing the identity experience.”
(Company V) “Stop fraud. Build trust. Identity verification made simple.”
Yes, these companies, and many others, prominently feature the t-word in their messaging.
Now perhaps some of you would argue that trust is essential to identity verification in the same way that water is essential to an ocean, and that therefore EVERYBODY HAS to use the t-word in their communications.
After all, if I was going to create content for this prospect, I had to ensure that the content stood out from its competitors.
Without revealing confidential information, I can say that I asked the firm why they were better than every other firm out there, and why all the other firms sucked. And the firm provided me with a compelling answer to that question. I can’t reveal that answer, but you can probably guess that the word “trust” was not involved.
A final thought
So let me ask you:
Why is YOUR firm better than every other firm out there, and why do all of YOUR competitors suck?
Your firm’s survival may depend upon communicating that answer.
While I don’t use all the marketing tools at my disposal, I am certainly curious about them. After all, such tools provide marketers with powerful insights on their prospects and customers.
I became especially curious about one marketing tool when re-examining a phrase I use often.
I use the phrase “biometric content marketing expert” in a non-traditional way. When I use it, I am attempting to say that I am a content marketing expert on the use of biometrics for identification. In other words, I can create multiple types of content that discuss fingerprint identification, facial recognition, and similar technologies.
But if you speak to a normal person, they will assume that a “biometric content marketing expert” is someone who uses biometrics (the broader term, not the narrower term) to support content marketing. This is something very different—something that is generally known as “facial coding,” a technique that purports to provide information to marketers.
We all know that our faces convey emotions through facial expressions; facial coding is the process of measuring those human emotions. With the help of computer vision, powered by AI and machine learning, emotions can be detected via webcam or mobile camera. The tech tracks every muscle movement on the face, or all action units (AUs), based on the FACS (Facial Action Coding System).
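To make the FACS idea concrete, here is a minimal Python sketch of the coding step, assuming a detector has already produced action-unit activations for one video frame. The pairing of AU 6 (cheek raiser) and AU 12 (lip corner puller) with happiness follows the FACS literature, but the threshold, the rule table, and the activation values are illustrative assumptions, not any vendor’s actual method:

```python
# Hypothetical action-unit activations (0.0 to 1.0) detected for one frame.
# The detector, thresholds, and rule set below are illustrative only.
au_activations = {6: 0.8, 12: 0.9, 4: 0.1}  # cheek raiser, lip corner puller, brow lowerer

# A tiny rule table: an emotion "fires" when all of its AUs exceed a threshold.
EMOTION_RULES = {
    "happiness": [6, 12],    # cheek raiser + lip corner puller
    "anger": [4, 5, 7, 23],  # brow lowerer + upper lid raiser + lid tightener + lip tightener
}

def code_emotions(aus, threshold=0.5):
    """Return the emotions whose required AUs are all active in this frame."""
    return [
        emotion
        for emotion, required in EMOTION_RULES.items()
        if all(aus.get(au, 0.0) > threshold for au in required)
    ]

print(code_emotions(au_activations))  # ['happiness']
```

Note that, consistent with Rathi’s privacy point, nothing in this sketch identifies who is smiling; only which emotion fired in which frame.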
The differences between facial coding and facial recognition
Unlike the topics in which I usually dwell, facial coding:
Does not identify individuals. Many people can share the same emotions, so detection of a particular emotion does not serve as individualization.
Does not provide permanent information. In the course of watching a movie or even a short advertisement, viewers often exhibit a wide range of emotions. Just because you exhibit a particular emotion at the beginning of an ad doesn’t mean you’ll exhibit the same emotion at the conclusion.
As Rathi describes the practice, it preserves privacy by allowing people to opt in and by recording the emotions anonymously.
So, the user’s permission is required to access their camera and all this data is captured with consent. And no video is shared. Only the emotion data of the users are captured through their facial expressions and shared in real-time. The emotions on a person’s face are captured as binary units (0 and 1). Hence no PII (Personally Identifiable Information) related to race, ethnicity, gender, or age is captured at any point in time.
But what if another firm chooses to gather more data, thus reducing the anonymity of the data collected? “I don’t only want to know how people react to the content. I want to know how black women in their 30s react to the content.”
And what if another firm (or a government agency, such as the Transportation Security Administration) chooses to gather the data without explicit consent, or with consent buried deep in the terms of service? In that case, people may not even realize that their facial expressions are being watched.
Examining facial expressions is not the only way to decipher what is happening in a person’s mind as they view content. But it’s powerful.
Well, maybe.
Does everyone exhibit the same facial coding?
The underlying assumption behind emotion recognition is that you can identify emotions at a universal level. If content makes me happy, or if it makes a person halfway around the world happy, we will exhibit the same measurable facial characteristics.
Research has not revealed a consistent, physical fingerprint for even a single emotion. When scientists attach electrodes to a person’s face and measure muscle movement during an emotion, they find tremendous variety, not uniformity. They find the same variety with the body and brain. You can experience anger with or without a spike in blood pressure. You can experience fear with or without a change in the amygdala, the brain region tagged as the home of fear.
When scientists set aside the classical view and just look at the data, a radically different explanation for emotion comes to light. We find that emotions are not universal but vary from culture to culture. They are not triggered; you create them. They emerge as a combination of the physical properties of your body, a flexible brain that wires itself to whatever environment it develops in, and your culture and upbringing.
If Barrett is correct, then how reliable is facial coding, even within a particular region? After all, even Southern California does not have a single universal culture, but is made up of many cultures in which people react in many different ways. And if we preserve privacy by NOT collecting this cultural information, then we may not fully understand the codings that the cameras record.
Back to the familiar “biometric” world
And with that, I will retreat from the broader definition of biometrics to the narrower and more familiar one, as described here.
The term “Biometrics” has also been used to refer to the field of technology devoted to the identification of individuals using biological traits, such as those based on retinal or iris scanning, fingerprints, or face recognition. Neither the journal “Biometrics” nor the International Biometric Society is engaged in research, marketing, or reporting related to this technology.
For those who don’t know, the Prism presents an organized view of all of the digital identity companies—or at least the ones that FindBiometrics and Acuity Market Intelligence knew about. In the last few days, they were literally beggin’ to give companies a last chance for inclusion.
On Monday, I began to see a trickle of companies that talked about their place on the Prism, including iProov and Trustmatic.
I tend to view presentation attack detection (PAD) through the lens of iBeta or occasionally of BixeLab. But I need to remind myself that these are not the only entities examining PAD.
A recent paper authored by Koushik Srivatsan, Muzammal Naseer, and Karthik Nandakumar of the Mohamed Bin Zayed University of Artificial Intelligence (MBZUAI) addresses PAD from a research perspective. I honestly don’t understand the research, but perhaps you do.
Flip Wilson spoofing his natural appearance by portraying Geraldine; some were unable to detect the attack. By NBC Television, Public Domain, https://commons.wikimedia.org/w/index.php?curid=16476809
Face anti-spoofing (FAS) or presentation attack detection is an essential component of face recognition systems deployed in security-critical applications. Existing FAS methods have poor generalizability to unseen spoof types, camera sensors, and environmental conditions. Recently, vision transformer (ViT) models have been shown to be effective for the FAS task due to their ability to capture long-range dependencies among image patches. However, adaptive modules or auxiliary loss functions are often required to adapt pre-trained ViT weights learned on large-scale datasets such as ImageNet. In this work, we first show that initializing ViTs with multimodal (e.g., CLIP) pre-trained weights improves generalizability for the FAS task, which is in line with the zero-shot transfer capabilities of vision-language pre-trained (VLP) models. We then propose a novel approach for robust cross-domain FAS by grounding visual representations with the help of natural language. Specifically, we show that aligning the image representation with an ensemble of class descriptions (based on natural language semantics) improves FAS generalizability in low-data regimes. Finally, we propose a multimodal contrastive learning strategy to boost feature generalization further and bridge the gap between source and target domains. Extensive experiments on three standard protocols demonstrate that our method significantly outperforms the state-of-the-art methods, achieving better zero-shot transfer performance than five-shot transfer of “adaptive ViTs”.
CLIP is the first multimodal (in this case, vision and text) model tackling computer vision and was recently released by OpenAI on January 5, 2021….CLIP is a bridge between computer vision and natural language processing.
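The paper’s idea of aligning an image representation with an ensemble of natural-language class descriptions can be sketched with plain NumPy. To be clear, the four-dimensional “embeddings” below are made up for illustration; a real system would obtain them from CLIP’s image and text encoders, and the prompts in the comments are my own assumptions:

```python
import numpy as np

def normalize(v):
    # Scale vectors to unit length so dot products become cosine similarities.
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

# Ensembles of class-description "embeddings" (illustrative 4-D stand-ins for
# real CLIP text-encoder outputs, e.g. "a photo of a real face" vs.
# "a photo of a printed face").
bonafide_prompts = np.array([[0.9, 0.1, 0.0, 0.1],
                             [0.8, 0.2, 0.1, 0.0]])
spoof_prompts    = np.array([[0.1, 0.9, 0.2, 0.0],
                             [0.0, 0.8, 0.3, 0.1]])

# Average each ensemble into one class prototype, then re-normalize.
class_protos = normalize(np.stack([bonafide_prompts.mean(0), spoof_prompts.mean(0)]))

def classify(image_embedding):
    """Pick the class whose prototype is most cosine-similar to the image."""
    sims = class_protos @ normalize(image_embedding)
    return ["bonafide", "spoof"][int(np.argmax(sims))]

print(classify(np.array([0.85, 0.15, 0.05, 0.05])))  # prints "bonafide"
```

The averaging step is the “ensemble” part: pooling several phrasings of the same class description tends to give a steadier prototype than any single prompt.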
Sadly, Brems didn’t address ViT, so I turned to Chinmay Bhalerao.
Vision Transformers work by first dividing the image into a sequence of patches. Each patch is then represented as a vector. The vectors for each patch are then fed into a Transformer encoder. The Transformer encoder is a stack of self-attention layers. Self-attention is a mechanism that allows the model to learn long-range dependencies between the patches. This is important for image classification, as it allows the model to learn how the different parts of an image contribute to its overall label.
The output of the Transformer encoder is a sequence of vectors. These vectors represent the features of the image. The features are then used to classify the image.
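The patching step Bhalerao describes can be sketched in a few lines of NumPy. This is a toy example of the patch-to-vector step only; the linear projection, position embeddings, and self-attention layers that follow in a real ViT are omitted:

```python
import numpy as np

def image_to_patch_vectors(image, patch_size):
    """Divide a single-channel image into non-overlapping patches and
    flatten each patch into a vector (the first step of a ViT)."""
    h, w = image.shape
    assert h % patch_size == 0 and w % patch_size == 0
    return (image
            .reshape(h // patch_size, patch_size, w // patch_size, patch_size)
            .transpose(0, 2, 1, 3)                  # group rows/cols into patches
            .reshape(-1, patch_size * patch_size))  # flatten each patch

image = np.arange(64).reshape(8, 8)       # toy 8x8 "image"
vectors = image_to_patch_vectors(image, 4)
print(vectors.shape)  # (4, 16): four patches, each a 16-dimensional vector
```

Each of those 16-dimensional vectors is what would then be projected and fed, as a sequence, into the Transformer encoder’s self-attention layers.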
On September 30, FindBiometrics and Acuity Market Intelligence released the production version of the Biometric Digital Identity Prism Report. You can request to download it here.
But FindBiometrics and Acuity Market Intelligence didn’t invent the Big 3. The concept has been around for 40 years. And two of today’s Big 3 weren’t in the Big 3 when things started. Oh, and there weren’t always 3; sometimes there were 4, and some could argue that there were 5.
So how did we get from the Big 3 of 40 years ago to the Big 3 of today?
The Big 3 in the 1980s
Back in 1986 (eight years before I learned how to spell AFIS) the American National Standards Institute, in conjunction with the National Bureau of Standards, issued ANSI/NBS-ICST 1-1986, a data format for information interchange of fingerprints. The PDF of this long-superseded standard is available here.
When creating this standard, ANSI and the NBS worked with a number of law enforcement agencies, as well as companies in the nascent fingerprint industry. There is a whole list of companies cited at the beginning of the standard, but I’d like to name four of them.
De La Rue Printrak, Inc.
Identix, Inc.
Morpho Systems
NEC Information Systems, Inc.
While all four of these companies produced computerized fingerprinting equipment, three of them had successfully produced automated fingerprint identification systems, or AFIS. As Chapter 6 of the Fingerprint Sourcebook subsequently noted:
Morpho Systems resulted from French AFIS efforts, separate from those of the FBI. These efforts launched Morpho’s long-standing relationship with the French National Police, as well as a similar (now former) relationship with Pierce County, Washington.
NEC had deployed AFIS equipment for the National Police Academy of Japan, and (after some prodding; read Chapter 6 for the story) the city of San Francisco. Eventually the state of California obtained an NEC system, which played a part in the identification of “Night Stalker” Richard Ramirez.
After the success of the San Francisco and California AFIS systems, many other jurisdictions began clamoring for AFIS of their own, and turned to these three vendors to supply them.
The Big 4 in the 1990s
But in 1990, these three firms were joined by a fourth upstart, Cogent Systems of South Pasadena, California.
While customers initially preferred the Big 3 to the upstart, Cogent Systems eventually installed a statewide system in Ohio and a border control system for the U.S. government, plus a vast number of local systems at the county and city level.
Between 1991 and 1994, the Immigration and Naturalization Service (INS) conducted several studies of automated fingerprint systems, primarily in the San Diego, California, Border Patrol Sector. These studies demonstrated to the INS the feasibility of using a biometric fingerprint identification system to identify apprehended aliens on a large scale. In September 1994, Congress provided almost $30 million for the INS to deploy its fingerprint identification system. In October 1994, the INS began using the system, called IDENT, first in the San Diego Border Patrol Sector and then throughout the rest of the Southwest Border.
I was a proposal writer for Printrak (divested by De La Rue) in the 1990s, and competed against Cogent, Morpho, and NEC in AFIS procurements. By the time I moved from proposals to product management, the next redefinition of the “big” vendors occurred.
The Big 3 in 2003
There are a lot of name changes that affected AFIS participants, one of which was the 1988 name change of the National Bureau of Standards to the National Institute of Standards and Technology (NIST). As fingerprints and other biometric modalities were increasingly employed by government agencies, NIST began conducting tests of biometric systems. These tests continue to this day, as I have previously noted.
One of NIST’s first tests was the Fingerprint Vendor Technology Evaluation of 2003 (FpVTE 2003).
For those who are familiar with NIST testing, it’s no surprise that the test was thorough:
FpVTE 2003 consists of multiple tests performed with combinations of fingers (e.g., single fingers, two index fingers, four to ten fingers) and different types and qualities of operational fingerprints (e.g., flat livescan images from visa applicants, multi-finger slap livescan images from present-day booking or background check systems, or rolled and flat inked fingerprints from legacy criminal databases).
Eighteen vendors submitted their fingerprint algorithms to NIST for one or more of the various tests, including Bioscrypt, Cogent Systems, Identix, SAGEM MORPHO (SAGEM had acquired Morpho Systems), NEC, and Motorola (which had acquired Printrak). And at the conclusion of the testing, the FpVTE 2003 summary (PDF) made this statement:
Of the systems tested, NEC, SAGEM, and Cogent produced the most accurate results.
Which would have been great news if I were a product manager at NEC, SAGEM, or Cogent.
Unfortunately, I was a product manager at Motorola.
The effect of this report was…not good, and at least partially (but not fully) contributed to Motorola’s loss of its long-standing client, the Royal Canadian Mounted Police, to Cogent.
The Big 3, 4, or 5 after 2003
So what happened in the years after FpVTE was released? Opinions vary, but here are three possible explanations for what happened next.
Did the Big 3 become the Big 4 again?
Now I probably have a bit of bias in this area since I was a Motorola employee, but I maintain that Motorola overcame this temporary setback and vaulted back into the Big 4 within a couple of years. Among other things, Motorola deployed a national 1000 pixels-per-inch (PPI) system in Sweden several years before the FBI did.
Did the Big 3 remain the Big 3?
Motorola’s arch-enemies at Sagem Morpho had a different opinion, which was revealed when the state of West Virginia finally got around to deploying its own AFIS. A bit ironic (or perhaps not), since the FBI’s national AFIS, IAFIS, was located in West Virginia.
Anyway, Motorola had a very effective sales staff, as was apparent when the state issued its Request for Proposal (RFP) and explicitly said that the state wanted a Motorola AFIS.
That didn’t stop Cogent, Identix, NEC, and Sagem Morpho from bidding on the project.
After the award, Dorothy Bullard and I requested copies of all of the proposals for evaluation. While Motorola (to no one’s surprise) won the competition, Dorothy and I believed that we shouldn’t have won. In particular, our arch-enemies at Sagem Morpho raised a compelling argument that it should be the chosen vendor.
Their argument? Here’s my summary: “Your RFP says that you want a Motorola AFIS. The states of Kansas (see page 6 of this PDF) and New Mexico (see this PDF) USED to have a Motorola AFIS…but replaced their systems with our MetaMorpho AFIS because it’s BETTER than the Motorola AFIS.”
But were Cogent, Motorola, NEC, and Sagem Morpho the only “big” players?
Did the Big 3 become the Big 5?
While the Big 3/Big 4 took a lot of the headlines, there were a number of other companies vying for attention. (I’ve talked about this before, but it’s worthwhile to review it again.)
Identix, while making some efforts in the AFIS market, concentrated on creating live scan fingerprinting machines, where it competed (sometimes in court) against companies such as Digital Biometrics and Bioscrypt.
The fingerprint companies started to compete against facial recognition companies, including Viisage and Visionics.
Oh, and there were also iris companies such as Iridian.
And there were other ways to identify people. Even before 9/11 mandated REAL ID (which we may get any year now), Polaroid was making great efforts to improve driver’s licenses to serve as a reliable form of identification.
In short, there were a bunch of small identity companies all over the place.
But in the course of a few short years, Dr. Joseph Atick (initially) and Robert LaPenta (subsequently) concentrated on acquiring and merging those companies into a single firm, L-1 Identity Solutions.
These multiple mergers resulted in former competitors Identix and Digital Biometrics, and former competitors Viisage and Visionics, becoming part of one big happy family. (A multinational big happy family when you count Bioscrypt.) Eventually this company offered fingerprint, face, iris, driver’s license, and passport solutions, something that none of the Big 3/Big 4 could claim (although Sagem Morpho had a facial recognition offering). And L-1 had federal contracts and state contracts that could match anything that the Big 3/Big 4 offered.
So while L-1 didn’t have a state AFIS contract like Cogent, Motorola, NEC, and Sagem Morpho did, you could argue that L-1 was important enough to be ranked with the big boys.
So for the sake of argument let’s assume that there was a Big 5, and L-1 Identity Solutions was part of it, along with the three big boys Motorola, NEC, and Safran (which had acquired Sagem and thus now owned Sagem Morpho), and the independent Cogent Systems. These five companies competed fiercely with each other (see West Virginia, above).
In a two-year period, everything would change.
The Big 3 after 2009
Hang on to your seats.
The Motorola RAZR was hugely popular…until it wasn’t. Eventually Motorola split into two companies and sold off others, including the “Printrak” Biometric Business Unit. By NextG50 – Own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=130206087
By 2009, Safran (resulting from the merger of Sagem and Snecma) was an international powerhouse in aerospace and defense and also had identity/biometric interests. Motorola, in the meantime, was no longer enjoying the success of its RAZR phone and was looking at trimming down (prior to its eventual, um, bifurcation). In response to these dynamics, Safran announced its intent to purchase Motorola’s Biometric Business Unit in October 2008, an effort that was finalized in April 2009. The Biometric Business Unit (adopting its former name Printrak) was acquired by Sagem Morpho and became MorphoTrak. On a personal level, Dorothy Bullard moved out of Proposals and I moved into Proposals, where I got to work with my new best friends that had previously slammed Motorola for losing the Kansas and New Mexico deals. (Seriously, Cindy and Ron are great folks.)
By 2011, Safran decided that it needed additional identity capabilities, so it acquired L-1 Identity Solutions and renamed the acquisition as MorphoTrust.
If you’re keeping notes: after 3M acquired Cogent Systems in 2010, the Big 5 had become the Big 3: 3M, Safran, and NEC (the one constant in all of this).
While there were subsequent changes (3M sold Cogent and other pieces to Gemalto, Safran sold all of Morpho to Advent International/Oberthur to form IDEMIA, and Gemalto was acquired by Thales), the Big 3 has remained constant over the last decade.
And that’s where we are today…pending future developments.
If Alphabet or Amazon reverse their current reluctance to market their biometric offerings to governments, the entire landscape could change again.
Or perhaps a new AI-fueled competitor could emerge.
The 1 Biometric Content Marketing Expert
This was written by John Bredehoft of Bredemarket.
If you work for the Big 3 or the Little 80+ and need marketing and writing services, the biometric content marketing expert can help you. There are several ways to get in touch:
Book a meeting with me at calendly.com/bredemarket. Be sure to fill out the information form so I can best help you.
We’ve been talking about the death of the bicycle since the time of the Wright Brothers and Henry Ford.
But we still haven’t achieved it.
Wilbur Wright building a bicycle over a century ago, before he came to his senses. By Wright brothers – Library of Congress, Public Domain, https://commons.wikimedia.org/w/index.php?curid=2217030
What will it take to make the death of the bicycle a reality?
Why does the bicycle need to die?
I think that all intelligent people agree that the bicycle needs to die. But just to be extra-cautious, I will again enumerate the reasons why the death of the bicycle is absolutely necessary.
Photo by Adam Coppola, taken under contract for PeopleForBikes and released into the public domain with the consent of the subjects. CC0, https://commons.wikimedia.org/w/index.php?curid=46251073
The bicycle is too slow. Perhaps the bicycle was suitable for 19th century life, but today it’s an embarrassment. The speed of the bicycle has long been surpassed by automobiles from the aforementioned Ford, and airplanes from the aforementioned Wrights. It poses a danger as slow-moving bicycle traffic risks getting hit by faster-moving vehicles, unless extraordinary measures are undertaken to separate bicycles from normal traffic. For this reason alone the bicycle must die.
The bicycle is too weak. If that weren’t enough, take a look at the weakness of the bicycle and the huge threat from this weakness. You can completely destroy the bicycle and its rider with a simple puddle of oil, a nail, or a misplaced brick that a bicycle hits. This is yet another reason why the bicycle must die.
The bicycle is too inefficient. Other factors of transportation are much better equipped to carry loads of people and goods. The bicycle? Forget it. Any attempt to carry a reasonable load of goods on a bicycle is doomed to failure.
The bicycle is too easy to steal. It takes some effort to steal other factors of transportation, but it is pitifully easy to steal a bike, or part of a bike.
Despite everyone knowing about these security and personal threats for years if not decades, use of the bicycle continues to persist.
And we have to put a stop to it.
Why does the bicycle continue to live?
The problem is that a few wrongheaded individuals continue to promote bicycle use in a misguided way.
Some of them argue that bicycles provide health benefits that you can’t realize with other factors of transportation. Any so-called health benefits are completely erased by the damage that could happen when a bicycle rider ends up face down on the pavement.
Others argue that you can mitigate the problems with bicycles by requiring riders to change to a new bicycle every 90 days. This is also misguided, because even if you do this, the threats from bicycle use continue to occur from day one.
Make sure your bicycle has a wheel, spokes, seat, and drink holder, and don’t use any of the last six bicycles you previously used. By Havang(nl) – Own work, Public Domain, https://commons.wikimedia.org/w/index.php?curid=2327525
How do we solve this?
People have tried to hasten the death of the bicycle, but its use still persists.
We have continued to advance other factors of transportation, both from the efforts of vendors, as well as the efforts of industry associations such as the International Bus and Infiniti Association (IBIA) and the MANX (Moving At Necessary eXpress) Alliance.
Yet resistance persists. Even the National Institute of Standards and Technology (NIST), which should know better, continues to define bicycle use as a standard factor of transportation.
The three most recognized factors of transportation include “something you pedal” (such as a bicycle), “something you drive” (such as an automobile), and “something you ride” (such as a bus).
NIST Special Publication 800-8-2. Link unavailable.
It is imperative that both governments and businesses completely ban use of the bicycle in favor of other forms of transportation. Our security as a nation depends on this.
Do your part to bring about the death of the bicycle in favor of other factors of transportation, and ensure that we will enjoy a bicycleless future.
A personal note
I don’t agree with anything I just wrote.
Despite its faults, I still believe that the bicycle has a proper place in our society, perhaps as one of several factors of transportation in an MFT (multi-factor transportation) arrangement.
And, if you haven’t figured it out yet, I’m not on board with the complete death of the password either. Passwords (and PINs) have their place. And when used properly they’re not that bad (even if these 2021 figures are off by an order of magnitude today).
I’ve talked about why NIST separated its FRVT efforts into FRTE and FATE.
But I haven’t talked about how NIST did this.
And as you all know, the second most important question after why is how.
Why the great renaming took place
As I noted back in August, NIST chose to split its Face Recognition Vendor Test (FRVT) into two parts—FRTE (Face Recognition Technology Evaluation) and FATE (Face Analysis Technology Evaluation).
In essence, the Face Recognition Vendor Test had become a hodgepodge of different things. Some of the older tests were devoted to identification of individuals (face recognition), while some of the newer tests were looking at issues other than individual identification (face analysis).
Of course, this confusion between identification and non-identification is nothing new, which is why some of the people who read Gender Shades falsely concluded that if the three algorithms couldn’t classify people by sex or race, they couldn’t identify them as individuals.
But I digress. (I won’t do it again.)
NIST explained at the time:
Tracks that involve the processing and analysis of images will run under the FATE activity, and tracks that pertain to identity verification will run under FRTE.
To date, most of my personal attention (and probably most of yours) was paid to what was previously called FRVT 1:1 and FRVT 1:N.
These two tests are now part of FRTE, and were simply renamed to FRTE 1:1 and FRTE 1:N. They’ve even (for now) retained the same URLs, although that may change in the future.
Other tests have also moved into the FRTE bucket; for example, the “Still Face and Iris 1:N Identification” effort (PDF) has apparently been reclassified as an FRTE effort.
What is in FATE?
Obviously, presentation attack detection (PAD) testing falls into the FATE category, since this does not measure the identification of an individual, but whether a person is truly there or not. The first results have been released; I previously wrote about this here.
The next obvious category is age estimation testing, which again does not try to identify an individual, but estimate how old the person is. This testing has not yet started, but I talked about the concept of age estimation previously.
It is very possible that NIST will add additional FRTE and FATE tests in the future. These may be brand new tests, or variations of existing tests. For example, when all of us started wearing face masks a couple of years ago, NIST simulated face masks on its existing facial images and created the data for a face mask test.
What do you think NIST should test next, either in the FRTE or the FATE category?
More on morphing
And yes, I’m concluding this post with this video. By the way, this is the full version that (possibly intentionally) caused a ton of controversy and was immediately banned for nearly a quarter century. The morphing starts at 5:30. The crotch-grabbing starts right after the 7:00 mark.
Perhaps because of the lack of controversy with Godley & Creme’s earlier effort, Ashley Clark prefers it to the later Michael Jackson/John Landis effort.
Whereas Godley & Creme used editing technology to embrace and reflect the ambiguous murk of thwarted love, Jackson and Landis imposed an artificial sheen on the complexity of identity; a sheen that feels poignant if not outright tragic in the wake of Jackson’s ultimate appearance and fate. Really, it did matter if he was black or white.
One of the main application areas of facial morphing for criminal purposes is forging identity documents. The attack targets face-based identity verification systems and procedures. Most often it involves passports; however, any ID document with a photo can be compromised.
One well-known case happened in 2018 when a group of activists merged together a photo of Federica Mogherini, the High Representative of the European Union for Foreign Affairs and Security Policy, and a member of their group. Using this morphed photo, they managed to obtain an authentic German passport.
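At its simplest, a face morph is a weighted pixel blend of two aligned face photos; real morphing tools also warp facial landmarks so the result stays photorealistic. A toy sketch of the blending step only, using tiny invented "images":

```python
def morph(img_a, img_b, alpha=0.5):
    """Blend two aligned grayscale images (lists of pixel rows) pixel by pixel.

    alpha=0.5 gives an even 50/50 morph; values nearer 0 or 1 bias the
    result toward one subject. This is what makes morphing attacks
    dangerous: one passport photo can plausibly resemble two people.
    """
    return [
        [round((1 - alpha) * a + alpha * b) for a, b in zip(row_a, row_b)]
        for row_a, row_b in zip(img_a, img_b)
    ]

# Hypothetical 2x2 "images" (pixel intensities 0-255)
subject_1 = [[100, 120], [110, 130]]
subject_2 = [[200, 180], [190, 170]]
```

Morph detection tests look for the statistical traces this blending leaves behind, even after the image is printed and rescanned for an ID document.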
The vast majority of people who visit the Bredemarket website arrive via Google. Others arrive via Bing, DuckDuckGo, Facebook, Feedspot, Instagram, LinkedIn, Meltwater, Twitter (WordPress’ Stats page didn’t get the memo from Elon), WordPress itself, and other sites.
Yes, people are using ChatGPT and other generative AI tools as search engines.
Patel was curious about why ChatGPT recommended Neil Patel Digital, and he started to investigate. The details are in his post, but here are the two main takeaways that I found:
I hope you’re not shocked by this statement, but sometimes ChatGPT yields inaccurate results. One example: Patel asked ChatGPT to recommend ad agencies who could provide SEO help, and received two inaccurate recommendations. “2 of the top 4 results… Moz and HubSpot are software companies and not ad agencies. They don’t really offer services.”
After a lot of experimentation and number-crunching, Patel identified six specific factors that correlated with ChatGPT’s recommendation of a particular brand: brand mentions, reviews, relevancy, age, recommendations, and authority.
For a detailed discussion of these six factors, see Patel’s post. Let’s look at one of those factors, brand mentions, that has a relatively high (0.87) correlation.
How do you increase brand mentions?
So, how do you increase brand mentions across the web to rank higher on ChatGPT, other generative AI platforms, and more traditional search engines like TikTok?
By mentioning your brand. (And getting relevant authorities to mention your brand also.)
Yes, this is a Captain Obvious statement, but it’s important. There are at least ten places where you can mention your brand.
Articles
Blog Posts
Briefs/Data/Literature Sheets
Case Studies
Proposals
Scientific Book Chapters
Smartphone Application Content
Social Media (Facebook, Instagram, LinkedIn, Threads, TikTok, Twitter)
Web Page Content
White Papers and E-Books
If this list appears familiar to some of you, that’s intentional. It’s a list of the 10 types of external content that I have created for others, and can create for you. See my post “The 22 (or more) Types of Content That Product Marketers Create” for more information on both external and internal content types that I can create.
Because the more you look at my brand content, the more my brand will be recognized by generative AI.
I was curious myself, so I tested this. ChatGPT was at capacity when I wrote this post, so I asked Bard “Who is the biometric content marketing expert?”
From Google Bard, September 26, 2023.
John Bredehoft, of the company Bredemarket, is a biometric content marketing expert. He has over 25 years of experience in biometrics and other identity modalities, including friction ridge, face, iris, voice, and DNA. He has created content for Bredemarket clients and his previous employers, including case studies, white papers, and blog posts, that have positioned his firms as biometric authorities.
Bredehoft’s expertise in biometrics and content marketing allows him to create content that is both informative and engaging for his target audience. He understands the issues behind biometric technologies and can explain them in a clear and concise way. He also knows how to write content that is optimized for search engines and social media.
If you are looking for a biometric content marketing expert, John Bredehoft is a great choice. He has the knowledge and experience to create content that will help you achieve your marketing goals.
Now I don’t know yet if ChatGPT will yield a similar response, and if not I have some more work to do.
But enough about me.
How can you increase YOUR brand mentions?
Let’s talk about you, your content marketing needs, and your need for prospects and customers to know about your brand.
Whether you want to rank in a traditional search engine or generative AI, the key is the creation of content. When you work with Bredemarket as your content creation partner, we start by discussing your goals and other critical information that is important to you. We do this before I start writing your blog post, social media post, case study, white paper, or other piece of content (car show posters, anyone?).
Let’s hold that (complimentary) discussion to see if Bredemarket’s services are a fit for your needs. Book a meeting with me at calendly.com/bredemarket. Be sure to fill out the information form so I can best help you.
Well, the FATE side of the house has released its first two studies, including one entitled “Face Analysis Technology Evaluation (FATE) Part 10: Performance of Passive, Software-Based Presentation Attack Detection (PAD) Algorithms” (NIST Internal Report NIST IR 8491; PDF here).
Machine learning models need training data to improve their accuracy—something I know from my many years in biometrics.
And it’s difficult to get that training data—something else I know from my many years in biometrics. Consider the acronyms GDPR, CRPA, and especially BIPA. It’s very hard to get data to train biometric algorithms, so they are trained on relatively limited data sets.
At the same time that biometric algorithm training data is limited, Kevin Indig believes that generative AI large language models are ALSO going to encounter limited accessibility to training data. Actually, they are already.
The lawsuits have already begun
A few months ago, generative AI models like ChatGPT were going to solve all of humanity’s problems and allow us to lead lives of leisure as the bots did all our work for us. Or potentially the bots would get us all fired. Or something.
But then people began to ask HOW these large language models work…and where they get their training data.
Just like biometric training models that just grab images and associated data from the web without asking permission (you know the example that I’m talking about), some are alleging that LLMs are training their models on copyrighted content in violation of the law.
I am not a lawyer and cannot meaningfully discuss what is “fair use” and what is not, but suffice it to say that alleged victims are filing court cases.
Comedian and author Sarah Silverman, as well as authors Christopher Golden and Richard Kadrey, are suing OpenAI and Meta, each in a US District Court, over dual claims of copyright infringement.
The suits allege, among other things, that OpenAI’s ChatGPT and Meta’s LLaMA were trained on illegally-acquired datasets containing their works, which they say were acquired from “shadow library” websites like Bibliotik, Library Genesis, Z-Library, and others, noting the books are “available in bulk via torrent systems.”
This could be a big mess, especially since copyright laws vary from country to country. This description of copyright law LLM implications, for example, is focused upon United Kingdom law. Laws in other countries differ.
Systems that get data from the web, such as Google, Bing, and (relevant to us) ChatGPT, use “crawlers” to gather information from websites. ChatGPT, for example, has its own crawler.
But any count of blocking sites only includes the sites that had blocked the crawler at the time Originality AI performed its analysis.
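Blocking happens in a site's robots.txt file: OpenAI's crawler identifies itself as GPTBot, so a single Disallow block shuts it out while leaving other crawlers alone. The Python standard library can evaluate such rules; the robots.txt content below is a sample, not taken from any real site.

```python
from urllib.robotparser import RobotFileParser

# Sample robots.txt that blocks OpenAI's crawler but allows everyone else
robots_txt = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

blocked = parser.can_fetch("GPTBot", "https://example.com/article")     # False
allowed = parser.can_fetch("Googlebot", "https://example.com/article")  # True
```

Note that robots.txt is honor-system only; a crawler that ignores it faces reputational and legal pressure, not a technical barrier.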
More sites will block the LLM crawlers
Indig believes that in the future, the number of the top 1000 sites that will block ChatGPT’s crawler will rise significantly…to 84%. His belief is based on analyzing the business models for the sites that already block ChatGPT and assuming that other sites that use the same business models will also find it in their interest to block ChatGPT.
The business models that won’t block ChatGPT are assumed to include governments, universities, and search engines. Such sites are friendly to the sharing of information, and thus would have no reason to block ChatGPT or any other LLM crawler.
The business models that would block ChatGPT are assumed to include publishers, marketplaces, and many others. Entities using these business models profit from their content, and are not just going to turn it over to an LLM for free.
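Indig's projection can be understood as simple arithmetic: bucket the top 1,000 sites by business model, assume every site in a "blocking" model eventually blocks, and sum the shares. The bucket counts below are invented for illustration (deliberately contrived to land on Indig's 84% figure); see his analysis for the real data.

```python
# Hypothetical distribution of the top 1,000 sites by business model
sites_by_model = {
    "publisher": 350,
    "marketplace": 250,
    "other_commercial": 240,
    "government": 60,
    "university": 50,
    "search_engine": 50,
}

# Models assumed to have a business reason to block an LLM crawler
blocking_models = {"publisher", "marketplace", "other_commercial"}

projected_blockers = sum(
    count for model, count in sites_by_model.items() if model in blocking_models
)
projected_share = projected_blockers / sum(sites_by_model.values())
```

The projection is only as good as its core assumption: that every site sharing a blocking competitor's business model will follow suit.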
One possibility is that LLMs will run into the same training issues as biometric algorithms.
In biometrics, the same people that loudly exclaim that biometric algorithms are racist would be horrified at the purely technical solution that would solve all inaccuracy problems—let the biometric algorithms train on ALL available biometric data. In the activists’ view (and in the view of many), unrestricted access to biometric data for algorithmic training would be a privacy nightmare.
Similarly, those who complain that LLMs are woefully inaccurate would be horrified if the LLM accuracy problem were solved by a purely technical solution: let the algorithms train themselves on ALL available data.
Could LLMs buy training data?
Of course, there’s another solution to the problem: have the companies SELL their data to the LLMs.
In theory, this could provide the data holders with a nice revenue stream while allowing the LLMs to be extremely accurate. (Of course the users who actually contribute the data to the data holders would probably be shut out of any revenue, but them’s the breaks.)
But that’s only in theory. Based upon past experience with data holders, the people who want to use the data are probably not going to pay the data holders sufficiently.
Google and Meta to Canada: Drop dead / Mourir
Even today, Google and Meta (Facebook et al) are greeting Canada’s government-mandated Bill C-18 with resistance. Here’s what Google is saying:
Bill C-18 requires two companies (including Google) to pay for simply showing links to Canadian news publications, something that everyone else does for free. The unprecedented decision to put a price on links (a so-called “link tax”) breaks the way the web and search engines work, and exposes us to uncapped financial liability simply for facilitating access to news from Canadian publications….
As a result, we have informed them that we have made the difficult decision that, when the law takes effect, we will be removing links to Canadian news publications from our Search, News, and Discover products.
Google News Showcase is the program that gives money to news organizations in Canada. Meta has a similar program. Peter Menzies notes that these programs give tens of millions of (Canadian) dollars to news organizations, but that could end, despite government threats.
The federal and Quebec governments pulled their advertising spends, but those moves amount to less money than Meta will save by ending its $18 million in existing journalism funding.
Bearing in mind that Big Tech is reluctant to give journalistic data holders money even when a government ORDERS that they do so…
…what is the likelihood that generative AI algorithm authors (including Big Tech companies like Google and Microsoft) will VOLUNTARILY pay funds to data holders for algorithm training?
If Kevin Indig is right, LLM training data will become extremely limited, adversely affecting the algorithms’ use.