But a recent pitch excelled in its, um, genericism. Here’s the relevant part:
I run a white-label marketing company and am reaching out to ask if you need help with content creation? I work with several other marketing agencies on campaigns like Airbnb’s.
I’m not sure how Bredemarket relates to Airbnb, but it really doesn’t matter, because they have only worked on campaigns LIKE Airbnb’s. So I still don’t know what they’ve actually done. (Although ghostwriters have this problem too.)
Ghostwriters like me. But I’ve never worked for companies like Airbnb.
I recently sent out a mailing that was hopefully much more targeted. I knew my hungry people (target audience), so even though it was a mass mailing (OK, not “mass”), it was relevant.
If you didn’t receive the mailing, you can view the repurposed version here.
Contact Bredemarket if you need content that benefits from my 29+ years of identity/biometrics experience.
With over 29 years of identity/biometric experience, John Bredehoft of Bredemarket is the biometric product marketing expert who can move your company forward.
A single loss does not define your entire life. As the sporting world teaches us, Olympic losers and other competitive losers can become winners—if not in sports, then elsewhere.
The human drama of athletic competition
When I was young, the best variety show on television didn’t involve Bob Mackie dresses. It instead featured Jim McKay, introducing the show as follows.
Spanning the globe to bring you the constant variety of sport…the thrill of victory…and the agony of defeat…the human drama of athletic competition…This is ABC’s Wide World of Sports!
A technological marvel when originally introduced, this variety show brought sporting events to American viewers from all over the world.
And these viewers learned that in competitions, there are winners and losers.
But since Wide World of Sports focused on the immediate (well, with a bit of tape delay), viewers never learned about the losers who became winners.
Jim McKay and his colleagues were not retrospective; they were known for covering the moment. In one instance that was NOT on tape delay, Jim McKay spoke his most consequential words: “They’re all gone.”
Vinko Bogataj
(Note: some of this content is repurposed because repurposing is cool.)
Turning to less lethal sporting events, remember Jim McKay’s phrase “the agony of defeat”?
For American TV watchers, this phrase was personified by Vinko Bogataj.
The agony of defeat.
Hailing from a country then known as Yugoslavia (now Slovenia), Bogataj was competing in the 1970 World Ski Flying Championships in Oberstdorf, in a country then known as West Germany (now Germany). His daughter described what happened:
It was bad weather, and he had to wait around 20 minutes before he got permission to start. He remembers that he couldn’t see very good. The track was very bad, and just before he could jump, the snow or something grabbed his skis and he fell. From that moment, he doesn’t remember anything.
While Bogataj suffered a concussion and a broken ankle, the accident was captured by the Wide World of Sports film crew, and Bogataj became famous on the “capitalist” side of the Cold War.
“He didn’t have a clue he was famous,” (his daughter) Sandra said. That changed when ABC tracked him down in Slovenia and asked him to attend a ceremony in New York to celebrate the 20th anniversary of “Wide World of Sports” in 1981.
At the gala, Bogataj received the loudest ovation among a group that included some of the best-known athletes in the world. The moment became truly surreal for Bogataj when Muhammad Ali asked for his autograph.
Bogataj is now a painter, but his 1970 performance still follows him.
Over 20 years after the infamous ski jump, Terry Gannon interviewed Bogataj for ABC. As Gannon recounted it on X (then Twitter), Bogataj “got in a fender bender on the way. His first line..’every time I’m on ABC I crash.'”
Some guy at the Athens Olympics in 2004
Since the Paris Olympics is taking place as I write this, people are paying a lot of attention to present and past Olympics.
The 2004 Olympics in Athens was a notable one, taking place in the country where the original Olympics were held.
But during that year, people may have missed some of the important stories that took place. We pay attention to winners, not losers.
Take the men’s 200 meter competition. It began with 7 heats, with the top competitors from the heats advancing.
Within the 7 heats, Heat 4 was a run-of-the-mill race, with the top four sprinters advancing to the next round. If I were to read their names to you, you’d probably reward me with a blank stare.
But if I were to read the 5th place finisher to you, the guy who failed to advance to the next round, you’d recognize the name.
KBWEB Consult tells the story of another competitor in the same 200 meter event in Athens. Chris Lambert participated in Heat 3, but didn’t place in the first four positions and therefore didn’t advance.
Nor did he place in the fifth position like Usain Bolt did in Heat 4.
Actually, he technically didn’t place at all. His performance is marked with a “DNF,” or “did not finish.”
You see, at about the 50 meter point of the 200 meter event, Lambert pulled a hamstring.
And that ended his Olympic competition dreams forever. By the time the Olympics were held in Lambert’s home country of the United Kingdom in 2012, he was not a competitor, but a volunteer for the London Olympics.
But Lambert learned much from his competitive days, and now works for Adobe.
KBWEB Consult (who consults on Adobe Experience Manager implementations) tells the full story of Chris Lambert and what he learned in its post “Expert Coaching From KBWEB Consult.”
I haven’t done one of these in a while, but it’s important to remember that just because you lost a particular competition doesn’t mean that all is lost. We need to remember this whether we are a 200 meter runner who didn’t advance from the heats, or a job applicant receiving yet another “we are moving in a different direction” form letter.
In the meantime, take care of yourself, and each other.
Expect heavy large-business lobbying against this proposed ballot measure in Upland. Because if they have to pay a debilitating $865 in fees, they’ll shutter their businesses and join Elon and Chevron in Texas.
“Under the existing system, each $20,000 a business makes is taxed in $54 increments. Businesses reach the $864 cap when they have roughly $320,000 in gross sales….
“If approved by voters, the Nov. 5 measure would mean businesses would pay $50 for every $100,000 they generate in revenue….Meanwhile, the measure would increase the cap on business license taxes to $29,500.”
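If you want to see how the two formulas compare in practice, here’s a minimal Python sketch based solely on the description quoted above. The $54-per-$20,000 increments, the $864 cap, the $50-per-$100,000 rate, and the $29,500 cap come from the article; whether partial increments round up is my own assumption, and the actual ordinance language may differ.

    import math

    def current_fee(gross_sales: float) -> int:
        # Current system as described: $54 for each $20,000 of gross sales, capped at $864.
        return min(math.ceil(gross_sales / 20_000) * 54, 864)

    def proposed_fee(gross_sales: float) -> int:
        # Proposed measure as described: $50 for every $100,000 in revenue, capped at $29,500.
        return min(math.ceil(gross_sales / 100_000) * 50, 29_500)

    for sales in (100_000, 320_000, 1_000_000, 60_000_000):
        print(f"${sales:,} in gross sales: ${current_fee(sales):,} today vs. ${proposed_fee(sales):,} under the measure")

By this math, a business at the current $864 cap (roughly $320,000 in gross sales) would pay $200 under the new formula, while the new $29,500 cap wouldn’t kick in until revenue approached $59 million.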
For the record, Bredemarket is based in Ontario, and I’m glad I’m not subject to Upland’s current licensing fees.
You’ve probably noticed that I’ve created a lot of Bredemarket videos lately.
My longer ones last a minute. That’s the length of a video I haven’t shared in the Bredemarket blog (it’s on Instagram) summarizing my client work over the last four years. My early July identity and Inland Empire reels are almost a minute long.
Researchers in Canada surveyed 2,000 participants and studied the brain activity of 112 others using electroencephalograms (EEGs). Based on this research, Microsoft found that since the year 2000 (or about when the mobile revolution began), the average attention span dropped from 12 seconds to eight seconds.
As many noted, a goldfish’s attention span is 9 seconds.
Some argue that the 8 second attention span is not universal and varies according to the task. For example, a 21 minute attention span has been recorded for drivers. If drivers had an 8 second attention span, we would probably all be dead by now.
But watching a video is not a life-or-death situation. Viewers will happily jump away if there’s no reason to watch.
So I have my challenge.
Ironically, I learned about the 8-second rule while watching a LinkedIn Learning course about the 3-minute rule. I haven’t finished the course yet, so I haven’t yet learned how to string someone along for 22.5 eight-second segments.
I use both text generators (sparingly) and image generators (less sparingly) to artificially create text and images. But I encounter one image challenge that you’ve probably encountered also: bizarre misspellings.
This post includes an example, created in Google Gemini using the following prompt:
Create a square image of a library bookshelf devoted to the works authored by Dave Barry.
Now in an ideal world, the image generator would research Barry’s published titles, and the resulting image would include those book titles (such as Dave Barry Slept Here, one of the greatest history books of all time maybe or maybe not).
In the mediocre world, at least the book spines would include the words “Dave Barry.”
Why can’t your image generator spell words properly?
It always mystified me that AI-generated images had so many weird words, to the point where I wondered whether the AI was specifically programmed to misspell.
It wasn’t…but it wasn’t programmed to spell either.
TechCrunch recently published an article whose title was so good you didn’t have to read the article itself. The title? “Why is AI so bad at spelling? Because image generators aren’t actually reading text.”
This is something that I had pretty much forgotten.
When I use an AI-powered text generator, it has been trained to respond to my textual prompts and create text.
When I use an AI-powered image generator, it has been trained to respond to my textual prompts and create images.
Two very different tasks, as noted by Asmelash Teka Hadgu, co-founder of Lesan and a fellow at the DAIR Institute.
“The diffusion models, the latest kind of algorithms used for image generation, are reconstructing a given input,” Hadgu told TechCrunch. “We can assume writings on an image are a very, very tiny part, so the image generator learns the patterns that cover more of these pixels.”
The algorithms are incentivized to recreate something that looks like what it’s seen in its training data, but it doesn’t natively know the rules that we take for granted — that “hello” is not spelled “heeelllooo,” and that human hands usually have five fingers.
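To put a rough number on “a very, very tiny part,” here’s a back-of-the-envelope Python sketch. Every figure in it is my own assumption (a 1024×1024 image, eight visible book-spine titles, each roughly 200 by 20 pixels), not something measured from the Gemini output.

    # Rough estimate of how few of an image's pixels the text actually occupies.
    image_pixels = 1024 * 1024            # assumed square image

    title_width, title_height = 200, 20   # assumed size of one book-spine title, in pixels
    titles_on_shelf = 8                   # assumed number of visible titles

    text_pixels = titles_on_shelf * title_width * title_height
    print(f"Text covers about {100 * text_pixels / image_pixels:.1f}% of the image")
    # Prints roughly 3%; the other ~97% is shelf, spines, and background.

No wonder the model spends its effort on the shelf and the spines rather than on the letters.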
For a long time, each ML (machine learning) model operated in one data mode – text (translation, language modeling), image (object detection, image classification), or audio (speech recognition).
However, natural intelligence is not limited to just a single modality. Humans can read and write text. We can see images and watch videos. We listen to music to relax and watch out for strange noises to detect danger. Being able to work with multimodal data is essential for us or any AI to operate in the real world.
So if we asked a truly multimodal generator to create an image of a library bookshelf with Dave Barry works, it could actually display book spines with Barry’s actual titles.
So why doesn’t my Google Gemini already provide this capability? It has a text generator and it has an image generator: why not provide both simultaneously?
Because that’s EXPENSIVE.
I don’t know whether Google’s Vertex AI provides the multimodal capabilities I seek, where text in images is spelled correctly.
You may remember the May hoopla regarding amendments to Illinois’ Biometric Information Privacy Act (BIPA). These amendments do not eliminate the long-standing law, but they lessen the damages that offending companies face.
The General Assembly is expected to send the bill to Illinois Governor JB Pritzker within 30 days. Gov. Pritzker will then have 60 days to sign it into law. It will be immediately effective.
While the BIPA amendment has passed the Illinois House and Senate and was sent to the Governor, there is no indication that he has signed the bill into law within the 60-day timeframe.
A proposed class action claims Photomyne, the developer of several photo-editing apps, has violated an Illinois privacy law by collecting, storing and using residents’ facial scans without authorization….
The lawsuit contends that the app developer has breached the BIPA’s clear requirements by failing to notify Illinois users of its biometric data collection practices and inform them how long and for what purpose the information will be stored and used.
In addition, the suit claims the company has unlawfully failed to establish public guidelines that detail its data retention and destruction policies.
When marketing digital identity products secured by biometrics, emphasize that they are MORE secure and more private than their physical counterparts.
When you hand your physical driver’s license over to a sleazy bartender, they find out EVERYTHING about you, including your name, your birthdate, your driver’s license number, and even where you live.
When you use a digital mobile driver’s license, bartenders ONLY learn what they NEED to know—that you are over 21.
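Here’s a toy Python sketch of that data-minimization idea. The age_over_21 attribute name echoes the age_over_NN data elements in the ISO/IEC 18013-5 mobile driver’s license data model, but the wallet and verifier functions below are hypothetical, included only to contrast the two disclosures.

    # Toy illustration of selective disclosure; not a real mDL implementation.
    PHYSICAL_LICENSE = {                 # everything printed on the physical card
        "name": "Pat Example",
        "birthdate": "1990-01-01",
        "license_number": "D1234567",
        "address": "123 Main St, Anytown, CA",
    }

    def hand_over_physical_card():
        """The sleazy bartender sees every field on the card."""
        return PHYSICAL_LICENSE

    def present_mobile_license(requested_attributes):
        """The digital wallet releases only what the verifier asks for."""
        derived = {"age_over_21": True}  # computed from the birthdate, which never leaves the wallet
        return {attr: derived[attr] for attr in requested_attributes if attr in derived}

    print(hand_over_physical_card())                 # name, birthdate, license number, address
    print(present_mobile_license(["age_over_21"]))   # {'age_over_21': True}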
Any endeavor, scientific or non-scientific, tends to generate a host of acronyms that the practitioners love to use.
For people interested in fingerprint identification, I’ve written this post to delve into some of the acronyms associated with NIST MINEX testing, including ANSI, INCITS, FIPS, and PIV.
NIST was involved with fingerprints before NIST even existed. Back when NIST was still the NBS (National Bureau of Standards), it issued its first fingerprint interchange standard back in 1986. I’ve previously talked about the 1993 version of the standard in this post, “When 250ppi Binary Fingerprint Images Were Acceptable.”
But let’s move on to another type of interchange.
MINEX
It’s even more important that we define MINEX, which stands for Minutiae (M) Interoperability (IN) Exchange (EX).
You’ll recall that the 1993 (and previous, and subsequent) versions of the ANSI/NIST standard included a “Type 9” to record the minutiae generated by the vendor for each fingerprint. However, each vendor generated minutiae according to its own standard. Back in 1993 Cogent had its standard, NEC its standard, Morpho its standard, and Printrak its standard.
So how do you submit Cogent minutiae to a Printrak system? There are two methods:
First, you don’t submit them at all. Just ignore the Cogent minutiae, take the fingerprint image itself, and use a Printrak algorithm to regenerate the minutiae in Printrak’s format. While this works with high quality tenprints, it won’t work with low quality latent (crime scene) prints that require human expertise.
The second method is to either convert the Cogent minutiae to the Printrak minutiae standard, or convert both standards into a common format.
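As a purely illustrative Python sketch of that second method, here’s what normalizing minutiae into a shared representation might look like. The vendor record layouts below are invented for the example, and the common fields (x, y, angle, type) only loosely mirror what a standard minutia record such as INCITS 378 carries; this is not the actual encoding of any vendor or standard.

    from dataclasses import dataclass

    @dataclass
    class Minutia:
        """A vendor-neutral minutia: position, ridge angle, and type."""
        x: int       # pixels from the left edge
        y: int       # pixels from the top edge
        angle: int   # ridge direction in degrees, 0-359
        kind: str    # "ending" or "bifurcation"

    def from_vendor_a(record: dict) -> Minutia:
        # Hypothetical vendor A layout: {"col": ..., "row": ..., "theta": ..., "t": "E" or "B"}
        return Minutia(record["col"], record["row"], record["theta"],
                       "ending" if record["t"] == "E" else "bifurcation")

    def from_vendor_b(record: tuple) -> Minutia:
        # Hypothetical vendor B layout: (x, y, angle in 2-degree units, type code)
        x, y, angle2, code = record
        return Minutia(x, y, (angle2 * 2) % 360, "ending" if code == 1 else "bifurcation")

    # Once both sets are in the common format, a matcher only needs to understand one representation.
    print(from_vendor_a({"col": 120, "row": 200, "theta": 45, "t": "E"}))
    print(from_vendor_b((121, 198, 23, 1)))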
ANSI
The American National Standards Institute (ANSI) is a private, non-profit organization that administers and coordinates the U.S. voluntary standards and conformity assessment system. Founded in 1918, the Institute works in close collaboration with stakeholders from industry and government to identify and develop standards- and conformance-based solutions to national and global priorities….
ANSI is not itself a standards developing organization. Rather, the Institute provides a framework for fair standards development and quality conformity assessment systems and continually works to safeguard their integrity.
So ANSI, rather than creating its own standards, works with outside organizations such as NIST…and INCITS.
INCITS
Now that’s an eye-catching acronym, but INCITS isn’t trying to cause trouble. Really, they’re not. Believe me.
Back in 2004, INCITS worked with ANSI (and NIST, who created samples) to develop three standards: one for finger images (ANSI INCITS 381-2004), one for face recognition (ANSI INCITS 385-2004), and one for finger minutiae (ANSI INCITS 378-2004, superseded by ANSI INCITS 378-2009 (S2019)).
When entities used this vendor-agnostic minutiae format, minutiae from any vendor could in theory be interchanged with minutiae from any other vendor.
This came in handy when the FIPS was developed for PIV. Ah, two more acronyms.
FIPS and PIV
One year after the three ANSI INCITS standards were released, this happened (the acronyms are defined in the text):
Federal Information Processing Standard (FIPS) 201 entitled Personal Identity Verification of Federal Employees and Contractors establishes a standard for a Personal Identity Verification (PIV) system (Standard) that meets the control and security objectives of Homeland Security Presidential Directive-12 (HSPD-12). It is based on secure and reliable forms of identity credentials issued by the Federal Government to its employees and contractors. These credentials are used by mechanisms that authenticate individuals who require access to federally controlled facilities, information systems, and applications. This Standard addresses requirements for initial identity proofing, infrastructure to support interoperability of identity credentials, and accreditation of organizations issuing PIV credentials.
So PIV, specified by a FIPS and based upon an ANSI INCITS standard, defined a way for multiple entities to create and use fingerprint minutiae that were interoperable.
But how do we KNOW that they are interoperable?
Let’s go back to NIST and MINEX.
Testing interoperability
So NIST ended up in charge of figuring out whether these interoperable minutiae were truly interoperable, and whether minutiae generated by a Cogent system could be used by a Printrak system. Of course, by the time MINEX testing began Printrak no longer existed, and a few years later Cogent wouldn’t exist either.
You can read the whole history of MINEX testing here, but for now I’m going to skip ahead to MINEX III (which occurred many years after MINEX04, but who’s counting?).
As with some other NIST tests we’ve seen before, vendors and other entities submit their algorithms, and NIST does the testing itself.
In this case, all submitters include a template generation algorithm, and optionally can include a template matching algorithm.
Then NIST tests each algorithm against every other algorithm. So the “innovatrics+0020” template generator is tested against itself, and is also tested against the “morpho+0115” algorithm, and all the other algorithms.
NIST then performs its calculations and comes up with summary values of interoperability, which can be sliced and diced a few different ways for both template generators and template matchers.
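Conceptually, the cross-testing looks something like the Python sketch below: every template generator is paired with every matcher, and a false non-match rate (FNMR) is measured at a fixed false match rate (FMR). The generator and matcher names and the score distributions are stand-ins I made up; NIST’s actual MINEX III protocol and metrics are documented on its site.

    import random

    random.seed(0)

    generators = ["gen_A", "gen_B", "gen_C"]   # stand-ins for submitted template generators
    matchers   = ["match_A", "match_B"]        # stand-ins for submitted template matchers

    def genuine_scores(gen, matcher, n=1000):
        """Stand-in comparison scores for mated (same finger) template pairs."""
        return [random.gauss(0.8, 0.1) for _ in range(n)]

    def impostor_scores(gen, matcher, n=1000):
        """Stand-in comparison scores for non-mated template pairs."""
        return [random.gauss(0.3, 0.1) for _ in range(n)]

    def fnmr_at_fmr(gen, matcher, target_fmr=0.01):
        """Pick the threshold where FMR is at most the target, then measure FNMR there."""
        impostors = sorted(impostor_scores(gen, matcher), reverse=True)
        threshold = impostors[int(len(impostors) * target_fmr)]  # ~1% of impostor scores exceed this
        genuines = genuine_scores(gen, matcher)
        return sum(s < threshold for s in genuines) / len(genuines)

    for g in generators:
        for m in matchers:
            print(f"{g} templates matched by {m}: FNMR {fnmr_at_fmr(g, m):.3f} at FMR <= 0.01")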
From NIST. Top 10 template generators (Ascending “Pooled 2 Fingers FNMR @ FMR ≤ 10⁻²”) as of July 29, 2024.
And this test, like some others, is an ongoing test, so perhaps in a few months someone will beat Innovatrics for the top pooled 2 fingers spot.
Are fingerprints still relevant?
And entities WILL continue to submit to the MINEX III test. While a number of identity/biometric professionals (frankly, including myself) seem to focus on faces rather than fingerprints, fingers still play a vital role in biometric identification, verification, and authentication.