Three Reasons Why You Need the Bredemarket 404 Web/Social Media Checkup

I haven’t mentioned my “Bredemarket 404 Web/Social Media Checkup” in years, but we need the service more than ever. In fact, as I mention below, I should probably buy the service for myself.

What is the Bredemarket 404 Web/Social Media Checkup?

Why do I offer the Bredemarket 404 Web/Social Media Checkup? To ensure that your web and social properties are correctly communicating your business benefits and values to prospects and customers.

How do I provide the service? I not only analyze every page on your business website, but also analyze every social media account associated with your business (and, if you choose, your personal social media accounts also).

What do I do? For each social media account and page within each account, Bredemarket checks for these and other items:

  • Broken links
  • Outdated information
  • Other text and image errors
  • Content synchronization between the web page and the social media accounts
  • Hidden web pages that still exist

Bredemarket then reports the results to you with recommended actions.

Redacted example of one page of a multi-page Bredemarket 404 report.

If requested, Bredemarket prepares a simple social media communications process for you.
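To show what the first checklist item involves, a broken-link sweep can be automated. Here is a minimal Python sketch (a hypothetical example, not Bredemarket's actual tooling; the `fetch` parameter is included so the HTTP call can be swapped out for testing):

```python
import urllib.request
from urllib.error import URLError, HTTPError

def check_links(urls, fetch=None):
    """Return (url, status) pairs for links that appear broken.

    fetch defaults to a HEAD request via urllib; any callable that
    takes a URL and returns an HTTP status code can be substituted.
    """
    if fetch is None:
        def fetch(url):
            req = urllib.request.Request(url, method="HEAD")
            with urllib.request.urlopen(req, timeout=10) as resp:
                return resp.status

    broken = []
    for url in urls:
        try:
            status = fetch(url)
        except (URLError, HTTPError, OSError):
            broken.append((url, "unreachable"))  # DNS failure, timeout, etc.
            continue
        if status >= 400:
            broken.append((url, status))  # 404s and other client/server errors
    return broken
```

A real checkup covers more than status codes (redirect chains, mixed content, stale anchors), but this captures the core idea: walk every link, record everything that no longer resolves cleanly.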

Three reasons why you need a web/social media check

If you’re wondering why your business may need such a check, here are three things that I’ve observed over the years that adversely impact your marketing (and, um, my marketing).

Stale, dated material

Designed by Freepik.

Perhaps you wrote the text for your website or your social media page several years ago. And it was great…at the time. But as the months and years pass, the text becomes outdated.

I’ve discussed the problem of non-current content before, giving examples such as sites that mention Windows 7 support long after Microsoft stopped supporting Windows 7. But sadly, I recently ran across another offender, and this time I’m going to name names.

The company who kept stale content online was…Bredemarket.

As some of you know, last week I announced changes in Bredemarket’s scope and business hours. This necessitated some changes on my website.

Then I had to return to this website to make some hurried updates, since my April 2022 prohibition on taking certain types of work is no longer in effect as of June 2023. Hence, my home page, my “What I Do” page, and (obviously) my identity page are all corrected.

From https://bredemarket.com/2023/06/01/updates-updates-updates/

But earlier this week when I was cruising around the site, I noticed a page that I had missed:

From https://bredemarket.com/biometric-content-marketing-expert/ as of the morning of June 8, 2023.

“Biometric content marketing expert” my…(you know what). By the time you read this post, I will hopefully have fixed this. You can check for yourself to make sure I did fix it, and call me out if I didn’t. (Pressure’s on, Johnny.)

Oh, and there are three other pages that mention the words “Saturday morning” (as in booking a Saturday morning meeting with me). I have to fix those also.

WordPress listing of Bredemarket pages that include(d) the words “Saturday morning.”

Heroic sprints, only partially executed

Designed by Freepik.

In addition to stale, dated material, sometimes the material on your online properties is only partially complete.

Perhaps you’ve worked with organizations that have sudden inspirations and want to implement them NOW.

From https://www.youtube.com/watch?v=rziG2gn-eQ0

So you’re going to mount a heroic sprint to just do it, process be damned. You’re going to steamroll ahead, working nights and weekends, and get the thing done.

And then, bleary-eyed, you get it done.

But you didn’t get all the other stuff done that needed to be completed along with the heroic sprint.

Maybe you completed a heroic sprint to document something on one of your properties…but you completely forgot to document that same thing on another of your properties. So one property mentions six items, while the other one only mentions five. Hopefully your prospect will go to the property that mentions the correct number of items.

If you’re lucky. Authentically lucky.

From https://www.youtube.com/watch?v=38mE6ba3qj8

(As an aside, a company that relies on heroic sprints is only hurting itself and its employees. See this Moira Lethbridge & Toni Collis LinkedIn article, “Why Having Superhuman Expectations Is Killing Your Career.“)

Forgotten online properties

Designed by Freepik.

A third common problem that your company may face is the existence of old online properties that you may have forgotten about.

  • Maybe you established an online property and completely forgot about it. So as you update all of your other online properties, you neglect to update that one. What happens if the only online property your prospect sees is the one you never bother to update?
  • Maybe you established an online property, then established a second one on the same platform. I previously cited an example in which a company established a Twitter account, then established a second one later without letting followers of the first account know. Guess which Twitter account had fewer followers? The new one.

Forgotten online properties result in disjointed views of your firm, and a confusing online presence.

Here’s how to obtain a web/social media check for yourself

Do your website and social media accounts suffer from these inconsistencies and errors?

Would you like an independent person to analyze your online properties and report the issues so you can fix them?

If you need Bredemarket’s services:

Repurposing My Generative AI Suggestions/Rules as a LinkedIn Article

I have published a number of articles (as opposed to posts) on LinkedIn.

Until today, I had never published an article under the name of the Bredemarket Identity Firm Services LinkedIn showcase page. (That’s the green one, if you pay attention to the color coding.)

Since I’m re-establishing Bredemarket’s identity/biometrics credentials, I decided to repurpose my previous Bredemarket blog post “The Temperamental Writer’s Two Suggestions and One Rule for Using Generative AI” as a new LinkedIn article.

Repurposing is fun, not only because I get to customize the message to a new audience (in this case, specifically to the identity/biometrics crowd rather than the general AI/writing crowd), but also because it gives me a chance to revisit and modify some of the arguments I used or didn’t use in the original post. (For example, I dove into the Samsung AI issue a little more deeply this time around.)

If you want to see my latest take on using generative AI in writing, see the Bredemarket Identity Firm Services LinkedIn article “Three Ways I Use Generative AI to Create Written Content for Identity/Biometrics (and other) Companies.”

And bear in mind that as AI and customer expectations change, I may have to revise it sooner rather than later.

P.S. If you want Bredemarket to create a LinkedIn article for your profile or company page

Why do print people capture 14 fingerprint impression blocks? Why not 10? Or 20?

In the course of writing something on another blog, I mentioned the following:

You see, my fingerprint experience was primarily rooted in the traditional 14 (yes, 14) fingerprint impression block livescan capture technology used by law enforcement agencies to submit full sets of tenprints to the U.S. Federal Bureau of Investigation (FBI), and state and local agencies that submit to the FBI.

From https://jebredcal.wordpress.com/2023/06/12/when-one-type-of-experience-is-not-enough/

I’d be willing to bet that the vast majority of you have ten fingers.

So why do tenprint livescan devices capture 14 fingerprint impression blocks?

Why 14 fingerprint impression blocks are as good as 20 fingers

It’s important to understand that tenprint livescan devices, which only began to emerge in the 1980s, were originally designed as an electronic way to duplicate the traditional inking process in which ink was placed on arrestees’ fingers, and the ink was transferred to a tenprint fingerprint card.

The criminal fingerprint card (and, with some changes, the applicant fingerprint card) looks something like this:

If you look at the lower half of the front of a fingerprint card, you will see 14 fingerprint impression blocks arranged in 3 rows.

  • The first row is where you place five “rolled” (nail to nail) fingerprints taken from the right hand, starting with the right thumb and ending with the right little finger.
  • The second row is where you place five rolled fingerprints from the left hand, again starting with the thumb and ending with the little finger.

So now you’ve captured ten fingerprints. But you’re not done. You still have to fill four more impression blocks. Here’s how:

Identification flat impressions are taken simultaneously without rolling. These are referred to as plain, slap, or flat impressions. The individual’s right and left four fingers should be captured first, followed by the two thumbs (4-4-2 method).

From https://le.fbi.gov/science-and-lab/biometrics-and-fingerprints/biometrics/recording-legible-fingerprints

To clarify, on the third row, for the large box in the lower left corner of the card, you “slap” all four fingers of the left hand down at the same time. Then you skip over to the large box in the lower right corner of the card and slap all four fingers of the right hand down at the same time. Finally you slap the two thumbs down at the same time, capturing the left thumb in the small middle left box, and the right thumb in the small middle right box.

Well, at least that’s how you do it on a traditional inked card. On a tenprint livescan device, you roll and slap your fingers on the large platen, without worrying (that much) about staying within the lines.
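The card layout described above can be sketched as a simple lookup table (a minimal illustration; the finger numbering follows the FBI convention, but the block numbers are my own labels, not official form fields):

```python
# The 14 impression blocks on a tenprint card, keyed by block number, with
# the FBI finger numbers each block captures (1 = right thumb through
# 5 = right little finger; 6 = left thumb through 10 = left little finger).
CARD_BLOCKS = {
    # Rows 1 and 2: one rolled print per block (blocks 1-10 happen to
    # share their numbers with the fingers they hold)
    **{block: (block,) for block in range(1, 11)},
    # Row 3: plain ("slap") impressions, captured 4-4-2
    11: (7, 8, 9, 10),  # left four-finger slap (large lower-left box)
    12: (6,),           # left thumb (small middle-left box)
    13: (1,),           # right thumb (small middle-right box)
    14: (2, 3, 4, 5),   # right four-finger slap (large lower-right box)
}

blocks = len(CARD_BLOCKS)                                       # 14 blocks
impressions = sum(len(f) for f in CARD_BLOCKS.values())         # 20 impressions
```

Ten rolled prints plus the 4-4-2 slaps account for every finger twice: 14 blocks, 20 finger impressions.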

Why 14 fingerprint impression blocks are better than 20 fingers

So by the time you’re done, you’ve filled 14 fingerprint impression blocks in 13 distinct actions (the two slap thumbs are captured simultaneously), and you’ve effectively captured 20 fingerprints.

Why?

Quality control.

Because every finger should theoretically be captured twice, the slaps can be compared against the rolls to ensure that the fingerprints were captured in the correct order.

Locations of finger 2 (green) and finger 3 (blue) for rolled and slap prints.

If you capture the rolled and slap prints in the correct order, then the right index finger (finger 2) should appear in the green area on the first row as a rolled print, and in the green area on the third row as a slap print. Similarly, the middle finger (finger 3) should appear in the blue areas.

If the green rolled print is NOT the same as the green slap print, or if the blue rolled print is NOT the same as the blue slap print, then you captured the fingerprints in the wrong order.

In the old pre-livescan days of inking, a trained tenprint fingerprint examiner (or someone who pretended to be one) had to look at the prints to ensure that the fingers were captured properly. Now the roll to slap comparisons are all done in software, either at the tenprint livescan device itself, or at the automated fingerprint identification system (AFIS) or the automated biometric identification system (ABIS) that receives the prints.
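A roll-to-slap sequence check of the kind described above might be sketched like this (a hypothetical illustration, not any vendor’s actual code; the `match` callable and the threshold are stand-ins for a real fingerprint matcher and an empirically tuned value):

```python
def roll_slap_sequence_check(rolled, slaps, match, threshold=0.5):
    """Flag likely out-of-sequence captures.

    rolled: dict mapping finger number -> rolled print image
    slaps:  dict mapping finger number -> print segmented from the slaps
    match:  callable(image_a, image_b) -> similarity score in [0, 1]
            (stand-in for a real fingerprint matcher)

    Returns the finger numbers whose rolled print does not match the
    slap print recorded under the same finger number.
    """
    errors = []
    for finger, rolled_img in rolled.items():
        slap_img = slaps.get(finger)
        if slap_img is None:
            continue  # finger missing or amputated; nothing to compare
        if match(rolled_img, slap_img) < threshold:
            errors.append(finger)  # likely captured in the wrong order
    return errors
```

In the green/blue example above, swapping fingers 2 and 3 during the rolls would make both comparisons fail, flagging the record before it ever reaches the ABIS.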

For a mention of companions to roll-to-slap comparison, as well as a number of other issues regarding fingerprint capture quality, see this 2006 presentation given by Behnam Bavarian, then a Vice President at Motorola.

In the 4-4-2 method, groups of prints are captured together, rather than individually. While it is possible to completely mess things up by capturing the left slaps when you are supposed to capture the right slaps, or by twisting your hands in a bizarre manner to capture the thumbs in reverse order, 4-4-2 gives you a reasonable assurance that the slap prints are captured in the correct order, ensuring a proper roll-to-slap comparison.

Well, unless the fingerprints are captured in an unattended fashion, or the police officer capturing the fingerprints is crooked.

But today’s ABIS systems are powerful enough to compare all ten submitted fingers against all ten fingers of every record in an ABIS database, so even if the submitted fingerprints falsely record finger 2 as finger 3, the ABIS will still find the matching print anyway.

Book ’em, Danno.

Book ’em, Danno! By CBS Television – eBay item photo front photo back, Public Domain, https://commons.wikimedia.org/w/index.php?curid=19674714

Why Apple Vision Pro Is a Technological Biometric Advance, but Not a Revolutionary Biometric Event

(Part of the biometric product marketing expert series)

(UPDATE JUNE 24: CORRECTED THE YEAR THAT COVID BEGAN.)

I haven’t said anything publicly about Apple Vision Pro, so it’s time for me to be “how do you do fellow kids” trendy and jump on the bandwagon.

Actually…

It ISN’T time for me to jump on the Apple Vision Pro bandwagon, because while Apple Vision Pro affects the biometric industry, it’s not a REVOLUTIONARY biometric event.

The four revolutionary biometric events in the 21st century

How do I define a “revolutionary biometric event”?

By Alberto Korda – Museo Che Guevara, Havana Cuba, Public Domain, https://commons.wikimedia.org/w/index.php?curid=6816940

I define it as something that completely transforms the biometric industry.

When I mention three of the four revolutionary biometric events in the 21st century, you will understand what I mean.

  • 9/11. After 9/11, orders of biometric devices skyrocketed, and biometrics were incorporated into identity documents such as passports and driver’s licenses. Who knows, maybe someday we’ll actually implement REAL ID in the United States. The latest extension of the REAL ID enforcement date moved it out to May 7, 2025. (Subject to change, of course.)
  • The Boston Marathon bombings, April 2013. After the bombings, the FBI was challenged in managing and analyzing countless hours of video evidence. Companies such as IDEMIA National Security Solutions, MorphoTrak, Motorola, Paravision, Rank One Computing, and many others have tirelessly worked to address this challenge, while ensuring that facial recognition results accurately identify perpetrators while protecting the privacy of others in the video feeds.
  • COVID-19, spring 2020 and beyond. COVID accelerated changes that were already taking place in the biometric industry. COVID prioritized mobile, remote, and contactless interactions and forced businesses to address issues that were not as critical previously, such as liveness detection.

These three are cataclysmic world events that had a profound impact on biometrics. The fourth one, which occurred after the Boston Marathon bombings but before COVID, was…an introduction of a product feature.

  • Touch ID, September 2013. When Apple introduced the iPhone 5s, it also introduced a new way to log in to the device. Rather than entering a passcode, iPhone 5s users could just use their finger to log in. The technical accomplishment was dwarfed by the legitimacy that this brought to using fingerprints for identification. Before 2013, attempts to implement fingerprint verification for benefits recipients were resisted because fingerprinting was something that criminals did. After September 2013, fingerprinting was something that the cool Apple kids did. The biometric industry changed overnight.

Of course, Apple followed Touch ID with Face ID, with adherents of the competing biometric modalities sparring over which was better. But Face ID wouldn’t have been accepted as widely if Touch ID hadn’t paved the way.

So why hasn’t iris verification taken off?

Iris verification has been around for decades (I remember Iridian before L-1; it’s now part of IDEMIA), but iris verification is nowhere near as popular in the general population as finger and face verification. There are two reasons for this:

  • Compared to other biometrics, irises are hard to capture. To capture a fingerprint, you can lay your finger on a capture device, or “slap” your four fingers on a capture device, or even “wave” your fingers across a capture device. Faces are even easier to capture; while older face capture systems required you to stand close to the camera, modern face devices can capture your face as you are walking by the camera, or even if you are some distance from the camera.
  • Compared to other biometrics, irises are expensive to capture. Many years ago, my then-employer developed a technological marvel, an iris capture device that could accurately capture irises for people of any height. Unfortunately, the technological marvel cost thousands upon thousands of dollars, and no customers were going to use it when they could acquire fingerprint and face capture devices that were much less costly.

So while people rushed to implement finger and face capture on phones and other devices, iris capture was reserved for narrow verticals that required iris accuracy.

With one exception. Samsung incorporated Princeton Identity technology into its Samsung Galaxy S8 in 2017. But the iris security was breached by a “dummy eye” just a month later, in the same way that gummy fingers and face masks have defeated other biometric technologies. (This is why liveness detection is so important.) And while Samsung continues to sell iris verification today, it hasn’t been adopted by Apple and therefore isn’t cool.

Until now.

About the Apple Vision Pro and Optic ID

The Apple Vision Pro is not the first headset that was ever created, but the iPhone wasn’t the first smartphone either. And coming late to the game doesn’t matter. Apple’s visibility among trendsetters ensures that when Apple releases something, people take notice.

And when all of us heard about Vision Pro, one of the things that Apple shared about it was its verification technique. Not Touch ID or Face ID, but Optic ID. (I like naming consistency.)

According to Apple, Optic ID works by analyzing a user’s iris through LED light exposure and then comparing it with an enrolled Optic ID stored on the device’s Secure Enclave….Optic ID will be used for everything from unlocking Vision Pro to using Apple Pay in your own headspace.

From The Verge, https://www.theverge.com/2023/6/5/23750147/apple-optic-id-vision-pro-iris-biometrics

So why did Apple incorporate Optic ID on this device and not the others?

There are multiple reasons, but one key reason is that the Vision Pro retails for US$3,499, which makes it easier for Apple to justify the cost of the iris components.

But the high price of the Vision Pro comes at…a price

However, that high price is also the reason why the Vision Pro is not going to revolutionize the biometric industry. CNET admitted that the Vision Pro is a niche item:

At $3,499, Apple’s Vision Pro costs more than three weeks worth of pay for the average American, according to Bureau of Labor Statistics data. It’s also significantly more expensive than rival devices like the upcoming $500 Meta Quest 3, $550 Sony PlayStation VR 2 and even the $1,000 Meta Quest Pro

From CNET, https://www.cnet.com/tech/computing/why-apple-vision-pros-3500-price-makes-more-sense-than-you-think/

Now CNET did go on to say the following:

With Vision Pro, Apple is trying to establish what it believes will be the next major evolution of the personal computer. That’s a bigger goal than selling millions of units on launch day, and a shift like that doesn’t happen overnight, no matter what the price is. The version of Vision Pro that Apple launches next year likely isn’t the one that most people will buy.

From CNET, https://www.cnet.com/tech/computing/why-apple-vision-pros-3500-price-makes-more-sense-than-you-think/

Certainly Vision Pro and Optic ID have the potential to revolutionize the computing industry…in the long term. And as that happens, the use of iris biometrics will become more popular with the general public…in the long term.

But not today. You’ll have to wait a little longer for the next biometric revolution. And hopefully it won’t be a catastrophic event like three of the previous revolutions.

How Soon Will I Have to Change My Temperamental Writer Generative AI Suggestions/Rule?

From https://twitter.com/jebredcal/status/1667597611619192833?s=46&t=ye3fFEJBNKSiV7FWcogPmg

Repurposing from Instagram for wider reach…

I recently published “The Temperamental Writer’s Two Suggestions and One Rule for Using Generative AI.” If you didn’t read it, the three ways I use generative AI are as follows:

  1. (Suggestion) A human should always write the first draft.
  2. (Suggestion) Only feed bits to the generative AI tool.
  3. (Rule) Don’t share confidential information with the tool.

If content consumers expect created content within 5 minutes, will I have to change my suggestions/rule a year from now?

A month from now?

From https://bredemarket.com/2023/06/05/the-temperamental-writers-two-suggestions-and-one-rule-for-using-generative-ai/
From https://twitter.com/jebredcal/status/1667597611619192833?s=46&t=ye3fFEJBNKSiV7FWcogPmg

How Can Your Identity Business Create the RIGHT Written Content?

Does your identity business provide biometric or non-biometric products and services that use finger, face, iris, DNA, voice, government documents, geolocation, or other factors or modalities?

Does your identity business need written content, such as blog posts (from the identity/biometric blog expert), case studies, data sheets, proposal text, social media posts, or white papers?

How can your identity business (with the help of an identity content marketing expert) create the right written content?

For the answer, click here.

Qualitative Benefits and Inland Empire Marketing

Are you an Inland Empire business that wants to promote the benefits of your products and services to your clients? If so, don’t assume that these benefits must be quantitative. You can use qualitative benefits also.

Benefits

Before we talk about quantitative vs. qualitative benefits, let’s talk about benefits themselves, and how they differ from features.

As Kayla Carmichael has noted, features answer the “what” question, while benefits answer the “why” question.

She explains that your clients don’t care if your meal kit arrives ready to heat (a feature). Your clients care about saving time preparing meals (a benefit).

Quantitative benefits

In certain cases, the client may be even more impressed if the benefits can be expressed in a quantitative way. For example, if you know that your meal kit saves people an average of 37 minutes and 42.634 seconds preparing meals, let your client know this.

Am I the only one mouthing the words “these are the days of our lives” to myself? CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=2949924

But maybe you don’t know this.

  • You haven’t paid for a survey of your existing customers to see how much time they’ve saved preparing meals.
  • Or maybe the data just isn’t available at all.

The power of qualitative benefits

A lack of quantifiable data won’t stop your marketing efforts, though, since qualitative benefits can be just as powerful as quantitative ones.

I’m going to take the marketer’s easy way out and just cite something that Apple did.

I’ll admit that Apple sometimes has some pretty stupid marketing statements (“It’s black!“). But sometimes the company grabs people’s attention with its messaging.

Take this July 2022 article, “How Apple is empowering people with their health information.”

You probably already saw the words “empowering people” in the title. Sure, people like health information…but they really like power.

By Andreas Bohnenstengel, CC BY-SA 3.0 de, https://commons.wikimedia.org/w/index.php?curid=61536009

Later in the article, Apple’s chief operating officer (Jeff Williams) emphasizes the power theme: “…they’re no longer passengers on their own health journey. Instead, we want people to be firmly in the driver’s seat.”

Of course, this isn’t the first time that Apple has referred to empowering the individual. The company has done this for decades. Remember (then) Apple Computer’s slogan, “The Power to Be Your Best”? If you missed that particular slogan, here’s a commercial.

From https://www.youtube.com/watch?v=s5S9VvMMxhU

There are zero statistics in that commercial. It doesn’t say that the Macintosh computer would equip you to jump 5% higher, or sing on key 99.9% of the time. And Apple Computer didn’t claim that the Macintosh would equip you to draw bridge images 35.2% faster.

But the viewer could see that a Macintosh computer, with its graphical user interface, its support of then-new graphic programs, and (not shown in the ad) the ability to distribute the output of these graphic programs via laser printers, gave Macintosh users the power to…well, the power to be their best.

And some potential computer buyers perceived that this power provided infinite value.

As you work out your benefit statements, don’t give up if the benefits cannot be quantified. As long as the benefits resonate with the customer, qualitative benefits are just fine.

What are your benefits?

Let’s return to you and your Ontario, California area business that needs content marketing promotion. Before you draft your company’s marketing material, or ask someone to draft it for you, you need to decide what your benefits are.

I’ve written a book about identifying benefits, and five other questions that you need to answer before creating marketing content.

Click on the image below, find the e-book at the bottom of the page, and skip to page 11 to read about benefits.

Feel free to read the rest of the book also.

The Non-Temperamental Publisher WIRED’s Rules for Using Generative AI

I recently published “The Temperamental Writer’s Two Suggestions and One Rule for Using Generative AI.” If you didn’t read it, the three ways I use generative AI are as follows:

  1. (Suggestion) A human should always write the first draft.
  2. (Suggestion) Only feed bits to the generative AI tool.
  3. (Rule) Don’t share confidential information with the tool.

Here is how I used generative AI to improve a short passage, or a bit within a blog post. I wrote the text manually, then ran it through a tool, then tweaked the results.

However, I noted in passing that these suggestions and rules may not always apply to my writing. Specifically:

…unless someone such as an employer or a consulting client requires that I do things differently, here are three ways that I use generative AI tools to assist me in my writing.

From https://bredemarket.com/2023/06/05/the-temperamental-writers-two-suggestions-and-one-rule-for-using-generative-ai/

Now that I’ve said my piece on how to use generative AI in writing, I’m researching how others approach the issue. Here is how WIRED approaches generative AI writing, and differences between WIRED’s approach and Bredemarket’s approach.

Why does WIRED need these generative AI rules?

Before looking at what WIRED does and doesn’t do with generative AI, it’s important to understand WHY it approaches generative AI in this fashion.

By Scan of magazine cover., Fair use, https://en.wikipedia.org/w/index.php?curid=60369304

As of May 22, 2023, WIRED’s article “How WIRED Will Use Generative AI Tools” opens with the following:

This is WIRED, so we want to be on the front lines of new technology, but also to be ethical and appropriately circumspect.

From https://www.wired.com/about/generative-ai-policy/, as of the May 22, 2023 update.

Note the balance between

  • using cool stuff, and
  • using cool stuff correctly.

This is an issue that WIRED faces when evaluating all technology, and one that has plagued humankind for centuries before WIRED launched as a publication. Sure, we can perform some amazing technological task, but what are the ethical implications? What are the pros and cons of nuclear science, facial recognition…and generative artificial intelligence?

WIRED and text generators

WIRED’s rules regarding use of AI text generators are (as of May 22, 2023) five in number. As you will see, they are stricter than my own.

  1. “We do not publish stories with text generated by AI, except when the fact that it’s AI-generated is the whole point of the story.”
  2. “We do not publish text edited by AI either.”
  3. We may try using AI to suggest headlines or text for short social media posts.
  4. We may try using AI to generate story ideas.
  5. We may experiment with using AI as a research or analytical tool.

I don’t want to copy and paste all of WIRED’s rationale for these five rules into this post. Go to WIRED’s article to see this rationale.

But I want to highlight one thing that WIRED said about its first rule, which not only applies to entire articles, but also to “snippets” (or “bits”) and editorial text.

[A]n AI tool may inadvertently plagiarize someone else’s words. If a writer uses it to create text for publication without a disclosure, we’ll treat that as tantamount to plagiarism.

From https://www.wired.com/about/generative-ai-policy/

The plagiarism issue is one we need to treat seriously. “I’ll polish them until they shine” is probably not enough to land me in court, but it provides yet another reason to follow my second suggestion to only feed little bits (snippets) of text to the tool. (WIRED won’t even do that.)

WIRED and image generators

WIRED also discusses how it uses and does not use image generators. I’m not going to delve into that topic in this post, but I encourage you to read WIRED’s article if you’re interested. I need to think through the ethics of this myself.

So who’s right?

Now that you’re familiar with my policy and WIRED’s policy, you’ll probably want to keep an eye on other policies. (Sadly, most entities don’t have a policy on generative AI use.)

And when you compare all the different policies…which one is the correct one?

I’ll leave that question for you.

Using “Multispectral” and “Liveness” in the Same Sentence

(Part of the biometric product marketing expert series)

Now that I’m plunging back into the fingerprint world, I’m thinking about all the different types of fingerprint readers.

  • The optical fingerprint and palm print readers are still around.
  • And the capacitive fingerprint readers still, um, persist.
  • And of course you have the contactless fingerprint readers such as MorphoWave, one that I know about.
  • And then you have the multispectral fingerprint readers.

What is multispectral?

Bayometric offers a web page that covers some of these fingerprint reader types, and points out the drawbacks of some of the readers they discuss.

Latent prints are usually produced by sweat, skin debris or other sebaceous excretions that cover up the palmar surface of the fingertips. If a latent print is on the glass platen of the optical sensor and light is directed on it, this print can fool the optical scanner….

Capacitive sensors can be spoofed by using gelatin based soft artificial fingers.

From https://www.bayometric.com/fingerprint-reader-technology-comparison/

There is another weakness of these types of readers. Some professions damage and wear away a person’s fingerprint ridges. Examples of professions whose practitioners exhibit worn ridges include construction workers and biometric content marketing experts (who, at least in the old days, handled a lot of paper).

The solution is to design a fingerprint reader that not only examines the surface of the finger, but goes deeper.

From HID Global, “A Guide to MSI Technology: How It Works,” https://blog.hidglobal.com/2022/10/guide-msi-technology-how-it-works

The specialty of multispectral sensors is that it can capture the features of the tissue that lie below the skin surface as well as the usual features on the finger surface. The features under the skin surface are able to provide a second representation of the pattern on the fingerprint surface.

From https://www.bayometric.com/fingerprint-reader-technology-comparison/

Multispectral sensors are nothing new. When I worked for Motorola, Motorola Ventures had invested in a company called Lumidigm that produced multispectral fingerprint sensors; they were much more expensive than your typical optical or capacitive sensor, but were much more effective in capturing true fingerprints down to the subdermal level.

Lumidigm was eventually acquired in 2014: not by Motorola (which sold off its biometric assets such as Printrak and Symbol), but by HID Global. HID Global continues to produce Lumidigm-branded multispectral fingerprint sensors to this day.

But let’s take a look at the other word I bandied about.

What is liveness?

KISS, Alive! Obtained from allmusic.com, Fair use, https://en.wikipedia.org/w/index.php?curid=2194847

“Gelatin based soft artificial fingers” aren’t the only way to fool a biometric sensor, whether you’re talking about a fingerprint sensor or some other sensor such as a face sensor.

Regardless of the biometric modality, the intent is the same: instead of capturing a true biometric from a person, the biometric sensor is fooled into capturing a fake biometric, such as an artificial finger, a face covered by a mask, or a face on a video screen (rather than the face of a live person).

This tomfoolery is called a “presentation attack” (because you’re attacking security with a fake presentation).

But the standards folks have developed ISO/IEC 30107-3:2023, Information technology — Biometric presentation attack detection — Part 3: Testing and reporting.

And an organization called iBeta is one of the testing facilities authorized to test in accordance with the standard and to determine whether a biometric reader can detect the “liveness” of a biometric sample.

(Friends, I’m not going to get into passive liveness and active liveness. That’s best saved for another day.)

[UPDATE 4/24/2024: I FINALLY ADDRESSED THE DIFFERENCE BETWEEN ACTIVE AND PASSIVE LIVENESS HERE.]

Multispectral liveness

While multispectral fingerprint readers aren’t the only fingerprint readers, or the only biometric readers, that iBeta has tested for liveness, the HID Global Lumidigm readers conform to Level 2 (the higher level) of iBeta testing.

There’s a confirmation letter and everything.

From the iBeta website.

This letter was issued in 2021. For some odd reason, HID Global decided to publicize this in 2023.

Oh well. It’s good to occasionally remind people of stuff.

The Temperamental Writer’s Two Suggestions and One Rule for Using Generative AI

Don’t let that smiling face fool you.

Behind that smiling face beats the heart of an opinionated, crotchety, temperamental writer.

With an overinflated ego and pride in my own writing.

So you can imagine…

  • how this temperamental writer would feel if someone came up and said, “Hey, I wrote this for you.”
  • how this temperamental writer would feel if someone came up and said, “Hey, I had ChatGPT write this for you.”
By Mindaugas Danys from Vilnius, Lithuania – scream and shout, CC BY 2.0, https://commons.wikimedia.org/w/index.php?curid=44907034

Yeah, I’m an opinionated, crotchety, and temperamental writer.

So how do you think that I feel about ChatGPT, Bard, and other generative AI text writing tools?

Actually, I love them. (Even when they generate “code snippets” instead of text.)

But the secret is in knowing how to use these tools.

Bredemarket’s two suggestions and one rule for using generative AI

So unless someone such as an employer or a consulting client requires that I do things differently, here are three ways that I use generative AI tools to assist me in my writing. You may want to consider these yourself.

Bredemarket Suggestion 1: A human should always write the first draft

Yes, it’s quicker to feed a prompt to a bot and get a draft. And maybe with a few iterative prompts you can get a draft in five minutes.

And people will soon expect five-minute responses. I predicted it:

Now I consider myself capable of cranking out a draft relatively quickly, but even my fastest work takes a lot longer than five minutes to write.

“Who cares, John? No one is demanding a five-minute turnaround.”

Not yet.

Because it was never possible before (unless you had proposal automation software, but even that couldn’t create NEW text).

What happens to us writers when a five-minute turnaround becomes the norm?

From https://www.linkedin.com/posts/jbredehoft_generativeai-activity-7065836499702861824-X8PO/

If I create the first draft the old-fashioned way, it obviously takes a lot longer than five minutes…even if I don’t “sleep on it.”

But the entire draft-writing process is also a lot more iterative. As I wrote this post I went back and forth throughout the text, tweaking things. For example, in the first draft alone the three rules became three suggestions, then two suggestions and one rule. And there were many other tweaks along the way, including the insertion of part of my two-week-old LinkedIn post.

It took a lot longer, but I ended up with a much better first draft. And a much better final product.

Bredemarket Suggestion 2: Only feed bits to the generative AI tool

The second suggestion that I follow is that after I write the first draft, I don’t dump the whole thing into a generative AI tool and request a rewrite of the entire block of text.

Instead I dump little bits and pieces into the tool, perhaps something as short as a sentence or two. I want my key sentences to pop. I’ll use generative AI to polish them until they shine.

The “code snippet” (?) rewrite that created the sentence above, after I made a manual edit to the result.

But always check the results. HubSpot flagged one AI-generated email title as “spammy.”

Bredemarket Rule: Don’t share confidential information with the tool

This one isn’t a suggestion. It’s a rule.

Remember the “Hey, I had ChatGPT write this for you” example that I cited above? That actually happened to me. And I don’t know what the person fed as a prompt to ChatGPT, since I only saw the end result, a block of text that included information that was, at the time, confidential.

OK, not THAT confidential. By July_12,_2007_Baghdad_airstrike_unedited_part1.ogv: US Apache helicopter; derivative work: Wnt (talk) – July_12,_2007_Baghdad_airstrike_unedited_part1.ogv, Public Domain, https://commons.wikimedia.org/w/index.php?curid=9970435

Feeding confidential information to a generative AI tool can get you into real trouble.

  • Let’s say that Bredemarket is developing a new writing service, the “Bredemarket 288 Tweet Writing Service.” (I’m not. It’s not economically feasible. But bear with me.)
  • Now this is obviously an extremely valuable trade secret.
  • If someone scouring generative AI data found out about this offering and beat me to the punch, I would lose $45 billion. Or maybe less.

So how should I have a generative AI tool edit text about my new service?

  1. First, don’t use a Bredemarket account to submit the prompt. Even if I follow all the obfuscation steps that I am about to list below, the mere fact that the prompt was associated with a Bredemarket account links Bredemarket to the data.
  2. Second, if the word “Bredemarket” appears in the prompt, change it to something else. Like my standby WidgetCo, or maybe Wildebeest Inc.
  3. Third, obfuscate other parts of the prompt. Perhaps change 288 (a number closely associated with modern-day Twitter) to something else, and maybe change other things also.
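The second and third steps above (the account separation in the first step happens outside any script) amount to a find-and-replace pass over the prompt before you paste it into the tool, and a reverse pass over the result. Here’s a minimal sketch of that idea in Python; the mapping and the function names are my own illustrations, not an actual Bredemarket tool, and your own mapping would of course use your own confidential terms.

```python
# Hypothetical sketch: swap confidential terms for stand-ins before
# submitting a prompt to a generative AI tool, then swap them back
# in the tool's output. The mapping mirrors the example in this post.
OBFUSCATION_MAP = {
    "Bredemarket": "Wildebeest Inc.",           # rename the company
    "288": "176",                               # change the telltale number
    "Tweet Writing": "YouTube Video Creation",  # change the service type
}

def obfuscate(prompt: str) -> str:
    """Replace each confidential term with its stand-in."""
    for secret, stand_in in OBFUSCATION_MAP.items():
        prompt = prompt.replace(secret, stand_in)
    return prompt

def deobfuscate(text: str) -> str:
    """Restore the real terms in the tool's output."""
    for secret, stand_in in OBFUSCATION_MAP.items():
        text = text.replace(stand_in, secret)
    return text

draft = "Announcing the Bredemarket 288 Tweet Writing Service."
safe_prompt = obfuscate(draft)
print(safe_prompt)
# → Announcing the Wildebeest Inc. 176 YouTube Video Creation Service.
```

One caveat: simple substitution only hides the words you thought to list, so it supplements the account-separation step rather than replacing it.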

To show how this is done, here’s something I just submitted to a well-known generative AI tool.

Convert the following to eye-catching text with 250 or fewer characters: “Wildebeest Inc. is pleased to announce that we are providing a brand new service for you, the Wildebeest 176 YouTube Video Creation Service. This service, priced at $50, allows me to collaborate with you in creating a YouTube video. Before I start work, I ask you a number of questions about your desired video, including where the video will be shown, the features you want to highlight, and the target audience. I prepare seventeen drafts during the process, and you can review each of them. The result is a YouTube video that meets your needs and brings your company a lack of profits.”

So my collaborative service provided…a code snippet!

Code snippet

**Wildebeest 176 YouTube Video Creation Service**

**Get a custom YouTube video for $50**

* Collaborate with a video creator
* 17 drafts for review
* Meet your needs and bring profits

**Sign up today!**

Use code with caution.

Now I can edit the text and insert the real words, adjusting them as needed, without revealing my super-secret project.

Would a temperamental writer really do all of this?

Yes, a temperamental writer would really do all of this.

Despite my (overly?) high opinion of my own written work vs. something a bot would write, in certain circumstances the bot can improve my writing.

And as long as I disclose to a potential Bredemarket client (or an employer) my three suggestions (whoops, two suggestions and one rule) for using generative AI, there should be no ethical or legal problem in using a tool. In a sense it’s like using online grammar correction tools, or a book like a dictionary or thesaurus.

So embrace our bot overlords, but keep your eyes wide open.

By Stanley Kubrick – A Clockwork Orange trailer, Public Domain, https://commons.wikimedia.org/w/index.php?curid=61914139

(If you want to see the earlier draft of this post, click here.)