Two Measures to Generate Content in (Almost) Five Minutes

I issued myself a seemingly impossible challenge eight months ago: to create content within five minutes. Why? Because I was scared.

What scared me?

I was scared by generative AI’s ability to quickly produce content. So I wrote this on LinkedIn:

I haven’t seen a lot of discussion of one aspect of #generativeai:

Its ability to write something in about a minute.

(OK, maybe five minutes if you try a few prompts.)…

What happens to us writers when a five-minute turnaround becomes the norm?

From https://www.linkedin.com/posts/jbredehoft_generativeai-activity-7065836499702861824-X8PO/.

    Never mind that the resulting generative AI content was wordy, crappy, and possibly incorrect. For some people the fact that the content was THERE was good enough. After all, can’t you hire a cheap copywriter to edit the generative AI dreck?

    Now I’ve argued that there are benefits to a slower (i.e. greater than five minutes) content production process that results in more mature content.

    But what if you need the content within five minutes? People aren’t going to wait for my dilly-dallying.

    In fact, my LinkedIn post eight months ago was prompted by an encounter with an impatient content customer. I was waiting for some information before I wrote the content, and someone got tired of my waiting and just asked a generative AI engine to write something. (I think I was supposed to thank them for helping me, but I didn’t.)

    This is not an isolated incident, and even with our lamentations that generative AI content blows monkey chunks, it’s still going to be “good enough” for some people.

    So if I’m going to continue to make my living as a temperamental writer who is a “you can pry my keyboard out of my cold dead hands” type, I need to create my content VERY quickly.

    And sometimes I’m going to have to take extreme measures to get that content out in five minutes.

    Measure one: don’t sleep on the content

    In the past, and in the present when I can, I like to let a draft rest as I “sleep on it.” And I give my customer an answer in the morning.

    From https://www.youtube.com/watch?v=C11MzbEcHlw.

    I then return to the content with a fresh pair of eyes and modify it (usually by removing huge chunks of text).

    But sometimes you have to take extreme measures, including skipping the sleep on it step.

    From https://www.youtube.com/watch?v=07Y0cy-nvAg.

    Of course to do this, you need to approach the “draft 0.5” correctly with a focused message.

    How do you jump straight to a polished piece? Meat Loaf and the Beastie Boys aren’t going to tell me. I need to go to the ancient Greeks.

    Measure two: let full-grown ideas spring out of your head

    Non-believers think that Christianity, Islam, and Judaism have some really weird practices. But frankly, they’re mild compared to ancient Greek mythology, which primarily consists of Zeus trying to be the Wilt Chamberlain of his time.

    Take the story of Zeus lusting after Metis and succeeding in his pursuit, until an oracle (unrelated to Larry Ellison) prophesied that Zeus and Metis’ second offspring would overthrow Zeus. To prevent this, Zeus swallowed Metis.

    Makes sense to me.

    Then the story gets REALLY weird.

    After a time, Zeus developed an unbearable headache, which made him scream out of pain so loudly it could be heard throughout the earth. The other gods came to see what the problem was. Hermes realized what needed to be done and directed Hephaestus to take a wedge and split open Zeus’s skull.

    From https://www.greekmythology.com/Myths/The_Myths/Birth_of_Athena/birth_of_athena.html.

    You won’t believe what happened next!

    By User:Bibi Saint-Pol – Own work, Public Domain, https://commons.wikimedia.org/w/index.php?curid=2061180

    Yup, Athena, as a full-grown adult (wearing armor, no less), popped out of Zeus’ head.

    Which was awfully convenient, since they didn’t have to go through potty training and learning the Greek language and all of that, because Athena was already mature.

    So if I can conceive of my content as a full-grown piece of work from the outset, it will make it easier (and hopefully faster) to create it.

    Results

    I wasn’t able to write this particular blog post in five minutes, and I still had to go through some back-and-forth to tweak things such as headings. But I executed this content more quickly than normal.

    More work to do to meet the five-minute goal.

    If We Don’t Train Facial Recognition Users, There Will Be No Facial Recognition

    (Part of the biometric product marketing expert series)

    We get all sorts of great tools, but do we know how to use them? And what are the consequences if we don’t know how to use them? Could we lose the use of those tools entirely due to bad publicity from misuse?

    Hida Viloria. By Intersex77 – Own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=98625035

    Do your federal facial recognition users know what they are doing?

    I recently saw a WIRED article that primarily talked about submitting Parabon Nanolabs-generated images to a facial recognition program. But buried in the article was this alarming quote:

    According to a report released in September by the US Government Accountability Office, only 5 percent of the 196 FBI agents who have access to facial recognition technology from outside vendors have completed any training on how to properly use the tools.

    From https://www.wired.com/story/parabon-nanolabs-dna-face-models-police-facial-recognition/

    Now I had some questions after reading that sentence: namely, what does “have access” mean? To answer those questions, I had to find the study itself, GAO-23-105607, Facial Recognition Services: Federal Law Enforcement Agencies Should Take Actions to Implement Training, and Policies for Civil Liberties.

    It turns out that the study is NOT limited to FBI use of facial recognition services, but also addresses six other federal agencies: the Bureau of Alcohol, Tobacco, Firearms and Explosives (the guvmint doesn’t believe in the Oxford comma); U.S. Customs and Border Protection; the Drug Enforcement Administration; Homeland Security Investigations; the U.S. Marshals Service; and the U.S. Secret Service.

    In addition, the study confines itself to four facial recognition services: Clearview AI, IntelCenter, Marinus Analytics, and Thorn. It does not address other uses of facial recognition by the agencies, such as the FBI’s use of IDEMIA in its Next Generation Identification system (IDEMIA facial recognition technology is also used by the Department of Defense).

    Two of the GAO’s findings:

    • Initially, none of the seven agencies required users to complete facial recognition training. As of April 2023, two of the agencies (Homeland Security Investigations and the U.S. Marshals Service) required training, two (the FBI and Customs and Border Protection) did not, and the other three had quit using these four facial recognition services.
    • The FBI stated that facial recognition training was recommended as a “best practice,” but not mandatory. And when something isn’t mandatory, you can guess what happened:

    GAO found that few of these staff completed the training, and across the FBI, only 10 staff completed facial recognition training of 196 staff that accessed the service. FBI said they intend to implement a training requirement for all staff, but have not yet done so. 

    From https://www.gao.gov/products/gao-23-105607.
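    The GAO figure quoted here and the WIRED figure quoted earlier are the same number viewed two ways; a quick arithmetic check (variable names are mine):

```python
# Figures as quoted from GAO-23-105607 above.
fbi_staff_with_access = 196  # FBI staff who accessed the outside services
fbi_staff_trained = 10       # staff who completed facial recognition training

pct_trained = 100 * fbi_staff_trained / fbi_staff_with_access
print(f"{pct_trained:.1f}% of staff with access completed training")
# prints "5.1% of staff with access completed training",
# consistent with WIRED's "only 5 percent"
```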

    So if you use my three levels of importance (TLOI) model, facial recognition training is important, but not critically important. Therefore, it wasn’t done.

    The detailed version of the report includes additional information on the FBI’s training requirements…I mean recommendations:

    Although not a requirement, FBI officials said they recommend (as a best practice) that some staff complete FBI’s Face Comparison and Identification Training when using Clearview AI. The recommended training course, which is 24 hours in length, provides staff with information on how to interpret the output of facial recognition services, how to analyze different facial features (such as ears, eyes, and mouths), and how changes to facial features (such as aging) could affect results.

    From https://www.gao.gov/assets/gao-23-105607.pdf.

    However, this type of training was not recommended for all FBI users of Clearview AI, and was not recommended for any FBI users of Marinus Analytics or Thorn.

    I should note that the report was issued in September 2023, based upon data gathered earlier in the year, and that for all I know the FBI now mandates such training.

    Or maybe it doesn’t.

    What about your state and local facial recognition users?

    Of course, training for federal facial recognition users is only a small part of the story, since most of the law enforcement activity takes place at the state and local level. State and local users need training so that they can understand:

    • The anatomy of the face, and how it affects comparisons between two facial images.
    • How cameras work, and how this affects comparisons between two facial images.
    • How poor quality images can adversely affect facial recognition.
    • How facial recognition should ONLY be used as an investigative lead.

    If state and local users received this training, none of the false arrests over the last few years would have taken place.

    What are the consequences of no training?

    Let me repeat that.

    If facial recognition users had been trained, none of the false arrests over the last few years would have taken place.

    • The users would have realized that the poor images were not of sufficient quality to determine a match.
    • The users would have realized that even if they had been of sufficient quality, facial recognition must only be used as an investigative lead, and once other data had been checked, the cases would have fallen apart.

    But the false arrests gave the privacy advocates the ammunition they needed.

    Not to insist upon proper training in the use of facial recognition.

    But to ban the use of facial recognition entirely.

    Like nuclear or biological weapons, facial recognition’s threat to human society and civil liberties far outweighs any potential benefits. Silicon Valley lobbyists are disingenuously calling for regulation of facial recognition so they can continue to profit by rapidly spreading this surveillance dragnet. They’re trying to avoid the real debate: whether technology this dangerous should even exist. Industry-friendly and government-friendly oversight will not fix the dangers inherent in law enforcement’s discriminatory use of facial recognition: we need an all-out ban.

    From https://www.banfacialrecognition.com/

    (And just wait until the anti-facial recognition forces discover that this is not only a plot of evil Silicon Valley, but also a plot of evil non-American foreign interests located in places like Paris and Tokyo.)

    Because the anti-facial recognition forces want us to remove the use of technology and go back to the good old days…of eyewitness misidentification.

    Eyewitness misidentification contributes to an overwhelming majority of wrongful convictions that have been overturned by post-conviction DNA testing.

    Eyewitnesses are often expected to identify perpetrators of crimes based on memory, which is incredibly malleable. Under intense pressure, through suggestive police practices, or over time, an eyewitness is more likely to find it difficult to correctly recall details about what they saw. 

    From https://innocenceproject.org/eyewitness-misidentification/.

    And these people don’t stay in jail for a night or two. Some of them remain in prison for years until the eyewitness misidentification is reversed.

    Archie Williams moments after his exoneration on March 21, 2019. Photo by Innocence Project New Orleans. From https://innocenceproject.org/fingerprint-database-match-establishes-archie-williams-innocence/

    Eyewitnesses, unlike facial recognition algorithms, cannot be tested for accuracy or bias.

    And if we don’t train facial recognition users in the technology, then we’re going to lose it.

    The Double Loop Podcast Discusses Research From the Self-Styled “Inventor of Cross-Fingerprint Recognition”

    (Part of the biometric product marketing expert series)

    Apologies in advance, but if you’re NOT interested in fingerprints, you’ll want to skip over this Bredemarket identity/biometrics post, my THIRD one about fingerprint uniqueness and/or similarity or whatever, because the difference between uniqueness and similarity really isn’t important, is it?

    Yes, one more post about the study whose principal author was Gabe Guo, the self-styled “inventor of cross-fingerprint recognition.”

    In case you missed it

    In case you missed my previous writings on this topic:

    But don’t miss this

    Well, two other people have weighed in on the paper: Glenn Langenburg and Eric Ray, co-presenters on the Double Loop Podcast. (“Double loop” is a fingerprint thing.)

    So who are Langenburg and Ray? You can read their full biographies here, but both of them are certified latent print examiners. This certification, administered by the International Association for Identification, is designed to ensure that the certified person is knowledgeable about both latent (crime scene) fingerprints and known fingerprints, and how to determine whether or not two prints come from the same person. If someone is going to testify in court about fingerprint comparison, this certification is recognized as a way to designate someone as an expert on the subject, as opposed to a college undergraduate. (The list of IAI certified latent print examiners, current as of December 2023, can be found here in PDF form.)

    Podcast episode 264 dives into the Columbia study in detail, including what the study said, what it didn’t say, and what the publicity for the study said that doesn’t match the study.

    Eric and Glenn respond to the recent allegations that a computer science undergraduate at Columbia University, using Artificial Intelligence, has “proven that fingerprints aren’t unique” or at least…that’s how the media is mischaracterizing a new published paper by Guo, et al. The guys dissect the actual publication (“Unveiling intra-person fingerprint similarity via deep contrastive learning” in Science Advances, 2024 by Gabe Guo, et al.). They state very clearly what the paper actually does show, which is a far cry from the headlines and even public dissemination originating from Columbia University and the author. The guys talk about some of the important limitations of the study and how limited the application is to real forensic investigations. They then explore some of the media and social media outlets that have clearly misunderstood this paper and seem to have little understanding of forensic science. Finally, Eric and Glenn look at some quotes and comments from knowledgeable sources who also have recognized the flaws in the paper, the authors’ exaggerations, and lack of understanding of the value of their findings.

    From https://doublelooppodcast.com/2024/01/fingerprints-proven-by-ai-to-not-be-unique-episode-264/.

    Yes, the episode is over an hour long, but if you want to hear a good discussion of the paper that goes beyond the headlines, I strongly recommend that you listen to it.

    TL;DR

    If you’re in a TL;DR frame of mind, I’ll just offer one tidbit: “uniqueness” and “similarity” are not identical. Frankly, they’re not even similar.

    Will Ferrell and Chad Smith, or maybe vice versa. Fair use. From https://www.billboard.com/music/music-news/will-ferrell-chad-smith-red-hot-benefit-chili-peppers-6898348/, originally from NBC.

    Intelligently Writing About Biometrics

    Let’s say that your identity/biometric firm has decided that silence ISN’T golden, and that perhaps your firm needs to talk about its products and services.

    Silence is not an optimal communication strategy. By Lorelei7, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=3164780

    For example, let’s say that your firm fights crooks who try to fraudulently use synthetic identities, and you want to talk about your solution.

    So you turn to your favorite generative AI tool to write something that will represent your company in front of everyone. What could go wrong?

    Battling synthetic identities requires a multi-pronged approach. Layering advanced technology is key: robust identity verification using government-issued IDs and biometrics to confirm a person’s existence, data enrichment and validation from diverse sources to check for inconsistencies, and machine learning algorithms to identify suspicious patterns and red flags. Collaboration is crucial too, from financial institutions sharing watchlists to governments strengthening regulations and consumers practicing good cyber hygiene. Ultimately, vigilance and a layered defense are the best weapons against these ever-evolving digital phantoms.

    From Google Bard.

    Great. You’re done, and you saved a lot of money by NOT hiring an identity blog writing expert. The text makes a lot of important points, so I’m sure that your prospects will be inspired by it.

    Bot-speak is not an optimal communication strategy either. Generated at craiyon.com.

    Well…

    …until your prospects ask what YOU do and how you are better than every other identity firm out there. If you’re the same as all the other “me too” solutions, then your prospects will just go with the lowest price provider.

    So how do you go about intelligently writing about biometrics?

    No-siree.

    Intelligently writing about biometrics requires that you put all of this information together AND effectively communicate your message…

    …including why your identity/biometrics firm is great and why all the other identity/biometric firms are NOT great.

    If you’re doing this on your own, be sure to ask yourself a lot of questions so that you get started on the right track.

    If you’re asking Bredemarket to help you create your identity/biometric content by intelligently writing about biometrics, I’ll take care of the questions.

    Oh, and one more thing: if you noted my use of the word “No-siree” earlier in this post, it was taken from the Talking Heads song “The Big Country.” Here’s an independent video of that song, especially recommended for people outside of North America who may not realize that the United States and Canada are…well, big countries.

    From https://www.youtube.com/watch?v=cvua6zPIi7c.

    I’m tired of looking out the window of the airplane
    I’m tired of traveling, I want to be somewhere

    From https://genius.com/Talking-heads-the-big-country-lyrics.

    Get the Balance Right

    Have you ever created content that contradicts itself?

    Let me take you back to 1978, when the Who released an album entitled “Who Are You”—whose title song is beloved by identity/biometrics professionals over 45 years later.

    Fair use. From the album “Who Are You.”

    But there’s another song on the album that seems at first glance to speak to the times of 1978.

    Bands of the previous decade, like the Who, had apparently been eclipsed by newcomers like the Sex Pistols, a band that had already imploded.

    In this environment, the Who recorded a song called “Music Must Change,” a song that seemed to speak to the changing of the guard.

    Until you listened to the song’s obscure lyrics and orchestral backing, which make as much sense as an entire double album about a musician spitting at his audience. (That album would come in 1979.)

    Meet the new song…same as the old song.

    From https://youtu.be/ROG9llPP9qE?si=nyeRi2bXIiOCjNUh.

    Did the Columbia Study “Discover” Fingerprint Patterns?

    As you may have seen elsewhere, I’ve been wondering whether the widely-publicized Columbia University study on the uniqueness of fingerprints is anything more than a simple “discovery” of fingerprint patterns, which we’ve known about for years. But to prove or refute my suspicions, I had to read the study first.

    My initial exposure to the Columbia study

    I’ve been meaning to delve into the minutiae of the Columbia University fingerprint study ever since I initially wrote about it last Thursday.

    (And yes, that’s a joke. The so-called experts say that the word “delve” is a mark of AI-generated content. And “minutiae”…well, you know.)

    If you missed my previous post, “Claimed AI-detected Similarity in Fingerprints From the Same Person: Are Forensic Examiners Truly ‘Doing It Wrong’,” I discussed a widely-publicized study by a team led by Columbia University School of Engineering and Applied Science undergraduate senior Gabe Guo. Columbia Engineering itself publicized the study with the attention-grabbing headline “AI Discovers That Not Every Fingerprint Is Unique,” coupled with the sub-head “we’ve been comparing fingerprints the wrong way!”

    There are three ways to react to the article:

    1. Gabe Guo, who freely admits that he knows nothing about forensic science, is an idiot. For decades we have known that fingerprints ARE unique, and the original forensic journals were correct in not publishing this drivel.
    2. The brave new world of artificial intelligence is fundamentally disproving previously sacred assumptions, and anyone who resists these assumptions is denying scientific knowledge and should go back to their caves.
    3. Well, let’s see what the study actually SAYS.

    Until today, I hadn’t had a chance to read the study. But I wanted to do this, because a paragraph in the article that described the study got me thinking. I needed to see the study itself to confirm my suspicions.

    “The AI was not using ‘minutiae,’ which are the branchings and endpoints in fingerprint ridges – the patterns used in traditional fingerprint comparison,” said Guo, who began the study as a first-year student at Columbia Engineering in 2021. “Instead, it was using something else, related to the angles and curvatures of the swirls and loops in the center of the fingerprint.” 

    From https://www.newswise.com/articles/ai-discovers-that-not-every-fingerprint-is-unique

    Hmm. Are you thinking what I am thinking?

    What were you thinking?

    I’ll preface this by saying that while I have worked with fingerprints for 29 years, I am nowhere near a forensic expert. I know enough to cause trouble.

    But I know who the real forensic experts are, so I’m going to refer to a page on onin.com, the site created by Ed German. German, who is talented at explaining fingerprint concepts to lay people, created a page to explain “Level 1, 2 and 3 Details.” (It also explains ACE-V, for people interested in that term.)

    Here are German’s quick explanations of Level 1, 2, and 3 detail. These are illustrated at the original page, but I’m just putting the textual definitions here.

    • “Level 1 includes the general ridge flow and pattern configuration. Level 1 detail is not sufficient for individualization, but can be used for exclusion. Level 1 detail may include information enabling orientation, core and delta location, and distinction of finger versus palm.”
    • “Level 2 detail includes formations, defined as a ridge ending, bifurcation, dot, or combinations thereof. The relationship of Level 2 detail enables individualization.”
    • “Level 3 detail includes all dimensional attributes of a ridge, such as ridge path deviation, width, shape, pores, edge contour, incipient ridges, breaks, creases, scars and other permanent details.”

    We’re not going to get into Level 3 in this post. But if you look at German’s summary of Level 2, you’ll see that he is discussing the aforementioned MINUTIAE (which, according to German, “enables individualization”). And if you look at German’s summary of Level 1, he’s discussing RIDGE FLOW, or perhaps “the angles and curvatures of the swirls and loops in the center of the fingerprint” (which, according to German, “is not sufficient for individualization”).
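    To make German’s Level 1 vs. Level 2 distinction concrete, here is a minimal sketch (the data structures and function names are my own illustration, not anything from German’s page or the study):

```python
from dataclasses import dataclass

@dataclass
class Minutia:
    """A Level 2 feature: a discrete ridge event. The constellation
    of these points is what enables individualization."""
    x: int
    y: int
    angle: float  # local ridge direction at the point, in radians
    kind: str     # "ending" or "bifurcation"

# Level 1 detail: overall ridge flow / pattern class. Per German,
# not sufficient for individualization, but usable for exclusion.
LEVEL1_PATTERNS = {"arch", "tented arch", "loop", "whorl"}

def level1_excludes(pattern_a: str, pattern_b: str) -> bool:
    """Two prints with different pattern classes cannot come from the
    same finger, so Level 1 alone can exclude -- but a shared pattern
    class proves nothing, which is why Level 2 minutiae are needed."""
    return pattern_a != pattern_b
```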

    Did Gabe Guo simply “discover” fingerprint patterns? On a separate onin.com page, common fingerprint patterns are cited (arch, loop, whorl). Is this the same thing that Guo (who possibly has never heard of loops and whorls in his life) is talking about?

    From Antheus Technology page, from NIST’s Appendix B to the FpVTE 2003 test document. I remember that test very well.

    I needed to read the original study to see what Guo actually said, and to determine if AI discovered something novel beyond what forensic scientists consider the information “in the center of the fingerprint.”

    So let’s look at the study

    I finally took the time to read the study, “Unveiling intra-person fingerprint similarity via deep contrastive learning,” as published in Science Advances on January 12. While there is a lot to read here, I’m going to skip to Guo et al’s description of the fingerprint comparison method used by AI. Central to this comparison is the concept of a “feature map.”

    Figure 2A shows that all the feature maps exhibit a statistically significant ability to distinguish between pairs of distinct fingerprints from the same person and different people. However, some are clearly better than others. In general, the more fingerprint-like a feature map looks, the more strongly it shows the similarity. We highlight that the binarized images performed almost as well as the original images, meaning that the similarity is due mostly to inherent ridge patterns, rather than spurious characteristics (e.g., image brightness, image background noise, and pressure applied by the user when providing the sample). Furthermore, it is very interesting that ridge orientation maps perform almost as well as the binarized and original images—this suggests that most of the cross-finger similarity can actually be explained by ridge orientation.

    From https://www.science.org/doi/10.1126/sciadv.adi0329.

    (The implied reversal of the forensic order of things is interesting. In the study’s framing, ridge orientation, which yields a rich, fingerprint-like map, carries more weight than mere minutiae points, which are just teeny little dots that don’t look like a fingerprint. Forensic examiners take the opposite view: they consider the minutiae more authoritative than the ridge flow.)
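    The excerpt doesn’t publish the study’s pipeline, but “ridge orientation map” has a standard meaning in fingerprint processing: a per-block estimate of local ridge direction computed from image gradients. A sketch of the classic gradient-based method (my own illustration of the generic technique, not the paper’s code):

```python
import math

def ridge_orientation_map(img, block=16):
    """Per-block ridge orientation estimated from image gradients
    (the classic doubled-angle averaging method).
    `img` is a 2D list of grayscale values."""
    h, w = len(img), len(img[0])

    def gx(r, c):  # horizontal gradient (central difference)
        return (img[r][min(c + 1, w - 1)] - img[r][max(c - 1, 0)]) / 2.0

    def gy(r, c):  # vertical gradient (central difference)
        return (img[min(r + 1, h - 1)][c] - img[max(r - 1, 0)][c]) / 2.0

    out = []
    for bi in range(h // block):
        row = []
        for bj in range(w // block):
            num = den = 0.0
            for r in range(bi * block, (bi + 1) * block):
                for c in range(bj * block, (bj + 1) * block):
                    dx, dy = gx(r, c), gy(r, c)
                    # Sum in the doubled-angle domain so that ridge
                    # directions theta and theta + pi (which are the
                    # same physical orientation) reinforce each other.
                    num += 2.0 * dx * dy
                    den += dx * dx - dy * dy
            row.append(0.5 * math.atan2(num, den))
        out.append(row)
    return out  # one angle per block, in (-pi/2, pi/2]
```

    The study’s point is that even this coarse, minutiae-free summary of a print carries enough signal to link different fingers of the same person.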

    Based upon the initial findings, Guo et al delved deeper. (Sorry, couldn’t help myself.) Specifically, they interrogated the feature maps.

    We observe a trend in the filter visualizations going from the beginning to the end of the network: filters in earlier layers exhibit simpler ridge/minutia patterns, the middle layers show more complex multidirectional patterns, and filters in the last layer display high-level patterns that look much like fingerprints—this increasing complexity is expected of deep neural networks that process images. Furthermore, the ridge patterns in the filter visualizations are all generally the same shade of gray, meaning that we can rule out image brightness as a source of similarity. Overall, each of these visualizations resembles recognizable parts of fingerprint patterns (rather than random noise or background patterns), bolstering our confidence that the similarity learned by our deep models is due to genuine fingerprint patterns, and not spurious similarities.

    From https://www.science.org/doi/10.1126/sciadv.adi0329.

    So what’s the conclusion?

    (W)e show above 99.99% confidence that fingerprints from different fingers of the same person share very strong similarities. 

    From https://www.science.org/doi/10.1126/sciadv.adi0329.

    And what are Guo et al’s derived ramifications? I’ll skip to the most eye-opening one, related to digital authentication.

    In addition, our work can be useful in digital authentication scenarios. Using our fingerprint processing pipeline, a person can enroll into their device’s fingerprint scanner with one finger (e.g., left index) and unlock it with any other finger (e.g., right pinky). This increases convenience, and it is also useful in scenarios where the original finger a person enrolled with becomes temporarily or permanently unreadable (e.g., occluded by bandages or dirt, ridge patterns have been rubbed off due to traumatic event), as they can still access their device with their other fingers.

    From https://www.science.org/doi/10.1126/sciadv.adi0329.

    However, the researchers caution that (as any good researcher would say when angling for funds) more research is needed. Their biggest concern was the small sample size they used in their experiments (60,000 prints), coupled with the fact that the prints were full and not partial fingerprints.
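    The authentication idea in the quote reduces to an ordinary threshold decision, just with a cross-finger similarity score behind it. A minimal sketch (the similarity function, scores, and threshold are hypothetical stand-ins, not the paper’s model):

```python
def verify(similarity, enrolled, probe, threshold=0.8):
    """Accept the probe finger if its similarity to ANY enrolled
    template clears the threshold -- the 'enroll with left index,
    unlock with right pinky' scenario described in the quote."""
    return any(similarity(t, probe) >= threshold for t in enrolled)

# Hypothetical precomputed scores standing in for a trained model.
scores = {
    ("alice_left_index", "alice_right_pinky"): 0.86,  # same person
    ("alice_left_index", "bob_right_index"): 0.41,    # different people
}

def toy_similarity(a, b):
    return scores.get((a, b), scores.get((b, a), 0.0))

print(verify(toy_similarity, ["alice_left_index"], "alice_right_pinky"))  # True
print(verify(toy_similarity, ["alice_left_index"], "bob_right_index"))    # False
```

    Everything hinges on that threshold: the scheme only works if genuine cross-finger pairs reliably score above it while every impostor pair scores below it.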

    What is unanswered?

    So let’s assume that the study shows a strong similarity between the ridges of fingerprints from the same person. Is this enough to show:

    • that the prints from two fingers on the same person ARE THE SAME, and
    • that the prints from two fingers on the same person are more alike than a print from ANY OTHER PERSON?

    Or to use a specific example, if we have Mike French’s fingers 2 (right index) and 7 (left index), are those demonstrably from the same person, while my own finger 2 is demonstrably NOT from Mike French?

    And what happens if my finger 2 has the same ridge pattern as French’s finger 2, yet is different from French’s finger 7? Does that mean that my finger 2 and French’s finger 2 are from the same person?

    If this happens, then the digital authentication example above wouldn’t work, because I could use my finger 2 to get access to French’s data.

    This could get messy.

    More research IS needed, and here’s what it should be

    If you have an innovative idea for a way to build an automobile, is it best to never talk to an existing automobile expert at all?

    Same with fingerprints. Don’t just leave the study with the AI folks. Bring the forensic people on board.

    And the doctors also.

    Initiate a conversation between the people who found this new AI technique, the forensic people who have used similar techniques to classify prints as arches, loops, whorls, etc., and the medical people who understand how the ridges are formed in the womb in the first place.

    If you get all the involved parties in one room, then perhaps they can work together to decide whether the technique can truly be used to identify people.

    I don’t expect that this discussion will settle once and for all whether every fingerprint is unique. At least not to the satisfaction of scientists.

    But bringing the parties together is better than not listening to critical stakeholders at all.

    You Can’t Make a Silk Purse Out of an AI-generated Sow’s Ear

    By Rictor Norton & David Allen from London, United Kingdom – Show Pig, CC BY 2.0, https://commons.wikimedia.org/w/index.php?curid=43222404

    I’m sure that you’ve heard the saying that “you can’t make a silk purse out of a sow’s ear.” Alternative phrases are “putting lipstick on a pig” or “polishing a turd.”

    In other words, if something is crappy, you can’t completely transform it into something worthwhile.

    Yet we persist in starting with crappy stuff anyway…such as surrendering our writing to generative AI and then trying to fix the resulting crap later.

    Which is why I’ve said that a human should ALWAYS write the first draft.

    The questionable job description

    Mike Harris found a job post asking for a human copyeditor to rework AI-generated content. See the details here.

    I’m sure that the unnamed company thought it was a great idea to have AI generate the content…until they saw what AI generated.

    Rather than fix the source of the problem, the company has apparently elected to hire someone to rework the stuff.

    A human should always write the first draft

    Why not have a human write the stuff in the first place…as I recommended last June? Let me borrow what I said before…

    I’m going to stick with the old-fashioned method of writing the first draft myself. And I suggest that you do the same. Doing this lets me:

    • Satisfy my inflated ego. I’ve been writing for years and take pride in my ability to outline and compose a piece of text. I’ve created thousands upon thousands of pieces of content over my lifetime, so I feel I know what I’m doing.
    • Iterate on my work to make it better. Yes, your favorite generative AI tool can crank out a block of text in a minute. But when I’m using my own hands on a keyboard to write something, I can zoom up and down throughout the text, tweaking things, adding stuff, removing stuff, and sometimes copying everything to a brand new draft and hacking half of it away. It takes a lot longer, but in my view all of this iterative activity makes the first draft much better, which makes the final version even better still.
    • Control the tone of my writing. One current drawback of generative AI is that, unless properly prompted, it often delivers bland, boring text. Creating and iterating the text myself lets me dictate the tone of voice. Do I want to present the content as coming from a knowledgeable Sage? Does the text need the tone of a Revolutionary? I want to get that into the first draft, rather than having to rewrite the whole thing later to change it.

    I made a couple of other points in that original LinkedIn article, but I’m…um…iterating. I predict there will come a time when I WON’T be able to sleep on my text any more, and these days the “generated text” flag has been replaced by HUMAN detection of stuff that was obviously written by a bot.

    And that’s more dangerous than any flag.

    But if you insist on going the cheap route and outsourcing your writing to a bot…you get what you pay for.

    If you want your text to be right the FIRST time…

    Claimed AI-detected Similarity in Fingerprints From the Same Person: Are Forensic Examiners Truly “Doing It Wrong”?

    I shared some fingerprint-related information on my LinkedIn feed and other places, and I thought I’d share it here.

    Along with an update.

    You’re doing it wrong

    Forensic examiners, YOU’RE DOING IT WRONG, at least according to this bold claim:

    “Columbia engineers have built a new AI that shatters a long-held belief in forensics–that fingerprints from different fingers of the same person are unique. It turns out they are similar, only we’ve been comparing fingerprints the wrong way!” (From Newswise)

    Couple that claim with the initial rejection of the paper by multiple forensic journals because “it is well known that every fingerprint is unique” (apparently the reviewer never read the NAS report), and you have the makings of a sexy story.

    Or do you?

    And what is the paper’s basis for the claim that fingerprints from the same person are NOT unique?

    “The AI was not using ‘minutiae,’ which are the branchings and endpoints in fingerprint ridges – the patterns used in traditional fingerprint comparison,” said Guo, who began the study as a first-year student at Columbia Engineering in 2021. “Instead, it was using something else, related to the angles and curvatures of the swirls and loops in the center of the fingerprint.” (From Newswise)

    Perhaps there are similarities in the ridge patterns at the center of a print, but that doesn’t negate the uniqueness of the bifurcation and ridge ending locations throughout the print. Guo’s method uses less of the distal fingerprint than traditional minutiae analysis does.
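    For intuition only: the study’s actual model is a deep network, but the core idea of comparing ridge-flow angles rather than minutiae points can be sketched as a toy calculation. Everything below (the function name, the fold-into-90-degrees scoring, the sample angle lists) is my own illustration, not the paper’s method; real orientation fields are dense 2D maps extracted from images.

    ```python
    def orientation_similarity(field_a, field_b):
        """Toy similarity score between two ridge-orientation fields.

        Each field is a flat list of ridge angles in degrees. Ridge
        orientations are periodic with period 180 (a ridge at 10 degrees
        is the same line as one at 190), so each difference is folded
        into [0, 90]. Returns a score in [0, 1]; 1.0 means the fields
        have identical orientations everywhere.
        """
        if len(field_a) != len(field_b):
            raise ValueError("orientation fields must be the same size")
        total = 0.0
        for a, b in zip(field_a, field_b):
            diff = abs(a - b) % 180       # fold into [0, 180)
            diff = min(diff, 180 - diff)  # fold into [0, 90]
            total += diff
        mean_diff = total / len(field_a)
        return 1.0 - mean_diff / 90.0

    # Hypothetical central ridge flow from two fingers of one person:
    # the angles are similar even though the minutiae would differ.
    left_thumb = [10, 35, 60, 85, 110, 135]
    right_thumb = [12, 33, 63, 82, 112, 138]
    print(orientation_similarity(left_thumb, right_thumb))
    ```

    The sample prints out a score close to 1.0, loosely mirroring the paper’s claim that prints from different fingers of the same person share central ridge-flow characteristics, while two unrelated orientation fields would score much lower.
    
    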

    But maybe there are forensic applications for this alternate print comparison technique, at least as an investigative lead. (Let me repeat that: “investigative lead.”) Courtroom use will be limited because there is no AI equivalent of an expert who can explain to the court how the comparison was made, or whether another AI algorithm would yield the same results.

    Thoughts?

    https://www.newswise.com/articles/ai-discovers-that-not-every-fingerprint-is-unique

    The update

    As I said, I shared the piece above to several places, including one frequented by forensic experts. One commenter in a private area offered the following observation, in part:

    What was the validation process? Did they have a qualified latent print examiner confirm their data?

    From a private source.

    Before you dismiss the comment as reflecting a stick-in-the-mud forensic old fogey who does not recognize the great wisdom of our AI overlords, remember (as I noted above) that forensic experts are required to testify in court about things like this. If artificial intelligence is claimed to identify relationships between fingers from the same person, you’d better make really sure that this is true before someone is put to death.

    I hate to repeat the phrase used by scientific study authors in search of more funding, but…

    …more research is needed.

    Working With Familiar Faces

    Often consultants work with someone whom they have never met before.

    Sometimes they get to work with friends they have known from previous experiences, which can be a good thing.

    From “We Are Your Friends.” https://vimeo.com/11277708.

    First example: A couple of years ago, when consulting for a large client, I worked on a proposal with one of the client’s partners, and one of the employees in the partner organization happened to be a former coworker from MorphoTrak.

    Second example: This morning I’m meeting with Gene Volfe, a former coworker at Incode Technologies (we started at Incode on the same day). We’re working on a project together that requires Gene’s demand generation skills and my content skills…which we will be employing for the benefit of another former MorphoTrak coworker.

    Third example: Speaking of Incode, two of my former coworkers are reuniting at a different company. As a sign that these two know each other well, one made a point of saying to the other, “Go Bills!”

    And yes, Gene, I remember how you like Google Docs…

    When Follower Counts Matter

    I see social posts in which the authors thank their followers for getting them to a certain follower count, and I receive Instagram messages promising me that for just a little money I can get tens of thousands of followers.

    I definitely ignore the latter messages, and personally I ignore the former messages also.

    Because follower counts don’t matter.

    Just because Bredemarket has X followers doesn’t necessarily mean that Bredemarket will make lots of money. I could use viral tactics to attract countless followers who would never, ever purchase Bredemarket’s marketing and writing services.

    In fact, I could live just fine with 25 followers…provided that they’re the RIGHT followers.

    But while this is normally true, I’ve run into a couple of instances in which follower counts DO matter. Because you need a certain heft to get the large companies to pay attention to you.

    My invisible WhatsApp channel

    A little over a month ago I started a WhatsApp channel devoted to identity, biometrics, ID documents and geolocation. Why?

    I began mulling over whether I should create my own WhatsApp channel, but initially decided against it….

    I’d just follow the existing WhatsApp channels on identity, biometrics, and related topics.

    But I couldn’t find any.

    From https://bredemarket.com/2023/11/29/announcing-a-whatsapp-channel-for-identity-biometrics-id-documents-and-geolocation/.

    So I started my own to fill the void, then waited for similarly interested WhatsApp users to find my channel via search.

    But there was a catch.

    Although it isn’t explicitly documented anywhere, it appears that the WhatsApp channel search only returns channels that already have thousands of subscribers. When I searched for a WhatsApp channel for “identity,” WhatsApp returned nothing.

    WhatsApp channel search for “identity.”

    As a result, I found myself promoting my WhatsApp channel everywhere EXCEPT WhatsApp.

    Including this blog post. If you want to subscribe to my WhatsApp channel “Identity, Biometrics, ID Documents, and Geolocation,” click on the link https://www.whatsapp.com/channel/0029VaARoeEKbYMQE9OVDG3a.


    My non-linkable YouTube channel

    I also have a YouTube channel, and you CAN find that. But it also suffers from a lack of subscribers.

    On Monday I received an ominous-sounding email from YouTube with the title “Your channel has lost access to advanced features.”

    The opening paragraph read as follows:

    To help keep our community safe, we limit some of our more powerful features to channels who have built and maintained a positive channel history or who have provided verification.

    Ah, verification. I vaguely remember having to provide Alphabet with my ID a few months ago.

    The message continued:

    As of now, your channel doesn’t have sufficient channel history. It has lost access to advanced features. This may have happened because your channel did not follow our Community Guidelines.

    While I initially panicked when I read that last sentence, I then un-panicked when I realized that this may NOT have happened because of a Community Guidelines violation. The more likely culprit was an insufficient channel history.

    Your channel history data is used to determine whether your content and activity has consistently followed YouTube’s Community Guidelines.

    Your channel history is a record of your:

    Channel activity (like video uploads, live streams, and audience engagement.)

    Personal data related to your Google Account.

    When and how the account was created.

    How often it’s used.

    Your method of connecting to Google services.

    Most active channels already have sufficient channel history to unlock advanced features without any further action required. 

    From https://support.google.com/youtube/answer/9891124#channelhistory.

    Frankly, my YouTube channel doesn’t have a ton of audience engagement. Now I could just start uploading a whole bunch of videos…but then I risk violating the Community Guidelines by getting a “spamming” accusation.

    As it turns out, there’s only one “advanced feature” that I really miss: the ability to “Add external links to your video descriptions.” And I’m trying to tone down my use of external links because Alphabet (on YouTube) and Meta (on Instagram) discourage their use anyway.

    But for now, the external links previously added to videos such as this one are disabled.

    From https://www.youtube.com/watch?v=oIB9SPI-yiI. The link at the bottom of the description is non-clickable.

    Perhaps if I post long-form videos more frequently and get thousands of subscribers, I will get enough “audience engagement” to restore the advanced features.

    So if you want to increase my YouTube follower count, go to https://www.youtube.com/@johnbredehoftatbredemarket2225 and click the Subscribe button.

    So let’s get followers

    But the question remains: how do I get thousands of people to subscribe to my WhatsApp channel and my YouTube channel?

    Perhaps I can adapt a really cool TikTok challenge to WhatsApp and YouTube.

    You can create the exciting Savage Challenge on TikTok and ask your audience to participate in it. In this challenge, people will have to learn and follow the choreography of Megan Thee Stallion’s highly loved song, “Savage.”

    From https://www.engagebay.com/blog/tiktok-challenges/

    I’m not familiar with that particular song, so I’d better check it out.

    Well…

    I’m not sure if this fits into my “sage” persona.

    And if I go to the local car wash with a baseball bat and start knocking out car windows, I may end up in jail. And that usually does NOT increase the follower count. Because as Johnny Somali presumably found out in Japan, you can’t film videos when you’re in jail.