I recently downloaded an ebook from Canto entitled “The ROI of DAM: How to Prove the Business Value of Digital Asset Management.” If you would like to download it also, visit this page.
Why do you need to manage digital assets? Because if your company has thousands or millions of digital assets, individual ones will be so hard to find that you’ll start adding an “N” to the “DAM” acronym.
Canto argues that its digital asset management solution delivers positive ROI in the following ways:
Saving time and reducing waste
Accelerating speed to market and improving content quality
Reducing asset production costs
Boosting revenue with brand consistency
Minimizing business risk
The ebook quotes some numbers: $20,000 savings here, more savings there.
Of course, Canto isn’t the only DAM in town, as my former coworker Krassimir Boyanov will not hesitate to tell you. Krassimir heads KBWEB Consult, a boutique technology firm that provides consulting services for Adobe Experience Manager users.
In July 2024, IDC examined the business value of Adobe Experience Manager (AEM) Assets. Based on interviews with AEM Assets customers, IDC concluded that the interviewed customers could realize an average annual cost saving of $9.04 million per organization. These cost savings came from multiple sources:
Reduced risk of using out-of-date/unapproved assets (52%)
Reduced risk of accidental disclosure of assets (27%)
Reduced spending on duplicative (62%) or unused (40%) assets
Reduced agency spending by completing work in-house (24%)
Reduced go-to-market time (55%)
Reduced time for content to go from creation to production (47%)
Reduced time for content in a new form factor (39%)
Reduced time to create a new digital asset (66%)
Reduced time to repurpose an existing digital asset (73%)
Reduced time to create a rendition of an asset (60%)
Those are some DAM good numbers. And KBWEB Consult (and IDC) didn’t gate them.
Tech marketers, do you have similar return on investment numbers you would like to share with your end customers? Bredemarket can help you share those numbers. Talk to me before your competitors return YOUR investment to THEM.
Are you having trouble finding an asset such as a digital identity or a commercial asset? If you are, there are ways to make things easier to find.
An example from the identity world
Identity Jedi David Lee recently shared his thoughts on “The Hidden Cost of Bad Identity Data (and How to Fix It).” Lee didn’t focus on the biometric data, but instead on the textual data that is associated with a digital identity.
“Let’s say you’re kicking off a new identity program. You know you need user location to drive access policies, governance rules, or onboarding flows. But your authoritative source has location data in five different formats—some say “NY,” others say “New York,” and some list office addresses with zip codes and floor numbers.
“You tell yourself: “We’ll clean it up later.”
“What you’ve really done is commit your future self to a much more expensive project.”
Garbage in, garbage out.
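Lee's five-formats location problem is exactly the kind of thing a small normalization pass can catch before the data ever hits an IAM system. The alias table, formats, and function below are hypothetical, a sketch of the idea rather than anything from Lee's post:

```python
# Hypothetical sketch: normalize inconsistent location strings before they
# reach the IAM system. The alias table and formats are illustrative only.
import re

LOCATION_ALIASES = {
    "ny": "New York",
    "new york": "New York",
    "nyc": "New York",
}

def normalize_location(raw: str) -> str:
    """Map a messy location value onto one canonical form."""
    cleaned = raw.strip().lower()
    # Strip trailing zip codes, floor numbers, etc., e.g. "new york 10001 Fl 3"
    cleaned = re.sub(r"\s+\d{5}(-\d{4})?.*$", "", cleaned)
    return LOCATION_ALIASES.get(cleaned, raw.strip().title())

print(normalize_location("NY"))                   # → New York
print(normalize_location("new york 10001 Fl 3"))  # → New York
```

The point isn't this particular table; it's that the normalization rules live in one agreed-upon place, applied before ingestion, rather than being cleaned up "later."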
An example from the commerce world
Krassimir Boyanov of KBWEB Consult provides another example of a problem in his post “Why AEM Assets Smart Tagging Makes Your Marketing Work Easier.” Let’s say that you’re managing the images (the “assets”) that display on a company’s website. You have thousands if not millions of images to manage. How do you find a particular image?
One way to do this is to “tag” each image with descriptive information.
But if you do it wrong, there will be problems.
“Tagging is inconsistent. If 10 people are tagging the items, the tags will probably be inconsistent. While one person tags an item as a “car,” another may tag a similar item as an “automobile.” Although the two assets are similar, this is hidden because of inconsistent tag use.”
Again, garbage in, garbage out.
An organizational solution from the identity world
Lee and Boyanov approach these similar problems from two perspectives.
Lee, as an Identity and Access Management (IAM) expert, approaches this as a business problem and offers the following recommendations (among others):
“Clean early, not late: Push for authoritative sources to normalize and codify the data before it hits the IAM system….
“Push accountability upstream: Don’t accept ownership of fixing problems you don’t control. Instead, elevate the data issue to the right stakeholder (hint: HR, IT, or Legal).”
While Lee can certainly speak to the technologies that can normalize and codify the data, in this post he prefers to concentrate on the organizational issues that cause dirty data, and on how to prevent those issues from recurring.
A technological solution from the commerce world
Boyanov can also speak to business and organizational issues as an Adobe Experience Manager consultant who has helped multiple organizations implement the Adobe product. But in this case he concentrates on a technological approach offered by Adobe:
A taxonomy is a system for organizing tags based on shared characteristics, usually structured hierarchically to suit organizational needs. The structure can help you find a tag faster or impose a generalization. Example: you need to subcategorize stock imagery of cars, so the taxonomy arranges car-related tags into a hierarchy.
Once the taxonomy is defined, assets can be tagged (preferably automatically) in accordance with the hierarchy.
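A hierarchical taxonomy like the one Boyanov describes can be sketched as a nested structure. The specific categories and the lookup helper below are invented purely for illustration, not taken from his post:

```python
# Hypothetical car taxonomy, sketched as a nested structure.
# The categories are illustrative only.
TAXONOMY = {
    "vehicles": {
        "cars": {
            "sedan": {},
            "suv": {},
            "convertible": {},
        },
        "trucks": {},
    }
}

def path_to(tag: str, tree: dict = TAXONOMY, prefix: tuple = ()) -> tuple:
    """Return the full hierarchical path for a tag, or () if absent."""
    for node, children in tree.items():
        if node == tag:
            return prefix + (node,)
        found = path_to(tag, children, prefix + (node,))
        if found:
            return found
    return ()

print(path_to("suv"))  # → ('vehicles', 'cars', 'suv')
```

Because every tag resolves to one canonical path, the “car” versus “automobile” inconsistency disappears: taggers (human or automated) pick from the hierarchy instead of free-typing synonyms.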
Presumably David Lee’s identity world can similarly come up with a method to standardize addresses BEFORE they are added to an IAM system.
As deep as any ocean
Whether you’re dealing with a digital identity or a commercial asset, you need to ensure that you can find this asset in the future. This requires planning beforehand.
And a content creation project also requires planning beforehand, such as asking questions before beginning the project.
If you are an identity/biometric or technology firm that requires content creation, or perhaps proposal or analysis services, Bredemarket can help. After all, content creation is science…and art.
Adobe is implementing C2PA-compliant content credentials in AEM [Adobe Experience Manager]. These can tell you the facts about an asset (in C2PA terms, the “provenance”): whether the image of a product from a particular manufacturer was created by that manufacturer, whether a voice recording of Anil Chakravarthy was created by Anil Chakravarthy, or whether an image of a peaceful meadow was actually generated by artificial intelligence (such as Adobe Firefly).
How do you elicit feedback from your customers? Pop-ups on your website? Emails?
Well, back when dinosaurs roamed the planet, none of these methods was available.
So you had to resort to other methods.
Corporate comedian Jan McInnis likes to share stories of her early days in comedy, when she was working comedy clubs instead of corporate conventions. Comedy clubs feature several comedians a night, and some do better than others.
And sometimes the same comedian gets different reactions from different audiences.
McInnis was once booked at a club for a week. The club owner was there for the first show, which went great. The owner went on a trip, and as McInnis relates in detail, she bombed for the next several shows. Afterwards, the club owner returned and asked how the week went.
“My first thought was to say the shows were fine and pretend that I didn’t notice the silent stares from 7 separate audiences….BUT I knew she’d see the comment cards and then know that I was not only a terrible comic, but a liar.”
Ah, those pesky comment cards, the dinosaur era version of Google Forms or Adobe Experience Manager Forms. (Gotta promote my favorite AEM consultant. But I digress.)
I won’t give away how McInnis answered the question (read about it here), but I will say that honesty is (usually) the best policy.
But regardless of how you survey your customers, the very act of doing so provides you with important knowledge. Not just data—knowledge.
For many years, the baseline for high-quality capture of fingerprint and palm print images has been to use a resolution of 500 pixels per inch. Or maybe 512 pixels per inch. Whatever.
The crime scene (latent) folks weren’t always satisfied with this, so they pushed to capture latent fingerprint and latent palm print images at 1000 pixels per inch. Pardon me, 1024.
But beyond this, the resolution of captured prints hasn’t really changed in decades. I’m sure some people have been capturing prints at 2000 (2048) pixels per inch, but there aren’t massive automated biometric identification systems that fully support this resolution from end to end.
But that may be changing.
One important truth about infant fingerprints
For about as long as latent examiners have pursued 1000 ppi print capture, people outside of the criminal justice arena have been looking at fingerprints for a very different purpose.
Our normal civil fingerprint processes require us to identify people via fingerprints beginning at the age of 18, or perhaps at the age of 12.
But how do we identify people in those first 12 years?
More specifically, can we identify someone via their fingerprints at birth, and then authenticate them as an adult by comparing to those original prints?
It’s a dream, but many have pursued this dream. Dr. Anil Jain at Michigan State University has pursued this for years, and co-authored a 2014 paper on the topic.
Given that children, as well as the adults, in low income countries typically do not have any form of identification documents which can be used for this purpose [vaccination], we address the following question: can fingerprints be effectively used to recognize children from birth to 4 years? We have collected 1,600 fingerprint images (500 ppi) of 20 infants and toddlers captured over a 30-day period in East Lansing, Michigan and 420 fingerprints of 70 infants and toddlers at two different health clinics in Benin, West Africa.
At the time, it probably made sense to use 500 pixel per inch scanners to capture the prints, since developing countries don’t have a lot of money to throw around on expensive 1000 ppi scanners. But the use of regular scanners runs counter to a very important truth about infants and their fingerprints. Are you sitting down?
Because infants are smaller than adults, infant fingerprints are smaller than adult fingerprints.
Think about it. The standard FBI fingerprint card assumes that a rolled fingerprint occupies 1.6 inches x 1.5 inches of space. If you were to roll an infant fingerprint, it would occupy much less than that. Heck, I don’t even know if an infant’s entire FINGER is 1.6 inches long.
So the capture device is obtaining these teeny tiny ridges, and these teeny tiny ridge endings, and these teeny tiny bifurcations. Or trying to. And if those second-level details can’t be captured, then you’re not going to get the minutiae, and your fingerprint matching is going to fail.
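The pixel budget makes the problem concrete. A quick arithmetic sketch, using the FBI rolled-print box dimensions cited above and the resolutions mentioned in this post:

```python
# Pixel dimensions of the standard 1.6 in x 1.5 in rolled-print box
# at several capture resolutions. An infant print occupies only a
# fraction of this box, so it gets only a fraction of these pixels.
ROLL_WIDTH_IN, ROLL_HEIGHT_IN = 1.6, 1.5

for ppi in (500, 1000, 2000):
    w, h = int(ROLL_WIDTH_IN * ppi), int(ROLL_HEIGHT_IN * ppi)
    print(f"{ppi:>4} ppi: {w} x {h} pixels")
# 500 ppi: 800 x 750; 1000 ppi: 1600 x 1500; 2000 ppi: 3200 x 3000
```

Since an infant print fills only a small corner of that box, raising the capture resolution is the only way to throw enough pixels at those tiny ridges and bifurcations.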
So a decade later, researchers today are adopting a newer approach, according to a Biometric Update summary of an ID4Africa webinar. (This particular portion is at the very end of the webinar, at around the 2 hour 40 minute mark.)
A video presentation from Judge Lidia Maejima of the Court of Justice of Parana, Brazil introduced the emerging legal framework for biometric identification of infants. Her representative Felipe Hay explained how researchers in Brazil developed 5,000 dpi scanners that, he says, accurately record the minutiae of infants’ fingerprints.
Did you capture that? We’re moving from five hundred pixels per inch to FIVE THOUSAND pixels per inch. (Or maybe 5120.) Whether even that resolution is capable of capturing infant fingerprint detail remains to be seen.
And as Dr. Joseph Atick noted, all this research is still in its…um…infancy. We won’t know for years whether the algorithms can truly match infant fingerprints to child or adult fingerprints.
By the way, when talking about digital images, Adobe notes that the correct term is pixels per inch, not dots per inch. DPI specifically refers to printer resolution, which is appropriate when you’re printing a fingerprint card but not when you’re displaying an image on a screen.
Back before Jobs co-founded Apple Computer, typing “71077345” into a dedicated calculator (with an “LCD” style typeface) and flipping it upside down showed a recognizable word. Isn’t that cool?
Now, hardly anyone has dedicated calculators, and the one on my smartphone has a “normal” typeface that ruins the trick.
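The trick depends on seven-segment digit shapes resembling letters when inverted. A small sketch reproduces it; reversing the string simulates flipping the display over:

```python
# Seven-segment digits read upside down resemble letters; reversing the
# string simulates turning the calculator over.
UPSIDE_DOWN = {"0": "O", "1": "I", "3": "E", "4": "h", "5": "S", "7": "L", "8": "B"}

def flip(display: str) -> str:
    """Read a calculator display as if the device were upside down."""
    return "".join(UPSIDE_DOWN.get(digit, "?") for digit in reversed(display))

print(flip("71077345"))  # → ShELLOIL
```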
A single loss does not define your entire life. As the sporting world teaches us, Olympic losers and other competitive losers can become winners—if not in sports, then elsewhere.
The human drama of athletic competition
When I was young, the best variety show on television didn’t involve Bob Mackie dresses. It instead featured Jim McKay, introducing the show as follows.
Spanning the globe to bring you the constant variety of sport…the thrill of victory…and the agony of defeat…the human drama of athletic competition…This is ABC’s Wide World of Sports!
A technological marvel when originally introduced, this variety show brought sporting events to American viewers from all over the world.
And these viewers learned that in competitions, there are winners and losers.
But since Wide World of Sports focused on the immediate (well, with a bit of tape delay), viewers never learned about the losers who became winners.
Jim McKay and his colleagues were not retrospective; they were known for the moment. In one instance that was NOT on tape delay, Jim McKay spoke his most consequential words: “They’re all gone.”
Vinko Bogataj
(Note: some of this content is repurposed because repurposing is cool.)
Turning to less lethal sporting events, remember Jim McKay’s phrase “the agony of defeat”?
For American TV watchers, this phrase was personified by Vinko Bogataj.
The agony of defeat.
Hailing from a country then known as Yugoslavia (now Slovenia), Bogataj was competing in the 1970 World Ski Flying Championships in Oberstdorf, in a country then known as West Germany (now Germany). His daughter described what happened:
It was bad weather, and he had to wait around 20 minutes before he got permission to start. He remembers that he couldn’t see very good. The track was very bad, and just before he could jump, the snow or something grabbed his skis and he fell. From that moment, he doesn’t remember anything.
While Bogataj suffered a concussion and a broken ankle, the accident was captured by the Wide World of Sports film crew, and Bogataj became famous on the “capitalist” side of the Cold War.
“He didn’t have a clue he was famous,” (his daughter) Sandra said. That changed when ABC tracked him down in Slovenia and asked him to attend a ceremony in New York to celebrate the 20th anniversary of “Wide World of Sports” in 1981.
At the gala, Bogataj received the loudest ovation among a group that included some of the best-known athletes in the world. The moment became truly surreal for Bogataj when Muhammad Ali asked for his autograph.
Bogataj is now a painter, but his 1970 performance still follows him.
Over 20 years after the infamous ski jump, Terry Gannon interviewed Bogataj for ABC. As Gannon recounted it on X (then Twitter), Bogataj “got in a fender bender on the way. His first line..’every time I’m on ABC I crash.'”
Some guy at the Athens Olympics in 2004
Since the Paris Olympics is taking place as I write this, people are paying a lot of attention to present and past Olympics.
The 2004 Olympics in Athens were notable, taking place in the country where the original Olympics were held.
But during that year, people may have missed some of the important stories that took place. We pay attention to winners, not losers.
Take the men’s 200 meter competition. It began with 7 heats, with the top competitors from the heats advancing.
Within the 7 heats, Heat 4 was a run-of-the-mill race, with the top four sprinters advancing to the next round. If I were to read their names to you, you’d probably reward me with a blank stare.
But if I were to read the 5th place finisher to you, the guy who failed to advance to the next round, you’d recognize the name.
KBWEB Consult tells the story of another competitor in the same 200 meter event in Athens. Chris Lambert participated in Heat 3, but didn’t place in the first four positions and therefore didn’t advance.
Nor did he place in the fifth position like Usain Bolt did in Heat 4.
Actually, he technically didn’t place at all. His performance is marked with a “DNF,” or “did not finish.”
You see, at about the 50 meter point of the 200 meter event, Lambert pulled a hamstring.
And that ended his Olympic competition dreams forever. By the time the Olympics were held in Lambert’s home country of the United Kingdom in 2012, he was not a competitor, but a volunteer for the London Olympics.
But Lambert learned much from his competitive days, and now works for Adobe.
KBWEB Consult (who consults on Adobe Experience Manager implementations) tells the full story of Chris Lambert and what he learned in its post “Expert Coaching From KBWEB Consult.”
I haven’t done one of these in a while, but it’s important to remember that just because you lost a particular competition doesn’t mean that all is lost. We need to remember this whether we are a 200 meter runner who didn’t advance from their heat, or whether we are a job applicant receiving yet another “we are moving in a different direction” form letter.
In the meantime, take care of yourself, and each other.
Yes, I’m stealing the Biometric Update practice of combining multiple items into a single post, but this lets me take a brief break from identity (mostly) and examine three general technology stories:
Advances in speech neuroprosthesis (the Pat Bennett / Stanford University story).
The benefits of Dynamic Media for Adobe Experience Manager users, as described by KBWEB Consult.
The benefits of graph databases for Identity and Access Management (IAM) implementations, as described by IndyKite.
Neuroprosthetics “is a discipline related to neuroscience and biomedical engineering concerned with developing neural prostheses, artificial devices to replace or improve the function of an impaired nervous system.”
Various news sources highlighted the story of amyotrophic lateral sclerosis (ALS) patient Pat Bennett and her somewhat-enhanced ability to formulate words, resulting from research at Stanford University.
Because I was curious, I sought the Nature article that discussed the research in detail, “A high-performance speech neuroprosthesis.” The article describes a proof of concept of a speech brain-computer interface (BCI).
Here we demonstrate a speech-to-text BCI that records spiking activity from intracortical microelectrode arrays. Enabled by these high-resolution recordings, our study participant—who can no longer speak intelligibly owing to amyotrophic lateral sclerosis—achieved a 9.1% word error rate on a 50-word vocabulary (2.7 times fewer errors than the previous state-of-the-art speech BCI) and a 23.8% word error rate on a 125,000-word vocabulary (the first successful demonstration, to our knowledge, of large-vocabulary decoding). Our participant’s attempted speech was decoded at 62 words per minute, which is 3.4 times as fast as the previous record and begins to approach the speed of natural conversation (160 words per minute).
For Bennett, the (ALS) deterioration began not in her spinal cord, as is typical, but in her brain stem. She can still move around, dress herself and use her fingers to type, albeit with increasing difficulty. But she can no longer use the muscles of her lips, tongue, larynx and jaws to enunciate clearly the phonemes — or units of sound, such as sh — that are the building blocks of speech….
After four months, Bennett’s attempted utterances were being converted into words on a computer screen at 62 words per minute — more than three times as fast as the previous record for BCI-assisted communication.
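For readers wondering what “word error rate” means in the quoted results: it’s the standard speech-recognition metric, the word-level edit distance between what was said and what was decoded, divided by the number of words actually said. A minimal sketch (the example sentences are made up):

```python
# Word error rate: Levenshtein distance over word tokens, divided by the
# number of reference words. Example sentences are invented for illustration.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # Classic dynamic-programming edit distance over words.
    dist = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dist[i][0] = i
    for j in range(len(hyp) + 1):
        dist[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dist[i][j] = min(dist[i - 1][j] + 1,         # deletion
                             dist[i][j - 1] + 1,         # insertion
                             dist[i - 1][j - 1] + cost)  # substitution
    return dist[len(ref)][len(hyp)] / len(ref)

print(wer("i want some water", "i want sum water"))  # → 0.25
```

A 9.1% rate means roughly one word in eleven was wrong, which is why the 50-word-vocabulary result was such a leap over prior speech BCIs.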
Now let’s shift to companies that need to produce marketing collateral. Bredemarket produces collateral, but not at the scale that big companies require. A single company may have to produce millions of pieces of collateral, each of which is specific to a particular product, in a particular region, for a particular audience/persona. Even Bredemarket could potentially produce all sorts of content, if it weren’t so difficult to do so:
An Instagram carousel post about the Bredemarket 400 Short Writing Service, targeted to voice sales executives in the identity industry.
A TikTok reel about the Bredemarket 400 Short Writing Service, targeted to marketing executives in the AI industry.
All of this specialized content, using all of these different image and video formats? I’m not gonna create all that.
But as KBWEB Consult (a boutique technology consulting firm specializing in the implementation and delivery of Adobe Experience Cloud technologies) points out in its article “Implementing Rapid Omnichannel Messaging with AEM Dynamic Media,” Adobe Experience Manager has tools to speed up this process and create correctly-messaged content in ALL the formats for ALL the audiences.
One of those tools is Dynamic Media.
AEM Dynamic Media accelerates omnichannel personalization, ensuring your business messages are presented quickly and in the proper formats. Starting with a master file, Dynamic Media quickly adjusts images and videos to satisfy varying asset specifications, contributing to increased content velocity.
A graph database, also referred to as a semantic database, is a software application designed to store, query and modify network graphs. A network graph is a visual construct that consists of nodes and edges. Each node represents an entity (such as a person) and each edge represents a connection or relationship between two nodes.
Graph databases have been around in some variation for a long time. For example, a family tree is a very simple graph database….
Graph databases are well-suited for analyzing interconnections…
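The family-tree example above can be sketched in a few lines of code; the names and relationships here are invented purely for illustration:

```python
# A family tree as a minimal graph: nodes are people, edges are
# parent-child relationships. Names are made up for illustration.
from collections import defaultdict

edges = [("Grandma", "Mom"), ("Grandpa", "Mom"), ("Mom", "Alice"), ("Mom", "Bob")]

children = defaultdict(list)
for parent, child in edges:
    children[parent].append(child)

def descendants(person: str) -> set:
    """Walk the edges to collect every descendant of a node."""
    out = set()
    for child in children.get(person, []):
        out.add(child)
        out |= descendants(child)
    return out

print(sorted(descendants("Grandma")))  # → ['Alice', 'Bob', 'Mom']
```

Traversing relationships like this is exactly the interconnection analysis that graph databases optimize, which is why they map so naturally onto identity data.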
To see how this applies to identity and access management (IAM), I’ll turn to IndyKite, whose Lasse Andersen recently presented on graph database use in IAM (in a webinar sponsored by Strativ Group). IndyKite describes its solution as follows (in part):
A knowledge graph that holistically captures the identities of customers and IoT devices along with the rich relationships between them
A dynamic and real-time data model that unifies disconnected identity data and business metadata into one contextualized layer
Yes, I know that every identity company (with one exception) uses the word “trust,” and they all use the word “seamless.”
But this particular technology benefits banking customers (at least the honest ones) by using the available interconnections to provide all the essential information about the customer and the customer’s devices, in a way that does not inconvenience the customer. IndyKite claims “greater privacy and security,” along with flexibility for future expansion.
In other words, it increases velocity.
What is your technology story?
I hope you enjoyed this quick overview of these three technology advances.
But do you have a technology story that YOU want to tell?
Perhaps Bredemarket, the technology content marketing expert, can help you select the words to tell your story. If you’re interested in talking, let me know.
So unless someone such as an employer or a consulting client requires that I do things differently, here are three ways that I use generative AI tools to assist me in my writing.
If you read the post, you’ll recall that some of the items were suggestions. However, one was not:
Bredemarket Rule: Don’t share confidential information with the tool
If you are using a general-purpose public AI tool, and not a private one, you don’t want to share secrets.
I then constructed a hypothetical situation in which Bredemarket was developing a new writing service, but didn’t want to share confidential details about it. One of my ideas was as follows:
First, don’t use a Bredemarket account to submit the prompt. Even if I follow all the obfuscation steps that I am about to list below, the mere fact that the prompt was associated with a Bredemarket account links Bredemarket to the data.
Now I happen to have a ton of email accounts, so if I really wanted to divorce a generative AI prompt from its Bredemarket origins, I’d just use an account other than my Bredemarket account. It’s not a perfect solution (a sleuth could determine that the “gamer” account is associated with the same person as the Bredemarket account), but it seems to work.
But not well enough for one company.
Adobe’s restrictions on employee use of generative AI
PetaPixel accessed a gated Business Insider article that purported to include information from an email from an Adobe executive.
Adobe employees have been instructed to not use their “personal email accounts or corporate credit cards when signing up for AI tools, like ChatGPT.” This, the publication reports, comes from an internal email from Chief Information Officer Cindy Stoddard that Insider obtained.
Specifically, the email apparently included a list of “Don’ts”:
Don’t use personal emails for tools used on work-related tasks. This is the one that contradicts what I previously suggested. So if you work for Adobe, don’t listen to me.
Don’t include any personal or non-public Adobe information in prompts. This is reasonable when you’re using public tools such as ChatGPT.
Don’t use outputs verbatim. This is also reasonable, since (a) the outputs may be incorrect, and (b) there are potential copyright issues.
But don’t think that Adobe is completely restricting generative AI. It’s just putting guardrails around its use.
“We encourage the responsible and ethical exploration of generative AI technology internally, which we believe will enable employees to learn about its capabilities as we explore how it will change the way we all work,” Business Insider reported Stoddard wrote in the email.
“As employees, it’s your responsibility to protect Adobe and our customers’ data and not use generative AI in a way that harms or risks Adobe’s business, customers, or employees.”
So my suggestion to use a non-corporate login to obfuscate already-scrubbed confidential information doesn’t fly with Adobe. All well and good.
There are two true takeaways from this:
If you’re working for or with someone who has their own policies on generative AI use, follow their policies.
If they don’t have their own policies on submitting confidential information to a generative AI tool, and if you don’t have your own policy on submitting confidential information to a generative AI tool, then stop what you’re doing and create a policy now.