When I originally wrote “The Temperamental Writer’s Two Suggestions and One Rule for Using Generative AI,” I included the following caveat:
So unless someone such as an employer or a consulting client requires that I do things differently, here are three ways that I use generative AI tools to assist me in my writing.
From https://bredemarket.com/2023/06/05/the-temperamental-writers-two-suggestions-and-one-rule-for-using-generative-ai/
It’s good that I included that caveat.
The Bredemarket Rule, Item One
If you read the post, you’ll recall that some of the items were suggestions. However, one was not:
Bredemarket Rule: Don’t share confidential information with the tool
If you are using a general-purpose public AI tool, and not a private one, you don’t want to share secrets.

I then constructed a hypothetical situation in which Bredemarket was developing a new writing service, but didn’t want to share confidential details about it. One of my ideas was as follows:
First, don’t use a Bredemarket account to submit the prompt. Even if I follow all the obfuscation steps that I am about to list below, the mere fact that the prompt was associated with a Bredemarket account links Bredemarket to the data.
From https://bredemarket.com/2023/06/05/the-temperamental-writers-two-suggestions-and-one-rule-for-using-generative-ai/
Now I happen to have a ton of email accounts, so if I really wanted to divorce a generative AI prompt from its Bredemarket origins, I’d just use an account other than my Bredemarket account. It’s not a perfect solution (a sleuth could determine that the “gamer” account is associated with the same person as the Bredemarket account), but it seems to work.
But not well enough for one company.
Adobe’s restrictions on employee use of generative AI
PetaPixel accessed a gated Business Insider article that purported to include information from an email from an Adobe executive.
Adobe employees have been instructed to not use their “personal email accounts or corporate credit cards when signing up for AI tools, like ChatGPT.” This, the publication reports, comes from an internal email from Chief Information Officer Cindy Stoddard that Insider obtained.
From https://petapixel.com/2023/07/06/adobe-limits-its-employees-use-of-generative-ai/?utm_source=tldrai
Specifically, the email apparently included a list of “Don’ts”:
- Don’t use personal emails for tools used on work-related tasks. This is the one that contradicts what I previously suggested. So if you work for Adobe, don’t listen to me.
- Don’t include any personal or non-public Adobe information in prompts. This is reasonable when you’re using public tools such as ChatGPT.
- Don’t use outputs verbatim. This is also reasonable, since (a) the outputs may be incorrect, and (b) there are potential copyright issues.
But don’t think that Adobe is completely restricting generative AI. It’s just putting guardrails around its use.
“We encourage the responsible and ethical exploration of generative AI technology internally, which we believe will enable employees to learn about its capabilities as we explore how it will change the way we all work,” Business Insider reported Stoddard wrote in the email.
“As employees, it’s your responsibility to protect Adobe and our customers’ data and not use generative AI in a way that harms or risks Adobe’s business, customers, or employees.”
From https://petapixel.com/2023/07/06/adobe-limits-its-employees-use-of-generative-ai/?utm_source=tldrai
What does this mean?
So my suggestion to use a non-corporate login to obfuscate already-scrubbed confidential information doesn’t fly with Adobe. All well and good.
There are two true takeaways from this:
- If you’re working for or with someone who has their own policies on generative AI use, follow their policies.
- If they don’t have a policy on submitting confidential information to a generative AI tool, and you don’t have one either, then stop what you’re doing and create a policy now.
