Let’s not pretend this is business as usual. The moment we invited AI to join our content teams—ghostwriters with silicon souls, tireless illustrators, teaching assistants who never sleep—we also opened the door to a host of questions that are more than technical. They are ethical. Legal. Human. And increasingly, urgent.

In corporate learning, marketing, customer education, and beyond, generative AI tools are reshaping how content gets made. But for every hour saved, a question lingers in the margins: “Are we sure this is okay?” Not just effective—but lawful, equitable, and aligned with the values we claim to champion. These are questions I wrestle with daily in my work with Adobe’s Digital Learning Software teams, developing corporate training tools such as Adobe Learning Manager, Adobe Captivate, and Adobe Connect.

This article explores four big questions that every organization should be wrestling with right now, along with some real-world examples and guidance on what responsible policy might look like in this brave new content landscape.


1. What Are the Ethical Concerns Around AI-Generated Content?

AI is an impressive mimic. It can turn out fluent courseware, clever quizzes, and eerily on-brand product copy. But that fluency is trained on the bones of the internet: a vast, sometimes ugly fossil record of everything we’ve ever published online.

That means AI can—and often does—mirror back our worst assumptions:

  • A hiring module that downranks resumes with non-Western names.
  • A healthcare chatbot that assumes whiteness is the default patient profile.
  • A training slide that reinforces gender stereotypes because, well, “the data said so.”

In 2023, The Washington Post and the Algorithmic Justice League found that popular generative AI platforms frequently produced biased imagery when prompted with professional roles—suggesting that AI doesn’t just replicate bias; it may reinforce it with frightening fluency (Harwell).

Then there’s the murky question of authorship. If an AI wrote your onboarding module, who owns it? And should your learners be told that the warm, human-sounding coach in their feedback app is actually just a smart echo?

Best practice? Organizations should treat transparency as a first principle. Label AI-created content. Review it with human SMEs. Make bias detection part of your QA checklist. Assume AI has ethical blind spots—because it does.
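
What might that QA step look like in practice? Below is a minimal sketch in Python of a heuristic screen that flags drafts for human review. The pattern list is a made-up placeholder, not a vetted lexicon; it stands in for whatever your SMEs and DEI reviewers decide is worth watching for.

```python
# Minimal sketch: a heuristic bias screen for AI-generated draft content.
# The flagged patterns are illustrative placeholders, not a vetted lexicon;
# this step complements SME review rather than replacing it.

import re
from dataclasses import dataclass, field

FLAGGED_PATTERNS = {
    "gendered default": re.compile(r"\b(he or she|mankind|manpower|chairman)\b", re.IGNORECASE),
    "age stereotype": re.compile(r"\b(digital native|young and energetic)\b", re.IGNORECASE),
}

@dataclass
class QAResult:
    passed: bool
    findings: list = field(default_factory=list)

def bias_screen(draft: str) -> QAResult:
    """Collect findings so a human reviewer decides what, if anything, to change."""
    findings = [
        (label, match.group(0))
        for label, pattern in FLAGGED_PATTERNS.items()
        for match in pattern.finditer(draft)
    ]
    return QAResult(passed=not findings, findings=findings)

if __name__ == "__main__":
    draft = "Every chairman should ensure he or she completes the onboarding module."
    result = bias_screen(draft)
    status = "No flags found" if result.passed else "Route to human review"
    print(status, result.findings)
```

The point isn’t that a regex can catch bias (it can’t, not really); it’s that bias review becomes an explicit, logged step in your pipeline rather than an afterthought.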


2. How Do We Stay Legally Clean When AI Writes Our Content?

The legal fog around AI-generated content is, if anything, thickening. Copyright issues are particularly treacherous. Generative AI tools, trained on scraped web data, can accidentally reproduce copyrighted phrases, formatting, or imagery without attribution.

A 2023 lawsuit against OpenAI and Microsoft by The New York Times exemplified the concern: some AI outputs included near-verbatim excerpts from paywalled articles (Goldman).

That same risk applies to instructional content, customer documentation, and marketing assets.

But copyright isn’t the only hazard:

  • In regulated industries (e.g., pharmaceuticals, finance), AI-generated materials must align with up-to-date regulatory requirements. A chatbot that offers outdated advice could trigger compliance violations.
  • If AI invents a persona or scenario too closely resembling a real person or competitor, you may find yourself flirting with defamation.

Best practice?

  • Use enterprise AI platforms that clearly state what training data they use and provide indemnification.
  • Audit outputs in sensitive contexts.
  • Keep a human in the loop when legal risk is on the table.
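
To make “human in the loop” concrete, here is a minimal sketch that assumes a hypothetical risk label and review queue: anything flagged as legally sensitive is parked for sign-off instead of auto-published.

```python
# Minimal sketch of a human-in-the-loop publication gate.
# The risk labels, queue, and publish callback are hypothetical placeholders;
# the point is that AI-generated content never ships without sign-off when
# legal or compliance risk is on the table.

from enum import Enum, auto

class Risk(Enum):
    LOW = auto()        # e.g., an internal brainstorm summary
    SENSITIVE = auto()  # e.g., regulated claims, customer-facing guidance

review_queue: list[dict] = []   # stand-in for a real ticketing or CMS workflow

def route_ai_content(content: str, risk: Risk, publish) -> str:
    """Auto-publish only low-risk drafts; everything else waits for a human reviewer."""
    if risk is Risk.LOW:
        publish(content)
        return "published"
    review_queue.append({"content": content, "status": "awaiting human review"})
    return "queued for review"

if __name__ == "__main__":
    print(route_ai_content("Draft FAQ answer on dosage guidance.", Risk.SENSITIVE, publish=print))
    print(f"Items awaiting review: {len(review_queue)}")
```

In a real stack, the queue would be your CMS or ticketing workflow; the design choice that matters is that publication happens after review, not by default.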

3. What About Data Privacy? How Do We Avoid Exposing Sensitive Information?

In corporate contexts, content often starts with sensitive data: customer feedback, employee insights, product roadmaps. If you’re using a consumer-grade AI tool and paste that data into a prompt, you may have just made it part of the model’s training data.

OpenAI, for instance, had to clarify that data entered into ChatGPT could be used to retrain models—unless users opted out or used a paid enterprise plan with stricter safeguards (Heaven).

Risks aren’t limited to inputs. AI can also output information it has “memorized” if your org’s data was ever part of its training set, even indirectly. For example, one security researcher claimed ChatGPT offered up internal Amazon code snippets when asked the right way (Heaven).

Best practice?

  • Use AI tools that support private deployment (on-premise or VPC).
  • Apply role-based access controls to who can prompt what.
  • Anonymize data before sending it to any AI service (a minimal redaction sketch follows this list).
  • Educate employees: “Don’t paste anything into AI you wouldn’t share on LinkedIn.”
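
Here is the redaction sketch mentioned above. It assumes simple regex rules for emails and phone-like numbers; real deployments would lean on dedicated PII-detection tooling, but even a crude pre-filter keeps the most obvious identifiers out of third-party prompts.

```python
# Minimal sketch: redact obvious identifiers before a prompt leaves your environment.
# These regexes catch only the easy cases (emails, phone-like numbers); anything
# genuinely sensitive still calls for dedicated PII-detection tooling and policy review.

import re

REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\+?\d[\d\s().-]{7,}\d"), "[PHONE]"),
]

def redact(prompt: str) -> str:
    """Replace easily detected identifiers before sending text to any external AI service."""
    for pattern, placeholder in REDACTIONS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

if __name__ == "__main__":
    raw = "Summarize this complaint from jane.doe@example.com, callback +1 (555) 012-3456."
    print(redact(raw))
    # -> Summarize this complaint from [EMAIL], callback [PHONE].
```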

4. What Kind of AI Are We Actually Using—and Why Does It Matter?

Not all AI is created equal. And knowing which kind you’re working with is essential for risk planning.

Let’s sort the deck:

  • Generative AI creates new content. It writes, draws, narrates, codes. It’s the most impressive and most volatile category—prone to hallucinations, copyright issues, and ethical landmines.
  • Predictive AI looks at data and forecasts trends—like which employees might churn or which customers need support.
  • Classifying AI sorts things into buckets—like tagging content, segmenting learners, or prioritizing support tickets.
  • Conversational AI powers your chatbots, support flows, and voice assistants. If unsupervised, it can easily go off-script.

Each of these comes with different risk profiles and governance needs. But too many organizations treat AI like a monolith—“we’re using AI now”—without asking: which kind, for what purpose, and under what controls?

Best practice?

  • Match your AI tool to the job, not the hype.
  • Set different governance protocols for different categories (a simple policy map is sketched after this list).
  • Train your L&D and legal teams to understand the difference.
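
One way to make “different protocols for different categories” tangible is a simple policy map. The controls below are placeholders for whatever your legal, L&D, and compliance teams actually agree on; the categories mirror the four described above.

```python
# Illustrative only: a simple policy map tying AI categories to governance controls.
# The category names mirror the list above; the specific controls are placeholders
# for whatever your legal, L&D, and compliance teams actually agree on.

GOVERNANCE_POLICY = {
    "generative": {
        "human_review_required": True,
        "label_ai_involvement": True,
        "allowed_data": "public or anonymized only",
    },
    "predictive": {
        "human_review_required": True,    # e.g., before acting on a churn forecast
        "label_ai_involvement": False,
        "allowed_data": "internal, access-controlled",
    },
    "classifying": {
        "human_review_required": False,   # spot-check samples instead
        "label_ai_involvement": False,
        "allowed_data": "internal, access-controlled",
    },
    "conversational": {
        "human_review_required": True,    # scripted fallbacks and escalation paths
        "label_ai_involvement": True,
        "allowed_data": "public or anonymized only",
    },
}

def controls_for(category: str) -> dict:
    """Look up the governance controls agreed for a given AI category."""
    return GOVERNANCE_POLICY[category.lower()]

if __name__ == "__main__":
    print(controls_for("Generative"))
```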

What Business Leaders Are Actually Saying

This isn’t just a theoretical exercise. Leaders are uneasy—and increasingly vocal about it.

In a 2024 Gartner report, 71% of compliance executives cited “AI hallucinations” as a top risk to their business (Gartner).

Meanwhile, 68% of CMOs surveyed by Adobe said they were “concerned about the legal exposure of AI-created marketing materials” (Adobe).

Microsoft president Brad Smith described the current moment as a call for “guardrails, not brakes”—urging companies to move forward but with deliberate constraints (Smith).

Salesforce, in its “Trust in AI” guidelines, publicly committed to never using customer data to train generative AI models without consent and built its own Einstein GPT tools to operate inside secure environments (Salesforce).

The tone has shifted from wonder to wariness. Executives want the productivity, but not the lawsuits. They want creative acceleration without reputational ruin.


So What Should Companies Actually Do?

Let’s ground this whirlwind with a few clear commitments.

  1. Develop an AI Use Policy: Cover acceptable tools, data practices, review cycles, attribution standards, and transparency expectations. Keep it public, not buried in legalese.
  2. Segment Risk by AI Type: Treat generative AI like a loaded paintball gun—fun and colorful, but messy and potentially painful. Wrap it in reviews, logs, and disclaimers.
  3. Establish a Review and Attribution Workflow: Include SMEs, legal, DEI, and branding in any review process for AI-generated training or customer-facing content. Label AI involvement clearly.
  4. Invest in Private or Trusted AI Infrastructure: Enterprise LLMs, VPC deployments, or AI tools with contractual guarantees on data handling are worth their weight in uptime.
  5. Educate Your People: Host brown-bag sessions, publish prompt guides, and include AI literacy in onboarding. If your team doesn’t know the risks, they’re already exposed.

In Summary:

AI is not going away. And honestly? It shouldn’t. There’s magic in it—a dizzying potential to scale creativity, speed, personalization, and insight.

But the price of that magic is vigilance. Guardrails. The willingness to question both what we can build and whether we should.

So before you let the robots write your onboarding module or design your next slide deck, ask yourself: who’s steering this ship? What’s at stake if they get it wrong? And what would it look like if we built something powerful—and responsible—at the same time?

That’s the job now. Not just building the future, but keeping it human.


Works Cited:

Adobe. “Marketing Executives & AI Readiness Survey.” Adobe, 2024, https://www.adobe.com/insights/ai-marketing-survey.html.

Gartner. “Top Emerging Risks for Compliance Leaders.” Gartner, Q1 2024, https://www.gartner.com/en/documents/4741892.

Goldman, David. “New York Times Sues OpenAI and Microsoft Over Use of Copyrighted Work.” CNN, 27 Dec. 2023, https://www.cnn.com/2023/12/27/tech/nyt-sues-openai-microsoft/index.html.

Harwell, Drew. “AI Image Generators Create Racial Biases When Prompted with Professional Jobs.” The Washington Post, 2023, https://www.washingtonpost.com/technology/2023/03/15/ai-image-generators-bias/.

Heaven, Will Douglas. “ChatGPT Leaked Internal Amazon Code, Researcher Claims.” MIT Technology Review, 2023, https://www.technologyreview.com/2023/04/11/chatgpt-leaks-data-amazon-code/.

Salesforce. “AI Trust Principles.” Salesforce, 2024, https://www.salesforce.com/company/news-press/stories/2024/ai-trust-principles/.

Smith, Brad. “AI Guardrails Not Brakes: Keynote Address.” Microsoft AI Regulation Summit, 2023, https://blogs.microsoft.com/blog/2023/09/18/brad-smith-ai-guardrails-not-brakes/.
