Communications professionals are in the thick of deciding how to integrate generative AI tools into their practices and business models. 

  • What tools will we use?
  • For what tasks will we use them?
  • How do we talk about AI use with our clients and other stakeholders?

As you make strategic decisions about use cases for AI, think also about how to balance the opportunity against the legal risks that accompany the adoption of AI platforms into your communications work.

Do you have a plan to manage those risks?

Navigating AI Risks

We asked Sharon Toerek, a friend of Spin Sucks® and Founder of Toerek Law, about the key risks agencies and independent communications pros need to consider, and the practical steps they can take to minimize them.

Spin Sucks: One of the things we hear a lot as AI tools are introduced into the market is concern about who can, and who will, own the work deliverables when AI technology is used. How do we know who owns what work?

Sharon Toerek: Right, it’s not as straightforward as it seems. Most practitioners understand by now that the U.S. Copyright Office’s position is that only human-created content is entitled to copyright protection. But what about work that blends AI-generated and human-created content? And even if the deliverables aren’t entirely protectable by copyright law, who owns or controls the work as between agency and client? What about the terms and conditions of the particular AI platform you’ve used? What do those say about ownership of the output from generative AI? These are all variables to consider, and you can account for many of them with a little proactive work.

Spin Sucks: What kinds of proactive work would help here?

Sharon Toerek: A few key things:

  • Guided conversations between agencies and clients about how AI will be used, which AI tools are permissible, and whether the brand has any AI policies that will affect the platforms used to produce the work.
  • Clear contract language between brands and their communications agencies or practitioners about responsibility for AI-generated work, as well as “rules of the road” for inputting the brand’s information into an AI platform.
  • Clearly written internal and external policies about the use of AI tools and the human guardrails needed to make sure the content is acceptable to release into the world.

Spin Sucks: What legal issues do you think are not immediately obvious to communications professionals who are adopting AI tools to generate work?

Sharon Toerek: I think there’s a tendency to focus on the copyright issues, which are critical of course, while not enough attention is being paid right now to the inputs we’re putting into generative AI platforms. The concerns here range from confidential business strategies and financial data to personally identifiable and health-related information. Once you input that information into these tools, its secrecy can be compromised or destroyed. And you can’t unring that bell, although some technology platforms are working double-time to create additional security and assurances around the protection of proprietary or sensitive information. We’ll have to see how quickly that technology unfolds. For now, use caution and be sure the brand and the communicator are on the same page about what information can be input into the platform.

We recently previewed the new AI Agency Legal Toolkit released by Legal + Creative. If you’re looking for a “done” approach to proactively dealing with AI in your communications business, check out the toolkit here.

Sharon Toerek

Sharon Toerek is Founder and President of Toerek Law | Legal + Creative, a national law firm focused on intellectual property, contracts and marketing regulations for independent communications firms in the U.S.
