It is an open question who owns the rights to content that is generated, at least in part, by artificial intelligence. That could be a Large Language Model (LLM) such as OpenAI's ChatGPT, Microsoft's Bing Chat, or Google's Bard, or an AI image generator such as OpenAI's DALL-E 2 or DeepAI.
While there is no easy answer, the bottom line is that publishing AI-generated content carries risk. The more of your own creativity you add to the content, the more you reduce the risk of copyright issues arising from its use. There are also tangential issues with these tools: the queries you use to generate the content are not held confidential, and you should not expect any confidentiality in them.
The best-case scenario, of course, is not to use these tools at all and to generate all of your content independently, which avoids copyright and confidentiality issues entirely. But if you must use these tools, put as much of your own creativity and additional work into the final product as you can.
The law in this area is very murky, to say the least. With the recent rapid proliferation of AI tools, the ownership of the work product of the AI generators is a very large question that nobody can definitively answer yet. However, based on relatively similar case law and some insights from the Copyright Office, there are some guardrails for you to make sure you aren't going completely off-course and generating higher-risk content than you are comfortable with.
The answer lies in exactly how much of the AI-generated content you are using without any human creative input. The more of your own human creative input that is put into the work, the safer you will be to publish the work.
Let's start with the concept of AI generators and how they learn. The AI generators (ChatGPT, Bing, etc.) are "taught" by reviewing an enormous collection of data and learning patterns in that data. That collection of training data likely contains a large number of copyrighted materials. As you may already know, taking a currently protected work and generating another work from it (think of writing a new movie set in the Star Wars universe) creates a "derivative work," and the right to prepare derivative works belongs to the copyright holder.
If you are simply asking the AI tool a question and copying and pasting the answer (or the generated art), there may be issues with copyright holders. Several cases are making their way through the courts right now in which copyright holders are suing the makers of AI generators that "learned" from their protected works. One of these copyright holders you may have heard of is Getty Images, which holds the rights to a huge library of images. Getty is suing Stability AI, the creator of the AI image generator Stable Diffusion (Getty Images (US), Inc. v. Stability AI, Inc., No. 1:23-cv-00135 (D. Del.)), for infringing its copyrights in those images and generating derivative works from them.
Based on that, the first of these guardrails is that it really comes down to exactly how much of the AI-generated content you are using. The general rule at the Copyright Office right now is that works created by anything other than a human author cannot be registered for copyright protection. For instance, a selfie taken by a monkey is not registrable for copyright protection (see Naruto v. Slater, No. 16-15469 (9th Cir. 2018)). But, as discussed above, derivative-work rights may still be implicated when the AI-generated image you use was itself based on protected material.
If you are using AI-generated copy, consider treating the AI output only as a framework and editing it substantially. A good analogy: use the AI-generated content as the frame of a house, but independently change and revise the internal layout and wall colors. Change the tone and tenor of the content, and even restructure it to better suit your particular use case. For artwork, perhaps use the AI-generated images to stimulate ideas for the published work, but avoid using the AI-generated work itself and, if possible, independently create the final image.
Finally, and perhaps just as importantly, most AI-generated content is based on user queries that are not held confidential. There is an instructive story out of New York, where a lawyer used ChatGPT to do his research and draft a court filing (Mata v. Avianca, Inc., No. 22-cv-1461 (S.D.N.Y. 2023)). The citations ChatGPT produced were famously fabricated and led to sanctions against the lawyer. But just as importantly, the queries and instructions the lawyer used were not confidential and became part of the public record. If you plan to publish data or information that you want kept confidential until the publication date, keep this in mind before entering it into an AI tool.
To summarize, there is much we do not know about AI-generated content ownership. Some of the questions we have in this area will be answered by courts in the coming years, but we can use the guidance we currently have to mitigate the risks involved now.
If you have any questions regarding AI generated content or other intellectual property concerns, please reach out to Dan Blakeslee or another member of our Intellectual Property team at BrownWinick. We are here to offer trusted legal advice and add value to any matter, including those with complex and novel issues.