The biggest artificial intelligence developments of 2024 and what lies ahead

July 16, 2024

"These cases and others that will undoubtedly follow will guide the debate around generative AI and IP. However, they only begin to scratch the surface of how the law will evolve to meet the new challenges that AI presents."

As artificial intelligence (AI) becomes more powerful, its applications expand, and its use becomes widespread, new legal challenges to intellectual property arise. In simple terms, generative AI describes algorithms that can create content such as images, written works, audio, and video. These algorithms are "trained" on large amounts of similar content, from which they learn patterns. After enough training, an algorithm can create new content based on the patterns it learned.
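
To make the training-then-generation idea concrete, here is a minimal, illustrative Python sketch - a toy word-level model, not a description of how any commercial system such as ChatGPT actually works - that "trains" on a small placeholder text by recording which word follows which, and then generates new text from those learned patterns:

from collections import defaultdict
import random

# Illustrative toy only: the corpus is a made-up placeholder, not any real copyrighted work.
corpus = (
    "the court held that the use was fair because the use was transformative "
    "the court held that the copying was not fair because the market was harmed"
)

def train(text):
    """Learn the 'patterns': for each word, record the words observed to follow it."""
    words = text.split()
    transitions = defaultdict(list)
    for current_word, next_word in zip(words, words[1:]):
        transitions[current_word].append(next_word)
    return transitions

def generate(transitions, start, length=12):
    """Create new text by repeatedly sampling a word seen to follow the current one."""
    word, output = start, [start]
    for _ in range(length):
        followers = transitions.get(word)
        if not followers:
            break
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

model = train(corpus)          # "training" on existing content
print(generate(model, "the"))  # generating new content from the learned patterns

Real generative AI systems rely on neural networks trained on vastly larger datasets, but the basic pipeline - ingest existing content, learn statistical patterns, emit new content shaped by those patterns - is the same idea at a much larger scale.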

This relatively new mode of creation raises a myriad of legal issues that are evolving along with the expanding capabilities of AI. This article examines some of the most significant AI-related developments in intellectual property law from the first half of 2024 and what to expect in the remainder of the year.

1. Copyright

Two critical copyright issues that AI raises are (1) whether content created by AI is protected by copyright, and (2) whether using copyrighted content to train AI is an infringement of the owner's copyright.

With respect to the first issue, the US Copyright Office takes the hardline position that content created by generative AI is not subject to copyright because it is not the work of a human author. However, the Copyright Office does not rule out the possibility that a work containing AI-generated material that "also contains a sufficient amount of human authorship" could be the basis for a copyright claim. In addition, the Copyright Office has confirmed that AI can be used as a tool in an author's creative process to "create his or her works or to remake, transform, or adapt his or her expressive works of authorship." This guidance leaves significant questions open, and disputes over what constitutes "sufficient human authorship" will continue to arise.

The second issue is a hot topic in copyright law in 2024 and will likely remain so for the foreseeable future. Because generative AI works by "learning" from the multitude of similar creative works used to train it, several ongoing lawsuits raise important questions: Is the use of copyrighted materials to train an AI an infringement of copyright, or is it fair use? Is the output of generative AI a derivative work?

The cases decided to date have already begun to address these issues. For example, on December 27, 2023, the New York Times filed a copyright infringement lawsuit against OpenAI in the Southern District of New York, alleging that OpenAI infringed the Times' copyrights by using millions of articles published by the Times to train its generative AI model, ChatGPT. In addition, the Times argues that the output of ChatGPT is a derivative work: because generative AI is essentially an algorithm trained on existing content that then outputs new content based on the patterns it learned, ChatGPT, in the Times' view, simply outputs a remix of the copyrighted Times content (and all the other copyrighted content) used to train it. Not surprisingly, OpenAI disputes the Times' allegations and moved to dismiss the case in February 2024, and the Times has since requested leave to file an amended complaint. Both motions remain pending, so this case will be one to watch through the end of 2024.

The Times is not alone in its efforts to challenge generative AI - there are many ongoing cases raising these issues. For example, in April 2024, a group of artists sued Google and its parent company Alphabet in the Northern District of California for allegedly making unauthorized use of copyrighted images while training an AI-based image generator known as Imagen. Another example is Thomson Reuters v. ROSS in the District of Delaware, a case that has had more time to mature (trial is scheduled for August 2024), in which Thomson Reuters alleges that ROSS illegally copied content from Thomson Reuters' Westlaw legal research platform to train its own AI-based platform.

2. Licensing of voices, likenesses, and other personal attributes

The ability of generative AI to create works is generating new competitive threats in the creative industries. One way this occurs is through works that use (or even enhance) the voice, image, writing style, and other attributes and skills unique to a particular individual. A well-known example is James Earl Jones, who signed over the rights to his voice so that an AI could use it to create new dialogue for Darth Vader in the Star Wars universe. As a result, Darth Vader's voice will continue to be heard for years to come without any further input from Jones.

While Jones agreed to let AI use his voice, not every AI is trained with permission, which means there will inevitably be questions about the scope of legal protection for unique human attributes. For example, in 2023, an AI-generated song featuring Drake and The Weeknd made headlines because it was virtually indistinguishable from the performers' authentic work, even though neither of them was involved in its creation. More recently, a dispute arose between actress Scarlett Johansson and OpenAI after "Sky," one of the voices of OpenAI's voice assistant, sounded eerily similar to hers.

The reaction to these uses of AI has been to ask how creatives can protect themselves. In particular, members of the Writers Guild of America and the Screen Actors Guild took to the streets last year to protest, among other things, the use of generative AI in their creative fields. By the end of the year, agreements had been reached offering various protections for member artists against the use of AI. Thus, at least through collective bargaining, private parties can protect themselves contractually.

Of course, not all creatives seek to limit AI's use of their unique attributes - some seek the exact opposite, like Jones with his Darth Vader voice. By licensing these attributes, individuals can be compensated for their use with little additional time or effort on their part. Some models, for example, license their likenesses to AI Fashion, a platform that pays models to use their likenesses in AI-generated images modeling clothing. Even the Screen Actors Guild, despite its earlier protests, has been receptive to such opportunities. In January, it announced an agreement with Replica Studios, an AI voice technology company. Under the agreement, Replica Studios and voice artists can work together to use AI-generated versions of the artists' voices in the development of video games and other interactive media projects.

Given the relative newness of this topic, legal issues are bound to come up, either in 2024 or later.

3. Patents

Virtually all aspects of intellectual property law will be affected by AI, and patent law is no exception. The United States Patent and Trademark Office (USPTO) has already begun to address these issues and has issued some guidance on AI.

On February 13, 2024, the USPTO published its Inventorship Guidance for AI-Assisted Inventions, which confirms that generative AI cannot be an inventor and cannot be listed as a joint inventor. However, the guidance also states that AI-assisted inventions are not categorically unpatentable. In determining patentability, the focus will be on whether a human inventor made a "significant" contribution to the invention. In general, patent law requires that a patent identify as inventors the persons who made a significant contribution to the claims in the patent. While it might seem that this requirement would compel listing an AI that made a significant contribution as an inventor - which would destroy patentability, because an AI may not be an inventor - the guidance provides a way around this. Specifically, the USPTO's position is that patent law only requires "naming the individuals who invented or discovered the claimed invention," and thus "failure to name the AI system used to create the invention as a joint inventor does not render the invention unpatentable due to improper inventorship."

It should be noted that the Office's guidance "is not a substantive rulemaking and does not have the force and effect of law." Accordingly, as with the issues raised with respect to copyright in AI-generated content, the boundaries of what qualifies as a "significant" contribution have yet to be defined. In other words, how much is enough? Only the courts or legislatures can determine that, and it's only a matter of time - this year or next.

These and other cases that will undoubtedly follow will guide the debate around generative AI and IP. However, they only begin to scratch the surface of how the law will evolve to meet the new challenges posed by AI.
