Let’s Stop Analogizing Human Creators to Machines

[Guest post by David Newhoff, author of The Illusion of More and Who Invented Oscar Wilde? The Photograph at the Center of Modern American Copyright.]

Just as it is folly to anthropomorphize computers and robots, it is also unhelpful to discuss the implications of generative AI in copyright law by analogizing machines to authors.[1] In 2019, I explored the idea that “machine learning” could be analogous to human reading if the human happens to have an eidetic memory. But this was a thought exercise, and in that post, I also imagined machine training that serves a computer science or research purpose—not necessarily generative AIs trained on protected works designed to produce works without authors.

In the present discussion, however, certain parties weighing in on AI and copyright seem to advocate policy that is premised on the language and principles of existing doctrine as applicable to the technological processes of both the input and output sides of the generative AI equation. Of course, policy discussions usually begin with the existing framework, but in this instance, it can be a shaky starting place because generative AI presents some unique challenges—and not just for the practice of copyright law.

We should be wary of analogizing machine functions to human activity for the simple reason that copyright law (indeed all law) has never been anything but anthropocentric. Although it is difficult to avoid speaking in terms of machines “learning” or “creating,” it is essential that we either constantly remind ourselves that these are weak, inaccurate metaphors or develop a new glossary to describe what certain AIs may be doing in the world of creative production.

On the input (training) side of the equation, the moment someone says something like, “Humans learn to make art by looking at art, and generative AIs do the same thing,” the speaker should be directed to the break-out session on sci-fi and excused from any serious conversation about applicable copyright law. Likewise, on the output side, comparisons of AI to other technological developments—from the printing press to Photoshop—should be presumed irrelevant unless the AI at issue can plausibly be described as a tool of the author rather than the primary maker of a work of creative expression.

Copyright Office Guidance Highlights Some Key Difficulties

To emphasize the exceptional nature of this discussion, even experts are somewhat confused by both the doctrinal and administrative aspects of the new guidelines published by the U.S. Copyright Office directing authors how to disclaim AI-generated material in a registration application. The confusion is hardly surprising because generative AI has prompted the Office to ask an unprecedented question—namely, How was this work made?

As noted in several posts, copyrightability has always been agnostic with regard to the creative process. Copyright rights attach to works that show a modicum of originality, and the Copyright Office does not generally ask what tools, methods, etc. the author used to make a work.[2] But this historic practice was then confronted by the now widely reported applications submitted by Stephen Thaler and Kris Kashtanova, both claiming copyright in visual works made with generative AI.

In both cases, the Copyright Office rejected registration applications for the visual works based on the longstanding, bright-line doctrine that copyright rights can only attach to works made by human beings. In Thaler’s case, the consideration is straightforward because the claimant affirmed that the image was produced entirely by a machine. Kashtanova, on the other hand, asserted more than de minimis authorship (i.e., using AI as a tool) in producing the visual elements of a comic book.

Whether in response to Kashtanova, or certainly in anticipation of applications yet to come, the muddled Office guidelines attempt to address the difficult question of whether copyright attaches to a work that combines authorship and AI generation, and how to draw distinctions between the two. This is not only new territory for the Office as a doctrinal matter but a potential mess as an administrative one.

The Copyright Office has never been tasked with separating the protectable expression attributable to a human from the unprotectable expression attributable to a machine. Even if it could be said that photography has always provoked this tension (a discussion on its own), the analysis has never been an issue for the Office when registering works, but only for the courts in resolving claims of infringement. In fact, Warhol v. Goldsmith, although a fair use case, is a prime example of how tricky it can be to separate the factual elements of a photograph from the expressive elements.

But now the Copyright Office is potentially tasked with a copyrightability question that, in practice, would ask both the author and the examiner to engage in a version of the idea/expression dichotomy analysis—first separating the machine generated material from the author’s material and then considering whether the author has a valid claim in the protectable expression.

This is not so easy to accomplish in a work that combines authored and machine-made elements in a manner that may be subtly intertwined; it raises new questions about what the AI “contributed” to a given work; and the inquiry is further complicated by the variety of AI tools in the market or in development. Then, because neither the author/claimant nor the Office examiner is likely to be a copyright attorney (let alone a court), the inquiry is fraught with difficulty as an administrative process—and that’s if the author makes a good-faith effort to disclaim the AI-generated material in the first place.

Many independent authors are confused enough by the Limit of Claim in a registration application or the concept of “published” versus “unpublished.” Asking these same creators to delve into the metaphysics implied by the AI/Author distinction seems like a dubious enterprise, and one that is not likely to foster more faith in the copyright system than the average indie creator has right now.

Copyrightability Could Remain Blind But …

It is understandable that some creators (e.g., filmmakers using certain plug-ins) may be concerned that the Copyright Office has already taken too broad a view, implying a per se rule that denies copyrightability to any work generated with any AI technology. This concern is a reminder that AI should not be discussed as a monolithic topic because not all AI-enhanced products do the same thing. And again, this may imply a need for some new terms rather than the words we use to describe human activities.

In this light, one could follow a different line of reasoning and argue that the agnosticism of copyrightability vis-à-vis process has always implied a presumption of human authorship where other factors—from technological enhancements to dumb luck—invisibly contribute to the protectable expression. Relatedly, a photographer can add a filter or plug-in that changes the expressive qualities of her image, but doing so is considered part of the selection and arrangement aspect of her authorship and does not dilute the copyrightability of the image.

Some extraordinary visual work has already been produced by professional artists using AI, with results so strikingly well-crafted that it is hard to believe the author has not exerted considerable influence over the final image. In this regard, then, perhaps the copyrightability question at the registration stage, no matter how sophisticated the “filter” becomes, should remain blind to process. The Copyright Office could continue to register works submitted by valid claimants without asking the novel How question.

But the more that works may be generated with little or no human spark, the more this agnostic, status-quo approach could unravel the foundation of copyright rights altogether. And it would not be the first time that major tech companies have sought to do exactly that. It is no surprise that an AI developer or a producer using AI would seek the financial benefits of copyright protection; but without a defensible presence of human expression in the work, the exclusive rights of copyright cannot vest in a person with the standing to defend those rights. Nowhere in U.S. law do non-humans have rights of any kind, and this foundational principle reminds us that although machine activity can be compared to human activity as an allegorical construct, this is too whimsical for a serious policy discussion.

Again, I highlight this tangle of administrative and doctrinal factors to emphasize the point that generative AI does not merely present new variations on old questions (e.g., photography), but raises novel questions that cannot easily be answered by analogies to the past. If the challenges presented by generative AI are to be resolved sensibly, and in a way that will serve independent creators, policymakers and thought leaders on copyright law should be skeptical of arguments that too earnestly attempt to transpose centuries of doctrine for human activity into principles applied to machine activity.


[1] I do not distinguish “human” authors, because there is no other kind.

[2] I say “generally” only because I cannot account for every conversation among claimants and examiners.

Balancing the First Amendment on Whiskey and Dog Toys

The US Supreme Court has heard oral arguments and will soon decide the fate of the “Bad Spaniels” dog toy.

The United States Supreme Court has weighed First Amendment rights in the balance against many things: privacy, national security, the desire to protect children from hearing a bad word on the radio, to name a few. Now the Court will need to balance them against trademark interests. The Court heard oral arguments in Jack Daniel’s Props. v. VIP Prods., No. 22-148, on March 22, 2023.

I’ve written about this case before. To summarize, it is a dispute between whiskey manufacturer Jack Daniel’s and dog-toy maker VIP Products. The dog toy in question is shaped like a bottle of Jack Daniel’s whiskey and has a label that looks like the famous whiskey label. Instead of “Jack Daniel’s,” though, the dog toy is called “Bad Spaniels.” Instead of “Old No. 7 Brand Tennessee sour mash whiskey,” the dog toy label reads, “Old No. 2 on your Tennessee carpet.”

VIP sued for a declaratory judgment to the effect that this does not amount to trademark infringement or dilution. Jack Daniel’s filed a counterclaim alleging that it does. The trial court ruled in favor of the whiskey maker, finding that a likelihood of consumer confusion existed. The Ninth Circuit Court of Appeals, however, reversed. The appeals court held that the dog toys came within the “noncommercial use” exception to dilution liability. Regarding the infringement claim, the court held, essentially, that the First Amendment trumps private trademark interests. A petition for U.S. Supreme Court review followed.

Rogers v. Grimaldi

Rogers v. Grimaldi, 875 F.2d 994 (2d Cir. 1989) is a leading case on collisions of trademark and First Amendment rights. In that case, Ginger Rogers, Fred Astaire’s famous dance partner, brought suit against the makers of a movie called “Ginger and Fred.” She claimed that the title created the false impression that the movie was about her or that she sponsored, endorsed or was affiliated with it in some way. The Second Circuit affirmed the trial court’s ruling against her, on the basis that the title of the movie was artistic expression, protected by the First Amendment as such.

When the medium is the message

Some commentators have suggested that the balance struck in favor of the First Amendment in Rogers v. Grimaldi should only apply in cases involving traditional conveyors of expressive content, i.e., books, movies, drawings, and the like. They would say that when the product involved has a primarily non-expressive purpose (such as an object for a dog to chew), traditional trademark analysis focused on likelihood of confusion should apply sans a First Amendment override.

Does this distinction hold water, though? True, commercial speech receives a lower level of protection than artistic or political speech does, but dog toys and movies alike are packaged and marketed commercially, as are books, music, artwork, video games, software, and many other items containing expressive content. Moreover, if a banana taped to a wall is a medium of artistic expression, on what basis can we logically differentiate a case where a dog toy is used as the medium of expression?

A decision is expected in June.