A Recent Entrance to Complexity

The United States Copyright Office recently reaffirmed its position that it will not register AI-generated content, because it is not created by a human. The rule is easy to state; the devil is in the details. Attorney Thomas James explains.

Last year, the United States Copyright Office issued a copyright registration to Kristina Kashtanova for the graphic novel Zarya of the Dawn. A month later, the Copyright Office issued a notice of cancellation of the registration, along with a request for additional information.

The Copyright Office, consistent with judicial decisions, takes the position that copyright requires human authorship. The Office requested additional information regarding the creative process that resulted in the novel because parts of it were AI-generated. Kashtanova complied with the request for additional information.

This week, the Copyright Office responded with a letter explaining that the registration would be cancelled but that a new, more limited one would be issued. The Office explained that its concern related to the author’s use of Midjourney, an AI-powered image-generating tool, to generate images used in the work:

Because Midjourney starts with randomly generated noise that evolves into a final image, there is no guarantee that a particular prompt will generate any particular visual output.

U.S. Copyright Office letter

The Office concluded that the text the author wrote, as well as the author’s selection, coordination and arrangement of written and visual elements, are protected by copyright, and therefore may be registered. The images generated by Midjourney, however, would not be registered because they were “not the product of human authorship.” The new registration will cover only the text and editing components of the work, not the AI-generated images.

A Previous Entrance to Paradise

Early last year, the Copyright Office refused copyright registration for an AI-generated image. Steven Thaler had filed an application to register a copyright in an AI-generated image called “A Recent Entrance to Paradise.” He listed himself as the copyright owner. The Copyright Office denied registration on the grounds that the work lacked human authorship. Thaler filed a lawsuit in federal court seeking to overturn that determination. The lawsuit is still pending. It is currently at the summary judgment stage.

The core issue

The core issue, of course, is whether a person who uses AI to generate content such as text or artwork can claim copyright protection in the content so generated. Put another way, can a user who deploys artificial intelligence to generate a seemingly expressive work (such as artwork or a novel) claim authorship?

This question is not as simple as it may seem. There can be different levels of human involvement in the use of an AI content generating mechanism. At one extreme, there are programs like “Paint,” in which users provide a great deal of input. These kinds of programs may be analogized to paintbrushes, pens and other tools that artists traditionally have used to express their ideas on paper or canvas. Word processing programs are also in this category. It is easy to conclude that the users of these kinds of programs are the authors of works that may be sufficiently creative and original to receive copyright protection.

At the other end of the spectrum are AI services like DALL-E and ChatGPT. Text and images can be generated by these systems with minimal human input. If the only human input is a user’s directive to “Write a story” or “Draw a picture,” then it would be difficult to claim that the author contributed any creative expression. That is to say, it would be difficult to claim that the user authored anything.

Peering into the worm can

The complicating consideration with content-generative AI mechanisms is that they have the potential to allow many different levels of user involvement in the generation of output. The more details a user adds to the instructions s/he gives to the machine, the more it begins to appear that the user is, in fact, contributing something creative to the project.

Is a prompt to “Write a story about a dog” a sufficiently creative contribution to the resulting output to qualify the user as an “author”? Maybe not. But what about, “Write a story about a dog who joins a traveling circus”? Or “Write a story about a dog named Pablo who joins a traveling circus”? Or “Write a story about a dog with a peculiar bark that begins, ‘Once upon a time, there was a dog named Pablo who joined a circus,’ and ends with Pablo deciding to return home”?

At what point along the spectrum of user-provided detail does copyright protectable authorship come into existence?

A question that is just as important to ask is: How much, if at all, should the Copyright Office involve itself with ascertaining the details of the creative process that were involved in a work?

In a similar vein, should copyright registration applicants be required to disclose whether their works contain AI-generated content? Should they be required to affirmatively disclaim rights in elements of AI-generated content that are not protected by copyright?

Expanding the Rule of Doubt

Alternatively, should the U.S. Copyright Office adopt something like a Rule of Doubt when copyright is claimed in AI-generated content? Under the Rule of Doubt in its current form, the U.S. Copyright Office will accept a registration claim containing software object code even though the Office is unable to verify whether the object code contains copyrightable work. In effect, if the applicant attests that the code is copyrightable, then the Copyright Office will assume that it is and will register the claim. Under 37 C.F.R. § 202.20(c)(2)(vii)(B), this may be done when an applicant seeks to register a copyright in object code rather than source code. The same is true of material that is redacted to protect a trade secret.

When the Office issues a registration under the Rule of Doubt, it adds an annotation to the certificate and to the public record indicating that the copyright was registered under the Rule of Doubt.

Under the existing rule, the applicant must file a declaration stating that material for which registration is sought does, in fact, contain original authorship.

This approach allows registration but leaves it to courts (not the Copyright Office) to decide on a case-by-case basis whether material for which copyright is claimed contains copyrightable authorship.  

Expanding the Rule of Doubt to apply to material generated at least in part by AI might not be the most satisfying solution for AI users, but it is one that could result in fewer snags and delays in the registration process.

Conclusion

The Copyright Office has said that it will soon develop registration guidance for works created in part using material generated by artificial intelligence technology. Public notices and events relating to this topic may be expected in the coming months.


Need help with a copyright matter? Contact attorney Thomas James.

Why Machine Training AI with Protected Works is Not Fair Use

… if the underlying goal of copyright’s exclusive rights and the fair use exception is to promote new “authorship,” this is doctrinally fatal to the proposal that training AIs on volumes of protected works favors a finding of fair use.

Guest blogger David Newhoff lays out the argument against the claim that training AI systems with copyright-protected works is fair use. David is the author of Who Invented Oscar Wilde? The Photograph at the Center of Modern American Copyright (Potomac Books 2020) and is a copyright advocate/writer at The Illusion of More.


As most copyright watchers already know, two lawsuits were filed at the start of the new year against AI visual works companies. In the U.S., a class-action was filed by visual artists against DeviantArt, Midjourney, and Stability AI; and in the UK, Getty Images is suing Stability AI. Both cases allege infringing use of large volumes of protected works fed into the systems to “train” the algorithms. Regardless of how these two lawsuits might unfold, I want to address the broad defense, already being argued in the blogosphere, that training generative AIs with volumes of protected works is fair use. I don’t think so.

Copyright advocates, skeptics, and even outright antagonists generally agree that the fair use exception, correctly applied, supports the broad aim of copyright law to promote more creative work. In the language of the Constitution, copyright “promotes the progress of science,” but a more accurate, modern description would be that copyright promotes new “authorship” because we do not tend to describe literature, visual arts, music, etc. as “science.”

The fair use doctrine, codified in the federal statute in 1976, originated as judge-made law, and from the seminal Folsom v. Marsh to the contemporary Andy Warhol Foundation v. Goldsmith, the courts have restated, in one way or another, their responsibility to balance the first author’s exclusive rights with a follow-on author’s interest in creating new expression. And as a matter of general principle, it is held that the public benefits from this balancing act because the result is a more diverse market of creative and cultural works.

Fair use defenses are case-by-case considerations and while there may be specific instances in which an AI purpose may be fair use, there are no blanket exceptions. More broadly, though, if the underlying goal of copyright’s exclusive rights and the fair use exception is to promote new “authorship,” this is doctrinally fatal to the proposal that training AIs on volumes of protected works favors a finding of fair use. Even if a court holds that other limiting doctrines render this activity by certain defendants to be non-infringing, a fair use defense should be rejected at summary judgment—at least for the current state of the technology, in which the schematic encompassing AI machine, AI developer, and AI user does nothing to promote new “authorship” as a matter of law.

The definition of “author” in U.S. copyright law means “human author,” and there are no exceptions to this anywhere in our history. The mere existence of a work we might describe as “creative” is not evidence of an author/owner of that work unless there is a valid nexus between a human’s vision and the resulting work fixed in a tangible medium. If you find an anonymous work of art on the street, absent further research, it has no legal author who can assert a claim of copyright in the work that would hold up in any court. And this hypothetical emphasizes the point that the legal meaning of “author” is more rigorous than the philosophical view that art without humans is oxymoronic. (Although it is plausible to find authorship in a work that combines human creativity with AI, I address that subject below.)

As a matter of law, the AI machine itself is disqualified as an “author” full stop. And although the AI owner/developer and AI user/customer are presumably both human, neither is defensibly an “author” of the expressions output by the AI. At least with the current state of technologies making headlines, nowhere in the process—from training the AI, to developing the algorithm, to entering prompts into the system—is there an essential link between those contributions and the individual expressions output by the machine. Consequently, nothing about the process of ingesting protected works to develop these systems in the first place can plausibly claim to serve the purpose of promoting new “authorship.”

But What About the Google Books Case?

Indeed. In the fair use defenses AI developers will present, we should expect to see them lean substantially on the holding in Authors Guild v. Google Books—a decision which arguably exceeds the purpose of fair use to promote new authorship. The Second Circuit, while acknowledging that it was pushing the boundaries of fair use, found the Google Books tool to be “transformative” for its novel utility in presenting snippets of books; and because that utility necessitates scanning whole books into its database, a defendant AI developer will presumably want to make the comparison. But a fair use defense applied to training AIs with volumes of protected works should fail, even under the highly utilitarian holding in Google Books.

While people of good intent can debate the legal merits of that decision, the utility of the Google Books search engine does broadly serve the interest of new authorship with a useful research tool—one I have used many times myself. Google Books provides a new means by which one author may research the works of another author, and this is immediately distinguishable from the generative AI which may be trained to “write books” without authors. Thus, not only does the generative AI fail to promote authorship of the individual works output by the system, but it fails to promote authorship in general.

Although the technology is primitive for the moment, these AIs are expected to “learn” exponentially and grow in complexity such that AIs will presumably compete with or replace at least some human creators in various fields and disciplines. Thus, an enterprise which proposes to diminish the number of working authors, whether intentionally or unintentionally, should only be viewed as devastating to the purpose of copyright law, including the fair use exception.

AI proponents may argue that “democratizing” creativity (i.e., putting these tools in every hand) promotes authorship by making everyone an author. But aside from the cultural vacuum this illusion of more would create, the user prompting the AI has a high burden to prove authorship, and the answer will depend on what the user contributes relative to the AI. As mentioned above, some AIs may evolve as tools such that the human in some way “collaborates” with the machine to produce a work of authorship. But this hypothetical points to the reason why fair use is a fact-specific, case-by-case consideration. AI Alpha, which autonomously creates, or creates mostly without human direction, should not benefit from the potential fair use defense of AI Beta, which produces a tool designed to aid, but not replace, human creativity.

Broadly Transformative? Don’t Even Go There

Returning to the constitutional purpose of copyright law to “promote science,” the argument has already been floated as a talking point that training AI systems with protected works promotes computer science in general and is, therefore, “transformative” under fair use factor one for this reason. But this argument should find no purchase in court. To the extent that one of these neural networks might eventually spawn revolutionary utility in medicine or finance etc., it would be unsuitable to ask a court to hold that such voyages of general discovery fit the purpose of copyright, to say nothing of the likelihood that the adventure strays inevitably into patent law. Even the most elastic fair use findings to date reject such a broad defense.

It may be shown that no work(s) output by a particular AI infringes (copies) any of the works that went into its training. It may also be determined that the corpus of works fed into an AI is so rapidly atomized into data that even fleeting “reproduction” is found not to exist, and, thus, the 106(1) right is not infringed. Those questions are going to be raised in court before long, and we shall see where they lead. But to presume fair use as a broad defense for AI “training” is existentially offensive to the purpose of copyright, and perhaps to law in general, because it asks the courts to vest rights in non-humans, which is itself anathema to caselaw in other areas.[1]

It is my oft-stated opinion that creative expression without humans is meaningless as a cultural enterprise, but it is a matter of law to say that copyright is meaningless without “authors” and that there is no such thing as non-human “authors.” For this reason, the argument that training AIs on protected works is inherently fair use should be denied with prejudice.


[1] Cetacean Community v. Bush, which held that animals do not have standing in court, was the basis for rejecting PETA’s complaint against photographer Slater for infringing the copyright rights of the monkey in the “Monkey Selfie” fiasco.


A Thousand Cuts: AI and Self-Destruction

David Newhoff comments on generative AI (artificial intelligence) and public policy.

A guest post written by David Newhoff. David is the author of Who Invented Oscar Wilde? The Photograph at the Center of Modern American Copyright (Potomac Books 2020) and a copyright advocate/writer at The Illusion of More.


I woke up the other day thinking about artificial intelligence (AI) in the context of the Cold War and the nuclear arms race, and curiously enough, the next two articles I read about AI made arms-race references. Where my pre-caffeinated mind had gone was back to the early 1980s when, as teenagers, we often asked the futile question of why any nation needed to stockpile nuclear weapons in quantities that could destroy the world many times over.

Every generation of adolescents believes—and at times confirms—that the adults have no idea what the hell they’re doing; and watching the MADness of what often seemed like a rapturous embrace of nuclear annihilation was, perhaps, the unifying existential threat which shaped our generation’s world view. Since then, reasonable arguments have been made that nuclear stalemate has yielded an unprecedented period of relative global peace, but the underlying question remains:  Are we powerless to stop the development of new modes of self-destruction?

Of course, push-button extinction is easy to imagine and, in a way, easy to ignore. If something were to go terribly wrong, and the missiles fly, it’s game over in a matter of minutes with no timeouts left. So, it is possible to “stop worrying” if not quite “love the bomb” (h/t Strangelove); but today’s technological threats preface outcomes that are less merciful than swift obliteration. Instead, they offer a slow and seemingly inexorable decline toward the dystopias of science fiction—a future in which we are not wiped out in a flash but instead “amused to death” (h/t Postman) as we relinquish humanity itself to the exigencies of technologies that serve little or no purpose.

The first essay I read about AI, written by Anja Kaspersen and Wendell Wallach for the Carnegie Council, advocates a “reset” in ethical thinking about AI, arguing that giant technology investments are once again building systems with little consideration for their potential effect on people. “In the current AI discourse we perceive a widespread failure to appreciate why it is so important to champion human dignity. There is risk of creating a world in which meaning and value are stripped from human life,” the authors write. Later, they quote Robert Oppenheimer …

It is not possible to be a scientist unless you believe that the knowledge of the world, and the power which this gives, is a thing which is of intrinsic value to humanity, and that you are using it to help in the spread of knowledge, and are willing to take the consequences.

I have argued repeatedly that generative AI “art” is devoid of meaning and value and that the question posed by these technologies is not merely how they might influence copyright law, but whether they should exist at all. It may seem farfetched to contemplate banning or regulating the development of AI tech, but it should not be viewed as an outlandish proposal. If certain AI developments have the capacity to dramatically alter human existence—perhaps even erode what it means to be human—why is this any less a subject of public policy than regulating a nuclear power plant or food safety?

Of course, public policy means legislators, and it is quixotic to believe that any Congress, let alone the current one, could sensibly address AI before the industry causes havoc. At best, the tech would flood the market long before the most sincere, bipartisan efforts of lawmakers could grasp the issues; and at worst, far too many politicians have shown that they would sooner exploit these technologies for their own gain than seek to regulate them in the public interest. “AI applications are increasingly being developed to track and manipulate humans, whether for commercial, political, or military purposes, by all means available—including deception,” write Kaspersen and Wallach. I think it’s fair to read that as Cambridge Analytica 2.0 and to recognize that the parties who used the Beta version are still around—and many have offices on Capitol Hill.

Kaspersen and Wallach predict that we may soon discover that generative AI will have the same effect on education that “social media has had on truth.” In response, I would ask the following: In the seven years since the destructive power of social media became headline news, have those revelations significantly changed the conversation, let alone muted the cyber-libertarian dogma of the platform owners? I suspect that AI in the classroom threatens to exacerbate rather than parallel the damage done by social media to truth (i.e., reason). If social media has dulled Socratic skills with the flavors of narcissism, ChatGPT promises a future that does not remember what Socratic skills used to mean.

And that brings me to the next article I read in which Chris Gillard and Pete Rorabaugh, writing for Slate, use “arms race” as a metaphor to criticize technological responses to the prospect of students cheating with AI systems like ChatGPT. Their article begins:

In the classroom of the future—if there still are any—it’s easy to imagine the endpoint of an arms race: an artificial intelligence that generates the day’s lessons and prompts, a student-deployed A.I. that will surreptitiously do the assignment, and finally, a third-party A.I. that will determine if any of the pupils actually did the work with their own fingers and brain. Loop complete; no humans needed. If you were to take all the hype about ChatGPT at face value, this might feel inevitable. It’s not.

In what I feared might be another tech-apologist piece labeling concern about AI a “moral panic,” Gillard and Rorabaugh make the opposite point. Their criticism of software solutions to mitigate student cheating is that such solutions reflect small thinking, erroneously accepting as a fait accompli that these AI systems are here to stay whether we like it or not. “Telling us that resistance to a particular technology is futile is a favorite talking point for technologists who release systems with few if any guardrails out into the world and then put the onus on society to address most of the problems that arise,” they write.

In other words, here we go again. The ethical, and perhaps legal, challenges posed by AI are an extension of the same conversation we generally failed to have about social media and its cheery promises to be an engine of democracy. “It’s a failure of imagination to think that we must learn to live with an A.I. writing tool just because it was built,” Gillard and Rorabaugh argue. I would like to agree but am skeptical that the imagination required to reject certain technologies exists outside the rooms where ethicists gather. And this is why I wake up thinking about AI in the context of the Cold War, except of course that the doctrine of Mutually Assured Destruction was rational by contrast.


Photo by the author.

View the original article on The Illusion of More.


Getty Images Litigation Update

Getty Images has now filed a lawsuit for copyright infringement in the United States.

In a previous post, I reported on a lawsuit that Getty Images had filed in the United Kingdom against Stability AI. Now Getty Images has filed similar claims against Stability AI in the United States.

The complaint, which has been filed in federal district court in Delaware, alleges claims of copyright infringement; providing false copyright management information; removal or alteration of copyright management information; trademark infringement; trademark dilution; unfair competition; and deceptive trade practices. Both monetary damages and injunctive relief are being sought.

An interesting twist in the Getty litigation is that AI-generated works allegedly have included the Getty Images trademark.

Getty Images logo on AI-generated image
(Reproduction of a portion of the Complaint filed in Getty Images v. Stability AI, Inc. in the U.S. District Court for the District of Delaware, case no. 1:23-cv-00135-UNA (2023). The image has been cropped to avoid reproducing the likenesses of persons appearing in the image and to display only what is needed here for purposes of news reporting and commentary.)

Getty Images, which is in the business of collecting and licensing quality images, alleges (among other things) that affixing its trademark to poor quality AI-generated images tarnishes the company’s reputation. If proven, this could constitute trademark dilution, which is prohibited by the Lanham Act.

AI Legal Issues

Thomas James (“The Cokato Copyright Attorney”) describes the range of legal issues, most of which have not yet been resolved, that artificial intelligence (AI) systems have spawned.

AI is not new. Its implementation also is not new. In fact, consumers regularly interact with AI-powered systems every day. Online help systems often use AI to provide quick answers to questions that customers routinely ask. Sometimes these are designed to give a user the impression that s/he is communicating with a person.

AI systems also perform discrete functions such as analyzing a credit report and rendering a decision on a loan or credit card application, or screening employment applications.

Many other uses have been found for AI and new ones are being developed all the time. AI has been trained not just to perform customer service tasks, but also to perform analytics and diagnostic tests; to repair products; to update software; to drive cars; and even to write articles and create images and videos. These developments may be helping to streamline tasks and improve productivity, but they have also generated a range of new legal issues.

Tort liability

While there are many different kinds of tort claims, the elements of tort claims are basically the same: (1) The person sought to be held liable for damages or ordered to comply with a court order must have owed a duty to the person who is seeking the legal remedy; (2) the person breached that duty; (3) the person seeking the legal remedy experienced harm, i.e., real or threatened injury; and (4) the breach was the actual and proximate cause of the harm.

The kind of harm that must be demonstrated varies depending on the kind of tort claim. For example, a claim of negligent driving might involve bodily injury, while a claim of defamation might involve injury to reputation. For some kinds of tort claims, the harm might involve financial or economic injury. 

The duty may be specified in a statute or contract, or it might be judge-made (“common law”). It may take the form of an affirmative obligation (such as a doctor’s obligation to provide a requisite level of care to a patient), or it may take a negative form, such as the common law duty to refrain from assaulting another person.

The advent of AI does not really require any change in these basic principles, but they can be more difficult to apply to scenarios that involve the use of an AI system.

Example. Acme Co. manufactures and markets Auto-Doc, a machine that diagnoses and repairs car problems. Mike’s Repair Shop lays off its automotive technician employees and replaces them with one of these machines. Suzie Consumer brings her VW Jetta to Mike’s Repair Shop for service because she has been hearing what she describes as a grinding noise that she thinks is coming from either the engine or the glove compartment. The Auto-Doc machine adds engine oil, replaces belts, and removes the contents of the glove compartment. Later that day, Suzie’s brakes fail and her vehicle hits and kills a pedestrian in a crosswalk. A forensic investigation reveals that her brakes failed because they were badly worn. Who should be held liable for the pedestrian’s death – Suzie, Mike’s, Acme Co., some combination of two of them, all of them, or none of them?

The allocation of responsibility will depend, in part, on the degree of autonomy the AI machine possesses. Of course, if it can be shown that Suzie knew or should have known that her brakes were bad, then she most likely could be held responsible for causing the pedestrian’s death. But what about the others? Their liability, or share of liability, is affected by the degree of autonomy the AI machine possesses. If it is completely autonomous, then Acme might be held responsible for failing to program the machine in such a way that it would test for and detect worn brake pads even if a customer expresses an erroneous belief that the sound is coming from the engine or the glove compartment. On the other hand, if the machine is designed only to offer suggestions of possible problems and solutions,  leaving it up to a mechanic to accept or reject them, then Mike’s might be held responsible for negligently accepting the machine’s recommendations. 

Assuming the Auto-Doc machine is fully autonomous, should Mike’s be faulted for relying on it to correctly diagnose car problems? Is Mike’s entitled to rely on Acme’s representations about Auto-Doc’s capabilities, or would the repair shop have a duty to inquire about and/or investigate Auto-Doc’s limitations? Assuming Suzie did not know, and had no reason to suspect, her brakes were worn out, should she be faulted for relying on a fully autonomous machine instead of taking the car to a trained human mechanic?  Why or why not?

Criminal liability

It is conceivable that an AI system might engage in activity that is prohibited by an applicable jurisdiction’s criminal laws. E-mail address harvesting is an example. In the United States, the CAN-SPAM Act makes it a crime to send a commercial email message to an email address that was obtained by automated scraping of Internet websites for email addresses. Of course, if a person intentionally uses an AI system for scraping, then liability should be clear. But what if an AI system “learns” to engage in scraping?

AI-generated criminal output may also be a problem. Some countries have made it a crime to display a Nazi symbol, such as a swastika, on a website. Will criminal liability attach if a website or blog owner uses AI to generate illustrated articles about World War II and the system generates and displays articles that are illustrated with World War II era German flags and military uniforms? In the United States, creating or possessing child pornography is illegal. Will criminal liability attach if an AI system generates it?

Some of these kinds of issues can be resolved through traditional legal analysis of the intent and scienter elements of the definitions of crimes. A jurisdiction might wish to consider, however, whether AI systems should be regulated to require system creators to implement measures that would prevent illegal uses of the technology. This raises policy and feasibility questions, such as whether and what kinds of restraints on machine learning should be required, and how to enforce them. Further, would prior restraints on the design and/or use of AI-powered expressive-content-generating systems infringe on First Amendment rights?  

Product liability

Related to the problem of allocating responsibility for harm caused by the use of an AI mechanism is the question whether anyone should be held liable for harm caused when the mechanism is not defective, that is to say, when it is operating as it should.

Example. Acme Co. manufactures and sells Auto-Article, a software program that is designed to create content of a type and kind the user specifies. The purpose of the product is to enable a website owner to generate and publish a large volume of content frequently, thereby improving the website’s search engine ranking. It operates by scouring the Internet and analyzing instances of the content the user specifies to produce new content that “looks like” them. XYZ Co. uses the software to generate articles on medical topics. One of these articles explains that chest pain can be caused by esophageal spasms but that these typically do not require treatment unless they occur frequently enough to interfere with a person’s ability to eat or drink. Joe is experiencing chest pain. He does not seek medical help, however, because he read the article and therefore believes he is experiencing esophageal spasms. He later collapses and dies from a heart attack. A medical doctor is prepared to testify that his death could have been prevented if he had sought medical attention when he began experiencing the pain.

Should either Acme or XYZ Co. be held liable for Joe’s death? Acme could argue that its product was not defective. It was fit for its intended purposes, namely, a machine learning system that generates articles that look like articles of the kind a user specifies. What about XYZ Co.? Would the answer be different if XYZ had published a notice on its site that the information provided in its articles is not necessarily complete and that the articles are not a substitute for advice from a qualified medical professional? If XYZ incurs liability as a result of the publication, would it have a claim against Acme, such as for failure to warn it of the risks of using AI to generate articles on medical topics?

Consumer protection

AI system deployment raises significant health and safety concerns. There is the obvious example of an AI system making incorrect medical diagnoses or treatment recommendations. Autonomous (“self-driving”) motor vehicles are also examples. An extensive body of consumer protection regulations may be anticipated.

Forensic and evidentiary issues

In situations involving the use of semi-autonomous AI, allocating responsibility for harm resulting from the operation of the AI system may be difficult. The most basic question in this respect is whether an AI system was in use or not. For example, if a motor vehicle that can be operated in either manual or autonomous mode is involved in an accident, and fault or the extent of liability depends on that (see the discussion of tort liability, above), then a way of determining the mode in which the car was being driven at the time will be needed.

If, in the case of a semi-autonomous AI system, tort liability must be allocated between the creator of the system and a user of it, the question of fault may depend on who actually caused a particular tortious operation to be executed – the system creator or the user. In that event, some method of retracing the steps the AI system used may be essential. This may also be necessary in situations where some factor other than AI contributed, or might have contributed, to the injury. Regulation may be needed to ensure that the steps in an AI system’s operations are, in fact, capable of being ascertained.

Transparency problems also fall into this category. As explained in the Journal of Responsible Technology, people might be put on no-fly lists, denied jobs or benefits, or refused credit without knowing anything more than that the decision was made through some sort of automated process. Even if transparency is achieved and/or mandated, contestability will also be an issue.

Data Privacy

To the extent an AI system collects and stores personal or private information, there is a risk that someone may gain unauthorized access to it. Depending on how the system is designed to function, there is also a risk that it might autonomously disclose legally protected personal or private information. Security breaches can cause catastrophic problems for data subjects.

Publicity rights

Many jurisdictions recognize a cause of action for violation of a person’s publicity rights (sometimes called “misappropriation of personality”). In these jurisdictions, a person has an exclusive legal right to commercially exploit his or her own name, likeness or voice. To what extent, and under what circumstances, should liability attach if a commercialized AI system analyzes the name, likeness or voice of a person that it discovers on the Internet? Will the answer depend on how much information about a particular individual’s voice, name or likeness the system uses, on one hand, or how closely the generated output resembles that individual’s voice, name or likeness, on the other?

Contracts

The primary AI-related contract concern is about drafting agreements that adequately and effectively allocate liability for losses resulting from the use of AI technology. Insurance can be expected to play a larger role as the use of AI spreads into more areas.

Bias, Discrimination, Diversity & Inclusion

Some legislators have expressed concern that AI systems will reflect and perpetuate biases and perhaps discriminatory patterns of culture. To what extent should AI system developers be required to ensure that the data their systems use are collected from a diverse mixture of races, ethnicities, genders, gender identities, sexual orientations, abilities and disabilities, socioeconomic classes, and so on? Should developers be required to apply some sort of principle of “equity” with respect to these classifications, and if so, whose vision of equity should they be required to enforce? To what extent should government be involved in making these decisions for system developers and users?

Copyright

AI-generated works such as articles, drawings, animations, and music raise two kinds of copyright issues:

  1. Input issues, i.e., questions such as whether AI systems that create new works based on existing copyright-protected works infringe the copyrights in those works; and
  2. Output issues, such as who, if anybody, owns the copyright in an AI-generated work.

I’ve written about AI copyright ownership issues and AI copyright infringement issues in previous blog posts on The Cokato Copyright Attorney.

Patents and other IP

Computer programs can be patented. AI systems can be devised to write computer programs. Can an AI-generated computer program that meets the usual criteria for patentability (novelty, utility, etc.) be patented?

Is existing intellectual property law adequate to deal with AI-generated inventions and creative works? The World Intellectual Property Organization (WIPO) apparently does not think so. It is formulating recommendations for new regulations to deal with the intellectual property aspects of AI.

Conclusion

AI systems raise a wide range of legal issues. The ones identified in this article are merely a sampling, not a complete listing of all possible issues. Not all of these legal issues have answers yet. It can be expected that more AI regulatory measures, in more jurisdictions around the globe, will be coming down the pike very soon.

Contact attorney Thomas James

Contact Minnesota attorney Thomas James for help with copyright and trademark registration and other copyright and trademark related matters.

Does AI Infringe Copyright?

A previous blog post addressed the question whether AI-generated creations are protected by copyright. This could be called the “output question” in the artificial intelligence area of copyright law. Another question is whether using copyright-protected works as input for AI generative processes infringes the copyrights in those works. This could be called the “input question.” Both kinds of questions are now before the courts. Minnesota attorney Tom James describes a framework for analyzing the input question.

The Input Question in AI Copyright Law

by Thomas James, Minnesota attorney

In a previous blog post, I discussed the question whether AI-generated creations are protected by copyright. This could be called the “output question” in the artificial intelligence area of copyright law. Another question is whether using copyright-protected works as input for AI generative processes infringes the copyrights in those works. This could be called the “input question.” Both kinds of questions are now before the courts. In this blog post, I describe a framework for analyzing the input question.

The Cases

The Getty Images lawsuit

Getty Images is a stock photograph company. It licenses the right to use the images in its collection to those who wish to use them on their websites or for other purposes. Stability AI is the creator of Stable Diffusion, which is described as a “text-to-image diffusion model capable of generating photo-realistic images given any text input.” In January, 2023, Getty Images initiated legal proceedings in the United Kingdom against Stability AI. Getty Images claims that Stability AI violated its copyrights by using its images and metadata to train AI software without a license.

The independent artists lawsuit

Another lawsuit raising the question whether AI-generated output infringes copyright has been filed in the United States. In this case, a group of visual artists are seeking class action status for claims against Stability AI, Midjourney Inc. and DeviantArt Inc. The artists claim that the companies use their images to train computers “to produce seemingly new images through a mathematical software process.” They describe AI-generated artwork as “collages” made in violation of copyright owners’ exclusive right to create derivative works.

The GitHub Copilot lawsuit

In November, 2022, a class action lawsuit was filed in a U.S. federal court against GitHub, Microsoft, and OpenAI. The lawsuit claims the GitHub Copilot and OpenAI Codex coding assistant services use existing code to generate new code. By training their AI systems on open-source programs, the plaintiffs claim, the defendants have infringed the rights of developers who have posted code under open-source licenses that require attribution.

How AI Works

AI, of course, stands for artificial intelligence. Almost all AI techniques involve machine learning. Machine learning, in turn, involves using a computer algorithm to make a machine improve its performance over time, without having to pre-program it with specific instructions. Data is input to enable the machine to do this. For example, to teach a machine to create a work in the style of Vincent van Gogh, many instances of van Gogh’s works would be input. The AI program contains numerous nodes that focus on different aspects of an image. Working together, these nodes piece together common elements of a van Gogh painting from the images the machine has been given to analyze. After going through many images of van Gogh paintings, the machine “learns” the features of a typical van Gogh painting. The machine can then generate a new image containing these features.

In the same way, a machine can be programmed to analyze many instances of code and generate new code.
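The pattern can be illustrated with a deliberately tiny sketch. Real generative systems like Midjourney or Stable Diffusion use neural networks and diffusion processes, not a lookup table; the toy Python program below (all names in it are invented for illustration) is only meant to make the structure of the input question concrete: existing works go in as training data, and statistically similar new material comes out, without any verbatim copy of a whole source necessarily being stored.

```python
import random
from collections import defaultdict

def train(examples):
    """Build a table recording which word was observed to follow
    which, learned solely from the example texts supplied."""
    model = defaultdict(list)
    for text in examples:
        words = text.split()
        for current, following in zip(words, words[1:]):
            model[current].append(following)
    return model

def generate(model, start, length=8, seed=0):
    """Produce a new word sequence that mimics the transition
    patterns observed in the training examples."""
    rng = random.Random(seed)
    word = start
    output = [word]
    for _ in range(length):
        choices = model.get(word)
        if not choices:  # no observed successor; stop generating
            break
        word = rng.choice(choices)
        output.append(word)
    return " ".join(output)

# Toy "training data": the machine stores no rules of grammar,
# only word-to-word transitions observed in the input works.
examples = [
    "the starry night swirls over the village",
    "the night sky swirls with color",
]
model = train(examples)
new_text = generate(model, "the")
```

Even in this toy, every word of the output is traceable to the training inputs, yet the generated sequence as a whole may match no single input. The legal question is whether that intermediate act of analyzing protected works, and the resemblance of the output to them, implicates the copyright owners’ exclusive rights.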

The input question comes down to this: When a program causes a machine to analyze the characteristics of a creative work, or a group of works, in order to generate a new work with the same or similar characteristics, does that use infringe the copyright in the work(s) so analyzed?

The Exclusive Rights of Copyright Owners

In the United States, the owner of a copyright in a work has the exclusive rights to:

  • reproduce (make copies of) it;
  • distribute copies of it;
  • publicly perform it;
  • publicly display it; and
  • make derivative works based on it.

(17 U.S.C. § 106). A copyright is infringed when a person exercises any of these exclusive rights without the copyright owner’s permission.

Copyright protection extends only to expression, however. Copyright does not protect ideas, facts, processes, methods, systems or principles.

Direct Infringement

Infringement can be either direct or indirect. Direct infringement occurs when somebody directly violates one of the exclusive rights of a copyright owner. Examples would be a musician who performs a copyright-protected song in public without permission, or a cartoonist who creates a comic based on the Batman and Robin characters and stories without permission.

The kind of tool an infringer uses is not of any great moment. A writer who uses word-processing software to write a story that is simply a copy of someone else’s copyright-protected story is no less guilty of infringement merely because the actual typewritten letters were generated using a computer program that directs a machine to reproduce and display typographical characters in the sequence a user selects.

Contributory and Vicarious Infringement

Infringement liability may also arise indirectly. If one person knowingly induces another person to infringe or contributes to the other person’s infringement in some other way, then each of them may be liable for copyright infringement. The person who actually committed the infringing act could be liable for direct infringement. The person who knowingly encouraged, solicited, induced or facilitated the other person’s infringing act(s) could be liable for contributory infringement.

Vicarious infringement occurs when the law holds one person responsible for the conduct of another because of the nature of the legal relationship between them. The employment relationship is the most common example. An employer generally is held responsible for an employee’s conduct, provided the employee’s acts were performed within the course and scope of the employment. Copyright infringement is not an exception to that rule.

Programmer vs. User

Direct infringement liability

Under U.S. law, machines are treated as extensions of the people who set them in motion. A camera, for example, is an extension of the photographer. Any images a person causes a camera to generate by pushing a button on it are considered the creations of the person who pushed the button, not of the person(s) who manufactured the camera, much less of the camera itself. By the same token, a person who uses the controls on a machine to direct it to copy elements of other people’s works should be considered the creator of the new work so created. If using the program entails instructing the machine to create an unauthorized derivative work of copyright-protected images, then it would be the user, not the machine or the software writer, who would be at risk of liability for direct copyright infringement.

Contributory infringement liability

Knowingly providing a device or mechanism to people who use it to infringe copyrights creates a risk of liability for contributory copyright infringement. Under Sony Corp. v. Universal City Studios, however, merely distributing a mechanism that people can use to infringe copyrights is not enough for contributory liability to attach if the mechanism is capable of substantial noninfringing uses. Arguably, AI has many such uses. For example, it might be used to generate new works from public domain works. Or it might be used to create parodies. (Parody is a paradigmatic example of fair use, which does not give rise to infringement liability.)

The situation is different if a company goes further and induces, solicits or encourages people to use its mechanism to infringe copyrights. Then it may be at risk of contributory liability. As the United States Supreme Court has said, “one who distributes a device with the object of promoting its use to infringe copyright, as shown by clear expression or other affirmative steps taken to foster infringement, is liable for the resulting acts of infringement by third parties.” Metro-Goldwyn-Mayer Studios Inc. v. Grokster, Ltd., 545 U.S. 913, 919 (2005). (Remember Napster?)

Fair Use

If AI-generated output is found to either directly or indirectly infringe copyright(s), the infringer nevertheless might not be held liable, if the infringement amounts to fair use of the copyrighted work(s) that were used as the input for the AI-generated work(s).

Ever since some rap artists began using snippets of copyright-protected music and sound recordings without permission, courts have embarked on a treacherous expedition to articulate a meaningful dividing line between unauthorized derivative works, on one hand, and unauthorized transformative works, on the other. Although the Copyright Act gives copyright owners the exclusive right to create works based on their copyrighted works (called derivative works), courts have held that an unauthorized derivative work may be fair use if it is “transformative.” This has caused a great deal of uncertainty in the law, particularly since the U.S. Copyright Act expressly defines a derivative work as one that transforms another work. (See 17 U.S.C. § 101: “A ‘derivative work’ is a work based upon one or more preexisting works, . . . or any other form in which a work may be recast, transformed, or adapted.” (emphasis added).)

When interpreting and applying the transformative use branch of Fair Use doctrine, courts have issued conflicting and contradictory decisions. As I wrote in another blog post, the U.S. Supreme Court has recently agreed to hear and decide Andy Warhol Foundation for the Visual Arts v. Goldsmith. It is anticipated that the Court will use this case to attempt to clear up all the confusion around the doctrine. It is also possible the Court might take even more drastic action concerning the whole “transformative use” branch of Fair Use.

Some speculate that the questions the Justices asked during oral arguments in Warhol signal a desire to retreat from the expansion of fair use that the “transformativeness” idea spawned. On the other hand, some of the Court’s recent decisions, such as Google v. Oracle, suggest the Court is not particularly worried about large-scale copyright infringing activity, insofar as Fair Use doctrine is concerned.

Conclusion

To date, it does not appear that there is any direct legal precedent in the United States for classifying the use of mass quantities of works as training tools for AI as “fair use.” It seems, however, that there soon will be precedent on that issue, one way or the other. In the meantime, users of generative AI systems should proceed with caution.

Newly Public Domain Works 2023

Thousands of books, movies, songs and other creative works entered the public domain in the United States in 2023. Here is a partial list compiled by Cokato Minnesota attorney Thomas James.

Thousands of books, movies, songs and other creative works enter the public domain in the United States this year. Here is a partial list. (Click here for last year’s list).

Sherlock Holmes

Last year, it was Winnie the Pooh. This year, Sherlock Holmes officially enters the public domain. Pooh’s release from copyright protection sparked some creative uses of A. A. Milne’s fictional bear, from a comic strip in which Pooh Bear appears completely naked (i.e., without his red shirt on) to a slasher film called Winnie the Pooh: Blood and Honey, coming soon to a theater near you.

Sherlock Holmes and his sidekick, Dr. Watson, have actually been in the public domain for a long time, since Arthur Conan Doyle began publishing stories about them in the late nineteenth century. The copyrights in those works had already expired when Congress extended copyright terms in 1998. Legal controversies continued to arise, however, over which elements of those characters were still protected by copyright. New elements that were added in later stories potentially could still be protected by copyright even if the copyrights in previous stories in the series had expired. Now, however, the copyrights in the last two Sherlock Holmes stories Doyle wrote have expired. Therefore, it appears that all elements of Doyle’s Sherlock Holmes stories are in the public domain now.

One can only imagine what creative uses people will make of the Holmes and Watson characters now that they are officially in the public domain, too.


The Tower Treasure (Hardy Boys)

The Tower Treasure is the first book in the Hardy Boys series of mystery books that Franklin W. Dixon wrote. As of this year, it is in the public domain.

Again, however, only the elements of the characters and the story in that particular book are in the public domain now. Elements that appeared only in later volumes in the series might still be protected by copyright.

Steppenwolf

Herman Hesse’s Der Steppenwolf, in the original German, is now in the public domain. This version is to be distinguished from English translations of the work, which might still be protected by copyright as derivative works. It is also to be distinguished from the classic rock band by the same name. It is always important to distinguish between trademark and other kinds of uses of a term.

The Bridge of San Luis Rey

Thornton Wilder’s Pulitzer Prize winning novel about an investigation into the lives and deaths of people involved in the collapse of a Peruvian rope bridge has now entered the public domain.

Mosquitoes

William Faulkner’s satiric novel enters the public domain this year. This work is to be distinguished from the insect by the same name. The insect, annoyingly, has been in the public domain for centuries.

The Gangs of New York

Herbert Asbury’s The Gangs of New York is now in the public domain.

Amerika

Franz Kafka’s Amerika (also known as Lost in America) was published posthumously in 1927. It is now in the public domain.

The Jazz Singer

The Jazz Singer is a 1927 American film and one of the first to feature sound. Warner Brothers produced it using the Vitaphone sound-on-disc system and it featured six songs performed by Al Jolson. The short story on which it is based, “The Day of Atonement,” has already been in the public domain for some time. Now the film is, too.

The Battle of the Century

The Laurel and Hardy film, The Battle of the Century, is now in the public domain. Other Laurel and Hardy films, however, may still be protected by copyright.

Metropolis

Science fiction fans are most likely familiar with this 1927 German science fiction silent movie written by Thea von Harbou and Fritz Lang based on von Harbou’s 1925 novel. It was one of the first feature-length movies in that genre. The film is also famous for the phrase, “The Mediator Between the Head and the Hands Must Be the Heart.”

The Lodger

Alfred Hitchcock’s first thriller has entered the public domain.

“We All Scream for Ice Cream”

The song, “I Scream; You Scream; We All Scream for Ice Cream” is now in the public domain. Don’t worry if you uttered this phrase prior to January 1, 2023. Titles and short phrases are not protected by copyright. Now, it would be a different story if you’ve publicly performed the song, or published or recorded the song and/or the lyrics. Merely uttering those words, however, is not a crime.

“Puttin’ on the Ritz”

This song was originally written by Irving Berlin in 1927. Therefore it is now in the public domain. Taco released a performance of a cover version of this song in 1982. This version of the song made it all the way to number 53 in VH1’s 100 Greatest One-Hit Wonders of the ’80s special. Note that even if the original musical composition and lyrics are in the public domain now, recorded performances of the song by particular artists may still be protected. The copyrights in a musical composition and a recording of a performance of it are separate and distinct things. Don’t go copying Taco’s recorded performance of the song without permission. Please.

“My Blue Heaven”

This song, written by Walter Donaldson and George Whiting, is now in the public domain. It was used in the Ziegfeld Follies and was a big hit for crooner Gene Austin. It is not to be confused with the 1990 Steve Martin film with that name, which is still protected by copyright.

“The Best Things In Life Are Free”

This song was written by Buddy DeSylva, Lew Brown and Ray Henderson for the 1927 musical Good News. Many performers have covered it since then. The (ahem) good news is that it is now in the public domain.

Caveats

The works described in this blog post have entered the public domain under U.S. copyright law. The terms of copyrights in other countries are not the same. In the European Union, for example, Herman Hesse’s Der Steppenwolf is still protected by copyright as of January 1, 2023.

And again, remember that even if a work has entered the public domain, new elements first appearing in a derivative work based on it might still be protected by copyright.

The featured image in this article is “The Man with the Twisted Lip.” It appeared in The Strand Magazine in December, 1891. The original caption was “The pipe was still between his lips.” The drawing is in the public domain.

The Top Copyright Cases of 2022

Cokato Minnesota attorney Tom James (“The Cokato Copyright Attorney”) presents his annual list of the top copyright cases of the year.

My selections for the top copyright cases of the year.

“Dark Horse”

Marcus Gray had sued Katy Perry for copyright infringement, claiming that her “Dark Horse” song unlawfully copied portions of his song, “Joyful Noise.” The district court held that the disputed series of eight notes appearing in Gray’s song were not “particularly unique or rare,” and therefore were not protected against infringement. The Ninth Circuit Court of Appeals agreed, ruling that the series of eight notes was not sufficiently original and creative to receive copyright protection. Gray v. Hudson.

“Shape of You”

Across the pond, another music copyright infringement lawsuit was tossed. This one involved Ed Sheeran’s “Shape of You” and Sam Chokri’s “Oh Why.” In this case, the judge refused to infer from the similarities in the two songs that copyright infringement had occurred. The judge ruled that the portion of the song as to which copying had been claimed was “so short, simple, commonplace and obvious in the context of the rest of the song that it is not credible that Mr. Sheeran sought out inspiration from other songs to come up with it.” Sheeran v. Chokri.

Instagram images

Another case, this one out of California, involves a lawsuit filed by photographers against Instagram, alleging secondary copyright infringement. The photographers claim that Instagram’s embedding tool facilitates copyright infringement by users of the website. The district court judge dismissed the lawsuit, saying he was bound by the so-called “server test” the Ninth Circuit Court of Appeals announced in Perfect 10 v. Amazon. The server test says, in effect, that a website does not unlawfully “display” a copyrighted image if the image is stored on the original site’s server and is merely embedded in a search result that appears on a user’s screen. The photographers have an appeal pending before the Ninth Circuit Court of Appeals, asking the Court to reconsider its decision in Perfect 10. Courts in other jurisdictions have rejected Perfect 10 v. Amazon. The Court now has the option to either overrule Perfect 10 and allow the photographers’ lawsuit to proceed, or to re-affirm it, thereby cementing a split with other circuits that could eventually lead to U.S. Supreme Court review. Hunley v. Instagram.

Tattoos

Is reproducing a copyrighted image in a tattoo fair use? That is a question at issue in a case pending in California. Photographer Jeffrey Sedlik took a photograph of musician Miles Davis. Later, a tattoo artist allegedly traced a printout of it to create a stencil to transfer to human skin as a tattoo. Sedlik filed a copyright infringement lawsuit in the United States District Court for the Central District of California. Both parties moved for summary judgment. The judge analyzed the claims using the four “fair use” factors. Although the ultimate ruling was that fact issues remained to be decided by a jury, the court made some important determinations along the way. In particular, the court ruled that affixing an image to skin is not necessarily a protected “transformative use” of the image. According to the court, it is for a jury to decide whether the image at issue in a particular case has been changed significantly enough to be considered “transformative.” It will be interesting to see how this case ultimately plays out, especially if it is still pending when the United States Supreme Court announces its decision in the Warhol case (see below). Sedlik v. Von Drachenberg.

Digital libraries

The book publishers’ lawsuit against Internet Archive, about which I wrote in a previous blog post, is still at the summary judgment stage. Its potential future implications are far-reaching. It is a copyright infringement lawsuit that book publishers filed in the federal district court for the Southern District of New York. The gravamen of the complaint is that Internet Archive allegedly has scanned over a million books and has made them freely available to the public via an Internet website without securing a license or permission from the copyright rights-holders. The case will test the “controlled digital lending” theory of fair use that was propounded in a white paper published by David R. Hansen and Kyle K. Courtney. They argued that distributing digitized copies of books by libraries should be regarded as the functional equivalent of lending physical copies of books to library patrons. Parties and amici have filed briefs in support of motions for summary judgment. An order on the motions is expected soon. The case is Hachette Book Group et al. v. Internet Archive.

Copyright registration

In Fourth Estate Public Benefit Corp. v. Wall-Street.com LLC, 139 S. Ct. 881, 889 (2019), the United States Supreme Court interpreted 17 U.S.C. § 411(a) to mean that a copyright owner cannot file an infringement claim in federal court without first securing either a registration certificate or an official notice of denial of registration from the Copyright Office. In an Illinois Law Review article, I argued that this imposes an unduly onerous burden on copyright owners and that Congress should amend the Copyright Act to abolish the requirement. Unfortunately, Congress has not done that. As I said in a previous blog post, Congressional inaction to correct a harsh law with potentially unjust consequences often leads to exercises of the judicial power of statutory interpretation to ameliorate those consequences. Unicolors v. H&M Hennes & Mauritz.

Unicolors, owner of the copyrights in various fabric designs, sued H&M Hennes & Mauritz (H&M), alleging copyright infringement. The jury rendered a verdict in favor of Unicolors, but H&M moved for judgment as a matter of law. H&M argued that Unicolors had failed to satisfy the requirement of obtaining a registration certificate prior to commencing suit. Although Unicolors had obtained a registration, H&M argued that the registration was not a valid one. Specifically, H&M argued that Unicolors had improperly applied to register multiple works with a single application. According to 37 CFR § 202.3(b)(4) (2020), a single application cannot be used to register multiple works unless all of the works in the application were included in the same unit of publication. The 31 fabric designs, H&M contended, had not all been first published at the same time in a single unit; some had been made available separately exclusively to certain customers. Therefore, they could not properly be registered together as a unit of publication.

The district court denied the motion, holding that a registration may be valid even if it contains inaccurate information, provided the registrant did not know the information was inaccurate. The Ninth Circuit Court of Appeals reversed, holding that characterizing the group of works as a “unit of publication” in the registration application was a mistake of law, not a mistake of fact. Applying the traditional rule of thumb that ignorance of the law is no excuse, the court in essence ruled that although a mistake of fact in a registration application might not invalidate the registration for purposes of the pre-litigation registration requirement, a mistake of law will.

The United States Supreme Court granted certiorari and reversed the Ninth Circuit. The Court held that the safe harbor in 17 U.S.C. § 411(b) does not distinguish between mistakes of law and mistakes of fact: lack of knowledge of either kind can excuse an inaccuracy in a registration application. The ruling allowed the infringement verdict to stand notwithstanding the improper registration of the works together as a unit of publication rather than individually.

It is hazardous to read too much into the ruling in this case. Copyright claimants certainly should not interpret it to mean that they no longer need to bother with registering a copyright before trying to enforce it in court, or that they do not need to concern themselves with doing it properly. The pre-litigation registration requirement still stands (in the United States), and the Court has not held that it condones willful blindness of legal requirements. Copyright claimants ignore them at their peril.

Andy Warhol, Prince Transformer

I wrote about the Warhol case in a previous blog post. Basically, it is a copyright infringement case. Lynn Goldsmith took a photograph of Prince in her studio, and Andy Warhol later based a series of silkscreen prints and pencil illustrations on it without a license or permission. The Andy Warhol Foundation sought a declaratory judgment that Warhol’s use of the photograph was “fair use.” Goldsmith counterclaimed for copyright infringement. The district court ruled in favor of Warhol and dismissed the photographer’s infringement claim. The Court of Appeals reversed, holding that the district court misapplied the four “fair use” factors and that the derivative works Warhol created do not qualify as fair use. The U.S. Supreme Court granted certiorari and heard oral arguments in October 2022. A decision is expected next year.

Because this case gives the United States Supreme Court an opportunity to bring some clarity to the extremely murky “transformative use” area of copyright law, it is not only one of this year’s most important copyright cases, but it very likely will wind up being one of the most important copyright cases of all time. Andy Warhol Foundation for the Visual Arts v. Goldsmith.

The Philosophy of Copyright

The Internet has created an existential crisis for copyrights. Well, not really. It has impelled some people to consider, for the first time, the rationale for copyrights and the legal protection of them. That sounds a lot less dramatic and thrilling than “existential crisis,” though.

The two frameworks

There are two basic frameworks for thinking about copyright law: deontological and utilitarian. Deontological approaches focus on rights and duties. Utilitarian approaches focus on the usefulness of copyrights in promoting or accomplishing some social good.

In simpler terms, we can think of copyrights as deserving protection because respecting individual property rights is a moral good. That is the deontological way of thinking about them. Alternatively, we can think about copyright protection in terms of how it benefits society – the greatest happiness for the greatest number of people. That is the utilitarian approach.

Generally speaking, European countries have tended toward the deontological, while the United States has tended toward the utilitarian. The droit moral, an artist’s rights of attribution and integrity (the rights to be credited as the author of a work and to the preservation of its integrity), originated in Europe. The United States Constitution, by contrast, declares that the purpose of giving authors and inventors exclusive rights is simply “[t]o promote the Progress of Science and useful Arts” (U.S. Const. art. I, § 8, cl. 8), a clearly utilitarian expression of the rationale for protecting intellectual property.

These are just generalities, of course. The amendment of the U.S. Copyright Act to include protection for the integrity and attribution rights of visual artists is an example of how the European approach has been “coming to America.” At the same time, European policy-makers are increasingly influenced by utilitarian ways of thinking.

The difference between the two approaches comes into sharp relief in the area of fair use. If copyright is viewed as a personal right and infringement as a moral wrong, the concept of “fair use” is difficult to justify. Instead, fair use is usually justified on utilitarian grounds – the idea that infringement of individual rights can be excused if it makes a great many people happier (the “public benefit” consideration in fair use analysis).

The nature of the right

German law developed on a view of copyright as a personality right. Personality rights are recognized to some extent in American law, too. In the United States, however, only a person’s name, voice and likeness are considered to be elements of a person’s “personality.” The products of one’s mind, the works the person creates, are not. Those things are considered property rights in the United States.

Among those who view copyrights as property rights, there is a divide between those who view them as natural rights and those who view them solely as creatures of positive law. John Locke is the most celebrated proponent of the natural rights theory. Proponents of the positive law approach (as I call it, for purposes of this blog post) do not view authors’, artists’ and inventors’ rights as inalienable natural rights, but as rights the law will protect if and only to the extent that a government sees fit to create a law protecting them.

Proponents of the view that copyrights are solely the creatures of positive law, of course, measure the value of copyright protection in terms of public benefit. If, for example, they think that an Internet free from the restrictions of prohibitions against copyright infringement will make a great number of people happy (“public benefit”), then they will likely advocate for laws and interpretations of laws favoring a broad and expansive “fair use” exception to copyright protection.

Tensions within utilitarianism

Differences of opinion can arise even among those who adopt the utilitarian approach to copyright, because conflicting arguments exist about what will best promote the greatest happiness for the greatest number of people.

On one hand, there is the incentive theory expressed in the Intellectual Property Clause of the U.S. Constitution. The idea expressed there is that giving creators rights in their creations will ultimately lead to scientific and artistic progress. Protecting copyrights might not be enough to incentivize creativity, but failing to protect them can be a disincentive to creative effort.

Conflicting with this, there is the argument that allowing more people to access and use other people’s ideas and inventions facilitates progress. This is the thought behind Open Source and other approaches focusing on the benefits of social collaboration in the development of ideas and inventions.

Conclusion

Anyhoobie, that is the nutshell version of the philosophy of copyright. Feel free to explore the subject in greater depth on your own. Philosophy can be fun, right? Right?

Who Am I?

That, too, is a great philosophical question. In my case, it is easy to answer. Cokato, Minnesota attorney Thomas James.


Photographers’ Rights: Warhol Case Tests the Limits of Transformative Use

The U.S. Supreme Court will soon hear Andy Warhol Foundation for the Visual Arts v. Goldsmith. Attorney Thomas James explains what is at stake for photographers

In a previous post, I identified the Second Circuit’s decision in Andy Warhol Foundation for the Visual Arts v. Goldsmith as one of the three top copyright cases of 2021. It has since been appealed to the United States Supreme Court. Oral argument is scheduled for October 12, 2022.

The dispute

The underlying facts in the case, in a nutshell, are these:

Lynn Goldsmith took a photograph of Prince in her studio in 1981. Later, Andy Warhol created a series of silkscreen prints and pencil illustrations based on it. The Andy Warhol Foundation sought a declaratory judgment that the artist’s use of the photograph was “fair use.” Goldsmith counterclaimed for copyright infringement. The district court ruled in favor of Warhol and dismissed the photographer’s infringement claim.

The Court of Appeals reversed, holding that the district court misapplied the four “fair use” factors and that the derivative works Warhol created do not qualify as fair use.

The United States Supreme Court granted the Warhol Foundation’s certiorari petition.

The issue

In this case, the U.S. Supreme Court is being called upon to provide guidance on the meaning and scope of “transformative use” as an element of fair use analysis. At what point does an unauthorized, altered copy of a copyrighted work stop being an infringing derivative work and become a “transformative” fair use?

The Conundrum

In the chapter on copyright in my book, E-Commerce Law, I predicted a case like this would be coming before the Supreme Court at some point. As I noted there, a tension exists between the Copyright Act’s grant of the exclusive right to authors (or their assignees and licensees) to make modified versions of their works (called “derivative works”), on one hand, and the idea that making modified versions of copyrighted works is transformative fair use, on the other. The notion that making changes to a work that “transform” it into a new work qualifies as fair use obviously threatens to swallow the rule that only the owner of the copyright in a work has the right to make new works based on the work.

Lower courts have not been consistent in their interpretations and approaches to the transformative use concept. The Warhol case presents a wonderful opportunity for the Supreme Court to provide some guidance.

Campbell v. Acuff-Rose Music

The “transformative use” saga really begins with the 1994 case, Campbell v. Acuff-Rose Music, Inc., 510 U.S. 569, 579 (1994). Unable to secure a license to include “samples” (copies of portions) of the Roy Orbison song “Oh, Pretty Woman” in a new version they recorded, 2 Live Crew proceeded to record and distribute their version with the unauthorized sampling anyway, invoking “fair use.”

In a decision that took many attorneys and legal scholars by surprise, the Supreme Court held that 2 Live Crew did not need permission to copy and distribute the work even though the work they created involved substantial copying of the Orbison song. To reach this conclusion, the Court propounded the notion that copying portions of another work — even substantial portions of it — may be permissible if the resulting work is “transformative.” This, the Court held, could hold true sometimes even if the newly created work is not a parody of the original.

In the years that followed, courts have struggled to determine what is a “transformative” modification of a work and what is a non-transformative modification of it. Some courts have demonstrated a willingness to apply the doctrine in such a way as to nearly nullify the exclusivity of an author’s right to make modified versions of his or her works.

Courts have also demonstrated a lack of consistency with respect to how they incorporate and apply “transformativeness” within the four-factor test for fair use set out in 17 U.S.C. Section 107.

Why It Matters

This might seem like an arcane legal issue of little practical significance, but it really isn’t. People are already pushing the transformative use idea into new realms. For example, some tattoo artists have claimed in court filings that they do not need permission to make stencils from photographs because copying a photograph onto skin is a “transformative use.”

Of course, making and distributing exact copies of a photograph for sale in a stream of commerce that directly competes with the original photograph should not be susceptible to a transformative fair use claim. But how far can the claim be carried? If copying a photograph onto somebody’s skin is “transformative” use, would copying it onto somebody’s shirt also be “transformative”?

Clarity and guidance in this area are sorely needed. Hopefully the Supreme Court will take this opportunity to furnish it.

Contact Cokato copyright attorney Thomas James

Need help with registering a copyright or with a copyright problem? Contact attorney Thomas James.
