Why Machine Training AI with Protected Works is Not Fair Use

… if the underlying goal of copyright’s exclusive rights and the fair use exception is to promote new “authorship,” this is doctrinally fatal to the proposal that training AIs on volumes of protected works favors a finding of fair use.

Guest blogger David Newhoff lays out the argument against the claim that training AI systems with copyright-protected works is fair use. David is the author of Who Invented Oscar Wilde? The Photograph at the Center of Modern American Copyright (Potomac Books 2020) and is a copyright advocate/writer at The Illusion of More.


As most copyright watchers already know, two lawsuits were filed at the start of the new year against AI visual works companies. In the U.S., a class action was filed by visual artists against DeviantArt, Midjourney, and Stability AI; and in the UK, Getty Images is suing Stability AI. Both cases allege infringing use of large volumes of protected works fed into the systems to “train” the algorithms. Regardless of how these two lawsuits might unfold, I want to address the broad defense, already being argued in the blogosphere, that training generative AIs with volumes of protected works is fair use. I don’t think so.

Copyright advocates, skeptics, and even outright antagonists generally agree that the fair use exception, correctly applied, supports the broad aim of copyright law to promote more creative work. In the language of the Constitution, copyright “promotes the progress of science,” but a more accurate, modern description would be that copyright promotes new “authorship” because we do not tend to describe literature, visual arts, music, etc. as “science.”

The fair use doctrine, codified in the federal statute in 1976, originated as judge-made law, and from the seminal Folsom v. Marsh to the contemporary Andy Warhol Foundation v. Goldsmith, the courts have restated, in one way or another, their responsibility to balance the first author’s exclusive rights with a follow-on author’s interest in creating new expression. And as a matter of general principle, it is held that the public benefits from this balancing act because the result is a more diverse market of creative and cultural works.

Fair use defenses are case-by-case considerations and while there may be specific instances in which an AI purpose may be fair use, there are no blanket exceptions. More broadly, though, if the underlying goal of copyright’s exclusive rights and the fair use exception is to promote new “authorship,” this is doctrinally fatal to the proposal that training AIs on volumes of protected works favors a finding of fair use. Even if a court holds that other limiting doctrines render this activity by certain defendants to be non-infringing, a fair use defense should be rejected at summary judgment—at least for the current state of the technology, in which the schematic encompassing AI machine, AI developer, and AI user does nothing to promote new “authorship” as a matter of law.

In U.S. copyright law, “author” means “human author,” and there are no exceptions to this anywhere in our history. The mere existence of a work we might describe as “creative” is not evidence of an author/owner of that work unless there is a valid nexus between a human’s vision and the resulting work fixed in a tangible medium. If you find an anonymous work of art on the street, absent further research, it has no legal author who can assert a claim of copyright in the work that would hold up in any court. And this hypothetical emphasizes the point that the legal meaning of “author” is more rigorous than the philosophical view that art without humans is oxymoronic. (Although it is plausible to find authorship in a work that combines human creativity with AI, I address that subject below.)

As a matter of law, the AI machine itself is disqualified as an “author” full stop. And although the AI owner/developer and AI user/customer are presumably both human, neither is defensibly an “author” of the expressions output by the AI. At least with the current state of technologies making headlines, nowhere in the process—from training the AI, to developing the algorithm, to entering prompts into the system—is there an essential link between those contributions and the individual expressions output by the machine. Consequently, nothing about the process of ingesting protected works to develop these systems in the first place can plausibly claim to serve the purpose of promoting new “authorship.”

But What About the Google Books Case?

Indeed. In the fair use defenses AI developers will present, we should expect to see them lean substantially on the holding in Authors Guild v. Google—a decision which arguably exceeds the purpose of fair use to promote new authorship. The Second Circuit, while acknowledging that it was pushing the boundaries of fair use, found the Google Books tool to be “transformative” for its novel utility in presenting snippets of books; and because that utility necessitates scanning whole books into its database, a defendant AI developer will presumably want to make the comparison. But a fair use defense applied to training AIs with volumes of protected works should fail, even under the highly utilitarian holding in Google Books.

While people of good intent can debate the legal merits of that decision, the utility of the Google Books search engine does broadly serve the interest of new authorship with a useful research tool—one I have used many times myself. Google Books provides a new means by which one author may research the works of another author, and this is immediately distinguishable from the generative AI which may be trained to “write books” without authors. Thus, not only does the generative AI fail to promote authorship of the individual works output by the system, but it fails to promote authorship in general.

Although the technology is primitive for the moment, these AIs are expected to “learn” exponentially and grow in complexity such that AIs will presumably compete with or replace at least some human creators in various fields and disciplines. Thus, an enterprise which proposes to diminish the number of working authors, whether intentionally or unintentionally, should only be viewed as devastating to the purpose of copyright law, including the fair use exception.

AI proponents may argue that “democratizing” creativity (i.e., putting these tools in every hand) promotes authorship by making everyone an author. But aside from the cultural vacuum this illusion of more would create, the user prompting the AI has a high burden to prove authorship, and whether that burden is met will depend on what the user contributes relative to the AI. As mentioned above, some AIs may evolve as tools such that the human in some way “collaborates” with the machine to produce a work of authorship. But this hypothetical points to the reason why fair use is a fact-specific, case-by-case consideration. AI Alpha, which autonomously creates, or creates mostly without human direction, should not benefit from the potential fair use defense of AI Beta, which produces a tool designed to aid, but not replace, human creativity.

Broadly Transformative? Don’t Even Go There

Returning to the constitutional purpose of copyright law to “promote science,” the argument has already been floated as a talking point that training AI systems with protected works promotes computer science in general and is, therefore, “transformative” under fair use factor one for this reason. But this argument should find no purchase in court. To the extent that one of these neural networks might eventually spawn revolutionary utility in medicine or finance etc., it would be unsuitable to ask a court to hold that such voyages of general discovery fit the purpose of copyright, to say nothing of the likelihood that the adventure strays inevitably into patent law. Even the most elastic fair use findings to date reject such a broad defense.

It may be shown that no work(s) output by a particular AI infringes (copies) any of the works that went into its training. It may also be determined that the corpus of works fed into an AI is so rapidly atomized into data that even fleeting “reproduction” is found not to exist, and, thus, the 106(1) right is not infringed. Those questions are going to be raised in court before long, and we shall see where they lead. But to presume fair use as a broad defense for AI “training” is existentially offensive to the purpose of copyright, and perhaps to law in general, because it asks the courts to vest rights in non-humans, which is itself anathema to caselaw in other areas.[1]

It is my oft-stated opinion that creative expression without humans is meaningless as a cultural enterprise, but it is a matter of law to say that copyright is meaningless without “authors” and that there is no such thing as non-human “authors.” For this reason, the argument that training AIs on protected works is inherently fair use should be denied with prejudice.


[1] Cetacean Community v. Bush, which held that animals do not have standing in court, was the basis for rejecting PETA’s complaint against photographer Slater for allegedly infringing the copyright of the monkey in the “Monkey Selfie” fiasco.


A Thousand Cuts: AI and Self-Destruction

David Newhoff comments on generative AI (artificial intelligence) and public policy.

A guest post written by David Newhoff. David is the author of Who Invented Oscar Wilde? The Photograph at the Center of Modern American Copyright (Potomac Books 2020) and a copyright advocate/writer at The Illusion of More.


I woke up the other day thinking about artificial intelligence (AI) in the context of the Cold War and the nuclear arms race, and curiously enough, the next two articles I read about AI made arms race references. Where my pre-caffeinated mind had gone was back to the early 1980s when, as teenagers, we often asked the futile question of why any nation needed to stockpile nuclear weapons in quantities that could destroy the world many times over.

Every generation of adolescents believes—and at times confirms—that the adults have no idea what the hell they’re doing; and watching the MADness of what often seemed like a rapturous embrace of nuclear annihilation was, perhaps, the unifying existential threat which shaped our generation’s world view. Since then, reasonable arguments have been made that nuclear stalemate has yielded an unprecedented period of relative global peace, but the underlying question remains:  Are we powerless to stop the development of new modes of self-destruction?

Of course, push-button extinction is easy to imagine and, in a way, easy to ignore. If something were to go terribly wrong, and the missiles fly, it’s game over in a matter of minutes with no timeouts left. So, it is possible to “stop worrying” if not quite “love the bomb” (h/t Strangelove); but today’s technological threats preface outcomes that are less merciful than swift obliteration. Instead, they offer a slow and seemingly inexorable decline toward the dystopias of science fiction—a future in which we are not wiped out in a flash but instead “amused to death” (h/t Postman) as we relinquish humanity itself to the exigencies of technologies that serve little or no purpose.

The first essay I read about AI, written by Anja Kaspersen and Wendell Wallach for the Carnegie Council, advocates a “reset” in ethical thinking about AI, arguing that giant technology investments are once again building systems with little consideration for their potential effect on people. “In the current AI discourse we perceive a widespread failure to appreciate why it is so important to champion human dignity. There is risk of creating a world in which meaning and value are stripped from human life,” the authors write. Later, they quote Robert Oppenheimer …

It is not possible to be a scientist unless you believe that the knowledge of the world, and the power which this gives, is a thing which is of intrinsic value to humanity, and that you are using it to help in the spread of knowledge, and are willing to take the consequences.

I have argued repeatedly that generative AI “art” is devoid of meaning and value and that the question posed by these technologies is not merely how they might influence copyright law, but whether they should exist at all. It may seem farfetched to contemplate banning or regulating the development of AI tech, but it should not be viewed as an outlandish proposal. If certain AI developments have the capacity to dramatically alter human existence—perhaps even erode what it means to be human—why is this any less a subject of public policy than regulating a nuclear power plant or food safety?

Of course, public policy means legislators, and it is quixotic to believe that any Congress, let alone the current one, could sensibly address AI before the industry causes havoc. At best, the tech would flood the market long before the most sincere, bipartisan efforts of lawmakers could grasp the issues; and at worst, far too many politicians have shown that they would sooner exploit these technologies for their own gain than seek to regulate them in the public interest. “AI applications are increasingly being developed to track and manipulate humans, whether for commercial, political, or military purposes, by all means available—including deception,” write Kaspersen and Wallach. I think it’s fair to read that as Cambridge Analytica 2.0 and to recognize that the parties who used the Beta version are still around—and many have offices on Capitol Hill.

Kaspersen and Wallach predict that we may soon discover that generative AI will have the same effect on education that “social media has had on truth.” In response, I would ask the following: In the seven years since the destructive power of social media became headline news, have those revelations significantly changed the conversation, let alone muted the cyber-libertarian dogma of the platform owners? I suspect that AI in the classroom threatens to exacerbate rather than parallel the damage done by social media to truth (i.e., reason). If social media has dulled Socratic skills with the flavors of narcissism, ChatGPT promises a future that does not remember what Socratic skills used to mean.

And that brings me to the next article I read in which Chris Gillard and Pete Rorabaugh, writing for Slate, use “arms race” as a metaphor to criticize technological responses to the prospect of students cheating with AI systems like ChatGPT. Their article begins:

In the classroom of the future—if there still are any—it’s easy to imagine the endpoint of an arms race: an artificial intelligence that generates the day’s lessons and prompts, a student-deployed A.I. that will surreptitiously do the assignment, and finally, a third-party A.I. that will determine if any of the pupils actually did the work with their own fingers and brain. Loop complete; no humans needed. If you were to take all the hype about ChatGPT at face value, this might feel inevitable. It’s not.

In what I feared might be another tech-apologist piece labeling concern about AI a “moral panic,” Gillard and Rorabaugh make the opposite point. Their criticism of software solutions to mitigate student cheating is that such solutions reflect small thinking, erroneously accepting as a fait accompli that these AI systems are here to stay whether we like it or not. “Telling us that resistance to a particular technology is futile is a favorite talking point for technologists who release systems with few if any guardrails out into the world and then put the onus on society to address most of the problems that arise,” they write.

In other words, here we go again. The ethical, and perhaps legal, challenges posed by AI are an extension of the same conversation we generally failed to have about social media and its cheery promises to be an engine of democracy. “It’s a failure of imagination to think that we must learn to live with an A.I. writing tool just because it was built,” Gillard and Rorabaugh argue. I would like to agree but am skeptical that the imagination required to reject certain technologies exists outside the rooms where ethicists gather. And this is why I wake up thinking about AI in the context of the Cold War, except of course that the doctrine of Mutually Assured Destruction was rational by contrast.


Photo by the author.

View the original article on The Illusion of More.

Contact attorney Tom James for copyright help

Need help registering a copyright or a group of copyrights in the United States, or enforcing a copyright in the United States? Contact attorney Tom James.

Getty Images Litigation Update

Getty Images has now filed a lawsuit for copyright infringement in the United States.

In a previous post, I reported on a lawsuit that Getty Images had filed in the United Kingdom against Stability AI. Now Getty has filed similar claims against Stability AI in the United States.

The complaint, which has been filed in federal district court in Delaware, alleges claims of copyright infringement; providing false copyright management information; removal or alteration of copyright management information; trademark infringement; trademark dilution; unfair competition; and deceptive trade practices. Both monetary damages and injunctive relief are being sought.

An interesting twist in the Getty litigation is that AI-generated works allegedly have included the Getty Images trademark.

Getty Images logo on AI-generated image
(Reproduction of a portion of the Complaint filed in Getty Images v. Stability AI, Inc. in the U.S. District Court for the District of Delaware, case no. 1:23-cv-00135-UNA (2023). The image has been cropped to avoid reproducing the likenesses of persons appearing in the image and to display only what is needed here for purposes of news reporting and commentary.)

Getty Images, which is in the business of collecting and licensing quality images, alleges (among other things) that affixing its trademark to poor quality AI-generated images tarnishes the company’s reputation. If proven, this could constitute trademark dilution, which is prohibited by the Lanham Act.

AI Legal Issues

Thomas James (“The Cokato Copyright Attorney”) describes the range of legal issues, most of which have not yet been resolved, that artificial intelligence (AI) systems have spawned.

AI is not new. Its implementation also is not new. In fact, consumers regularly interact with AI-powered systems every day. Online help systems often use AI to provide quick answers to questions that customers routinely ask. Sometimes these are designed to give users the impression that they are communicating with a person.

AI systems also perform discrete functions such as analyzing a credit report and rendering a decision on a loan or credit card application, or screening employment applications.

Many other uses have been found for AI and new ones are being developed all the time. AI has been trained not just to perform customer service tasks, but also to perform analytics and diagnostic tests; to repair products; to update software; to drive cars; and even to write articles and create images and videos. These developments may be helping to streamline tasks and improve productivity, but they have also generated a range of new legal issues.

Tort liability

While there are many different kinds of tort claims, the elements of tort claims are basically the same: (1) The person sought to be held liable for damages or ordered to comply with a court order must have owed a duty to the person who is seeking the legal remedy; (2) the person breached that duty; (3) the person seeking the legal remedy experienced harm, i.e., real or threatened injury; and (4) the breach was the actual and proximate cause of the harm.

The kind of harm that must be demonstrated varies depending on the kind of tort claim. For example, a claim of negligent driving might involve bodily injury, while a claim of defamation might involve injury to reputation. For some kinds of tort claims, the harm might involve financial or economic injury. 

The duty may be specified in a statute or contract, or it might be judge-made (“common law”). It may take the form of an affirmative obligation (such as a doctor’s obligation to provide a requisite level of care to a patient), or it may take a negative form, such as the common law duty to refrain from assaulting another person.

The advent of AI does not really require any change in these basic principles, but they can be more difficult to apply to scenarios that involve the use of an AI system.

Example. Acme Co. manufactures and markets Auto-Doc, a machine that diagnoses and repairs car problems. Mike’s Repair Shop lays off its automotive technician employees and replaces them with one of these machines. Suzie Consumer brings her VW Jetta to Mike’s Repair Shop for service because she has been hearing a sound that she describes as being a grinding noise that she thinks is coming from either the engine or the glove compartment. The Auto-Doc machine adds engine oil, replaces belts, and removes the contents of the glove compartment. Later that day, Suzie’s brakes fail and her vehicle hits and kills a pedestrian in a crosswalk. A forensic investigation reveals that her brakes failed because they were badly worn. Who should be held liable for the pedestrian’s death – Suzie, Mike’s, Acme Co., some combination of two of them, all of them, or none of them?

The allocation of responsibility will depend, in part, on the degree of autonomy the AI machine possesses. Of course, if it can be shown that Suzie knew or should have known that her brakes were bad, then she most likely could be held responsible for causing the pedestrian’s death. But what about the others? Their liability, or share of liability, is affected by the degree of autonomy the AI machine possesses. If it is completely autonomous, then Acme might be held responsible for failing to program the machine in such a way that it would test for and detect worn brake pads even if a customer expresses an erroneous belief that the sound is coming from the engine or the glove compartment. On the other hand, if the machine is designed only to offer suggestions of possible problems and solutions,  leaving it up to a mechanic to accept or reject them, then Mike’s might be held responsible for negligently accepting the machine’s recommendations. 

Assuming the Auto-Doc machine is fully autonomous, should Mike’s be faulted for relying on it to correctly diagnose car problems? Is Mike’s entitled to rely on Acme’s representations about Auto-Doc’s capabilities, or would the repair shop have a duty to inquire about and/or investigate Auto-Doc’s limitations? Assuming Suzie did not know, and had no reason to suspect, her brakes were worn out, should she be faulted for relying on a fully autonomous machine instead of taking the car to a trained human mechanic?  Why or why not?

Criminal liability

It is conceivable that an AI system might engage in activity that is prohibited by an applicable jurisdiction’s criminal laws. E-mail address harvesting is an example. In the United States, for example, the CAN-SPAM Act makes it unlawful to send a commercial email message to an email address obtained by automated scraping of Internet websites. Of course, if a person intentionally uses an AI system for scraping, then liability should be clear. But what if an AI system “learns” to engage in scraping?

AI-generated criminal output may also be a problem. Some countries have made it a crime to display a Nazi symbol, such as a swastika, on a website. Will criminal liability attach if a website or blog owner uses AI to generate illustrated articles about World War II and the system generates and displays articles that are illustrated with World War II era German flags and military uniforms? In the United States, creating or possessing child pornography is illegal. Will criminal liability attach if an AI system generates it?

Some of these kinds of issues can be resolved through traditional legal analysis of the intent and scienter elements of the definitions of crimes. A jurisdiction might wish to consider, however, whether AI systems should be regulated to require system creators to implement measures that would prevent illegal uses of the technology. This raises policy and feasibility questions, such as whether and what kinds of restraints on machine learning should be required, and how to enforce them. Further, would prior restraints on the design and/or use of AI-powered expressive-content-generating systems infringe on First Amendment rights?  

Product liability

Related to the problem of allocating responsibility for harm caused by the use of an AI mechanism is the question whether anyone should be held liable for harm caused when the mechanism is not defective, that is to say, when it is operating as it should.

Example. Acme Co. manufactures and sells Auto-Article, a software program that is designed to create content of a type and kind the user specifies. The purpose of the product is to enable a website owner to generate and publish a large volume of content frequently, thereby improving the website’s search engine ranking. It operates by scouring the Internet and analyzing instances of the content the user specifies to produce new content that “looks like” them. XYZ Co. uses the software to generate articles on medical topics. One of these articles explains that chest pain can be caused by esophageal spasms but that these typically do not require treatment unless they occur frequently enough to interfere with a person’s ability to eat or drink. Joe is experiencing chest pain. He does not seek medical help, however, because he read the article and therefore believes he is experiencing esophageal spasms. He later collapses and dies from a heart attack. A medical doctor is prepared to testify that his death could have been prevented if he had sought medical attention when he began experiencing the pain.

Should either Acme or XYZ Co. be held liable for Joe’s death? Acme could argue that its product was not defective. It was fit for its intended purposes, namely, a machine learning system that generates articles that look like articles of the kind a user specifies. What about XYZ Co.? Would the answer be different if XYZ had published a notice on its site that the information provided in its articles is not necessarily complete and that the articles are not a substitute for advice from a qualified medical professional? If XYZ incurs liability as a result of the publication, would it have a claim against Acme, such as for failure to warn it of the risks of using AI to generate articles on medical topics?

Consumer protection

AI system deployment raises significant health and safety concerns. There is the obvious example of an AI system making incorrect medical diagnoses or treatment recommendations. Autonomous (“self-driving”) motor vehicles are also examples. An extensive body of consumer protection regulations may be anticipated.

Forensic and evidentiary issues

In situations involving the use of semi-autonomous AI, allocating responsibility for harm resulting from the operation of the AI system may be difficult. The most basic question in this respect is whether an AI system was in use or not. For example, if a motor vehicle that can be operated in either manual or autonomous mode is involved in an accident, and fault or the extent of liability depends on that (see the discussion of tort liability, above), then a way of determining the mode in which the car was being driven at the time will be needed.

If, in the case of a semi-autonomous AI system, tort liability must be allocated between the creator of the system and a user of it, the question of fault may depend on who actually caused a particular tortious operation to be executed – the system creator or the user. In that event, some method of retracing the steps the AI system used may be essential. This may also be necessary in situations where some factor other than AI contributed, or might have contributed, to the injury. Regulation may be needed to ensure that the steps in an AI system’s operations are, in fact, capable of being ascertained.

Transparency problems also fall into this category. As explained in the Journal of Responsible Technology, people might be put on no-fly lists, denied jobs or benefits, or refused credit without knowing anything more than that the decision was made through some sort of automated process. Even if transparency is achieved and/or mandated, contestability will also be an issue.

Data Privacy

To the extent an AI system collects and stores personal or private information, there is a risk that someone may gain unauthorized access to it. Depending on how the system is designed to function, there is also a risk that it might autonomously disclose legally protected personal or private information. Security breaches can cause catastrophic problems for data subjects.

Publicity rights

Many jurisdictions recognize a cause of action for violation of a person’s publicity rights (sometimes called “misappropriation of personality”). In these jurisdictions, a person has an exclusive legal right to commercially exploit his or her own name, likeness or voice. To what extent, and under what circumstances, should liability attach if a commercialized AI system analyzes the name, likeness or voice of a person that it discovers on the Internet? Will the answer depend on how much information about a particular individual’s voice, name or likeness the system uses, on one hand, or how closely the generated output resembles that individual’s voice, name or likeness, on the other?

Contracts

The primary AI-related contract concern is about drafting agreements that adequately and effectively allocate liability for losses resulting from the use of AI technology. Insurance can be expected to play a larger role as the use of AI spreads into more areas.

Bias, Discrimination, Diversity & Inclusion

Some legislators have expressed concern that AI systems will reflect and perpetuate biases and perhaps discriminatory patterns of culture. To what extent should AI system developers be required to ensure that the data their systems use are collected from a diverse mixture of races, ethnicities, genders, gender identities, sexual orientations, abilities and disabilities, socioeconomic classes, and so on? Should developers be required to apply some sort of principle of “equity” with respect to these classifications, and if so, whose vision of equity should they be required to enforce? To what extent should government be involved in making these decisions for system developers and users?

Copyright

AI-generated works such as articles, drawings, animations, and music raise two kinds of copyright issues:

  1. Input issues, i.e., questions like whether AI systems that create new works based on existing copyright-protected works infringe the copyrights in those works; and
  2. Output issues, such as who, if anybody, owns the copyright in an AI-generated work.

I’ve written about AI copyright ownership issues and AI copyright infringement issues in previous blog posts on The Cokato Copyright Attorney.

Patents and other IP

Computer programs can be patented. AI systems can be devised to write computer programs. Can an AI-generated computer program that meets the usual criteria for patentability (novelty, utility, etc.) be patented?

Is existing intellectual property law adequate to deal with AI-generated inventions and creative works? The World Intellectual Property Organization (WIPO) apparently does not think so. It is formulating recommendations for new regulations to deal with the intellectual property aspects of AI.

Conclusion

AI systems raise a wide range of legal issues. The ones identified in this article are merely a sampling, not a complete listing of all possible issues. Not all of these legal issues have answers yet. It can be expected that more AI regulatory measures, in more jurisdictions around the globe, will be coming down the pike very soon.

Contact attorney Thomas James

Contact Minnesota attorney Thomas James for help with copyright and trademark registration and other copyright and trademark related matters.

The Top Copyright Cases of 2022

Cokato Minnesota attorney Tom James (“The Cokato Copyright Attorney”) presents his annual list of the top copyright cases of the year.

My selections for the top copyright cases of the year.

“Dark Horse”

Marcus Gray had sued Katy Perry for copyright infringement, claiming that her “Dark Horse” song unlawfully copied portions of his song, “Joyful Noise.” The district court held that the disputed series of eight notes appearing in Gray’s song was not “particularly unique or rare,” and therefore was not protected against infringement. The Ninth Circuit Court of Appeals agreed, ruling that the series of eight notes was not sufficiently original and creative to receive copyright protection. Gray v. Hudson.

“Shape of You”

Across the pond, another music copyright infringement lawsuit was tossed. This one involved Ed Sheeran’s “Shape of You” and Sam Chokri’s “Oh Why.” In this case, the judge refused to infer from the similarities in the two songs that copyright infringement had occurred. The judge ruled that the portion of the song as to which copying had been claimed was “so short, simple, commonplace and obvious in the context of the rest of the song that it is not credible that Mr. Sheeran sought out inspiration from other songs to come up with it.” Sheeran v. Chokri.

Instagram images

Another case out of California involves a lawsuit filed by photographers against Instagram, alleging secondary copyright infringement. The photographers claim that Instagram’s embedding tool facilitates copyright infringement by users of the website. The district court judge dismissed the lawsuit, saying he was bound by the so-called “server test” the Ninth Circuit Court of Appeals announced in Perfect 10 v. Amazon. The server test says, in effect, that a website does not unlawfully “display” a copyrighted image if the image is stored on the original site’s server and is merely embedded in a search result that appears on a user’s screen. The photographers have an appeal pending before the Ninth Circuit, asking the Court to reconsider its decision in Perfect 10. Courts in other jurisdictions have rejected the Perfect 10 server test. The Court now has the option either to overrule Perfect 10 and allow the photographers’ lawsuit to proceed, or to reaffirm it, thereby creating a circuit split that could eventually lead to U.S. Supreme Court review. Hunley v. Instagram.

Tattoos

Is reproducing a copyrighted image in a tattoo fair use? That is a question at issue in a case pending in California. Photographer Jeffrey Sedlik took a photograph of musician Miles Davis. Later, a tattoo artist allegedly traced a printout of it to create a stencil to transfer to human skin as a tattoo. Sedlik filed a copyright infringement lawsuit in the United States District Court for the Central District of California. Both parties moved for summary judgment. The judge analyzed the claims using the four “fair use” factors. Although the ultimate ruling was that fact issues remained to be decided by a jury, the court issued some important rulings along the way. In particular, the court ruled that affixing an image to skin is not necessarily a protected “transformative use” of the image. According to the court, it is for a jury to decide whether the image at issue in a particular case has been changed significantly enough to be considered “transformative.” It will be interesting to see how this case ultimately plays out, especially if it is still pending when the United States Supreme Court announces its decision in the Warhol case (see below). Sedlik v. Von Drachenberg.

Digital libraries

The book publishers’ lawsuit against Internet Archive, about which I wrote in a previous blog post, is still at the summary judgment stage. Its potential future implications are far-reaching. It is a copyright infringement lawsuit that book publishers filed in the federal district court for the Southern District of New York. The gravamen of the complaint is that Internet Archive allegedly has scanned over a million books and has made them freely available to the public via an Internet website without securing a license or permission from the copyright rights-holders. The case will test the “controlled digital lending” theory of fair use that was propounded in a white paper published by David R. Hansen and Kyle K. Courtney. They argued that distributing digitized copies of books by libraries should be regarded as the functional equivalent of lending physical copies of books to library patrons. Parties and amici have filed briefs in support of motions for summary judgment. An order on the motions is expected soon. The case is Hachette Book Group et al. v. Internet Archive.

Copyright registration

In Fourth Estate Public Benefit Corp. v. Wall-Street.com LLC, 139 S. Ct. 881, 889 (2019), the United States Supreme Court interpreted 17 U.S.C. § 411(a) to mean that a copyright owner cannot file an infringement claim in federal court without first securing either a registration certificate or an official notice of denial of registration from the Copyright Office. In an Illinois Law Review article, I argued that this imposes an unduly onerous burden on copyright owners and that Congress should amend the Copyright Act to abolish the requirement. Unfortunately, Congress has not done that. As I said in a previous blog post, Congressional inaction to correct a harsh law with potentially unjust consequences often leads to exercises of the judicial power of statutory interpretation to ameliorate those consequences. Unicolors v. H&M Hennes & Mauritz.

Unicolors, owner of the copyrights in various fabric designs, sued H&M Hennes & Mauritz (H&M), alleging copyright infringement. The jury rendered a verdict in favor of Unicolors, but H&M moved for judgment as a matter of law. H&M argued that Unicolors had failed to satisfy the requirement of obtaining a registration certificate prior to commencing suit. Although Unicolors had obtained a registration, H&M argued that the registration was not a valid one. Specifically, H&M argued that Unicolors had improperly applied to register multiple works with a single application. According to 37 CFR § 202.3(b)(4) (2020), a single application cannot be used to register multiple works unless all of the works in the application were included in the same unit of publication. The 31 fabric designs, H&M contended, had not all been first published at the same time in a single unit; some had been made available separately, exclusively to certain customers. Therefore, they could not properly be registered together as a unit of publication.

The district court denied the motion, holding that a registration may be valid even if it contains inaccurate information, provided the registrant did not know the information was inaccurate. The Ninth Circuit Court of Appeals reversed. The Court held that characterizing the group of works as a “unit of publication” in the registration application was a mistake of law, not a mistake of fact. The Court applied the traditional rule of thumb that ignorance of the law is not an excuse, in essence ruling that although a mistake of fact in a registration application might not invalidate the registration for purposes of the pre-litigation registration requirement, a mistake of law in an application will.

The United States Supreme Court granted certiorari. It reversed the Ninth Circuit Court’s reversal, thereby allowing the infringement verdict to stand notwithstanding the improper registration of the works together as a unit of publication rather than individually.

It is hazardous to read too much into the ruling in this case. Copyright claimants certainly should not interpret it to mean that they no longer need to bother with registering a copyright before trying to enforce it in court, or that they do not need to concern themselves with doing it properly. The pre-litigation registration requirement still stands (in the United States), and the Court has not held that it condones willful blindness of legal requirements. Copyright claimants ignore them at their peril.

Andy Warhol, Prince Transformer

I wrote about the Warhol case in a previous blog post. Basically, it is a copyright infringement case alleging that Lynn Goldsmith took a photograph of Prince in her studio and that Andy Warhol later based a series of silkscreen prints and pencil illustrations on it without a license or permission. The Andy Warhol Foundation sought a declaratory judgment that Warhol’s use of the photograph was “fair use.” Goldsmith counterclaimed for copyright infringement. The district court ruled in favor of Warhol and dismissed the photographer’s infringement claim. The Court of Appeals reversed, holding that the district court misapplied the four “fair use” factors and that the derivative works Warhol created do not qualify as fair use. The U.S. Supreme Court granted certiorari and heard oral arguments in October, 2022. A decision is expected next year.

Because this case gives the United States Supreme Court an opportunity to bring some clarity to the extremely murky “transformative use” area of copyright law, it is not only one of this year’s most important copyright cases, but it very likely will wind up being one of the most important copyright cases of all time. Andy Warhol Foundation for the Visual Arts v. Goldsmith.

New TM Office Action Deadlines

The deadline for responding to US Trademark Office Actions has been shortened to three months, in many cases. Attorney Thomas B. James shares the details.

by Thomas B. James (“The Cokato Copyright Attorney”)

Effective December 3, 2022, the deadline for responding to a U.S. Trademark Office Action is shortened from 6 months to 3 months. Here is what that means in practice.

Historically, applicants for the registration of trademarks in the United States have had 6 months to respond to an Office Action. Beginning December 3, 2022, the time limit has been shortened to 3 months.

Applications subject to the new rule

The new, shorter deadline applies to most kinds of trademark applications, including:

  • Section 1(a) applications (application based on use in commerce)
  • Section 1(b) applications (application based on intent to use)
  • Section 44(e) applications (based on a foreign registration)
  • Section 44(d) applications (claiming priority based on a foreign application)

Applications not subject to the new rule

The new deadline does not apply to:

  • Section 66(a) applications (Madrid Protocol)
  • Office actions issued before December 3, 2022
  • Office actions issued after registration (But note that the new deadline will apply to post-registration Office Actions beginning October 7, 2023)
  • Office actions issued by a work unit other than the law offices, such as the Intent-to-Use Unit or the Examination and Support Workload and Production Unit
  • Office actions that do not require a response (such as an examiner’s amendment)
  • Office actions that do not specify a 3-month response period (e.g., a denial of a request for reconsideration, or a 30-day letter).

Extensions

For a $125 fee, you can request one three-month extension of the time to respond to an Office Action. You will need to file the request for an extension within three months from the “issue date” of the Office Action and before filing your response. If your extension request is granted, then you will have six months from the original “issue date” to file your response.
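The deadline arithmetic above can be sketched in code. The following Python sketch uses invented dates and ignores the Trademark Office's rules for deadlines that fall on weekends or federal holidays; it simply adds calendar months to a hypothetical issue date:

```python
import calendar
from datetime import date

def add_months(d: date, months: int) -> date:
    """Advance a date by whole calendar months, clamping to the month's last day."""
    month_index = d.month - 1 + months
    year = d.year + month_index // 12
    month = month_index % 12 + 1
    last_day = calendar.monthrange(year, month)[1]
    return date(year, month, min(d.day, last_day))

issue_date = date(2023, 1, 15)            # hypothetical Office Action issue date
response_due = add_months(issue_date, 3)  # initial three-month deadline
extended_due = add_months(issue_date, 6)  # deadline if the extension is granted

print(response_due)  # 2023-04-15
print(extended_due)  # 2023-07-15
```

The clamping step is a feature of this sketch, not a statement of Trademark Office policy: a deadline counted from a month-end issue date lands on the last day of a shorter target month (e.g., January 31 plus one month yields February 28).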

Use the Trademark Office’s Request for Extension of Time to File a Response form to request an extension. The Trademark Office has issued a warning that it will not process requests that do not use this form.

The form cannot be used to request an extension of time to respond to an Office Action that was issued for a Madrid Protocol section 66(a) application, an Office Action that was issued before December 3, 2022, or to an Office Action to which the new deadline does not apply.

Consequences

Failing to meet the new three-month deadline will have the same consequences as failing to meet the old six-month deadline did. Your application will be deemed abandoned if you do not respond to the Office Action or request an extension on or before the three-month deadline. Similarly, your application will be deemed abandoned if you are granted an extension but fail to file a response on or before the six-month deadline.

The Trademark Office does not refund registration filing fees for abandoned applications.

As before, in some limited circumstances, you might be able to revive an abandoned application by filing a petition and paying a fee. Otherwise, you will need to start the application process all over again.

More information

Here are links to the relevant Federal Register Notice and Examination Guide.

Contact attorney Thomas James

Need help with trademark registration? Contact Thomas B. James, Minnesota attorney.

Unicolors v. H&M Hennes & Mauritz

Unicolors | Cokato attorney Thomas James shows how Congressional inaction to fix a bad law can lead to unusual interpretive gymnastics in the judicial branch.

By Thomas James, Minnesota attorney

In Fourth Estate Public Benefit Corp. v. Wall-Street.com LLC, 139 S. Ct. 881, 889 (2019), the United States Supreme Court interpreted 17 U.S.C. § 411(a) to mean that a copyright owner cannot file an infringement claim in federal court without first securing either a registration certificate or an official notice of denial of registration from the Copyright Office. In an Illinois Law Review article, I argued that this imposes an unduly onerous burden on copyright owners and that Congress should amend the Copyright Act to abolish the requirement. Unfortunately, Congress has not done that.

Congressional inaction to correct a harsh law with potentially unjust consequences predictably leads to judicial decisions exercising the power of statutory interpretation to ameliorate the consequences. The Court’s decision today in Unicolors v. H&M Hennes & Mauritz, __ U.S. __ (No. 20-915, February 24, 2022) is a case in point.

The district court proceedings

Unicolors owns the copyrights in various fabric designs. The company sued H&M Hennes & Mauritz (H&M), claiming that H&M had infringed them. The jury rendered a verdict in favor of Unicolors, but H&M moved for judgment as a matter of law (notwithstanding the jury verdict). H&M argued that Unicolors had failed to satisfy the requirement of obtaining a registration certificate prior to commencing suit. Although Unicolors had obtained a registration, H&M argued that the registration was not a valid one.

Specifically, H&M argued that Unicolors had improperly applied to register multiple works with a single application. According to 37 CFR § 202.3(b)(4) (2020), a single application cannot be used to register multiple works unless all of the works in the application were included in the same unit of publication. The 31 fabric designs, H&M contended, had not all been first published at the same time in a single unit; some had been made available separately, exclusively to certain customers. Therefore, they could not be registered together as a unit of publication.

The district court denied the motion, holding that a registration may be valid even if it contains inaccurate information, provided the registrant did not know the information was inaccurate.

The Ninth Circuit’s reversal

On appeal, the Ninth Circuit Court of Appeals acknowledged that Unicolors had failed to satisfy the “single unit of publication” requirement. The Court, however, viewed Unicolors’ characterization of the group of works, in its application, as a “unit of publication” as a mistake of law rather than fact. It is normally a bedrock principle of the law that although mistake of fact may sometimes be asserted as an excuse, ignorance of the law generally cannot be. Since Unicolors had known the relevant facts, namely, that some of the designs had been reserved for some customers separately from the others, its characterization of the group, in the copyright application, as a “unit of publication” was a mistake of law, not fact. Applying the traditional rule that ignorance of the law is not an excuse, the Ninth Circuit held that Unicolors’ registration was not valid.

The United States Supreme Court granted certiorari.

The Supreme Court’s reversal of the reversal

Section 411(b)(1) says that a registration is valid unless it contains information that the applicant knew was inaccurate. Bucking the traditional maxim that ignorance of the law is not an excuse, the Court interpreted the word “know,” in this context, to include knowledge of either an applicable fact or an applicable law. The Court drew upon legislative history suggesting that Congress intended to deny infringers the ability to exploit loopholes.

This is actually a good point. A major objective of international copyright treaties and conventions has been to eliminate formalities in the enforcement of copyrights. Registration is one such formality. One may legitimately ask, however, whether Congress’s decision to impose a requirement of obtaining either a certificate of registration or an official denial of registration from the Copyright Office as a precondition to enforcing a copyright reflected an intention to impose and enforce formalities despite the clear intent of treaties by which the United States has agreed to be bound. Not all other countries impose this formal prerequisite to copyright enforcement. In fact, legal scholars both here and abroad have criticized the United States for enacting and enforcing this formality.

The Court dismissed the traditional legal maxim that ignorance of the law is not an excuse by suggesting it only applies to criminal laws. As Justice Thomas points out in his dissent, however, a requirement to “know” a law (or a legal requirement) ordinarily is satisfied, even in civil cases, by constructive knowledge; actual knowledge is not necessary. Citizens generally are charged with the responsibility of knowing what the laws are, whether they are criminal or civil laws. It is not a defense to the imposition of punitive damages in a tort case, for example, that the defendant did not know that he might be subject to a larger damages award if he acted with intentional or reckless disregard for other people’s rights or lives. That ignorance of the law is not an excuse is a large part of the reason for the existence of legal advisers and the legal profession in general.

Thomas points out that in previous cases, the Court has distinguished between a “willfulness” requirement, which requires proof of actual knowledge, and a “knowledge” requirement, as to which either actual or constructive knowledge normally may suffice. See Cheek v. United States, 498 U.S. 192, 201–203 (1991); Intel Corp. Investment Policy Comm. v. Sulyma, 589 U. S. ___ (2020) (slip op., at 6–7). Indeed, the Court has acknowledged that other “knowledge” requirements in the Copyright Act may be satisfied by either actual or constructive knowledge.

Reading between the lines a little, I think there is room for speculation that some members of the Court regard the prelitigation registration requirement as a formality which, as such, is not really in keeping with the spirit of international treaties calling for the abolition of copyright formalities. Rather than allow a formality to stand in the way of an attempt to enforce a copyright, it is conceivable that the Court chose to deploy its power of judicial interpretation to effect what it believed to be the most just result in this case.

Conclusion

Another old legal maxim I remember from law school is “Hard cases make bad law.” It is too soon to tell how the Court’s decision in this case will play out in practice, but the Court’s allowance of an infringement action to proceed despite the fact that the plaintiff provided false information (whether factual or legal) when securing the registration does seem to open a fairly large can of worms.

Of course, the Court’s decision does not rule out a dismissal of an infringement action if the defendant can prove that the plaintiff had actual knowledge that he or she was providing false information at the time of applying for registration. Actual knowledge, however, can be very difficult to prove.

More importantly, how much mileage are courts going to let people get out of a claim that they did not know the law when they applied for registration? For example, will a person who purchases a copy of a book and then files an application to register the copyright in it be allowed to proceed with an infringement claim because he “did not know” that merely buying a copy of a work does not amount to a purchase of the copyright? (cf. these guys.)

Of course, copyright ownership can still be disputed in an infringement proceeding even after the Court’s decision in this case. Except in the rare case where it can be proven that an applicant actually knew his works did not qualify for the kind of registration application he used, however, it seems like the Court’s decision opens up the copyright registration application process to a great deal of potential abuse, at least when the “error” is not obvious enough for the Copyright Office to detect from the face of the application itself.

Once again, I would suggest that perhaps Congress should just consider abolishing the pre-litigation registration requirement.

NFTs and Copyright

The rise in popularity of nonfungible tokens (NFTs) has generated considerable controversy and confusion about whether and how copyright law applies to them. In this article, Cokato, Minnesota attorney Thomas James discusses the interplay between NFTs and U.S. copyright law.

by Minnesota attorney Thomas James

Just for fun, call up an attorney and say, “Hey, I‘ve got a quick question for you. Can I make, sell and buy NFTs without getting into copyright trouble?” Depending on the attorney’s age, area of practice, and musical tastes, the answers you get may be anything from “What makes you think that selling shares of the Nichiyu Forklift Thailand company could raise copyright issues?” to “The answer, my friend, is blowing in the wind” – and many variants in between.

(More probably, someone other than the attorney would answer the phone and ask, “Would you like to set up an appointment?” That, however, would not help to make the point.)

Incidentally, don’t really make a telephone call like this “just for fun.” I was only joking; I wouldn’t want you to incur unnecessary legal fees or be accused of making an unwanted or disturbing telephone call.

The point is that many members of the legal profession are scrambling just as much as everybody else is to understand NFTs and how copyright laws apply to them. The aim of this article is to reduce some of the confusion by shedding some light on what NFTs are and how copyright laws may apply to them.

What are NFTs?

NFT stands for “non-fungible token.”

Great. Now what the heck is that? Well, let’s break it down.

Fungible vs. Non-fungible

An item is said to be “fungible” if it is interchangeable with similar items. For example, if a retailer orders 100 pounds of red potatoes from a wholesaler, the contract is most likely one for the purchase of fungible goods. The retailer most likely has not specifically identified any particular potato that must be included in the batch, so long as they’re all of merchantable quality. By contrast, if an art collector enters into a contract to purchase an original painting by Peter Doig, it is almost certainly going to be a contract for a non-fungible product (the painting). The buyer of a non-fungible item wants a specifically identified item.

Currency is a good illustration of the difference. When you cash a check at a bank, you don’t really care which particular bills and coins you are given in exchange for the check, so long as the amount you are given is equal to the amount specified on the check. The currency in this situation is fungible. By contrast, if you present a check for $4 million to a rare coin vendor to purchase a 1913 Liberty V nickel, you would not consider it acceptable for the vendor to give you a standard-issue 2019 nickel in its place. The rare coin in this example is not fungible, i.e., it is non-fungible.

Tokens

A token is something that represents or stands for something else. New York City old-timers may recall subway tokens – small, coin-shaped objects representing the right of access to a subway train. Casino chips are tokens representing specified amounts of money.

A digital token is a programmable digital unit that is recorded on a digital ledger using blockchain technology. There are a lot of different kinds of digital tokens. They can represent physical goods or digital goods.

Bitcoins are examples of fungible digital tokens. Digital NFTs, on the other hand, most commonly represent art, a photograph, music, a video, a meme, or a digitized scan of some other kind. CryptoPunks, pixelated character images, each of which is unique, are among the earliest NFTs, but many other examples abound.

Ethereum has developed standards for digital tokens. The ERC-721 standard governs digital NFTs. Under this standard, every NFT must have a tokenID. The tokenID is generated when the token is created. Every NFT also must have a contract address. This is a blockchain address that can be viewed using a blockchain scanner. The combination of tokenID and contract address is unique for each NFT.
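As a rough illustration of that uniqueness guarantee, the combination can be modeled as a composite key. The following Python sketch is purely illustrative; the contract addresses, token IDs, and owner names are all invented:

```python
# Illustrative only: model an NFT's identity as the pair (contract_address, token_id).
registry: dict[tuple[str, int], str] = {}

def mint(contract_address: str, token_id: int, owner: str) -> None:
    """Record a new token; the (contract address, tokenID) pair must be unique."""
    key = (contract_address, token_id)
    if key in registry:
        raise ValueError("token already minted")
    registry[key] = owner

mint("0xAAA111", 1, "alice")
mint("0xAAA111", 2, "bob")
mint("0xBBB222", 1, "carol")  # same token_id, different contract: a distinct token
```

Attempting to mint ("0xAAA111", 1) a second time would raise an error, which is the sketch's stand-in for the standard's guarantee that no two tokens share both a contract address and a tokenID.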

Blockchains

Both fungible and nonfungible tokens are built and reside on blockchains. A blockchain is simply a database that stores information in digital format. Think of it as a digital ledger. It is called a “block” chain because information is stored in groups (“blocks”). When a block reaches its storage capacity, it is closed and linked to the previously filled block. A new block is formed for any new data that is added later. As this process repeats, a chain of records is created. Hence the “chain” in blockchain. Each block is time-stamped.
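The chaining mechanism can be sketched in a few lines of Python. This toy example omits real-world details such as consensus and transaction validation; it only shows how each block's hash commits to its predecessor, which is what makes after-the-fact alteration detectable:

```python
import hashlib
import json
import time

def make_block(records: list, prev_hash: str) -> dict:
    """A toy block: some records, a timestamp, and the prior block's hash."""
    block = {"records": records, "timestamp": time.time(), "prev_hash": prev_hash}
    # The block's own hash is computed over its contents, including prev_hash,
    # so every block cryptographically commits to the one before it.
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()
    ).hexdigest()
    return block

genesis = make_block(["first entry"], prev_hash="0" * 64)
second = make_block(["second entry"], prev_hash=genesis["hash"])

# Altering a record in the first block would change its hash, which would no
# longer match the prev_hash stored in the second block.
assert second["prev_hash"] == genesis["hash"]
```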

Blockchains are simply record-keeping mechanisms. They work well for many, but not all, kinds of digital files. They play a significant role in cryptocurrency systems, as they maintain a secure, decentralized record of transactions. They are not as efficient, however, for large digital files like artwork, videos, sound recordings, and so on. In these cases, a nonfungible token, not the actual file, can be made a part of the chain. This is why, in addition to a tokenID and contract address, an NFT will frequently contain the creator’s wallet address and a link to the work the token represents.

One of the most important things to remember about NFTs, for purposes of copyright law, is that although they might contain a creative work within them, more typically they link to a work in some way. They are pieces of code containing a link; they are not typically the works themselves.  
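To make that concrete, here is roughly what a token record and its metadata might look like. The shape loosely follows the ERC-721 metadata convention, but every name and value below is invented for illustration:

```python
import json

# Invented ERC-721-style metadata: the token points to the work; it does not contain it.
nft_metadata = {
    "name": "Example Artwork #1",
    "description": "A token whose 'image' field links to an off-chain file.",
    "image": "ipfs://ExampleImageHash",  # a link to the artwork, not the artwork itself
}

token_record = {
    "token_id": 1,
    "contract_address": "0xExampleContract",    # invented
    "creator_wallet": "0xExampleCreator",       # invented
    "token_uri": "ipfs://ExampleMetadataHash",  # where the metadata above would live
}

# The on-chain record is just a small string of data; the artwork bytes live elsewhere.
print(json.dumps(token_record, indent=2))
```

The artwork bytes never appear in either record; the token merely points to them, which is the feature the copyright analysis below turns on.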

Transfers of NFTs vs. transfers of copyrights

NFTs representing artwork sometimes sell for millions of dollars. Perhaps this explains the popular misconception that the copyright in the work the NFT represents gets transferred along with the NFT. No, buying an NFT representing a work of art does not, by itself, give the buyer the rights of a copyright owner. You might think that you must be getting something more than a string of code when you buy an NFT, but no. In the United States, anyway, an assignment of copyright must be express and made in a writing signed by the copyright owner (or the copyright owner’s authorized agent).

Of course, if a written contract does expressly provide for the assignment of the copyright, then a transfer of a copyright may co-occur with the transfer of an NFT. In the absence of such a contractual provision, however, buying an NFT does not transfer the copyright in the artwork it represents. Instead, it operates in a way similar to the way buying a copy of a copyrighted book or a print of copyrighted artwork does.

The question whether the transfer of an NFT gives the transferee a copyright license is a little more complicated.

In the United States, an exclusive copyright license, like an outright transfer, must be in writing. A non-exclusive license, on the other hand, may be either express or implied. In addition, it is possible to code any type of agreement into a smart contract (an agreement that is written in code and stored on a blockchain). If the existence of a valid copyright license can be proven, then the nature and extent of the NFT transferee’s rights may be governed by its terms.

A U.S. federal court had occasion to address the subject of implied copyright licenses in the case of Pelaez v. McGraw Hill, 399 F. Supp. 3d 120 (S.D.N.Y. 2019). There, the court ruled that the test for an implied license is whether the parties’ conduct, taken as a whole, demonstrates an intent to grant a license. The court pointed out that an implied license cannot be based on the unilateral expectations of one person. A party’s subjective belief that he or she has been granted a license is not enough. The totality of facts and circumstances must be such that a court could reasonably infer that both parties intended a license.

Copyright ownership arises at the time an original, creative, expressive work is fixed in a tangible medium. Registration is not required. Despite this feature of copyright law, some countries make registration of the copyright a prerequisite to enforcing it in court. The United States is such a country.

Some people believe that because blockchain operates as an unalterable record of ownership, it serves as a substitute for registration with the U.S. Copyright Office. This is not the case.

The U.S. Copyright Act requires the copyright in a domestic work to be registered with the Copyright Office before an infringement claim may be filed in court. 17 U.S.C. § 411. It does not make an exception for cases in which ownership is sought to be proven by a “poor man’s copyright” (i.e., submitting into evidence the postmark on an envelope in which you have mailed a copy of the work to yourself), much less for a digital NFT.

Of course, a registration certificate only creates a presumption of copyright ownership. The presumption is rebuttable. Could evidence such as the date on which an NFT representing the work was created and written into the blockchain be used to rebut that presumption? Possibly. Then again, how probative is that evidence? Anyone can make a false ownership claim and write it into the blockchain, just as anyone can mail an infringing copy of a work to themselves.

Unless Congress amends the Copyright Act to make blockchain a substitute for registration with the Copyright Office, it would be foolhardy to rely on blockchain as a registration alternative.

Infringement

Is minting an NFT associated with a copyrighted work, without permission, infringement? The answer to this question is not as simple as you might think.

The exclusive rights of a copyright owner include reproduction, distribution, public display, public performance, and the making of derivative works. An NFT containing only a tokenID, a contract address, and a link to a work is merely a string of code associated with that work. If an NFT contains only a link to the work, not the work itself, then it is difficult to see how minting the NFT would violate any of the exclusive rights of the copyright owner.

Of course, if the NFT itself contains copyright-protected elements of the work (and this would have to be something more than the title, artist name and a link), then it might be a reproduction or a derivative of the work. In this situation, creating an NFT without the copyright owner’s permission could constitute infringement, since the copyright owner has the exclusive right to make copies and derivatives of the work.

If the link points to a copy or derivative work that the link creator created in violation of the copyright owner’s exclusive rights to make copies and derivative works, then the link creator could incur two kinds of infringement liability. Even if minting an NFT does not itself infringe a copyright, including in it a link to an infringing copy of a copyright-protected work could result in contributory liability for infringement if that person knows or should know that it will facilitate or encourage unauthorized copying (or other unauthorized use) of a copyrighted work. And of course, there would be direct liability for making the copy or the derivative work without the copyright owner’s permission.

The first sale doctrine

Under U.S. copyright law, the purchaser of a lawfully acquired copy of a copyrighted work may resell that copy without first getting the copyright owner’s permission, unless a contract governing the acquisition of the copy provides otherwise. This is why purchasing a paperback copy of The Andromeda Strain on Amazon.com and later reselling it at a garage sale will not subject you to liability for infringing the copyright owner’s exclusive right to distribute copies of the work.

Does the first sale doctrine also apply to NFTs?

The first sale doctrine generally does not apply to resales of digital goods. This is because a sale of a digital file normally will require making a copy of the file. That would violate the copyright owner’s exclusive right to reproduce his or her work. See, e.g., Capitol Records LLC v. ReDigi Inc. (2d Cir. 2018) (refusing to apply the first sale doctrine to the resale of an MP3 file because the resale would require making an unauthorized reproduction of the original MP3 file).

NFTs, however, arguably are distinguishable from MP3 files. A purchaser of an NFT does not buy the digital file containing the copyright-protected work. An NFT buyer simply purchases a token. Reselling a token does not involve reproducing the work itself. Cf. Disney Enterprises Inc. v. Redbox Automated Retail LLC (C.D. Cal. Feb. 20, 2018) (first sale doctrine inapplicable to digital download codes because they are options to create a physical copy, not actual sales of copies).

If the transferee of an NFT uses it to access the copyrighted work, and in the course of doing so, the work is reproduced or distributed, then it would seem that the transferee could, at that point, be liable for copyright infringement. There would also appear to be a potential risk of liability for contributory infringement on the part of the NFT seller, at least in some cases.

Of course, this should not be a problem if the copyright owner has authorized resales by NFT buyers.

Contact Minnesota attorney Thomas James

Need to get in touch with the Cokato Copyright Attorney? Use the contact form.

No Trademark Registration .sucks

The U.S. Trademark Office denied an application to register “.sucks” as a trademark. The Court of Appeals affirmed. Cokato attorney Tom James explains.

by Cokato attorney Tom James

Image: the stylized font claimed for the .SUCKS trademark discussed in this article.

Most people are familiar with a few gTLDs (generic top level domains). The gTLDs .com, .net, .biz, .info, .edu and .gov come to mind. The list of available gTLDs has grown considerably over the past few years, however. Now there are literally hundreds of them. (View the full list here.) Some examples: .food, .auction, .dog, .beer.

And .sucks.

The United States Trademark Office denied an application to register that gTLD as a trademark. The Federal Circuit Court of Appeals just affirmed that decision. The case is Vox Populi Registry, Ltd., No. 2021-1496 (Fed. Cir., February 2, 2022).

The applications

Vox is the domain registry operator for the .SUCKS gTLD. The company filed two trademark applications with the USPTO. One was for the standard character mark .SUCKS in Class 42 (computer and scientific services) for “[d]omain registry operator services related to the gTLD in the mark” and in Class 45 (personal and legal services) for “[d]omain name registration services featuring the gTLD in the mark” as well as “registration of domain names for identification of users on a global computer network featuring the gTLD in the mark.” The other application was for the stylized form of the mark, as shown in the illustration accompanying this article.

The examining attorney refused both applications, on the ground that they failed to operate as trademarks, i.e., as source identifiers. The TTAB agreed, finding that consumers will perceive “.sucks” as merely one of several gTLDs that are used in domain names, not as a source identifier.

Concerning the stylized form of the mark, the Board concluded that although the pixelated font, resembling how letters were displayed on early LED screens, is not common today, it is not sufficiently distinctive to qualify for trademark protection in this case.

Vox appealed the part of the decision relating to the stylized font to the Federal Circuit Court of Appeals. The court affirmed.

The standard character mark

Under the Lanham Act, a service mark may be registered only if it functions to “identify and distinguish the services of one person . . . from the services of others and to indicate the source of the services.” 15 U.S.C. § 1127. Matter that merely conveys general information about a product or service generally does not function as a source identifier.

In this case, the court held that substantial evidence supported the Board’s finding that consumers will view the standard character mark only as a non-source-identifying part of a domain name, not as a trademark. The court pointed to specimens from Vox’s website that treated domain names ending in “.sucks” as products rather than as identifiers of Vox’s services. Consumers are likely to see gTLDs as parts of domain names, not as identifiers of domain name registry operators.

The stylized design

Design or stylization can sometimes make an otherwise unregistrable mark registrable, provided the stylization creates an impression on consumers that is distinct from the words or letters themselves. Here, the Board determined that, because the font was ubiquitous in the early days of computing, consumers would view the pixelated lettering as ordinary rather than as a source identifier.

It appears that Vox did not claim that the stylized presentation of .SUCKS had acquired distinctiveness. If it had done so – and if it could present persuasive evidence of acquired distinctiveness – then the stylized mark might have been registrable.

Conclusion

Does this decision mean that a gTLD can never serve as a trademark? No. To give just one example, AMAZON is both a gTLD and a trademark. The import of the case is only that a gTLD is not likely to be registrable as a service mark for a domain name registry service, where consumers are more likely to see it as simply being a part of a domain name, not as an identifier of a particular domain registry service.

Contact Tom James

Contact Cokato attorney Tom James for help with trademark registration.

Compulsory E-Book Licensing

In 2020, New York legislators introduced bills to require book publishers to offer licenses to libraries to allow them to make digital copies of books (“e-books”) available to the public. The idea has spread to other states. Cokato attorney Tom James explains why these proposals do not rest on solid legal ground.

by Cokato attorney Tom James

In 2020, New York legislators introduced bills (S2890 and A5837) to require book publishers to offer licenses to libraries to allow them to make digital copies of books (“e-books”) available to the public.

The idea has spread to other states. Legislators in Rhode Island (H6246), Maryland (HB518 and SB432), Missouri (HB2210) and Illinois (SB3167) have introduced similar bills. Groups in Connecticut, Texas, Virginia, and Washington are lobbying for similar legislation in those states.

The New York bill passed the legislature, but Governor Kathy Hochul vetoed it. Maryland, on the other hand, enacted its bill into law. See Md. Code, Educ. §§ 23-701, 23-702 (2021). It was slated to go into effect on January 1, 2022. Bills in other states are still pending.

AAP lawsuit

The Association of American Publishers (AAP) filed a complaint seeking declaratory and injunctive relief against the State of Maryland in federal district court in December 2021, asserting that the new law is preempted by the federal Copyright Act. The district court has granted a preliminary injunction against enforcement of the new law.

Preemption

Governor Kathy Hochul vetoed the New York bill because she believed that federal copyright law preempts the field of copyright regulation. Assuming it is not settled or dismissed, the AAP lawsuit should sort that question out. In the meantime, here is why I believe Governor Hochul is correct.

Express preemption

The Supremacy Clause of the U.S. Constitution makes the “Constitution, and the Laws of the United States which shall be made in Pursuance thereof . . . the supreme Law of the Land . . . any Thing in the Constitution or Laws of any State to the Contrary notwithstanding.” U.S. CONST. art. VI, cl. 2. For this reason, Congress may enact legislation expressly preempting state laws. Pacific Gas & Elec. Co. v. State Energy Res. Conservation and Dev. Comm’n, 461 U.S. 190, 203 (1983). Congress did this in § 301(a) of the 1976 Copyright Act:

“all legal or equitable rights that are equivalent to any of the exclusive rights within the general scope of copyright as specified by section 106 in works of authorship that are fixed in a tangible medium of expression and come within the subject matter of copyright as specified by sections 102 and 103 . . . are governed exclusively by this title….”

17 U.S.C. § 301(a)

Of course, like nearly all legislation, the Copyright Act contains an exception (more familiarly known as a “loophole”). Despite the preemption of state laws regulating copyrights, states may regulate “activities violating legal or equitable rights that are not equivalent to any of the exclusive rights within the general scope of copyright as specified in section 106.” 17 U.S.C. §§ 301(a), (b)(3). Those exclusive rights are the rights to reproduce, distribute, publicly display, publicly perform, and make derivative works based on a copyright-protected work.

Literary works are protected by copyright whether they are in paper or digital form. The question, then, is whether state laws creating what amount to compulsory licenses regulate or impinge upon the exclusive rights of copyright owners, or whether they regulate something else that is not covered by the Copyright Act. As indicated, the Copyright Act gives copyright owners the exclusive rights to distribute, publicly display, and publicly perform their works. A state law dictating to a copyright owner how, when and to whom s/he may exercise those rights (such as by granting a license) would certainly appear to regulate and impinge upon the exclusive rights of copyright owners.

Implied preemption

Even if Congress had not expressly preempted state regulation of the exclusive rights of copyright owners, it seems likely that the same conclusion would be reached on other grounds anyway.

The United States Supreme Court has held that even if Congress has not expressly preempted state laws – or if express preemption is not determinative because the “loophole” might apply – state laws relating to a particular subject matter may be implicitly preempted in some cases. Arizona v. U.S., 567 U.S. 387, 398–400 (2012).

There are two kinds of implied preemption: conflict preemption and field preemption. Field preemption occurs when Congress has “legislated so comprehensively” in a field that it has “left no room for supplementary state legislation.” R. J. Reynolds Tobacco Co. v. Durham Cty., 479 U.S. 130, 140 (1986). Conflict preemption occurs when a state law “stands as an obstacle to the accomplishment and execution of the full purposes and objectives of Congress.” Crosby v. Nat’l Foreign Trade Council, 530 U.S. 363, 373 (2000); see also ASCAP v. Pataki, 930 F. Supp. 873, 878 (S.D.N.Y. 1996).

Like an express preemption with a loophole, implied preemption does not apply if the state-created right protects a substantial “interest[] outside the sphere of congressional concern in the [copyright] laws.” In re Jackson, 972 F.3d 25, 34–37 (2d Cir. 2020) (internal punctuation and citation omitted). On the other hand, if the state law is “little more than camouflage for an attempt to exercise control over the exploitation of a copyright,” then it is preempted. Id. at 38.

The Copyright Act “creates a balance between the artist’s right to control the work during the term of the copyright protection and the public’s need for access to creative works.” Stewart v. Abend, 495 U.S. 207, 228 (1990); see also College Entrance Examination Bd. v. Pataki, 889 F. Supp. 554, 564–65 (N.D.N.Y. 1995) (evaluation of “the balance struck by Congress between copyright owners found in § 106 of the Copyright Act and the exceptions to those exclusive rights found in §§ 107–118 of the same Act” leads to a finding that the state law is preempted). Congress has struck that balance by giving the author a right “arbitrarily to refuse to license one who seeks to exploit the work.” Stewart, 495 U.S. at 229. See also Lawlor v. Nat’l Screen Serv. Corp., 270 F.2d 146, 154 (3d Cir. 1959) (the right to exclude others is a corollary to the licensing right).

Courts have been fairly consistent in finding that copyright law preempts state laws “that direct a copyright holder to distribute and license against its will or interests.” See, e.g., Orson, Inc. v. Miramax Film Corp., 189 F.3d 377, 386 (3d Cir. 1999), cert. denied, 529 U.S. 1012 (2000). An argument might be made that states have the power to regulate the terms of a license. That, however, is significantly different from requiring a rights holder to grant licenses in the first place. The Orson court held that state laws that “appropriate[] a product protected by the copyright law for commercial exploitation against the copyright owner’s wishes” are preempted by the federal Copyright Act. Id. A fortiori, a state law that appropriates a content creator’s decision whether and how to allow others to exercise one of his or her exclusive federally protected rights should be held to be preempted.

Moreover, the Copyright Act already covers the kinds of special concessions that copyright owners need to make to public libraries. Publishers should not be subjected to a complex array of state and federal laws making inroads into their federally protected rights, particularly where those inroads could vary significantly from state to state.

The “public interest”

Those who advocate for legislation of this kind typically assert the public interest in having access to literary works. The public certainly does have an interest in having access to literary works. The public also has an interest in having access to food. Does that mean government should force grocers to give a share of their inventory away to the public? That is essentially what these bills do to publishers, albeit indirectly.

One reason people will buy their own copy of a printed book instead of simply reading a library copy is that they do not want to have to get up out of their chairs and make a trip to a library. Digital copies are much easier to access; it generally may be done without ever leaving one’s chair. An author may spend hundreds, or even thousands, of hours researching, writing, compiling, proofing and editing a book. The least a reader – or a government acting on behalf of readers – can do is pay a couple of bucks for that, rather than using the strong arm of the law to force works to be made freely available online to anyone who applies for a library card – which often is free as well.

Absence of need

In any event, most publishers already make their digital catalogs available to public libraries. Something like half a billion digital check-outs from public libraries already occur every year. So why are some states trying to mandate this kind of licensing instead of simply allowing publishers and distributors to continue offering and negotiating licenses as they have been?

That question is open to speculation. One possibility is the “reasonable rate” at which these bills would require publishers to offer licenses to libraries. Who determines what a reasonable rate is? Well, it could be left to state courts to try to figure out. More likely, though, states would delegate rate-setting authority to a state agency. States whose legislators and regulators have a greater interest in being liked by their voting constituents than by a handful of published authors can reasonably be expected to set rates for those authors’ works below what they might have earned in a free market.

Concluding thought

The risk to publishers – and more importantly, to authors – should be obvious. Allowing states to require authors and publishers to sell their works at rates dictated by each state, most probably below market, would undermine the ultimate purpose of the Copyright Clause (U.S. Const. Art. 1, § 8), which is “[t]o promote the progress of science and useful arts, by securing for limited times to authors and inventors the exclusive right to their respective writings and discoveries.”

I fear that the continuing outbreaks of state bills like these – which seem extremely likely to be declared unconstitutional – might someday impel Congress to enact a compulsory digital licensing system for e-books on a federal level. If that happens, it will be vitally important for authors, publishers, authors’ organizations and agents to actively monitor and participate in the process.

Contact attorney Tom James

Need help with copyright registration or a copyright matter? Contact attorney Tom James.
