AI Lawsuits Roundup

A status update on 24 pending lawsuits against AI companies – what they’re about and what is happening in court – prepared by Minnesota copyright attorney Thomas James.

Statuses are current as of February 28, 2024.

Thomson Reuters v. Ross (D. Del. 2020)

Filed May 6, 2020. Thomson Reuters, owner of Westlaw, claims that Ross Intelligence infringed copyrights in Westlaw headnotes by training AI on copies of them. The judge has granted in part and denied in part the parties’ motions for summary judgment. The questions of fair use and whether the headnotes are sufficiently original to merit copyright protection will go to a jury to decide.

Thaler v. Perlmutter (D.D.C. 2022)

Complaint filed June 2, 2022. Thaler created an AI system called the Creativity Machine. He applied to register copyrights in the output he generated with it. The Copyright Office refused registration on the ground that AI output does not meet the “human authorship” requirement. He then sought judicial review. The district court granted summary judgment for the Copyright Office. In October, 2023, he appealed to the U.S. Court of Appeals for the District of Columbia Circuit (Case No. 23-5233).

Doe v. GitHub, Microsoft, and OpenAI (N.D. Cal. 2022)

Complaint filed November 3, 2022. Software developers claim the defendants trained Codex and Copilot on code derived from theirs, which they published on GitHub. Some claims have been dismissed, but claims that GitHub and OpenAI violated the DMCA and breached open source licenses remain. Discovery is ongoing.

Andersen v. Stability AI (N.D. Cal. 2023)

Complaint filed January 13, 2023. Visual artists sued Midjourney, Stability AI and DeviantArt for copyright infringement for allegedly training their generative-AI models on images scraped from the Internet without copyright holders’ permission. Other claims included DMCA violations, publicity rights violations, unfair competition, breach of contract, and a claim that output images are infringing derivative works. On October 30, 2023, the court largely granted motions to dismiss, but granted leave to amend the complaint. Plaintiffs filed an amended complaint on November 29, 2023. Defendants have filed motions to dismiss the amended complaint. A hearing on the motions is set for May 8, 2024.

Getty Images v. Stability AI (U.K. 2023)

Complaint filed January, 2023. Getty Images claims Stability AI scraped its images without consent. Getty’s complaint has survived a motion to dismiss, and the case appears to be heading to trial.

Getty Images v. Stability AI (D. Del.)

Complaint filed February 3, 2023. Getty Images alleges claims of copyright infringement, DMCA violation and trademark violations against Stability AI. The judge has denied without prejudice a motion to dismiss or transfer on jurisdictional grounds. The motion may be re-filed after the conclusion of jurisdictional discovery, which is ongoing.

Flora v. Prisma Labs (N.D. Cal.)

Complaint filed February 15, 2023. Plaintiffs allege violations of the Illinois Biometric Information Privacy Act in connection with Prisma Labs’ collection and retention of users’ selfies for AI training. The court has granted Prisma’s motion to compel arbitration.

Kyland Young v. NeoCortext (C.D. Cal. 2023)

Complaint filed April 3, 2023. This complaint alleges that the AI tool Reface used a person’s image without consent, in violation of the person’s publicity rights under California law. The court has denied a motion to dismiss, ruling that publicity rights claims are not preempted by federal copyright law. The case has been stayed pending appeal.

Walters v. OpenAI (Gwinnett County Super. Ct. 2023), and Walters v. OpenAI (N.D. Ga. 2023)

Gwinnett County complaint filed June 5, 2023.

Federal district court complaint filed July 14, 2023.

Mark Walters, a radio talk show host, sued OpenAI for defamation. A reporter had used ChatGPT to get information about him, and ChatGPT wrongly described him as a person who had been accused of fraud. In October, 2023, the federal court remanded the case to the Superior Court of Gwinnett County, Georgia. On January 11, 2024, the Gwinnett County Superior Court denied OpenAI’s motion to dismiss.

P.M. v. OpenAI (N.D. Cal. 2023).

Complaint filed June 28, 2023. Users claim OpenAI violated the federal Electronic Communications Privacy Act and California wiretapping laws by collecting their data when they input content into ChatGPT. They also claim violations of the Computer Fraud and Abuse Act. Plaintiffs voluntarily dismissed the case on September 15, 2023. See now A.T. v. OpenAI (N.D. Cal. 2023) (below).

In re OpenAI ChatGPT Litigation (N.D. Cal. 2023)

Complaint filed June 28, 2023. Originally captioned Tremblay v. OpenAI. Book authors sued OpenAI for direct and vicarious copyright infringement, DMCA violations, unfair competition and negligence. Both input (training) and output (derivative works) claims are alleged. Most state law and DMCA claims have been dismissed, but claims based on unauthorized copying during the AI training process remain. An amended complaint is likely to come in March. The court has directed that the amended complaint consolidate Tremblay v. OpenAI, Chabon v. OpenAI, and Silverman v. OpenAI.

Battle v. Microsoft (D. Md. 2023)

Complaint filed July 7, 2023. Pro se defamation complaint against Microsoft alleging that Bing falsely described the plaintiff, Jeffery Battle, as a member of the “Portland Seven,” a group of Americans who tried to join the Taliban after 9/11.

Kadrey v. Meta (N.D. Cal. 2023)

Complaint filed July 7, 2023. Sarah Silverman and other authors allege Meta infringed copyrights in their works by making copies of them while training Meta’s AI model; that the AI model is itself an infringing derivative work; and that outputs are infringing copies of their works. Plaintiffs also allege DMCA violations, unfair competition, unjust enrichment, and negligence. The court granted Meta’s motion to dismiss all claims except the claim that unauthorized copies were made during the AI training process. An amended complaint and answer have been filed.

J.L. v. Google (N.D. Cal. 2023)

Complaint filed July 11, 2023. An author filed a complaint against Google alleging misuse of content posted on social media and Google platforms to train Google’s AI Bard. (Gemini is the successor to Google’s Bard.) Claims include copyright infringement, DMCA violations, and others. J.L. filed an amended complaint and Google has filed a motion to dismiss it. A hearing is scheduled for May 16, 2024.

A.T. v. OpenAI (N.D. Cal. 2023)

Complaint filed September 5, 2023. ChatGPT users claim the company violated the federal Electronic Communications Privacy Act, the Computer Fraud and Abuse Act, and California Penal Code section 631 (wiretapping). The gravamen of the complaint is that ChatGPT allegedly intercepted users’ private information without their knowledge or consent. Motions to dismiss and to compel arbitration are pending.

Chabon v. OpenAI (N.D. Cal. 2023)

Complaint filed September 9, 2023. Authors allege that OpenAI infringed copyrights while training ChatGPT, and that ChatGPT is itself an unauthorized derivative work. They also assert claims of DMCA violations, unfair competition, negligence and unjust enrichment. The case has been consolidated with Tremblay v. OpenAI, and the cases are now captioned In re OpenAI ChatGPT Litigation.

Chabon v. Meta Platforms (N.D. Cal. 2023)

Complaint filed September 12, 2023. Authors assert copyright infringement claims against Meta, alleging that Meta trained its AI using their works and that the AI model itself is an unauthorized derivative work. The authors also assert claims for DMCA violations, unfair competition, negligence, and unjust enrichment. In November, 2023, the court issued an Order dismissing all claims except the claim of unauthorized copying in the course of training the AI. The court described the claim that an AI model trained on a work is a derivative of that work as “nonsensical.”

Authors Guild v. OpenAI, Microsoft, et al. (S.D.N.Y. 2023)

Complaint filed September 19, 2023. Book authors and fiction writers filed a complaint for copyright infringement in connection with the defendants’ training of AI on copies of their works without permission. A motion to dismiss has been filed.

Huckabee v. Bloomberg, Meta Platforms, Microsoft, and EleutherAI Institute (S.D.N.Y. 2023)

Complaint filed October 17, 2023. Political figure Mike Huckabee and others allege that the defendants trained AI tools on their works without permission when they used Books3, a text dataset compiled by developers; that their tools are themselves unauthorized derivative works; and that every output of their tools is an infringing derivative work.  Claims against EleutherAI have been voluntarily dismissed. Claims against Meta and Microsoft have been transferred to the Northern District of California. Bloomberg is expected to file a motion to dismiss soon.

Huckabee v. Meta Platforms and Microsoft (N.D. Cal. 2023)

Complaint filed October 17, 2023. This is the portion of the Huckabee case described above that was transferred to the Northern District of California. Plaintiffs have filed an amended complaint, and they have stipulated to dismissal of the claims against Microsoft without prejudice.

Concord Music Group v. Anthropic (M.D. Tenn. 2023)

Complaint filed October 18, 2023. Music publishers claim that Anthropic infringed publisher-owned copyrights in song lyrics, both when the lyrics allegedly were copied in the course of training Anthropic’s Claude model and when lyrics were reproduced and distributed in response to user prompts. They have also made claims of contributory and vicarious infringement. Motions to dismiss and for a preliminary injunction are pending.

Alter v. OpenAI and Microsoft (S.D.N.Y. 2023)

Complaint filed November 21, 2023. Nonfiction authors allege claims of copyright infringement and contributory copyright infringement against OpenAI and Microsoft, alleging that reproducing copies of their works in datasets used to train AI infringed their copyrights. The court has ordered consolidation of Authors Guild (23-cv-8292) and Alter (23-cv-10211). On February 12, 2024, plaintiffs in other cases filed a motion to intervene and dismiss.

New York Times v. Microsoft and OpenAI (S.D.N.Y. 2023)

Complaint filed December 27, 2023. The New York Times alleges that its news stories were used to train AI without a license or permission, in violation of its exclusive rights of reproduction and public display as copyright owner. The complaint also alleges vicarious and contributory copyright infringement, DMCA violations, unfair competition, and trademark dilution. The Times seeks damages, an injunction against further infringing conduct, and a Section 503(b) order for the destruction of “all GPT or other LLM models and training sets that incorporate Times Works.” On February 23, 2024, plaintiffs in other cases filed a motion to intervene and dismiss this case.

Basbanes and Ngagoyeanes v. Microsoft and OpenAI (S.D.N.Y. 2024)

Complaint filed January 5, 2024. Nonfiction authors assert copyright claims against Microsoft and OpenAI. On February 6, 2024, the court consolidated this case with Authors Guild (23-cv-08292) and Alter v. OpenAI (23-cv-10211) for pretrial purposes.

Caveat

This list is not exhaustive; other cases involving AI are pending. For a discussion of bias issues in Google’s Gemini, have a look at Scraping Bias on Medium.com.

Nontransformative Nuge

A reversal in the Fourth Circuit Court of Appeals demonstrates the impact the Supreme Court’s decision in Andy Warhol Foundation for the Visual Arts v. Goldsmith is already having on the application of copyright fair use doctrine in federal courts.

Philpot v. Independent Journal Review, No. 21-2021 (4th Cir. Feb. 6, 2024)

Philpot, a concert photographer, registered his photograph of Ted Nugent as part of a group of unpublished works. Prior to registration, he entered into a license agreement giving AXS TV the right to inspect his photographs for the purpose of selecting ones to curate. The agreement provided that the license would become effective upon delivery of photographs for inspection. After registration, Philpot delivered a set of photographs, including the Nugent photograph, to AXS TV. He also published the Nugent photograph to Wikimedia Commons under a Creative Commons (“CC”) license. The CC license allows free use on the condition that attribution is given. Independent Journal Review (“IJR”) published an article called “15 Signs Your Daddy Was a Conservative.” Sign #5 was “He hearts the Nuge.” IJR used Philpot’s photograph of Ted Nugent as an illustration for the article, without providing attribution to Philpot.

Philpot sued IJR for copyright infringement.  IJR asserted two defenses: (1) invalid copyright registration; and (2) fair use. The trial court did not decide whether the registration was valid or not, but it granted summary judgment for IJR based on its opinion that the news service’s publication of the photograph was fair use. The Fourth Circuit Court of Appeals reversed, ruling in Philpot’s favor on both issues. The Court held that the copyright registration was valid and that publication of the photograph without permission was not fair use.

The copyright registration

Published and unpublished works cannot be registered together. Including a published work in an application for registration of a group of unpublished works is an inaccuracy that might invalidate the registration, if the applicant was aware of the inaccuracy at the time of applying. Cf. Unicolors v. H&M Hennes & Mauritz, 595 U.S. 178 (2022). IJR argued that Philpot’s pre-registration agreement to send photographs to AXS TV for inspection and possible curation constituted “publication” of them, so that characterizing them as “unpublished” in the registration application was an inaccuracy known to Philpot.

17 U.S.C. § 101 defines publication as “the distribution of copies . . . to the public” or “offering to distribute copies . . . to a group of persons for purposes of further distribution . . . or public display.” The Court of Appeals held that merely entering into an agreement to furnish copies to a distributor for possible curation does not come within that definition. Sending copies to a limited class of people without concomitantly granting an unrestricted right to further distribute them to the public does not amount to “publication.”

Philpot’s arrangement with AXS TV is analogous to an author submitting a manuscript to a publisher for review for possible future distribution to the public. The U.S. Copyright Office has addressed this. “Sending copies of a manuscript to prospective publishers in an effort to secure a book contract does not [constitute publication].” U.S. Copyright Office, Compendium of U.S. Copyright Office Practices § 1905.1 (3d ed. 2021). Philpot had provided copies of his work for the limited purpose of examination, without a present grant of a right of further distribution. Therefore, the photographs were, in fact, unpublished at the time of the application for registration. Since no inaccuracy existed, the registration was valid.

Fair use

The Court applied the four-factor test for fair use set out in 17 U.S.C. § 107.

(1) Purpose and character of the use. Citing Andy Warhol Found. for the Visual Arts v. Goldsmith, 598 U.S. 508, 527–33 (2023), the Court held that when, as here, a use is neither transformative nor noncommercial, this factor weighs against a fair use determination. IJR used the photograph for the same purpose as Philpot intended (as a depiction of Mr. Nugent), and its use was commercial.

(2) Nature of the work. Photographs taken by humans are acts of creative expression that receive what courts have described as “thick” copyright protection. Therefore, this factor weighed against a fair use determination.

(3) Amount and substantiality of the portion used. Since all of the expressive features of the work were used, this factor also weighed against a fair use determination.

(4) Effect on the market for the work. Finally, the Court determined that allowing free use of a copyrighted work for commercial purposes without the copyright owner’s permission could potentially have a negative impact on the author’s market for the work. Therefore, this factor, too, weighed against a fair use determination.

Since all four factors weighed against a fair use determination, the Court reversed the trial court’s grant of summary judgment to IJR and remanded the case for further proceedings.

Conclusion

This decision demonstrates the impact the Warhol decision is having on copyright fair use analysis in the courts. Previously, courts had been interpreting transformativeness very broadly. In many cases, they were ending the fair use inquiry as soon as some sort of transformative use could be articulated. As the Court of Appeals decision in this case illustrates, trial courts now need to alter their approach in two ways: (1) they need to return to considering all four fair use factors rather than ending the inquiry upon a defendant’s articulation of some “transformative use”; and (2) they need to apply a much narrower definition of transformativeness than they have been. If both the original work and an unauthorized reproduction of it are used for the purpose of depicting a particular person or scene (as distinguished from parodying or commenting on a work, for example), and for commercial gain, it is no longer prudent to count on the first fair use factor supporting a fair use determination.


Photo: Photograph published in a July, 1848 edition of L’Illustration. Believed to be the first instance of photojournalism, it is now in the public domain.

Generative-AI as Unfair Trade Practice

While Congress and the courts grapple with generative-AI copyright issues, the FTC weighs in on the risks of unfair competition, monopolization, and consumer deception.

FTC Press Release excerpt

While Congress and the courts are grappling with the copyright issues that AI has generated, the federal government’s primary consumer watchdog has made a rare entry into the realm of copyright law. The Federal Trade Commission (FTC) has filed a Comment with the U.S. Copyright Office suggesting that generative-AI could be (or be used as) an unfair or deceptive trade practice. The Comment was filed in response to the Copyright Office’s request for comments as it prepares to begin rule-making on the subject of artificial intelligence (AI), particularly generative-AI.

Monopolization

The FTC is responsible for enforcing the FTC Act, which broadly prohibits “unfair or deceptive” practices. The Act protects consumers from deceptive and unscrupulous business practices. It is also intended to promote fair and healthy competition in U.S. markets. The Supreme Court has held that all violations of the Sherman Act also violate the FTC Act.

So how does generative-AI raise monopolization concerns? The Comment suggests that incumbents in the generative-AI industry could engage in anti-competitive behavior to ensure continuing and exclusive control over the use of the technology. (More on that here.)

The agency cited the usual suspects: bundling, tying, exclusive or discriminatory dealing, mergers, and acquisitions. Those kinds of concerns are common in any business sector; they are not unique to generative-AI. The FTC also described some concerns of special significance in the AI space, though.

Network effects

Because positive feedback loops improve the performance of generative-AI, it gets better as more people use it. This results in concentrated market power in incumbent generative-AI companies with diminishing possibilities for new entrants to the market. According to the FTC, “network effects can supercharge a company’s ability and incentive to engage in unfair methods of competition.”

Platform effects

As AI users come to depend on a particular incumbent generative-AI platform, the company that owns the platform could take steps to lock its customers into using that platform exclusively.

Copyrights and AI competition

The FTC Comment indicates that the agency is not only weighing the possibility that AI unfairly harms creators’ ability to compete. (The use of pirated materials, or the misuse of copyrighted materials, can be an unfair method of competition under Section 5 of the FTC Act.) It is also considering the possibility that generative-AI may deceive, or be used to deceive, consumers. Specifically, the FTC expressed a concern that “consumers may be deceived when authorship does not align with consumer expectations, such as when a consumer thinks a work has been created by a particular musician or other artist, but it has been generated by someone else using an AI tool.” (Comment, page 5.)

In one of my favorite passages in the Comment, the FTC suggests that training AI on protected expression without consent, or selling output generated “in the style of” a particular writer or artist, may be an unfair method of competition, “especially when the copyright violation deceives consumers, exploits a creator’s reputation or diminishes the value of her existing or future works….” (Comment, pages 5–6.)

Fair Use

The significance of the FTC’s injection of itself into the generative-AI copyright fray cannot be overstated. It is extremely likely that during their legislative and rule-making deliberations, both Congress and the Copyright Office are going to focus the lion’s share of their attention on the fair use doctrine. They are most likely going to try to allow generative-AI outfits to continue to infringe copyrights (it is already a multi-billion-dollar industry, after all, with obvious potential political value), while at the same time imposing at least some kinds of limitations to preserve a few shards of the copyright system. Maybe they will devise a system of statutory licensing, as they did when online streaming – and the widespread copyright infringement it facilitated – became a thing.

Whatever happens, the overarching question for Congress is going to be: what kinds of copyright infringement should be considered “fair” use?

Copyright fair use normally is assessed using the four-factor test set out in the Copyright Act. Considerations about unfair competition arguably are subsumed within the fourth factor of that analysis – the effect the infringing use has on the market for the original work.

The other objective of the FTC Act – protecting consumers from deception – does not neatly fit into any of the four statutory factors for copyright fair use. I believe a good argument can be made that it should come within the first factor: the purpose and character of the use. The task for Congress and the Copyright Office, then, should be to determine which particular purposes and kinds of uses of generative-AI should be thought of as fair. There is no reason the Copyright Office should avoid considering Congress’s objectives, expressed in the FTC Act and other laws, when making that determination.

AI Legislative Update

Congressional legislation to regulate artificial intelligence (“AI”) and AI companies is in the early formative stages. Just about the only thing that is certain at this point is that federal regulation in the United States is coming.

In August, 2023, Senators Richard Blumenthal (D-CT) and Josh Hawley (R-MO) introduced a Bipartisan Framework for U.S. AI Act. The Framework sets out five bullet points identifying Congressional legislative objectives:

  • Establish a federal regulatory regime implemented through licensing AI companies, to include requirements that AI companies provide information about their AI models and maintain “risk management, pre-deployment testing, data governance, and adverse incident reporting programs.”
  • Ensure accountability for harms through both administrative enforcement and private rights of action, where “harms” include privacy and civil rights violations. The Framework proposes making Section 230 of the Communications Decency Act inapplicable to these kinds of actions. (Section 230 is the provision that generally grants immunity to Facebook, Google and other online service providers for user-provided content.) The Framework identifies the harms about which it is most concerned as “explicit deepfake imagery of real people, production of child sexual abuse material from generative A.I. and election interference.” Noticeably absent is any mention of harms caused by copyright infringement.
  • Restrict the sharing of AI technology with Russia, China or other “adversary nations.”
  • Promote transparency: The Framework would require AI companies to disclose information about the limitations, accuracy and safety of their AI models to users; would give consumers a right to notice when they are interacting with an AI system; would require providers to watermark or otherwise disclose AI-generated deepfakes; and would establish a public database of AI-related “adverse incidents” and harm-causing failures.
  • Protect consumers and kids. “Consumers should have control over how their personal data is used in A.I. systems and strict limits should be imposed on generative A.I. involving kids.”

The Framework does not address copyright infringement, whether of the input or the output variety.

The Senate Judiciary Committee Subcommittee on Privacy, Technology, and the Law held a hearing on September 12, 2023. Witnesses called to testify generally approved of the Framework as a starting point.

The Senate Commerce, Science, and Transportation Subcommittee on Consumer Protection, Product Safety and Data Security also held a hearing on September 12, titled “The Need for Transparency in Artificial Intelligence.” One of the witnesses, Dr. Ramayya Krishnan of Carnegie Mellon University, did raise a concern about the use of copyrighted material by AI systems and the economic harm it causes for creators.

On September 13, 2023, Sen. Chuck Schumer (D-NY) held an “AI Roundtable.” Invited attendees at the closed-door session included Bill Gates (Microsoft), Elon Musk (xAI, Neuralink, etc.), Sundar Pichai (Google), Charlie Rivkin (MPA), and Mark Zuckerberg (Meta). Gates, whose Microsoft, like the companies headed by some of the other invitees, has been investing heavily in generative-AI development, touted the claim that AI could help fight world hunger.

Meanwhile, Dana Rao, Adobe’s Chief Trust Officer, penned a proposal that Congress establish a federal anti-impersonation right to address the economic harms generative-AI causes when it impersonates the style or likeness of an author or artist. The proposed law would be called the Federal Anti-Impersonation Right Act, or “FAIR Act,” for short. The proposal would provide for the recovery of statutory damages by artists who are unable to prove actual economic damages.

Generative AI: Perfect Tool for the Age of Deception

For many reasons, the new millennium might well be described as the Age of Deception. Cokato Copyright Attorney Tom James explains why generative-AI is a perfect fit for it.

Image by Gerd Altmann on Pixabay.

What is generative AI?

“AI,” of course, stands for artificial intelligence. Generative AI is a variety of it that can produce content such as text and images, seemingly of its own creation. I say “seemingly” because in reality these kinds of AI tools are not independently creating these images and lines of text. Rather, they are “trained” to emulate existing works created by humans. Essentially, they are derivative-work generation machines, enabling the creation of works based on potentially millions of human-created originals.

AI has been around for decades. It wasn’t until 2014, however, that the technology began to be refined to the point that it could generate text, images, video and audio so similar to real people and their creations that it is difficult, if not impossible, for the average person to tell the difference.

Rapid advances in the technology in the past few years have yielded generative-AI tools that can write entire stories and articles, seemingly paint artistic images, and even generate what appear to be photographic images of people.
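
To make the “trained to emulate” idea concrete, here is a toy sketch in Python. It is my own illustration for this post, not any company’s actual system, and the training text is a made-up stand-in for the works a real model ingests. The sketch “learns” which character tends to follow which in the training text, then generates new text mimicking those patterns. Production models are vastly more sophisticated, but the basic point is the same: everything the generator emits is derived from patterns in the human-created material it was trained on.

import random
from collections import defaultdict

# Stand-in for the scraped works a real generative model is trained on.
training_text = "the cat sat on the mat and the cat ran to the man"

# "Training": record which character tends to follow which.
transitions = defaultdict(list)
for current, following in zip(training_text, training_text[1:]):
    transitions[current].append(following)

# "Generation": walk the learned transitions to emit new text.
def generate(seed, length):
    out = [seed]
    for _ in range(length):
        followers = transitions.get(out[-1], [" "])
        out.append(random.choice(followers))
    return "".join(out)

print(generate("t", 40))  # output mimics the style of the training text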

AI “hallucinations” (aka lies)

In the AI field, a “hallucination” occurs when an AI tool (such as ChatGPT) generates a confident response that is not justified by the data on which it has been trained.

For example, I queried ChatGPT about whether a company owned equally by a husband and wife could qualify for the preferences the federal government sets aside for women-owned businesses. The chatbot responded with something along the lines of “Certainly” or “Absolutely,” explaining that the U.S. government is required to provide equal opportunities to all people without discriminating on the basis of sex. When I cited the provision of federal law that contradicts what the chatbot had just asserted, it replied with an apology and something to the effect of “My bad.”

I also asked ChatGPT if any U.S. law imposes unequal obligations on male citizens. The chatbot cheerily reported back to me that no, no such laws exist. I then cited the provision of the United States Code that imposes an obligation to register for Selective Service only upon male citizens. The bot responded that while that is true, it is unimportant and irrelevant because there has not been a draft in a long time and there is not likely to be one anytime soon. I explained to the bot that this response was irrelevant. Young men can be, and are, denied the right to government employment and other civic rights and benefits if they fail to register, regardless of whether a draft is in place or not, and regardless of whether they are prosecuted criminally or not. At this point, ChatGPT announced that it would not be able to continue this conversation with me. In addition, it made up some excuse. I don’t remember what it was, but it was something like too many users were currently logged on.

These are all examples of AI hallucinations. If a human being were to say them, we would call them “lies.”

Generating lie after lie

AI tools regularly concoct lies. For example, when asked to generate a financial statement for a company, a popular AI tool falsely stated that the company’s revenue was a number it apparently had simply made up. According to Slate’s article “The Alarming Deceptions at the Heart of an Astounding New Chatbot,” users of large language models like ChatGPT have been complaining that these tools randomly insert falsehoods into the text they generate. Experts now consider frequent “hallucination” (aka lying) to be a major problem in chatbots.

ChatGPT has also generated fake case precedents, replete with plausible-sounding citations. This phenomenon made the news when attorney Steven Schwartz submitted six fake ChatGPT-generated case precedents in his brief to the federal district court for the Southern District of New York in Mata v. Avianca. Schwartz reported that ChatGPT continued to insist the fake cases were authentic even after their nonexistence was discovered. Texas federal judge Brantley Starr has since banned the submission of AI-generated filings that have not been reviewed by a human, saying that generative-AI tools

are prone to hallucinations and bias…. [T]hey make stuff up – even quotes and citations. Another issue is reliability or bias. While attorneys swear an oath to set aside their personal prejudices,… generative artificial intelligence is the product of programming devised by humans who did not have to swear such an oath. As such, these systems hold no allegiance to…the truth.

Judge Brantley Starr, Mandatory Certification Regarding Generative Artificial Intelligence.

Facilitating defamation

Section 230 of the Communications Decency Act generally shields Facebook, Google and other online services from liability for providing a platform for users to publish false and defamatory information about other people. That has been a real boon for people who like to destroy other people’s reputations by means of spreading lies and misinformation about them online. It can be difficult and expensive to sue an individual for defamation, particularly when the individual has taken steps to conceal and/or lie about his or her identity. Generative AI tools make the job of defaming people even simpler and easier.

More concerning than the malicious defamatory liars, however, are the many people who earnestly rely on AI as a research tool. In July, 2023, Mark Walters filed a lawsuit against OpenAI, claiming its ChatGPT tool provided false and defamatory misinformation about him to journalist Fred Riehl. I wrote about this lawsuit in a previous blog post. Shortly after this lawsuit was filed, a defamation lawsuit was filed against Microsoft, alleging that its AI tool, too, had generated defamatory lies about an individual. Generative-AI tools can generate false and defamatory statements about individuals even if no one has any intention of defaming anyone or ruining another person’s reputation.

Facilitating false light invasion of privacy

Generative AI is also highly effective in portraying people in a false light. In one recently filed lawsuit, Jack Flora and others allege, among other things, that Prisma Labs’ Lensa app generates sexualized images from images of fully-clothed people, and that the company failed to notify users about the biometric data it collects and how it will be stored and/or destroyed. Flora et al. v. Prisma Labs, Inc., No. 23-cv-00680 (N.D. Calif. February 15, 2023).

Pot, meet kettle; kettle, pot

“False news is harmful to our community, it makes the world less informed, and it erodes trust. . . . At Meta, we’re working to fight the spread of false news.” Meta (née Facebook) published that statement back in 2017. Since then, it has engaged in what is arguably the most ambitious campaign in history to monitor and regulate the content of conversations among humans. Yet, it has also joined other mega-organizations Google and Microsoft in investing multiple billions of dollars in what is the greatest boon to fake news in recorded history: generative-AI.

Toward a braver new world

It would be difficult to imagine a more efficient method of facilitating widespread lying and deception (not to mention false and hateful rhetoric) – and therefore propaganda – than generative-AI. Yet, these mega-organizations continue to sink more and more money into further development and deployment of these lie-generators.

I dread what the future holds in store for our children and theirs.

Generative-AI: The Top 12 Lawsuits

Artificial intelligence (“AI”) is generating more than content; it is generating lawsuits. Here is a brief chronology of what I believe are the most significant lawsuits that have been filed so far.

Most of these allege copyright infringement, but some make additional or other kinds of claims, such as trademark, privacy or publicity right violations, defamation, unfair competition, and breach of contract, among others. So far, the suits primarily target the developers and purveyors of generative AI chatbots and similar technology. They focus more on what I call “input” than on “output” copyright infringement. That is to say, they allege that copyright infringement is involved in the way particular AI tools are trained.

Thomson Reuters Enterprise Centre GmbH et al. v. ROSS Intelligence (May, 2020)

Thomson Reuters Enterprise Centre GmbH et al. v. ROSS Intelligence Inc., No. 20-cv-613 (D. Del. 2020)

Thomson Reuters alleges that ROSS Intelligence copied its Westlaw database without permission and used it to train a competing AI-driven legal research platform. In defense, ROSS has asserted that it only copied ideas and facts from the Westlaw database of legal research materials. (Facts and ideas are not protected by copyright.) ROSS also argues that its use of content in the Westlaw database is fair use.

One difference between this case and subsequent generative-AI copyright infringement cases is that the defendant in this case is alleged to have induced a third party with a Westlaw license to obtain allegedly proprietary content for the defendant after the defendant had been denied a license of its own. Other cases involve generative AI technologies that operate by scraping publicly available content.

The parties filed cross-motions for summary judgment. While those motions were pending, the U.S. Supreme Court issued its decision in Andy Warhol Found. for the Visual Arts, Inc. v. Goldsmith, 598 U.S. ___, 143 S. Ct. 1258 (2023). The parties have now filed supplemental briefs asserting competing arguments about whether and how the Court’s treatment of transformative use in that case should be interpreted and applied in this case. A decision on the motions is expected soon.

Doe 1 et al. v. GitHub et al. (November, 2022)

Doe 1 et al. v. GitHub, Inc. et al., No. 22-cv-06823 (N.D. Calif. November 3, 2022)

This is a class action lawsuit against GitHub, Microsoft, and OpenAI that was filed in November, 2022. It involves GitHub’s Copilot, an AI-powered tool that suggests lines of programming code based on what a programmer has written. The complaint alleges that Copilot copies code from publicly available software repositories without complying with the terms of applicable open-source licenses. The complaint also alleges removal of copyright management information in violation of 17 U.S.C. § 1202, unfair competition, and other tort claims.

Andersen et al. v. Stability AI et al. (January 13, 2023)

Andersen et al. v. Stability AI et al., No. 23-cv-00201 (N.D. Calif. Jan. 13, 2023)

Artists Sarah Andersen, Kelly McKernan, and Karla Ortiz filed this class action lawsuit against generative-AI companies Stability AI, Midjourney, and DeviantArt on January 13, 2023. The lawsuit alleges that the defendants infringed their copyrights by using their artwork without permission to train AI-powered image generators to create allegedly infringing derivative works. The lawsuit also alleges violations of 17 U.S.C. § 1202, publicity rights violations, breach of contract, and unfair competition.

Getty Images v. Stability AI (February 3, 2023)

Getty Images v. Stability AI, No. 23-cv-00135-UNA (D. Del. February 3, 2023)

Getty Images has filed two lawsuits against Stability AI, one in the United Kingdom and one in the United States, each alleging both input and output copyright infringement. Getty Images owns the rights to millions of images. It is in the business of licensing rights to use copies of the images to others. The lawsuit also accuses Stability AI of falsifying, removing or altering copyright management information, trademark infringement, trademark dilution, unfair competition, and deceptive trade practices.

Stability AI has moved to dismiss the complaint filed in the U.S. for lack of jurisdiction.

Flora et al. v. Prisma Labs (February 15, 2023)

Flora et al. v. Prisma Labs, Inc., No. 23-cv-00680 (N.D. Calif. February 15, 2023)

Jack Flora and others filed a class action lawsuit against Prisma Labs for invasion of privacy. The complaint alleges, among other things, that the defendant’s Lensa app generates sexualized images from images of fully-clothed people, and that the company failed to notify users about the biometric data it collects and how it will be stored and/or destroyed, in violation of Illinois’s data privacy laws.

Young v. NeoCortext (April 3, 2023)

Young v. NeoCortext, Inc., 2023-cv-02496 (C.D. Calif. April 3, 2023)

This is a publicity rights case. NeoCortext’s Reface app allows users to paste images of their own faces over those of celebrities in photographs and videos. Kyland Young, a former cast member of the Big Brother reality television show, has sued NeoCortext for allegedly violating his publicity rights. The complaint alleges that NeoCortext has “commercially exploit[ed] his and thousands of other actors, musicians, athletes, celebrities, and other well-known individuals’ names, voices, photographs, or likenesses to sell paid subscriptions to its smartphone application, Reface,” without their permission.

NeoCortext has asserted a First Amendment defense, among others.

Walters v. Open AI (June 5, 2023)

Walters v. OpenAI, LLC, No. 2023-cv-03122 (N.D. Ga. July 14, 2023) (Complaint originally filed in Gwinnett County, Georgia Superior Court on June 5, 2023; subsequently removed to federal court)

This is a defamation action against OpenAI, the company responsible for ChatGPT. The lawsuit was brought by Mark Walters. He alleges that ChatGPT provided false and defamatory misinformation about him to journalist Fred Riehl in connection with a federal civil rights lawsuit against Washington Attorney General Bob Ferguson and members of his staff. ChatGPT allegedly stated that the lawsuit was one for fraud and embezzlement on the part of Mr. Walters. The complaint alleges that Mr. Walters was “neither a plaintiff nor a defendant in the lawsuit,” and “every statement of fact” pertaining to him in the summary of the federal lawsuit that ChatGPT prepared is false. A New York court recently addressed the question of sanctions for attorneys who submit briefs containing citations to non-existent “precedents” that were entirely made up by ChatGPT. This is the first case to address tort liability for ChatGPT’s notorious creation of “hallucinatory facts.”

In July, 2023, Jeffery Battle filed a complaint against Microsoft in Maryland alleging that he, too, has been defamed as a result of AI-generated “hallucinatory facts.”

P.M. et al. v. OpenAI et al. (June 28, 2023)

P.M. et al. v. OpenAI LP et al., No. 2023-cv-03199 (N.D. Calif. June 28, 2023)

This lawsuit has been brought by underage individuals against OpenAI and Microsoft. The complaint alleges the defendants’ generative-AI products ChatGPT, Dall-E and Vall-E collect private and personally identifiable information from children without their knowledge or informed consent. The complaint sets out claims for alleged violations of the Electronic Communications Privacy Act; the Computer Fraud and Abuse Act; California’s Invasion of Privacy Act and unfair competition law; Illinois’s Biometric Information Privacy Act and Consumer Fraud and Deceptive Business Practices Act; New York General Business Law § 349 (deceptive trade practices); and negligence, invasion of privacy, conversion, unjust enrichment, and breach of duty to warn.

Tremblay v. OpenAI (June 28, 2023)

Tremblay v. OpenAI, Inc., No. 23-cv-03223 (N.D. Calif. June 28, 2023)

Another copyright infringement lawsuit against OpenAI relating to its ChatGPT tool. In this one, authors allege that ChatGPT is trained on the text of books they and other proposed class members authored, and facilitates output copyright infringement. The complaint sets forth claims of copyright infringement, DMCA violations, and unfair competition.

Silverman et al. v. OpenAI (July 7, 2023)

Silverman et al. v. OpenAI, No. 23-cv-03416 (N.D. Calif. July 7, 2023)

Sarah Silverman (comedian/actress/writer) and others allege that OpenAI, by using copyright-protected works without permission to train ChatGPT, committed direct and vicarious copyright infringement, violated 17 U.S.C. § 1202(b), and violated their rights under unfair competition, negligence, and unjust enrichment law.

Kadrey et al. v. Meta Platforms (July 7, 2023)

Kadrey et al. v. Meta Platforms, No. 2023-cv-03417 (N.D. Calif. July 7, 2023)

The same kinds of allegations as are made in Silverman v. OpenAI, but this time against Meta Platforms, Inc.

J.L. et al. v. Alphabet (July 11, 2023)

J.L. et al. v. Alphabet, Inc. et al. (N.D. Calif. July 11, 2023)

This is a lawsuit against Google and its owner Alphabet, Inc. for allegedly scraping and harvesting private and personal user information, copyright-protected works, and emails, without notice or consent. The complaint alleges claims for invasion of privacy, unfair competition, negligence, copyright infringement, and other causes of action.

On the Regulatory Front

The U.S. Copyright Office is examining the problems associated with registering copyrights in works that rely, in whole or in part, on artificial intelligence. The U.S. Federal Trade Commission (FTC) has suggested that generative-AI implicates “competition concerns.” Lawmakers in the United States and the European Union are considering legislation to regulate AI in various ways.

Does AI Infringe Copyright?

A previous blog post addressed the question whether AI-generated creations are protected by copyright. This could be called the “output question” in the artificial intelligence area of copyright law. Another question is whether using copyright-protected works as input for AI generative processes infringes the copyrights in those works. This could be called the “input question.” Both kinds of questions are now before the courts. Minnesota attorney Tom James describes a framework for analyzing the input question.

The Input Question in AI Copyright Law

by Thomas James, Minnesota attorney

In a previous blog post, I discussed the question whether AI-generated creations are protected by copyright. This could be called the “output question” in the artificial intelligence area of copyright law. Another question is whether using copyright-protected works as input for AI generative processes infringes the copyrights in those works. This could be called the “input question.” Both kinds of questions are now before the courts. In this blog post, I describe a framework for analyzing the input question.

The Cases

The Getty Images lawsuit

Getty Images is a stock photograph company. It licenses the right to use the images in its collection to those who wish to use them on their websites or for other purposes. Stability AI is the creator of Stable Diffusion, which is described as a “text-to-image diffusion model capable of generating photo-realistic images given any text input.” In January, 2023, Getty Images initiated legal proceedings in the United Kingdom against Stability AI, claiming that Stability AI violated copyrights by using its images and metadata to train AI software without a license.

The independent artists lawsuit

Another lawsuit raising the question whether AI-generated output infringes copyright has been filed in the United States. In this case, a group of visual artists are seeking class action status for claims against Stability AI, Midjourney Inc. and DeviantArt Inc. The artists claim that the companies use their images to train computers “to produce seemingly new images through a mathematical software process.” They describe AI-generated artwork as “collages” made in violation of copyright owners’ exclusive right to create derivative works.

The GitHub Copilot lawsuit

In November, 2022, a class action lawsuit was filed in a U.S. federal court against GitHub, Microsoft, and OpenAI. The lawsuit claims the GitHub Copilot and OpenAI Codex coding assistant services use existing code to generate new code. By training their AI systems on open source programs, the plaintiffs claim, the defendants have infringed the rights of developers who have posted code under open-source licenses that require attribution.

How AI Works

AI, of course, stands for artificial intelligence. Almost all AI techniques involve machine learning. Machine learning, in turn, involves using a computer algorithm to make a machine improve its performance over time, without having to pre-program it with specific instructions. Data is input to enable the machine to do this. For example, to teach a machine to create a work in the style of Vincent van Gogh, many instances of van Gogh’s works would be input. The AI program contains numerous nodes that focus on different aspects of an image. Working together, these nodes piece together common elements of van Gogh paintings from the images the machine has been given to analyze. After going through many images of van Gogh paintings, the machine “learns” the features of a typical van Gogh painting. The machine can then generate a new image containing these features.

In the same way, a machine can be programmed to analyze many instances of code and generate new code.
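
To see what this kind of “learning” means in the smallest possible terms, consider the following sketch in Python. It is purely my own illustration, not the software at issue in any of these cases. The program is never told the rule behind its training data – here, a made-up rule, y = 2x + 1 – yet it recovers that rule by repeatedly nudging two numeric parameters to reduce its error on the examples:

# Toy machine learning: recover y = 2x + 1 from examples alone.
examples = [(x, 2 * x + 1) for x in range(10)]  # the "training data"

w, b = 0.0, 0.0   # model parameters; the machine starts knowing nothing
lr = 0.01         # learning rate: how big each corrective nudge is

for epoch in range(1000):
    for x, y in examples:
        prediction = w * x + b      # the machine's current guess
        error = prediction - y      # how far off the guess was
        w -= lr * error * x         # adjust parameters to shrink the error
        b -= lr * error

print(f"learned w={w:.2f}, b={b:.2f}")  # converges toward w=2.00, b=1.00

Scaled up enormously, the same idea – adjust internal parameters until outputs match the training examples – is what happens when an image generator is trained on millions of pictures or a coding assistant is trained on millions of lines of code.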

The input question comes down to this: Does it infringe copyright to create or use a program that feeds a machine information about the characteristics of a creative work, or group of works, so that the machine can create a new work with the same or similar characteristics?

The Exclusive Rights of Copyright Owners

In the United States, the owner of a copyright in a work has the exclusive rights to:

  • reproduce (make copies of) it;
  • distribute copies of it;
  • publicly perform it;
  • publicly display it; and
  • make derivative works based on it.

(17 U.S.C. § 106). A copyright is infringed when a person exercises any of these exclusive rights without the copyright owner’s permission.

Copyright protection extends only to expression, however. Copyright does not protect ideas, facts, processes, methods, systems or principles.

Direct Infringement

Infringement can be either direct or indirect. Direct infringement occurs when somebody directly violates one of the exclusive rights of a copyright owner. Examples would be a musician who performs a copyright-protected song in public without permission, or a cartoonist who creates a comic based on the Batman and Robin characters and stories without permission.

The kind of tool an infringer uses is not of any great moment. A writer who uses word-processing software to write a story that is simply a copy of someone else’s copyright-protected story is no less guilty of infringement merely because the actual typewritten letters were generated using a computer program that directs a machine to reproduce and display typographical characters in the sequence a user selects.

Contributory and Vicarious Infringement

Infringement liability may also arise indirectly. If one person knowingly induces another person to infringe or contributes to the other person’s infringement in some other way, then each of them may be liable for copyright infringement. The person who actually committed the infringing act could be liable for direct infringement. The person who knowingly encouraged, solicited, induced or facilitated the other person’s infringing act(s) could be liable for contributory infringement.

Vicarious infringement occurs when the law holds one person responsible for the conduct of another because of the nature of the legal relationship between them. The employment relationship is the most common example. An employer generally is held responsible for an employee’s conduct,  provided the employee’s acts were performed within the course and scope of the employment. Copyright infringement is not an exception to that rule.

Programmer vs. User

Direct infringement liability

Under U.S. law, machines are treated as extensions of the people who set them in motion. A camera, for example, is an extension of the photographer. Any image a person causes a camera to generate by pushing a button on it is considered the creation of the person who pushed the button, not of the person(s) who manufactured the camera, much less of the camera itself. By the same token, a person who uses the controls on a machine to direct it to copy elements of other people’s works should be considered the creator of the new work so created. If using the program entails instructing the machine to create an unauthorized derivative work of copyright-protected images, then it would be the user, not the machine or the software writer, who would be at risk of liability for direct copyright infringement.

Contributory infringement liability

Knowingly providing a device or mechanism to people who use it to infringe copyrights creates a risk of liability for contributory copyright infringement. Under Sony Corp. v. Universal City Studios, however, merely distributing a mechanism that people can use to infringe copyrights is not enough for contributory infringement liability to attach, if the mechanism has substantial non-infringing uses. Arguably, AI has many such uses. For example, it might be used to generate new works from public domain works. Or it might be used to create parodies. (Parody is a classic example of fair use; creating one should not result in infringement liability.)

The situation is different if a company goes further and induces, solicits or encourages people to use its mechanism to infringe copyrights. Then it may be at risk of contributory liability. As the United States Supreme Court has said, “one who distributes a device with the object of promoting its use to infringe copyright, as shown by clear expression or other affirmative steps taken to foster infringement, is liable for the resulting acts of infringement by third parties.” Metro-Goldwyn-Mayer Studios Inc. v. Grokster, Ltd., 545 U.S. 913, 919 (2005). (Remember Napster?)

Fair Use

If AI-generated output is found to either directly or indirectly infringe copyright(s), the infringer nevertheless might not be held liable, if the infringement amounts to fair use of the copyrighted work(s) that were used as the input for the AI-generated work(s).

Ever since some rap artists began using snippets of copyright-protected music and sound recordings without permission, courts have embarked on a treacherous expedition to articulate a meaningful dividing line between unauthorized derivative works, on one hand, and unauthorized transformative works, on the other. Although the Copyright Act gives copyright owners the exclusive right to create works based on their copyrighted works (called derivative works), courts have held that an unauthorized derivative work may be fair use if it is “transformative.” This has caused a great deal of uncertainty in the law, particularly since the U.S. Copyright Act expressly defines a derivative work as one that transforms another work. (See 17 U.S.C. § 101: “A ‘derivative work’ is a work based upon one or more preexisting works, . . . or any other form in which a work may be recast, transformed, or adapted.” (emphasis added).)

When interpreting and applying the transformative use branch of Fair Use doctrine, courts have issued conflicting and contradictory decisions. As I wrote in another blog post, the U.S. Supreme Court has recently agreed to hear and decide Andy Warhol Foundation for the Visual Arts v. Goldsmith. It is anticipated that the Court will use this case to attempt to clear up all the confusion around the doctrine. It is also possible the Court might take even more drastic action concerning the whole “transformative use” branch of Fair Use.

Some speculate that the questions the Justices asked during oral arguments in Warhol signal a desire to retreat from the expansion of fair use that the “transformativeness” idea spawned. On the other hand, some of the Court’s recent decisions, such as Google v. Oracle, suggest the Court is not particularly worried about large-scale copyright infringing activity, insofar as Fair Use doctrine is concerned.

Conclusion

To date, it does not appear that there is any direct legal precedent in the United States for classifying the use of mass quantities of works as training tools for AI as “fair use.” It seems, however, that there soon will be precedent on that issue, one way or the other. In the meantime, users of AI generating systems should proceed with caution.

The Top Copyright Cases of 2022

Cokato Minnesota attorney Tom James (“The Cokato Copyright Attorney”) presents his annual list of the top copyright cases of the year.

My selections for the top copyright cases of the year.

“Dark Horse”

Marcus Gray had sued Katy Perry for copyright infringement, claiming that her “Dark Horse” song unlawfully copied portions of his song, “Joyful Noise.” The district court held that the disputed series of eight notes appearing in Gray’s song was not “particularly unique or rare,” and therefore was not protected against infringement. The Ninth Circuit Court of Appeals agreed, ruling that the series of eight notes was not sufficiently original and creative to receive copyright protection. Gray v. Hudson.

“Shape of You”

Across the pond, another music copyright infringement lawsuit was tossed. This one involved Ed Sheeran’s “Shape of You” and Sam Chokri’s “Oh Why.” In this case, the judge refused to infer from the similarities in the two songs that copyright infringement had occurred. The judge ruled that the portion of the song as to which copying had been claimed was “so short, simple, commonplace and obvious in the context of the rest of the song that it is not credible that Mr. Sheeran sought out inspiration from other songs to come up with it.” Sheeran v. Chokri.

Instagram images

Another case comes out of California: a lawsuit filed by photographers against Instagram, alleging secondary copyright infringement. The photographers claim that Instagram’s embedding tool facilitates copyright infringement by users of the website. The district court judge dismissed the lawsuit, saying he was bound by the so-called “server test” the Ninth Circuit Court of Appeals announced in Perfect 10 v. Amazon. The server test says, in effect, that a website does not unlawfully “display” a copyrighted image if the image is stored on the original site’s server and is merely embedded in a search result that appears on a user’s screen. The photographers have an appeal pending before the Ninth Circuit Court of Appeals, asking the Court to reconsider its decision in Perfect 10. Courts in other jurisdictions have rejected Perfect 10 v. Amazon. The Ninth Circuit now has the option either to overrule Perfect 10 and allow the photographers’ lawsuit to proceed, or to re-affirm it, perpetuating a conflict among the courts that could eventually lead to U.S. Supreme Court review. Hunley v. Instagram.
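To make the “server test” distinction concrete, here is a toy sketch in Python, using hypothetical URLs that are not from the case. The only difference between the two pages is where the image file actually lives:

    # Hosting: example.com stores its own copy of the photo and serves it.
    # Under the server test, example.com "displays" the image.
    hosted_page = '<img src="https://example.com/static/photo.jpg">'

    # Embedding: the tag points at the photographer's original server;
    # example.com never stores the file. Under the server test, example.com
    # does not "display" the image, even though both pages look identical
    # to the person viewing them.
    embedded_page = '<img src="https://photographer.com/photo.jpg">'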

Tattoos

Is reproducing a copyrighted image in a tattoo fair use? That is a question at issue in a case pending in California. Photographer Jeffrey Sedlik took a photograph of musician Miles Davis. Later, a tattoo artist allegedly traced a printout of it to create a stencil to transfer to human skin as a tattoo. Sedlik filed a copyright infringement lawsuit in the United States District Court for the Central District of California. Both parties moved for summary judgment. The judge analyzed the claims using the four “fair use” factors. Although the ultimate ruling was that fact issues remained to be decided by a jury, the court made some important rulings along the way. In particular, the court ruled that affixing an image to skin is not necessarily a protected “transformative use” of the image. According to the court, it is for a jury to decide whether the image at issue in a particular case has been changed significantly enough to be considered “transformative.” It will be interesting to see how this case ultimately plays out, especially if it is still pending when the United States Supreme Court announces its decision in the Warhol case (see below). Sedlik v. Von Drachenberg.

Digital libraries

The book publishers’ lawsuit against Internet Archive, about which I wrote in a previous blog post, is still at the summary judgment stage. Its potential future implications are far-reaching. It is a copyright infringement lawsuit that book publishers filed in the federal district court for the Southern District of New York. The gravamen of the complaint is that Internet Archive allegedly has scanned over a million books and has made them freely available to the public via an Internet website without securing a license or permission from the copyright rights-holders. The case will test the “controlled digital lending” theory of fair use that was propounded in a white paper published by David R. Hansen and Kyle K. Courtney. They argued that distributing digitized copies of books by libraries should be regarded as the functional equivalent of lending physical copies of books to library patrons. Parties and amici have filed briefs in support of motions for summary judgment. An order on the motions is expected soon. The case is Hachette Book Group et al. v. Internet Archive.

Copyright registration

In Fourth Estate Public Benefit Corp. v. Wall-Street.com, LLC, 139 S. Ct. 881, 889 (2019), the United States Supreme Court interpreted 17 U.S.C. § 411(a) to mean that a copyright owner cannot file an infringement claim in federal court without first securing either a registration certificate or an official notice of denial of registration from the Copyright Office. In an Illinois Law Review article, I argued that this imposes an unduly onerous burden on copyright owners and that Congress should amend the Copyright Act to abolish the requirement. Unfortunately, Congress has not done that. As I said in a previous blog post, Congressional inaction to correct a harsh law with potentially unjust consequences often leads to exercises of the judicial power of statutory interpretation to ameliorate those consequences. Unicolors v. H&M Hennes & Mauritz.

Unicolors, owner of the copyrights in various fabric designs, sued H&M Hennes & Mauritz (H&M), alleging copyright infringement. The jury rendered a verdict in favor of Unicolors, but H&M moved for judgment as a matter of law. H&M argued that Unicolors had failed to satisfy the requirement of obtaining a registration certificate prior to commencing suit. Although Unicolors had obtained a registration, H&M argued that the registration was not a valid one. Specifically, H&M argued that Unicolors had improperly applied to register multiple works with a single application. According to 37 CFR § 202.3(b)(4) (2020), a single application cannot be used to register multiple works unless all of the works in the application were included in the same unit of publication. The 31 fabric designs, H&M contended, had not all been first published at the same time in a single unit; some had been made available separately, exclusively to certain customers. Therefore, they could not properly be registered together as a unit of publication.

The district court denied the motion, holding that a registration may be valid even if it contains inaccurate information, provided the registrant did not know the information was inaccurate. The Ninth Circuit Court of Appeals reversed. The Court held that characterizing the group of works as a “unit of publication” in the registration application was a mistake of law, not a mistake of fact. The Court applied the traditional rule of thumb that ignorance of the law is not an excuse, in essence ruling that although a mistake of fact in a registration application might not invalidate the registration for purposes of the pre-litigation registration requirement, a mistake of law in an application will.

The United States Supreme Court granted certiorari and reversed, holding that the safe harbor in 17 U.S.C. § 411(b) does not distinguish between a mistake of law and a mistake of fact; lack of either kind of knowledge can excuse an inaccuracy in a registration. The ruling allows the infringement verdict to stand notwithstanding the improper registration of the works together as a unit of publication rather than individually.

It is hazardous to read too much into the ruling in this case. Copyright claimants certainly should not interpret it to mean that they no longer need to bother with registering a copyright before trying to enforce it in court, or that they do not need to concern themselves with doing it properly. The pre-litigation registration requirement still stands (in the United States), and the Court has not held that it condones willful blindness of legal requirements. Copyright claimants ignore them at their peril.

Andy Warhol, Prince Transformer

I wrote about the Warhol case in a previous blog post. Basically, it is a copyright infringement case. Lynn Goldsmith took a photograph of Prince in her studio; Andy Warhol later based a series of silkscreen prints and pencil illustrations on it without a license or permission. The Andy Warhol Foundation sought a declaratory judgment that Warhol’s use of the photograph was “fair use.” Goldsmith counterclaimed for copyright infringement. The district court ruled in favor of the Foundation and dismissed the photographer’s infringement claim. The Court of Appeals reversed, holding that the district court misapplied the four “fair use” factors and that the derivative works Warhol created do not qualify as fair use. The U.S. Supreme Court granted certiorari and heard oral arguments in October, 2022. A decision is expected next year.

Because this case gives the United States Supreme Court an opportunity to bring some clarity to the extremely murky “transformative use” area of copyright law, it is not only one of this year’s most important copyright cases, but it very likely will wind up being one of the most important copyright cases of all time. Andy Warhol Foundation for the Visual Arts v. Goldsmith.

New TM Office Action Deadlines

The deadline for responding to US Trademark Office Actions has been shortened to three months, in many cases. Attorney Thomas B. James shares the details.

by Thomas B. James (“The Cokato Copyright Attorney”)

Effective December 3, 2022, the deadline for responding to a U.S. Trademark Office Action is shortened from six months to three months. Here is what that means in practice.

Historically, applicants for the registration of trademarks in the United States have had six months to respond to an Office Action. Beginning December 3, 2022, the time limit has been shortened to three months.

Applications subject to the new rule

The new, shorter deadline applies to most kinds of trademark applications, including:

  • Section 1(a) applications (based on use in commerce)
  • Section 1(b) applications (based on intent to use)
  • Section 44(e) applications (based on a foreign registration)
  • Section 44(d) applications (based on a priority claim in a foreign application)

Applications not subject to the new rule

The new deadline does not apply to:

  • Section 66(a) applications (Madrid Protocol)
  • Office actions issued before December 3, 2022
  • Office actions issued after registration (But note that the new deadline will apply to post-registration Office Actions beginning October 7, 2023)
  • Office actions issued by a work unit other than the law offices, such as the Intent-to-Use Unit or the Examination and Support Workload and Production Unit
  • Office actions that do not require a response (such as an examiner’s amendment)
  • Office actions that do not specify a 3-month response period (e.g., a denial of a request for reconsideration, or a 30-day letter).

Extensions

For a $125 fee, you can request one three-month extension of the time to respond to an Office Action. You will need to file the request for an extension within three months from the “issue date” of the Office Action and before filing your response. If your extension request is granted, then you will have six months from the original “issue date” to file your response.
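For illustration only, here is a minimal sketch of that deadline arithmetic in Python, assuming calendar-month counting from the Office Action’s “issue date” (the Office Action itself states the controlling dates):

    from datetime import date
    from dateutil.relativedelta import relativedelta  # pip install python-dateutil

    def response_deadlines(issue_date: date) -> dict:
        """Initial and extended response deadlines for an Office Action."""
        return {
            "respond_by": issue_date + relativedelta(months=3),          # 3-month window
            "extended_respond_by": issue_date + relativedelta(months=6)  # with one $125 extension
        }

    # Example: an Office Action issued December 3, 2022
    print(response_deadlines(date(2022, 12, 3)))
    # {'respond_by': datetime.date(2023, 3, 3), 'extended_respond_by': datetime.date(2023, 6, 3)}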

Use the Trademark Office’s Request for Extension of Time to File a Response form to request an extension. The Trademark Office has issued a warning that it will not process requests that do not use this form.

The form cannot be used to request an extension of time to respond to an Office Action issued for a Madrid Protocol section 66(a) application, an Office Action issued before December 3, 2022, or any other Office Action to which the new deadline does not apply.

Consequences

Failing to meet the new three-month deadline will have the same consequences as failing to meet the old six-month deadline did. Your application will be deemed abandoned if you do not respond to the Office Action or request an extension on or before the three-month deadline. Similarly, your application will be deemed abandoned if you are granted an extension but fail to file a response on or before the six-month deadline.

The Trademark Office does not refund registration filing fees for abandoned applications.

As before, in some limited circumstances, you might be able to revive an abandoned application by filing a petition and paying a fee. Otherwise, you will need to start the application process all over again.

More information

Here are links to the relevant Federal Register Notice and Examination Guide.

Contact attorney Thomas James

Need help with trademark registration? Contact Thomas B. James, Minnesota attorney.

The Internet Archive Lawsuit

Thomas James (“The Cokato Copyright Attorney”) explains how Hachette Book Group et al. v. Internet Archive, filed in the federal district court for the Southern District of New York on June 1, 2020, tests the limits of authors’ and publishers’ digital rights in their copyright-protected works.

The gravamen of the complaint is that Internet Archive (“IA”) allegedly scanned books and made them freely available to the public via an Internet website without the permission of copyright rights-holders. Book publishers filed this lawsuit alleging that IA’s activities infringe their exclusive rights of reproduction and distribution under the United States Copyright Act.

As of this writing, the case is at the summary judgment stage, with briefing currently scheduled to end in October, 2022. Whatever the outcome, an appeal seems very likely. Here is an overview to bring you up to speed on what the case is about.

The undisputed facts

Per the parties’ stipulation, the following facts are not disputed:

The case involves numerous published books that the publishers who filed this lawsuit (Hachette Book Group, HarperCollins, Penguin Random House, and John Wiley & Sons) have the exclusive rights, under the United States Copyright Act, to reproduce and distribute.

Internet Archive and Open Library of Richmond are nonprofit organizations the IRS has classified as 501(c)(3) public charities. These organizations purchased print copies of certain books identified in the lawsuit.

The core allegations

The plaintiffs allege that IA obtains print books that are protected by copyright, scans them into a digital format, uploads them to its servers, and then distributes these digital copies to members of the public via a website – all without a license and without any payment to authors and publishers. Plaintiffs allege that IA has already scanned 1.3 million books and plans to scan millions more. The complaint describes this as “willful digital piracy on an industrial scale.”

Defenses?

First sale doctrine

One justification that is sometimes advanced for making digital copies of a work available for free online without paying the author or publisher is the so-called “first sale” doctrine. This is an exception to copyright infringement liability that essentially allows the owner of a lawfully acquired copy of a work to sell, transfer or lend it to other people without incurring copyright infringement liability. For example, a person who buys a print edition of a book may lend it to a friend or sell it at a garage sale without having to get the copyright owner’s permission. More to the point, a library may purchase a copy of a print version of a book and proceed to lend it to library patrons without fear of incurring infringement liability for doing so.

The doctrine does not apply to all kinds of works, but it does generally apply to print books.

The first sale doctrine only provides an exception to infringement liability for the unauthorized distribution of a work, however. It does not provide an exception to liability for unauthorized reproduction of a work. (See 17 U.S.C. § 109.) Scanning books to make digital copies is an act of reproduction, not distribution. Accordingly, the first sale doctrine does not appear to be a good fit as a defense in this case.

“Controlled digital lending”

Public libraries lend physical copies of the books in their collections to library patrons for no charge. Based on this practice, a white paper published by David R. Hansen and Kyle K. Courtney makes the case for treating the distribution of digitized copies of books by libraries as fair use, where the library maintains a one-to-one ratio between the number of physical copies of a book it has and the number of digital “check-outs” of the digital version it allows at any given time.

The theory, known as controlled digital lending (“CDL”), relies on an assumption that the distribution of a work electronically is the functional equivalent of distributing a physical copy of it, so long as the same limitations on the ability to “check out” the work from the library are imposed.
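For illustration, here is a toy sketch of that constraint in Python (the class and method names are mine, not from the white paper): a digital loan is permitted only while the number of simultaneous check-outs is below the number of physical copies the library owns.

    class CdlTitle:
        """One title in a controlled-digital-lending collection."""

        def __init__(self, physical_copies_owned: int):
            self.owned = physical_copies_owned
            self.checked_out = 0

        def check_out(self) -> bool:
            # Permit a digital loan only if an owned copy is uncommitted.
            if self.checked_out < self.owned:
                self.checked_out += 1
                return True
            return False  # all copies loaned out; the patron must wait

        def check_in(self) -> None:
            if self.checked_out > 0:
                self.checked_out -= 1

    # One physical copy -> at most one simultaneous digital loan
    title = CdlTitle(physical_copies_owned=1)
    assert title.check_out() is True
    assert title.check_out() is False  # a second simultaneous loan is refused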

Publishers dispute this assumption. They take the position that there are important differences between e-books and print books. They maintain that these differences justify the distribution of e-books under a licensing program separate and distinct from their print book purchasing programs. They also question whether e-books are, in fact, distributed subject to the same limitations that apply to a print version of the book.

Fair use

Whether a particular kind of use of a copyright-protected work is “fair use” or not requires consideration of four statutory factors (17 U.S.C. § 107): (1) the purpose and character of the use; (2) the nature of the copyrighted work; (3) the amount and substantiality of the portion copied; and (4) the effect of the use on the market for the work.

Supporters of free access to copyrighted works for all tend to focus on the “purpose and character” factor. They can be relied upon to argue that free access to literary works is a great benefit to the public. Authors and publishers tend to focus on the other factors. In this case, it seems possible that the factors relating to the amount copied and the effect of the use on the market for the work could weigh against a finding of fair use.

The federal district court in this case is being called upon to evaluate those factors and decide whether they weigh in favor of treating CDL – or at least, CDL as IA has applied and implemented it – as fair use or not.

Subscribe to The Cokato Copyright Attorney

The Cokato Copyright Attorney (Minnesota lawyer Thomas B. James) will be following this case closely. Subscribe for updates as the case makes its way through the courts.

Contact attorney Thomas James

Need help registering a copyright or trademark, or with a copyright or trademark problem? Contact Cokato, Minnesota attorney Tom James.
