Nontransformative Nuge

A reversal by the Fourth Circuit demonstrates the impact the Supreme Court’s decision in Andy Warhol Foundation for the Visual Arts v. Goldsmith is already having on the application of copyright fair use doctrine in federal courts.

Philpot v. Independent Journal Review, No. 21-2021 (4th Cir., Feb. 6, 2024)

Philpot, a concert photographer, registered his photograph of Ted Nugent as part of a group of unpublished works. Prior to registration, he entered into a license agreement giving AXS TV the right to inspect his photographs for the purpose of selecting ones to curate. The agreement provided that the license would become effective upon delivery of photographs for inspection. After registration, Philpot delivered a set of photographs, including the Nugent photograph, to AXS TV. He also published the Nugent photograph to Wikimedia Commons under a Creative Commons (“CC”) license, which allows free use on the condition that attribution is given. Independent Journal Review (“IJR”) published an article called “15 Signs Your Daddy Was a Conservative.” Sign #5 was “He hearts the Nuge.” IJR used Philpot’s photograph of Ted Nugent as an illustration for the article, without crediting Philpot.

Philpot sued IJR for copyright infringement. IJR asserted two defenses: (1) invalid copyright registration; and (2) fair use. The trial court did not decide whether the registration was valid, but it granted summary judgment for IJR on the ground that the news service’s publication of the photograph was fair use. The Fourth Circuit Court of Appeals reversed, ruling in Philpot’s favor on both issues. The Court held that the copyright registration was valid and that publication of the photograph without permission was not fair use.

The copyright registration

Published and unpublished works cannot be registered together. Including a published work in an application for registration of a group of unpublished works is an inaccuracy that might invalidate the registration, if the applicant was aware of the inaccuracy at the time of applying. Cf. Unicolors v. H&M Hennes & Mauritz, 595 U.S. 178 (2022). IJR argued that Philpot’s pre-registration agreement to send photographs to AXS TV to inspect for possible curation constituted “publication” of them, so that characterizing them as “unpublished” in the registration application was an inaccuracy known to Philpot.

17 U.S.C. § 101 defines publication as “the distribution of copies . . . to the public” or “offering to distribute copies . . . to a group of persons for purposes of further distribution . . . or public display.” The Court of Appeals held that merely entering into an agreement to furnish copies to a distributor for possible curation does not come within that definition. Sending copies to a limited class of people without concomitantly granting an unrestricted right to further distribute them to the public does not amount to “publication.”

Philpot’s arrangement with AXS TV is analogous to an author submitting a manuscript to a publisher for review for possible future distribution to the public. The U.S. Copyright Office has addressed this. “Sending copies of a manuscript to prospective publishers in an effort to secure a book contract does not [constitute publication].” U.S. Copyright Office, Compendium of U.S. Copyright Office Practices § 1905.1 (3d ed. 2021). Philpot had provided copies of his work for the limited purpose of examination, without a present grant of a right of further distribution. Therefore, the photographs were, in fact, unpublished at the time of the application for registration. Since no inaccuracy existed, the registration was valid.

Fair use

The Court applied the four-factor test for fair use set out in 17 U.S.C. § 107.

(1) Purpose and character of the use. Citing Andy Warhol Found. for the Visual Arts v. Goldsmith, 598 U.S. 508, 527–33 (2023), the Court held that when, as here, a use is neither transformative nor noncommercial, this factor weighs against a fair use determination. IJR used the photograph for the same purpose as Philpot intended to use it (as a depiction of Mr. Nugent), and it was a commercial purpose.

(2) Nature of the work. Photographs taken by humans are acts of creative expression that receive what courts have described as “thick” copyright protection. Therefore, this factor weighed against a fair use determination.

(3) Amount and substantiality of the portion used. Since all of the expressive features of the work were used, this factor also weighed against a fair use determination.

(4) Effect on the market for the work. Finally, the Court determined that allowing free use of a copyrighted work for commercial purposes without the copyright owner’s permission could potentially have a negative impact on the author’s market for the work. Therefore, this factor, too, weighed against a fair use determination.

Since all four factors weighed against a fair use determination, the Court reversed the trial court’s grant of summary judgment to IJR and remanded the case for further proceedings.
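Purely as an illustration of the mechanics, the Court’s weighing of the four statutory factors can be sketched as a checklist. This is my own simplification: real fair use analysis is a holistic legal judgment, not a mechanical tally, and the boolean inputs below are only a toy model of how the factors lined up in this case.

```python
from dataclasses import dataclass

@dataclass
class Use:
    transformative: bool       # factor 1: purpose and character
    commercial: bool           # factor 1: purpose and character
    creative_work: bool        # factor 2: nature of the work
    portion_substantial: bool  # factor 3: amount and substantiality
    harms_market: bool         # factor 4: effect on the market

def factors_against_fair_use(use: Use) -> list[str]:
    """Return the statutory factors that weigh against fair use in this toy model."""
    against = []
    if not use.transformative and use.commercial:
        against.append("purpose and character")
    if use.creative_work:
        against.append("nature of the work")
    if use.portion_substantial:
        against.append("amount and substantiality")
    if use.harms_market:
        against.append("market effect")
    return against

# The facts as the Fourth Circuit characterized them in Philpot:
ijr_use = Use(transformative=False, commercial=True, creative_work=True,
              portion_substantial=True, harms_market=True)
print(factors_against_fair_use(ijr_use))  # all four factors weigh against fair use
```

The point the sketch captures is structural: after Warhol, a court works through every factor rather than stopping at the first.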

Conclusion

This decision demonstrates the impact the Warhol decision is having on copyright fair use analysis in the courts. Previously, courts had been interpreting transformativeness very broadly. In many cases, they were ending fair use inquiry as soon as some sort of transformative use could be articulated. As the Court of Appeals decision in this case illustrates, trial courts now need to alter their approach in two ways: (1) They need to return to considering all four fair use factors rather than ending the inquiry upon a defendant’s articulation of some “transformative use;” and (2) They need to apply a much narrower definition of transformativeness than they have been. If both the original work and an unauthorized reproduction of it are used for the purpose of depicting a particular person or scene (as distinguished from parodying or commenting on a work, for example), for commercial gain, then it would no longer appear to be prudent to count on the first of the four fair use factors supporting a fair use determination.


Photo: Photograph published in a July, 1848 edition of L’Illustration. Believed to be the first instance of photojournalism, it is now in the public domain.

AI Legislative Update

Congressional legislation to regulate artificial intelligence (“AI”) and AI companies is in the early formative stages. Just about the only thing that is certain at this point is that federal regulation in the United States is coming.


In August, 2023, Senators Richard Blumenthal (D-CT) and Josh Hawley (R-MO) introduced a Bipartisan Framework for U.S. AI Act. The Framework sets out five bullet points identifying Congressional legislative objectives:

  • Establish a federal regulatory regime implemented through licensing AI companies, to include requirements that AI companies provide information about their AI models and maintain “risk management, pre-deployment testing, data governance, and adverse incident reporting programs.”
  • Ensure accountability for harms through both administrative enforcement and private rights of action, where “harms” include privacy and civil rights violations. The Framework proposes making Section 230 of the Communications Decency Act inapplicable to these kinds of actions. (Section 230 is the provision that generally grants immunity to Facebook, Google and other online service providers for user-provided content.) The Framework identifies the harms about which it is most concerned as “explicit deepfake imagery of real people, production of child sexual abuse material from generative A.I. and election interference.” Noticeably absent is any mention of harms caused by copyright infringement.
  • Restrict the sharing of AI technology with Russia, China or other “adversary nations.”
  • Promote transparency: The Framework would require AI companies to disclose information about the limitations, accuracy and safety of their AI models to users; would give consumers a right to notice when they are interacting with an AI system; would require providers to watermark or otherwise disclose AI-generated deepfakes; and would establish a public database of AI-related “adverse incidents” and harm-causing failures.
  • Protect consumers and kids. “Consumer[s] should have control over how their personal data is used in A.I. systems and strict limits should be imposed on generative A.I. involving kids.”
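The watermarking item in the transparency bullet above is easy to mandate and hard to engineer. As a minimal sketch of the concept only (it assumes nothing about any actual statutory scheme), the toy below tags generated text with an invisible marker built from zero-width Unicode characters. Real watermarking proposals use statistical schemes designed to survive editing and paraphrase, which this naive version would not.

```python
# Toy "watermark": append a hidden tag encoded as zero-width characters.
ZW0, ZW1 = "\u200b", "\u200c"  # zero-width space = bit 0, zero-width non-joiner = bit 1

def mark(text: str, tag: str = "AI") -> str:
    """Append the tag to the text as invisible zero-width bits."""
    bits = "".join(f"{ord(c):08b}" for c in tag)
    return text + "".join(ZW0 if b == "0" else ZW1 for b in bits)

def read_mark(text: str) -> str:
    """Recover the hidden tag by collecting the zero-width characters."""
    bits = "".join("0" if c == ZW0 else "1" for c in text if c in (ZW0, ZW1))
    return "".join(chr(int(bits[i:i + 8], 2)) for i in range(0, len(bits), 8))

stamped = mark("This paragraph was machine-generated.")
# The stamped text displays identically to the original, but the tag is recoverable:
print(read_mark(stamped))  # → AI
```

A disclosure this fragile disappears the moment the text is retyped, which is precisely why the Framework’s watermarking requirement is harder than it sounds.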

The Framework does not address copyright infringement, whether of the input or the output variety.

The Senate Judiciary Committee Subcommittee on Privacy, Technology, and the Law held a hearing on September 12, 2023. Witnesses called to testify generally approved of the Framework as a starting point.

The Senate Commerce, Science, and Transportation Subcommittee on Consumer Protection, Product Safety and Data Security also held a hearing on September 12, titled “The Need for Transparency in Artificial Intelligence.” One of the witnesses, Dr. Ramayya Krishnan of Carnegie Mellon University, did raise a concern about the use of copyrighted material by AI systems and the economic harm it causes for creators.

On September 13, 2023, Sen. Chuck Schumer (D-NY) held an “AI Roundtable.” Invited attendees present at the closed-door session included Bill Gates (Microsoft), Elon Musk (xAI, Neuralink, etc.), Sundar Pichai (Google), Charlie Rivkin (MPA), and Mark Zuckerberg (Meta). Gates, whose company Microsoft, like those headed by some of the other invitees, has been investing heavily in generative-AI development, touted the claim that AI could target world hunger.

Meanwhile, Dana Rao, Adobe’s Chief Trust Officer, penned a proposal that Congress establish a federal anti-impersonation right to address the economic harms generative-AI causes when it impersonates the style or likeness of an author or artist. The proposed law would be called the Federal Anti-Impersonation Right Act, or “FAIR Act,” for short. The proposal would provide for the recovery of statutory damages by artists who are unable to prove actual economic damages.

Generative AI: Perfect Tool for the Age of Deception

For many reasons, the new millennium might well be described as the Age of Deception. Cokato Copyright Attorney Tom James explains why generative-AI is a perfect fit for it.

Image: Illustrating generative AI, by Gerd Altmann on Pixabay.

What is generative AI?

“AI,” of course, stands for artificial intelligence. Generative AI is a variety of it that can produce content such as text and images, seemingly of its own creation. I say “seemingly” because in reality these kinds of AI tools are not independently creating these images and lines of text. Rather, they are “trained” to emulate existing works created by humans. Essentially, they are derivative-work generation machines, producing output based on potentially millions of human-created works.
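A toy example makes the point concrete. The sketch below “trains” a word-level Markov chain on a scrap of text and then “generates” new text from it. Production models are incomparably larger and more sophisticated, but the relationship between training material and output is the same in kind: everything the toy model can emit is recombined from its training text.

```python
import random
from collections import defaultdict

def train(text: str) -> dict[str, list[str]]:
    """Map each word to the words observed to follow it in the training text."""
    words = text.split()
    model = defaultdict(list)
    for a, b in zip(words, words[1:]):
        model[a].append(b)
    return model

def generate(model: dict[str, list[str]], start: str, length: int, seed: int = 0) -> str:
    """Emit a chain of words, each chosen from the followers seen in training."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = model.get(out[-1])
        if not followers:
            break  # dead end: the training text offers no continuation
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ran"
model = train(corpus)
print(generate(model, "the", 6))
# Every adjacent word pair in the output already appears somewhere in the corpus.
```

Scale the corpus up to millions of copyrighted works and swap the word-pair table for a neural network, and you have the “seemingly creative” behavior described above.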

AI has been around for decades. It wasn’t until 2014, however, that the technology began to be refined to the point that it could generate text, images, video and audio so similar to real people and their creations that it is difficult, if not impossible, for the average person to tell the difference.

Rapid advances in the technology in the past few years have yielded generative-AI tools that can write entire stories and articles, seemingly paint artistic images, and even generate what appear to be photographic images of people.

AI “hallucinations” (aka lies)

In the AI field, a “hallucination” occurs when an AI tool (such as ChatGPT) generates a confident response that is not justified by the data on which it has been trained.

For example, I queried ChatGPT about whether a company owned equally by a husband and wife could qualify for the preferences the federal government sets aside for women-owned businesses. The chatbot responded with something along the lines of “Certainly” or “Absolutely,” explaining that the U.S. government is required to provide equal opportunities to all people without discriminating on the basis of sex. When I cited the provision of federal law that contradicts what the chatbot had just asserted, it replied with an apology and something to the effect of “My bad.”

I also asked ChatGPT if any U.S. law imposes unequal obligations on male citizens. The chatbot cheerily reported back to me that no, no such laws exist. I then cited the provision of the United States Code that imposes an obligation to register for Selective Service only upon male citizens. The bot responded that while that is true, it is unimportant and irrelevant because there has not been a draft in a long time and there is not likely to be one anytime soon. I explained to the bot that this response was irrelevant. Young men can be, and are, denied the right to government employment and other civic rights and benefits if they fail to register, regardless of whether a draft is in place or not, and regardless of whether they are prosecuted criminally or not. At this point, ChatGPT announced that it would not be able to continue this conversation with me. In addition, it made up some excuse. I don’t remember what it was, but it was something like too many users were currently logged on.

These are all examples of AI hallucinations. If a human being were to say them, we would call them “lies.”

Generating lie after lie

AI tools regularly concoct lies. For example, when asked to generate a financial statement for a company, a popular AI tool falsely stated that the company’s revenue was some number it apparently had simply made up. According to the Slate article “The Alarming Deceptions at the Heart of an Astounding New Chatbot,” users of large language models like ChatGPT have been complaining that these tools randomly insert falsehoods into the text they generate. Experts now consider frequent “hallucination” (aka lying) to be a major problem in chatbots.

ChatGPT has also generated fake case precedents, replete with plausible-sounding citations. This phenomenon made the news when attorney Steven Schwartz submitted six fake ChatGPT-generated case precedents in a brief to the federal district court for the Southern District of New York in Mata v. Avianca. Schwartz reported that ChatGPT continued to insist the fake cases were authentic even after their nonexistence was discovered. In the wake of such incidents, a federal judge in Texas banned the submission of AI-generated filings that have not been reviewed by a human, saying that generative-AI tools

are prone to hallucinations and bias…. [T]hey make stuff up – even quotes and citations. Another issue is reliability or bias. While attorneys swear an oath to set aside their personal prejudices,… generative artificial intelligence is the product of programming devised by humans who did not have to swear such an oath. As such, these systems hold no allegiance to…the truth.

Judge Brantley Starr, Mandatory Certification Regarding Generative Artificial Intelligence.
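One practical safeguard such certification requirements contemplate can be sketched in a few lines: before filing, mechanically extract every reporter citation from a draft so that a human can verify each one against a real database. The regex and the handful of reporter abbreviations below are illustrative, not exhaustive, and extraction is of course only the first step; the verification itself must be done by a person.

```python
import re

# volume-reporter-page pattern, e.g. "595 U.S. 178" or "143 S. Ct. 1258"
# (only a few common reporters; a real screen would cover many more)
CITATION = re.compile(
    r"\b(\d{1,4})\s+(U\.S\.|S\. Ct\.|F\.\dd|F\. Supp\. \d?d?)\s+(\d{1,5})"
)

def extract_citations(draft: str) -> list[str]:
    """Return each reporter citation found in the draft, for manual verification."""
    return [" ".join(m.groups()) for m in CITATION.finditer(draft)]

draft = ("See Unicolors v. H&M, 595 U.S. 178 (2022); "
         "Andy Warhol Found. v. Goldsmith, 143 S. Ct. 1258 (2023).")
print(extract_citations(draft))  # → ['595 U.S. 178', '143 S. Ct. 1258']
```

Nothing in such a script can tell a real case from a hallucinated one with a well-formed citation, which is exactly why the human review the order requires cannot be automated away.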

Facilitating defamation

Section 230 of the Communications Decency Act generally shields Facebook, Google and other online services from liability for providing a platform for users to publish false and defamatory information about other people. That has been a real boon for people who like to destroy other people’s reputations by means of spreading lies and misinformation about them online. It can be difficult and expensive to sue an individual for defamation, particularly when the individual has taken steps to conceal and/or lie about his or her identity. Generative AI tools make the job of defaming people even simpler and easier.

More concerning than the malicious defamatory liars, however, are the many people who earnestly rely on AI as a research tool. In July, 2023, Mark Walters filed a lawsuit against OpenAI, claiming its ChatGPT tool provided false and defamatory misinformation about him to journalist Fred Riehl. I wrote about this lawsuit in a previous blog post. Shortly after this lawsuit was filed, a defamation lawsuit was filed against Microsoft, alleging that its AI tool, too, had generated defamatory lies about an individual. Generative-AI tools can generate false and defamatory statements about individuals even if no one has any intention of defaming anyone or ruining another person’s reputation.

Facilitating false light invasion of privacy

Generative AI is also highly effective in portraying people in a false light. In one recently filed lawsuit, Jack Flora and others allege, among other things, that Prisma Labs’ Lensa app generates sexualized images from images of fully-clothed people, and that the company failed to notify users about the biometric data it collects and how it will be stored and/or destroyed. Flora et al. v. Prisma Labs, Inc., No. 23-cv-00680 (N.D. Calif. February 15, 2023).

Pot, meet kettle; kettle, pot

“False news is harmful to our community, it makes the world less informed, and it erodes trust. . . . At Meta, we’re working to fight the spread of false news.” Meta (née Facebook) published that statement back in 2017. Since then, it has engaged in what is arguably the most ambitious campaign in history to monitor and regulate the content of conversations among humans. Yet it has also joined fellow mega-organizations Google and Microsoft in investing multiple billions of dollars in what is the greatest boon to fake news in recorded history: generative AI.

Toward a braver new world

It would be difficult to imagine a more efficient method of facilitating widespread lying and deception (not to mention false and hateful rhetoric) – and therefore propaganda – than generative-AI. Yet, these mega-organizations continue to sink more and more money into further development and deployment of these lie-generators.

I dread what the future holds in store for our children and theirs.

Generative-AI: The Top 12 Lawsuits

Artificial intelligence (“AI”) is generating more than content; it is generating lawsuits. Here is a brief chronology of what I believe are the most significant lawsuits that have been filed so far.


Most of these allege copyright infringement, but some make additional or other kinds of claims, such as trademark, privacy or publicity right violations, defamation, unfair competition, and breach of contract, among others. So far, the suits primarily target the developers and purveyors of generative AI chatbots and similar technology. They focus more on what I call “input” than on “output” copyright infringement. That is to say, they allege that copyright infringement is involved in the way particular AI tools are trained.

Thomson Reuters Enterprise Centre GmbH et al. v. ROSS Intelligence (May, 2020)

Thomson Reuters Enterprise Centre GmbH et al. v. ROSS Intelligence Inc., No. 20-cv-613 (D. Del. 2020)

Thomson Reuters alleges that ROSS Intelligence copied its Westlaw database without permission and used it to train a competing AI-driven legal research platform. In defense, ROSS has asserted that it only copied ideas and facts from the Westlaw database of legal research materials. (Facts and ideas are not protected by copyright.) ROSS also argues that its use of content in the Westlaw database is fair use.

One difference between this case and subsequent generative-AI copyright infringement cases is that the defendant in this case is alleged to have induced a third party with a Westlaw license to obtain allegedly proprietary content for the defendant after the defendant had been denied a license of its own. Other cases involve generative AI technologies that operate by scraping publicly available content.

The parties filed cross-motions for summary judgment. While those motions were pending, the U.S. Supreme Court issued its decision in Andy Warhol Found. for the Visual Arts, Inc. v. Goldsmith, 598 U.S. 508, 143 S. Ct. 1258 (2023). The parties have now filed supplemental briefs asserting competing arguments about whether and how the Court’s treatment of transformative use in that case should be interpreted and applied in this case. A decision on the motions is expected soon.

Doe 1 et al. v. GitHub et al. (November, 2022)

Doe 1 et al. v. GitHub, Inc. et al., No. 22-cv-06823 (N.D. Calif. November 3, 2022)

This is a class action lawsuit against GitHub, Microsoft, and OpenAI that was filed in November, 2022. It involves GitHub’s Copilot, an AI-powered tool that suggests lines of programming code based on what a programmer has written. The complaint alleges that Copilot copies code from publicly available software repositories without complying with the terms of applicable open-source licenses. The complaint also alleges removal of copyright management information in violation of 17 U.S.C. § 1202, unfair competition, and other tort claims.

Andersen et al. v. Stability AI et al. (January 13, 2023)

Andersen et al. v. Stability AI et al., No. 23-cv-00201 (N.D. Calif. Jan. 13, 2023)

Artists Sarah Andersen, Kelly McKernan, and Karla Ortiz filed this class action lawsuit against generative-AI companies Stability AI, Midjourney, and DeviantArt on January 13, 2023. The lawsuit alleges that the defendants infringed their copyrights by using their artwork without permission to train AI-powered image generators to create allegedly infringing derivative works.  The lawsuit also alleges violations of 17 U.S.C. § 1202 and publicity rights, breach of contract, and unfair competition.

Getty Images v. Stability AI (February 3, 2023)

Getty Images v. Stability AI, No. 23-cv-00135-UNA (D. Del. February 3, 2023)

Getty Images has filed two lawsuits against Stability AI, one in the United Kingdom and one in the United States, each alleging both input and output copyright infringement. Getty Images owns the rights to millions of images. It is in the business of licensing rights to use copies of the images to others. The lawsuit also accuses Stability AI of falsifying, removing or altering copyright management information, trademark infringement, trademark dilution, unfair competition, and deceptive trade practices.

Stability AI has moved to dismiss the complaint filed in the U.S. for lack of jurisdiction.

Flora et al. v. Prisma Labs (February 15, 2023)

Flora et al. v. Prisma Labs, Inc., No. 23-cv-00680 (N.D. Calif. February 15, 2023)

Jack Flora and others filed a class action lawsuit against Prisma Labs for invasion of privacy. The complaint alleges, among other things, that the defendant’s Lensa app generates sexualized images from images of fully-clothed people, and that the company failed to notify users about the biometric data it collects and how it will be stored and/or destroyed, in violation of Illinois’s data privacy laws.

Young v. NeoCortext (April 3, 2023)

Young v. NeoCortext, Inc., 2023-cv-02496 (C.D. Calif. April 3, 2023)

This is a publicity rights case. NeoCortext’s Reface app allows users to paste images of their own faces over those of celebrities in photographs and videos. Kyland Young, a former cast member of the Big Brother reality television show, has sued NeoCortext for allegedly violating his publicity rights. The complaint alleges that NeoCortext has “commercially exploit[ed] his and thousands of other actors, musicians, athletes, celebrities, and other well-known individuals’ names, voices, photographs, or likenesses to sell paid subscriptions to its smartphone application, Reface, without their permission.”

NeoCortext has asserted a First Amendment defense, among others.

Walters v. Open AI (June 5, 2023)

Walters v. OpenAI, LLC, No. 2023-cv-03122 (N.D. Ga. July 14, 2023) (Complaint originally filed in Gwinnett County, Georgia Superior Court on June 5, 2023; subsequently removed to federal court)

This is a defamation action against OpenAI, the company responsible for ChatGPT. The lawsuit was brought by Mark Walters. He alleges that ChatGPT provided false and defamatory misinformation about him to journalist Fred Riehl in connection with a federal civil rights lawsuit against Washington Attorney General Bob Ferguson and members of his staff. ChatGPT allegedly stated that the lawsuit was one for fraud and embezzlement on the part of Mr. Walters. The complaint alleges that Mr. Walters was “neither a plaintiff nor a defendant in the lawsuit,” and “every statement of fact” pertaining to him in the summary of the federal lawsuit that ChatGPT prepared is false. A New York court recently addressed the question of sanctions for attorneys who submit briefs containing citations to non-existent “precedents” that were entirely made up by ChatGPT. This is the first case to address tort liability for ChatGPT’s notorious creation of “hallucinatory facts.”

In July, 2023, Jeffery Battle filed a complaint against Microsoft in Maryland alleging that he, too, has been defamed as a result of AI-generated “hallucinatory facts.”

P.M. et al. v. OpenAI et al. (June 28, 2023)

P.M. et al. v. OpenAI LP et al., No. 2023-cv-03199 (N.D. Calif. June 28, 2023)

This lawsuit has been brought by underage individuals against OpenAI and Microsoft. The complaint alleges the defendants’ generative-AI products ChatGPT, Dall-E and Vall-E collect private and personally identifiable information from children without their knowledge or informed consent. The complaint sets out claims for alleged violations of the Electronic Communications Privacy Act; the Computer Fraud and Abuse Act; California’s Invasion of Privacy Act and unfair competition law; Illinois’s Biometric Information Privacy Act and Consumer Fraud and Deceptive Business Practices Act; New York General Business Law § 349 (deceptive trade practices); and negligence, invasion of privacy, conversion, unjust enrichment, and breach of duty to warn.

Tremblay v. OpenAI (June 28, 2023)

Tremblay v. OpenAI, Inc., No. 23-cv-03223 (N.D. Calif. June 28, 2023)

Another copyright infringement lawsuit against OpenAI relating to its ChatGPT tool. In this one, authors allege that ChatGPT is trained on the text of books they and other proposed class members authored, and facilitates output copyright infringement. The complaint sets forth claims of copyright infringement, DMCA violations, and unfair competition.

Silverman et al. v. OpenAI (July 7, 2023)

Silverman et al. v. OpenAI, No. 23-cv-03416 (N.D. Calif. July 7, 2023)

Sarah Silverman (comedian/actress/writer) and others allege that OpenAI, by using copyright-protected works without permission to train ChatGPT, committed direct and vicarious copyright infringement, violated 17 U.S.C. § 1202(b), and violated their rights under unfair competition, negligence, and unjust enrichment law.

Kadrey et al. v. Meta Platforms (July 7, 2023)

Kadrey et al. v. Meta Platforms, No. 2023-cv-03417 (N.D. Calif. July 7, 2023)

The same kinds of allegations as are made in Silverman v. OpenAI, but this time against Meta Platforms, Inc.

J.L. et al. v. Alphabet (July 11, 2023)

J.L. et al. v. Alphabet, Inc. et al. (N.D. Calif. July 11, 2023)

This is a lawsuit against Google and its owner Alphabet, Inc. for allegedly scraping and harvesting private and personal user information, copyright-protected works, and emails, without notice or consent. The complaint alleges claims for invasion of privacy, unfair competition, negligence, copyright infringement, and other causes of action.

On the Regulatory Front

The U.S. Copyright Office is examining the problems associated with registering copyrights in works that rely, in whole or in part, on artificial intelligence. The U.S. Federal Trade Commission (FTC) has suggested that generative-AI implicates “competition concerns.” Lawmakers in the United States and the European Union are considering legislation to regulate AI in various ways.

The Top Copyright Cases of 2022

Cokato Minnesota attorney Tom James (“The Cokato Copyright Attorney”) presents his annual list of the top copyright cases of the year.


“Dark Horse”

Marcus Gray had sued Katy Perry for copyright infringement, claiming that her “Dark Horse” song unlawfully copied portions of his song, “Joyful Noise.” The district court held that the disputed series of eight notes appearing in Gray’s song were not “particularly unique or rare,” and therefore were not protected against infringement. The Ninth Circuit Court of Appeals agreed, ruling that the series of eight notes was not sufficiently original and creative to receive copyright protection. Gray v. Hudson.

“Shape of You”

Across the pond, another music copyright infringement lawsuit was tossed. This one involved Ed Sheeran’s “Shape of You” and Sam Chokri’s “Oh Why.” In this case, the judge refused to infer from the similarities in the two songs that copyright infringement had occurred. The judge ruled that the portion of the song as to which copying had been claimed was “so short, simple, commonplace and obvious in the context of the rest of the song that it is not credible that Mr. Sheeran sought out inspiration from other songs to come up with it.” Sheeran v. Chokri.

Instagram images

Another case out of California involves a lawsuit filed by photographers against Instagram, alleging secondary copyright infringement. The photographers claim that Instagram’s embedding tool facilitates copyright infringement by users of the website. The district court judge dismissed the lawsuit, saying he was bound by the so-called “server test” the Ninth Circuit Court of Appeals announced in Perfect 10 v. Amazon. The server test says, in effect, that a website does not unlawfully “display” a copyrighted image if the image is stored on the original site’s server and is merely embedded in a search result that appears on a user’s screen. The photographers have an appeal pending before the Ninth Circuit Court of Appeals, asking the Court to reconsider its decision in Perfect 10. Courts in other jurisdictions have rejected Perfect 10 v. Amazon. The Court now has the option to either overrule Perfect 10 and allow the photographers’ lawsuit to proceed, or to re-affirm it, thereby creating a circuit split that could eventually lead to U.S. Supreme Court review. Hunley v. Instagram.

Tattoos

Is reproducing a copyrighted image in a tattoo fair use? That is a question at issue in a case pending in California. Photographer Jeffrey Sedlik took a photograph of musician Miles Davis. Later, a tattoo artist allegedly traced a printout of it to create a stencil to transfer to human skin as a tattoo. Sedlik filed a copyright infringement lawsuit in the United States District Court for the Central District of California. Both parties moved for summary judgment. The judge analyzed the claims using the four “fair use” factors. Although the ultimate ruling was that fact issues remained to be decided by a jury, the court issued some important rulings in the course of making that ruling. In particular, the court ruled that affixing an image to skin is not necessarily a protected “transformative use” of an image. According to the court, it is for a jury to decide whether the image at issue in a particular case has been changed significantly enough to be considered “transformative.” It will be interesting to see how this case ultimately plays out, especially if it is still pending when the United States Supreme Court announces its decision in the Warhol case (see below). Sedlik v. Von Drachenberg.

Digital libraries

The book publishers’ lawsuit against Internet Archive, about which I wrote in a previous blog post, is still at the summary judgment stage. Its potential future implications are far-reaching. It is a copyright infringement lawsuit that book publishers filed in the federal district court for the Southern District of New York. The gravamen of the complaint is that Internet Archive allegedly has scanned over a million books and has made them freely available to the public via an Internet website without securing a license or permission from the copyright rights-holders. The case will test the “controlled digital lending” theory of fair use that was propounded in a white paper published by David R. Hansen and Kyle K. Courtney. They argued that distributing digitized copies of books by libraries should be regarded as the functional equivalent of lending physical copies of books to library patrons. Parties and amici have filed briefs in support of motions for summary judgment. An order on the motions is expected soon. The case is Hachette Book Group et al. v. Internet Archive.

Copyright registration

In Fourth Estate Public Benefit Corp. v. Wall-Street.com, LLC, 139 S. Ct. 881, 889 (2019), the United States Supreme Court interpreted 17 U.S.C. § 411(a) to mean that a copyright owner cannot file an infringement claim in federal court without first securing either a registration certificate or an official notice of denial of registration from the Copyright Office. In an Illinois Law Review article, I argued that this imposes an unduly onerous burden on copyright owners and that Congress should amend the Copyright Act to abolish the requirement. Unfortunately, Congress has not done that. As I said in a previous blog post, when Congress fails to correct a harsh law with potentially unjust consequences, courts often use their power of statutory interpretation to ameliorate those consequences. Unicolors v. H&M Hennes & Mauritz.

Unicolors, owner of the copyrights in various fabric designs, sued H&M Hennes & Mauritz (H&M), alleging copyright infringement. The jury rendered a verdict in favor of Unicolors, but H&M moved for judgment as a matter of law. H&M argued that Unicolors had failed to satisfy the requirement of obtaining a registration certificate prior to commencing suit. Although Unicolors had obtained a registration, H&M argued that it was not valid. Specifically, H&M argued that Unicolors had improperly applied to register multiple works with a single application. Under 37 CFR § 202.3(b)(4) (2020), a single application cannot be used to register multiple works unless all of the works in the application were included in the same unit of publication. The 31 fabric designs, H&M contended, had not all been first published at the same time in a single unit; some had been made available separately, exclusively to certain customers. Therefore, they could not properly be registered together as a unit of publication.

The district court denied the motion, holding that a registration may be valid even if it contains inaccurate information, provided the registrant did not know the information was inaccurate. The Ninth Circuit Court of Appeals reversed, holding that characterizing the group of works as a “unit of publication” in the registration application was a mistake of law, not a mistake of fact. Applying the traditional maxim that ignorance of the law is no excuse, the Court in essence ruled that although a mistake of fact in a registration application might not invalidate the registration for purposes of the pre-litigation registration requirement, a mistake of law will.

The United States Supreme Court granted certiorari and reversed. It held that the safe harbor in 17 U.S.C. § 411(b)(1)(A) excuses inaccuracies in a registration application that rest on a mistake of law just as it excuses those resting on a mistake of fact. The ruling allowed the infringement verdict to stand notwithstanding the improper registration of the works together as a unit of publication rather than individually.

It is hazardous to read too much into the ruling in this case. Copyright claimants certainly should not interpret it to mean that they no longer need to bother with registering a copyright before trying to enforce it in court, or that they do not need to concern themselves with doing it properly. The pre-litigation registration requirement still stands (in the United States), and the Court has not held that it condones willful blindness of legal requirements. Copyright claimants ignore them at their peril.

Andy Warhol, Prince Transformer

I wrote about the Warhol case in a previous blog post. Lynn Goldsmith took a photograph of Prince in her studio; Andy Warhol later created a series of silkscreen prints and pencil illustrations based on it, allegedly without a license or permission. The Andy Warhol Foundation sought a declaratory judgment that Warhol’s use of the photograph was “fair use.” Goldsmith counterclaimed for copyright infringement. The district court ruled in favor of the Foundation and dismissed the photographer’s infringement claim. The Court of Appeals reversed, holding that the district court had misapplied the four “fair use” factors and that the derivative works Warhol created do not qualify as fair use. The U.S. Supreme Court granted certiorari and heard oral argument in October 2022. A decision is expected next year.

Because this case gives the United States Supreme Court an opportunity to bring some clarity to the extremely murky “transformative use” area of copyright law, it is not only one of this year’s most important copyright cases, but it very likely will wind up being one of the most important copyright cases of all time. Andy Warhol Foundation for the Visual Arts v. Goldsmith.

New TM Office Action Deadlines

The deadline for responding to US Trademark Office Actions has been shortened to three months, in many cases. Attorney Thomas B. James shares the details.

by Thomas B. James (“The Cokato Copyright Attorney”)

Effective December 3, 2022, the deadline for responding to a U.S. Trademark Office Action is shortened from six months to three months. Here is what that means in practice.

Historically, applicants for the registration of trademarks in the United States have had six months to respond to an Office Action. Beginning December 3, 2022, the time limit is three months.

Applications subject to the new rule

The new, shorter deadline applies to most kinds of trademark applications, including:

  • Section 1(a) applications (based on use in commerce)
  • Section 1(b) applications (based on intent to use)
  • Section 44(e) applications (based on a foreign registration)
  • Section 44(d) applications (based on a priority claim from a foreign application)

Applications not subject to the new rule

The new deadline does not apply to:

  • Section 66(a) applications (Madrid Protocol)
  • Office actions issued before December 3, 2022
  • Office actions issued after registration (But note that the new deadline will apply to post-registration Office Actions beginning October 7, 2023)
  • Office actions issued by a work unit other than the law offices, such as the Intent-to-Use Unit or the Examination and Support Workload and Production Unit
  • Office actions that do not require a response (such as an examiner’s amendment)
  • Office actions that do not specify a 3-month response period (e.g., a denial of a request for reconsideration, or a 30-day letter).

Extensions

For a $125 fee, you can request one three-month extension of the time to respond to an Office Action. You will need to file the request for an extension within three months from the “issue date” of the Office Action and before filing your response. If your extension request is granted, then you will have six months from the original “issue date” to file your response.
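The deadline arithmetic above can be sketched in code. This is purely illustrative and not legal advice: the three- and six-month periods come from the rule described above, but the `add_months` helper and the sample issue date are my own, and the sketch ignores the roll-forward treatment of deadlines that land on weekends or federal holidays.

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Add calendar months, clamping to the last day of the target month."""
    month_index = d.month - 1 + months
    year = d.year + month_index // 12
    month = month_index % 12 + 1
    # Days in each month, accounting for leap years in February.
    leap = year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
    days = [31, 29 if leap else 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
    return date(year, month, min(d.day, days[month - 1]))

issue_date = date(2023, 1, 15)             # hypothetical Office Action issue date
response_due = add_months(issue_date, 3)   # three-month response deadline
extended_due = add_months(issue_date, 6)   # deadline if a three-month extension is granted
print(response_due, extended_due)
```

Note that both deadlines run from the original issue date: a granted extension does not restart the clock, it adds three months to the same starting point.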

Use the Trademark Office’s Request for Extension of Time to File a Response form to request an extension. The Trademark Office has issued a warning that it will not process requests that do not use this form.

The form cannot be used to request an extension of time to respond to an Office Action that was issued for a Madrid Protocol section 66(a) application, an Office Action that was issued before December 3, 2022, or to an Office Action to which the new deadline does not apply.

Consequences

Failing to meet the new three-month deadline has the same consequences as failing to meet the old six-month deadline did. Your application will be deemed abandoned if you do not respond to the Office Action or request an extension on or before the three-month deadline. Similarly, your application will be deemed abandoned if you are granted an extension but fail to file a response on or before the six-month deadline.

The Trademark Office does not refund application filing fees for abandoned applications.

As before, in some limited circumstances, you might be able to revive an abandoned application by filing a petition and paying a fee. Otherwise, you will need to start the application process all over again.

More information

Here are links to the relevant Federal Register Notice and Examination Guide.

Contact attorney Thomas James

Need help with trademark registration? Contact Thomas B. James, Minnesota attorney.

Newly Public Domain Works

The Cokato Copyright Attorney shares excerpts from selected works that are in the public domain now.

by Minnesota attorney Thomas James (not Ernest Hemingway, the guy in the picture)

I am, of course, late with this. Where other writers have simply listed works by author, title and description, however, this article includes quotations from them. Does it get any better than this? I think not. 

Why are they public domain now?

Copyright protection is not eternal. It only lasts for the number of years specified by law. After that, the work is said to have entered the public domain, meaning that anyone may copy, distribute, perform, display or make new works from it.

In the United States, the U.K., Russia, and the European Union, copyright generally lasts for 70 years after the author’s death. Accordingly, the works of authors who died in 1951 are now in the public domain in those countries. In Canada and much of Asia and Africa, copyright lasts for 50 years after the author’s death.

Different rules apply to older works. I explain these in more detail in my books. For our purposes here, we can safely say that U.S. works first published in or before 1926 are now in the public domain, and all pre-1923 sound recordings are now in the public domain.
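The life-plus-70 (or life-plus-50) arithmetic described above reduces to a one-line calculation. A minimal sketch, assuming the simple post-death-term case and setting aside the exceptions this post notes (works made for hire, older publication-based terms); the function name is my own:

```python
def pd_entry_year(death_year: int, term: int = 70) -> int:
    """Year a work enters the public domain in a life-plus-`term` country.
    Protection runs through December 31 of death_year + term, so the work
    becomes public domain on January 1 of the following year."""
    return death_year + term + 1

print(pd_entry_year(1951))      # life + 70 (U.S., U.K., EU): 2022
print(pd_entry_year(1951, 50))  # life + 50 (Canada at the time): 2002
```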

These rules are subject to exceptions. For example, the U.S. terms of copyright for works made for hire are different from the terms of copyright for other kinds of works.

Derivative works might not be in the public domain

I have seen a lot of articles declaring that when a work enters the public domain, people no longer need to worry about being sued for copyright infringement. Technically speaking, this is true, but it is important to clearly identify the version of the work that is in the public domain.

Suppose Arthur Conan Doyle published a “Sherlock Holmes” mystery prior to 1923, and that he published a sequel to it in 1928, adding certain details that did not appear in the earlier story. If you were to try your hand at writing a Holmes mystery now, and you included some of the details that first appeared in the 1928 story, you could be liable for copyright infringement.

Similarly, the fact that the original Winnie the Pooh story is now in the public domain does not mean that movies based on the book are, too. A derivative work may still be copyright-protected even after the work on which it is based has entered the public domain.

Here is a small sampling of some of the many works that are in the public domain this year.

The Sun Also Rises

“you can’t get away from yourself by moving from one place to another.”

–Ernest Hemingway

The Castle

“Illusions are more common than changes in fortune.”

–Franz Kafka

The Moral Obligation to Be Intelligent

“It is hard to believe that the declaration of antifascism is nowadays any more a mark of sufficient grace in a writer than a declaration against disease would be in a physician or a declaration against accidents would be in a locomotive engineer. The admirable intention in itself is not enough and criticism begins and does not end when the intention is declared.”

–John Erskine

Main Street

“She did not yet know the immense ability of the world to be casually cruel….”

–Sinclair Lewis

Soldier’s Pay

“The saddest thing about love, Joe, is that not only the love cannot last forever, but even the heartbreak is soon forgotten.”

–William Faulkner

The Waves

“the poem, I think, is only your voice speaking.”

–Virginia Woolf

Notes on Democracy

“Under the pressure of fanaticism, and with the mob complacently applauding the show, democratic law tends more and more to be grounded upon the maxim that every citizen is, by nature, a traitor, a libertine, and a scoundrel. In order to dissuade him from his evil-doing the police power is extended until it surpasses anything ever heard of in the oriental monarchies of antiquity.”

–H.L. Mencken

The Weary Blues

I got the weary blues

And I can’t be satisfied.

Got the weary blues

And can’t be satisfied.

I ain’t happy no mo’

And I wish that I had died.

–Langston Hughes

Nanook of the North (film)

Purple Cow

I never saw a purple cow;

I never hope to see one.

But I can tell you anyhow

I’d rather see than be one!

–Gelett Burgess

Pack Up Your Troubles

“Pack up your troubles in your old kit-bag, and smile, smile, smile.”

–George Asaf

Walter Trier’s illustrations for Emil and the Detectives

Winnie the Pooh

“Some people talk to animals. Not many listen though. That’s the problem.”

–A.A. Milne