Nontransformative Nuge

A reversal in the Fourth Circuit demonstrates the impact the Supreme Court’s decision in Andy Warhol Foundation for the Visual Arts v. Goldsmith is already having on the application of copyright fair use doctrine in federal courts.

Philpot v. Independent Journal Review, No. 21-2021 (4th Cir. Feb. 6, 2024)

Philpot, a concert photographer, registered his photograph of Ted Nugent as part of a group of unpublished works. Prior to registration, he entered into a license agreement giving AXS TV the right to inspect his photographs for the purpose of selecting ones to curate. The agreement provided that the license would become effective upon delivery of photographs for inspection. After registration, Philpot delivered a set of photographs, including the Nugent photograph, to AXS TV. He also published the Nugent photograph to Wikimedia Commons under a Creative Commons (“CC”) license. The CC license allows free use on the condition that attribution is given. Independent Journal Review (“IJR”) published an article called “15 Signs Your Daddy Was a Conservative.” Sign #5 was “He hearts the Nuge.” IJR used Philpot’s photograph of Ted Nugent as an illustration for the article without crediting Philpot.

Philpot sued IJR for copyright infringement. IJR asserted two defenses: (1) invalid copyright registration; and (2) fair use. The trial court did not decide whether the registration was valid, but it granted summary judgment for IJR on the ground that the news service’s publication of the photograph was fair use. The Fourth Circuit Court of Appeals reversed, ruling in Philpot’s favor on both issues. The Court held that the copyright registration was valid and that publication of the photograph without permission was not fair use.

The copyright registration

Published and unpublished works cannot be registered together. Including a published work in an application for registration of a group of unpublished works is an inaccuracy that might invalidate the registration if the applicant was aware of the inaccuracy at the time of applying. Cf. Unicolors v. H&M Hennes & Mauritz, 595 U.S. 178 (2022). IJR argued that Philpot’s pre-registration agreement to send photographs to AXS TV for possible curation constituted “publication” of them, so that characterizing them as “unpublished” in the registration application was an inaccuracy known to Philpot.

17 U.S.C. § 101 defines publication as “the distribution of copies . . . to the public” or “offering to distribute copies . . . to a group of persons for purposes of further distribution . . . or public display.” The Court of Appeals held that merely entering into an agreement to furnish copies to a distributor for possible curation does not come within that definition. Sending copies to a limited class of people without concomitantly granting an unrestricted right to further distribute them to the public does not amount to “publication.”

Philpot’s arrangement with AXS TV is analogous to an author submitting a manuscript to a publisher for review for possible future distribution to the public. The U.S. Copyright Office has addressed this. “Sending copies of a manuscript to prospective publishers in an effort to secure a book contract does not [constitute publication].” U.S. Copyright Office, Compendium of U.S. Copyright Office Practices § 1905.1 (3d ed. 2021). Philpot had provided copies of his work for the limited purpose of examination, without a present grant of a right of further distribution. Therefore, the photographs were, in fact, unpublished at the time of the application for registration. Since no inaccuracy existed, the registration was valid.

Fair use

The Court applied the four-factor test for fair use set out in 17 U.S.C. § 107.

(1) Purpose and character of the use. Citing Andy Warhol Found. for the Visual Arts v. Goldsmith, 598 U.S. 508, 527–33 (2023), the Court held that when, as here, a use is neither transformative nor noncommercial, this factor weighs against a fair use determination. IJR used the photograph for the same purpose as Philpot intended to use it (as a depiction of Mr. Nugent), and it was a commercial purpose.

(2) Nature of the work. Photographs taken by humans are acts of creative expression that receive what courts have described as “thick” copyright protection. Therefore, this factor weighed against a fair use determination.

(3) Amount and substantiality of the portion used. Since all of the expressive features of the work were used, this factor also weighed against a fair use determination.

(4) Effect on the market for the work. Finally, the Court determined that allowing free use of a copyrighted work for commercial purposes without the copyright owner’s permission could potentially have a negative impact on the author’s market for the work. Therefore, this factor, too, weighed against a fair use determination.

Since all four factors weighed against a fair use determination, the Court reversed the trial court’s grant of summary judgment to IJR and remanded the case for further proceedings.

Conclusion

This decision demonstrates the impact the Warhol decision is having on copyright fair use analysis in the courts. Previously, courts had been interpreting transformativeness very broadly. In many cases, they were ending the fair use inquiry as soon as some sort of transformative use could be articulated. As the Court of Appeals decision in this case illustrates, trial courts now need to alter their approach in two ways: (1) they need to return to considering all four fair use factors rather than ending the inquiry upon a defendant’s articulation of some “transformative use”; and (2) they need to apply a much narrower definition of transformativeness than they have been. If both the original work and an unauthorized reproduction of it are used for the same purpose of depicting a particular person or scene (as distinguished from parodying or commenting on a work, for example), and for commercial gain, it is no longer prudent to count on the first fair use factor supporting a fair use determination.


Photo: Photograph published in a July 1848 edition of L’Illustration. Believed to be the first instance of photojournalism, it is now in the public domain.

What Is In the Public Domain?

How to determine what is in the public domain in the United States, explained by attorney Thomas B. James

Creative expressions generally are protected by copyright law. Sometimes, however, they are not. When that is the case, a work is said to be in “the public domain.”

The rules specifying the conditions for copyright protection vary from country to country. In the United States, they are set out in the Copyright Act, which is codified in Title 17 of the United States Code. The fact that a work is or is not in the public domain in the United States, however, is not determinative of its public domain status in another country. A work that is in the public domain in the United States might still be protected by copyright in another country.

This blog post focuses on the public domain rules set out in U.S. copyright law.

The 3 ways a work enters the public domain

There are three reasons a work may be in the public domain:

  • It was never protected by copyright. Some kinds of expression do not receive copyright protection. Federal government publications created by federal employees, for example, are not protected by copyright.
  • Failure to comply with a formal requirement. At one time, it was possible to lose copyright protection by failing to comply with a legal requirement, such as the requirement to display a copyright notice on a published work.
  • Expiration of the copyright term. Unlike trademarks, copyrights are time-limited. That is to say, the duration of a copyright is limited to a specified term. Congress has altered the durations of copyrights several times.

It is important to keep in mind that once a work enters the public domain, the copyright is gone. This is true even if copyright was lost only because of failure to comply with a formal requirement that has since been abolished. For example, if a work was published in 1976 without a copyright notice, it entered the public domain. The elimination of the copyright notice requirement in 1989 did not revive the copyright. A few very limited exceptions exist, but in general, the elimination of a formal requirement does not revive copyrights in works that have already entered the public domain.

Guidelines for determining the copyright term

The following rules may be used for determining whether a work of a kind that is protected by copyright is in the public domain or not.

Different sets of rules apply to sound recordings, architectural works, and works first published outside the United States by a foreign national or a U.S. citizen living abroad. They are not covered in this blog post.

Note that the term of a copyright runs through the end of the calendar year in which it would otherwise expire. That is to say, a work enters the public domain on the first day of the year following the expiration of its term.
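
The end-of-year rule is simple arithmetic. As an illustration only (not a legal determination), the following Python sketch computes the year on whose January 1 a work under the life-plus-70 term enters the public domain; the function name is my own:

```python
# Illustration of the end-of-year rule for a work under the
# "life of the author + 70 years" term. A sketch only; an actual
# determination depends on publication history, notice, renewal,
# and other facts.

def public_domain_year(author_death_year: int, term_years: int = 70) -> int:
    """Return the year on whose January 1 the work enters the public domain."""
    # The term runs through December 31 of (death year + term),
    # so the work enters the public domain the following January 1.
    return author_death_year + term_years + 1

# An author who died in 1953: the term runs through December 31, 2023,
# and the work enters the public domain on January 1, 2024.
print(public_domain_year(1953))  # 2024
```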

Unpublished and unregistered works

General rule: Life of the author + 70 years. If the author’s date of death is not known, then the term is 120 years from the date of creation.

Anonymous or pseudonymous works: 120 years from the date of creation.

Works made for hire: 120 years from the date of creation.

Works registered or first published in the US

Before 1929

All works registered or first published in the United States before 1929 are in the public domain now.

1929 to 1963

  • Published without a copyright notice: In the public domain.
  • Published with a copyright notice, but not renewed: In the public domain.
  • Published with a copyright notice, and renewed: 95 years after the first publication date.

1964 to 1977

  • Published without a copyright notice: In the public domain.
  • Published with a copyright notice: 95 years after the first publication date.

1978 to March 1, 1989

  • Created before 1978 and first published, with a copyright notice, between 1978 and March 1, 1989: Either 70 years after the death of the author or December 31, 2047, whichever occurs later. (For works made for hire, it is (a) 95 years after the date of first publication or 120 years after creation, whichever occurs first, or (b) December 31, 2047, whichever occurs later.)
  • Created after 1977 and published with a copyright notice: 70 years after the death of the author (For works made for hire, it is 95 years after the date of first publication or 120 years after creation, whichever occurs first.)
  • Published without a copyright notice, and without subsequent registration within 5 years: In the public domain.
  • Published without a copyright notice but with subsequent registration within 5 years: Life of the author + 70 years (For works made for hire, it is 95 years after first publication or 120 years after creation, whichever occurs first.)

March 1, 1989 to 2002

  • Created before 1978 and first published between March 1, 1989 and 2002: The greater of (a) the life of the author + 70 years (for works made for hire, 95 years after first publication or 120 years after creation, whichever occurs first); or (b) December 31, 2047.
  • Created after 1977: Life of the author + 70 years (For works made for hire, it is 95 years after first publication or 120 years after creation, whichever occurs first.)

After 2002

  • Life of the author + 70 years
  • Works made for hire: 95 years after the date of publication or 120 years after the date of creation, whichever occurs first.
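
For works registered or first published in the United States, the guidelines above amount to a decision procedure. The following Python sketch encodes a simplified version of it. The function name and parameters are my own; it works at year granularity, ignores the 1978–1989 notice-cure rules, sound recordings, architectural works, and foreign works, and is an illustration, not legal advice:

```python
def is_public_domain(pub_year: int,
                     had_notice: bool = True,
                     renewed: bool = True,
                     current_year: int = 2024) -> bool:
    """Simplified U.S. public-domain check for a work registered or
    first published in the United States. Illustration only."""
    if pub_year < 1929:
        return True                          # pre-1929: public domain now
    if pub_year <= 1963:
        if not had_notice or not renewed:    # no notice, or never renewed
            return True
        return current_year > pub_year + 95  # 95-year term, end-of-year rule
    if pub_year <= 1977:
        if not had_notice:                   # published without notice
            return True
        return current_year > pub_year + 95
    # 1978 and later: generally life + 70 (or 95/120 for works made for
    # hire). This sketch treats such works as still protected; see the
    # 1978-1989 notice rules above for the exceptions it omits.
    return False

# A 1950 work published with notice but never renewed is in the public
# domain; the same work, if renewed, remains protected until after 2045.
print(is_public_domain(1950, renewed=False))  # True
print(is_public_domain(1950, renewed=True))   # False
```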

Sound recordings, architecture, and foreign works

The foregoing rules do NOT apply to sound recordings, architectural works, and works that were first published outside the United States by a foreign national or a U.S. citizen living abroad. Special sets of rules apply when determining the public domain status of those kinds of works.

Contact attorney Tom James for copyright help

Contact Tom James (“The Cokato Copyright Attorney”) for copyright help.

Top IP Developments of 2023

2023 was a big year for U.S. intellectual property law. Major developments occurred in every area. Here are the highlights.

Copyright

Fair Use

Andy Warhol Foundation for the Visual Arts, Inc. v. Goldsmith et al.

I’ve written about this case before here and here. The Supreme Court issued a ruling in the case in May. The decision is significant because it finally reined in the “transformative use” doctrine that the Court first announced in Campbell v. Acuff-Rose Music back in 1994. In that case, 2 Live Crew had copied key parts of the Roy Orbison song “Oh, Pretty Woman” to make a parody of the song in its own rap style. The Court held that the 2 Live Crew version, although reproducing portions of both the original song and the original recording of it without permission, transformed it into something else. Because the copying served a transformative purpose, the 2 Live Crew version was protected as fair use.

In the thirty years since Campbell, lower courts applied the “transformative use” principle announced in Campbell in diverse and divergent ways. Some interpretations severely eviscerated the copyright owner’s exclusive right to make derivative works, and the interpretations often conflicted: what one circuit called transformative “fair use,” another called actionable infringement. Hence the need for Supreme Court intervention.

In 1984, Vanity Fair licensed one of photographer Lynn Goldsmith’s photographs of Prince to illustrate a magazine article about him. Per the agreement, Andy Warhol made a silkscreen using the photograph for the magazine, and Vanity Fair credited the original photograph to Goldsmith. Unknown to her, however, Warhol proceeded to make 15 additional works based on Goldsmith’s photograph without her permission. In 2016, the Andy Warhol Foundation for the Visual Arts licensed one of them to Condé Nast as an illustration for one of their magazines. The Foundation received a cool $10,000 for it, with neither payment nor credit given to Goldsmith. The Foundation then filed a lawsuit seeking a declaration that its use of the photograph was a protected fair use under 17 U.S.C. § 107. The district court granted declaratory judgment in favor of the Foundation. The Second Circuit Court of Appeals reversed, ruling that the four-factor “fair use” analysis favored Goldsmith. The Supreme Court sided with the Court of Appeals.

Noting that it was not ruling on whether Warhol’s making of works using the photograph was fair use, the Court limited its analysis to the narrow question whether the Foundation’s licensing of the Warhol work to Condé Nast was fair use. On that point, the Court determined that the use of the photograph to illustrate a story about Prince was identical to the use Goldsmith had made of the photograph (i.e., to illustrate a magazine article about Prince). Unlike 2 Live Crew’s use of “Oh, Pretty Woman,” the purpose of the use in this case was not to mock or parody the original work.

The case is significant for vindicating the Copyright Act’s promise to copyright owners of an exclusive right to make derivative works. While Warhol put his own artistic spin on the photograph – and that might have been sufficient to sustain a fair use defense if he had been the one being sued – the Warhol Foundation’s and Condé Nast’s purpose was no different from Goldsmith’s, i.e., illustrating an article about Prince. Differences in the purpose or character of a use, the Court held, “must be evaluated in the context of the specific use at issue.” Had the Warhol Foundation been sued for displaying Warhol’s modifications of the photograph for purposes of social commentary in its own gallery, the result might have been different.

Although the holding is a seemingly narrow one, the Court did take the opportunity to disapprove the lower court practice of ending a fair use inquiry at the moment an infringer asserted that an unauthorized copy or derivative work was created for a purpose different from the original author’s.

Statute of Limitations and Damages

Warner Chappell Music, Inc. v. Nealy

The U.S. Supreme Court has granted certiorari to review this Eleventh Circuit decision. At issue is whether a copyright plaintiff may recover damages for infringement that occurred outside of the limitations period, that is, infringement occurring more than three years before a lawsuit was filed.

The circuits are split on this question. According to the Second Circuit, damages are recoverable only for acts of infringement that occurred during the 3-year period preceding the filing of the complaint. The Ninth and Eleventh Circuits, on the other hand, have held that as long as the lawsuit is timely filed, damages may be awarded for infringement that occurred more than three years prior to the filing, at least when the discovery rule has been invoked to allow a later filing. In Nealy, the Eleventh Circuit held that damages may be recovered for infringement occurring more than three years before the claim is filed if the plaintiff did not discover the infringement until some time after it first began.
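
The practical difference between the two approaches can be sketched in a few lines of Python. The dates, function name, and parameters below are hypothetical and purely illustrative:

```python
# Toy illustration of the circuit split described above, at year
# granularity. Hypothetical facts: infringements occurred in 2015-2022,
# were first discovered in 2022, and suit was filed in 2023.

def recoverable_years(infringement_years, filing_year, discovery_year, approach):
    """Return the infringement years for which damages may be recovered."""
    # Assume the claim must be timely under the discovery rule
    # (filed within three years of discovery of the infringement).
    if filing_year - discovery_year > 3:
        return []
    if approach == "second_circuit":
        # Damages limited to the three years preceding the filing.
        return [y for y in infringement_years if y >= filing_year - 3]
    # Ninth/Eleventh Circuit view: a timely filed claim supports
    # damages for the entire period of infringement.
    return list(infringement_years)

years = range(2015, 2023)  # 2015 through 2022
print(recoverable_years(years, 2023, 2022, "second_circuit"))   # [2020, 2021, 2022]
print(recoverable_years(years, 2023, 2022, "eleventh_circuit")) # 2015 through 2022
```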

A decision will be coming in 2024.

Artificial Intelligence

Copyrightability

Thaler v. Perlmutter, et al.

This was an APA proceeding initiated in the U.S. District Court for the District of Columbia for review of the United States Copyright Office’s refusal to register a copyright in an AI-generated work. In August, the district court upheld the Copyright Office’s decision that an AI-generated work is not protected by copyright, asserting that “human creativity is the sine qua non at the core of copyrightability….” For purposes of the Copyright Act, only human beings can be “authors.” Machines, non-human animals, spirits, and natural forces do not get copyright protection for their creations.

An appeal of the decision is pending in the D.C. Circuit Court of Appeals.

Infringement

Many cases filed or still pending in 2023 allege that using copyrighted works to train AI, or creating derivative works using AI, infringes the copyrights in the works so used. Most of these cases make additional claims as well, such as claims of unfair competition, trademark infringement, or violations of publicity and DMCA rights.

I have been blogging about these cases throughout the year. Significant rulings on the issues raised in them are expected in 2024.

Trademark

Parody Goods

Jack Daniel’s Properties, Inc. v. VIP Products LLC

For more information about this case, read my blog post about it here.

This is the “parody goods” case. VIP Products used the “Bad Spaniels” name to market its dog toys, which were patterned on the distinctive shape of a Jack Daniel’s whiskey bottle. VIP filed a lawsuit seeking a declaratory judgment that its product did not infringe the Jack Daniel’s brand. Jack Daniel’s counterclaimed for trademark infringement and dilution. Regarding infringement, VIP claimed First Amendment protection. Regarding dilution, VIP claimed the use was a parody of a famous mark and therefore qualified for protection as trademark fair use. The district court granted summary judgment to VIP.

The Supreme Court reversed. The Court held that when an alleged infringer uses the trademark of another (or something confusingly similar to it) as a designation of source for the infringer’s own goods, it is a commercial, not an expressive, use. Accordingly, the First Amendment is not a consideration in such cases.

Rogers v. Grimaldi had held that when the title of a creative work (in that case, a film) makes reference to a trademark for an artistic or expressive purpose (in that case, Fred Astaire and Ginger Rogers), the First Amendment shields the creator from trademark liability. In the Jack Daniel’s case, the Court distinguished Rogers, holding that it does not insulate the use of trademarks as trademarks (i.e., as indicators of the source or origin of a product or service) from ordinary trademark scrutiny. Even though the dog toys may have had an expressive purpose, VIP admitted it used Bad Spaniels as a source identifier. Therefore, the First Amendment does not apply.

The Court held that the same rule applies to dilution claims. The First Amendment does not shield parody goods from a dilution claim when the alleged diluter uses a mark (or something confusingly similar to it) as a designation of source for its own products or services.

International Law

Abitron Austria v. Hetronic International

Here, the Supreme Court held that the Lanham Act does not have extraterritorial reach. Specifically, the Court held that Sections 1114(1)(a) and 1125(a)(1) extend only to those claims where the infringing use in commerce occurs in the United States. They do not extend to infringement occurring solely outside of the United States, even if consumer confusion occurs in the United States.

The decision is a reminder to trademark owners that if they want to protect their trademark rights in other countries, they should take steps to protect their rights in those countries, such as by registering their trademarks there.

Patents

Patents are beyond the scope of this blog. Even so, a couple of developments are worth noting.

Enablement

Amgen v. Sanofi

In this case, the Supreme Court considered the validity of certain patents on antibodies used to lower cholesterol under the Patent Act’s enablement requirement (35 U.S.C. § 112(a)). At issue was whether Amgen could patent an entire genus of antibodies without disclosing sufficient information to enable a person skilled in the art to create the potentially millions of antibodies in it. The Court basically said no.

If a patent claims an entire class of processes, machines, manufactures, or compositions of matter, the patent’s specification must enable a person skilled in the art to make and use the entire class. In other words, the specification must enable the full scope of the invention as defined by its claims.

Amgen v. Sanofi, 598 U.S. ____ (2023)

Executive Power

In December, the Biden administration asserted that it can cite “excessive prices” to justify the exercise of Bayh-Dole march-in rights. The Biden Administration also has continued to support a World Trade Organization TRIPS patent waiver for COVID-19 medicines. These developments are obviously of some concern to pharmaceutical companies and members of the patent bar.

Conclusion

My vote for the most significant IP case of 2023 is Andy Warhol Foundation v. Goldsmith. Lower courts had all but allowed the transformative use defense to swallow up the exclusive right of a copyright owner to create derivative works. The Supreme Court provided much-needed correction. I predict that in 2024 the most significant decisions will also be in the copyright realm, but they will have to do with AI.

Generative-AI as Unfair Trade Practice

While Congress and the courts grapple with generative-AI copyright issues, the FTC weighs in on the risks of unfair competition, monopolization, and consumer deception.

FTC press release excerpt

While Congress and the courts are grappling with the copyright issues that AI has generated, the federal government’s primary consumer watchdog has made a rare entry into the realm of copyright law. The Federal Trade Commission (FTC) has filed a Comment with the U.S. Copyright Office suggesting that generative-AI could be (or be used as) an unfair or deceptive trade practice. The Comment was filed in response to the Copyright Office’s request for comments as it prepares to begin rule-making on the subject of artificial intelligence (AI), particularly generative-AI.

Monopolization

The FTC is responsible for enforcing the FTC Act, which broadly prohibits “unfair or deceptive” practices. The Act protects consumers from deceptive and unscrupulous business practices. It is also intended to promote fair and healthy competition in U.S. markets. The Supreme Court has held that all violations of the Sherman Act also violate the FTC Act.

So how does generative-AI raise monopolization concerns? The Comment suggests that incumbents in the generative-AI industry could engage in anti-competitive behavior to ensure continuing and exclusive control over the use of the technology. (More on that here.)

The agency cited the usual suspects: bundling, tying, exclusive or discriminatory dealing, mergers, acquisitions. Those kinds of concerns, of course, are common in any business sector. They are not unique to generative-AI. The FTC also described some things that are matters of special concern in the AI space, though.

Network effects

Because positive feedback loops improve the performance of generative-AI, it gets better as more people use it. This results in concentrated market power in incumbent generative-AI companies with diminishing possibilities for new entrants to the market. According to the FTC, “network effects can supercharge a company’s ability and incentive to engage in unfair methods of competition.”

Platform effects

As AI users come to depend on a particular incumbent generative-AI platform, the company that owns the platform could take steps to lock its customers into using that platform exclusively.

Copyrights and AI competition

The FTC Comment indicates that the agency is not only weighing the possibility that AI unfairly harms creators’ ability to compete. (The use of pirated materials, or the misuse of copyrighted materials, can be an unfair method of competition under Section 5 of the FTC Act.) It is also considering the possibility that generative-AI may deceive, or be used to deceive, consumers. Specifically, the FTC expressed a concern that “consumers may be deceived when authorship does not align with consumer expectations, such as when a consumer thinks a work has been created by a particular musician or other artist, but it has been generated by someone else using an AI tool.” (Comment, page 5.)

In one of my favorite passages in the Comment, the FTC suggests that training AI on protected expression without consent, or selling output generated “in the style of” a particular writer or artist, may be an unfair method of competition, “especially when the copyright violation deceives consumers, exploits a creator’s reputation or diminishes the value of her existing or future works….” (Comment, pages 5–6.)

Fair Use

The significance of the FTC’s injection of itself into the generative-AI copyright fray cannot be overstated. It is extremely likely that during their legislative and rule-making deliberations, both Congress and the Copyright Office are going to focus the lion’s share of their attention on the fair use doctrine. They are most likely going to try to allow generative-AI outfits to continue to infringe copyrights (it is already a multi-billion-dollar industry, after all, with obvious potential political value), while at the same time imposing at least some limitations to preserve a few shards of the copyright system. Maybe they will devise a system of statutory licensing, as they did when online streaming – and the widespread copyright infringement it facilitated – became a thing.

Whatever happens, the overarching question for Congress is going to be: What kinds of copyright infringement should be considered “fair” use?

Copyright fair use normally is assessed using a four-prong test set out in the Copyright Act. Considerations about unfair competition arguably are subsumed within the fourth factor in that analysis – the effect the infringing use has on the market for the original work.

The other objective of the FTC Act – protecting consumers from deception — does not neatly fit into one of the four statutory factors for copyright fair use. I believe a good argument can be made that it should come within the coverage of the first prong of the four-factor test: the purpose and character of the use. The task for Congress and the Copyright Office, then, should be to determine which particular purposes and kinds of uses of generative-AI should be thought of as fair. There is no reason the Copyright Office should avoid considering Congress’s objectives, expressed in the FTC Act and other laws, when making that determination.

Case Update: Andersen v. Stability AI

Artists Sarah Andersen, Kelly McKernan, and Karla Ortiz filed a class action lawsuit against Stability AI, DeviantArt, and MidJourney in federal district court, alleging causes of action for copyright infringement, removal or alteration of copyright management information, and violation of publicity rights. (Andersen, et al. v. Stability AI Ltd., et al., No. 23-cv-00201-WHO (N.D. Cal. 2023).) The claims relate to the defendants’ alleged unlicensed use of their copyright-protected artistic works in generative-AI systems.

On October 30, 2023, U.S. district judge William H. Orrick dismissed all claims except for Andersen’s direct infringement claim against Stability. Most of the dismissals, however, were granted with leave to amend.

The Claims

McKernan’s and Ortiz’s copyright infringement claims

The judge dismissed McKernan’s and Ortiz’s copyright infringement claims because they did not register the copyrights in their works with the U.S. Copyright Office.

I criticized the U.S. requirement of registration as a prerequisite to the enforcement of a domestic copyright in a U.S. court in a 2019 Illinois Law Review article (“Copyright Enforcement: Time to Abolish the Pre-Litigation Registration Requirement.”) This is a uniquely American requirement. Moreover, the requirement does not apply to foreign works. This has resulted in the anomaly that foreign authors have an easier time enforcing the copyrights in their works in the United States than U.S. authors do. Nevertheless, until Congress acts to change this, it is still necessary for U.S. authors to register their copyrights with the U.S. Copyright Office before they can enforce their copyrights in U.S. courts.  

Since there was no claim that McKernan or Ortiz had registered their copyrights, the judge had no real choice under current U.S. copyright law but to dismiss their claims.

Andersen’s copyright infringement claim against Stability

Andersen’s complaint alleges that she “owns a copyright interest in over two hundred Works included in the Training Data” and that Stability used some of them as training data. Defendants moved to dismiss this claim because it failed to specifically identify which of those works had been registered. The judge, however, determined that her attestation that some of her registered works had been used as training images sufficed for pleading purposes. A motion to dismiss tests the sufficiency of a complaint to state a claim; it does not test the truth or falsity of the assertions made in a pleading. Stability can attempt to disprove the assertion later in the proceeding. Accordingly, Judge Orrick denied Stability’s motion to dismiss Andersen’s direct copyright infringement claim.

Andersen’s copyright infringement claims against DeviantArt and MidJourney

The complaint alleges that Stability created and released a software program called Stable Diffusion and that it downloaded copies of billions of copyrighted images (known as “training images”), without permission, to create it. Stability allegedly used the services of LAION (Large-scale Artificial Intelligence Open Network) to scrape the images from the Internet. Further, the complaint alleges, Stable Diffusion is a “software library” providing image-generating services to the other defendants named in the complaint. DeviantArt offers an online platform where artists can upload their works. In 2022, it released a product called “DreamUp” that relies on Stable Diffusion to produce images. The complaint alleges that artwork the plaintiffs uploaded to the DeviantArt site was scraped into the LAION database and then used to train Stable Diffusion. MidJourney is also alleged to have used the Stable Diffusion library.

Judge Orrick granted the motion to dismiss the claims of direct infringement against DeviantArt and MidJourney, with leave to amend the complaint to clarify the theory of liability.

DMCA claims

The complaint makes allegations about unlawful removal of copyright management information in violation of the Digital Millennium Copyright Act (DMCA). Judge Orrick found the complaint deficient in this respect, but granted leave to amend to clarify which defendant(s) are alleged to have done this, when it allegedly occurred, and what specific copyright management information was allegedly removed.

Publicity rights claims

Plaintiffs allege that the defendants used their names in their products by allowing users to request the generation of artwork “in the style of” their names. Judge Orrick determined the complaint did not plead sufficient factual allegations to state a claim. Accordingly, he dismissed the claim, with leave to amend. In a footnote, the court deferred to a later time the question whether the Copyright Act preempts the publicity claims.

In addition, DeviantArt filed a motion to strike under California’s Anti-SLAPP statute. The court deferred decision on that motion until after the Plaintiffs have had time to file an amended complaint.

Unfair competition claims

The court also dismissed plaintiffs’ claims of unfair competition, with leave to amend.

Breach of contract claim against DeviantArt

Plaintiffs allege that DeviantArt violated its own Terms of Service in connection with its DreamUp product and the alleged scraping of works users upload to the site. This claim, too, was dismissed with leave to amend.

Conclusion

Media reports have tended to overstate the significance of Judge Orrick’s October 30, 2023 Order. Reports of the death of the lawsuit are greatly exaggerated. It would have been nice if greater attention had been paid to the registration requirement during the drafting of the complaint, but the lawsuit nevertheless is still very much alive. We won’t really know whether it will remain that way unless and until the plaintiffs amend the complaint – which they are almost certainly going to do.

Need help with copyright registration? Contact attorney Tom James.

The Top 3 Generative-AI Copyright Issues

Black hole consuming a star. Photo credit: NASA.

What are your favorite generative-AI copyright issues? In this capsule summary, Cokato attorney Tom James shares his three favorites.

Generative artificial intelligence refers collectively to technology that is capable of generating new text, images, audio/visual and possibly other content in response to a user’s prompts. These systems are trained by feeding them mass quantities of ABC (already-been-created) works. Some of America’s biggest mega-corporations have invested billions of dollars in this technology. They are now facing a barrage of lawsuits, most of them asserting claims of copyright infringement.

Issue #1: Does AI Output Infringe Copyrights?

Copyrights give their owners an exclusive right to reproduce their copyright-protected works and to create derivative works based on them. If a generative-AI user prompts the service to reproduce the text of a pre-existing work, and it proceeds to do so, this could implicate the exclusive right of reproduction. If a generative-AI user prompts it to create a work in the style of another work and/or author, this could implicate the exclusive right to create derivative works.

To establish infringement, it will be necessary to prove copying. Two identical but independently created works may each be protected by copyright. Put another way, a person is not guilty of infringement merely by creating a work that is identical or similar to another if he/she/it came up with it completely on his/her/its own.

Despite “training” their proteges on existing works, generative-AI outfits deny that their tools actually copy any of them. They say that any similarity to any existing works, living or dead, is purely coincidental. Thus, OpenAI has stated that copyright infringement “is an unlikely accidental outcome.”

The “accidental outcome” defense seems to me like a hard one to swallow in those cases where a prompt involves creating a story involving a specified fictional character that enjoys copyright protection. If the character is distinctive enough — and a piece of work in and of itself, so to speak — to enjoy copyright protection (such as, say, Mr. Spock from the Star Trek series), then any generated output would seem to be an unauthorized derivative work, at least if the AI tool is any good.

If AI output infringes a copyright in an existing work, who would be liable for it? Potentially, the person who entered the prompt might be held liable for direct infringement. The AI tool provider might arguably be liable for contributory infringement.

Issue #2: Does AI Training Infringe Copyrights?

AI systems are “trained” to create works by exposing a computer system to large numbers of existing works downloaded from the Internet.

When content is downloaded from the Internet, a copy of it is made. This process will “involve the reproduction of entire works or substantial portions thereof.” OpenAI, for example, acknowledges that its programs are trained on “large, publicly available datasets that include copyrighted works” and that this process “involves first making copies of the data to be analyzed….” Making these copies without permission may infringe the copyright holders’ exclusive right to make reproductions of their works.

Generative-AI outfits tend to argue that the training process is fair use.

Fair use claims require consideration of four statutory factors:

  • the purpose and character of the use;
  • the nature of the work copied;
  • the amount and substantiality of the portion copied; and
  • the effect on the market for the work.

OpenAI relies on the precedent set in Authors Guild v. Google for its invocation of “fair use.” That case involved Google’s copying of the entire text of books to construct its popular searchable database.

A number of lawsuits currently pending in the courts raise the question whether, and how, the AI training process is “fair use.”

Issue #3: Are AI-Generated Works Protected by Copyright?

The Copyright Act affords copyright protection to “original works of authorship.” The U.S. Copyright Office recognizes copyright only in works “created by a human being.” Courts, too, have declined to extend copyright protection to nonhuman authors. (Remember the monkey selfie case?) A recent copyright registration applicant has filed a lawsuit against the U.S. Copyright Office alleging that the Office wrongfully denied registration of an AI-generated work. A federal court has now rejected his argument that human authorship is not required for copyright ownership.

In March 2023, the Copyright Office released guidance stating that when AI “determines the expressive elements of its output, the generated material is not the product of human authorship.” Moreover, an argument might be made that a general prompt, such as “create a story about a dog in the style of Jack London,” is an idea, not expression. It is well settled that only expression gets copyright protection; ideas do not.

In September 2023, the Copyright Office Review Board affirmed the Office’s refusal to register a copyright in a work that was generated by Midjourney and then modified by the applicant, on the basis that the applicant did not disclaim the AI-generated material.

The Office also has the power to cancel improvidently granted registrations. (Words to the wise: Disclose and disclaim.)

These are my favorite generative-AI legal issues. What are yours?

AI Legislative Update

Congressional legislation to regulate artificial intelligence (“AI”) and AI companies is in the early formative stages. Just about the only thing that is certain at this point is that federal regulation in the United States is coming.

In August, 2023, Senators Richard Blumenthal (D-CT) and Josh Hawley (R-MO) introduced a Bipartisan Framework for U.S. AI Act. The Framework sets out five bullet points identifying Congressional legislative objectives:

  • Establish a federal regulatory regime implemented through licensing AI companies, to include requirements that AI companies provide information about their AI models and maintain “risk management, pre-deployment testing, data governance, and adverse incident reporting programs.”
  • Ensure accountability for harms through both administrative enforcement and private rights of action, where “harms” include privacy and civil rights violations. The Framework proposes making Section 230 of the Communications Decency Act inapplicable to these kinds of actions. (Section 230 is the provision that generally grants immunity to Facebook, Google and other online service providers for user-provided content.) The Framework identifies the harms about which it is most concerned as “explicit deepfake imagery of real people, production of child sexual abuse material from generative A.I. and election interference.” Noticeably absent is any mention of harms caused by copyright infringement.
  • Restrict the sharing of AI technology with Russia, China or other “adversary nations.”
  • Promote transparency: The Framework would require AI companies to disclose information about the limitations, accuracy and safety of their AI models to users; would give consumers a right to notice when they are interacting with an AI system; would require providers to watermark or otherwise disclose AI-generated deepfakes; and would establish a public database of AI-related “adverse incidents” and harm-causing failures.
  • Protect consumers and kids. “Consumer should have control over how their personal data is used in A.I. systems and strict limits should be imposed on generative A.I. involving kids.”

The Framework does not address copyright infringement, whether of the input or the output variety.

The Senate Judiciary Committee Subcommittee on Privacy, Technology, and the Law held a hearing on September 12, 2023. Witnesses called to testify generally approved of the Framework as a starting point.

The Senate Commerce, Science, and Transportation Subcommittee on Consumer Protection, Product Safety and Data Security also held a hearing on September 12, called The Need for Transparency in Artificial Intelligence. One of the witnesses, Dr. Ramayya Krishnan of Carnegie Mellon University, did raise a concern about the use of copyrighted material by AI systems and the economic harm it causes for creators.

On September 13, 2023, Sen. Chuck Schumer (D-NY) held an “AI Roundtable.” Invited attendees present at the closed-door session included Bill Gates (Microsoft), Elon Musk (xAI, Neuralink, etc.), Sundar Pichai (Google), Charlie Rivkin (MPA), and Mark Zuckerberg (Meta). Gates, whose Microsoft company, like those headed by some of the other invitees, has been investing heavily in generative-AI development, touted the claim that AI could help fight world hunger.

Meanwhile, Dana Rao, Adobe’s Chief Trust Officer, penned a proposal that Congress establish a federal anti-impersonation right to address the economic harms generative-AI causes when it impersonates the style or likeness of an author or artist. The proposed law would be called the Federal Anti-Impersonation Right Act, or “FAIR Act,” for short. The proposal would provide for the recovery of statutory damages by artists who are unable to prove actual economic damages.

Generative AI: Perfect Tool for the Age of Deception

For many reasons, the new millennium might well be described as the Age of Deception. Cokato Copyright Attorney Tom James explains why generative-AI is a perfect fit for it.

Image by Gerd Altmann on Pixabay.

What is generative AI?

“AI,” of course, stands for artificial intelligence. Generative AI is a variety of it that can produce content such as text and images, seemingly of its own creation. I say “seemingly” because these AI tools are not really creating images and lines of text independently. Rather, they are “trained” to emulate existing works created by humans. Essentially, they are derivative-work generation machines, enabling the creation of derivative works based on potentially millions of human-created works.

AI has been around for decades. It wasn’t until 2014, however, that the technology began to be refined to the point that it could generate text, images, video and audio so similar to real people and their creations that it is difficult, if not impossible, for the average person to tell the difference.

Rapid advances in the technology in the past few years have yielded generative-AI tools that can write entire stories and articles, seemingly paint artistic images, and even generate what appear to be photographic images of people.

AI “hallucinations” (aka lies)

In the AI field, a “hallucination” occurs when an AI tool (such as ChatGPT) generates a confident response that is not justified by the data on which it has been trained.

For example, I queried ChatGPT about whether a company owned equally by a husband and wife could qualify for the preferences the federal government sets aside for women-owned businesses. The chatbot responded with something along the lines of “Certainly” or “Absolutely,” explaining that the U.S. government is required to provide equal opportunities to all people without discriminating on the basis of sex. When I cited the provision of federal law that contradicts what the chatbot had just asserted, it replied with an apology and something to the effect of “My bad.”

I also asked ChatGPT if any U.S. law imposes unequal obligations on male citizens. The chatbot cheerily reported back to me that no, no such laws exist. I then cited the provision of the United States Code that imposes an obligation to register for Selective Service only upon male citizens. The bot responded that while that is true, it is unimportant and irrelevant because there has not been a draft in a long time and there is not likely to be one anytime soon. I explained to the bot that this response was irrelevant. Young men can be, and are, denied the right to government employment and other civic rights and benefits if they fail to register, regardless of whether a draft is in place or not, and regardless of whether they are prosecuted criminally or not. At this point, ChatGPT announced that it would not be able to continue this conversation with me. In addition, it made up some excuse. I don’t remember what it was, but it was something like too many users were currently logged on.

These are all examples of AI hallucinations. If a human being were to say them, we would call them “lies.”

Generating lie after lie

AI tools regularly concoct lies. For example, when asked to generate a financial statement for a company, a popular AI tool falsely stated that the company’s revenue was some number it apparently had simply made up. According to Slate’s article, “The Alarming Deceptions at the Heart of an Astounding New Chatbot,” users of large language models like ChatGPT have been complaining that these tools randomly insert falsehoods into the text they generate. Experts now consider frequent “hallucination” (aka lying) to be a major problem in chatbots.

ChatGPT has also generated fake case precedents, replete with plausible-sounding citations. This phenomenon made the news when attorney Steven Schwartz submitted six fake ChatGPT-generated case precedents in his brief to the federal district court for the Southern District of New York in Mata v. Avianca. Schwartz reported that ChatGPT continued to insist the fake cases were authentic even after their nonexistence was discovered. The judge proceeded to ban the submission of AI-generated filings that have not been reviewed by a human, saying that generative-AI tools

are prone to hallucinations and bias…. [T]hey make stuff up – even quotes and citations. Another issue is reliability or bias. While attorneys swear an oath to set aside their personal prejudices,… generative artificial intelligence is the product of programming devised by humans who did not have to swear such an oath. As such, these systems hold no allegiance to…the truth.

Judge Brantley Starr, Mandatory Certification Regarding Generative Artificial Intelligence.

Facilitating defamation

Section 230 of the Communications Decency Act generally shields Facebook, Google and other online services from liability for providing a platform for users to publish false and defamatory information about other people. That has been a real boon for people who like to destroy other people’s reputations by means of spreading lies and misinformation about them online. It can be difficult and expensive to sue an individual for defamation, particularly when the individual has taken steps to conceal and/or lie about his or her identity. Generative AI tools make the job of defaming people even simpler and easier.

More concerning than the malicious defamatory liars, however, are the many people who earnestly rely on AI as a research tool. In July, 2023, Mark Walters filed a lawsuit against OpenAI, claiming its ChatGPT tool provided false and defamatory misinformation about him to journalist Fred Riehl. I wrote about this lawsuit in a previous blog post. Shortly after this lawsuit was filed, a defamation lawsuit was filed against Microsoft, alleging that its AI tool, too, had generated defamatory lies about an individual. Generative-AI tools can generate false and defamatory statements about individuals even if no one has any intention of defaming anyone or ruining another person’s reputation.

Facilitating false light invasion of privacy

Generative AI is also highly effective in portraying people in a false light. In one recently filed lawsuit, Jack Flora and others allege, among other things, that Prisma Labs’ Lensa app generates sexualized images from images of fully-clothed people, and that the company failed to notify users about the biometric data it collects and how it will be stored and/or destroyed. Flora et al. v. Prisma Labs, Inc., No. 23-cv-00680 (N.D. Calif. February 15, 2023).

Pot, meet kettle; kettle, pot

“False news is harmful to our community, it makes the world less informed, and it erodes trust. . . . At Meta, we’re working to fight the spread of false news.” Meta (née Facebook) published that statement back in 2017. Since then, it has engaged in what is arguably the most ambitious campaign in history to monitor and regulate the content of conversations among humans. Yet it has also joined other mega-corporations, Google and Microsoft, in investing billions of dollars in what is the greatest boon to fake news in recorded history: generative-AI.

Toward a braver new world

It would be difficult to imagine a more efficient method of facilitating widespread lying and deception (not to mention false and hateful rhetoric) – and therefore propaganda – than generative-AI. Yet, these mega-organizations continue to sink more and more money into further development and deployment of these lie-generators.

I dread what the future holds in store for our children and theirs.

Another AI lawsuit against Microsoft and OpenAI

Last June, Microsoft, OpenAI and others were hit with a class action lawsuit involving their AI data-scraping technologies. On Tuesday (September 5, 2023) another class action lawsuit was filed against them. The gravamen of both of these complaints is that these companies allegedly trained their AI technologies using personal information from millions of users, in violation of federal and state privacy statutes and other laws.

Among the laws alleged to have been violated are the Electronic Communications Privacy Act, the Computer Fraud and Abuse Act, the California Invasion of Privacy Act, California’s unfair competition law, Illinois’s Biometric Information Privacy Act, and the Illinois Consumer Fraud and Deceptive Business Practices Act. The lawsuits also allege a variety of common law claims, including negligence, invasion of privacy, conversion, unjust enrichment, breach of the duty to warn, and such.

This is just the most recent lawsuit in a growing body of claims against big AI. Many involve allegations of copyright infringement, but privacy is a growing concern. This particular suit is asking for an award of monetary damages and an order that would require the companies to implement safeguards for the protection of private data.

Microsoft reportedly has invested billions of dollars in OpenAI and its app, ChatGPT.

The case is A.T. v. OpenAI LP, U.S. District Court for the Northern District of California, No. 3:23-cv-04557 (September 5, 2023).

Is Microsoft “too big to fail” in court? We shall see.

A Recent Exit from Paradise

Over a year ago, Steven Thaler filed an application with the United States Copyright Office to register a copyright in an AI-generated image called “A Recent Entrance to Paradise.” In the application, he listed a machine as the “author” and himself as the copyright owner. The Copyright Office refused registration on the grounds that the work lacked human authorship. Thaler then filed a lawsuit in federal court seeking to overturn that determination. On August 18, 2023 the court upheld the Copyright Office’s refusal of registration. The case is Thaler v. Perlmutter, No. CV 22-1564 (BAH), 2023 WL 5333236 (D.D.C. Aug. 18, 2023).

Read more about the history of this case in my previous blog post, “A Recent Entrance to Complexity.”

The Big Bright Green Creativity Machine

In his application for registration, Thaler had listed his computer, referred to as “Creativity Machine,” as the “author” of the work, and himself as a claimant. The Copyright Office denied registration on the basis that copyright only protects human authorship.

Taking the Copyright Office to court

Unsuccessful in securing a reversal through administrative appeals, Thaler filed a lawsuit in federal court claiming the Office’s denial of registration was “arbitrary, capricious, an abuse of discretion and not in accordance with the law….”

The court ultimately sided with the Copyright Office. In its decision, it provided a cogent explanation of the rationale for the human authorship requirement:

The act of human creation—and how to best encourage human individuals to engage in that creation, and thereby promote science and the useful arts—was thus central to American copyright from its very inception. Non-human actors need no incentivization with the promise of exclusive rights under United States law, and copyright was therefore not designed to reach them.

Id.

A Complex Issue

As I discussed in a previous blog post, the issue is not as simple as it might seem. There are different levels of human involvement in the use of an AI content generating mechanism. At one extreme, there are programs like “Paint,” in which users provide a great deal of input. These kinds of programs may be analogized to paintbrushes, pens and other tools that artists traditionally have used to express their ideas on paper or canvas. Word processing programs are also in this category. It is easy to conclude that the users of these kinds of programs are the authors of works that may be sufficiently creative and original to receive copyright protection.

At the other end of the spectrum are AI services like DALL-E and ChatGPT. These tools are capable of generating content with very little user input. If the only human input is a user’s directive to “Draw a picture,” then it would be difficult to claim that the user contributed any creative expression. That is to say, it would be difficult to claim that the user authored anything.

The difficult question – and one that is almost certain to be the subject of ongoing litigation and probably new Copyright Office regulations – is exactly how much, and what kind of, human input is necessary before a human can claim authorship. Equally perplexing is how much, if at all, the Copyright Office should involve itself in ascertaining and evaluating the details of the process by which a work was created. And, of course, what consequences should flow from an applicant’s failure to disclose complete details about the nature and extent of machine involvement in the creative process.

Conclusion

The court in this case did not dive into these issues. The only thing we can safely take away from this decision is the broad proposition that a work is not protected by copyright to the extent it is generated by a machine.
