Voice Cloning

Painting of Nipper by Francis Barraud (1898-99); subsequently used as a trademark with “His Master’s Voice.”

Lehrman v. Lovo, Inc.

On July 10, 2025, the federal district court for the Southern District of New York issued an Order granting in part and denying in part a motion to dismiss a putative class action lawsuit that Paul Lehrman and Linnea Sage commenced against Lovo, Inc. The lawsuit, Lehrman v. Lovo, Inc., alleges that Lovo used artificial intelligence to make and sell unauthorized “clones” of their voices.

Specifically, the complaint alleges that the plaintiffs are voice-over actors. For a fee, they read and record scripts for their clients. Lovo allegedly sells a text-to-speech subscription service that allows clients to generate voice-over narrations. The service is described as one that uses “AI-driven software known as ‘Generator’ or ‘Genny,'” which was “created using ‘1000s of voices.'” Genny allegedly creates voice clones, i.e., copies of real people’s voices. Lovo allegedly granted its customers “commercial rights for all content generated,” including “any monetized, business-related uses such as videos, audio books, advertising promotion, web page vlogging, or product integration.” (Lovo terms of service.) The complaint alleges that Lovo hired the plaintiffs to provide voice recordings for “research purposes only,” but that Lovo proceeded to exploit them commercially by licensing their use to Lovo subscribers.

This lawsuit ensued.

The complaint sets out claims for:

  • Copyright infringement
  • Trademark infringement
  • Breach of contract
  • Fraud
  • Conversion
  • Unjust enrichment
  • Unfair competition
  • New York civil rights laws
  • New York consumer protection laws.

The defendant moved to dismiss the complaint for failure to state a claim.

The copyright claims

Sage alleged that Lovo infringed the copyright in one of her voice recordings by reproducing it in presentations and YouTube videos. The court allowed this claim to proceed.

Plaintiffs also claimed that Lovo’s unauthorized use of their voice recordings in training its generative-AI product infringed their copyrights in the sound recordings. The court ruled that the complaint did not contain enough factual detail about how the training process infringed one of the exclusive rights of copyright ownership. Therefore, it dismissed this claim with leave to amend.

The court dismissed the plaintiffs’ claims of output infringement, i.e., claims that the “cloned” voices the AI tool generated infringed copyrights in the original sound recordings.

Copyright protection in a sound recording extends only to the actual recording itself. Fixation of sounds that imitate or simulate the ones captured in the original recording does not infringe the copyright in the sound recording.

This issue often comes up in connection with copyrights in music recordings. If Chuck Berry writes a song called “Johnny B. Goode” and records himself performing it, he will own two copyrights – one in the musical composition and one in the sound recording. If a second person then records himself performing the same song, and he doesn’t have a license (compulsory or otherwise) to do so, that person would be infringing the copyright in the music but not the copyright in the sound recording. This is true even if he is very good at imitating Berry’s voice and guitar work. For a claim of sound recording infringement to succeed, it must be shown that the actual recording itself was copied.

Plaintiffs did not allege that Lovo used Genny to output AI-generated reproductions of their original recordings. Rather, they alleged that Genny is able to create new recordings that mimic attributes of their voices.

The court added that the sound of a voice is not copyrightable expression, and even if it were, the plaintiffs had registered claims of copyright in their recordings, not in their voices.

The trademark claims

In addition to infringement, the Lanham Act creates two other potential bases of trademark liability: (1) false association; and (2) false advertising. 15 U.S.C. sec. 1125(a)(1)(A) and (B). Plaintiffs asserted both kinds of claims. The judge dismissed these claims.

False association

The Second Circuit Court of Appeals recently held, in Electra v. 59 Murray Enter., Inc. and Souza v. Exotic Island Enters., Inc., that using a person’s likeness to create an endorsement without the person’s permission can constitute a “false association” violation. In other words, a federally-protected, trademark-like interest in one’s image, likeness, personality and identity exists. (See, e.g., Jackson v. Odenat.)

Although acknowledging that this right extends to one’s voice, the judge ruled that the voices in this case did not function as trademarks. They did not identify the source of a product or service. Rather, they were themselves the product or service. For this reason, the judge ruled that the plaintiffs had failed to show that their voices, as such, are protectable trademarks under Section 43(a)(1)(A) of the Lanham Act.

False Advertising

Section 43(a)(1)(B) of the Lanham Act (codified at 15 U.S.C. sec. 1125(a)(1)(B)) prohibits misrepresentations about “the nature, characteristics, qualities, or geographic origin of . . . goods, services, or commercial activities.” The plaintiffs claimed that Lovo marketed their voices under different names (“Kyle Snow” and “Sally Coleman.”) The court determined that this was not fraudulent, however, because Lovo marketed them as what they were, namely, synthetic clones of the actors’ voices, not as their actual voices.

Plaintiffs also claimed that Lovo’s marketing materials falsely stated that the cloned voices “came with all commercial rights.” They asserted that they had not granted those rights to Lovo. The court ruled, however, that even if Lovo was guilty of misrepresentation, it was not the kind of misrepresentation that comes within Section 43(a)(1)(B), as it did not concern the nature, characteristics, qualities, or geographic origin of the voices.

State law claims

Although the court dismissed the trademark claims and most of the copyright claims, it allowed some state law claims to proceed. Specifically, the court denied the motion to dismiss claims for breach of contract, violations of sections 50 and 51 of the New York Civil Rights Law, and violations of New York consumer protection law.

Both the common law and the New York Civil Rights Law prohibit the commercial use of a living person’s name, likeness or voice without consent. Known as “misappropriation of personality” or violation of publicity or privacy rights, this is emerging as one of the leading issues in AI law.

The court also allowed state law claims of false advertising and deceptive trade practices to proceed. The New York laws are not subject to the “nature, characteristics, qualities, or geographic origin” limitation set out in Section 43(a) of the Lanham Act.

Conclusion

I expect this case will come to be cited for the rule that copyright cannot be claimed in a voice. Copyright law protects only expression, not a person’s corporeal attributes. The lack of copyright protection for a person’s voice, however, does not mean that voice cloning is “legal.” Depending on the particular facts and circumstances, it may violate one or more other laws.

It also should be noted that after the Joe Biden voice-cloning incident of 2024, states have been enacting statutes regulating the creation and distribution of voice clones. Even where a specific statute is not applicable, though, a broader statute (such as the FTC Act or a similar state law) might cover the situation.

Images and references in this blog post are for illustrative purposes only. No endorsement, sponsorship or affiliation with any person, organization, company, brand, product or service is intended, implied, or exists.

Official portrait of Vice President Joe Biden in his West Wing Office at the White House, Jan. 10, 2013. (Official White House Photo by David Lienemann)

AI OK; Piracy Not: Bartz v. Anthropic

A federal judge has issued a landmark fair use decision in a generative-AI copyright infringement lawsuit.

In a previous blog post, I wrote about the fair use decision in Thomson Reuters v. ROSS. As I explained there, that case involved a search-and-retrieval AI system, so the holding was not determinative of fair use in the context of generative AI. Now we finally have a decision that addresses fair use in the generative-AI context.

Bartz et al. v. Anthropic PBC

Anthropic is an AI software firm founded by former OpenAI employees. It offers a generative-AI tool called Claude. Like other generative-AI tools, Claude mimics human conversational skills. When a user enters a text prompt, Claude will generate a response that is very much like one a human being might make (except it is sometimes more knowledgeable.) It is able to do this by using large language models (LLMs) that have been trained on millions of books and texts.

Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson are book authors. In August 2024, they sued Anthropic, claiming the company infringed the copyrights in their works. Specifically, they alleged that Anthropic copied their works from pirated and purchased sources, digitized print versions, assembled them into a central library, and used the library to train LLMs, all without permission. Anthropic asserted, among other things, a fair use defense.

Earlier this year, Anthropic filed a motion for summary judgment on the question of fair use.

On June 23, 2025, Judge Alsup issued an Order granting summary judgment in part and denying it in part. It is the first major ruling on fair use in the dozens of generative-AI copyright infringement lawsuits that are currently pending in federal courts.

The Order includes several key rulings.

Books

Digitization

Anthropic acquired both pirated and lawfully purchased printed copies of copyright-protected works and digitized them to create a central e-library. Authors claimed that making digital copies of their works infringed the exclusive right of copyright owners to reproduce their works. (See 17 U.S.C. 106.)

In the process of scanning print books to create digital versions of them, the print copies were destroyed. Book bindings were stripped so that each individual page could be scanned. The print copies were then discarded. The digital copies were not distributed to others. Under these circumstances, the court ruled that making digital versions of print books is fair use.

The court likened format to a frame around a work, as distinguished from the work itself. As such, a digital version is not a new derivative work. Rather, it is a transformative use of an existing work. So long as the digital version is merely a substitute for a print version a person has lawfully acquired, and so long as the print version is destroyed and the digital version is not further copied or distributed to others, then digitizing a printed work is fair use. This is consistent with the first sale doctrine (17 U.S.C. 109(a)), which gives the purchaser of a copy of a work a right to dispose of that particular copy as the purchaser sees fit.

In short, the mere conversion of a lawfully acquired print book to a digital file to save space and enable searchability is transformative, and so long as the print version is destroyed and the digital version is not further copied or distributed, it is fair use.

AI Training Is Transformative Fair Use

The authors did not contend that Claude generated infringing output. Instead, they argued that copies of their works were used as inputs to train the AI. The Copyright Act, however, does not prohibit or restrict the reading or analysis of copyrighted works. So long as a copy is lawfully purchased, the owner of the purchased copy can read it and think about it as often as he or she wishes.

[I]f someone were to read all the modern-day classics because of their exceptional expression, memorize them, and then emulate a blend of their best writing, would that violate the Copyright Act? Of course not.

Order.

Judge Alsup described AI training as “spectacularly” transformative. Id. After considering all four fair use factors, he concluded that training AI on lawfully acquired copyright-protected works (as distinguished from the initial acquisition of copies) is fair use.

Pirating Is Not Fair Use

In addition to lawfully purchasing copies of some works, Anthropic also acquired infringing copies of works from pirate sites. Judge Alsup ruled that these, and uses made from them, are not fair use. The case will now proceed to trial on the issue of damages resulting from the infringement.

Conclusion

Each of these rulings seems, well, sort of obvious. It is nice to have the explanations laid out so clearly in one place, though.

When Your Car Is a Character

Carroll Shelby Licensing v. Halicki et al.

If you’re like me, you’ve probably owned a car with character, or even several cars with character, at some time in your life. A used Volkswagen Jetta with a replacement alternator that was held in place with washers. An old Plymouth Duster with a floor and doors that rusted clean through before the slant-6 ever had a problem. A Honda Fit that . . . well, this is probably a good place to stop dredging up memories. This post isn’t about cars with character. It’s about cars as characters. Specifically, the question whether it is possible to claim copyright protection in a car that appears in a book, movie, song, or other work.

The Ninth Circuit Court of Appeals had occasion to address this very question in Carroll Shelby Licensing et al. v. Halicki et al., No. 23-3731 (9th Cir. May 27, 2025).

Gone in 60 Seconds and Sequels

In the 1974 movie, Gone in 60 Seconds, the protagonist is tasked with stealing forty-eight cars. He and his colleagues assign them names. They call the Ford Mustang with black stripes “Eleanor.” Action ensues.

Three movies incorporating elements of this one were made and released thereafter — The Junkman, Deadline Auto Theft, and a year 2000 remake of Gone in 60 Seconds. A car that was made to look like the Mustang in the original Gone in 60 Seconds appeared in these movies, as well. The message, “‘Eleanor’ from the movie Gone in 60 Seconds” was painted on its side.

Shelby contracted with Classic Recreations to produce “GT-500CR” Mustangs. Without going into all of the contractual and procedural details, the owner of the copyright in the first three movies eventually asserted a claim of copyright infringement, raising the question whether copyright can be claimed in “Eleanor,” the Ford Mustang car that appeared in the movies.

Character Copyrights

Fictional works generally are eligible for copyright protection. Sometimes copyright protection extends to fictional characters within those works as well. Mickey Mouse and Godzilla are examples.

NOTE: This blog post was not produced, sponsored or endorsed by Disney, and is not affiliated with Disney or any person, company or organization affiliated or associated with Disney.

The test for independent character copyright protection is set out in DC Comics v. Towle. In sum, the character must:

  1. have both physical and conceptual qualities;
  2. be “sufficiently delineated” to be recognizable as the same character whenever it appears; and
  3. be “especially distinctive” with “some unique elements of expression.”

The Ninth Circuit Court of Appeals held that “Eleanor” failed to meet any of these criteria.

1. Physical and conceptual qualities

Eleanor had physical qualities, but the Court held that it lacked conceptual qualities. Conceptual qualities include “anthropomorphic qualities, acting with agency and volition, displaying sentience and emotion, expressing personality, speaking, thinking, or interacting with other characters or objects.” Shelby, supra. The character does not have to be human. It can be almost anything, so long as it has some of the above traits. Thus, the Batmobile could qualify.

The Court determined, however, that Eleanor the car lacked any of these conceptual qualities, likening her to a prop rather than a character.

2. Sufficient delineation

Here, the Court determined that Eleanor lacked consistent traits. In some iterations, Eleanor appeared as a yellow and black Fastback Mustang; in others, as a gray and black Shelby GT-500 Mustang, or a rusty, paintless Mustang. The Court concluded that Eleanor was too lightly sketched to satisfy the “sufficient delineation” test.

3. Unique elements of expression

Having no regard at all for Eleanor’s feelings, the Court declared, “Nothing distinguishes Eleanor from any number of sports cars appearing in car-centric action films.” According to the Court, she was just a run-of-the-mill automobile. Accordingly, she failed the distinctiveness test.

Quiz

Just for fun, try your hand at applying the Towle test to determine which of these might qualify for copyright protection and which ones don’t:

  1. Chuck Berry’s Maybellene
  2. My Mother the Car
  3. Prince’s Little Red Corvette
  4. Chitty Chitty Bang Bang
  5. Christine
  6. KITT in Knight Rider
  7. The Magic School Bus
  8. Thomas the Tank Engine
  9. Gumdrop
  10. Dick Turpin
  11. Truckster in National Lampoon’s Vacation
  12. The Bluesmobile in The Blues Brothers
  13. The Hearse
  14. Herbie in The Love Bug
  15. The DeLorean in Back to the Future
  16. The Gnome-Mobile
  17. Ecto-1 in Ghostbusters
  18. Bessie in Doctor Who
  19. General Lee in The Dukes of Hazzard
  20. The Munster Coach in The Munsters
  21. Shellraiser in Teenage Mutant Ninja Turtles
  22. Benny the Cab in Who Framed Roger Rabbit?
  23. Lightning McQueen in Cars
  24. The Mystery Machine in Scooby-Doo
  25. The Gadgetmobile in Inspector Gadget
  26. Mustang Sally
  27. Killdozer
  28. Ivor the Engine
  29. Tootle
  30. Roary the Racing Car.

Give yourself 3,500 extra points if you are familiar with all of these references.

Points are not redeemable for value.

Concluding Thought

Even if a fictional character does not qualify for copyright protection, it might be protected as a trademark in some cases. The requirements for trademark protection are a subject for another day.

Photographers’ Rights

The Second Circuit Court of Appeals reversed a trial judge’s dismissal of a photographer’s copyright infringement complaint, holding that because “fair use” was not clearly established on the face of the complaint, the district court should not have dismissed the complaint sua sponte. Romanova v. Amilus, Inc.

Romanova v. Amilus, Inc., No. 23-828 (2nd Cir., May 23, 2025)

European grass snake photograph illustrating copyright infringement article by Cokato Minnesota attorney Thomas James

Photographer Jana Romanova created a photograph of a woman with a snake wrapped around her left hand and another snake crawling up her torso. (Not the one pictured here.) She licensed it to National Geographic Magazine for a single use. According to the complaint, Amilus, Inc. allegedly made a copy of the photograph and published it to its website. Romanova allegedly sent notifications demanding the removal of the photograph from the website. The defendant allegedly did not respond. This lawsuit followed.

The defendant allegedly did not appear or respond to the complaint, so Romanova moved for the entry of default judgment. Rather than grant a default judgment, however, the district court judge sua sponte ordered Romanova to show cause why the court should not dismiss the case on the grounds that the defendant’s use of the photograph was fair use. Although fair use is an affirmative defense, which defendants have the burden of asserting and proving, the judge opined that the defense did not need to be pleaded because, in the judge’s view, it was “clearly established on the face of the complaint.”

Romanova appealed. The Second Circuit Court of Appeals reversed, effectively allowing the infringement claim to go forward.

Fair Use

In its decision, the Second Circuit Court of Appeals clarified how courts are to interpret and apply the four-factor “fair use” test outlined in the Copyright Act, 17 U.S.C. § 107 (purpose and character of the use; nature of the work; amount and substantiality of the portion copied; and the effect on the market for the work.)

The district court concluded that the defendant’s publication of the photograph communicated a different message than what the photographer intended. According to the district court, the purpose of the publication in the National Geographic was “to showcase persons in [her] home country of Russia that kept snakes as pets, specifically to capture pet snakes in common environments that are more associated with mainstream domesticated animals.” The district court found that the purpose of the defendant’s publication was to communicate a message about “the ever-increasing amount of pet photography circulating online.”

Apparently the district court was under the impression that the use of a copyright-protected work for any different purpose, or to communicate any different message, is “transformative” and therefore “fair use.” The Court of Appeals clarified that is not the case. In addition to alleging and proving the use was for a different purpose or conveyed a different meaning, a defendant seeking to establish a fair use defense must also allege and prove a justification for the copying.

Examples of purposes that may justify copying a work include commentary or criticism of the copied work, or providing information to the public about the copied work, in circumstances where the copy does not become a substitute for the work. (See, e.g., Authors Guild v. Google, Inc., 804 F.3d 202, 212 (2d Cir. 2015).) Copying for evidentiary purposes (such as to support a claim that the creator of the work published a defamatory statement) can also be a valid justification to support a fair use defense. Creating small, low-resolution copies of images (“thumbnails”) may be justified when the purpose is to facilitate Internet searching. (Perfect 10 v. Amazon.com, 508 F.3d 1146, 1165 (9th Cir. 2007).) Facilitating blind people’s access to a work may provide a justification for converting it into a format that blind people can read. (Authors Guild v. HathiTrust, 755 F.3d 87, 97 (2d Cir. 2014).)

The Court cited other examples of potential justifications for copying. The Court admonished, however, that the question whether justification exists is a fact-specific determination that must be made on a case-by-case basis.

[J]ustification is often found when the copying serves to critique, or otherwise comment on, the original, or its author, but can also be found in other circumstances, such as when the copying provides useful information about the original, or on other subjects, usually in circumstances where the copying does not make the expressive content of the original available to the public.

Romanova, supra.

The only “justification” the district court cited for the copying was that it believed the defendant merely wanted to illustrate its perception of a growing trend to publish photographs of people with pets. “Little could remain of an author’s copyright protection if others could secure the right to copy and distribute a work simply by asserting some fact about the copied work,” the Court observed. The defendant’s publication of the copy did not communicate criticism or commentary on the original photograph or its author, or any other subject, the Court held.

The Court held that the remaining three fair use factors also militated against a finding of fair use.

Sua Sponte Dismissal for “Fair Use”

Judge Sullivan filed a concurring opinion. He would have reversed on procedural grounds without reaching the substantive issue. Specifically, Judge Sullivan objected to the trial judge’s raising of the fair use defense sua sponte on behalf of a non-appearing defendant. Normally, if a complaint establishes a prima facie case for relief, the court does not consider affirmative defenses (such as fair use) unless the defendant asserts them. That is to say, fair use is an affirmative defense; the defendant, not the plaintiff, bears the burden of proof.

Conclusion

Appeals courts continue to rein in overly expansive applications of “transformative” fair use by the lower courts. Here, the Court of Appeals soundly reasoned that merely being able to articulate an additional purpose served by publishing an author’s entire work, unchanged, will not, by itself, suffice to establish either transformative use or fair use.

Copyrights in AI-Generated Content

Copyright registrations are being issued for works created with generative-AI tools, subject to some important qualifications. Also, Internet Archive revisited (briefly).

The U.S. Copyright Office has issued its long-awaited report on the copyrightability of works created using AI-generated output. The legality of using copyrighted works to train generative-AI systems is a topic for another day.

Key takeaways:

  • Copyright protects the elements of a work that are created by a human, but does not protect elements that were AI-generated (probably the key take-away from the Report)
  • The Copyright Office believes existing law is adequate to deal with AI copyright issues; it does not believe any new legislation is needed
  • Using AI to assist in the creative process does not affect copyrightability
  • Prompts do not provide sufficient control over the output to be considered creative works
  • Protection exists for the following, if they involve sufficient human creativity:
    • Selection, coordination, and arrangement of AI-generated output
    • Modification of AI-generated content
    • Human-created elements distinguishable from AI-generated elements.

Prompts

A key question for the Copyright Office was whether a highly detailed prompt could suffice as human creative expression. The Office says no; “[P]rompts alone do not provide sufficient human control to make users of an AI system the authors of the output. Prompts essentially function as instructions that convey unprotectable ideas. While highly detailed prompts could contain the user’s desired expressive elements, at present they do not control how the AI system processes them in generating the output.”

How much control does a human need over the output-generation process to be considered an author? The answer, apparently, is “so much control that the AI mechanism’s contribution was purely rote or mechanical.” The Report adds, “The fact that identical prompts can generate multiple different outputs further indicates a lack of human control.”

Expressive prompts

If the prompt itself is sufficiently creative and original, the expression contained in the prompt may qualify for copyright protection. For example, if a user prompts an AI tool to change a story from first-person to third-person point of view, and includes the first-person version in the prompt, then copyright may be claimed in the story that was included in the prompt. The author could claim copyright in the story as a “human-generated element” distinguishable from anything AI thereafter did to it. The human-created work must be perceptible in the output.

Registration of hybrid works

The U.S. Copyright Office has now issued several registrations for works that contain a combination of both human creative expression and AI-generated output. Examples:

Irontic, LLC has a registered copyright in Senzia Opera, a sound recording with “music and singing voices by [sic] generated by artificial intelligence,” according to the copyright registration. That material is excluded from the claim. The registration, however, does provide protection for the story, lyrics, spoken words, and the selection, coordination, and arrangement of the sound recording.

Computer programs can be protected by copyright, but if any source code was generated by AI, it must be excluded from the claim. Thus, the Adobe GenStudio for Performance Marketing computer program is protected by copyright, but any source code in it that was AI-generated is not.

A record company received a copyright registration for human additions and modifications to AI-generated art.

As an example of a “selection, coordination and arrangement” copyright, there is the registration of a work called “A Collection of Objects Which Do Not Exist,” consisting of a collage of AI-generated images. “A Single Piece of American Cheese,” is another example of a registered copyright claim based on the selection, coordination, or arrangement of AI-generated elements.

China

A Chinese court has taken a contrary position, holding that an AI-generated image produced by Stable Diffusion is copyrightable because the prompts the plaintiff chose reflected his aesthetic choices.

Internet Archive Postscript

In January, the Second Circuit Court of Appeals affirmed the decision in Hachette Book Group, Inc. v. Internet Archive. This came as no surprise. A couple of important things that bear repeating came out of this decision, though.

First, the Court of Appeals reaffirmed that fair use is an affirmative defense. As such, the defendant bears the burden of establishing the level of market harm the use has caused or may cause. While a copyright owner may reasonably be required to identify relevant markets, he/she/it is not required to present empirical data to support a claim of market harm. The defendant bears the burden of proof of a fair use defense, including proof pertinent to each of the four factors comprising the defense.

Confusion seems to have crept into some attorneys’ and judges’ analysis of the issue. This is probably because it is well known that the plaintiff bears the burden of proof of damages, which can also involve evidence of market harm. The question of damages, however, is separate and distinct from the “market harm” element of a fair use defense.

The second important point the Second Circuit made in Hachette is that the “public benefit” balancing that Justice Breyer performed in Google LLC v. Oracle America, Inc. needs to focus on something more than just the short-term benefits to the public in getting free access to infringing copies of works. Otherwise, the “public benefit” in getting free copies of copyright-protected stuff would outweigh the rights of copyright owners every time. The long-term benefits of protecting the rights of authors must also be considered.

True, libraries and consumers may reap some short-term benefits from access to free digital books, but what are the long-term consequences? [Those consequences, i.e.,] depriv[ing] publishers and authors of the revenues due to them as compensation for their unique creations [outweigh any public benefit in having free access to copyrighted works.]

Id.

They reined in Google v. Oracle.

Thomas James is a human. No part of this article was AI-generated.

Fair Use Decision in Thomson Reuters v. Ross

A court has handed down the first known ruling (to me, anyway) on “fair use” in the wave of copyright infringement lawsuits against AI companies that are pending in federal courts. The ruling came in Thomson Reuters v. Ross. Thomson Reuters filed this lawsuit against Ross Intelligence back in 2020, alleging that Ross trained its AI models on Westlaw headnotes to build a competing legal research tool, infringing numerous copyrights in the process. Ross asserted a fair use defense.

In 2023, Thomson Reuters sought summary judgment against Ross on the fair use defense. At that time, Judge Bibas denied the motion. This week, however, the judge reversed himself, knocking out at least a major portion of the fair use defense.

Ross had argued that Westlaw headnotes are not sufficiently original to warrant copyright protection and that even if they are, the use made of them was “fair use.” After painstakingly reviewing the headnotes and comparing them with the database materials, Judge Bibas concluded that 2,243 headnotes were sufficiently original to receive copyright protection, that Ross infringed them, and that “fair use” was not a defense in this instance because the purpose of the use was commercial and it competed in the same market with Westlaw. Because of that, it was likely to have an adverse impact on the market for Westlaw.

While this might seem to spell the end for AI companies in the many other lawsuits where they are relying on a “fair use” defense, that is not necessarily so. As Judge Bibas noted, the Ross AI was non-generative. Generative AI tools may be distinguishable in the fair use analysis.

I will be presenting a program on Recent Developments in AI Law in New Jersey this summer. This one certainly will merit mention. Whether any more major developments will come to pass between now and then remains to be seen.

New AI Copyright Infringement Lawsuit

Another copyright and trademark infringement lawsuit against an AI company was filed this week. This one pits news article publishers Advance Local Media, Condé Nast, The Atlantic, Forbes Media, The Guardian, Business Insider, LA Times, McClatchy Media Company, Newsday, Plain Dealer Publishing Company, POLITICO, The Republican Company, Toronto Star Newspapers, and Vox Media against AI company Cohere.

The complaint alleges that Cohere made unauthorized use of publisher content in developing and operating its generative AI systems, infringing numerous copyrights and trademarks. The plaintiffs are seeking an injunction and monetary damages.

What Is In the Public Domain?

How to determine what is in the public domain in the United States, explained by attorney Thomas B. James

Thomas B James on "Public Domain"

Creative expressions generally are protected by copyright law. Sometimes, however, they are not. When that is the case, a work is said to be in “the public domain.”

The rules specifying the conditions for copyright protection vary from country to country. In the United States, they are set out in the Copyright Act, which is codified in Title 17 of the United States Code. The fact that a work is or is not in the public domain in the United States, however, is not determinative of its public domain status in another country. A work that is in the public domain in the United States might still be protected by copyright in another country.

This blog post focuses on the public domain rules set out in U.S. copyright law.

The 3 ways a work enters the public domain

There are three reasons a work may be in the public domain:

  • It was never protected by copyright. Some kinds of expression do not receive copyright protection. Federal government publications created by federal employees, for example, are not protected by copyright.
  • Failure to comply with a formal requirement. At one time, it was possible to lose copyright protection by failing to comply with a legal requirement, such as the requirement to display a copyright notice on a published work.
  • Expiration of the copyright term. Unlike trademarks, copyrights are time-limited. That is to say, the duration of a copyright is limited to a specified term. Congress has altered the durations of copyrights several times.

It is important to keep in mind that once a work enters the public domain, the copyright is gone. This is true even if copyright was lost only because of failure to comply with a formal requirement that has since been abolished. For example, if a work was published in 1976 without a copyright notice, it entered the public domain. The elimination of the copyright notice requirement in 1989 did not have the effect of reviving it. A few very limited exceptions exist, but in general, the elimination of a formal requirement does not have the effect of reviving copyrights in works that have already entered the public domain.

Guidelines for determining the copyright term

The following rules can be used to determine whether a work of a kind that is eligible for copyright protection has entered the public domain. (A rough code sketch applying these rules appears after the lists below.)

Different sets of rules apply to sound recordings, architectural works, and works first published outside the United States by a foreign national or a U.S. citizen living abroad. They are not covered in this blog post.

Note that the term of a copyright runs through the end of the calendar year in which it would otherwise expire. That is to say, a work enters the public domain on the first day of the year following the expiration of its term.

Unpublished and unregistered works

General rule: Life of the author + 70 years. If the author’s date of death is not known, then the term is 120 years from the date of creation.

Anonymous or pseudonymous works: 120 years from the date of creation.

Works made for hire: 120 years from the date of creation.

Works registered or first published in the US

Before 1929

All works registered or first published in the United States before 1929 are in the public domain now.

1929 to 1963

  • Published without a copyright notice: In the public domain.
  • Published with a copyright notice, but not renewed: In the public domain.
  • Published with a copyright notice, and renewed: 95 years after the first publication date.

1964 to 1977

  • Published without a copyright notice: In the public domain.
  • Published with a copyright notice: 95 years after the first publication date.

1978 to March 1, 1989

  • Created before 1978 and first published, with a copyright notice, between 1978 and March 1, 1989: Either 70 years after the death of the author or December 31, 2047, whichever occurs later. (For works made for hire, it is (a) 95 years after the date of first publication or 120 years after creation, whichever occurs first, or (b) December 31, 2047, whichever occurs later.)
  • Created after 1977 and published with a copyright notice: 70 years after the death of the author (For works made for hire, it is 95 years after the date of first publication or 120 years after creation, whichever occurs first.)
  • Published without a copyright notice, and without subsequent registration within 5 years: In the public domain.
  • Published without a copyright notice but with subsequent registration within 5 years: Life of the author + 70 years (For works made for hire, it is 95 years after first publication or 120 years after creation, whichever occurs first.)

March 1, 1989 to 2002

  • Created before 1978 and first published between March 1, 1989 and 2002: The greater of (a) the life of the author + 70 years (for works made for hire, it is 95 years after first publication or 120 years after creation, whichever occurs first); or (b) December 31, 2047.
  • Created after 1977: Life of the author + 70 years (For works made for hire, it is 95 years after first publication or 120 years after creation, whichever occurs first.)

After 2002

  • Life of the author + 70 years
  • Works made for hire: 95 years after the date of publication or 120 years after the date of creation, whichever occurs first.
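Just for illustration, here is a rough Python sketch that applies the guidelines above to works registered or first published in the United States. The Work class, its field names, and the year-level simplifications (for example, treating the 1978 through March 1, 1989 window as 1978 through 1988, and lumping anonymous published works in with the general rules) are my own shorthand, not anything drawn from the Copyright Act itself, and the sketch ignores sound recordings, architectural works, and foreign publication, which follow different rules (see below). Treat it as a back-of-the-envelope aid, not legal advice.

from dataclasses import dataclass
from typing import Optional

# Rough sketch only. Years are used in place of exact dates, and the special
# rules for sound recordings, architectural works, foreign publication, and
# anonymous/pseudonymous published works are NOT handled here.

@dataclass
class Work:
    created: int                       # year of creation
    published: Optional[int] = None    # year of first U.S. publication (None = unpublished)
    notice: bool = False               # published with a copyright notice?
    renewed: bool = False              # renewal filed (matters for 1929-1963 publications)
    registered_within_5_years: bool = False  # cures a missing notice, 1978 to Feb. 1989
    author_died: Optional[int] = None  # year of author's death (None = unknown)
    work_for_hire: bool = False

def term_end(work: Work) -> Optional[int]:
    """Return the last calendar year of protection, or None if the work is already
    in the public domain (pre-1929 publication or a fatal notice/renewal defect).
    Because terms run through year-end, the work enters the public domain on
    January 1 of the year after the one returned."""

    def life_plus_70() -> int:
        if work.work_for_hire or work.author_died is None:
            return work.created + 120            # 120 years from creation
        return work.author_died + 70

    def wfh_or_life() -> int:
        if work.work_for_hire:
            return min(work.published + 95, work.created + 120)
        return life_plus_70()

    if work.published is None:                   # unpublished and unregistered
        return life_plus_70()
    if work.published < 1929:
        return None                              # already public domain
    if work.published <= 1963:
        if not (work.notice and work.renewed):
            return None                          # no notice, or never renewed
        return work.published + 95
    if work.published <= 1977:
        return work.published + 95 if work.notice else None
    if work.published <= 1988:                   # roughly 1978 through Feb. 1989
        if not work.notice and not work.registered_within_5_years:
            return None
        base = wfh_or_life()
        return max(base, 2047) if work.created < 1978 else base
    if work.published <= 2002:                   # March 1989 through 2002
        base = wfh_or_life()
        return max(base, 2047) if work.created < 1978 else base
    return wfh_or_life()                         # first published after 2002

# Example: a novel published in 1940 with notice and a timely renewal is
# protected through 2035 (1940 + 95) and enters the public domain on
# January 1, 2036.
print(term_end(Work(created=1939, published=1940, notice=True, renewed=True)))  # 2035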

Sound recordings, architecture, and foreign works

The foregoing rules do NOT apply to sound recordings, architectural works, and works that were first published outside the United States by a foreign national or a U.S. citizen living abroad. Special sets of rules apply when determining the public domain status of those kinds of works.

Contact attorney Tom James for copyright help

Contact Tom James (“The Cokato Copyright Attorney”) for copyright help.

Generative-AI as Unfair Trade Practice

While Congress and the courts grapple with generative-AI copyright issues, the FTC weighs in on the risks of unfair competition, monopolization, and consumer deception.

Federal Trade Commission headline as illustration for Thomas James article
FTC Press Release excerpt

While Congress and the courts are grappling with the copyright issues that AI has generated, the federal government’s primary consumer watchdog has made a rare entry into the realm of copyright law. The Federal Trade Commission (FTC) has filed a Comment with the U.S. Copyright Office suggesting that generative-AI could be (or be used as) an unfair or deceptive trade practice. The Comment was filed in response to the Copyright Office’s request for comments as it prepares to begin rule-making on the subject of artificial intelligence (AI), particularly, generative-AI.

Monopolization

The FTC is responsible for enforcing the FTC Act, which broadly prohibits “unfair or deceptive” practices. The Act protects consumers from deceptive and unscrupulous business practices. It is also intended to promote fair and healthy competition in U.S. markets. The Supreme Court has held that all violations of the Sherman Act also violate the FTC Act.

So how does generative-AI raise monopolization concerns? The Comment suggests that incumbents in the generative-AI industry could engage in anti-competitive behavior to ensure continuing and exclusive control over the use of the technology. (More on that here.)

The agency cited the usual suspects: bundling, tying, exclusive or discriminatory dealing, mergers, acquisitions. Those kinds of concerns, of course, are common in any business sector. They are not unique to generative-AI. The FTC also described some things that are matters of special concern in the AI space, though.

Network effects

Because positive feedback loops improve the performance of generative-AI, it gets better as more people use it. This results in concentrated market power in incumbent generative-AI companies with diminishing possibilities for new entrants to the market. According to the FTC, “network effects can supercharge a company’s ability and incentive to engage in unfair methods of competition.”

Platform effects

As AI users come to be dependent on a particular incumbent generative-AI platform, the company that owns the platform could take steps to lock their customers into using their platform exclusively.

Copyrights and AI competition

The FTC Comment indicates that the agency is not only weighing the possibility that AI unfairly harms creators’ ability to compete. (The use of pirated or the misuse of copyrighted materials can be an unfair method of competition under Section 5 of the FTC Act.) It is also considering that generative-AI may deceive, or be used to deceive, consumers. Specifically, the FTC expressed a concern that “consumers may be deceived when authorship does not align with consumer expectations, such as when a consumer thinks a work has been created by a particular musician or other artist, but it has been generated by someone else using an AI tool.” (Comment, page 5.)

In one of my favorite passages in the Comment, the FTC suggests that training AI on protected expression without consent, or selling output generated “in the style of” a particular writer or artist, may be an unfair method of competition, “especially when the copyright violation deceives consumers, exploits a creator’s reputation or diminishes the value of her existing or future works….” (Comment, pages 5 – 6).

Fair Use

The significance of the FTC’s injection of itself into the generative-AI copyright fray cannot be overstated. It is extremely likely that during their legislative and rule-making deliberations, both Congress and the Copyright Office are going to be focusing the lion’s share of their attention on the fair use doctrine. They are most likely going to try to allow generative-AI outfits to continue to infringe copyrights (it is already a multi-billion-dollar industry, after all, with obvious potential political value), while at the same time imposing at least some kinds of limitations to preserve a few shards of the copyright system. Maybe they will devise a system of statutory licensing like they did when online streaming, and the widespread copyright infringement it facilitated, became a thing.

Whatever happens, the overarching question for Congress is going to be: What kinds of copyright infringement should be considered “fair” use?

Copyright fair use normally is assessed using a four-factor test set out in the Copyright Act. Considerations about unfair competition arguably are subsumed within the fourth factor in that analysis – the effect the infringing use has on the market for the original work.

The other objective of the FTC Act – protecting consumers from deception — does not neatly fit into one of the four statutory factors for copyright fair use. I believe a good argument can be made that it should come within the coverage of the first prong of the four-factor test: the purpose and character of the use. The task for Congress and the Copyright Office, then, should be to determine which particular purposes and kinds of uses of generative-AI should be thought of as fair. There is no reason the Copyright Office should avoid considering Congress’s objectives, expressed in the FTC Act and other laws, when making that determination.

AI Legislative Update

Congressional legislation to regulate artificial intelligence (“AI”) and AI companies is in the early formative stages. Just about the only thing that is certain at this point is that federal regulation in the United States is coming.

In August, 2023, Senators Richard Blumenthal (D-CT) and Josh Hawley (R-MO) introduced a Bipartisan Framework for U.S. AI Act. The Framework sets out five bullet points identifying Congressional legislative objectives:

  • Establish a federal regulatory regime implemented through licensing AI companies, to include requirements that AI companies provide information about their AI models and maintain “risk management, pre-deployment testing, data governance, and adverse incident reporting programs.”
  • Ensure accountability for harms through both administrative enforcement and private rights of action, where “harms” include privacy and civil rights violations. The Framework proposes making Section 230 of the Communications Decency Act inapplicable to these kinds of actions. (Section 230 is the provision that generally grants immunity to Facebook, Google and other online service providers for user-provided content.) The Framework identifies the harms about which it is most concerned as “explicit deepfake imagery of real people, production of child sexual abuse material from generative A.I. and election interference.” Noticeably absent is any mention of harms caused by copyright infringement.
  • Restrict the sharing of AI technology with Russia, China or other “adversary nations.”
  • Promote transparency: The Framework would require AI companies to disclose information about the limitations, accuracy and safety of their AI models to users; would give consumers a right to notice when they are interacting with an AI system; would require providers to watermark or otherwise disclose AI-generated deepfakes; and would establish a public database of AI-related “adverse incidents” and harm-causing failures.
  • Protect consumers and kids. “Consumers should have control over how their personal data is used in A.I. systems and strict limits should be imposed on generative A.I. involving kids.”

The Framework does not address copyright infringement, whether of the input or the output variety.

The Senate Judiciary Committee Subcommittee on Privacy, Technology, and the Law held a hearing on September 12, 2023. Witnesses called to testify generally approved of the Framework as a starting point.

The Senate Commerce, Science, and Transportation Subcommittee on Consumer Protection, Product Safety and Data Security also held a hearing on September 12, called The Need for Transparency in Artificial Intelligence. One of the witnesses, Dr. Ramayya Krishnan, Carnegie Mellon University, did raise a concern about the use of copyrighted material by AI systems and the economic harm it causes for creators.

On September 13, 2023, Sen. Chuck Schumer (D-NY) held an “AI Roundtable.” Invited attendees present at the closed-door session included Bill Gates (Microsoft), Elon Musk (xAI, Neuralink, etc.), Sundar Pichai (Google), Charlie Rivkin (MPA), and Mark Zuckerberg (Meta). Gates, whose Microsoft company, like those headed by some of the other invitees, has been investing heavily in generative-AI development, touted the claim that AI could target world hunger.

Meanwhile, Dana Rao, Adobe’s Chief Trust Officer, penned a proposal that Congress establish a federal anti-impersonation right to address the economic harms generative-AI causes when it impersonates the style or likeness of an author or artist. The proposed law would be called the Federal Anti-Impersonation Right Act, or “FAIR Act,” for short. The proposal would provide for the recovery of statutory damages by artists who are unable to prove actual economic damages.

A Recent Exit from Paradise

Over a year ago, Steven Thaler filed an application with the United States Copyright Office to register a copyright in an AI-generated image called “A Recent Entrance to Paradise.” In the application, he listed a machine as the “author” and himself as the copyright owner. The Copyright Office refused registration  on the grounds that the work lacked human authorship. Thaler then filed a lawsuit in federal court seeking to overturn that determination. On August 18, 2023 the court upheld the Copyright Office’s refusal of registration. The case is Thaler v. Perlmutter, No. CV 22-1564 (BAH), 2023 WL 5333236 (D.D.C. Aug. 18, 2023).

Read more about the history of this case in my previous blog post, “A Recent Entrance to Complexity.”

The Big Bright Green Creativity Machine

In his application for registration, Thaler had listed his computer, referred to as “Creativity Machine,” as the “author” of the work, and himself as a claimant. The Copyright Office denied registration on the basis that copyright only protects human authorship.

Taking the Copyright Office to court

Unsuccessful in securing a reversal through administrative appeals, Thaler filed a lawsuit in federal court claiming the Office’s denial of registration was “arbitrary, capricious, an abuse of discretion and not in accordance with the law….”

The court ultimately sided with the Copyright Office. In its decision, it provided a cogent explanation of the rationale for the human authorship requirement:

The act of human creation—and how to best encourage human individuals to engage in that creation, and thereby promote science and the useful arts—was thus central to American copyright from its very inception. Non-human actors need no incentivization with the promise of exclusive rights under United States law, and copyright was therefore not designed to reach them.

Id.

A Complex Issue

As I discussed in a previous blog post, the issue is not as simple as it might seem. There are different levels of human involvement in the use of an AI content generating mechanism. At one extreme, there are programs like “Paint,” in which users provide a great deal of input. These kinds of programs may be analogized to paintbrushes, pens and other tools that artists traditionally have used to express their ideas on paper or canvas. Word processing programs are also in this category. It is easy to conclude that the users of these kinds of programs are the authors of works that may be sufficiently creative and original to receive copyright protection.

At the other end of the spectrum are AI services like DALL-E and ChatGPT. These tools are capable of generating content with very little user input. If the only human input is a user’s directive to “Draw a picture,” then it would be difficult to claim that the user contributed any creative expression. That is to say, it would be difficult to claim that the user authored anything.

The difficult question – and one that is almost certain to be the subject of ongoing litigation and probably new Copyright Office regulations – is exactly how much, and what kind of, human input is necessary before a human can claim authorship.  Equally as perplexing is how much, if at all, the Copyright Office should involve itself in ascertaining and evaluating the details of the process by which a work was created. And, of course, what consequences should flow from an applicant’s failure to disclose complete details about the nature and extent of machine involvement in the creative process.

Conclusion

The court in this case did not dive into these issues. The only thing we can safely take away from this decision is the broad proposition that a work is not protected by copyright to the extent it is generated by a machine.