Trump’s Executive Order on AI

On December 11, 2025, President Trump issued another Executive Order. This one is intended to promote “national dominance” in “a race with adversaries for supremacy.” To “win,” the Order says, AI companies should not be encumbered by state regulation. “The policy of the United States,” the Order says, is “to sustain and enhance the United States’ global AI dominance through a minimally burdensome national policy framework for AI.” It establishes an AI Litigation Task Force to challenge state AI laws that allegedly conflict with that policy.

Excepted from the Order are state laws on child safety protections, data center infrastructure, and state government use of AI.

Which State AI Laws?

The Order speaks generally about “state AI laws,” but does not define the term. Here are some examples of state AI laws:

Stalking and Harassment

A North Dakota statute criminalizes using a robot to frighten or harass another person. It defines a robot to include a drone or other system that uses AI technology. (N.D. Cent. Code § 12.1-17-07(1), (2)(f)). This appears to be a “state AI law.” North Dakota statutes also prohibit stalking accomplished by using either a robot or a non-AI form of technology. (N.D. Cent. Code § 12.1-17-07.1(1)(d)). Preempting this statute would produce an anomalous result: it would be a crime to stalk somebody unless you used an AI-powered device to do it.

Political Deepfakes

Several states have enacted laws prohibiting the distribution of political deepfakes to influence an election. These laws range from prohibitions on distributing a deepfake within a specified period before an election to requirements that the deepfake be disclosed as AI-generated. Minn. Stat. § 609.771 is an example of such a regulation. The need for this kind of statute was highlighted in 2024, when someone used AI to clone Joe Biden’s voice and generate audio in which Mr. Biden appeared to be urging people not to vote in the primary election.

Sexual Deepfakes

Both state and federal governments have enacted laws aimed at curbing the proliferation of “revenge porn.” The TAKE IT DOWN Act is an example. Minn. Stat. § 604.32 is another example (deepfakes depicting intimate body parts or sexual acts).

State and federal laws in this area cover much of the same ground. The principal difference is that the federal crime must involve interstate commerce; state crimes need not. The only practical effect of preempting this kind of state AI law, therefore, would be to eliminate state prohibitions of wholly intrastate sexual deepfakes. If the Executive Order succeeds in its objectives, making and distributing a sexual deepfake would be lawful so long as it is transmitted only to other people in the same state, and not to someone in a different state.

Digital Replicas

Many states have enacted laws prohibiting or regulating the unauthorized creation and exploitation of digital replicas. The California Digital Replicas Act and Tennessee’s ELVIS Act are examples. AI is used in the creation of digital replicas, but it is unclear whether these kinds of enactments are “state AI laws.” Arguably, a person could use technologies more primitive than generative AI to create a digital image of a person. If these statutes are preempted only to the extent they apply to AI-generated digital replicas, then people who exploit other people’s faces and voices for commercial gain without authorization would have an incentive to use AI to do it.

Child Pornography

Several states have either enacted new laws or amended existing laws to bring AI-generated images of what appear to be real children within the prohibition against child pornography. See, e.g., N.D. Cent. Code § 12.1-27.2-01. The Executive Order exempts “child safety protections,” but AI-generated images do not necessarily involve any real child. This kind of state statute arguably would not come within the meaning of a “child safety protection.”

Health Care Oversight

California’s Physicians Make Decisions Act requires a human being to oversee health care decisions about medical necessity, ensuring that medical care is not left entirely up to an AI bot. The law was enacted with the support of the California Medical Association to ensure that patients receive adequate health care. If the law is nullified, it would seem that hospitals would be free to replace doctors with AI chatbots.

Chatbots

Some states prohibit the deceptive use of a chatbot, such as by falsely representing to people who interact with one that they are interacting with a real person. In addition, some states have enacted laws requiring disclosure to consumers when they are interacting with a non-human AI. See, e.g., the Colorado Artificial Intelligence Act.

Privacy

Some states have either enacted stand-alone laws or amended existing privacy laws to ensure that personally identifiable information stored by AI systems is protected. See, e.g., Utah Code 13-721-201, -203 (regulating the sharing of a person’s mental health information by a chatbot); and amendments to the California Consumer Privacy Act making it applicable to information stored in an AI system.

Disclosure

California’s Generative AI Training Data Transparency Act requires disclosure of training data used in developing generative-AI technology.

The Texas Responsible Artificial Intelligence Governance Act

Among other things, the Texas Responsible AI Governance Act prohibits the use of AI to restrict constitutional rights, to discriminate on the basis of race, or to encourage criminal activity. These seem like reasonable proscriptions.

Trump’s “AI czar,” venture capitalist David Sacks, has said the administration is not going to “push back” on all state laws, only “the most onerous” ones. It is unclear which of these will be deemed “onerous.”

State AI Laws are Not Preempted

News media headlines are trumpeting that the Executive Order preempts state AI laws. This is not true. It directs this administration to try to strike down some state AI laws. It contemplates working with Congress to formulate and enact preemptive legislation. It is doubtful that a President could constitutionally preempt state laws by executive order.

Postscript

Striving for uniformity in the regulation of artificial intelligence is not a bad idea. There should be room, though, for both federal and state legislation. Rather than abolishing state laws, a uniform code or model act for states might be a better idea. Moreover, if we are going to start caring about an onerous complex of differing state laws, and feeling a need to establish a national framework, perhaps the President and Congress might wish to address the sprawling morass of privacy and data security regulations in the United States.

Smelly Trademarks

If you have a distinctive smell, you might be able to claim trademark rights in it.

If it smells like a trademark and it functions like a trademark, it might be a trademark.

Rose-Scented Tires

Sumitomo Rubber Industries has successfully applied for the registration of an olfactory trademark in India. It is the smell of a rose, as applied to tires. India’s Trademark Registry has now accepted it for advertisement.

This is not the first time Sumitomo has secured trademark protection for its smelly tires. In fact, the company’s rosy tire was the first smell mark registered in the United Kingdom, back in 1996.

It’s a Smell World

Since the U.K.’s venture into scent trademarks, smell trademarks have been approved in several other jurisdictions around the world.

In 1999, the European Union accepted an application to register the smell of freshly cut grass as a trademark for tennis balls. It closed the door to scent marks shortly afterward, however. In Sieckmann v. German Patent and Trademark Office (2002), trademark protection was sought for a “balsamically fruity” scent with “a slight hint of cinnamon.” The ECJ ruled that a chemical formula did not represent the odor, that the written description was not sufficiently clear, precise, and objective, and that a physical deposit of a sample of the scent did not constitute the “graphic representation” the applicable trademark law required. Sieckmann closed the door to smell marks in the EU for many years.

Scent mark registrations have been issued in the United States, though there are only a little over a dozen of them. Examples include Play-Doh, Power Plus fruity-scented vehicle lube, and minty fresh bowling balls.

The non-functionality requirement is the biggest obstacle for scent trademarks in the United States. The scent must be non-functional and serve only as a source identifier. Smells intrinsic to the purpose of a product will not qualify. So no trademark protection for the smell of a perfume or air freshener. And sorry, Burger King, probably no trademark for the smell of charred meat, either.

“Graphical Representation”

India’s statute requires trademark applicants to provide a graphical representation of the mark. This is what has stood in the way of smell claims in India all this time. Sumitomo, however, figured out a way to do it. It created a representation of the odor in “olfactory space.”

It worked.

Will Flavor Be the Next Frontier?

It is unlikely that flavors will join smells as registrable brand identifiers. Although theoretically possible, no flavor has ever been registered as a trademark in the United States. The USPTO has expressed doubt that a flavor could ever function as a trademark because it is functional. (TMEP 1202.13). Also, although a dog might do it, consumers normally do not taste a product to determine its origin before deciding whether to buy it. Merchants probably would frown on the practice, as would other customers in the store. Hopefully, stores do not allow dogs to sample products intended for human consumption, either.

Need help registering a non-traditional trademark? Contact attorney Tom James.

Excuse Me While I KIST the Sky

Sunkist is a trademark of Sunkist, Inc. This image is used for illustrative purposes only. No endorsement, sponsorship or affiliation with any company, product or brand is intended or implied.

The Jimi Hendrix song “Purple Haze” contains one of the most famous misheard lyrics of all time. Ever fixated on sex and sexuality, many people insist that when he sings, “Excuse me while I kiss the sky,” he is saying, “Excuse me while I kiss this guy.” Kiss the sky and kiss this guy are near-homophones, that is, phrases that sound nearly alike. In the trademark world, homophones and near-homophones can create or contribute to a likelihood of confusion which, in turn, can result in a denial of registration to one of the marks and/or infringement liability.

Jimi Hendrix also wrote a song called Love or Confusion, but that is a story for a different day. See In re Peace and Love World Live, LLC.

Sunkist Growers v. Intrastate Distributors

Intrastate Distributors, Inc. applied to register KIST, both as a standard character mark and as a stylized mark, for soft drinks. Sunkist Growers, Inc. filed an opposition to registration with the Trademark Trial and Appeal Board (TTAB), arguing that KIST is confusingly similar to SUNKIST when used in connection with beverages. The TTAB dismissed the opposition, finding no likelihood of confusion.

On July 23, 2025, the U.S. Federal Circuit Court of Appeals reversed.

The DuPont Factors

Likelihood of confusion is a question of law that requires weighing findings of fact on the DuPont factors. The list comes from In re E.I. DuPont de Nemours & Co., a 1973 case that identified 13 factors relevant to likelihood-of-confusion analysis:

  1. Similarity of the marks
  2. Nature of the goods and services
  3. Trade channels
  4. Conditions of purchase (e.g., whether buyers of the product or service are likely to carefully consider options before purchasing, or are more likely to make a quick or impulsive purchasing decision.)
  5. Fame
  6. Number of similar trademarks used with similar goods or services
  7. Actual confusion
  8. History of concurrent use, if any
  9. Variety of different products and services with which the trademark is used
  10. Interactions and relationship between the applicant and the trademark owner
  11. Extent of the applicant’s right to prevent others from using the trademark on specific goods or services
  12. Potential confusion
  13. Any other relevant information.

The analysis may focus on dispositive factors, such as similarity of the marks and relatedness of the goods. Also, as a general rule, the more related the goods or services are, the less similar the marks need to be to support a finding of likelihood of confusion.

In this case, the Board found that four factors supported a finding of likelihood of confusion, but it determined that KIST and SUNKIST were dissimilar. In the Board’s view, the marks created different commercial impressions: KIST referenced a kiss, while SUNKIST referenced the sun.

The Board relied on some of IDI’s marketing materials that depict a pair of lips adjacent to the word KIST. The Court of Appeals, however, noted that lips do not appear next to the word in all of the marketing materials. Nor was the image of lips claimed as part of the trademark. Therefore, the Court held that the Board’s finding that KIST references a kiss was not supported by substantial evidence.

The Court also determined that Sunkist did not always depict an image of a sun adjacent to its mark SUNKIST.

In short, when considered as standard character marks rather than design marks, SUNKIST and KIST are substantially similar, and consumers are likely to believe, mistakenly, that beverages bearing either mark come from the same source.

Confusing Similarity

Two marks may be confusingly similar if they are similar as to appearance, sound, meaning, or commercial impression. (TMEP 1207.01(b).) The test is not whether the two marks are linguistically distinguishable, but whether they are likely to give a consumer the impression that there is a commercial connection between them.

When assessing the likelihood of confusion between compound words, courts may appropriately identify one part of a mark as dominant and other parts as peripheral. If the dominant part is distinctive, then greater weight may be assigned to the similarity between the dominant parts of two marks than to the peripheral parts. Thus, for example, comparing CYNERGIE and SYNERGY PEEL, the TTAB determined that SYNERGY was the dominant part of both marks and that CYNERGIE and SYNERGY were sound-alikes. Consequently, they were confusingly similar. (TMEP 1207.01(b)(viii).)

Although the Court of Appeals did not invoke this particular rationale in its KIST decision, it seems to me that this doctrine also would support a determination that KIST and SUNKIST are confusingly similar. KIST is the dominant part of both marks and it is distinctive.

Conclusion

Changing the spelling of an existing trademark, or adding or subtracting peripheral elements, rarely solves a likelihood-of-confusion problem, particularly when the goods or services are identical, similar, or related. A standard character trademark search should look not only for identical matches but also for words and phrases that, although not identical, look or sound similar, in whole or in part, or have meanings similar to the mark you are trying to clear.

Voice Cloning

Painting of Nipper by Francis Barraud (1898-99); subsequently used as a trademark with “His Master’s Voice.”

Lehrman v. Lovo, Inc.

On July 10, 2025, the federal district court for the Southern District of New York issued an Order granting in part and denying in part a motion to dismiss a putative class action lawsuit that Paul Lehrman and Linnea Sage commenced against Lovo, Inc. The lawsuit, Lehrman v. Lovo, Inc., alleges that Lovo used artificial intelligence to make and sell unauthorized “clones” of their voices.

Specifically, the complaint alleges that the plaintiffs are voice-over actors. For a fee, they read and record scripts for their clients. Lovo allegedly sells a text-to-speech subscription service that allows clients to generate voice-over narrations. The service is described as one that uses “AI-driven software known as ‘Generator’ or ‘Genny,'” which was “created using ‘1000s of voices.'” Genny allegedly creates voice clones, i.e., copies of real people’s voices. Lovo allegedly granted its customers “commercial rights for all content generated,” including “any monetized, business-related uses such as videos, audio books, advertising promotion, web page vlogging, or product integration.” (Lovo terms of service.) The complaint alleges that Lovo hired the plaintiffs to provide voice recordings for “research purposes only,” but that Lovo proceeded to exploit them commercially by licensing their use to Lovo subscribers.

This lawsuit ensued.

The complaint sets out claims for:

  • Copyright infringement
  • Trademark infringement
  • Breach of contract
  • Fraud
  • Conversion
  • Unjust enrichment
  • Unfair competition
  • Violations of New York civil rights laws
  • Violations of New York consumer protection laws.

The defendant moved to dismiss the complaint for failure to state a claim.

The copyright claims

Sage alleged that Lovo infringed the copyright in one of her voice recordings by reproducing it in presentations and YouTube videos. The court allowed this claim to proceed.

Plaintiffs also claimed that Lovo’s unauthorized use of their voice recordings in training its generative-AI product infringed their copyrights in the sound recordings. The court ruled that the complaint did not contain enough factual detail about how the training process infringed one of the exclusive rights of copyright ownership. Therefore, it dismissed this claim with leave to amend.

The court dismissed the plaintiffs’ claims of output infringement, i.e., claims that the “cloned” voices the AI tool generated infringed copyrights in the original sound recordings.

Copyright protection in a sound recording extends only to the actual recording itself. Fixation of sounds that imitate or simulate the ones captured in the original recording does not infringe the copyright in the sound recording.

This issue often comes up in connection with copyrights in music recordings. If Chuck Berry writes a song called “Johnny B. Goode” and records himself performing it, he will own two copyrights – one in the musical composition and one in the sound recording. If a second person then records himself performing the same song, and he doesn’t have a license (compulsory or otherwise) to do so, that person would be infringing the copyright in the music but not the copyright in the sound recording. This is true even if he is very good at imitating Berry’s voice and guitar work. For a claim of sound recording infringement to succeed, it must be shown that the actual recording itself was copied.

Plaintiffs did not allege that Lovo used Genny to output AI-generated reproductions of their original recordings. Rather, they alleged that Genny is able to create new recordings that mimic attributes of their voices.

The court added that the sound of a voice is not copyrightable expression, and even if it were, the plaintiffs had registered claims of copyright in their recordings, not in their voices.

The trademark claims

In addition to infringement, the Lanham Act creates two other potential bases of trademark liability: (1) false association; and (2) false advertising. 15 U.S.C. sec. 1125(a)(1)(A) and (B). Plaintiffs asserted both kinds of claims. The judge dismissed these claims.

False association

The Second Circuit Court of Appeals recently held, in Electra v. 59 Murray Enter., Inc. and Souza v. Exotic Island Enters., Inc., that using a person’s likeness to create an endorsement without the person’s permission can constitute a “false association” violation. In other words, there is a federally protected, trademark-like interest in one’s image, likeness, personality, and identity. (See, e.g., Jackson v. Odenat.)

Although acknowledging that this right extends to one’s voice, the judge ruled that the voices in this case did not function as trademarks. They did not identify the source of a product or service. Rather, they were themselves the product or service. For this reason, the judge ruled that the plaintiffs had failed to show that their voices, as such, are protectable trademarks under Section 43(a)(1)(A) of the Lanham Act.

False Advertising

Section 43(a)(1)(B) of the Lanham Act (codified at 15 U.S.C. sec. 1125(a)(1)(B)) prohibits misrepresentations about “the nature, characteristics, qualities, or geographic origin of . . . goods, services, or commercial activities.” The plaintiffs claimed that Lovo marketed their voices under different names (“Kyle Snow” and “Sally Coleman.”) The court determined that this was not fraudulent, however, because Lovo marketed them as what they were, namely, synthetic clones of the actors’ voices, not as their actual voices.

Plaintiffs also claimed that Lovo’s marketing materials falsely stated that the cloned voices “came with all commercial rights.” They asserted that they had not granted those rights to Lovo. The court ruled, however, that even if Lovo was guilty of misrepresentation, it was not the kind of misrepresentation that comes within Section 43(a)(1)(B), as it did not concern the nature, characteristics, qualities, or geographic origin of the voices.

State law claims

Although the court dismissed the copyright and trademark claims, it allowed some state law claims to proceed. Specifically, the court denied the motion to dismiss claims for breach of contract, violations of sections 50 and 51 of the New York Civil Rights Law, and violations of New York consumer protection law.

Both the common law and the New York Civil Rights Law prohibit the commercial use of a living person’s name, likeness or voice without consent. Known as “misappropriation of personality” or violation of publicity or privacy rights, this is emerging as one of the leading issues in AI law.

The court also allowed state law claims of false advertising and deceptive trade practices to proceed. The New York laws are not subject to the “nature, characteristics, qualities, or geographic origin” limitation set out in Section 43(a) of the Lanham Act.

Conclusion

I expect this case will come to be cited for the rule that copyright cannot be claimed in a voice. Copyright law protects only expression, not a person’s corporeal attributes. The lack of copyright protection for a person’s voice, however, does not mean that voice cloning is “legal.” Depending on the particular facts and circumstances, it may violate one or more other laws.

It also should be noted that after the Joe Biden voice-cloning incident of 2024, states have been enacting statutes regulating the creation and distribution of voice clones. Even where a specific statute is not applicable, though, a broader statute (such as the FTC Act or a similar state law) might cover the situation.

Images and references in this blog post are for illustrative purposes only. No endorsement, sponsorship or affiliation with any person, organization, company, brand, product or service is intended, implied, or exists.

Official portrait of Vice President Joe Biden in his West Wing Office at the White House, Jan. 10, 2013. (Official White House Photo by David Lienemann)

Court Rules AI Training is Fair Use

Just days after the first major fair use ruling in a generative-AI case, a second court has determined that using copyrighted works to train AI is fair use. Kadrey et al. v. Meta Platforms, No. 3:23-cv-03417-VC (N.D. Cal. June 25, 2025).

The Kadrey v. Meta Platforms Lawsuit

I previously wrote about this lawsuit here and here.

Meta Platforms owns and operates social media services including Facebook, Instagram, and WhatsApp. It is also the developer of a large language model (LLM) called “Llama.” One of its releases, Meta AI, is an AI chatbot that utilizes Llama.

To train its AI, Meta obtained data from a wide variety of sources. The company initially pursued licensing deals with book publishers. It turned out, though, that in many cases individual authors owned the copyrights, and, unlike in the music industry, no organization handles collective licensing of rights in book content. Meta then turned to shadow library databases. Rather than license the works in those databases, Meta decided to go ahead and use them without securing licenses, downloading them via BitTorrent to speed up the process.

Meta trained its AI models to prevent them from “memorizing” and outputting text from the training data, with the result that no more than 50 words and punctuation marks from any given work were reproduced in any given output.
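To make the idea of such a guardrail concrete, here is a minimal sketch of one way an output filter of this general kind could work: it rejects any candidate output that reproduces more than a set number of consecutive words from a protected source text. This is an illustration only, with hypothetical function names and a word-based threshold; the Order describes the result of Meta’s guardrails, not their implementation.

```python
# Hypothetical sketch of an output guardrail of the kind described in the
# Kadrey order: block any candidate output that reproduces more than
# max_run consecutive words from a protected source text.
# Illustration only; not Meta's actual implementation.

def longest_shared_run(source: str, candidate: str) -> int:
    """Length (in words) of the longest run of consecutive words that
    appears in both the source text and the candidate output."""
    src_words = source.lower().split()
    cand_words = candidate.lower().split()
    best = 0
    # Classic dynamic-programming longest-common-substring over word tokens.
    prev = [0] * (len(cand_words) + 1)
    for s_word in src_words:
        curr = [0] * (len(cand_words) + 1)
        for j, c_word in enumerate(cand_words, start=1):
            if s_word == c_word:
                curr[j] = prev[j - 1] + 1
                best = max(best, curr[j])
        prev = curr
    return best

def passes_guardrail(source: str, candidate: str, max_run: int = 50) -> bool:
    """Return True if the candidate reproduces no more than max_run
    consecutive words from the source."""
    return longest_shared_run(source, candidate) <= max_run

if __name__ == "__main__":
    book_excerpt = "it was the best of times it was the worst of times"
    output = "the model says it was the best of times indeed"
    print(longest_shared_run(book_excerpt, output))  # 6
    print(passes_guardrail(book_excerpt, output))    # True (6 <= 50)
```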

The plaintiffs named in the Complaint are thirteen book authors who have published novels, plays, short stories, memoirs, essays, and nonfiction books. Sarah Silverman, author of The Bedwetter; Junot Diaz, author of The Brief Wondrous Life of Oscar Wao; and Andrew Sean Greer, author of Less, are among the authors named as plaintiffs in the lawsuit. The complaint alleges that Meta downloaded 666 copies of their books without permission and states claims for direct copyright infringement, vicarious copyright infringement, removal of copyright management information in violation of the Digital Millennium Copyright Act (DMCA), and various state law claims. All claims except the ones for direct copyright infringement and violation of the DMCA were dismissed in prior proceedings.

Both sides moved for summary judgment on fair use with respect to the claim that Meta’s use of the copyrighted works to train its AI infringed copyrights. Meta moved for summary judgment on the DMCA claims. Neither side moved for summary judgment on a claim that Meta infringed copyrights by distributing their works (via leeching or seeding).

On June 25, 2025, Judge Chhabria granted Meta’s motion for summary judgment on fair use with respect to AI training, reserved the motion for summary judgment on the DMCA claims for decision in a separate order, and held that the claim of infringing distribution via leeching or seeding “will remain a live issue in the case.”

Judge Chhabria’s Fair Use Analysis

Judge Chhabria analyzed each of the four fair use factors. As is customary, he treated the first factor (the purpose and character of the use) and the fourth (the effect on the market for the work) as the most important of the four.

He disposed of the first factor fairly easily, as Judge Alsup did in Bartz v. Anthropic, finding that the use of copyrighted works to train AI is a transformative use. This finding weighs heavily in favor of fair use. The purpose of Meta’s AI tools is not to generate books for people to read. Indeed, Meta had installed guardrails to prevent the tools from generating duplicates or near-duplicates of the books on which the AI was trained. Moreover, even if a user could prompt the creation of a book “in the style of” a specified author, there was no evidence that the tools could produce a work identical or substantially similar to one on which they had been trained. And writing styles are not copyrightable.

Significantly, the judge held that the use of shadow libraries to obtain unauthorized copies of books does not necessarily destroy a fair use defense. When the ultimate use to be made of a work is transformative, the judge wrote, downloading books to further that use is also transformative. This ruling contrasts with the views of other judges, who have intimated that using pirated copies of works weighs against, or may even preclude, a finding of fair use.

Unlike some judges, who tend to consider the fair use analysis over and done if transformative use is found, Judge Chhabria recognized that even if the purpose of the use is transformative, its effect on the market for the infringed work still has to be considered.

3 Ways of Proving Adverse Market Effect

The Order lays out three potential kinds of arguments that may be advanced to establish the adverse effect of an infringing use on the market for the work:

  1. The infringing work creates a market substitute for the work;
  2. Use of the work to train AI without permission deprives copyright owners of a market for licenses to use their works in AI training;
  3. The generation of competing works dilutes the market for the original work.

Market Substitution

In this case, direct market substitution could not be established because Meta had installed guardrails that prevented users from generating copies of works that had been used in the training. Its AI tools were incapable of generating copies of the work that could serve as substitutes for the authors’ works.

The Market for AI Licenses

The court refused to recognize the loss of potential profits from licensing the use of a work for AI training purposes as a cognizable harm.

Market Dilution

The argument here would be that the generation of many works that compete in the same market as the original work on which the AI was trained dilutes the market for the original work. Judge Chhabria described this as indirect market substitution.

The copyright owners in this case, however, focused on the first two arguments. They did not present evidence that Meta’s AI tools were capable of generating books; that they do, in fact, generate books; or that the books they generate or are capable of generating compete with books these authors wrote. There was no evidence of diminished sales of their books.

Market harm cannot be assumed when the generated output cannot serve as a substitute for the specific books claimed to have been infringed. When the output is transformative, as it was in this case, market substitution is not self-evident.

Judge Chhabria chided the plaintiffs for making only a “half-hearted argument” of a significant threat of market harm. He wrote that they presented “no meaningful evidence on market dilution at all.”

Consequently, he ruled that the fourth fair use factor favored Meta.

Conclusion

The decision in this case is as significant for what the court didn’t do as for what it did. It handed a fair use victory to Meta. At the same time, though, it left open the possibility that a copyright owner might prevail on a claim that training AI on copyrighted works is not fair use in a different case. And, albeit in dictum, it pointed the way: a strong showing of market dilution.

That claim is not far-fetched. https://www.wired.com/story/scammy-ai-generated-books-flooding-amazon/

AI OK; Piracy Not: Bartz v. Anthropic

A federal judge has issued a landmark fair use decision in a generative-AI copyright infringement lawsuit.

In a previous blog post, I wrote about the fair use decision in Thomson Reuters v. ROSS. As I explained there, that case involved a search-and-retrieval AI system, so the holding was not determinative of fair use in the context of generative AI. Now we finally have a decision that addresses fair use in the generative-AI context.

Bartz et al. v. Anthropic PBC

Anthropic is an AI software firm founded by former OpenAI employees. It offers a generative-AI tool called Claude. Like other generative-AI tools, Claude mimics human conversational skills. When a user enters a text prompt, Claude will generate a response that is very much like one a human being might make (except it is sometimes more knowledgeable.) It is able to do this by using large language models (LLMs) that have been trained on millions of books and texts.

Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson are book authors. In August 2024, they sued Anthropic, claiming the company infringed the copyrights in their works. Specifically, they alleged that Anthropic copied their works from pirated and purchased sources, digitized print versions, assembled them into a central library, and used the library to train LLMs, all without permission. Anthropic asserted, among other things, a fair use defense.

Earlier this year, Anthropic filed a motion for summary judgment on the question of fair use.

On June 23, 2025, Judge Alsup issued an Order granting summary judgment in part and denying it in part. It is the first major ruling on fair use in the dozens of generative-AI copyright infringement lawsuits that are currently pending in federal courts.

The Order includes several key rulings.

Digitization

Anthropic acquired both pirated and lawfully purchased printed copies of copyright-protected works and digitized them to create a central e-library. Authors claimed that making digital copies of their works infringed the exclusive right of copyright owners to reproduce their works. (See 17 U.S.C. 106.)

In the process of scanning print books to create digital versions of them, the print copies were destroyed. Book bindings were stripped so that each individual page could be scanned. The print copies were then discarded. The digital copies were not distributed to others. Under these circumstances, the court ruled that making digital versions of print books is fair use.

The court likened format to a frame around a work, as distinguished from the work itself. As such, a digital version is not a new derivative work. Rather, it is a transformative use of an existing work. So long as the digital version is merely a substitute for a print version a person has lawfully acquired, and so long as the print version is destroyed and the digital version is not further copied or distributed to others, then digitizing a printed work is fair use. This is consistent with the first sale doctrine (17 U.S.C. 109(a)), which gives the purchaser of a copy of a work a right to dispose of that particular copy as the purchaser sees fit.

In short, the mere conversion of a lawfully acquired print book to a digital file to save space and enable searchability is transformative, and so long as the print version is destroyed and the digital version is not further copied or distributed, it is fair use.

AI Training Is Transformative Fair Use

The authors did not contend that Claude generated infringing output. Instead, they argued that copies of their works were used as inputs to train the AI. The Copyright Act, however, does not prohibit or restrict the reading or analysis of copyrighted works. So long as a copy is lawfully purchased, the owner of the purchased copy can read it and think about it as often as he or she wishes.

[I]f someone were to read all the modern-day classics because of their exceptional expression, memorize them, and then emulate a blend of their best writing, would that violate the Copyright Act? Of course not.

Order.

Judge Alsup described AI training as “spectacularly” transformative. Id. After considering all four fair use factors, he concluded that training AI on lawfully acquired copyright-protected works (as distinguished from the initial acquisition of copies) is fair use.

Pirating Is Not Fair Use

In addition to lawfully purchasing copies of some works, Anthropic also acquired infringing copies of works from pirate sites. Judge Alsup ruled that these, and uses made from them, are not fair use. The case will now proceed to trial on the issue of damages resulting from the infringement.

Conclusion

Each of these rulings seems, well, sort of obvious. It is nice to have the explanations laid out so clearly in one place, though.

The Copyright Discovery Rule Stands

Last year, the United States Supreme Court held that as long as a claim is timely filed, damages may be recovered for any loss or injury, including losses incurred more than three years before the claim is filed (Warner Chappell Music v. Nealy). The Court expressed no opinion about whether the Copyright Act’s three-year limitation period begins to run when the infringing act occurs or when the victim discovers it, leaving that question for another day. “Another day” arrived, but the Court still declined to address it. What, if anything, can be made of that?

Statute of Limitations for Copyright Infringement

The Copyright Act imposes a 3-year limitations period for copyright infringement claims. Specifically:

No civil action shall be maintained under the provisions of this title unless it is commenced within three years after the claim accrued.

17 U.S.C. 507(b).

But when does a claim accrue? That is the (potentially) million-dollar question.

According to the “incident of injury” rule, an infringement claim accrues when an infringing act occurs. Petrella v. Metro-Goldwyn-Mayer, Inc., 572 U. S. 663, 670 (2014). Under this rule, an infringement victim who did not learn about an infringing act until three years after it occurred would be out of luck.

Courts in many circuits, however, apply an alternative rule. Known as the “discovery rule,” it holds that a copyright infringement claim accrues when “the plaintiff discovers, or with due diligence should have discovered, the injury that forms the basis for the claim.” William A. Graham Co. v. Haughey, 568 F. 3d 425, 433 (CA3 2009) (internal quotation marks omitted). According to Patry on Copyright, this is the majority rule.

If a court applies the discovery rule, then the infringement complaint must be filed within three years after the victim learns or should reasonably have learned of the infringing act, even if that act occurred more than three years earlier.


The Look-Back Period for Damages

As I explained in a previous blog post, the United States Supreme Court did not have the question about the validity of either accrual theory before it in Warner Chappell Music. Accordingly, it did not address the issue. Instead, the Court limited itself to deciding only the specific question before it, namely, whether damages can be claimed for all injuries that occurred before the victim learned (or reasonably should have learned) of an infringing act. The Court held that they can be. And this is true even for losses occurring more than three years before the infringement was discovered. Statutes of limitations only determine when a claim may be filed; they do not limit the look-back period for recovering damages for injury. “The Copyright Act contains no separate time-based limit on monetary recovery.” Warner Chappell Music, supra.

It must be kept in mind that the discovery rule has an important proviso. The clock starts ticking on a claim from the first date a victim actually knew or should have known of an infringement. In many cases, it may become more difficult to convince a judge that the victim’s unawareness of the infringing act was reasonable if a lot of time has gone by since the infringement occurred. Reasonableness, however, depends on all the facts and circumstances, so it has to be decided on a case-by-case basis.

Michael Grecco Productions, Inc. v. RADesign, Inc. et al.

Michael Grecco Productions, Inc. sued RADesign, Inc. and others for copyright infringement. The complaint alleged that the defendants’ infringing use of a copyright-protected photograph began on August 16, 2017, and that the plaintiff discovered it on February 8, 2021. The complaint was filed in October 2021. As a result, the claim would be barred under the “incident of injury” rule because it was filed more than three years after the alleged infringement occurred. The complaint, however, was filed in the Second Circuit, a jurisdiction that recognizes the discovery rule. Therefore, the question became whether the failure to discover the infringement within three years was reasonable. The district court held that it was not. The court described the copyright owner in this case as “sophisticated” in detecting and litigating infringements and therefore not entitled to the benefit of the discovery rule.
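To make the timeliness arithmetic concrete, here is a minimal sketch applying the three-year limitations period to the dates alleged in this case under each accrual theory. The function names are hypothetical, the filing date is approximated to October 2021, and the sketch ignores the fact-specific reasonableness inquiry discussed above.

```python
# Hypothetical illustration of the timeliness arithmetic described above,
# using the dates alleged in the Grecco complaint. Not a legal tool and not
# a substitute for the fact-specific "reasonableness" analysis.
from datetime import date

LIMITATIONS_YEARS = 3  # 17 U.S.C. 507(b)

def add_years(d: date, years: int) -> date:
    # Shift a date by whole years (Feb. 29 edge cases ignored in this sketch).
    return d.replace(year=d.year + years)

def timely(accrual: date, filed: date) -> bool:
    """A claim is timely if filed within three years after it accrued."""
    return filed <= add_years(accrual, LIMITATIONS_YEARS)

infringement_began = date(2017, 8, 16)   # alleged first infringing use
discovered = date(2021, 2, 8)            # alleged discovery by the plaintiff
complaint_filed = date(2021, 10, 1)      # filed in October 2021 (day approximate)

# Incident-of-injury rule: the claim accrues when the infringing act occurs.
print(timely(infringement_began, complaint_filed))  # False -> claim barred

# Discovery rule: the claim accrues when the plaintiff discovers, or with due
# diligence should have discovered, the infringement.
print(timely(discovered, complaint_filed))          # True -> claim timely
```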

The Second Circuit Court of Appeals reversed, declaring, “This ‘sophisticated plaintiff’ rationale has no mooring to our cases.”

The U.S. Supreme Court’s Denial of Certiorari

RADesign, Inc. filed a petition for certiorari to the United States Supreme Court. The sole question presented was “Whether a claim ‘accrue[s]’ under the Copyright Act’s statute of limitations for civil actions, 17 U.S.C. 507(b), when the infringement occurs (the ‘injury rule’) or when a plaintiff discovers or reasonably should have discovered the infringement (the ‘discovery rule’).” The petition argued that the Copyright Act does not explicitly provide for a discovery rule and asserted that the courts of appeals should not have adopted one.

Unlike in Warner Chappell Music, the Court now had the validity of the discovery rule in copyright infringement cases squarely before it. The Court, however, declined the invitation to review that question. On June 16, 2025, it denied certiorari.

What a Denial of Certiorari Means

Really, the only legal effect of a denial of certiorari is that the lower court’s decision stands. In this case, that would mean that the Second Circuit Court of Appeals’ decision remains in effect for that specific case. For the time being, anyway, attorneys can cite the reasoning and holding of the Second Circuit Court of Appeals decision as legal precedent in other cases.

What a Denial of Certiorari Does Not Mean

A denial of certiorari does not mean that the Supreme Court agreed with the Court of Appeals. The Court of Appeals’ decision sets a precedent in the Second Circuit, but the denial of certiorari does not have that effect. It simply means the Supreme Court has decided not to trouble itself with the question at this time.

Caveats

Copyright owners and practitioners should not read too much into this decision. Even if the discovery rule forecloses a finding of untimeliness on the face of a complaint, a defendant may still be able to assert untimeliness as an affirmative defense. Again, the reasonableness of delayed acquisition of knowledge of infringement must be decided on a case-by-case basis. Copyright owners and their attorneys should be vigilant in detecting infringement of protected works and diligent in timely filing claims.


When Your Car Is a Character

Carroll Shelby Licensing v. Halicki et al.

If you’re like me, you’ve probably owned a car with character, or even several cars with character, at some time in your life. A used Volkswagen Jetta with a replacement alternator that was held in place with washers. An old Plymouth Duster with a floor and doors that rusted clean through before the slant-6 ever had a problem. A Honda Fit that . . . well, this is probably a good place to stop dredging up memories. This post isn’t about cars with character. It’s about cars as characters. Specifically, the question whether it is possible to claim copyright protection in a car that appears in a book, movie, song, or other work.

The Ninth Circuit Court of Appeals had occasion to address this very question in Carroll Shelby Licensing et al. v. Halicki et al., No. 23-3731 (9th Cir. May 27, 2025).

Gone in 60 Seconds and Sequela

In the 1974 movie, Gone in 60 Seconds, the protagonist is tasked with stealing forty-eight cars. He and his colleagues assign the cars names. They call the Ford Mustang with black stripes “Eleanor.” Action ensues.

Three movies incorporating elements of this one were made and released thereafter — The Junkman, Deadline Auto Theft, and a year 2000 remake of Gone in 60 Seconds. A car that was made to look like the Mustang in the original Gone in 60 Seconds appeared in these movies, as well. The message, “‘Eleanor’ from the movie Gone in 60 Seconds” was painted on its side.

Shelby contracted with Classic Recreations to produce “GT-500CR” Mustangs. Without going into all of the contractual and procedural details, the owner of the copyright in the first three movies eventually asserted a claim of copyright infringement, raising the question whether copyright can be claimed in “Eleanor,” the Ford Mustang car that appeared in the movies.

Character Copyrights

Fictional works generally are eligible for copyright protection. Sometimes copyright protection extends to fictional characters within them as well. Mickey Mouse and Godzilla are examples.

NOTE: This blog post was not produced, sponsored or endorsed by Disney, and is not affiliated with Disney or any person, company or organization affiliated or associated with Disney.

The test for independent character copyright protection is set out in DC Comics v. Towle. In sum, the character must:

  1. have both physical and conceptual qualities;
  2. be “sufficiently delineated” to be recognizable as the same character whenever it appears; and
  3. be “especially distinctive” with “some unique elements of expression.”

The 9th Circuit Court of Appeals held that “Eleanor” failed to meet any of these criteria.

1. Physical and conceptual qualities

Eleanor had physical qualities, but the Court held that it lacked conceptual qualities. Conceptual qualities include “anthropomorphic qualities, acting with agency and volition, displaying sentience and emotion, expressing personality, speaking, thinking, or interacting with other characters or objects.” Shelby, supra. The character does not have to be human. It can be almost anything, so long as it has some of the above traits. Thus, the Batmobile qualified.

The Court determined, however, that Eleanor the car lacked any of these conceptual qualities, likening her to a prop rather than a character.

2. Sufficient delineation

Here, the Court determined that Eleanor lacked consistent traits. In some iterations, Eleanor appeared as a yellow and black Fastback Mustang; in others, as a gray and black Shelby GT-500 Mustang, or a rusty, paintless Mustang. The Court concluded that Eleanor was too lightly sketched to satisfy the “sufficient delineation” test.

3. Unique elements of expression

Having no regard at all for Eleanor’s feelings, the Court declared, “Nothing distinguishes Eleanor from any number of sports cars appearing in car-centric action films.” According to the Court, she was just a run-of-the-mill automobile. Accordingly, she failed the distinctiveness test.

Quiz

Just for fun, try your hand at applying the Towle test to determine which of these might qualify for copyright protection and which ones don’t:

  1. Chuck Berry’s Maybellene
  2. My Mother the Car
  3. Prince’s Little Red Corvette
  4. Chitty Chitty Bang Bang
  5. Christine
  6. KITT in Knight Rider
  7. The Magic School Bus
  8. Thomas the Tank Engine
  9. Gumdrop
  10. Dick Turpin
  11. Truckster in National Lampoon’s Vacation
  12. The Bluesmobile in The Blues Brothers
  13. The Hearse
  14. Herbie in The Love Bug
  15. The DeLorean in Back to the Future
  16. The Gnome-Mobile
  17. Ecto-1 in Ghostbusters
  18. Bessie in Doctor Who
  19. General Lee in The Dukes of Hazzard
  20. The Munster Coach in The Munsters
  21. Shellraiser in Teenage Mutant Ninja Turtles
  22. Benny the Cab in Who Framed Roger Rabbit?
  23. Lightning McQueen in Cars
  24. The Mystery Machine in Scooby-Doo
  25. The Gadgetmobile in Inspector Gadget
  26. Mustang Sally
  27. Killdozer
  28. Ivor the Engine
  29. Tootle
  30. Roary the Racing Car.

Give yourself 3,500 extra points if you are familiar with all of these references.

Points are not redeemable for value.

Concluding Thought

Even if a fictional character does not qualify for copyright protection, it might be protected as a trademark in some cases. The requirements for trademark protection are a subject for another day.

Photographers’ Rights

The Second Circuit Court of Appeals reversed a trial judge’s dismissal of a photographer’s copyright infringement complaint, holding that because “fair use” was not clearly established on the face of the complaint, the district court should not have dismissed the complaint sua sponte. Romanova v. Amilus, Inc.

Romanova v. Amilus, Inc., No. 23-828 (2nd Cir., May 23, 2025)

Photographer Jana Romanova created a photograph of a woman with a snake wrapped around her left hand and another snake crawling up her torso. (Not the one pictured here.) She licensed it to National Geographic Magazine for a single use. According to the complaint, Amilus, Inc. copied the photograph and published it on its website, and it did not respond to Romanova’s notifications demanding that the photograph be removed. This lawsuit followed.

The defendant did not appear or respond to the complaint, so Romanova moved for the entry of a default judgment. Rather than grant a default judgment, however, the district court judge sua sponte ordered Romanova to show cause why the court should not dismiss the case on the ground that the defendant’s use of the photograph was fair use. Although fair use is an affirmative defense, which defendants have the burden of asserting and proving, the judge opined that it did not need to be pleaded here because, in the judge’s view, the defense was “clearly established on the face of the complaint.”

Romanova appealed. The Second Circuit Court of Appeals reversed, effectively allowing the infringement claim to go forward.

Fair Use

In its decision, the Second Circuit Court of Appeals clarified how courts are to interpret and apply the four-factor “fair use” test outlined in the Copyright Act, 17 U.S.C. § 107 (purpose and character of the use; nature of the work; amount and substantiality of the portion copied; and the effect on the market for the work.)

The district court concluded that the defendant’s publication of the photograph communicated a different message than the one the photographer intended. According to the district court, the purpose of the publication in National Geographic was “to showcase persons in [her] home country of Russia that kept snakes as pets, specifically to capture pet snakes in common environments that are more associated with mainstream domesticated animals.” The district court found that the purpose of the defendant’s publication was to communicate a message about “the ever-increasing amount of pet photography circulating online.”

Apparently the district court was under the impression that the use of a copyright-protected work for any different purpose, or to communicate any different message, is “transformative” and therefore “fair use.” The Court of Appeals clarified that this is not the case. In addition to alleging and proving that the use was for a different purpose or conveyed a different meaning, a defendant seeking to establish a fair use defense must also allege and prove a justification for the copying.

Examples of purposes that may justify copying a work include commentary or criticism of the copied work, or providing information to the public about the copied work, in circumstances where the copy does not become a substitute for the work. (See, e.g., Authors Guild v. Google, Inc., 804 F.3d 202, 212 (2d Cir. 2015).) Copying for evidentiary purposes (such as to support a claim that the creator of the work published a defamatory statement) can also be a valid justification to support a fair use defense. Creating small, low-resolution copies of images (“thumbnails”) may be justified when the purpose is to facilitate Internet searching. (Perfect 10 v. Amazon.com, 508 F.3d 1146, 1165 (9th Cir. 2007).) Facilitating blind people’s access to a work may provide a justification for converting it into a format that blind people can read. (Authors Guild v. HathiTrust, 755 F.3d 87, 97 (2d Cir. 2014).)

The Court cited other examples of potential justifications for copying. The Court admonished, however, that the question whether justification exists is a fact-specific determination that must be made on a case-by-case basis.

[J]ustification is often found when the copying serves to critique, or otherwise comment on, the original, or its author, but can also be found in other circumstances, such as when the copying provides useful information about the original, or on other subjects, usually in circumstances where the copying does not make the expressive content of the original available to the public.

Romanova, supra.

The only “justification” the district court cited for the copying was that it believed the defendant merely wanted to illustrate its perception of a growing trend to publish photographs of people with pets. “Little could remain of an author’s copyright protection if others could secure the right to copy and distribute a work simply by asserting some fact about the copied work,” the Court observed. The defendant’s publication of the copy did not communicate criticism or commentary on the original photograph or its author, or any other subject, the Court held.

The Court held that the remaining three fair use factors also militated against a finding of fair use.

Sua Sponte Dismissal for “Fair Use”

Judge Sullivan filed a concurring opinion. He would have reversed on procedural grounds without reaching the substantive issue. Specifically, Judge Sullivan objected to the trial judge’s raising of the fair use defense sua sponte on behalf of a non-appearing defendant. Normally, if a complaint establishes a prima facie case for relief, the court does not consider affirmative defenses (such as fair use) unless the defendant asserts them. That is to say, fair use is an affirmative defense; the defendant, not the plaintiff, bears the burden of proof.

Conclusion

Appeals courts continue to rein in overly expansive applications of “transformative” fair use by the lower courts. Here, the Court of Appeals soundly reasoned that merely being able to articulate an additional purpose served by publishing an author’s entire work, unchanged, will not, by itself, suffice to establish either transformative use or fair use.

Court of Appeals Affirms Registration Refusal for AI-Generated Output

In 2019, Stephen Thaler developed an AI system he called The Creativity Machine. The machine generated output that he titled A Recent Entrance to Paradise. When he applied to register a copyright claim in the output, he listed the machine as the author and claimed ownership of the work as a work made for hire. In his application, he asserted that the work was autonomously created by a machine. The Copyright Office denied the claim on the basis that human authorship is a required element of a copyright claim.

On review, the United States District Court for the District of Columbia upheld the Copyright Office’s decision. Thaler attempted to argue, for the first time, that the work was copyrightable because he provided instructions and directed the machine’s creation of it. The district court found that he had waived that argument.

The Court of Appeals Affirms

Thaler sought review in the Court of Appeals for the District of Columbia Circuit. On March 18, 2025, the Court of Appeals affirmed. The Court cited language in the Copyright Act suggesting that Congress intended only human beings to be authors. The Court did not reach the question whether the Copyright Clause of the U.S. Constitution might protect machine-generated works if Congress should choose someday to extend copyright protection to these kinds of materials.

The Court held that the question whether Thaler could claim authorship on the basis that he made and directed the operation of the Creativity Machine had not been preserved for appeal.
