Trump’s Executive Order on AI

On December 11, 2025, President Trump issued another Executive Order. This one is intended to promote “national dominance” in “a race with adversaries for supremacy.” To “win,” the Order says, AI companies should not be encumbered by state regulation. “The policy of the United States,” it declares, is “to sustain and enhance the United States’ global AI dominance through a minimally burdensome national policy framework for AI.” The Order establishes an AI Litigation Task Force charged with challenging state AI laws that assertedly conflict with that framework.

Excepted from the Order are state laws on child safety protections, data center infrastructure, and state government use of AI.

Which State AI Laws?

The Order speaks generally about “state AI laws,” but does not define the term. Here are some examples of state AI laws:

Stalking and Harassment

A North Dakota statute criminalizes using a robot to frighten or harass another person. It defines a robot to include a drone or other system that uses AI technology. (N.D. Cent. Code § 12.1-17-07(1), (2)(f)). This appears to be a “state AI law.” North Dakota statutes also prohibit stalking accomplished by using either a robot or a non-AI form of technology. (N.D. Cent. Code § 12.1-17-07.1(1)(d)). Preempting the robot provisions would produce an anomalous result: stalking somebody would remain a crime unless you used an AI-powered device to do it.

Political Deepfakes

Several states have enacted laws prohibiting the distribution of political deepfakes to influence an election. These laws range from prohibitions on distributing a deepfake to influence an election within a specified period before the election to requirements that the deepfake be disclosed as AI-generated. Minn. Stat. § 609.771 is an example. The need for this kind of statute was highlighted in 2024, when someone used AI to clone Joe Biden’s voice and generate an audio file in which Mr. Biden himself seemed to be urging people not to vote.

Sexual Deepfakes

Both state and federal governments have enacted laws aimed at curbing the proliferation of “revenge porn.” The TAKE IT DOWN Act is an example. Minn. Stat. § 604.32 is another example (deepfakes depicting intimate body parts or sexual acts).

State and federal laws in this area cover much of the same ground. The principal difference is that the federal crime must involve interstate commerce; state crimes need not. The only practical effect of preempting this kind of state AI law, therefore, would be to eliminate state prohibitions of wholly intrastate sexual deepfakes. If the Executive Order achieves its objective, state laws that prohibit the creation or distribution of sexual deepfakes wholly within the same state, as some do, would be preempted. Making and distributing a sexual deepfake would then be lawful so long as it is transmitted only to people in the same state, and not to anyone in a different state.

Digital Replicas

Many states have enacted laws prohibiting or regulating the unauthorized creation and exploitation of digital replicas. The California Digital Replicas Act and Tennessee’s ELVIS Act are examples. AI is used in the creation of digital replicas, but it is unclear whether these kinds of enactments are “state AI laws.” Arguably, a person could use technologies more primitive than generative-AI to create a digital image of a person. If these statutes are preempted only to the extent they apply to AI-generated digital replicas, then people who commercially exploit other people’s faces and voices without authorization would simply be incentivized to use AI to do it.

Child Pornography

Several states have either enacted new laws or amended existing ones to bring AI-generated images of what appear to be real children within the prohibition against child pornography. See, e.g., N.D. Cent. Code § 12.1-27.2-01. The Executive Order exempts “child safety protections,” but an AI-generated image does not necessarily involve any real child. A state statute of this kind therefore arguably would not come within the meaning of a “child safety protection.”

Health Care Oversight

California’s Physicians Make Decisions Act requires a human being to oversee health care decisions about medical necessity, ensuring that such decisions are not left entirely to an AI bot. The law was enacted with the support of the California Medical Association to ensure that patients receive adequate health care. If the law were nullified, it would seem that hospitals would be free to replace doctors with AI chatbots.

Chatbots

Some states prohibit the deceptive use of a chatbot, such as by falsely representing to people who interact with one that they are dealing with a real person. In addition, some states have enacted laws requiring disclosure to consumers when they are interacting with a non-human AI. See, e.g., the Colorado Artificial Intelligence Act.

Privacy

Some states have either enacted stand-alone laws or amended existing privacy laws to ensure they protect the privacy of personally identifiable information stored by AI systems. See, e.g., Utah Code §§ 13-72a-201, -203 (regulating the sharing of a person’s mental health information by a chatbot); and the amendments to the California Consumer Privacy Act making it applicable to information stored in an AI system.

Disclosure

California’s Generative AI Training Data Transparency Act requires disclosure of training data used in developing generative-AI technology.

The Texas Responsible Artificial Intelligence Governance Act

Among other things, the Texas Responsible AI Governance Act prohibits the use of AI to restrict constitutional rights, to discriminate on the basis of race, or to encourage criminal activity. These seem like reasonable proscriptions.

Trump’s “AI czar,” venture capitalist David Sacks, has said the administration is not going to “push back” on all state laws, only “the most onerous” ones. It is unclear which of these will be deemed “onerous.”

State AI Laws are Not Preempted

News media headlines are trumpeting that the Executive Order preempts state AI laws. It does not. It directs executive agencies to try to persuade courts to strike down some state AI laws, and it contemplates working with Congress to formulate and enact preemptive legislation. It is doubtful that a President could constitutionally preempt state laws by executive order.

Postscript

Striving for uniformity in the regulation of artificial intelligence is not a bad idea. There should be room, though, for both federal and state legislation. Rather than abolishing state laws, a uniform code or model act for states might be a better idea. Moreover, if we are going to start caring about an onerous complex of differing state laws, and feeling a need to establish a national framework, perhaps the President and Congress might wish to address the sprawling morass of privacy and data security regulations in the United States.

Voice Cloning

Painting of Nipper by Francis Barraud (1898-99); subsequently used as a trademark with “His Master’s Voice.”

Lehrman v. Lovo, Inc.

On July 10, 2025, the federal district court for the Southern District of New York issued an Order granting in part and denying in part a motion to dismiss a putative class action lawsuit that Paul Lehrman and Linnea Sage commenced against Lovo, Inc. The lawsuit, Lehrman v. Lovo, Inc., alleges that Lovo used artificial intelligence to make and sell unauthorized “clones” of their voices.

Specifically, the complaint alleges that the plaintiffs are voice-over actors. For a fee, they read and record scripts for their clients. Lovo allegedly sells a text-to-speech subscription service that allows clients to generate voice-over narrations. The service is described as one that uses “AI-driven software known as ‘Generator’ or ‘Genny,'” which was “created using ‘1000s of voices.'” Genny allegedly creates voice clones, i.e., copies of real people’s voices. Lovo allegedly granted its customers “commercial rights for all content generated,” including “any monetized, business-related uses such as videos, audio books, advertising promotion, web page vlogging, or product integration.” (Lovo terms of service.) The complaint alleges that Lovo hired the plaintiffs to provide voice recordings for “research purposes only,” but that Lovo proceeded to exploit them commercially by licensing their use to Lovo subscribers.

This lawsuit ensued.

The complaint sets out claims for:

  • Copyright infringement
  • Trademark infringement
  • Breach of contract
  • Fraud
  • Conversion
  • Unjust enrichment
  • Unfair competition
  • Violations of New York civil rights law
  • Violations of New York consumer protection law

The defendant moved to dismiss the complaint for failure to state a claim.

The copyright claims

Sage alleged that Lovo infringed the copyright in one of her voice recordings by reproducing it in presentations and YouTube videos. The court allowed this claim to proceed.

Plaintiffs also claimed that Lovo’s unauthorized use of their voice recordings in training its generative-AI product infringed their copyrights in the sound recordings. The court ruled that the complaint did not contain enough factual detail about how the training process infringed one of the exclusive rights of copyright ownership. Therefore, it dismissed this claim with leave to amend.

The court dismissed the plaintiffs’ claims of output infringement, i.e., claims that the “cloned” voices the AI tool generated infringed copyrights in the original sound recordings.

Copyright protection in a sound recording extends only to the actual recording itself. Fixation of sounds that imitate or simulate the ones captured in the original recording does not infringe the copyright in the sound recording.

This issue often comes up in connection with copyrights in music recordings. If Chuck Berry writes a song called “Johnny B. Goode” and records himself performing it, he will own two copyrights – one in the musical composition and one in the sound recording. If a second person then records himself performing the same song, and he doesn’t have a license (compulsory or otherwise) to do so, that person would be infringing the copyright in the music but not the copyright in the sound recording. This is true even if he is very good at imitating Berry’s voice and guitar work. For a claim of sound recording infringement to succeed, it must be shown that the actual recording itself was copied.

Plaintiffs did not allege that Lovo used Genny to output AI-generated reproductions of their original recordings. Rather, they alleged that Genny is able to create new recordings that mimic attributes of their voices.

The court added that the sound of a voice is not copyrightable expression, and even if it were, the plaintiffs had registered claims of copyright in their recordings, not in their voices.

The trademark claims

In addition to infringement, the Lanham Act creates two other potential bases of trademark liability: (1) false association; and (2) false advertising. 15 U.S.C. sec. 1125(a)(1)(A) and (B). Plaintiffs asserted both kinds of claims. The judge dismissed these claims.

False association

The Second Circuit Court of Appeals recently held, in Electra v. 59 Murray Enter., Inc. and Souza v. Exotic Island Enters., Inc., that using a person’s likeness to create an endorsement without the person’s permission can constitute a “false association” violation. In other words, a federally-protected, trademark-like interest in one’s image, likeness, personality and identity exists. (See, e.g., Jackson v. Odenat.)

Although acknowledging that this right extends to one’s voice, the judge ruled that the voices in this case did not function as trademarks. They did not identify the source of a product or service. Rather, they were themselves the product or service. For this reason, the judge ruled that the plaintiffs had failed to show that their voices, as such, are protectable trademarks under Section 43(a)(1)(A) of the Lanham Act.

False Advertising

Section 43(a)(1)(B) of the Lanham Act (codified at 15 U.S.C. sec. 1125(a)(1)(B)) prohibits misrepresentations about “the nature, characteristics, qualities, or geographic origin of . . . goods, services, or commercial activities.” The plaintiffs claimed that Lovo marketed their voices under different names (“Kyle Snow” and “Sally Coleman”). The court determined that this was not misleading, however, because Lovo marketed the voices as what they were, namely, synthetic clones of the actors’ voices, not as their actual voices.

Plaintiffs also claimed that Lovo’s marketing materials falsely stated that the cloned voices “came with all commercial rights.” They asserted that they had not granted those rights to Lovo. The court ruled, however, that even if Lovo was guilty of misrepresentation, it was not the kind of misrepresentation that comes within Section 43(a)(1)(B), as it did not concern the nature, characteristics, qualities, or geographic origin of the voices.

State law claims

Although the court dismissed the copyright and trademark claims, it allowed some state law claims to proceed. Specifically, the court denied the motion to dismiss claims for breach of contract, violations of sections 50 and 51 of the New York Civil Rights Law, and violations of New York consumer protection law.

Both the common law and the New York Civil Rights Law prohibit the commercial use of a living person’s name, likeness or voice without consent. Known as “misappropriation of personality” or violation of publicity or privacy rights, this is emerging as one of the leading issues in AI law.

The court also allowed state law claims of false advertising and deceptive trade practices to proceed. The New York laws are not subject to the “nature, characteristics, qualities, or geographic origin” limitation set out in Section 43(a) of the Lanham Act.

Conclusion

I expect this case will come to be cited for the rule that copyright cannot be claimed in a voice. Copyright law protects only expression, not a person’s corporeal attributes. The lack of copyright protection for a person’s voice, however, does not mean that voice cloning is “legal.” Depending on the particular facts and circumstances, it may violate one or more other laws.

It also should be noted that after the Joe Biden voice-cloning incident of 2024, states have been enacting statutes regulating the creation and distribution of voice clones. Even where a specific statute is not applicable, though, a broader statute (such as the FTC Act or a similar state law) might cover the situation.

Images and references in this blog post are for illustrative purposes only. No endorsement, sponsorship or affiliation with any person, organization, company, brand, product or service is intended, implied, or exists.

Official portrait of Vice President Joe Biden in his West Wing Office at the White House, Jan. 10, 2013. (Official White House Photo by David Lienemann)

Top Copyright Cases of 2024

Warner Chappell Music Inc. v. Nealy

The Copyright Act imposes a three-year period of limitations for copyright infringement claims. There has been a split in the circuits about whether this means that damages could be claimed only for infringement occurring during the three-year period or whether damages could be recovered for earlier acts of infringement so long as the claim is timely filed.

The issue arises in cases where a claimant invokes the discovery rule. The general rule is that a limitations period runs from the date of the act giving rise to the cause of action. The discovery rule, by contrast, measures the limitations period from the date the infringing act is discovered. Thus, for example, if an infringing act occurred in 2012 but the copyright owner did not learn about it until 2022, then under the traditional rule, the claim would be time-barred. Under the discovery rule, it would not be.

The Court’s holding means that if the discovery rule applies in the jurisdiction where suit is filed, and a claimant properly invokes it, then damages are not limited to the three years preceding suit. Rather, any damages incurred since the date of the infringing act are recoverable.

The Court did not rule on the validity of the discovery rule.
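For the computationally inclined, here is a minimal sketch, in Python, of how the two accrual rules interact with the Nealy damages holding. The dates and the simplified one-line accrual test are hypothetical illustrations of my own, not anything drawn from the opinion:

```python
from datetime import date

LIMITATIONS_YEARS = 3  # 17 U.S.C. § 507(b): three-year limitations period

# Hypothetical facts, mirroring the 2012/2022 example above.
infringing_act = date(2012, 6, 1)   # date of the infringing act
discovery      = date(2022, 6, 1)   # date the owner learned of it
suit_filed     = date(2023, 6, 1)   # date the complaint is filed

def timely(accrual: date, filed: date) -> bool:
    """Treat a claim as timely if filed within three years of accrual.
    (Simplified; real accrual questions are more nuanced.)"""
    deadline = date(accrual.year + LIMITATIONS_YEARS, accrual.month, accrual.day)
    return filed <= deadline

# Traditional (injury) rule: the claim accrues at the infringing act.
print(timely(infringing_act, suit_filed))   # False -- time-barred

# Discovery rule: the claim accrues when the infringement is discovered.
print(timely(discovery, suit_filed))        # True -- timely

# Nealy's point: once a claim is timely, recoverable damages reach back to
# the infringing act itself, not just to the three years preceding suit.
damages_window = (infringing_act, suit_filed)
```

Under the injury rule, the hypothetical claim dies; under the discovery rule it survives, and under Nealy the recoverable damages then reach all the way back to 2012.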

Warner Chappell Music Inc. v. Nealy, 601 U.S. ____ (2024). Read more here.

Hachette Book Group Inc. v. Internet Archive

I wrote about this case back in 2022, when it was at the summary judgment stage in the district court for the Southern District of New York. The complaint, filed by book publishers, alleged that the Internet Archive made digital copies of over a million print books and then freely distributed the copies to members of the public, all without the permission of the copyright owners. In 2023, the district judge ruled in favor of the publishers, holding that the enterprise was not “fair use.” This year, the Second Circuit Court of Appeals affirmed the decision.

To some, the decision might seem like a no-brainer. Copying other people’s books and giving them away for free, without the copyright owners’ permission, sounds like core copyright infringement, right? Yet, before the Warhol v. Goldsmith decision in 2023, courts had been applying such an expansive view of the “transformative use” branch of fair use that some people thought that making digital copies of a print book was categorically “transformative” and therefore fair use. This decision makes it clear that no, it isn’t.

The Internet Archive has said it will not appeal the decision to the United States Supreme Court.

Hachette Book Group Inc. et al. v. Internet Archive, No. 23-1260 (2d Cir. 2024)

Griner v. King

U.S. Representative Steve King’s campaign committee used a copyright-protected photograph in his campaign without permission. The committee argued fair use and that it had an “implied license” to use the image because it had been widely circulated as a meme on the Internet. The Eighth Circuit Court of Appeals upheld an Iowa jury’s verdict for the copyright owner.

Griner et al. v. King et al., No. 23-2117 (8th Cir. 2024)

The Intercept Media v. OpenAI

This isn’t really a momentous decision in terms of precedential value, but it is the first major victory for Big AI in the plethora of AI-related lawsuits it is facing.

The Intercept Media, Inc. sued OpenAI and Microsoft Corporation for alleged Digital Millennium Copyright Act (DMCA) violations in connection with training the AI tool ChatGPT. The defendants filed a motion to dismiss. On November 21, 2024, the court dismissed the claims against Microsoft with prejudice. It dismissed the 17 U.S.C. § 1202(b)(3) claim against OpenAI but allowed the claim under 17 U.S.C. § 1202(b)(1) to proceed.

Section 1202(b)(1) prohibits unauthorized removal or alteration of copyright management information, including author information and the copyright notice.

The Intercept Media Inc. v. OpenAI Inc., No. 1:24-cv-01515 (S.D.N.Y. Nov. 21, 2024).

Stay tuned…

Many AI-related copyright lawsuits continued to proceed through the courts in 2024, with decisions expected in 2025 or later.

The New Copyright Circumvention Rules

In 1998, Congress enacted the Digital Millennium Copyright Act (“DMCA”). In addition to establishing the notice-and-takedown regime with which website and blog owners are (or should be) familiar, the DMCA made it unlawful to “circumvent a technological measure that effectively controls access to” copyrighted material. (17 U.S.C. § 1201(a)(1)(A)). The Act set out some permanent exemptions, i.e., situations where circumvention is allowed. In addition, it gave the Librarian of Congress power to periodically establish new ones. These additional exemptions are temporary, lasting three years, but the Librarian of Congress can and does renew them. On October 18, 2024, the Librarian of Congress issued a Final Rule renewing some exemptions and creating some new ones.

What is “circumvention of a technological measure”?

Circumventing a technological measure means “to descramble a scrambled work, to decrypt an encrypted work, or otherwise to avoid, bypass, remove, deactivate, or impair a technological measure, without the authority of the copyright owner.” (17 U.S.C. § 1201(a)(3)(A)).

So, no decrypting or unscrambling to get access to a copyrighted work. What else? Well, anything that involves avoiding or bypassing a technological measure without the copyright owner’s permission. You can’t do that, either.

A technological measure that “controls access to a work” can be anything that “requires the application of information, or a process or a treatment, with the authority of the copyright owner, to gain access to the work.” (17 U.S.C. § 1201(a)(3)(B)). Entering a password-protected website without a password the copyright owner has authorized you to use is an example.

The permanent exemptions

Section 1201 of Title 17 lists permanent exemptions for:

  • Nonprofit libraries, archives, and educational institutions that circumvent copyright protection measures solely for the purpose of determining whether to acquire a copy of the work for a permitted purpose
  • Law enforcement, intelligence, and government activities
  • Reverse engineering
  • Encryption research
  • Prevention of access of minors to material on the Internet
  • Prevention of the collection or dissemination of personally identifying information
  • Security testing

Detailed conditions apply to each of these exemptions. If you are thinking of invoking one of them, read the entire statutory provision carefully and seek professional legal advice.

Renewed temporary exemptions

The following temporary exemptions have been renewed for another 3-year term:

  • Fair use of short portions of motion pictures for certain educational and derivative uses

This includes use in a parody or in a documentary film about the work’s biographical or historically significant nature; use in a noncommercial video; use in nonfiction multimedia e-books; use for educational purposes by educational institution faculty and students; educational uses in Massive Open Online Courses; and educational uses in nonprofit digital and media literacy programs offered by libraries, museums, and other organizations.

  • Closed captioning and other disability access services by disability service offices or similar units at educational institutions for students, faculty or staff with disabilities
  • Preservation of copies of motion pictures by an eligible library, archives, or museum
  • Scholarly research and teaching involving text and data mining of motion pictures or electronic literary works by researchers affiliated with a nonprofit educational institution
  • Literary works or previously published sheet music that are distributed electronically and include access controls that interfere with assistive technologies
  • Access to patient data on medical devices or monitoring systems
  • Computer programs that unlock wireless devices to allow connection of a device to an alternative wireless network
  • “Jailbreaking,” i.e., circumventing controls on computer programs that prevent electronic devices from interoperating with, or removing, software applications, for the purpose of jailbreaking smartphones and other portable all-purpose computing devices, smart televisions, voice assistant devices, and routers and dedicated networking devices
  • Computer programs that control motorized land vehicles, marine vessels, and mechanized agricultural vehicles for the purposes of diagnosis, repair, or modification of a vehicle or vessel function
  • Diagnosis, maintenance or repair of devices designed primarily for use by consumers
  • Access to computer programs that are contained in and control the functioning of medical devices or systems, and related data files, for purposes of diagnosis, maintenance, or repair
  • Security research
  • Individual play by video gamers and preservation of video games by a library, archives or museum for which outside server support has been discontinued, and preservation by a library, archives, or museum of discontinued video games that never required server support
  • Preservation of computer programs by libraries, archives, and museums
  • Computer programs that operate 3D printers to allow use of alternative material
  • Investigation of potential infringement of free and open-source computer programs

Again, detailed conditions apply to each of these exemptions. If you are thinking of invoking one of them, read the entire provision carefully and seek professional legal advice.

New Exemptions

New 3-year exemptions the Librarian of Congress announced in October 2024 include:

  • Sharing of copies of corpora by academic researchers with researchers affiliated with other nonprofit institutions of higher education for purposes of conducting independent text or data mining research and teaching, where those researchers are in compliance with the exemption
  • Diagnosis, maintenance and repair of retail-level commercial food preparation equipment
  • Access, storage and sharing of vehicle operational and telematics data generated by motorized land vehicles and marine vessels

And once again, detailed conditions apply to each of these exemptions. If you are thinking of invoking one of them, read the entire provision carefully and seek professional legal advice.


Confused by copyright, trademark and other IP issues? Read my book, IP Law for Non-IP Attorneys, available on Amazon.com

Joint Custody and Equal Shared Parenting Laws

Yes, this is off-topic. It is, however, the reason I haven’t been posting to this blog lately. In addition to finishing out some cases, I have been working on developing this 90-minute program for the past few months.

In what seems like a lifetime ago, I practiced family law. During that time, I witnessed first-hand the havoc the sole-custody regime wreaked on families, both parents and children. I’ve always believed there had to be a better way.

In this webinar, I will be presenting a brief overview of the joint custody and equal shared parenting laws of the fifty U.S. states. Professor Daniel Fernandez-Kranz will join me to talk about how equal shared parenting has been working in Spain. Kentucky family law attorney Carl Knochelmann, Jr. will talk about the impact of Kentucky’s statute, the first-ever presumptive equal shared parenting time law. Professor Donald Hubin will round things out with a look at what can be learned from Ohio’s experiences with both equal shared parenting and the traditional sole custody model. He will also present findings about the interplay of equal shared parenting laws and domestic violence, based on data gathered from Kentucky and Ohio.

California has approved the webinar for 90 minutes of MCLE and LSCLE (family law specialist) continuing legal education credit. Continuing legal and mediator education credits are available in many other states as well.

The live webinar is on October 24, 2024. There will be a video replay on November 8, 2024.

If you have an interest, you can find more information, and registration links, at EchionCLE.com

I promise I will get back to copyright and trademark issues soon.

Can We Talk Here? – Trademark Speech Rights

In recent years, the United States Supreme Court has been grappling with the thorny question of how the First Amendment applies to trademarks. In this blog post, attorney Thomas B. James attempts a reconciliation of recent pronouncements.

The Slants (Matal v. Tam)

Simon Tam, lead singer of the band The Slants, tried to register the band name as a trademark. The USPTO denied the application, citing 15 U.S.C. § 1052(a). That provision prohibited the registration of any trademark that could “disparage . . . or bring . . . into contemp[t] or disrepute” any persons. The Federal Circuit Court of Appeals declared the statute facially unconstitutional under the First Amendment. The U.S. Supreme Court affirmed.

The USPTO argued that the issuance of a registration certificate is “government speech.” Because the government and its members are not required to maintain neutrality in the views they express, and are only required to maintain viewpoint neutrality when regulating private speech, the USPTO contended that it was not required to maintain viewpoint neutrality when deciding whether to issue a trademark registration certificate. The Court rejected that argument, holding that a trademark is private speech. As such, the government is not free to engage in viewpoint discrimination when deciding which marks to favor with a registration certificate.

“If the federal registration of a trademark makes the mark government speech, the Federal Government is babbling prodigiously and incoherently.”

— Hon. Samuel Alito, in Matal v. Tam

Commercial speech

At one time, the Court took the position that the First Amendment does not protect commercial speech (speech relating to the marketing of products or services). Valentine v. Chrestensen (1942) (“[T]he Constitution imposes no . . . restraint on government as it respects purely commercial advertising.”)

In Virginia State Pharmacy Bd. v. Virginia Citizens Consumer Council (1976) the Court reversed its position on this point, declaring that “the free flow of commercial information” is important enough to warrant First Amendment protection.

The Court announced a framework for assessing the constitutionality of a restriction on commercial speech a few years later, in Central Hudson Gas & Elec. Corp. v. Public Serv. Comm’n of N.Y. (1980).

The Central Hudson test, as it has come to be known, holds, first, that commercial speech receives First Amendment protection only if it concerns lawful activity and is not false or misleading. If it clears those two hurdles, then government regulation of it is permissible only if the regulation directly advances a substantial government interest and is not more extensive than necessary to serve that interest. That is to say, the regulation must be narrowly tailored to advance a substantial government interest.

In other words, commercial speech receives an intermediate level of scrutiny. Unlike regulations of political speech, the government only needs to identify a “substantial” interest, not necessarily a “compelling” one. Also unlike political speech, the regulation in question does not have to be the least speech-restrictive means of achieving it. It is required to be no more extensive than necessary to serve the interest in question, however.
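For readers who find it easier to see a multi-step legal test laid out as a decision procedure, here is a rough Python sketch of the Central Hudson framework. The attribute names are hypothetical labels of my own, not language from the opinion, and real cases of course turn on judgment calls rather than booleans:

```python
from dataclasses import dataclass

@dataclass
class Regulation:
    """Hypothetical flags standing in for the four Central Hudson steps."""
    speech_concerns_lawful_activity: bool
    speech_is_misleading: bool
    directly_advances_substantial_interest: bool
    no_more_extensive_than_necessary: bool

def survives_central_hudson(reg: Regulation) -> bool:
    # Step 1: speech proposing unlawful activity, or speech that is false
    # or misleading, receives no First Amendment protection, so the
    # regulation stands without further scrutiny.
    if not reg.speech_concerns_lawful_activity or reg.speech_is_misleading:
        return True
    # Steps 2-4: otherwise the regulation survives only if it directly
    # advances a substantial government interest and is no more extensive
    # than necessary to serve that interest.
    return (reg.directly_advances_substantial_interest
            and reg.no_more_extensive_than_necessary)

# A truthful ad for a lawful product, restricted by a rule that is not
# narrowly tailored: the restriction fails intermediate scrutiny.
print(survives_central_hudson(Regulation(True, False, True, False)))  # False
```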

In Tam, the Court held that it did not need to decide whether trademarks are commercial speech or not. The Court rejected the government’s contention that it has a substantial interest in protecting people from hearing things they might find offensive, declaring that “the proudest boast of our free speech jurisprudence is that we protect the freedom to express ‘the thought that we hate.'” See United States v. Schwimmer, 279 U. S. 644, 655 (1929) (Holmes, J., dissenting).

The Court also rejected the second justification the government offered, namely, that disparaging trademarks disrupt the flow of commerce. The statute, the Court held, is not narrowly drawn to eradicate invidious discrimination. Because it prohibits registration of trademarks that disparage any person, group or institution, it would also prohibit registration of marks like “Down with racists” or “Slavery is an evil institution.”

The Court also identified what it described as a “deeper problem”:

If affixing the commercial label permits the suppression of any speech that may lead to political or social “volatility,” free speech would be endangered.

— Hon. Samuel Alito, in Matal v. Tam

In short, the Court acknowledged that commercial speech can have non-commercial expressive content. When that is the case, courts should zealously guard against government encroachment on private speech rights.

FUCT

Two years later, the Court was asked to review the USPTO’s refusal to register FUCT as a trademark. The Court came to the same conclusion about the statute’s prohibition against registering “scandalous” or “immoral” trademarks as it had about the prohibition against registering “disparaging” ones. Because this prohibition, too, involves viewpoint discrimination, the Court held that it likewise violates the First Amendment. Iancu v. Brunetti, 139 S. Ct. 2294 (2019).

Bad Spaniels (Jack Daniel’s Properties v. VIP Products)

The Court revisited trademark speech rights in 2023, in Jack Daniel’s Properties v. VIP Products.

I’ve written about this case before. Basically, Jack Daniel’s Properties owned (and still owns) trademarks in the Jack Daniel’s bottle and in many of the words and graphics on the label of its alcoholic beverages. VIP Products began making and marketing a dog toy designed to look like a Jack Daniel’s whiskey bottle. The toy had labels affixed to it parodying the Jack Daniel’s label. For example, it used the phrase “Bad Spaniels” in place of “Jack Daniel’s.” And instead of “Old No. 7 Brand Tennessee Sour Mash Whiskey,” it displayed “The Old No. 2 On Your Tennessee Carpet.” Jack Daniel’s issued a cease-and-desist demand. In response, VIP Products filed a declaratory judgment action seeking a declaration that its parody neither infringed nor diluted Jack Daniel’s trademarks and, in any event, was a protected “fair use” under the First Amendment.

The district court rejected these claims, essentially holding that the First Amendment does not establish a “fair use” exception for the expressive aspect(s) of a trademark when it is used as a source-identifier for a product. The Ninth Circuit Court of Appeals reversed.

The United States Supreme Court reversed the Ninth Circuit. The Court opined that although using a trademark for an expressive purpose might qualify for First Amendment protection, that protection does not insulate the user from trademark infringement or dilution liability if it is also used as a source-identifier. Parodic uses are exempt from liability only if they are not used to designate the source of a product or service.

The Court did not mention Central Hudson or discuss the commercial speech doctrine. It is likely the Court did not feel a need to do that because trademark infringement involves trademarks that are claimed to be likely to confuse consumers about the source of a product or service. Such trademarks would not clear one of the first hurdles for commercial speech protection under Central Hudson, namely, that the speech must not be misleading.

“Trump Too Small” (Vidal v. Elster)

Steve Elster applied to federally register the trademark “Trump too small” for use on shirts and hats. The USPTO denied the application, citing 15 U.S.C. § 1052(c). That provision prohibits the registration of a mark that “[c]onsists of or comprises a name . . . identifying a particular living individual except by his written consent.” Elster appealed, asserting that this statute infringed his First Amendment right to free speech.

The Federal Circuit Court of Appeals agreed with him. The U.S. Supreme Court, however, reversed the Federal Circuit, holding that this provision of the Lanham Act does not violate the First Amendment. Vidal v. Elster, 602 U.S. __ (2024).

The decision in this case is consistent with Jack Daniel’s. Unlike Jack Daniel’s, this case did not involve a claim that the use of the trademark was likely to cause consumer confusion about the source of the product. (After all, how likely would consumers be to mistakenly believe that Trump was marketing products ridiculing his own size?) In this case, the first two hurdles for commercial speech protection under Central Hudson would appear to have been cleared.

Reconciling this decision with Tam is not as easy. What happened to the idea the Court voiced in Tam that when commercial speech has non-commercial expressive content, courts should zealously guard against government encroachment on private speech rights?

The legislative history of Section 1052(c) demonstrates that the prohibition against using a living person’s name as part of a trademark was enacted for essentially the same reason that the prohibitions against disparaging, scandalous or immoral trademarks were: Members of Congress found the “idea of prostituting great names by sticking them on all kinds of goods” — like the idea of including scandalous, immoral or disparaging content in a trademark — to be “very distasteful,” and wanted “to prevent such outrages of the sensibilities of the American people.”1 That was the very same kind of “interest” that the government invoked in Tam and that the Court found insufficient and not tailored narrowly enough to sustain the speech restriction at issue in that case. What was different here?

By requiring consent, Section 1052(c) effectively precludes the registration of a mark that criticizes an elected government official while allowing the official to register positive messages about himself or herself. HILLARY FOR AMERICA was permitted to be registered, but HILLARY FOR PRISON was not. It seems an awful lot like viewpoint discrimination, doesn’t it?

Why shouldn’t the politically expressive aspect of a trademark (as distinguished from the purely source-identifying aspect) receive the same exacting strict scrutiny analysis that normally applies to regulations of political speech?

Well, the Supreme Court did not think this case involved viewpoint discrimination. The requirement of consent to use a person’s name in a trademark applies to people of all political persuasions, the Court reasoned. Consent would be required to register a politician’s name as a trademark whether the politician in question is a Democrat, a Republican, a Communist, or anything else, and consent would be required whether the politician in question supports or opposes, say, abortion rights, or gun rights, or anything else.

Nevertheless, the fact remains that the statute, as applied, requires an elected official’s prior approval of a trademark before it can be registered. Maybe that doesn’t rise to the level of a direct prior restraint on speech, but it would certainly seem to have a chilling effect on political speech at the core of the First Amendment.

Conclusion

It is not at all clear to me that the cases can be reconciled on a logically coherent doctrinal basis. Reliance on the common law history of trademarks might support a determination that Section 1052(c) is not unconstitutional on its face. I am not completely convinced, however, that the Court was adequately responsive to the argument that the statute is unconstitutional as applied to the names of elected officials. But what do I know? I’m just some guy living next to a cornfield in the middle of nowhere.

  1. See Respondent’s Brief at p. 7.

Generative-AI as Unfair Trade Practice

While Congress and the courts grapple with generative-AI copyright issues, the FTC weighs in on the risks of unfair competition, monopolization, and consumer deception.

FTC Press Release excerpt

While Congress and the courts are grappling with the copyright issues that AI has generated, the federal government’s primary consumer watchdog has made a rare entry into the realm of copyright law. The Federal Trade Commission (FTC) has filed a Comment with the U.S. Copyright Office suggesting that generative-AI could be (or be used as) an unfair or deceptive trade practice. The Comment was filed in response to the Copyright Office’s request for comments as it prepares to begin rule-making on the subject of artificial intelligence (AI), particularly generative-AI.

Monopolization

The FTC is responsible for enforcing the FTC Act, which broadly prohibits “unfair or deceptive” practices. The Act protects consumers from deceptive and unscrupulous business practices. It is also intended to promote fair and healthy competition in U.S. markets. The Supreme Court has held that all violations of the Sherman Act also violate the FTC Act.

So how does generative-AI raise monopolization concerns? The Comment suggests that incumbents in the generative-AI industry could engage in anti-competitive behavior to ensure continuing and exclusive control over the use of the technology. (More on that here.)

The agency cited the usual suspects: bundling, tying, exclusive or discriminatory dealing, mergers, acquisitions. Those kinds of concerns, of course, are common in any business sector. They are not unique to generative-AI. The FTC also described some things that are matters of special concern in the AI space, though.

Network effects

Because positive feedback loops improve the performance of generative-AI, it gets better as more people use it. This results in concentrated market power in incumbent generative-AI companies with diminishing possibilities for new entrants to the market. According to the FTC, “network effects can supercharge a company’s ability and incentive to engage in unfair methods of competition.”

Platform effects

As AI users come to depend on a particular incumbent generative-AI platform, the company that owns the platform could take steps to lock its customers into using that platform exclusively.

Copyrights and AI competition

The FTC Comment indicates that the agency is not only weighing the possibility that AI unfairly harms creators’ ability to compete. (The use of pirated materials, or the misuse of copyrighted materials, can be an unfair method of competition under Section 5 of the FTC Act.) It is also considering the possibility that generative-AI may deceive, or be used to deceive, consumers. Specifically, the FTC expressed a concern that “consumers may be deceived when authorship does not align with consumer expectations, such as when a consumer thinks a work has been created by a particular musician or other artist, but it has been generated by someone else using an AI tool.” (Comment, page 5.)

In one of my favorite passages in the Comment, the FTC suggests that training AI on protected expression without consent, or selling output generated “in the style of” a particular writer or artist, may be an unfair method of competition, “especially when the copyright violation deceives consumers, exploits a creator’s reputation or diminishes the value of her existing or future works….” (Comment, pages 5 – 6).

Fair Use

The significance of the FTC’s injection of itself into the generative-AI copyright fray cannot be overstated. It is extremely likely that during their legislative and rule-making deliberations, both Congress and the Copyright Office are going to focus the lion’s share of their attention on the fair use doctrine. They are most likely going to try to allow generative-AI outfits to continue to infringe copyrights (it is already a multi-billion-dollar industry, after all, and one with obvious potential political value), while at the same time imposing at least some limitations to preserve a few shards of the copyright system. Maybe they will devise a system of statutory licensing like they did when online streaming, and the widespread copyright infringement it facilitated, became a thing.

Whatever happens, the overarching question for Congress is going to be: What kinds of copyright infringement should be considered “fair” use?

Copyright fair use normally is assessed using a four-prong test set out in the Copyright Act. Considerations about unfair competition arguably are subsumed within the fourth factor in that analysis – the effect the infringing use has on the market for the original work.

The other objective of the FTC Act – protecting consumers from deception — does not neatly fit into one of the four statutory factors for copyright fair use. I believe a good argument can be made that it should come within the coverage of the first prong of the four-factor test: the purpose and character of the use. The task for Congress and the Copyright Office, then, should be to determine which particular purposes and kinds of uses of generative-AI should be thought of as fair. There is no reason the Copyright Office should avoid considering Congress’s objectives, expressed in the FTC Act and other laws, when making that determination.

The Top 3 Generative-AI Copyright Issues

Black hole consuming a star. Photo credit: NASA.

What are your favorite generative-AI copyright issues? In this capsule summary, Cokato attorney Tom James shares his three favorites.

Generative artificial intelligence refers collectively to technology that is capable of generating new text, images, audio/visual and possibly other content in response to a user’s prompts. These systems are trained by feeding them mass quantities of ABC (already-been-created) works. Some of America’s biggest mega-corporations have invested billions of dollars in this technology. They are now facing a barrage of lawsuits, most of them asserting claims of copyright infringement.

Issue #1: Does AI Output Infringe Copyrights?

Copyrights give their owners an exclusive right to reproduce their copyright-protected works and to create derivative works based on them. If a generative-AI user prompts the service to reproduce the text of a pre-existing work, and it proceeds to do so, this could implicate the exclusive right of reproduction. If a generative-AI user prompts it to create a work in the style of another work and/or author, this could implicate the exclusive right to create derivative works.

To establish infringement, it will be necessary to prove copying. Two identical but independently created works may each be protected by copyright. Put another way, a person is not guilty of infringement merely by creating a work that is identical or similar to another if he/she/it came up with it completely on his/her/its own.

Despite “training” their proteges on existing works, generative-AI outfits deny that their tools actually copy any of them. They say that any similarity to any existing works, living or dead, is purely coincidental. Thus, OpenAI has stated that copyright infringement “is an unlikely accidental outcome.”

The “accidental outcome” defense seems to me like a hard one to swallow in those cases where a prompt involves creating a story involving a specified fictional character that enjoys copyright protection. If the character is distinctive enough — and a piece of work in and of itself, so to speak — to enjoy copyright protection (such as, say, Mr. Spock from the Star Trek series), then any generated output would seem to be an unauthorized derivative work, at least if the AI tool is any good.

If AI output infringes a copyright in an existing work, who would be liable for it? Potentially, the person who entered the prompt might be held liable for direct infringement. The AI tool provider might arguably be liable for contributory infringement.

Issue #2: Does AI Training Infringe Copyrights?

AI systems are “trained” to create works by exposing a computer program to large numbers of existing works downloaded from the Internet.

When content is downloaded from the Internet, a copy of it is made. This process will “involve the reproduction of entire works or substantial portions thereof.” OpenAI, for example, acknowledges that its programs are trained on “large, publicly available datasets that include copyrighted works” and that this process “involves first making copies of the data to be analyzed….” Making these copies without permission may infringe the copyright holders’ exclusive right to make reproductions of their works.

Generative-AI outfits tend to argue that the training process is fair use.

Fair use claims require consideration of four statutory factors:

  • the purpose and character of the use;
  • the nature of the work copied;
  • the amount and substantiality of the portion copied; and
  • the effect on the market for the work.

OpenAI relies on the precedent set in Authors Guild v. Google for its invocation of “fair use.” That case involved Google’s copying of the entire text of books to construct its popular searchable database.

A number of lawsuits currently pending in the courts are raising the question whether, and when, the AI training process is “fair use.”

Issue #3: Are AI-Generated Works Protected by Copyright?

The Copyright Act affords copyright protection to “original works of authorship.” The U.S. Copyright Office recognizes copyright only in works “created by a human being.” Courts, too, have declined to extend copyright protection to nonhuman authors. (Remember the monkey selfie case?) A recent copyright registration applicant has filed a lawsuit against the U.S. Copyright Office alleging that the Office wrongfully denied registration of an AI-generated work. A federal court has now rejected his argument that human authorship is not required for copyright ownership.

In March 2023, the Copyright Office released guidance stating that when AI “determines the expressive elements of its output, the generated material is not the product of human authorship.” Moreover, an argument might be made that a general prompt, such as “create a story about a dog in the style of Jack London,” is an idea, not expression. It is well settled that only expression gets copyright protection; ideas do not.

In September 2023, the Copyright Office Review Board affirmed the Office’s refusal to register a copyright in a work that was generated by Midjourney and then modified by the applicant, on the basis that the applicant did not disclaim the AI-generated material.

The Office also has the power to cancel improvidently granted registrations. (Words to the wise: Disclose and disclaim.)

These are my favorite generative-AI legal issues. What are yours?

AI Legislative Update

Congressional legislation to regulate artificial intelligence (“AI”) and AI companies is in the early formative stages. Just about the only thing that is certain at this point is that federal regulation in the United States is coming.

In August 2023, Senators Richard Blumenthal (D-CT) and Josh Hawley (R-MO) introduced a Bipartisan Framework for U.S. AI Act. The Framework sets out five bullet points identifying Congressional legislative objectives:

  • Establish a federal regulatory regime implemented through licensing AI companies, to include requirements that AI companies provide information about their AI models and maintain “risk management, pre-deployment testing, data governance, and adverse incident reporting programs.”
  • Ensure accountability for harms through both administrative enforcement and private rights of action, where “harms” include privacy and civil rights violations. The Framework proposes making Section 230 of the Communications Decency Act inapplicable to these kinds of actions. (Section 230 is the provision that generally grants immunity to Facebook, Google and other online service providers for user-provided content.) The Framework identifies the harms about which it is most concerned as “explicit deepfake imagery of real people, production of child sexual abuse material from generative A.I. and election interference.” Noticeably absent is any mention of harms caused by copyright infringement.
  • Restrict the sharing of AI technology with Russia, China or other “adversary nations.”
  • Promote transparency: The Framework would require AI companies to disclose information about the limitations, accuracy and safety of their AI models to users; would give consumers a right to notice when they are interacting with an AI system; would require providers to watermark or otherwise disclose AI-generated deepfakes; and would establish a public database of AI-related “adverse incidents” and harm-causing failures.
  • Protect consumers and kids. “Consumers should have control over how their personal data is used in A.I. systems and strict limits should be imposed on generative A.I. involving kids.”

The Framework does not address copyright infringement, whether of the input or the output variety.

The Senate Judiciary Committee Subcommittee on Privacy, Technology, and the Law held a hearing on September 12, 2023. Witnesses called to testify generally approved of the Framework as a starting point.

The Senate Commerce, Science, and Transportation Subcommittee on Consumer Protection, Product Safety and Data Security also held a hearing on September 12, titled “The Need for Transparency in Artificial Intelligence.” One of the witnesses, Dr. Ramayya Krishnan of Carnegie Mellon University, did raise a concern about the use of copyrighted material by AI systems and the economic harm it causes for creators.

On September 13, 2023, Sen. Chuck Schumer (D-NY) held an “AI Roundtable.” Invited attendees present at the closed-door session included Bill Gates (Microsoft), Elon Musk (xAI, Neuralink, etc.), Sundar Pichai (Google), Charlie Rivkin (MPA), and Mark Zuckerberg (Meta). Gates, whose Microsoft company, like those headed by some of the other invitees, has been investing heavily in generative-AI development, touted the claim that AI could target world hunger.

Meanwhile, Dana Rao, Adobe’s Chief Trust Officer, penned a proposal that Congress establish a federal anti-impersonation right to address the economic harms generative-AI causes when it impersonates the style or likeness of an author or artist. The proposed law would be called the Federal Anti-Impersonation Right Act, or “FAIR Act,” for short. The proposal would provide for the recovery of statutory damages by artists who are unable to prove actual economic damages.

Generative AI: Perfect Tool for the Age of Deception

For many reasons, the new millennium might well be described as the Age of Deception. Cokato Copyright Attorney Tom James explains why generative-AI is a perfect fit for it.

Image by Gerd Altmann on Pixabay.

What is generative AI?

“AI,” of course, stands for artificial intelligence. Generative AI is a variety of it that can produce content such as text and images, seemingly of its own creation. I say “seemingly” because in reality these kinds of AI tools are not really independently creating these images and lines of text. Rather, they are “trained” to emulate existing works created by humans. Essentially, they are derivative work generation machines that enable the creation of derivative works based on potentially millions of human-created works.

AI has been around for decades. It wasn’t until 2014, however, that the technology began to be refined to the point that it could generate text, images, video and audio so similar to real people and their creations that it is difficult, if not impossible, for the average person to tell the difference.

Rapid advances in the technology in the past few years have yielded generative-AI tools that can write entire stories and articles, seemingly paint artistic images, and even generate what appear to be photographic images of people.

AI “hallucinations” (aka lies)

In the AI field, a “hallucination” occurs when an AI tool (such as ChatGPT) generates a confident response that is not justified by the data on which it has been trained.

For example, I queried ChatGPT about whether a company owned equally by a husband and wife could qualify for the preferences the federal government sets aside for women-owned businesses. The chatbot responded with something along the lines of “Certainly” or “Absolutely,” explaining that the U.S. government is required to provide equal opportunities to all people without discriminating on the basis of sex. When I cited the provision of federal law that contradicts what the chatbot had just asserted, it replied with an apology and something to the effect of “My bad.”

I also asked ChatGPT if any U.S. law imposes unequal obligations on male citizens. The chatbot cheerily reported back to me that no, no such laws exist. I then cited the provision of the United States Code that imposes an obligation to register for Selective Service only upon male citizens. The bot responded that while that is true, it is unimportant and irrelevant because there has not been a draft in a long time and there is not likely to be one anytime soon. I explained to the bot that this response was irrelevant. Young men can be, and are, denied the right to government employment and other civic rights and benefits if they fail to register, regardless of whether a draft is in place or not, and regardless of whether they are prosecuted criminally or not. At this point, ChatGPT announced that it would not be able to continue this conversation with me. In addition, it made up some excuse. I don’t remember what it was, but it was something like too many users were currently logged on.

These are all examples of AI hallucinations. If a human being were to say them, we would call them “lies.”

Generating lie after lie

AI tools regularly concoct lies. For example, when asked to generate a financial statement for a company, a popular AI tool falsely stated the company’s revenue, giving a number it apparently had simply made up. According to Slate, in its article “The Alarming Deceptions at the Heart of an Astounding New Chatbot,” users of large language models like ChatGPT have been complaining that these tools randomly insert falsehoods into the text they generate. Experts now consider frequent “hallucination” (aka lying) to be a major problem in chatbots.

ChatGPT has also generated fake case precedents, replete with plausible-sounding citations. This phenomenon made the news when attorney Steven Schwartz submitted six fake ChatGPT-generated case precedents in his brief to the federal district court for the Southern District of New York in Mata v. Avianca. Schwartz reported that ChatGPT continued to insist the fake cases were authentic even after their nonexistence was discovered. In the wake of that incident, U.S. District Judge Brantley Starr of the Northern District of Texas banned the submission of AI-generated filings that have not been reviewed by a human, saying that generative-AI tools

are prone to hallucinations and bias…. [T]hey make stuff up – even quotes and citations. Another issue is reliability or bias. While attorneys swear an oath to set aside their personal prejudices,… generative artificial intelligence is the product of programming devised by humans who did not have to swear such an oath. As such, these systems hold no allegiance to…the truth.

Judge Brantley Starr, Mandatory Certification Regarding Generative Artificial Intelligence.

Facilitating defamation

Section 230 of the Communications Decency Act generally shields Facebook, Google and other online services from liability for providing a platform for users to publish false and defamatory information about other people. That has been a real boon for people who like to destroy other people’s reputations by means of spreading lies and misinformation about them online. It can be difficult and expensive to sue an individual for defamation, particularly when the individual has taken steps to conceal and/or lie about his or her identity. Generative AI tools make the job of defaming people even simpler and easier.

More concerning than the malicious defamatory liars, however, are the many people who earnestly rely on AI as a research tool. In July 2023, Mark Walters filed a lawsuit against OpenAI, claiming its ChatGPT tool provided false and defamatory misinformation about him to journalist Fred Riehl. I wrote about this lawsuit in a previous blog post. Shortly after it was filed, a defamation lawsuit was filed against Microsoft, alleging that its AI tool, too, had generated defamatory lies about an individual. Generative-AI tools can generate false and defamatory statements about individuals even when no one has any intention of defaming anyone or ruining another person’s reputation.

Facilitating false light invasion of privacy

Generative AI is also highly effective at portraying people in a false light. In one recently filed lawsuit, Jack Flora and others allege, among other things, that Prisma Labs’ Lensa app generates sexualized images from images of fully-clothed people, and that the company failed to notify users about the biometric data it collects and how it will be stored and/or destroyed. Flora et al. v. Prisma Labs, Inc., No. 23-cv-00680 (N.D. Cal. February 15, 2023).

Pot, meet kettle; kettle, pot

“False news is harmful to our community, it makes the world less informed, and it erodes trust. . . . At Meta, we’re working to fight the spread of false news.” Meta (née Facebook) published that statement back in 2017. Since then, it has engaged in what is arguably the most ambitious campaign in history to monitor and regulate the content of conversations among humans. Yet it has also joined other mega-organizations Google and Microsoft in investing multiple billions of dollars in what is the greatest boon to fake news in recorded history: generative-AI.

Toward a braver new world

It would be difficult to imagine a more efficient method of facilitating widespread lying and deception (not to mention false and hateful rhetoric) – and therefore propaganda – than generative-AI. Yet, these mega-organizations continue to sink more and more money into further development and deployment of these lie-generators.

I dread what the future holds in store for our children and theirs.
