Voice Cloning

Painting of Nipper by Francis Barraud (1898-99); subsequently used as a trademark with “His Master’s Voice.”

Lehrman v. Lovo, Inc.

On July 10, 2025, the federal district court for the Southern District of New York issued an Order granting in part and denying in part a motion to dismiss a putative class action lawsuit that Paul Lehrman and Linnea Sage commenced against Lovo, Inc. The lawsuit, Lehrman v. Lovo, Inc., alleges that Lovo used artificial intelligence to make and sell unauthorized “clones” of their voices.

Specifically, the complaint alleges that the plaintiffs are voice-over actors. For a fee, they read and record scripts for their clients. Lovo allegedly sells a text-to-speech subscription service that allows clients to generate voice-over narrations. The service is described as one that uses “AI-driven software known as ‘Generator’ or ‘Genny,’” which was “created using ‘1000s of voices.’” Genny allegedly creates voice clones, i.e., copies of real people’s voices. Lovo allegedly granted its customers “commercial rights for all content generated,” including “any monetized, business-related uses such as videos, audio books, advertising promotion, web page vlogging, or product integration.” (Lovo terms of service.) The complaint alleges that Lovo hired the plaintiffs to provide voice recordings for “research purposes only,” but that Lovo proceeded to exploit them commercially by licensing their use to Lovo subscribers.

This lawsuit ensued.

The complaint sets out claims for:

  • Copyright infringement
  • Trademark infringement
  • Breach of contract
  • Fraud
  • Conversion
  • Unjust enrichment
  • Unfair competition
  • New York civil rights laws
  • New York consumer protection laws.

The defendant moved to dismiss the complaint for failure to state a claim.

The copyright claims

Sage alleged that Lovo infringed the copyright in one of her voice recordings by reproducing it in presentations and YouTube videos. The court allowed this claim to proceed.

Plaintiffs also claimed that Lovo’s unauthorized use of their voice recordings in training its generative-AI product infringed their copyrights in the sound recordings. The court ruled that the complaint did not contain enough factual detail about how the training process infringed one of the exclusive rights of copyright ownership. Therefore, it dismissed this claim with leave to amend.

The court dismissed the plaintiffs’ claims of output infringement, i.e., claims that the “cloned” voices the AI tool generated infringed copyrights in the original sound recordings.

Copyright protection in a sound recording extends only to the actual recording itself. Fixation of sounds that imitate or simulate the ones captured in the original recording does not infringe the copyright in the sound recording.

This issue often comes up in connection with copyrights in music recordings. If Chuck Berry writes a song called “Johnny B. Goode” and records himself performing it, he will own two copyrights – one in the musical composition and one in the sound recording. If a second person then records himself performing the same song, and he doesn’t have a license (compulsory or otherwise) to do so, that person would be infringing the copyright in the music but not the copyright in the sound recording. This is true even if he is very good at imitating Berry’s voice and guitar work. For a claim of sound recording infringement to succeed, it must be shown that the actual recording itself was copied.

Plaintiffs did not allege that Lovo used Genny to output AI-generated reproductions of their original recordings. Rather, they alleged that Genny is able to create new recordings that mimic attributes of their voices.

The court added that the sound of a voice is not copyrightable expression, and even if it were, the plaintiffs had registered claims of copyright in their recordings, not in their voices.

The trademark claims

In addition to infringement, the Lanham Act creates two other potential bases of trademark liability: (1) false association; and (2) false advertising. 15 U.S.C. sec. 1125(a)(1)(A) and (B). Plaintiffs asserted both kinds of claims. The judge dismissed these claims.

False association

The Second Circuit Court of Appeals recently held, in Electra v. 59 Murray Enter., Inc. and Souza v. Exotic Island Enters., Inc., that using a person’s likeness to create an endorsement without the person’s permission can constitute a “false association” violation. In other words, a federally-protected, trademark-like interest in one’s image, likeness, personality and identity exists. (See, e.g., Jackson v. Odenat.)

Although acknowledging that this right extends to one’s voice, the judge ruled that the voices in this case did not function as trademarks. They did not identify the source of a product or service. Rather, they were themselves the product or service. For this reason, the judge ruled that the plaintiffs had failed to show that their voices, as such, are protectable trademarks under Section 43(a)(1)(A) of the Lanham Act.

False Advertising

Section 43(a)(1)(B) of the Lanham Act (codified at 15 U.S.C. sec. 1125(a)(1)(B)) prohibits misrepresentations about “the nature, characteristics, qualities, or geographic origin of . . . goods, services, or commercial activities.” The plaintiffs claimed that Lovo marketed their voices under different names (“Kyle Snow” and “Sally Coleman”). The court determined that this was not fraudulent, however, because Lovo marketed them as what they were, namely, synthetic clones of the actors’ voices, not as their actual voices.

Plaintiffs also claimed that Lovo’s marketing materials falsely stated that the cloned voices “came with all commercial rights.” They asserted that they had not granted those rights to Lovo. The court ruled, however, that even if Lovo was guilty of misrepresentation, it was not the kind of misrepresentation that comes within Section 43(a)(1)(B), as it did not concern the nature, characteristics, qualities, or geographic origin of the voices.

State law claims

Although the court dismissed the copyright and trademark claims, it allowed some state law claims to proceed. Specifically, the court denied the motion to dismiss claims for breach of contract, violations of sections 50 and 51 of the New York Civil Rights Law, and violations of New York consumer protection law.

Both the common law and the New York Civil Rights Law prohibit the commercial use of a living person’s name, likeness or voice without consent. Known as “misappropriation of personality” or violation of publicity or privacy rights, this is emerging as one of the leading issues in AI law.

The court also allowed state law claims of false advertising and deceptive trade practices to proceed. The New York laws are not subject to the “nature, characteristics, qualities, or geographic origin” limitation set out in Section 43(a) of the Lanham Act.

Conclusion

I expect this case will come to be cited for the rule that copyright cannot be claimed in a voice. Copyright law protects only expression, not a person’s corporeal attributes. The lack of copyright protection for a person’s voice, however, does not mean that voice cloning is “legal.” Depending on the particular facts and circumstances, it may violate one or more other laws.

It also should be noted that after the Joe Biden voice-cloning incident of 2024, states have been enacting statutes regulating the creation and distribution of voice clones. Even where a specific statute is not applicable, though, a broader statute (such as the FTC Act or a similar state law) might cover the situation.

Images and references in this blog post are for illustrative purposes only. No endorsement, sponsorship or affiliation with any person, organization, company, brand, product or service is intended, implied, or exists.

Official portrait of Vice President Joe Biden in his West Wing Office at the White House, Jan. 10, 2013. (Official White House Photo by David Lienemann)

Court Rules AI Training is Fair Use

Just days after the first major fair use ruling in a generative-AI case, a second court has determined that using copyrighted works to train AI is fair use. Kadrey et al. v. Meta Platforms, No. 3:23-cv-03417-VC (N.D. Cal. June 25, 2025).

The Kadrey v. Meta Platforms Lawsuit

I previously wrote about this lawsuit here and here.

Meta Platforms owns and operates social media services including Facebook, Instagram, and WhatsApp. It is also the developer of a large language model (LLM) called “Llama.” One of its releases, Meta AI, is an AI chatbot that utilizes Llama.

To train its AI, Meta obtained data from a wide variety of sources. The company initially pursued licensing deals with book publishers. It turned out, though, that in many cases individual authors, not publishers, owned the copyrights. And unlike in the music industry, no organization handles collective licensing of rights in book content. Meta then downloaded shadow library databases, deciding to use the works they contained without securing licenses. To download them more quickly, Meta used the BitTorrent protocol.

Meta trained its AI models to prevent them from “memorizing” and outputting text from the training data, with the result that no more than 50 words and punctuation marks from any given work were reproduced in any given output.
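The opinion reports the effect of those guardrails (a ceiling of roughly 50 words of verbatim overlap) without describing how they were implemented. Purely as an illustration, and not as a representation of Meta's actual code, here is a minimal Python sketch of a post-hoc output filter built around that kind of limit; the function names, whitespace tokenization, and corpus-scanning approach are all hypothetical simplifications.

```python
# Illustrative sketch only. This is NOT Meta's code; the opinion reports the
# 50-word figure but not the mechanism. This naive filter flags any output
# that shares a verbatim run of more than `max_words` words with a training text.

def word_ngrams(text: str, n: int) -> set:
    """Return the set of all n-word sequences appearing in `text`."""
    words = text.split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def reproduces_training_text(output: str, training_texts: list, max_words: int = 50) -> bool:
    """True if `output` contains a verbatim run longer than `max_words` words
    that also appears in any of `training_texts`."""
    window = max_words + 1  # any shared run of this length exceeds the limit
    output_ngrams = word_ngrams(output, window)
    if not output_ngrams:   # output shorter than the window cannot exceed the limit
        return False
    return any(output_ngrams & word_ngrams(doc, window) for doc in training_texts)

# Toy demonstration:
corpus = ["call me ishmael some years ago never mind how long precisely " * 10]
print(reproduces_training_text(corpus[0], corpus))               # True: fully memorized
print(reproduces_training_text("a short new sentence", corpus))  # False
```

A production system would presumably operate on model tokens rather than whitespace-split words, and would index the training corpus (with suffix arrays or Bloom filters, for instance) rather than rescanning it, but the basic idea, flagging any sufficiently long verbatim run shared with the training data, would be the same.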

The plaintiffs named in the Complaint are thirteen book authors who have published novels, plays, short stories, memoirs, essays, and nonfiction books. Sarah Silverman, author of The Bedwetter; Junot Diaz, author of The Brief Wondrous Life of Oscar Wao; and Andrew Sean Greer, author of Less, are among the authors named as plaintiffs in the lawsuit. The complaint alleges that Meta downloaded 666 copies of their books without permission and states claims for direct copyright infringement, vicarious copyright infringement, removal of copyright management information in violation of the Digital Millennium Copyright Act (DMCA), and various state law claims. All claims except the ones for direct copyright infringement and violation of the DMCA were dismissed in prior proceedings.

Both sides moved for summary judgment on fair use with respect to the claim that Meta’s use of the copyrighted works to train its AI infringed copyrights. Meta moved for summary judgment on the DMCA claims. Neither side moved for summary judgment on a claim that Meta infringed copyrights by distributing their works (via leeching or seeding).

On June 25, 2025, Judge Chhabria granted Meta’s motion for summary judgment on fair use with respect to AI training; reserved the motion for summary judgment on the DMCA claims for decision in a separate order; and held that the claim of infringing distribution via leeching or seeding “will remain a live issue in the case.”

Judge Chhabria’s Fair Use Analysis

Judge Chhabria analyzed each of the four fair use factors. As is the custom, he treated the first (purpose and character of the use) and fourth (effect on the market for the work) factors as the most important of the four.

He disposed of the first factor fairly easily, as Judge Alsup did in Bartz v. Anthropic, finding that the use of copyrighted works to train AI is a transformative use. This finding weighs heavily in favor of fair use. The purpose of Meta’s AI tools is not to generate books for people to read. Indeed, in this case, Meta had installed guardrails to prevent the tools from generating duplicates or near-duplicates of the books on which the AI was trained. Moreover, even if a user could prompt the tool to create a book “in the style of” a specified author, there was no evidence that it could produce an identical work, or a work substantially similar to one on which it had been trained. And writing styles are not copyrightable.

Significantly, the judge held that the use of shadow libraries to obtain unauthorized copies of books does not necessarily destroy a fair use defense. When the ultimate use to be made of a work is transformative, the downloading of books to further that use is also transformative, the judge wrote. This ruling contrasts with the views of other judges, who have intimated that using pirated copies of works weighs against, or may even prevent, a finding of fair use.

Unlike some judges, who tend to consider the fair use analysis over and done if transformative use is found, Judge Chhabria recognized that even if the purpose of the use is transformative, its effect on the market for the infringed work still has to be considered.

3 Ways of Proving Adverse Market Effect

The Order lays out three potential kinds of arguments that may be advanced to establish the adverse effect of an infringing use on the market for the work:

  1. The infringing work creates a market substitute for the original work;
  2. The use of the work to train AI without permission deprives copyright owners of a market for licenses to use their works in AI training;
  3. The AI-generated works dilute the market in which the original work competes.

Market Substitution

In this case, direct market substitution could not be established because Meta had installed guardrails that prevented users from generating copies of works that had been used in the training. Its AI tools were incapable of generating copies of the work that could serve as substitutes for the authors’ works.

The Market for AI Licenses

The court refused to recognize the loss of potential profits from licensing the use of a work for AI training purposes as a cognizable harm.

Market Dilution

The argument here would be that the generation of many works that compete in the same market as the original work on which the AI was trained dilutes the market for the original work. Judge Chhabria described this as indirect market substitution.

The copyright owners in this case, however, focused on the first two arguments. They did not present evidence that Meta’s AI tools were capable of generating books; that they do, in fact, generate books; or that the books they generate or are capable of generating compete with books these authors wrote. There was no evidence of diminished sales of their books.

Market harm cannot be assumed when the generated outputs cannot serve as substitutes for the specific books claimed to have been infringed. When the output is transformative, as it was in this case, market substitution is not self-evident.

Judge Chhabria chided the plaintiffs for making only a “half-hearted argument” of a significant threat of market harm. He wrote that they presented “no meaningful evidence on market dilution at all.”

Consequently, he ruled that the fourth fair use factor favored Meta.

Conclusion

The decision in this case is as significant for what the court didn’t do as it is for what it did. It handed a fair use victory to Meta. At the same time, though, it left open the possibility that a copyright owner might prevail, in a different case, on a claim that training AI on copyrighted works is not fair use. And it pointed the way, albeit in dictum: make a strong showing of market dilution.

That claim is not far-fetched. See https://www.wired.com/story/scammy-ai-generated-books-flooding-amazon/.

Photographers’ Rights

The Second Circuit Court of Appeals reversed a trial judge’s dismissal of a photographer’s copyright infringement complaint, holding that because “fair use” was not clearly established on the face of the complaint, the district court should not have dismissed the complaint sua sponte. Romanova v. Amilus, Inc.

Romanova v. Amilus, Inc., No. 23-828 (2nd Cir., May 23, 2025)

Photographer Jana Romanova created a photograph of a woman with a snake wrapped around her left hand and another snake crawling up her torso. She licensed it to National Geographic Magazine for a single use. According to the complaint, Amilus, Inc. allegedly made a copy of the photograph and published it to its website. Romanova allegedly sent notifications demanding the removal of the photograph from the website. The defendant allegedly did not respond. This lawsuit followed.

The defendant did not appear or respond to the complaint, so Romanova moved for the entry of a default judgment. Rather than grant a default judgment, however, the district court judge sua sponte ordered Romanova to show cause why the court should not dismiss the case on the ground that the defendant’s use of the photograph was fair use. Although fair use is an affirmative defense, which defendants have the burden of asserting and proving, the judge opined that it did not need to be pleaded here because, in the judge’s view, it was “clearly established on the face of the complaint.”

Romanova appealed. The Second Circuit Court of Appeals reversed, effectively allowing the infringement claim to go forward.

Fair Use

In its decision, the Second Circuit Court of Appeals clarified how courts are to interpret and apply the four-factor “fair use” test outlined in the Copyright Act, 17 U.S.C. § 107 (purpose and character of the use; nature of the work; amount and substantiality of the portion copied; and the effect on the market for the work.)

The district court concluded that the defendant’s publication of the photograph communicated a different message than what the photographer intended. According to the district court, the purpose of the publication in National Geographic was “to showcase persons in [her] home country of Russia that kept snakes as pets, specifically to capture pet snakes in common environments that are more associated with mainstream domesticated animals.” The district court found that the purpose of the defendant’s publication was to communicate a message about “the ever-increasing amount of pet photography circulating online.”

Apparently the district court was under the impression that the use of a copyright-protected work for any different purpose, or to communicate any different message, is “transformative” and therefore “fair use.” The Court of Appeals clarified that this is not the case. In addition to alleging and proving the use was for a different purpose or conveyed a different meaning, a defendant seeking to establish a fair use defense must also allege and prove a justification for the copying.

Examples of purposes that may justify copying a work include commentary or criticism of the copied work, or providing information to the public about the copied work, in circumstances where the copy does not become a substitute for the work. (See, e.g., Authors Guild v. Google, Inc., 804 F.3d 202, 212 (2d Cir. 2015).) Copying for evidentiary purposes (such as to support a claim that the creator of the work published a defamatory statement) can also be a valid justification to support a fair use defense. Creating small, low-resolution copies of images (“thumbnails”) may be justified when the purpose is to facilitate Internet searching. (Perfect 10 v. Amazon.com, 508 F.3d 1146, 1165 (9th Cir. 2007).) Facilitating blind people’s access to a work may provide a justification for converting it into a format that blind people can read. (Authors Guild v. HathiTrust, 755 F.3d 87, 97 (2d Cir. 2014).)

The Court cited other examples of potential justifications for copying. The Court admonished, however, that the question whether justification exists is a fact-specific determination that must be made on a case-by-case basis.

[J]ustification is often found when the copying serves to critique, or otherwise comment on, the original, or its author, but can also be found in other circumstances, such as when the copying provides useful information about the original, or on other subjects, usually in circumstances where the copying does not make the expressive content of the original available to the public.

Romanova, supra.

The only “justification” the district court cited for the copying was that it believed the defendant merely wanted to illustrate its perception of a growing trend to publish photographs of people with pets. “Little could remain of an author’s copyright protection if others could secure the right to copy and distribute a work simply by asserting some fact about the copied work,” the Court observed. The defendant’s publication of the copy did not communicate criticism or commentary on the original photograph or its author, or any other subject, the Court held.

The Court held that the remaining three fair use factors also militated against a finding of fair use.

Sua Sponte Dismissal for “Fair Use”

Judge Sullivan filed a concurring opinion. He would have reversed on procedural grounds without reaching the substantive issue. Specifically, Judge Sullivan objected to the trial judge’s raising of the fair use defense sua sponte on behalf of a non-appearing defendant. Normally, if a complaint establishes a prima facie case for relief, the court does not consider affirmative defenses (such as fair use) unless the defendant asserts them. That is to say, fair use is an affirmative defense; the defendant, not the plaintiff, bears the burden of proof.

Conclusion

Appeals courts continue to rein in overly expansive applications of “transformative” fair use by the lower courts. Here, the Court of Appeals soundly reasoned that merely being able to articulate an additional purpose served by publishing an author’s entire work, unchanged, will not, by itself, suffice to establish either transformative use or fair use.

Joint Custody and Equal Shared Parenting Laws

Yes, this is off-topic. It is, however, the reason I haven’t been posting to this blog lately. In addition to finishing out some cases, I have been working on developing this 90-minute program for the past few months.

In what seems like a lifetime ago, I practiced family law. During that time, I witnessed first-hand the havoc the sole-custody regime wreaked on families, both parents and children. I’ve always believed there had to be a better way.

In this webinar, I will be presenting a brief overview of the joint custody and equal shared parenting laws of the fifty U.S. states. Professor Daniel Fernandez-Kranz will join me to talk about how equal shared parenting has been working in Spain. Kentucky family law attorney Carl Knochelmann, Jr. will talk about the impact Kentucky’s statute, the first-ever presumptive equal shared parenting time law, has been having. Professor Donald Hubin will round things out with a look at what can be learned from Ohio’s experiences with both equal shared parenting and the traditional sole custody model. He will also present findings about the interplay of equal shared parenting laws and domestic violence, based on data gathered from Kentucky and Ohio.

California has approved the webinar for 90 minutes of MCLE and LSCLE (family law specialist) continuing legal education credits. Continuing legal and mediator education credits are available in many other states as well.

The live webinar is on October 24, 2024. There will be a video replay on November 8, 2024.

If you have an interest, you can find more information, and registration links, at EchionCLE.com.

I promise I will get back to copyright and trademark issues soon.

Suggestive Trademarks

Is that a source identifier in your pocket or are you just being descriptive?

A trademark gives its owner an exclusive right to use it in connection with a particular kind or category of products or services. At the same time, though, trademark law seeks to promote competition. To that end, it generally does not allow people or companies to claim the exclusive right to use generic words or words that are used to describe a feature or quality of a product. If someone could claim an exclusive right to use gasoline or unleaded as a trademark for petroleum products, then nobody else could enter the market.

To qualify for trademark protection, a mark cannot be merely descriptive of the product or service. It must be distinctive, in the sense that consumers see it not merely as a description of the product or service but as an identifier of the source of the product or service, that is to say, an identifier of the producer or sponsor of the product or service.

Sometimes a mark that is merely descriptive at first acquires distinctiveness over time. Through advertising and long usage, consumers come to see it as a source-identifier. International Business Machines (IBM) is an example. It is descriptive of the product or service, but over time consumers have come to associate it with a particular source of business machines. In general, though, words and phrases that are merely descriptive do not qualify for trademark protection.

Inherently Distinctive Trademarks

Some kinds of marks are regarded as inherently distinctive, meaning there is no need to make a showing of acquired distinctiveness before trademark rights may be claimed in them. A mark is inherently distinctive if it is fanciful, arbitrary, or suggestive.

A fanciful mark is something that is completely made up. Xerox is an example.

An arbitrary mark is one which, although a real word, bears no logical relationship to the product or service. Apple, as a trademark for computers, is an example.

A suggestive mark is one that hints at but does not directly describe a quality or feature of a product or service.

Why It Matters

A lot rides on whether a mark is suggestive or merely descriptive. If it is merely descriptive, then trademark rights will not attach upon its first use in commerce. Competitors may freely use the same term until it acquires distinctiveness. If you try to register it with the USPTO, you might get it on the Supplemental Register, but it won’t make it onto the Principal Register. It will not enjoy the presumptions of validity and ownership that trademarks registered on the Principal Register do. Maybe you will be able to get it there someday, but only if you present persuasive evidence of acquired distinctiveness.

Suggestive trademarks, on the other hand, may qualify for trademark protection upon their first use in commerce. And if registered, they get all the advantages of being included in the Principal Register.

Where Is the Line?

The difference between descriptiveness and suggestiveness is not very intuitive. The classic formulation is that a descriptive term directly describes something, while a suggestive term merely hints at it. If a term directly states what a product or service is or does, or literally describes a feature or quality, then it might be generic or descriptive, but it is not suggestive. If a term does not directly describe the product or service, or a quality or feature of it, that is to say, if some additional thought process is needed to get to the intended description, then it is suggestive.

Jaguar is an example of a suggestive trademark. The product does not literally feature or contain a wild animal. An additional thought process is needed to get to the intended descriptor, which in this case would be fast. Fast would be a merely descriptive mark for a car. Jaguar requires an additional thought process, namely, the idea that jaguars are fast, to reach the intended meaning.

Coppertone is another example. Suntan oil would be a descriptive mark. Coppertone, on the other hand, does not directly describe the product. Instead, it suggests what might happen if someone uses it, i.e., the person may acquire something akin to a copper skin color.

Incidentally, changing the spelling of a descriptive term generally will not bring it into the realm of inherent distinctiveness even if the word, as so spelled, technically is an invented one. Fastt Car, for instance, would almost certainly be treated by the USPTO or a court as a merely descriptive trademark. The relevant question is whether consumers would read or hear it as an adjective, not whether a member of the Spelling and Grammar Police Squad would.

Criticism

Courts and legal scholars have noted that the line between descriptiveness and suggestiveness is not always clear. Professor Jake Linford, for example, has argued that the distinction is “illusory at best,” and that a suggestive mark is “more like a descriptive mark than the law currently recognizes.” (Jake Linford, The False Dichotomy Between Suggestive and Descriptive Trademarks, 76 OHIO ST. L.J. 1367 (2015).)

Nevertheless, the distinction is currently recognized in the law, at least in the United States.

European Union

The European Union appears to take a narrower view of suggestiveness. For example, it denied protection for “How Can I Make You Smile Today?” as a trademark for orthodontic and dental supplies. As certain fast food chains and performing artists can attest, slogans and phrases can easily be claimed as trademarks in the United States. This slogan would clearly fall on the “suggestive” side of the suggestive/descriptive line under the classic formulations of the distinction in the United States.



AI Lawsuits Roundup

A status update on 24 pending lawsuits against AI companies – what they’re about and what is happening in court – prepared by Minnesota copyright attorney Thomas James.

A very brief summary of where pending AI lawsuits stand as of February 28, 2024. Compiled by Minnesota attorney Thomas James.

Thomson Reuters v. Ross (D. Del. 2020)

Filed May 6, 2020. Thomson Reuters, owner of Westlaw, claims that Ross Intelligence infringed copyrights in Westlaw headnotes by training AI on copies of them. The judge has granted in part and denied in part motions for summary judgment. The questions of fair use and whether the headnotes are sufficiently original to merit copyright protection will go to a jury to decide.

Thaler v. Perlmutter (D.D.C. 2022)

Complaint filed June 2, 2022. Thaler created an AI system called the Creativity Machine. He applied to register copyrights in the output he generated with it. The Copyright Office refused registration on the ground that AI output does not meet the “human authorship” requirement. He then sought judicial review. The district court granted summary judgment for the Copyright Office. In October, 2023, he filed an appeal to the District of Columbia Circuit Court of Appeals (Case no. 23-5233).

Doe v. GitHub, Microsoft, and OpenAI (N.D. Cal. 2022)

Complaint filed November 3, 2022. Software developers claim the defendants trained Codex and Copilot on code derived from theirs, which they published on GitHub. Some claims have been dismissed, but claims that GitHub and OpenAI violated the DMCA and breached open source licenses remain. Discovery is ongoing.

Andersen v. Stability AI (N.D. Cal. 2023)

Complaint filed January 13, 2023. Visual artists sued Midjourney, Stability AI and DeviantArt for copyright infringement for allegedly training their generative-AI models on images scraped from the Internet without copyright holders’ permission. Other claims included DMCA violations, publicity rights violations, unfair competition, breach of contract, and a claim that output images are infringing derivative works. On October 30, 2023, the court largely granted motions to dismiss, but granted leave to amend the complaint. Plaintiffs filed an amended complaint on November 29, 2023. Defendants have filed motions to dismiss the amended complaint. A hearing on the motions is set for May 8, 2024.

Getty Images v. Stability AI (U.K. 2023)

Complaint filed January, 2023. Getty Images claims StabilityAI scraped images without its consent. Getty’s complaint has survived a motion to dismiss and the case appears to be heading to trial.

Getty Images v. Stability AI (D. Del.)

Complaint filed February 3, 2023. Getty Images alleges claims of copyright infringement, DMCA violation and trademark violations against Stability AI. The judge has dismissed without prejudice a motion to dismiss or transfer on jurisdictional grounds. The motion may be re-filed after the conclusion of jurisdictional discovery, which is ongoing.

Flora v. Prisma Labs (N.D. Cal.)

Complaint filed February 15, 2023. Plaintiffs allege violations of the Illinois Biometric Privacy Act in connection with Prisma Labs’ collection and retention of users’ selfies in AI training. The court has granted Prisma’s motion to compel arbitration.

Kyland Young v. NeoCortext (C.D. Cal. 2023)

Complaint filed April 3, 2023. This complaint alleges that AI tool Reface used a person’s image without consent, in violation of the person’s publicity rights under California law. The court has denied a motion to dismiss, ruling that publicity rights claims are not preempted by federal copyright law. The case has been stayed pending appeal.

Walters v. OpenAI (Gwinnett County Super. Ct. 2023) and Walters v. OpenAI (N.D. Ga. 2023)

Gwinnett County complaint filed June 5, 2023.

Federal district court complaint filed July 14, 2023.

Radio talk show host sued OpenAI for defamation. A reporter had used ChatGPT to get information about him. ChatGPT wrongly described him as a person who had been accused of fraud. In October, 2023, the federal court remanded the case to the Superior Court of Gwinnett County, Georgia.  On January 11, 2024, the Gwinnett County Superior Court denied OpenAI’s motion to dismiss.

P.M. v. OpenAI (N.D. Cal. 2023)

Complaint filed June 28, 2023. Users claim OpenAI violated the federal Electronic Communications Privacy Act and California wiretapping laws by collecting their data when they input content into ChatGPT. They also claim violations of the Computer Fraud and Abuse Act. Plaintiffs voluntarily dismissed the case on September 15, 2023. See now A.T. v. OpenAI (N.D. Cal. 2023) (below).

In re OpenAI ChatGPT Litigation (N.D. Cal. 2023)

Complaint filed June 28, 2023. Originally captioned Tremblay v. OpenAI. Book authors sued OpenAI for direct and vicarious copyright infringement, DMCA violations, unfair competition, and negligence. Both input (training) and output (derivative works) claims are alleged. Most state law and DMCA claims have been dismissed, but claims based on unauthorized copying during the AI training process remain. An amended complaint is likely to come in March. The court has directed that the amended complaint consolidate Tremblay v. OpenAI, Chabon v. OpenAI, and Silverman v. OpenAI.

Battle v. Microsoft (D. Md. 2023)

Complaint filed July 7, 2023. Pro se defamation complaint against Microsoft alleging that Bing falsely described him as a member of the “Portland Seven,” a group of Americans who tried to join the Taliban after 9/11.

Kadrey v. Meta (N.D. Cal. 2023)

Complaint filed July 7, 2023. Sarah Silverman and other authors allege Meta infringed copyrights in their works by making copies of them while training Meta’s AI model; that the AI model is itself an infringing derivative work; and that outputs are infringing copies of their works. Plaintiffs also allege DMCA violations, unfair competition, unjust enrichment, and negligence. The court granted Meta’s motion to dismiss all claims except the claim that unauthorized copies were made during the AI training process. An amended complaint and answer have been filed.

J.L. v. Google (N.D. Cal. 2023)

Complaint filed July 11, 2023. An author filed a complaint against Google alleging misuse of content posted on social media and Google platforms to train Google’s AI Bard. (Gemini is the successor to Google’s Bard.) Claims include copyright infringement, DMCA violations, and others. J.L. filed an amended complaint and Google has filed a motion to dismiss it. A hearing is scheduled for May 16, 2024.

A.T. v. OpenAI (N.D. Cal. 2023)

Complaint filed September 5, 2023. ChatGPT users claim the company violated the federal Electronic Communications Privacy Act, the Computer Fraud and Abuse Act, and California Penal Code section 631 (wiretapping). The gravamen of the complaint is that ChatGPT allegedly intercepted users’ private information, without their knowledge or consent, while they accessed the platform. Motions to dismiss and to compel arbitration are pending.

Chabon v. OpenAI (N.D. Cal. 2023)

Complaint filed September 9, 2023. Authors allege that OpenAI infringed copyrights while training ChatGPT, and that ChatGPT is itself an unauthorized derivative work. They also assert claims of DMCA violations, unfair competition, negligence and unjust enrichment. The case has been consolidated with Tremblay v. OpenAI, and the cases are now captioned In re OpenAI ChatGPT Litigation.

Chabon v. Meta Platforms (N.D. Cal. 2023)

Complaint filed September 12, 2023. Authors assert copyright infringement claims against Meta, alleging that Meta trained its AI using their works and that the AI model itself is an unauthorized derivative work. The authors also assert claims for DMCA violations, unfair competition, negligence, and unjust enrichment. In November, 2023, the court issued an Order dismissing all claims except the claim of unauthorized copying in the course of training the AI. The court described the claim that an AI model trained on a work is a derivative of that work as “nonsensical.”

Authors Guild v. OpenAI, Microsoft, et al. (S.D.N.Y. 2023)

Complaint filed September 19, 2023. Book and fiction writers filed a complaint for copyright infringement in connection with defendants’ training AI on copies of their works without permission. A motion to dismiss has been filed.

Huckabee v. Bloomberg, Meta Platforms, Microsoft, and EleutherAI Institute (S.D.N.Y. 2023)

Complaint filed October 17, 2023. Political figure Mike Huckabee and others allege that the defendants trained AI tools on their works without permission when they used Books3, a text dataset compiled by developers; that their tools are themselves unauthorized derivative works; and that every output of their tools is an infringing derivative work.  Claims against EleutherAI have been voluntarily dismissed. Claims against Meta and Microsoft have been transferred to the Northern District of California. Bloomberg is expected to file a motion to dismiss soon.

Huckabee v. Meta Platforms and Microsoft (N.D. Cal. 2023)

Complaint filed October 17, 2023. Political figure Mike Huckabee and others allege that the defendants trained AI tools on their works without permission when they used Books3, a text dataset compiled by developers; that their tools are themselves unauthorized derivative works; and that every output of their tools is an infringing derivative work. Plaintiffs have filed an amended complaint. Plaintiffs have stipulated to dismissal of claims against Microsoft without prejudice.

Concord Music Group v. Anthropic (M.D. Tenn. 2023)

Complaint filed October 18, 2023. Music publishers claim that Anthropic infringed publisher-owned copyrights in song lyrics when the lyrics allegedly were copied in the course of training Anthropic’s AI model, Claude, and when lyrics were reproduced and distributed in response to prompts. They have also made claims of contributory and vicarious infringement. Motions to dismiss and for a preliminary injunction are pending.

Alter v. OpenAI and Microsoft (S.D.N.Y. 2023)

Complaint filed November 21, 2023. Nonfiction authors allege claims of copyright infringement and contributory copyright infringement against OpenAI and Microsoft, alleging that reproducing copies of their works in datasets used to train AI infringed their copyrights. The court has ordered consolidation of Authors Guild (23-cv-8292) and Alter (23-cv-10211). On February 12, 2024, plaintiffs in other cases filed a motion to intervene and dismiss.

New York Times v. Microsoft and OpenAI (S.D.N.Y. 2023)

Complaint filed December 27, 2023. The New York Times alleges that its news stories were used to train AI without a license or permission, in violation of its exclusive rights of reproduction and public display as copyright owner. The complaint also alleges vicarious and contributory copyright infringement, DMCA violations, unfair competition, and trademark dilution. The Times seeks damages, an injunction against further infringing conduct, and a Section 503(b) order for the destruction of “all GPT or other LLM models and training sets that incorporate Times Works.” On February 23, 2024, plaintiffs in other cases filed a motion to intervene and dismiss this case.

Basbanes and Ngagoyeanes v. Microsoft and OpenAI (S.D.N.Y. 2024)

Complaint filed January 5, 2024. Nonfiction authors assert copyright claims against Microsoft and OpenAI. On February 6, 2024, the court consolidated this case with Authors Guild (23-cv-08292) and Alter v. OpenAI (23-cv-10211) for pretrial purposes.

Caveat

This list is not exhaustive; there may be other cases involving AI that are not included here. For a discussion of bias issues in Google’s Gemini, have a look at Scraping Bias on Medium.com.

Nontransformative Nuge

A reversal in the Fourth Circuit Court of Appeals demonstrates the impact the Supreme Court’s decision in Andy Warhol Foundation for the Visual Arts v. Goldsmith is already having on the application of copyright fair use doctrine in federal courts.

Philpot v. Independent Journal Review, No. 21-2021 (4th Cir. Feb. 6, 2024)

Philpot, a concert photographer, registered his photograph of Ted Nugent as part of a group of unpublished works. Prior to registration, he entered into a license agreement giving AXS TV the right to inspect his photographs for the purpose of selecting ones to curate. The agreement provided that the license would become effective upon delivery of photographs for inspection. After registration, Philpot delivered a set of photographs, including the Nugent photograph, to AXS TV. He also published the Nugent photograph to Wikimedia Commons under a Creative Commons (“CC”) license. The CC license allows free use on the condition that attribution is given. IJR published an article called “15 Signs Your Daddy Was a Conservative.” Sign #5 was “He hearts the Nuge.” IJR used Philpot’s photograph of Ted Nugent as an illustration for the article, without providing attribution to Philpot.

Philpot sued IJR for copyright infringement.  IJR asserted two defenses: (1) invalid copyright registration; and (2) fair use. The trial court did not decide whether the registration was valid or not, but it granted summary judgment for IJR based on its opinion that the news service’s publication of the photograph was fair use. The Fourth Circuit Court of Appeals reversed, ruling in Philpot’s favor on both issues. The Court held that the copyright registration was valid and that publication of the photograph without permission was not fair use.

The copyright registration

Published and unpublished works cannot be registered together. Including a published work in an application for registration of a group of unpublished works is an inaccuracy that might invalidate the registration, if the applicant was aware of the inaccuracy at the time of applying. Cf. Unicolors v. H&M Hennes & Mauritz, 595 U.S. 178 (2022). IJR argued that Philpot’s pre-registration agreement to send photographs to AXS TV for inspection and possible curation constituted “publication” of them, so that characterizing them as “unpublished” in the registration application was an inaccuracy known to Philpot.

17 U.S.C. § 101 defines publication as “the distribution of copies . . . to the public” or “offering to distribute copies . . . to a group of persons for purposes of further distribution . . . or public display.” The Court of Appeals held that merely entering into an agreement to furnish copies to a distributor for possible curation does not come within that definition. Sending copies to a limited class of people without concomitantly granting an unrestricted right to further distribute them to the public does not amount to “publication.”

Philpot’s arrangement with AXS TV is analogous to an author submitting a manuscript to a publisher for review for possible future distribution to the public. The U.S. Copyright Office has addressed this. “Sending copies of a manuscript to prospective publishers in an effort to secure a book contract does not [constitute publication].” U.S. Copyright Office, Compendium of U.S. Copyright Office Practices § 1905.1 (3d ed. 2021). Philpot had provided copies of his work for the limited purpose of examination, without a present grant of a right of further distribution. Therefore, the photographs were, in fact, unpublished at the time of the application for registration. Since no inaccuracy existed, the registration was valid.

Fair use

The Court applied the four-factor test for fair use set out in 17 U.S.C. § 107.

(1) Purpose and character of the use. Citing Andy Warhol Found. for the Visual Arts v. Goldsmith, 598 U.S. 508, 527–33 (2023), the Court held that when, as here, a use is neither transformative nor noncommercial, this factor weighs against a fair use determination. IJR used the photograph for the same purpose as Philpot intended (as a depiction of Mr. Nugent), and it was a commercial purpose.

(2) Nature of the work. Photographs taken by humans are acts of creative expression that receive what courts have described as “thick” copyright protection. Therefore, this factor weighed against a fair use determination.

(3) Amount and substantiality of the portion used. Since all of the expressive features of the work were used, this factor also weighed against a fair use determination.

(4) Effect on the market for the work. Finally, the Court determined that allowing free use of a copyrighted work for commercial purposes without the copyright owner’s permission could potentially have a negative impact on the author’s market for the work. Therefore, this factor, too, weighed against a fair use determination.

Since all four factors weighed against a fair use determination, the Court reversed the trial court’s grant of summary judgment to IJR and remanded the case for further proceedings.

Conclusion

This decision demonstrates the impact the Warhol decision is having on copyright fair use analysis in the courts. Previously, courts had been interpreting transformativeness very broadly. In many cases, they were ending fair use inquiry as soon as some sort of transformative use could be articulated. As the Court of Appeals decision in this case illustrates, trial courts now need to alter their approach in two ways: (1) They need to return to considering all four fair use factors rather than ending the inquiry upon a defendant’s articulation of some “transformative use;” and (2) They need to apply a much narrower definition of transformativeness than they have been. If both the original work and an unauthorized reproduction of it are used for the purpose of depicting a particular person or scene (as distinguished from parodying or commenting on a work, for example), for commercial gain, then it would no longer appear to be prudent to count on the first of the four fair use factors supporting a fair use determination.


Photo: Photograph published in a July, 1848 edition of L’Illustration. Believed to be the first instance of photojournalism, it is now in the public domain.

Generative-AI as Unfair Trade Practice

While Congress and the courts grapple with generative-AI copyright issues, the FTC weighs in on the risks of unfair competition, monopolization, and consumer deception.

FTC Press Release excerpt

While Congress and the courts are grappling with the copyright issues that AI has generated, the federal government’s primary consumer watchdog has made a rare entry into the realm of copyright law. The Federal Trade Commission (FTC) has filed a Comment with the U.S. Copyright Office suggesting that generative-AI could be (or be used as) an unfair or deceptive trade practice. The Comment was filed in response to the Copyright Office’s request for comments as it prepares to begin rule-making on the subject of artificial intelligence (AI), particularly generative-AI.

Monopolization

The FTC is responsible for enforcing the FTC Act, which broadly prohibits “unfair or deceptive” practices. The Act protects consumers from deceptive and unscrupulous business practices. It is also intended to promote fair and healthy competition in U.S. markets. The Supreme Court has held that all violations of the Sherman Act also violate the FTC Act.

So how does generative-AI raise monopolization concerns? The Comment suggests that incumbents in the generative-AI industry could engage in anti-competitive behavior to ensure continuing and exclusive control over the use of the technology. (More on that here.)

The agency cited the usual suspects: bundling, tying, exclusive or discriminatory dealing, mergers, and acquisitions. Those kinds of concerns, of course, are common in any business sector; they are not unique to generative-AI. The FTC also described some matters of special concern in the AI space, though.

Network effects

Because positive feedback loops improve the performance of generative-AI, it gets better as more people use it. This results in concentrated market power in incumbent generative-AI companies with diminishing possibilities for new entrants to the market. According to the FTC, “network effects can supercharge a company’s ability and incentive to engage in unfair methods of competition.”

Platform effects

As AI users come to be dependent on a particular incumbent generative-AI platform, the company that owns the platform could take steps to lock their customers into using their platform exclusively.

Copyrights and AI competition

The FTC Comment indicates that the agency is not only weighing the possibility that AI unfairly harms creators’ ability to compete. (The use of pirated or the misuse of copyrighted materials can be an unfair method of competition under Section 5 of the FTC Act.) It is also considering that generative-AI may deceive, or be used to deceive, consumers. Specifically, the FTC expressed a concern that “consumers may be deceived when authorship does not align with consumer expectations, such as when a consumer thinks a work has been created by a particular musician or other artist, but it has been generated by someone else using an AI tool.” (Comment, page 5.)

In one of my favorite passages in the Comment, the FTC suggests that training AI on protected expression without consent, or selling output generated “in the style of” a particular writer or artist, may be an unfair method of competition, “especially when the copyright violation deceives consumers, exploits a creator’s reputation or diminishes the value of her existing or future works….” (Comment, pages 5 – 6).

Fair Use

The significance of the FTC’s injection of itself into the generative-AI copyright fray cannot be overstated. It is extremely likely that during their legislative and rule-making deliberations, both Congress and the Copyright Office are going to focus the lion’s share of their attention on the fair use doctrine. They are most likely going to try to allow generative-AI outfits to continue to infringe copyrights (it is already a multi-billion-dollar industry, after all, with obvious potential political value), while at the same time imposing at least some kinds of limitations to preserve a few shards of the copyright system. Maybe they will devise a system of statutory licensing like they did when online streaming, and the widespread copyright infringement it facilitated, became a thing.

Whatever happens, the overarching question for Congress is going to be, “What kinds of copyright infringement should be considered ‘fair’ use?”

Copyright fair use normally is assessed using a four-factor test set out in the Copyright Act. Considerations about unfair competition arguably are subsumed within the fourth factor in that analysis – the effect the infringing use has on the market for the original work.

The other objective of the FTC Act – protecting consumers from deception – does not neatly fit into one of the four statutory factors for copyright fair use. I believe a good argument can be made that it should come within the coverage of the first factor: the purpose and character of the use. The task for Congress and the Copyright Office, then, should be to determine which particular purposes and kinds of uses of generative-AI should be thought of as fair. There is no reason the Copyright Office should avoid considering Congress’s objectives, expressed in the FTC Act and other laws, when making that determination.

AI Legislative Update

Congressional legislation to regulate artificial intelligence (“AI”) and AI companies is in the early formative stages. Just about the only thing that is certain at this point is that federal regulation in the United States is coming.

In August, 2023, Senators Richard Blumenthal (D-CT) and Josh Hawley (R-MO) introduced a Bipartisan Framework for U.S. AI Act. The Framework sets out five bullet points identifying Congressional legislative objectives:

  • Establish a federal regulatory regime implemented through licensing AI companies, to include requirements that AI companies provide information about their AI models and maintain “risk management, pre-deployment testing, data governance, and adverse incident reporting programs.”
  • Ensure accountability for harms through both administrative enforcement and private rights of action, where “harms” include private or civil right violations. The Framework proposes making Section 230 of the Communications Decency Act inapplicable to these kinds of actions. (Section 230 is the provision that generally grants immunity to Facebook, Google and other online service providers for user-provided content.) The Framework identifies the harms about which it is most concerned as “explicit deepfake imagery of real people, production of child sexual abuse material from generative A.I. and election interference.” Noticeably absent is any mention of harms caused by copyright infringement.
  • Restrict the sharing of AI technology with Russia, China or other “adversary nations.”
  • Promote transparency: The Framework would require AI companies to disclose information about the limitations, accuracy and safety of their AI models to users; would give consumers a right to notice when they are interacting with an AI system; would require providers to watermark or otherwise disclose AI-generated deepfakes; and would establish a public database of AI-related “adverse incidents” and harm-causing failures.
  • Protect consumers and kids. “Consumers should have control over how their personal data is used in A.I. systems and strict limits should be imposed on generative A.I. involving kids.”

The Framework does not address copyright infringement, whether of the input or the output variety.

The Senate Judiciary Committee Subcommittee on Privacy, Technology, and the Law held a hearing on September 12, 2023. Witnesses called to testify generally approved of the Framework as a starting point.

The Senate Commerce, Science, and Transportation Subcommittee on Consumer Protection, Product Safety and Data Security also held a hearing on September 12, called “The Need for Transparency in Artificial Intelligence.” One of the witnesses, Dr. Ramayya Krishnan of Carnegie Mellon University, did raise a concern about the use of copyrighted material by AI systems and the economic harm it causes for creators.

On September 13, 2023, Sen. Chuck Schumer (D-NY) held an “AI Roundtable.” Invited attendees present at the closed-door session included Bill Gates (Microsoft), Elon Musk (xAI, Neuralink, etc.), Sundar Pichai (Google), Charlie Rivkin (MPA), and Mark Zuckerberg (Meta). Gates, whose Microsoft company, like those headed by some of the other invitees, has been investing heavily in generative-AI development, touted the claim that AI could target world hunger.

Meanwhile, Dana Rao, Adobe’s Chief Trust Officer, penned a proposal that Congress establish a federal anti-impersonation right to address the economic harms generative-AI causes when it impersonates the style or likeness of an author or artist. The proposed law would be called the Federal Anti-Impersonation Right Act, or “FAIR Act,” for short. The proposal would provide for the recovery of statutory damages by artists who are unable to prove actual economic damages.

Generative AI: Perfect Tool for the Age of Deception

For many reasons, the new millennium might well be described as the Age of Deception. Cokato Copyright Attorney Tom James explains why generative-AI is a perfect fit for it.

Image by Gerd Altmann on Pixabay.

What is generative AI?

“AI,” of course, stands for artificial intelligence. Generative AI is a variety of it that can produce content such as text and images, seemingly of its own creation. I say “seemingly” because in reality these kinds of AI tools are not independently creating anything. Rather, they are “trained” to emulate existing works created by humans. Essentially, they are derivative-work generation machines: their output is derived from the potentially millions of human-created works on which they were trained.
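To make the point concrete, here is a deliberately tiny, hypothetical sketch of the underlying idea (not any vendor’s actual system): it “trains” on a sample of human-written text by recording which word follows which, then generates new text by replaying those recorded patterns. Real generative-AI models use neural networks at vastly greater scale, but the dependence on human-created training material is the same.

```python
import random
from collections import defaultdict

# Hypothetical stand-in for a training corpus of human-created works.
corpus = (
    "the quick brown fox jumps over the lazy dog "
    "the quick red fox runs past the sleeping dog"
)

def train(words):
    """Record, for each word, every word observed to follow it."""
    model = defaultdict(list)
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)
    return model

def generate(model, seed, length=12):
    """Emit text by repeatedly sampling a learned next word."""
    out = [seed]
    for _ in range(length):
        choices = model.get(out[-1])
        if not choices:
            break
        out.append(random.choice(choices))
    return " ".join(out)

model = train(corpus.split())
print(generate(model, "the"))  # e.g., "the quick red fox jumps over the lazy dog"
```

Everything this toy model can ever say is recombined from its training text; nothing is created from whole cloth.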

AI has been around for decades. It wasn’t until 2014, however, that the technology began to be refined to the point that it could generate text, images, video and audio so similar to real people and their creations that it is difficult, if not impossible, for the average person to tell the difference.

Rapid advances in the technology in the past few years have yielded generative-AI tools that can write entire stories and articles, seemingly paint artistic images, and even generate what appear to be photographic images of people.

AI “hallucinations” (aka lies)

In the AI field, a “hallucination” occurs when an AI tool (such as ChatGPT) generates a confident response that is not justified by the data on which it has been trained.
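Mechanically, there is nothing mysterious about this. A language model chooses its next words by sampling from plausibility scores learned from text statistics; no step in the process checks the output against facts. The sketch below illustrates the idea with invented numbers (the scores and the example are purely hypothetical, not taken from any real model):

```python
import math
import random

prompt = "The capital of Australia is"

# Hypothetical plausibility scores a model might assign. "Sydney" may
# co-occur with "Australia" more often in ordinary text, so it can
# outscore the factually correct "Canberra".
scores = {"Sydney": 2.1, "Canberra": 1.8, "Melbourne": 1.2}

def sample(scores):
    """Softmax-style sampling: more plausible words win more often."""
    exps = {word: math.exp(s) for word, s in scores.items()}
    r = random.uniform(0, sum(exps.values()))
    cumulative = 0.0
    for word, e in exps.items():
        cumulative += e
        if r <= cumulative:
            return word
    return word  # floating-point edge case fallback

print(prompt, sample(scores))  # frequently completes with the wrong city
```

The model is rewarded for sounding right, not for being right, which is why its errors arrive with such confidence.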

For example, I queried ChatGPT about whether a company owned equally by a husband and wife could qualify for the preferences the federal government sets aside for women-owned businesses. The chatbot responded with something along the lines of “Certainly” or “Absolutely,” explaining that the U.S. government is required to provide equal opportunities to all people without discriminating on the basis of sex. When I cited the provision of federal law that contradicts what the chatbot had just asserted, it replied with an apology and something to the effect of “My bad.”

I also asked ChatGPT if any U.S. law imposes unequal obligations on male citizens. The chatbot cheerily reported back to me that no, no such laws exist. I then cited the provision of the United States Code that imposes an obligation to register for Selective Service only upon male citizens. The bot responded that while that is true, it is unimportant and irrelevant because there has not been a draft in a long time and there is not likely to be one anytime soon. I explained to the bot that this response was beside the point: young men can be, and are, denied government employment and other civic rights and benefits if they fail to register, regardless of whether a draft is in place and regardless of whether they are prosecuted criminally. At this point, ChatGPT announced that it would not be able to continue the conversation, offering a made-up excuse; I don’t recall the exact wording, but it was something like too many users being logged on.

These are all examples of AI hallucinations. If a human being were to say them, we would call them “lies.”

Generating lie after lie

AI tools regularly concoct lies. For example, when asked to generate a financial statement for a company, a popular AI tool falsely stated the company’s revenue, reporting a figure it apparently had simply made up. According to the Slate article “The Alarming Deceptions at the Heart of an Astounding New Chatbot,” users of large language models like ChatGPT have been complaining that these tools randomly insert falsehoods into the text they generate. Experts now consider frequent “hallucination” (aka lying) to be a major problem in chatbots.

ChatGPT has also generated fake case precedents, replete with plausible-sounding citations. This phenomenon made the news when attorney Steven Schwartz submitted six fake ChatGPT-generated case precedents in a brief to the federal district court for the Southern District of New York in Mata v. Avianca. Schwartz reported that ChatGPT continued to insist the fake cases were authentic even after their nonexistence was discovered. In the wake of such incidents, Judge Brantley Starr of the Northern District of Texas began requiring attorneys to certify that no AI-generated filing is submitted without review by a human, explaining that generative-AI tools

are prone to hallucinations and bias…. [T]hey make stuff up – even quotes and citations. Another issue is reliability or bias. While attorneys swear an oath to set aside their personal prejudices,… generative artificial intelligence is the product of programming devised by humans who did not have to swear such an oath. As such, these systems hold no allegiance to…the truth.

Judge Brantley Starr, Mandatory Certification Regarding Generative Artificial Intelligence.

Facilitating defamation

Section 230 of the Communications Decency Act generally shields Facebook, Google and other online services from liability for providing a platform for users to publish false and defamatory information about other people. That has been a real boon for people who like to destroy others’ reputations by spreading lies and misinformation about them online. It can be difficult and expensive to sue an individual for defamation, particularly when the individual has taken steps to conceal or lie about his or her identity. Generative-AI tools make the job of defaming people even easier.

More concerning than the malicious defamatory liars, however, are the many people who earnestly rely on AI as a research tool. In July 2023, Mark Walters filed a lawsuit against OpenAI, claiming its ChatGPT tool provided false and defamatory misinformation about him to journalist Fred Riehl. I wrote about this lawsuit in a previous blog post. Shortly after it was filed, a defamation lawsuit was filed against Microsoft, alleging that its AI tool, too, had generated defamatory lies about an individual. Generative-AI tools can generate false and defamatory statements about individuals even if no one has any intention of defaming anyone or ruining another person’s reputation.

Facilitating false light invasion of privacy

Generative AI is also highly effective in portraying people in a false light. In one recently filed lawsuit, Jack Flora and others allege, among other things, that Prisma Labs’ Lensa app generates sexualized images from images of fully-clothed people, and that the company failed to notify users about the biometric data it collects and how it will be stored and/or destroyed. Flora et al. v. Prisma Labs, Inc., No. 23-cv-00680 (N.D. Cal. Feb. 15, 2023).

Pot, meet kettle; kettle, pot

“False news is harmful to our community, it makes the world less informed, and it erodes trust. . . . At Meta, we’re working to fight the spread of false news.” Meta (née Facebook) published that statement back in 2017. Since then, it has engaged in what is arguably the most ambitious campaign in history to monitor and regulate the content of conversations among humans. Yet it has also joined Google, Microsoft and other mega-organizations in investing billions of dollars in the greatest boon to fake news in recorded history: generative AI.

Toward a braver new world

It would be difficult to imagine a more efficient method of facilitating widespread lying and deception (not to mention false and hateful rhetoric), and therefore propaganda, than generative AI. Yet these mega-organizations continue to sink more and more money into the further development and deployment of these lie-generators.

I dread what the future holds in store for our children and theirs.
