Generative AI: Perfect Tool for the Age of Deception

For many reasons, the new millennium might well be described as the Age of Deception. Cokato Copyright Attorney Tom James explains why generative-AI is a perfect fit for it.

Image by Gerd Altmann on Pixabay.

What is generative AI?

“AI,” of course, stands for artificial intelligence. Generative AI is a variety of it that can produce content such as text and images, seemingly of its own creation. I say “seemingly” because these tools are not really creating images and lines of text independently. Rather, they are “trained” to emulate existing works created by humans. Essentially, they are derivative-work generation machines, producing output based on potentially millions of human-created works.

AI has been around for decades. It wasn’t until 2014, however, that the technology began to be refined to the point that it could generate text, images, video and audio so similar to real people and their creations that it is difficult, if not impossible, for the average person to tell the difference.

Rapid advances in the technology in the past few years have yielded generative-AI tools that can write entire stories and articles, seemingly paint artistic images, and even generate what appear to be photographic images of people.

AI “hallucinations” (aka lies)

In the AI field, a “hallucination” occurs when an AI tool (such as ChatGPT) generates a confident response that is not justified by the data on which it has been trained.

For example, I queried ChatGPT about whether a company owned equally by a husband and wife could qualify for the preferences the federal government sets aside for women-owned businesses. The chatbot responded with something along the lines of “Certainly” or “Absolutely,” explaining that the U.S. government is required to provide equal opportunities to all people without discriminating on the basis of sex. When I cited the provision of federal law that contradicts what the chatbot had just asserted, it replied with an apology and something to the effect of “My bad.”

I also asked ChatGPT if any U.S. law imposes unequal obligations on male citizens. The chatbot cheerily reported back to me that no, no such laws exist. I then cited the provision of the United States Code that imposes an obligation to register for Selective Service only upon male citizens. The bot responded that while that is true, it is unimportant and irrelevant because there has not been a draft in a long time and there is not likely to be one anytime soon. I explained to the bot that this response was irrelevant. Young men can be, and are, denied the right to government employment and other civic rights and benefits if they fail to register, regardless of whether a draft is in place or not, and regardless of whether they are prosecuted criminally or not. At this point, ChatGPT announced that it would not be able to continue this conversation with me. In addition, it made up some excuse. I don’t remember what it was, but it was something like too many users were currently logged on.

These are all examples of AI hallucinations. If a human being were to say them, we would call them “lies.”
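Why do these systems lie so fluently? A toy sketch can illustrate the core mechanism. The following is my own drastically simplified model, not how ChatGPT is actually built: a bigram generator that always picks the statistically most likely next word. Even this crude version shows how a system optimized for plausible word sequences, rather than truth, can emit a confident statement its training data never contained (and in fact contradicts).

```python
from collections import defaultdict

# A toy bigram "language model." This is NOT how ChatGPT works -- real
# systems are vastly more sophisticated -- but it illustrates the core
# problem: generation optimizes for statistically plausible word
# sequences, not for factual accuracy.

training_sentences = [
    "a judge denied the motion to dismiss",
    "the motion for sanctions was denied",
    "the motion to dismiss was granted",
]

# Count how often each word follows each other word.
following = defaultdict(lambda: defaultdict(int))
for sentence in training_sentences:
    words = sentence.split()
    for a, b in zip(words, words[1:]):
        following[a][b] += 1

def generate(start, length):
    """Greedily emit the most frequent next word, one word at a time."""
    out = [start]
    for _ in range(length - 1):
        candidates = following.get(out[-1])
        if not candidates:
            break
        out.append(max(candidates, key=candidates.get))
    return " ".join(out)

result = generate("the", 6)
print(result)  # "the motion to dismiss was denied"
# Fluent and confident -- yet contradicted by the training data, which
# says the motion to dismiss "was granted." No training sentence ever
# said this; it is a plausible-sounding fabrication stitched from parts.
```

The output sentence never appears in the training data, and it reverses the one fact the data does contain. Scale the same dynamic up to billions of parameters and the web as training data, and you get "hallucinations."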

Generating lie after lie

AI tools regularly concoct lies. For example, when asked to generate a financial statement for a company, a popular AI tool reported a revenue figure that it apparently had simply made up. According to Slate’s article “The Alarming Deceptions at the Heart of an Astounding New Chatbot,” users of large language models like ChatGPT have complained that these tools randomly insert falsehoods into the text they generate. Experts now consider frequent “hallucination” (aka lying) to be a major problem in chatbots.

ChatGPT has also generated fake case precedents, replete with plausible-sounding citations. This phenomenon made the news when attorney Steven Schwartz cited six fake ChatGPT-generated case precedents in a brief he filed in the federal district court for the Southern District of New York in Mata v. Avianca. Schwartz reported that ChatGPT continued to insist the fake cases were authentic even after their nonexistence was discovered. In the wake of that episode, at least one federal judge now requires attorneys to certify that any AI-generated portion of a filing has been checked for accuracy by a human, explaining that generative-AI tools

are prone to hallucinations and bias…. [T]hey make stuff up – even quotes and citations. Another issue is reliability or bias. While attorneys swear an oath to set aside their personal prejudices,… generative artificial intelligence is the product of programming devised by humans who did not have to swear such an oath. As such, these systems hold no allegiance to…the truth.

Judge Brantley Starr, Mandatory Certification Regarding Generative Artificial Intelligence.

Facilitating defamation

Section 230 of the Communications Decency Act generally shields Facebook, Google and other online services from liability for providing a platform for users to publish false and defamatory information about other people. That has been a real boon for those who set out to destroy other people’s reputations by spreading lies and misinformation about them online. It can be difficult and expensive to sue an individual for defamation, particularly when the individual has taken steps to conceal or lie about his or her identity. Generative-AI tools make the job of defaming people even easier.

More concerning than the malicious defamatory liars, however, are the many people who earnestly rely on AI as a research tool. In July, 2023, Mark Walters filed a lawsuit against OpenAI, claiming its ChatGPT tool provided false and defamatory misinformation about him to journalist Fred Riehl. I wrote about this lawsuit in a previous blog post. Shortly after that lawsuit was filed, a defamation lawsuit was filed against Microsoft, alleging that its AI tool, too, had generated defamatory lies about an individual. Generative-AI tools can generate false and defamatory statements about individuals even if no one has any intention of defaming anyone or ruining another person’s reputation.

Facilitating false light invasion of privacy

Generative AI is also highly effective in portraying people in a false light. In one recently filed lawsuit, Jack Flora and others allege, among other things, that Prisma Labs’ Lensa app generates sexualized images from images of fully-clothed people, and that the company failed to notify users about the biometric data it collects and how it will be stored and/or destroyed. Flora et al. v. Prisma Labs, Inc., No. 23-cv-00680 (N.D. Calif. February 15, 2023).

Pot, meet kettle; kettle, pot

“False news is harmful to our community, it makes the world less informed, and it erodes trust. . . . At Meta, we’re working to fight the spread of false news.” Meta (née Facebook) published that statement back in 2017. Since then, it has engaged in what is arguably the most ambitious campaign in history to monitor and regulate the content of conversations among humans. Yet it has also joined fellow mega-organizations Google and Microsoft in investing multiple billions of dollars in the greatest boon to fake news in recorded history: generative AI.

Toward a braver new world

It would be difficult to imagine a more efficient method of facilitating widespread lying and deception (not to mention false and hateful rhetoric) – and therefore propaganda – than generative-AI. Yet, these mega-organizations continue to sink more and more money into further development and deployment of these lie-generators.

I dread what the future holds in store for our children and theirs.

Another AI lawsuit against Microsoft and OpenAI

Last June, Microsoft, OpenAI and others were hit with a class action lawsuit involving their AI data-scraping technologies. On Tuesday (September 5, 2023) another class action lawsuit was filed against them. The gravamen of both of these complaints is that these companies allegedly trained their AI technologies using personal information from millions of users, in violation of federal and state privacy statutes and other laws.

Among the laws alleged to have been violated are the Electronic Communications Privacy Act, the Computer Fraud and Abuse Act, the California Invasion of Privacy Act, California’s unfair competition law, Illinois’s Biometric Information Privacy Act, and the Illinois Consumer Fraud and Deceptive Business Practices Act. The lawsuits also allege a variety of common law claims, including negligence, invasion of privacy, conversion, unjust enrichment, breach of the duty to warn, and such.

This is just the most recent lawsuit in a growing body of claims against big AI. Many involve allegations of copyright infringement, but privacy is a growing concern. This particular suit is asking for an award of monetary damages and an order that would require the companies to implement safeguards for the protection of private data.

Microsoft reportedly has invested billions of dollars in OpenAI and its app, ChatGPT.

The case is A.T. v. OpenAI LP, U.S. District Court for the Northern District of California, No. 3:23-cv-04557 (September 5, 2023).

Is Microsoft “too big to fail” in court? We shall see.

A Recent Exit from Paradise

Over a year ago, Steven Thaler filed an application with the United States Copyright Office to register a copyright in an AI-generated image called “A Recent Entrance to Paradise.” In the application, he listed a machine as the “author” and himself as the copyright owner. The Copyright Office refused registration on the grounds that the work lacked human authorship. Thaler then filed a lawsuit in federal court seeking to overturn that determination. On August 18, 2023, the court upheld the Copyright Office’s refusal of registration. The case is Thaler v. Perlmutter, No. CV 22-1564 (BAH), 2023 WL 5333236 (D.D.C. Aug. 18, 2023).

Read more about the history of this case in my previous blog post, “A Recent Entrance to Complexity.”

The Big Bright Green Creativity Machine

In his application for registration, Thaler had listed his computer, referred to as “Creativity Machine,” as the “author” of the work, and himself as a claimant. The Copyright Office denied registration on the basis that copyright only protects human authorship.

Taking the Copyright Office to court

Unsuccessful in securing a reversal through administrative appeals, Thaler filed a lawsuit in federal court claiming the Office’s denial of registration was “arbitrary, capricious, an abuse of discretion and not in accordance with the law….”

The court ultimately sided with the Copyright Office. In its decision, it provided a cogent explanation of the rationale for the human authorship requirement:

The act of human creation—and how to best encourage human individuals to engage in that creation, and thereby promote science and the useful arts—was thus central to American copyright from its very inception. Non-human actors need no incentivization with the promise of exclusive rights under United States law, and copyright was therefore not designed to reach them.

Id.

A Complex Issue

As I discussed in a previous blog post, the issue is not as simple as it might seem. There are different levels of human involvement in the use of an AI content generating mechanism. At one extreme, there are programs like “Paint,” in which users provide a great deal of input. These kinds of programs may be analogized to paintbrushes, pens and other tools that artists traditionally have used to express their ideas on paper or canvas. Word processing programs are also in this category. It is easy to conclude that the users of these kinds of programs are the authors of works that may be sufficiently creative and original to receive copyright protection.

At the other end of the spectrum are AI services like DALL-E and ChatGPT. These tools are capable of generating content with very little user input. If the only human input is a user’s directive to “Draw a picture,” then it would be difficult to claim that the user contributed any creative expression. That is to say, it would be difficult to claim that the user authored anything.

The difficult question – and one that is almost certain to be the subject of ongoing litigation and probably new Copyright Office regulations – is exactly how much, and what kind of, human input is necessary before a human can claim authorship. Equally perplexing is how much, if at all, the Copyright Office should involve itself in ascertaining and evaluating the details of the process by which a work was created. And, of course, there is the question of what consequences should flow from an applicant’s failure to disclose complete details about the nature and extent of machine involvement in the creative process.

Conclusion

The court in this case did not dive into these issues. The only thing we can safely take away from this decision is the broad proposition that a work is not protected by copyright to the extent it is generated by a machine.

Generative-AI: The Top 12 Lawsuits

Artificial intelligence (“AI”) is generating more than content; it is generating lawsuits. Here is a brief chronology of what I believe are the most significant lawsuits that have been filed so far.

Most of these allege copyright infringement, but some make additional or other kinds of claims, such as trademark, privacy or publicity right violations, defamation, unfair competition, and breach of contract, among others. So far, the suits primarily target the developers and purveyors of generative AI chatbots and similar technology. They focus more on what I call “input” than on “output” copyright infringement. That is to say, they allege that copyright infringement is involved in the way particular AI tools are trained.

Thomson Reuters Enterprise Centre GmbH et al. v. ROSS Intelligence (May, 2020)

Thomson Reuters Enterprise Centre GmbH et al. v. ROSS Intelligence Inc., No. 20-cv-613 (D. Del. 2020)

Thomson Reuters alleges that ROSS Intelligence copied its Westlaw database without permission and used it to train a competing AI-driven legal research platform. In defense, ROSS has asserted that it only copied ideas and facts from the Westlaw database of legal research materials. (Facts and ideas are not protected by copyright.) ROSS also argues that its use of content in the Westlaw database is fair use.

One difference between this case and subsequent generative-AI copyright infringement cases is that the defendant in this case is alleged to have induced a third party with a Westlaw license to obtain allegedly proprietary content for the defendant after the defendant had been denied a license of its own. Other cases involve generative AI technologies that operate by scraping publicly available content.

The parties filed cross-motions for summary judgment. While those motions were pending, the U.S. Supreme Court issued its decision in Andy Warhol Found. for the Visual Arts, Inc. v. Goldsmith, 598 U.S. ___, 143 S. Ct. 1258 (2023). The parties have now filed supplemental briefs asserting competing arguments about whether and how the Court’s treatment of transformative use in that case should be interpreted and applied in this case. A decision on the motions is expected soon.

Doe 1 et al. v. GitHub et al. (November, 2022)

Doe 1 et al. v. GitHub, Inc. et al., No. 22-cv-06823 (N.D. Calif. November 3, 2022)

This is a class action lawsuit against GitHub, Microsoft, and OpenAI that was filed in November, 2022. It involves GitHub Copilot, an AI-powered tool that suggests lines of programming code based on what a programmer has written. The complaint alleges that Copilot copies code from publicly available software repositories without complying with the terms of applicable open-source licenses. The complaint also alleges removal of copyright management information in violation of 17 U.S.C. § 1202, unfair competition, and other tort claims.

Andersen et al. v. Stability AI et al. (January 13, 2023)

Andersen et al. v. Stability AI et al., No. 23-cv-00201 (N.D. Calif. Jan. 13, 2023)

Artists Sarah Andersen, Kelly McKernan, and Karla Ortiz filed this class action lawsuit against generative-AI companies Stability AI, Midjourney, and DeviantArt on January 13, 2023. The lawsuit alleges that the defendants infringed their copyrights by using their artwork without permission to train AI-powered image generators to create allegedly infringing derivative works. The lawsuit also alleges violations of 17 U.S.C. § 1202 and publicity rights, breach of contract, and unfair competition.

Getty Images v. Stability AI (February 3, 2023)

Getty Images v. Stability AI, No. 23-cv-00135-UNA (D. Del. February 3, 2023)

Getty Images has filed two lawsuits against Stability AI, one in the United Kingdom and one in the United States, each alleging both input and output copyright infringement. Getty Images owns the rights to millions of images. It is in the business of licensing rights to use copies of the images to others. The lawsuit also accuses Stability AI of falsifying, removing or altering copyright management information, trademark infringement, trademark dilution, unfair competition, and deceptive trade practices.

Stability AI has moved to dismiss the complaint filed in the U.S. for lack of jurisdiction.

Flora et al. v. Prisma Labs (February 15, 2023)

Flora et al. v. Prisma Labs, Inc., No. 23-cv-00680 (N.D. Calif. February 15, 2023)

Jack Flora and others filed a class action lawsuit against Prisma Labs for invasion of privacy. The complaint alleges, among other things, that the defendant’s Lensa app generates sexualized images from images of fully-clothed people, and that the company failed to notify users about the biometric data it collects and how it will be stored and/or destroyed, in violation of Illinois’s data privacy laws.

Young v. NeoCortext (April 3, 2023)

Young v. NeoCortext, Inc., 2023-cv-02496 (C.D. Calif. April 3, 2023)

This is a publicity rights case. NeoCortext’s Reface app allows users to paste images of their own faces over those of celebrities in photographs and videos. Kyland Young, a former cast member of the Big Brother reality television show, has sued NeoCortext for allegedly violating his publicity rights. The complaint alleges that NeoCortext has “commercially exploit[ed] his and thousands of other actors, musicians, athletes, celebrities, and other well-known individuals’ names, voices, photographs, or likenesses to sell paid subscriptions to its smartphone application, Reface,” without their permission.

NeoCortext has asserted a First Amendment defense, among others.

Walters v. Open AI (June 5, 2023)

Walters v. OpenAI, LLC, No. 2023-cv-03122 (N.D. Ga. July 14, 2023) (Complaint originally filed in Gwinnett County, Georgia Superior Court on June 5, 2023; subsequently removed to federal court)

This is a defamation action against OpenAI, the company responsible for ChatGPT. The lawsuit was brought by Mark Walters. He alleges that ChatGPT provided false and defamatory misinformation about him to journalist Fred Riehl in connection with a federal civil rights lawsuit against Washington Attorney General Bob Ferguson and members of his staff. ChatGPT allegedly stated that the lawsuit was one for fraud and embezzlement on the part of Mr. Walters. The complaint alleges that Mr. Walters was “neither a plaintiff nor a defendant in the lawsuit,” and that “every statement of fact” pertaining to him in the summary of the federal lawsuit that ChatGPT prepared is false. A New York court recently addressed the question of sanctions for attorneys who submit briefs containing citations to non-existent “precedents” made up entirely by ChatGPT. This is the first case to address tort liability for ChatGPT’s notorious creation of “hallucinatory facts.”

In July, 2023, Jeffrey Battle filed a complaint against Microsoft in Maryland alleging that he, too, has been defamed as a result of AI-generated “hallucinatory facts.”

P.M. et al. v. OpenAI et al. (June 28, 2023)

P.M. et al. v. OpenAI LP et al., No. 2023-cv-03199 (N.D. Calif. June 28, 2023)

This lawsuit has been brought by underage individuals against OpenAI and Microsoft. The complaint alleges the defendants’ generative-AI products ChatGPT, Dall-E and Vall-E collect private and personally identifiable information from children without their knowledge or informed consent. The complaint sets out claims for alleged violations of the Electronic Communications Privacy Act; the Computer Fraud and Abuse Act; California’s Invasion of Privacy Act and unfair competition law; Illinois’s Biometric Information Privacy Act and Consumer Fraud and Deceptive Business Practices Act; New York General Business Law § 349 (deceptive trade practices); and negligence, invasion of privacy, conversion, unjust enrichment, and breach of duty to warn.

Tremblay v. OpenAI (June 28, 2023)

Tremblay v. OpenAI, Inc., No. 23-cv-03223 (N.D. Calif. June 28, 2023)

Another copyright infringement lawsuit against OpenAI relating to its ChatGPT tool. In this one, authors allege that ChatGPT is trained on the text of books they and other proposed class members authored, and facilitates output copyright infringement. The complaint sets forth claims of copyright infringement, DMCA violations, and unfair competition.

Silverman et al. v. OpenAI (July 7, 2023)

Silverman et al. v. OpenAI, No. 23-cv-03416 (N.D. Calif. July 7, 2023)

Sarah Silverman (comedian/actress/writer) and others allege that OpenAI, by using copyright-protected works without permission to train ChatGPT, committed direct and vicarious copyright infringement, violated 17 U.S.C. § 1202(b), and violated their rights under unfair competition, negligence, and unjust enrichment law.

Kadrey et al. v. Meta Platforms (July 7, 2023)

Kadrey et al. v. Meta Platforms, No. 2023-cv-03417 (N.D. Calif. July 7, 2023)

The same kinds of allegations as are made in Silverman v. OpenAI, but this time against Meta Platforms, Inc.

J.L. et al. v. Alphabet (July 11, 2023)

J.L. et al. v. Alphabet, Inc. et al. (N.D. Calif. July 11, 2023)

This is a lawsuit against Google and its owner Alphabet, Inc. for allegedly scraping and harvesting private and personal user information, copyright-protected works, and emails, without notice or consent. The complaint alleges claims for invasion of privacy, unfair competition, negligence, copyright infringement, and other causes of action.

On the Regulatory Front

The U.S. Copyright Office is examining the problems associated with registering copyrights in works that rely, in whole or in part, on artificial intelligence. The U.S. Federal Trade Commission (FTC) has suggested that generative AI implicates “competition concerns.” Lawmakers in the United States and the European Union are considering legislation to regulate AI in various ways.
