Generative-AI as Unfair Trade Practice

While Congress and the courts grapple with generative-AI copyright issues, the FTC weighs in on the risks of unfair competition, monopolization, and consumer deception.

FTC Press Release excerpt

While Congress and the courts are grappling with the copyright issues that AI has generated, the federal government’s primary consumer watchdog has made a rare entry into the realm of copyright law. The Federal Trade Commission (FTC) has filed a Comment with the U.S. Copyright Office suggesting that generative-AI could be (or be used as) an unfair or deceptive trade practice. The Comment was filed in response to the Copyright Office’s request for comments as it prepares to begin rule-making on the subject of artificial intelligence (AI), particularly generative-AI.

Monopolization

The FTC is responsible for enforcing the FTC Act, which broadly prohibits “unfair or deceptive” practices. The Act protects consumers from deceptive and unscrupulous business practices. It is also intended to promote fair and healthy competition in U.S. markets. The Supreme Court has held that all violations of the Sherman Act also violate the FTC Act.

So how does generative-AI raise monopolization concerns? The Comment suggests that incumbents in the generative-AI industry could engage in anti-competitive behavior to ensure continuing and exclusive control over the use of the technology.

The agency cited the usual suspects: bundling, tying, exclusive or discriminatory dealing, mergers, and acquisitions. Those kinds of concerns, of course, are common in any business sector. They are not unique to generative-AI. The FTC also described some matters of special concern in the AI space, though.

Network effects

Because positive feedback loops improve the performance of generative-AI, it gets better as more people use it. This results in concentrated market power in incumbent generative-AI companies, with diminishing possibilities for new entrants to the market. According to the FTC, “network effects can supercharge a company’s ability and incentive to engage in unfair methods of competition.”

Platform effects

As AI users come to depend on a particular incumbent generative-AI platform, the company that owns the platform could take steps to lock its customers into using that platform exclusively.

Copyrights and AI competition

The FTC Comment indicates that the agency is not only weighing the possibility that AI unfairly harms creators’ ability to compete. (The use of pirated materials, or the misuse of copyrighted materials, can be an unfair method of competition under Section 5 of the FTC Act.) It is also considering that generative-AI may deceive, or be used to deceive, consumers. Specifically, the FTC expressed a concern that “consumers may be deceived when authorship does not align with consumer expectations, such as when a consumer thinks a work has been created by a particular musician or other artist, but it has been generated by someone else using an AI tool.” (Comment, page 5.)

In one of my favorite passages in the Comment, the FTC suggests that training AI on protected expression without consent, or selling output generated “in the style of” a particular writer or artist, may be an unfair method of competition, “especially when the copyright violation deceives consumers, exploits a creator’s reputation or diminishes the value of her existing or future works….” (Comment, pages 5 – 6).

Fair Use

The significance of the FTC’s injection of itself into the generative-AI copyright fray cannot be overstated. It is extremely likely that during their legislative and rule-making deliberations, both Congress and the Copyright Office are going to focus the lion’s share of their attention on the fair use doctrine. They are most likely going to try to allow generative-AI outfits to continue to infringe copyrights (it is already a multi-billion-dollar industry, after all, with obvious potential political value), while at the same time imposing at least some kinds of limitations to preserve a few shards of the copyright system. Maybe they will devise a system of statutory licensing, as they did when online streaming, and the widespread copyright infringement it facilitated, became a thing.

Whatever happens, the overarching question for Congress is going to be: “What kinds of copyright infringement should be considered ‘fair’ use?”

Copyright fair use normally is assessed using a four-prong test set out in the Copyright Act. Considerations about unfair competition arguably are subsumed within the fourth factor in that analysis – the effect the infringing use has on the market for the original work.

The other objective of the FTC Act – protecting consumers from deception — does not neatly fit into one of the four statutory factors for copyright fair use. I believe a good argument can be made that it should come within the coverage of the first prong of the four-factor test: the purpose and character of the use. The task for Congress and the Copyright Office, then, should be to determine which particular purposes and kinds of uses of generative-AI should be thought of as fair. There is no reason the Copyright Office should avoid considering Congress’s objectives, expressed in the FTC Act and other laws, when making that determination.

Case Update: Andersen v. Stability AI

Artists Sarah Andersen, Kelly McKernan, and Karla Ortiz filed a class action lawsuit against Stability AI, DeviantArt, and MidJourney in federal district court alleging causes of action for copyright infringement, removal or alteration of copyright management information, and violation of publicity rights. (Andersen, et al. v. Stability AI Ltd. et al., No. 23-cv-00201-WHO (N.D. Cal. 2023).) The claims relate to the defendants’ alleged unlicensed use of their copyright-protected artistic works in generative-AI systems.

On October 30, 2023, U.S. District Judge William H. Orrick dismissed all claims except for Andersen’s direct infringement claim against Stability. Most of the dismissals, however, were granted with leave to amend.

The Claims

McKernan’s and Ortiz’s copyright infringement claims

The judge dismissed McKernan’s and Ortiz’s copyright infringement claims because they did not register the copyrights in their works with the U.S. Copyright Office.

I criticized the U.S. requirement of registration as a prerequisite to the enforcement of a domestic copyright in a U.S. court in a 2019 Illinois Law Review article (“Copyright Enforcement: Time to Abolish the Pre-Litigation Registration Requirement.”) This is a uniquely American requirement. Moreover, the requirement does not apply to foreign works. This has resulted in the anomaly that foreign authors have an easier time enforcing the copyrights in their works in the United States than U.S. authors do. Nevertheless, until Congress acts to change this, it is still necessary for U.S. authors to register their copyrights with the U.S. Copyright Office before they can enforce their copyrights in U.S. courts.  

Since there was no claim that McKernan or Ortiz had registered their copyrights, the judge had no real choice under current U.S. copyright law but to dismiss their claims.

Andersen’s copyright infringement claim against Stability

Andersen’s complaint alleges that she “owns a copyright interest in over two hundred Works included in the Training Data” and that Stability used some of them as training data. Defendants moved to dismiss this claim because it failed to specifically identify which of those works had been registered. The judge, however, determined that her attestation that some of her registered works had been used as training images sufficed, for pleading purposes.  A motion to dismiss tests the sufficiency of a complaint to state a claim; it does not test the truth or falsity of the assertions made in a pleading. Stability can attempt to disprove the assertion later in the proceeding. Accordingly, Judge Orrick denied Stability’s motion to dismiss Andersen’s direct copyright infringement claim.

Andersen’s copyright infringement claims against DeviantArt and MidJourney

The complaint alleges that Stability created and released a software program called Stable Diffusion and that it downloaded copies of billions of copyrighted images (known as “training images”), without permission, to create it. Stability allegedly used the services of LAION (Large-scale Artificial Intelligence Open Network) to scrape the images from the Internet. Further, the complaint alleges, Stable Diffusion is a “software library” providing image-generating services to the other defendants named in the complaint. DeviantArt offers an online platform where artists can upload their works. In 2022, it released a product called “DreamUp” that relies on Stable Diffusion to produce images. The complaint alleges that artwork the plaintiffs uploaded to the DeviantArt site was scraped into the LAION database and then used to train Stable Diffusion. MidJourney is also alleged to have used the Stable Diffusion library.

Judge Orrick granted the motion to dismiss the claims of direct infringement against DeviantArt and MidJourney, with leave to amend the complaint to clarify the theory of liability.

DMCA claims

The complaint makes allegations about unlawful removal of copyright management information in violation of the Digital Millennium Copyright Act (DMCA). Judge Orrick found the complaint deficient in this respect, but granted leave to amend to clarify which defendant(s) are alleged to have done this, when it allegedly occurred, and what specific copyright management information was allegedly removed.

Publicity rights claims

 Plaintiffs allege that the defendants used their names in their products by allowing users to request the generation of artwork “in the style of” their names. Judge Orrick determined the complaint did not plead sufficient factual allegations to state a claim. Accordingly, he dismissed the claim, with leave to amend. In a footnote, the court deferred to a later time the question whether the Copyright Act preempts the publicity claims.

In addition, DeviantArt filed a motion to strike under California’s Anti-SLAPP statute. The court deferred decision on that motion until after the Plaintiffs have had time to file an amended complaint.

Unfair competition claims

The court also dismissed plaintiffs’ claims of unfair competition, with leave to amend.

Breach of contract claim against DeviantArt

Plaintiffs allege that DeviantArt violated its own Terms of Service in connection with its DreamUp product and the alleged scraping of works users upload to the site. This claim, too, was dismissed with leave to amend.

Conclusion

Media reports have tended to overstate the significance of Judge Orrick’s October 30, 2023 Order. Reports of the death of the lawsuit are greatly exaggerated. It would have been nice if greater attention had been paid to the registration requirement during the drafting of the complaint, but the lawsuit nevertheless is still very much alive. We won’t really know whether it will remain that way unless and until the plaintiffs amend the complaint – which they are almost certainly going to do.

Need help with copyright registration? Contact attorney Tom James.

AI Legislative Update

Congressional legislation to regulate artificial intelligence (“AI”) and AI companies is in the early formative stages. Just about the only thing that is certain at this point is that federal regulation in the United States is coming.

In August 2023, Senators Richard Blumenthal (D-CT) and Josh Hawley (R-MO) introduced a Bipartisan Framework for U.S. AI Act. The Framework sets out five bullet points identifying Congressional legislative objectives:

  • Establish a federal regulatory regime implemented through licensing AI companies, to include requirements that AI companies provide information about their AI models and maintain “risk management, pre-deployment testing, data governance, and adverse incident reporting programs.”
  • Ensure accountability for harms through both administrative enforcement and private rights of action, where “harms” include privacy or civil rights violations. The Framework proposes making Section 230 of the Communications Decency Act inapplicable to these kinds of actions. (Section 230 is the provision that generally grants immunity to Facebook, Google and other online service providers for user-provided content.) The Framework identifies the harms about which it is most concerned as “explicit deepfake imagery of real people, production of child sexual abuse material from generative A.I. and election interference.” Noticeably absent is any mention of harms caused by copyright infringement.
  • Restrict the sharing of AI technology with Russia, China or other “adversary nations.”
  • Promote transparency: The Framework would require AI companies to disclose information about the limitations, accuracy and safety of their AI models to users; would give consumers a right to notice when they are interacting with an AI system; would require providers to watermark or otherwise disclose AI-generated deepfakes; and would establish a public database of AI-related “adverse incidents” and harm-causing failures.
  • Protect consumers and kids. “Consumers should have control over how their personal data is used in A.I. systems and strict limits should be imposed on generative A.I. involving kids.”

The Framework does not address copyright infringement, whether of the input or the output variety.

The Senate Judiciary Committee Subcommittee on Privacy, Technology, and the Law held a hearing on September 12, 2023. Witnesses called to testify generally approved of the Framework as a starting point.

The Senate Commerce, Science, and Transportation Subcommittee on Consumer Protection, Product Safety and Data Security also held a hearing on September 12, titled “The Need for Transparency in Artificial Intelligence.” One of the witnesses, Dr. Ramayya Krishnan of Carnegie Mellon University, did raise a concern about the use of copyrighted material by AI systems and the economic harm it causes for creators.

On September 13, 2023, Sen. Chuck Schumer (D-NY) held an “AI Roundtable.” Invited attendees present at the closed-door session included Bill Gates (Microsoft), Elon Musk (xAI, Neuralink, etc.), Sundar Pichai (Google), Charlie Rivkin (MPA), and Mark Zuckerberg (Meta). Gates, whose Microsoft company, like those headed by some of the other invitees, has been investing heavily in generative-AI development, touted the claim that AI could target world hunger.

Meanwhile, Dana Rao, Adobe’s Chief Trust Officer, penned a proposal that Congress establish a federal anti-impersonation right to address the economic harms generative-AI causes when it impersonates the style or likeness of an author or artist. The proposed law would be called the Federal Anti-Impersonation Right Act, or “FAIR Act,” for short. The proposal would provide for the recovery of statutory damages by artists who are unable to prove actual economic damages.

A Recent Exit from Paradise

Over a year ago, Steven Thaler filed an application with the United States Copyright Office to register a copyright in an AI-generated image called “A Recent Entrance to Paradise.” In the application, he listed a machine as the “author” and himself as the copyright owner. The Copyright Office refused registration on the grounds that the work lacked human authorship. Thaler then filed a lawsuit in federal court seeking to overturn that determination. On August 18, 2023, the court upheld the Copyright Office’s refusal of registration. The case is Thaler v. Perlmutter, No. CV 22-1564 (BAH), 2023 WL 5333236 (D.D.C. Aug. 18, 2023).

Read more about the history of this case in my previous blog post, “A Recent Entrance to Complexity.”

The Big Bright Green Creativity Machine

In his application for registration, Thaler had listed his computer, referred to as “Creativity Machine,” as the “author” of the work, and himself as a claimant. The Copyright Office denied registration on the basis that copyright only protects human authorship.

Taking the Copyright Office to court

Unsuccessful in securing a reversal through administrative appeals, Thaler filed a lawsuit in federal court claiming the Office’s denial of registration was “arbitrary, capricious, an abuse of discretion and not in accordance with the law….”

The court ultimately sided with the Copyright Office. In its decision, it provided a cogent explanation of the rationale for the human authorship requirement:

The act of human creation—and how to best encourage human individuals to engage in that creation, and thereby promote science and the useful arts—was thus central to American copyright from its very inception. Non-human actors need no incentivization with the promise of exclusive rights under United States law, and copyright was therefore not designed to reach them.

Id.

A Complex Issue

As I discussed in a previous blog post, the issue is not as simple as it might seem. There are different levels of human involvement in the use of an AI content generating mechanism. At one extreme, there are programs like “Paint,” in which users provide a great deal of input. These kinds of programs may be analogized to paintbrushes, pens and other tools that artists traditionally have used to express their ideas on paper or canvas. Word processing programs are also in this category. It is easy to conclude that the users of these kinds of programs are the authors of works that may be sufficiently creative and original to receive copyright protection.

At the other end of the spectrum are AI services like DALL-E and ChatGPT. These tools are capable of generating content with very little user input. If the only human input is a user’s directive to “Draw a picture,” then it would be difficult to claim that the user contributed any creative expression. That is to say, it would be difficult to claim that the user authored anything.

The difficult question – and one that is almost certain to be the subject of ongoing litigation and probably new Copyright Office regulations – is exactly how much, and what kind of, human input is necessary before a human can claim authorship. Equally perplexing is how much, if at all, the Copyright Office should involve itself in ascertaining and evaluating the details of the process by which a work was created. And, of course, there is the question of what consequences should flow from an applicant’s failure to disclose complete details about the nature and extent of machine involvement in the creative process.

Conclusion

The court in this case did not dive into these issues. The only thing we can safely take away from this decision is the broad proposition that a work is not protected by copyright to the extent it is generated by a machine.

New AI Copyright Guidance

The Copyright Office is providing guidance to copyright applicants who wish to register works with AI-generated content in them.

On Thursday, March 16, 2023, the United States Copyright Office published new guidance in the Federal Register regarding the registration of copyrights in AI-generated material. Here is the tl;dr version.

The Problem

Artificial intelligence (AI) technologies are now capable of producing content that would be considered expressive works if created by a human being. These technologies “train” on mass quantities of existing human-authored works and use patterns detected in them to generate like content. This creates a thorny question about authorship: To what extent can a person who uses AI technology to generate content be considered the “author” of such content?

It isn’t a hypothetical problem. The Copyright Office has already started receiving applications for registration of copyrights in works that are either wholly or partially AI-generated.

The U.S. Copyright Act gives the Copyright Office power to determine whether and what kinds of additional information it may need from a copyright registration applicant in order to evaluate the existence, ownership and duration of a purported copyright. On March 16, 2023, the Office exercised that power by publishing Copyright Registration Guidance: Works Containing Material Generated by Artificial Intelligence, 88 Fed. Reg. 16190 (Mar. 16, 2023), in the Federal Register.

Sorry, HAL, No Registration for You

Consistent with judicial rulings, the U.S. Copyright Office takes the position that only material that is created by a human being is protected by copyright. In other words, copyrights only protect human authorship. If a monkey can’t own a copyright in a photograph and an elephant can’t own a copyright in a portrait it paints, a computer-driven technology cannot own a copyright in the output it generates. Sorry, robots; it’s a human’s world.

As stated in the Compendium of Copyright Office Practices:

The Copyright Office “will not register works produced by a machine or mere mechanical process that operates randomly or automatically without any creative input or intervention from a human author.”

U.S. Copyright Office, Compendium of U.S. Copyright Office Practices § 313.2 (3d ed. 2021)

Partially AI-Generated Works

A work that is the product of a human being’s own original conception, to which s/he gave visible form, clearly has a human author. A work that is entirely the result of mechanical reproduction clearly does not. Things get murkier when AI technology is used to generate content to which a human being applies some creativity.

According to the new guidance, merely prompting an AI technology to generate a poem, drawing or the like, without more, is not enough to establish human authorship if the AI technology determines the expressive elements of its output. This kind of content is not protected by copyright and a registration applicant therefore will need to disclaim it in the application.

On the other hand, if a human being selects and arranges AI-generated content, the selection and arrangement may be protected by copyright even if the content itself is not. Similarly, if a human being makes significant modifications to AI-generated content, then those modifications may receive copyright protection. In all cases, of course, the selection, arrangement or modification must be sufficiently creative in order to qualify for copyright protection.

Disclosure required

The new guidance imposes a duty on copyright registration applicants to disclose the inclusion of AI-generated content in any work submitted for registration.

Standard application

If you use AI technology to any extent in creating the work, you will need to use the Standard application, not the Single application, to register the copyright in it.

Claims and disclaimers

The applicant will need to describe the human author’s contributions to the work in the “Author Created” field of the application. A claim should be made only in those human-authored contributions.

Any significant AI-generated content must be explicitly excluded (disclaimed), in the “Limitations of the Claim” section of the application, in the “Other” field, under the “Material Excluded” heading.

Previously filed applications

If you have already filed an application for a work that includes AI-generated material, you will need to make sure that it makes an adequate disclosure about that. The newly-issued guidance says you should contact the Copyright Office’s Public Information Office and report that you omitted AI information from the application. This will cause a notation to be made to the record. When an examiner sees the notation, s/he may contact you to obtain additional information if necessary.

If a registration has already been issued, you should submit a supplementary registration form to correct it. Failing to do that could result in your registration being cancelled, if the Office becomes aware that information essential to its evaluation of registrability has been omitted. In addition, a court may ignore a registration in an infringement action if it concludes that you knowingly provided the Copyright Office with false information.


Need help with a copyright application or registration?

Contact attorney Tom James.

A Recent Entrance to Complexity

The United States Copyright Office recently reaffirmed its position that it will not register AI-generated content, because it is not created by a human. The rule is easy to state; the devil is in the details. Attorney Thomas James explains.

Last year, the United States Copyright Office issued a copyright registration to Kristina Kashtanova for the graphic novel, Zarya of the Dawn. A month later, the Copyright Office issued a notice of cancellation of the registration, along with a request for additional information.

The Copyright Office, consistent with judicial decisions, takes the position that copyright requires human authorship. The Office requested additional information regarding the creative process that resulted in the novel because parts of it were AI-generated. Kashtanova complied with the request for additional information.

This week, the Copyright Office responded with a letter explaining that the registration would be cancelled, but that a new, more limited one will be issued. The Office explained that its concern related to the author’s use of Midjourney, an AI-powered image generating tool, to generate images used in the work:

Because Midjourney starts with randomly generated noise that evolves into a final image, there is no guarantee that a particular prompt will generate any particular visual output.

U.S. Copyright Office letter

The Office concluded that the text the author wrote, as well as the author’s selection, coordination and arrangement of written and visual elements, are protected by copyright, and therefore may be registered. The images generated by Midjourney, however, would not be registered because they were “not the product of human authorship.” The new registration will cover only the text and editing components of the work, not the AI-generated images.

A Previous Entrance to Paradise

Early last year, the Copyright Office refused copyright registration for an AI-generated image. Steven Thaler had filed an application to register a copyright in an AI-generated image called “A Recent Entrance to Paradise.” He listed himself as the copyright owner. The Copyright Office denied registration on the grounds that the work lacked human authorship. Thaler filed a lawsuit in federal court seeking to overturn that determination. The lawsuit is still pending. It is currently at the summary judgment stage.

The core issue

The core issue, of course, is whether a person who uses AI to generate content such as text or artwork can claim copyright protection in the content so generated. Put another way, can a user who deploys artificial intelligence to generate a seemingly expressive work (such as artwork or a novel) claim authorship?

This question is not as simple as it may seem. There can be different levels of human involvement in the use of an AI content generating mechanism. At one extreme, there are programs like “Paint,” in which users provide a great deal of input. These kinds of programs may be analogized to paintbrushes, pens and other tools that artists traditionally have used to express their ideas on paper or canvas. Word processing programs are also in this category. It is easy to conclude that the users of these kinds of programs are the authors of works that may be sufficiently creative and original to receive copyright protection.

At the other end of the spectrum are AI services like DALL-E and ChatGPT. Text and images can be generated by these systems with minimal human input. If the only human input is a user’s directive to “Write a story” or “Draw a picture,” then it would be difficult to claim that the user contributed any creative expression. That is to say, it would be difficult to claim that the user authored anything.

Peering into the worm can

The complicating consideration with content-generative AI mechanisms is that they have the potential to allow many different levels of user involvement in the generation of output. The more details a user adds to the instructions s/he gives to the machine, the more it begins to appear that the user is, in fact, contributing something creative to the project.

Is a prompt to “Write a story about a dog” a sufficiently creative contribution to the resulting output to qualify the user as an “author”? Maybe not. But what about, “Write a story about a dog who joins a traveling circus”? Or “Write a story about a dog named Pablo who joins a traveling circus”? Or “Write a story about a dog with a peculiar bark that begins, ‘Once upon a time, there was a dog named Pablo who joined a circus,’ and ends with Pablo deciding to return home”?

At what point along the spectrum of user-provided detail does copyright protectable authorship come into existence?

A question that is just as important to ask is: How much, if at all, should the Copyright Office involve itself with ascertaining the details of the creative process that were involved in a work?

In a similar vein, should copyright registration applicants be required to disclose whether their works contain AI-generated content? Should they be required to affirmatively disclaim rights in elements of AI-generated content that are not protected by copyright?

Expanding the Rule of Doubt

Alternatively, should the U.S. Copyright Office adopt something like a Rule of Doubt when copyright is claimed in AI-generated content? Under the Rule of Doubt, in its current form, the U.S. Copyright Office will accept a copyright registration of a claim containing software object code, even though the Copyright Office is unable to verify whether the object code contains copyrightable work. In effect, if the applicant attests that the code is copyrightable, then the Copyright Office will assume that it is and will register the claim. Under 37 C.F.R. § 202.20(c)(2)(vii)(B), this may be done when an applicant seeks to register a copyright in object code rather than source code. The same is true of material that is redacted to protect a trade secret.

When the Office issues a registration under the Rule of Doubt, it adds an annotation to the certificate and to the public record indicating that the copyright was registered under the Rule of Doubt.

Under the existing rule, the applicant must file a declaration stating that material for which registration is sought does, in fact, contain original authorship.

This approach allows registration but leaves it to courts (not the Copyright Office) to decide on a case-by-case basis whether material for which copyright is claimed contains copyrightable authorship.  

Expanding the Rule of Doubt to apply to material generated at least in part by AI might not be the most satisfying solution for AI users, but it is one that could result in fewer snags and delays in the registration process.

Conclusion

The Copyright Office has said that it soon will be developing registration guidance for works created in part using material generated by artificial intelligence technology. Public notices and events relating to this topic may be expected in the coming months.


Need help with a copyright matter? Contact attorney Thomas James.

A Thousand Cuts: AI and Self-Destruction

David Newhoff comments on generative AI (artificial intelligence) and public policy.

A guest post written by David Newhoff. AI, of course, stands for “artificial intelligence.” David is the author of Who Invented Oscar Wilde? The Photograph at the Center of Modern American Copyright (Potomac Books 2020) and a copyright advocate/writer at The Illusion of More.


I woke up the other day thinking about artificial intelligence (AI) in the context of the Cold War and the nuclear arms race, and curiously enough, the next two articles I read about AI made arms race references. Where my pre-caffeinated mind had gone was back to the early 1980s when, as teenagers, we often asked that futile question as to why any nation needed to stockpile nuclear weapons in quantities that could destroy the world many times over.

Every generation of adolescents believes—and at times confirms—that the adults have no idea what the hell they’re doing; and watching the MADness of what often seemed like a rapturous embrace of nuclear annihilation was, perhaps, the unifying existential threat which shaped our generation’s world view. Since then, reasonable arguments have been made that nuclear stalemate has yielded an unprecedented period of relative global peace, but the underlying question remains:  Are we powerless to stop the development of new modes of self-destruction?

Of course, push-button extinction is easy to imagine and, in a way, easy to ignore. If something were to go terribly wrong, and the missiles fly, it’s game over in a matter of minutes with no timeouts left. So, it is possible to “stop worrying” if not quite “love the bomb” (h/t Strangelove); but today’s technological threats preface outcomes that are less merciful than swift obliteration. Instead, they offer a slow and seemingly inexorable decline toward the dystopias of science fiction—a future in which we are not wiped out in a flash but instead “amused to death” (h/t Postman) as we relinquish humanity itself to the exigencies of technologies that serve little or no purpose.

The first essay I read about AI, written by Anja Kaspersen and Wendell Wallach for the Carnegie Council, advocates a “reset” in ethical thinking about AI, arguing that giant technology investments are once again building systems with little consideration for their potential effect on people. “In the current AI discourse we perceive a widespread failure to appreciate why it is so important to champion human dignity. There is risk of creating a world in which meaning and value are stripped from human life,” the authors write. Later, they quote Robert Oppenheimer …

It is not possible to be a scientist unless you believe that the knowledge of the world, and the power which this gives, is a thing which is of intrinsic value to humanity, and that you are using it to help in the spread of knowledge, and are willing to take the consequences.

I have argued repeatedly that generative AI “art” is devoid of meaning and value and that the question posed by these technologies is not merely how they might influence copyright law, but whether they should exist at all. It may seem farfetched to contemplate banning or regulating the development of AI tech, but it should not be viewed as an outlandish proposal. If certain AI developments have the capacity to dramatically alter human existence—perhaps even erode what it means to be human—why is this any less a subject of public policy than regulating a nuclear power plant or food safety?

Of course, public policy means legislators, and it is quixotic to believe that any Congress, let alone the current one, could sensibly address AI before the industry causes havoc. At best, the tech would flood the market long before the most sincere, bipartisan efforts of lawmakers could grasp the issues; and at worst, far too many politicians have shown that they would sooner exploit these technologies for their own gain than they would seek to regulate it in the public interest. “AI applications are increasingly being developed to track and manipulate humans, whether for commercial, political, or military purposes, by all means available—including deception,” write Kaspersen and Wallach. I think it’s fair to read that as Cambridge Analytica 2.0 and to recognize that the parties who used the Beta version are still around—and many have offices on Capitol Hill.

Kaspersen and Wallach predict that we may soon discover that generative AI will have the same effect on education that “social media has had on truth.” In response, I would ask the following: In the seven years since the destructive power of social media became headline news, have those revelations significantly changed the conversation, let alone muted the cyber-libertarian dogma of the platform owners? I suspect that AI in the classroom threatens to exacerbate rather than parallel the damage done by social media to truth (i.e., reason). If social media has dulled Socratic skills with the flavors of narcissism, ChatGPT promises a future that does not remember what Socratic skills used to mean.

And that brings me to the next article I read in which Chris Gillard and Pete Rorabaugh, writing for Slate, use “arms race” as a metaphor to criticize technological responses to the prospect of students cheating with AI systems like ChatGPT. Their article begins:

In the classroom of the future—if there still are any—it’s easy to imagine the endpoint of an arms race: an artificial intelligence that generates the day’s lessons and prompts, a student-deployed A.I. that will surreptitiously do the assignment, and finally, a third-party A.I. that will determine if any of the pupils actually did the work with their own fingers and brain. Loop complete; no humans needed. If you were to take all the hype about ChatGPT at face value, this might feel inevitable. It’s not.

In what I feared might be another tech-apologist piece labeling concern about AI a “moral panic,” Gillard and Rorabaugh make the opposite point. Their criticism of software solutions to mitigate student cheating is that it is small thinking which erroneously accepts as a fait accompli that these AI systems are here to stay whether we like it or not. “Telling us that resistance to a particular technology is futile is a favorite talking point for technologists who release systems with few if any guardrails out into the world and then put the onus on society to address most of the problems that arise,” they write.

In other words, here we go again. The ethical, and perhaps legal, challenges posed by AI are an extension of the same conversation we generally failed to have about social media and its cheery promises to be an engine of democracy. “It’s a failure of imagination to think that we must learn to live with an A.I. writing tool just because it was built,” Gillard and Rorabaugh argue. I would like to agree but am skeptical that the imagination required to reject certain technologies exists outside the rooms where ethicists gather. And this is why I wake up thinking about AI in the context of the Cold War, except of course that the doctrine of Mutually Assured Destruction was rational by contrast.



View the original article on The Illusion of More.

Contact attorney Tom James for copyright help

Need help registering a copyright or a group of copyrights in the United States, or enforcing a copyright in the United States? Contact attorney Tom James.

AI Legal Issues

Thomas James (“The Cokato Copyright Attorney”) describes the range of legal issues, most of which have not yet been resolved, that artificial intelligence (AI) systems have spawned.

AI is not new. Its implementation also is not new. In fact, consumers interact with AI-powered systems every day. Online help systems often use AI to provide quick answers to questions that customers routinely ask. Sometimes these are designed to give a user the impression that s/he is communicating with a person.

AI systems also perform discrete functions such as analyzing a credit report and rendering a decision on a loan or credit card application, or screening employment applications.

Many other uses have been found for AI and new ones are being developed all the time. AI has been trained not just to perform customer service tasks, but also to perform analytics and diagnostic tests; to repair products; to update software; to drive cars; and even to write articles and create images and videos. These developments may be helping to streamline tasks and improve productivity, but they have also generated a range of new legal issues.

Tort liability

While there are many different kinds of tort claims, the elements of tort claims are basically the same: (1) The person sought to be held liable for damages or ordered to comply with a court order must have owed a duty to the person who is seeking the legal remedy; (2) the person breached that duty; (3) the person seeking the legal remedy experienced harm, i.e., real or threatened injury; and (4) the breach was the actual and proximate cause of the harm.

The kind of harm that must be demonstrated varies depending on the kind of tort claim. For example, a claim of negligent driving might involve bodily injury, while a claim of defamation might involve injury to reputation. For some kinds of tort claims, the harm might involve financial or economic injury. 

The duty may be specified in a statute or contract, or it might be judge-made (“common law.”) It may take the form of an affirmative obligation (such as a doctor’s obligation to provide a requisite level of care to a patient), or it may take a negative form, such as the common law duty to refrain from assaulting another person.

The advent of AI does not really require any change in these basic principles, but they can be more difficult to apply to scenarios that involve the use of an AI system.

Example. Acme Co. manufactures and markets Auto-Doc, a machine that diagnoses and repairs car problems. Mike’s Repair Shop lays off its automotive technician employees and replaces them with one of these machines. Suzie Consumer brings her VW Jetta to Mike’s Repair Shop for service because she has been hearing a sound that she describes as a grinding noise that she thinks is coming from either the engine or the glove compartment. The Auto-Doc machine adds engine oil, replaces belts, and removes the contents of the glove compartment. Later that day, Suzie’s brakes fail and her vehicle hits and kills a pedestrian in a crosswalk. A forensic investigation reveals that her brakes failed because they were badly worn. Who should be held liable for the pedestrian’s death – Suzie, Mike’s, Acme Co., some combination of two of them, all of them, or none of them?

The allocation of responsibility will depend, in part, on the degree of autonomy the AI machine possesses. Of course, if it can be shown that Suzie knew or should have known that her brakes were bad, then she most likely could be held responsible for causing the pedestrian’s death. But what about the others? Their liability, or share of liability, is affected by the degree of autonomy the AI machine possesses. If it is completely autonomous, then Acme might be held responsible for failing to program the machine in such a way that it would test for and detect worn brake pads even if a customer expresses an erroneous belief that the sound is coming from the engine or the glove compartment. On the other hand, if the machine is designed only to offer suggestions of possible problems and solutions,  leaving it up to a mechanic to accept or reject them, then Mike’s might be held responsible for negligently accepting the machine’s recommendations. 

Assuming the Auto-Doc machine is fully autonomous, should Mike’s be faulted for relying on it to correctly diagnose car problems? Is Mike’s entitled to rely on Acme’s representations about Auto-Doc’s capabilities, or would the repair shop have a duty to inquire about and/or investigate Auto-Doc’s limitations? Assuming Suzie did not know, and had no reason to suspect, her brakes were worn out, should she be faulted for relying on a fully autonomous machine instead of taking the car to a trained human mechanic?  Why or why not?

Criminal liability

It is conceivable that an AI system might engage in activity that is prohibited by an applicable jurisdiction’s criminal laws. E-mail address harvesting is an example. In the United States, for example, the CAN-SPAM Act makes it a crime to send a commercial email message to an email address that was  obtained  by automated scraping of Internet websites for email addresses. Of course, if a person intentionally uses an AI system for scraping, then liability should be clear. But what if an AI system “learns” to engage in scraping?

AI-generated criminal output may also be a problem. Some countries have made it a crime to display a Nazi symbol, such as a swastika, on a website. Will criminal liability attach if a website or blog owner uses AI to generate illustrated articles about World War II and the system generates and displays articles that are illustrated with World War II era German flags and military uniforms? In the United States, creating or possessing child pornography is illegal. Will criminal liability attach if an AI system generates it?

Some of these kinds of issues can be resolved through traditional legal analysis of the intent and scienter elements of the definitions of crimes. A jurisdiction might wish to consider, however, whether AI systems should be regulated to require system creators to implement measures that would prevent illegal uses of the technology. This raises policy and feasibility questions, such as whether and what kinds of restraints on machine learning should be required, and how to enforce them. Further, would prior restraints on the design and/or use of AI-powered expressive-content-generating systems infringe on First Amendment rights?  

Product liability

Related to the problem of allocating responsibility for harm caused by the use of an AI mechanism is the question whether anyone should be held liable for harm caused when the mechanism is not defective, that is to say, when it is operating as it should.

Example. Acme Co. manufactures and sells Auto-Article, a software program that is designed to create content of a type and kind the user specifies. The purpose of the product is to enable a website owner to generate and publish a large volume of content frequently, thereby improving the website’s search engine ranking. It operates by scouring the Internet and analyzing instances of the content the user specifies to produce new content that “looks like” them. XYZ Co. uses the software to generate articles on medical topics. One of these articles explains that chest pain can be caused by esophageal spasms but that these typically do not require treatment unless they occur frequently enough to interfere with a person’s ability to eat or drink. Joe is experiencing chest pain. He does not seek medical help, however, because he read the article and therefore believes he is experiencing esophageal spasms. He later collapses and dies from a heart attack. A medical doctor is prepared to testify that his death could have been prevented if he had sought medical attention when he began experiencing the pain.

Should either Acme or XYZ Co. be held liable for Joe’s death? Acme could argue that its product was not defective. It was fit for its intended purposes, namely, a machine learning system that generates articles that look like articles of the kind a user specifies. What about XYZ Co.? Would the answer be different if XYZ had published a notice on its site that the information provided in its articles is not necessarily complete and that the articles are not a substitute for advice from a qualified medical professional? If XYZ incurs liability as a result of the publication, would it have a claim against Acme, such as for failure to warn it of the risks of using AI to generate articles on medical topics?

Consumer protection

AI system deployment raises significant health and safety concerns. There is the obvious example of an AI system making incorrect medical diagnoses or treatment recommendations. Autonomous (“self-driving”) motor vehicles are also examples. An extensive body of consumer protection regulations may be anticipated.

Forensic and evidentiary issues

In situations involving the use of semi-autonomous AI, allocating responsibility for harm resulting from the operation of the AI system may be difficult. The most basic question in this respect is whether an AI system was in use or not. For example, if a motor vehicle that can be operated in either manual or autonomous mode is involved in an accident, and fault or the extent of liability depends on that (see the discussion of tort liability, above), then a way of determining the mode in which the car was being driven at the time will be needed.

If, in the case of a semi-autonomous AI system, tort liability must be allocated between the creator of the system and a user of it, the question of fault may depend on who actually caused a particular tortious operation to be executed – the system creator or the user. In that event, some method of retracing the steps the AI system used may be essential. This may also be necessary in situations where some factor other than AI contributed, or might have contributed, to the injury. Regulation may be needed to ensure that the steps in an AI system’s operations are, in fact, capable of being ascertained.

Transparency problems also fall into this category. As explained in the Journal of Responsible Technology, people might be put on no-fly lists, denied jobs or benefits, or refused credit without knowing anything more than that the decision was made through some sort of automated process. Even if transparency is achieved and/or mandated, contestability will also be an issue.

Data Privacy

To the extent an AI system collects and stores personal or private information, there is a risk that someone may gain unauthorized access to it. Depending on how the system is designed to function, there is also a risk that it might autonomously disclose legally protected personal or private information. Security breaches can cause catastrophic problems for data subjects.

Publicity rights

Many jurisdictions recognize a cause of action for violation of a person’s publicity rights (sometimes called “misappropriation of personality.”) In these jurisdictions, a person has an exclusive legal right to commercially exploit his or her own name, likeness or voice. To what extent, and under what circumstances, should liability attach if a commercialized AI system analyzes the name, likeness or voice of a person that it discovers on the Internet? Will the answer depend on how much information about a particular individual’s voice, name or likeness the system uses, on one hand, or how closely the generated output resembles that individual’s voice, name or likeness, on the other?

Contracts

The primary AI-related contract concern is about drafting agreements that adequately and effectively allocate liability for losses resulting from the use of AI technology. Insurance can be expected to play a larger role as the use of AI spreads into more areas.

Bias, Discrimination, Diversity & Inclusion

Some legislators have expressed concern that AI systems will reflect and perpetuate biases and perhaps discriminatory patterns of culture. To what extent should AI system developers be required to ensure that the data their systems use are collected from a diverse mixture of races, ethnicities, genders, gender identities, sexual orientations, abilities and disabilities, socioeconomic classes, and so on? Should developers be required to apply some sort of principle of “equity” with respect to these classifications, and if so, whose vision of equity should they be required to enforce? To what extent should government be involved in making these decisions for system developers and users?

Copyright

AI-generated works like articles, drawings, animations, music and so on, raise two kinds of copyright issues:

  1. Input issues, i.e., questions like whether AI systems that create new works based on existing copyright-protected works infringe the copyrights in those works.
  2. Output issues, such as who, if anybody, owns the copyright in an AI-generated work.

I’ve written about AI copyright ownership issues and AI copyright infringement issues in previous blog posts on The Cokato Copyright Attorney.

Patents and other IP

Computer programs can be patented. AI systems can be devised to write computer programs. Can an AI-generated computer program that meets the usual criteria for patentability (novelty, utility, etc.) be patented?

Is existing intellectual property law adequate to deal with AI-generated inventions and creative works? The World Intellectual Property Organization (WIPO) apparently does not think so. It is formulating recommendations for new regulations to deal with the intellectual property aspects of AI.

Conclusion

AI systems raise a wide range of legal issues. The ones identified in this article are merely a sampling, not a complete listing of all possible issues. Not all of these legal issues have answers yet. It can be expected that more AI regulatory measures, in more jurisdictions around the globe, will be coming down the pike very soon.

Contact attorney Thomas James

Contact Minnesota attorney Thomas James for help with copyright and trademark registration and other copyright and trademark related matters.
