Last Exit From Paradise

Copyright law “has never stretched so far, however, as to protect works generated by new forms of technology operating absent any guiding human hand, as plaintiff urges here. Human authorship is a bedrock requirement of copyright.”

The United States Supreme Court has put an end to Stephen Thaler’s crusade for machine rights. Okay, that’s the sensational news-article way of putting it. He wasn’t really crusading for machine rights. He was trying to establish a precedent for claiming copyright in AI-generated works.

I first wrote about this back in May 2022 (“AI Can Create, But Is It Art?”). At that time, the U.S. Copyright Office had denied registration of “A Recent Entrance to Paradise.” This was an image that was generated by Thaler’s AI tool, the Creativity Machine. Thaler had sought to register it as a work for hire made by the machine. The Copyright Office denied registration because it lacked human authorship.

The decision was consistent with appellate court decisions suggesting that stories allegedly written by “non-human spiritual beings” are not protected by copyright, although a human selection or arrangement of them might be. Urantia Foundation v. Kristen Maaherra, 114 F.3d 955 (9th Cir. 1997).  Neither are works created by non-human animals, such as a monkey selfie.

Thaler sought review by the federal district court. Judge Howell affirmed the Copyright Office’s decision, writing that copyright law “has never stretched so far, however, as to protect works generated by new forms of technology operating absent any guiding human hand, as plaintiff urges here. Human authorship is a bedrock requirement of copyright.”

The Court of Appeals affirmed the refusal of registration. Thaler petitioned for review by the United States Supreme Court. On March 2, 2026, the Court denied review, without comment.

An argument that Thaler advanced in the petition for certiorari was basically that because images output by a camera are protected by copyright (see Burrow-Giles Lithographic Co. v. Sarony), images generated by a computer should be, too.

The Copyright Office has since published guidance explaining that using AI as a tool in the creative process does not categorically rule out copyright protection. Rather, assessments must be made on a case-by-case basis about the nature and extent of human creativity that was contributed.

The narrowest interpretation of the Supreme Court’s denial of certiorari is that it did not see a need to disturb the ruling that a machine cannot be an “author,” for purposes of copyright law. The facts of the case did not present an opportunity to opine on whether, and under what circumstances, a human can claim to be an author of an AI-assisted creation.

Trademark News

Buc-ee’s, a popular chain of gas-and-convenience stores in the South, has filed a trademark infringement lawsuit against Mickey’s gas stations. According to the complaint:

Consumers are likely to perceive a connection or association as to the source, sponsorship, or affiliation of the parties’ products and services, when in fact none exists, given the similarity of the parties’ logos, trade channels, and consumer bases.

Here are the two logos, side by side for comparison:

[Image: Buc-ee’s and Mickey’s logos]

Trademark infringement occurs when one company’s logo or other mark is used in commerce in a way that is likely to confuse consumers about the source of a product or service. What do you think, folks? Might a weary traveler mistake a moose for a beaver?

Clean responses only, please.

Court of Appeals Affirms Registration Refusal for AI-Generated Output


In 2019, Stephen Thaler developed an AI system he called The Creativity Machine. He generated output he called A Recent Entrance to Paradise. When he applied to register a copyright claim in the output, he listed the machine as the author. He claimed ownership of the work as a work made for hire. In his application, he asserted that the work was autonomously created by a machine. The Copyright Office denied the claim on the basis that human authorship is a required element of a copyright claim.

On appeal, the United States district court affirmed the Copyright Office’s decision. Thaler attempted to argue, for the first time, that the work was copyrightable because he provided instructions and directed the machine’s creation of it. The district court found that he had waived that argument.

The Court of Appeals Affirms

Thaler sought review in the Court of Appeals for the District of Columbia Circuit. On March 18, 2025, the Court of Appeals affirmed. The Court cited language in the Copyright Act that suggested Congress intended only human beings to be authors. The Court did not reach the question whether the Copyright Clause of the U.S. Constitution might protect machine-generated works if Congress should choose someday to extend copyright protection to these kinds of materials.

The Court held that the question whether Thaler could claim authorship on the basis that he made and directed the operation of the Creativity Machine had not been preserved for appeal.


Copyrights in AI-Generated Content

Copyright registrations are being issued for works created with generative-AI tools, subject to some important qualifications. Also, the Internet Archive revisited (briefly).

The U.S. Copyright Office has issued its long-awaited report on the copyrightability of works created using AI-generated output. The legality of using copyrighted works to train generative-AI systems is a topic for another day.

Key takeaways:

  • Copyright protects the elements of a work that are created by a human, but does not protect elements that were AI-generated (probably the key takeaway from the Report). This is the “human authorship” requirement that the Copyright Office invoked in denying registration of Stephen Thaler’s AI-generated output. I wrote about that a couple of years ago in “AI Can Create, But Is It Art?”
  • The Copyright Office believes existing law is adequate to deal with AI copyright issues; it does not believe any new legislation is needed
  • Using AI to assist in the creative process does not affect copyrightability
  • Prompts alone do not give the user sufficient control over the output to make the user its author.
  • Protection exists for the following, if they involve sufficient human creativity:
    • Selection, coordination, and arrangement of AI-generated output
    • Modification of AI-generated content
    • Human-created elements distinguishable from AI-generated elements

Prompts

A key question for the Copyright Office was whether a highly detailed prompt could suffice as human creative expression. The Office says no: “[P]rompts alone do not provide sufficient human control to make users of an AI system the authors of the output. Prompts essentially function as instructions that convey unprotectable ideas. While highly detailed prompts could contain the user’s desired expressive elements, at present they do not control how the AI system processes them in generating the output.”

How much control does a human need over the output-generation process to be considered an author? The answer, apparently, is: so much control that the AI mechanism’s contribution was purely rote or mechanical. “The fact that identical prompts can generate multiple different outputs further indicates a lack of human control.”

Expressive prompts

If the prompt itself is sufficiently creative and original, the expression contained in the prompt may qualify for copyright protection. For example, if a user prompts an AI tool to change a story from first-person to third-person point of view, and includes the first-person version in the prompt, then copyright may be claimed in the story that was included in the prompt. The author could claim copyright in the story as a “human-generated element” distinguishable from anything AI thereafter did to it. The human-created work must be perceptible in the output.

Registration of hybrid works

The U.S. Copyright Office has now issued several registrations for works that contain a combination of both human creative expression and AI-generated output. Examples:

Irontic, LLC has a registered copyright in Senzia Opera, a sound recording with “music and singing voices by [sic] generated by artificial intelligence,” according to the copyright registration. That material is excluded from the claim. The registration, however, does provide protection for the story, lyrics, spoken words, and the selection, coordination, and arrangement of the sound recording.

Computer programs can be protected by copyright, but if any source code was generated by AI, it must be excluded from the claim. Thus, the Adobe GenStudio for Performance Marketing computer program is protected by copyright, but any source code in it that was AI-generated is not.

A record company received a copyright registration for human additions and modifications to AI-generated art.

As an example of a “selection, coordination and arrangement” copyright, there is the registration of a work called “A Collection of Objects Which Do Not Exist,” consisting of a collage of AI-generated images. “A Single Piece of American Cheese,” is another example of a registered copyright claim based on the selection, coordination, or arrangement of AI-generated elements.

China

A Chinese court has taken a contrary position, holding that an AI-generated image produced with Stable Diffusion was copyrightable because the prompts the plaintiff chose reflected his aesthetic choices.

Internet Archive Postscript

In January, the Second Circuit Court of Appeals affirmed the decision in Hachette Book Group, Inc. v. Internet Archive. This came as no surprise. A couple of important things that bear repeating came out of this decision, though.

First, the Court of Appeals reaffirmed that fair use is an affirmative defense. As such, the defendant bears the burden of proof on each of the four factors comprising the defense, including the level of market harm the use has caused or may cause. While a copyright owner may reasonably be required to identify relevant markets, the owner is not required to present empirical data to support a claim of market harm.

Confusion seems to have crept into some attorneys’ and judges’ analysis of the issue. This is probably because it is well known that the plaintiff bears the burden of proof of damages, which can also involve evidence of market harm. The question of damages, however, is separate and distinct from the “market harm” element of a fair use defense.

The second important point the Second Circuit made in Hachette is that the “public benefit” balancing that Justice Breyer performed in Google LLC v. Oracle America, Inc. needs to focus on something more than just the short-term benefits to the public in getting free access to infringing copies of works. Otherwise, the “public benefit” in getting free copies of copyright-protected stuff would outweigh the rights of copyright owners every time. The long-term benefits of protecting the rights of authors must also be considered.

True, libraries and consumers may reap some short-term benefits from access to free digital books, but what are the long-term consequences? [Those consequences, i.e.,] depriv[ing] publishers and authors of the revenues due to them as compensation for their unique creations [outweigh any public benefit in having free access to copyrighted works.]

Id.

They reined in Google v. Oracle.

Thomas James is a human. No part of this article was AI-generated.


A Recent Exit from Paradise

In his application for registration, Thaler had listed his computer, referred to as “Creativity Machine,” as the “author” of the work, and himself as a claimant. The Copyright Office denied registration on the basis that copyright only protects human authorship.

Over a year ago, Stephen Thaler filed an application with the United States Copyright Office to register a copyright in an AI-generated image called “A Recent Entrance to Paradise.” In the application, he listed a machine as the “author” and himself as the copyright owner. The Copyright Office refused registration on the grounds that the work lacked human authorship. Thaler then filed a lawsuit in federal court seeking to overturn that determination. On August 18, 2023, the court upheld the Copyright Office’s refusal of registration. The case is Thaler v. Perlmutter, No. CV 22-1564 (BAH), 2023 WL 5333236 (D.D.C. Aug. 18, 2023).

Read more about the history of this case in my previous blog post, “A Recent Entrance to Complexity.”

The Big Bright Green Creativity Machine

In his application for registration, Thaler had listed his computer, referred to as “Creativity Machine,” as the “author” of the work, and himself as a claimant. The Copyright Office denied registration on the basis that copyright only protects human authorship.

Taking the Copyright Office to court

Unsuccessful in securing a reversal through administrative appeals, Thaler filed a lawsuit in federal court claiming the Office’s denial of registration was “arbitrary, capricious, an abuse of discretion and not in accordance with the law….”

The court ultimately sided with the Copyright Office. In its decision, it provided a cogent explanation of the rationale for the human authorship requirement:

The act of human creation—and how to best encourage human individuals to engage in that creation, and thereby promote science and the useful arts—was thus central to American copyright from its very inception. Non-human actors need no incentivization with the promise of exclusive rights under United States law, and copyright was therefore not designed to reach them.

Id.

A Complex Issue

As I discussed in a previous blog post, the issue is not as simple as it might seem. There are different levels of human involvement in the use of an AI content generating mechanism. At one extreme, there are programs like “Paint,” in which users provide a great deal of input. These kinds of programs may be analogized to paintbrushes, pens and other tools that artists traditionally have used to express their ideas on paper or canvas. Word processing programs are also in this category. It is easy to conclude that the users of these kinds of programs are the authors of works that may be sufficiently creative and original to receive copyright protection.

At the other end of the spectrum are AI services like DALL-E and ChatGPT. These tools are capable of generating content with very little user input. If the only human input is a user’s directive to “Draw a picture,” then it would be difficult to claim that the author contributed any creative expression. That is to say, it would be difficult to claim that the user authored anything.

The difficult question – and one that is almost certain to be the subject of ongoing litigation and probably new Copyright Office regulations – is exactly how much, and what kind of, human input is necessary before a human can claim authorship.  Equally as perplexing is how much, if at all, the Copyright Office should involve itself in ascertaining and evaluating the details of the process by which a work was created. And, of course, what consequences should flow from an applicant’s failure to disclose complete details about the nature and extent of machine involvement in the creative process.

Conclusion

The court in this case did not dive into these issues. The only thing we can safely take away from this decision is the broad proposition that a work is not protected by copyright to the extent it is generated by a machine.

Let’s Stop Analogizing Human Creators to Machines

Of course, policy discussions usually begin with the existing framework, but in this instance, it can be a shaky starting place because generative AI presents some unique challenges—and not just for the practice of copyright law.

[Guest post by David Newhoff, author of The Illusion of More and Who Invented Oscar Wilde? The Photograph at the Center of Modern American Copyright.]

Just as it is folly to anthropomorphize computers and robots, it is also unhelpful to discuss the implications of generative AI in copyright law by analogizing machines to authors.[1] In 2019, I explored the idea that “machine learning” could be analogous to human reading if the human happens to have an eidetic memory. But this was a thought exercise, and in that post, I also imagined machine training that serves a computer science or research purpose—not necessarily generative AIs trained on protected works designed to produce works without authors.

In the present discussion, however, certain parties weighing in on AI and copyright seem to advocate policy that is premised on the language and principles of existing doctrine as applicable to the technological processes of both the input and output sides of the generative AI equation. Of course, policy discussions usually begin with the existing framework, but in this instance, it can be a shaky starting place because generative AI presents some unique challenges—and not just for the practice of copyright law.

We should be wary of analogizing machine functions to human activity for the simple reason that copyright law (indeed all law) has never been anything but anthropocentric. Although it is difficult to avoid speaking in terms of machines “learning” or “creating,” it is essential that we either constantly remind ourselves that these are weak, inaccurate metaphors, or that a new glossary is needed to describe what certain AIs may be doing in the world of creative production.

On the input (training) side of the equation, the moment someone says something like, “Humans learn to make art by looking at art, and generative AIs do the same thing,” the speaker should be directed to the break-out session on sci-fi and excused from any serious conversation about applicable copyright law. Likewise, on the output side, comparisons of AI to other technological developments—from the printing press to Photoshop—should be presumed irrelevant unless the AI at issue can plausibly be described as a tool of the author rather than the primary maker of a work of creative expression.

Copyright Office Guidance Highlights Some Key Difficulties

To emphasize the exceptional nature of this discussion, even experts are somewhat confused by both the doctrinal and administrative aspects of the new guidelines published by the U.S. Copyright Office directing authors how to disclaim AI-generated material in a registration application. The confusion is hardly surprising because generative AI has prompted the Office to ask an unprecedented question—namely, How was this work made?

As noted in several posts, copyrightability has always been agnostic with regard to the creative process. Copyright rights attach to works that show a modicum of originality, and the Copyright Office does not generally ask what tools, methods, etc. the author used to make a work.[2] But this historic practice was then confronted by the now widely reported applications submitted by Stephen Thaler and Kris Kashtanova, both claiming copyright in visual works made with generative AI.

In both cases, the Copyright Office rejected registration applications for the visual works based on the longstanding, bright-line doctrine that copyright rights can only attach to works made by human beings. In Thaler’s case, the consideration is straightforward because the claimant affirmed that the image was produced entirely by a machine. Kashtanova, on the other hand, asserts more than de minimis authorship (i.e., using AI as a tool) to produce the visual elements in a comic book.

Whether in response to Kashtanova—or certainly anticipating applications yet to come—the muddy Office guidelines attempt to address the difficult question whether copyright attaches to a work that combines authorship and AI generation, and how to draw distinctions between the two. This is not only new territory for the Office as a doctrinal matter but is a potential mess as an administrative one.

The Copyright Office has never been tasked with separating the protectable expression attributable to a human from the unprotectable expression attributable to a machine. Even if it could be said that photography has always provoked this tension (a discussion on its own), the analysis has never been an issue for the Office when registering works, but only for the courts in resolving claims of infringement. In fact, Warhol v. Goldsmith, although a fair use case, is a prime example of how tricky it can be to separate the factual elements of a photograph from the expressive elements.

But now the Copyright Office is potentially tasked with a copyrightability question that, in practice, would ask both the author and the examiner to engage in a version of the idea/expression dichotomy analysis—first separating the machine generated material from the author’s material and then considering whether the author has a valid claim in the protectable expression.

This is not so easy to accomplish in a work that combines author and machine-made elements in a manner that may be subtly intertwined; it raises new questions about what the AI “contributed” to a given work; and the inquiry is further complicated by the variety of AI tools in the market or in development. Then, because neither the author/claimant nor the Office examiner is likely a copyright attorney (let alone a court), the inquiry is fraught with difficulty as an administrative process—and that’s if the author makes a good-faith effort to disclaim the AI-generated material in the first place.

Many independent authors are confused enough by the Limit of Claim in a registration application or the concept of “published” versus “unpublished.” Asking these same creators to delve into the metaphysics implied by the AI/Author distinction seems like a dubious enterprise, and one that is not likely to foster more faith in the copyright system than the average indie creator has right now.

Copyrightability Could Remain Blind But …

It is understandable that some creators (e.g., filmmakers using certain plug-ins) may be concerned that the Copyright Office has already taken too broad a view—connoting a per se rule that denies copyrightability for any work generated with any AI technology. This concern is a reminder that AI should not be discussed as a monolithic topic because not all AI enhanced products do the same thing. And again, this may imply a need for some new terms rather than the words we use to describe human activities.

In this light, one could follow a different line of reasoning and argue that the agnosticism of copyrightability vis-à-vis process has always implied a presumption of human authorship where other factors—from technological enhancements to dumb luck—invisibly contribute to the protectable expression. Relatedly, a photographer can add a filter or plug-in that changes the expressive qualities of her image, but doing so is considered part of the selection and arrangement aspect of her authorship and does not dilute the copyrightability of the image.

Some extraordinary visual work has already been produced by professional artists using AI to yield results that are too strikingly well-crafted to believe that the author has not exerted considerable influence over the final image. In this regard, then, perhaps the copyrightability question at the registration stage, no matter how sophisticated the “filter” becomes, should remain blind to process. The Copyright Office could continue to register works submitted by valid claimants without asking the novel How question.

But the more that works may be generated with little or no human spark, the more this agnostic, status-quo approach could unravel the foundation of copyright rights altogether. And it would not be the first time that major tech companies have sought to do exactly that. It is no surprise that an AI developer or a producer using AI would seek the financial benefits of copyright protection; but without a defensible presence of human expression in the work, the exclusive rights of copyright cannot vest in a person with the standing to defend those rights. Nowhere in U.S. law do non-humans have rights of any kind, and this foundational principle reminds us that although machine activity can be compared to human activity as an allegorical construct, this is too whimsical for a serious policy discussion.

Again, I highlight this tangle of administrative and doctrinal factors to emphasize the point that generative AI does not merely present new variations on old questions (e.g., photography), but raises novel questions that cannot easily be answered by analogies to the past. If the challenges presented by generative AI are to be resolved sensibly, and in a way that will serve independent creators, policymakers and thought leaders on copyright law should be skeptical of arguments that too earnestly attempt to transpose centuries of doctrine for human activity into principles applied to machine activity.


[1] I do not distinguish “human” authors, because there is no other kind.

[2] I say “generally” only because I cannot account for every conversation among claimants and examiners.

A Recent Entrance to Complexity

The United States Copyright Office recently reaffirmed its position that it will not register AI-generated content, because it is not created by a human. The rule is easy to state; the devil is in the details. Attorney Thomas James explains.

Last year, the United States Copyright Office issued a copyright registration to Kristina Kashtanova for the graphic novel, Zarya of the Dawn. A month later, the Copyright Office issued a notice of cancellation of the registration, along with a request for additional information.

The Copyright Office, consistent with judicial decisions, takes the position that copyright requires human authorship. The Office requested additional information regarding the creative process that resulted in the novel because parts of it were AI-generated. Kashtanova complied with the request for additional information.

This week, the Copyright Office responded with a letter explaining that the registration would be cancelled, but that a new, more limited one will be issued. The Office explained that its concern related to the author’s use of Midjourney, an AI-powered image generating tool, to generate images used in the work:

Because Midjourney starts with randomly generated noise that evolves into a final image, there is no guarantee that a particular prompt will generate any particular visual output

U.S. Copyright Office letter

The Office concluded that the text the author wrote, as well as the author’s selection, coordination and arrangement of written and visual elements, are protected by copyright, and therefore may be registered. The images generated by Midjourney, however, would not be registered because they were “not the product of human authorship.” The new registration will cover only the text and editing components of the work, not the AI-generated images.

A Previous Entrance to Paradise

Early last year, the Copyright Office refused copyright registration for an AI-generated image. Stephen Thaler had filed an application to register a copyright in an AI-generated image called “A Recent Entrance to Paradise.” He listed himself as the copyright owner. The Copyright Office denied registration on the grounds that the work lacked human authorship. Thaler filed a lawsuit in federal court seeking to overturn that determination. The lawsuit is still pending. It is currently at the summary judgment stage.

The core issue

The core issue, of course, is whether a person who uses AI to generate content such as text or artwork can claim copyright protection in the content so generated. Put another way, can a user who deploys artificial intelligence to generate a seemingly expressive work (such as artwork or a novel) claim authorship?

This question is not as simple as it may seem. There can be different levels of human involvement in the use of an AI content generating mechanism. At one extreme, there are programs like “Paint,” in which users provide a great deal of input. These kinds of programs may be analogized to paintbrushes, pens and other tools that artists traditionally have used to express their ideas on paper or canvas. Word processing programs are also in this category. It is easy to conclude that the users of these kinds of programs are the authors of works that may be sufficiently creative and original to receive copyright protection.

At the other end of the spectrum are AI services like DALL-E and ChatGPT. Text and images can be generated by these systems with minimal human input. If the only human input is a user’s directive to “Write a story” or “Draw a picture,” then it would be difficult to claim that the author contributed any creative expression. That is to say, it would be difficult to claim that the user authored anything.

Peering into the worm can

The complicating consideration with content-generative AI mechanisms is that they have the potential to allow many different levels of user involvement in the generation of output. The more details a user adds to the instructions s/he gives to the machine, the more it begins to appear that the user is, in fact, contributing something creative to the project.

Is a prompt to “Write a story about a dog” a sufficiently creative contribution to the resulting output to qualify the user as an “author”? Maybe not. But what about, “Write a story about a dog who joins a traveling circus”? Or “Write a story about a dog named Pablo who joins a traveling circus”? Or “Write a story about a dog with a peculiar bark that begins, ‘Once upon a time, there was a dog named Pablo who joined a circus,’ and ends with Pablo deciding to return home”?

At what point along the spectrum of user-provided detail does copyright protectable authorship come into existence?

A question that is just as important to ask is: How much, if at all, should the Copyright Office involve itself with ascertaining the details of the creative process that were involved in a work?

In a similar vein, should copyright registration applicants be required to disclose whether their works contain AI-generated content? Should they be required to affirmatively disclaim rights in elements of AI-generated content that are not protected by copyright?

Expanding the Rule of Doubt

Alternatively, should the U.S. Copyright Office adopt something like a Rule of Doubt when copyright is claimed in AI-generated content? Under the Rule of Doubt in its current form, the U.S. Copyright Office will accept a copyright registration of a claim containing software object code, even though the Office is unable to verify whether the object code contains copyrightable work. In effect, if the applicant attests that the code is copyrightable, then the Copyright Office will assume that it is and will register the claim. Under 37 C.F.R. § 202.20(c)(2)(vii)(B), this may be done when an applicant seeks to register a copyright in object code rather than source code. The same is true of material that is redacted to protect a trade secret.

When the Office issues a registration under the Rule of Doubt, it adds an annotation to the certificate and to the public record indicating that the copyright was registered under the Rule of Doubt.

Under the existing rule, the applicant must file a declaration stating that material for which registration is sought does, in fact, contain original authorship.

This approach allows registration but leaves it to courts (not the Copyright Office) to decide on a case-by-case basis whether material for which copyright is claimed contains copyrightable authorship.  

Expanding the Rule of Doubt to apply to material generated at least in part by AI might not be the most satisfying solution for AI users, but it is one that could result in fewer snags and delays in the registration process.

Conclusion

The Copyright Office has said that it will soon be developing registration guidance for works created in part using material generated by artificial intelligence technology. Public notices and events relating to this topic may be expected in the coming months.


Need help with a copyright matter? Contact attorney Thomas James.

A Thousand Cuts: AI and Self-Destruction

David Newhoff comments on generative AI (artificial intelligence) and public policy.

A guest post written by David Newhoff. David is the author of Who Invented Oscar Wilde? The Photograph at the Center of Modern American Copyright (Potomac Books 2020) and a copyright advocate/writer at The Illusion of More.


I woke up the other day thinking about artificial intelligence (AI) in the context of the Cold War and the nuclear arms race, and curiously enough, the next two articles I read about AI made arms race references. Where my pre-caffeinated mind had gone was back to the early 1980s when, as teenagers, we often asked that futile question as to why any nation needed to stockpile nuclear weapons in quantities that could destroy the world many times over.

Every generation of adolescents believes—and at times confirms—that the adults have no idea what the hell they’re doing; and watching the MADness of what often seemed like a rapturous embrace of nuclear annihilation was, perhaps, the unifying existential threat which shaped our generation’s worldview. Since then, reasonable arguments have been made that nuclear stalemate has yielded an unprecedented period of relative global peace, but the underlying question remains: Are we powerless to stop the development of new modes of self-destruction?

Of course, push-button extinction is easy to imagine and, in a way, easy to ignore. If something were to go terribly wrong, and the missiles fly, it’s game over in a matter of minutes with no timeouts left. So, it is possible to “stop worrying” if not quite “love the bomb” (h/t Strangelove); but today’s technological threats preface outcomes that are less merciful than swift obliteration. Instead, they offer a slow and seemingly inexorable decline toward the dystopias of science fiction—a future in which we are not wiped out in a flash but instead “amused to death” (h/t Postman) as we relinquish humanity itself to the exigencies of technologies that serve little or no purpose.

The first essay I read about AI, written by Anja Kaspersen and Wendell Wallach for the Carnegie Council, advocates a “reset” in ethical thinking about AI, arguing that giant technology investments are once again building systems with little consideration for their potential effect on people. “In the current AI discourse we perceive a widespread failure to appreciate why it is so important to champion human dignity. There is risk of creating a world in which meaning and value are stripped from human life,” the authors write. Later, they quote Robert Oppenheimer …

It is not possible to be a scientist unless you believe that the knowledge of the world, and the power which this gives, is a thing which is of intrinsic value to humanity, and that you are using it to help in the spread of knowledge, and are willing to take the consequences.

I have argued repeatedly that generative AI “art” is devoid of meaning and value and that the question posed by these technologies is not merely how they might influence copyright law, but whether they should exist at all. It may seem farfetched to contemplate banning or regulating the development of AI tech, but it should not be viewed as an outlandish proposal. If certain AI developments have the capacity to dramatically alter human existence—perhaps even erode what it means to be human—why is this any less a subject of public policy than regulating a nuclear power plant or food safety?

Of course, public policy means legislators, and it is quixotic to believe that any Congress, let alone the current one, could sensibly address AI before the industry causes havoc. At best, the tech would flood the market long before the most sincere, bipartisan efforts of lawmakers could grasp the issues; and at worst, far too many politicians have shown that they would sooner exploit these technologies for their own gain than seek to regulate them in the public interest. “AI applications are increasingly being developed to track and manipulate humans, whether for commercial, political, or military purposes, by all means available—including deception,” write Kaspersen and Wallach. I think it’s fair to read that as Cambridge Analytica 2.0 and to recognize that the parties who used the Beta version are still around—and many have offices on Capitol Hill.

Kaspersen and Wallach predict that we may soon discover that generative AI will have the same effect on education that “social media has had on truth.” In response, I would ask the following: In the seven years since the destructive power of social media became headline news, have those revelations significantly changed the conversation, let alone muted the cyber-libertarian dogma of the platform owners? I suspect that AI in the classroom threatens to exacerbate rather than parallel the damage done by social media to truth (i.e., reason). If social media has dulled Socratic skills with the flavors of narcissism, ChatGPT promises a future that does not remember what Socratic skills used to mean.

And that brings me to the next article I read, in which Chris Gilliard and Pete Rorabaugh, writing for Slate, use “arms race” as a metaphor to criticize technological responses to the prospect of students cheating with AI systems like ChatGPT. Their article begins:

In the classroom of the future—if there still are any—it’s easy to imagine the endpoint of an arms race: an artificial intelligence that generates the day’s lessons and prompts, a student-deployed A.I. that will surreptitiously do the assignment, and finally, a third-party A.I. that will determine if any of the pupils actually did the work with their own fingers and brain. Loop complete; no humans needed. If you were to take all the hype about ChatGPT at face value, this might feel inevitable. It’s not.

In what I feared might be another tech-apologist piece labeling concern about AI a “moral panic,” Gilliard and Rorabaugh make the opposite point. Their criticism of software solutions to mitigate student cheating is that such solutions reflect small thinking, erroneously accepting as a fait accompli that these AI systems are here to stay whether we like it or not. “Telling us that resistance to a particular technology is futile is a favorite talking point for technologists who release systems with few if any guardrails out into the world and then put the onus on society to address most of the problems that arise,” they write.

In other words, here we go again. The ethical, and perhaps legal, challenges posed by AI are an extension of the same conversation we generally failed to have about social media and its cheery promises to be an engine of democracy. “It’s a failure of imagination to think that we must learn to live with an A.I. writing tool just because it was built,” Gilliard and Rorabaugh argue. I would like to agree but am skeptical that the imagination required to reject certain technologies exists outside the rooms where ethicists gather. And this is why I wake up thinking about AI in the context of the Cold War, except of course that the doctrine of Mutually Assured Destruction was rational by contrast.


Photo by the author.

View the original article on The Illusion of More.

