Last Exit From Paradise

Copyright law “has never stretched so far, however, as to protect works generated by new forms of technology operating absent any guiding human hand, as plaintiff urges here. Human authorship is a bedrock requirement of copyright.”

The United States Supreme Court has put an end to Stephen Thaler’s crusade for machine rights. Okay, that’s the sensational news article way of putting it.  He wasn’t really crusading for machine rights. He was trying to establish a precedent for claiming copyright in AI-generated works.

I first wrote about this back in May 2022 (“AI Can Create, But Is It Art?”). At that time, the U.S. Copyright Office had denied registration of “A Recent Entrance to Paradise,” an image generated by Thaler’s AI tool, the Creativity Machine. Thaler had sought to register it as a work made for hire, with the machine as author. The Copyright Office denied registration because the work lacked human authorship.

The decision was consistent with appellate court decisions holding that stories allegedly written by “non-human spiritual beings” are not protected by copyright, although a human selection or arrangement of them might be. Urantia Found. v. Maaherra, 114 F.3d 955 (9th Cir. 1997). Neither are works created by non-human animals, such as a monkey selfie.

Thaler sought review by the federal district court. Judge Howell affirmed the Copyright Office’s decision, writing that copyright law “has never stretched so far, however, as to protect works generated by new forms of technology operating absent any guiding human hand, as plaintiff urges here. Human authorship is a bedrock requirement of copyright.”

The Court of Appeals affirmed the refusal of registration. Thaler petitioned for review by the United States Supreme Court. On March 2, 2026, the Court denied review, without comment.

An argument that Thaler advanced in the petition for certiorari was basically that because images output by a camera are protected by copyright (see Burrow-Giles Lithographic Co. v. Sarony), images generated by a computer should be, too.

The Copyright Office has since published guidance explaining that using AI as a tool in the creative process does not categorically rule out copyright protection. Rather, assessments must be made on a case-by-case basis about the nature and extent of human creativity that was contributed.

The narrowest interpretation of the Supreme Court’s denial of certiorari is that it did not see a need to disturb the ruling that a machine cannot be an “author,” for purposes of copyright law. The facts of the case did not present an opportunity to opine on whether, and under what circumstances, a human can claim to be an author of an AI-assisted creation.

Trademark News

Buc-ee’s, a popular chain of gas-and-convenience stores in the South, has filed a trademark infringement lawsuit against Mickey’s gas stations.  According to the complaint:

Consumers are likely to perceive a connection or association as to the source, sponsorship, or affiliation of the parties’ products and services, when in fact none exists, given the similarity of the parties’ logos, trade channels, and consumer bases.

Here are the two logos, side by side for comparison:

Buc-ee’s and Mickey’s logos

Trademark infringement occurs when one company’s logo or other mark is used in commerce in a way that is likely to confuse consumers about the source of a product or service. What do you think, folks? Might a weary traveler mistake a moose for a beaver?

Clean responses only, please.

Trump’s Executive Order on AI


On December 11, 2025, President Trump issued another Executive Order. This one is intended to promote “national dominance” in “a race with adversaries for supremacy.” To “win,” the Order says, AI companies should not be encumbered by state regulation. “The policy of the United States,” the Order says, is “to sustain and enhance the United States’ global AI dominance through a minimally burdensome national policy framework for AI.” It establishes an AI Litigation Task Force to challenge state AI laws that allegedly conflict with that policy.

Excepted from the Order are state laws on child safety protections, data center infrastructure, and state government use of AI.

Which State AI Laws?

The Order speaks generally about “state AI laws,” but does not define the term. Here are some examples of state AI laws:

Stalking and Harassment

A North Dakota statute criminalizes using a robot to frighten or harass another person, and defines a robot to include a drone or other system that uses AI technology. (N.D. Cent. Code § 12.1-17-07(1), (2)(f)). This appears to be a “state AI law.” North Dakota statutes also prohibit stalking accomplished by using either a robot or a non-AI form of technology. (N.D. Cent. Code § 12.1-17-07.1(1)(d)). Preempting these provisions would produce an anomalous result: it would be a crime to stalk somebody unless you used an AI-powered device to do it.

Political Deepfakes

Several states have enacted laws prohibiting the distribution of political deepfakes to influence an election. These regulations range from prohibiting the distribution of a deepfake within a specified period before an election to requiring disclosure that the content is AI-generated. Minn. Stat. § 609.771 is an example. The need for this kind of statute was highlighted in 2024, when someone used AI to clone Joe Biden’s voice and generate a robocall in which Mr. Biden himself seemed to be urging people not to vote in the New Hampshire primary.

Sexual Deepfakes

Both state and federal governments have enacted laws aimed at curbing the proliferation of “revenge porn.” The TAKE IT DOWN Act is an example. Minn. Stat. § 604.32 is another example (deepfakes depicting intimate body parts or sexual acts).

State and federal laws in this area cover much of the same ground. The principal difference is that the federal crime must involve interstate commerce; state crimes need not. The only practical effect of preempting this kind of state AI law, therefore, would be to eliminate state prohibitions of wholly intrastate sexual deepfakes. If the Executive Order succeeds in its objectives, state laws that prohibit the creation or distribution of sexual deepfakes wholly within a single state, as some do, would be preempted. The result: making and distributing a sexual deepfake would be lawful so long as you transmit it only to people in your own state and not to someone in a different state.

Digital Replicas

Many states have enacted laws prohibiting or regulating the unauthorized creation and exploitation of digital replicas. The California Digital Replicas Act and Tennessee’s ELVIS Act are examples. AI is used in the creation of digital replicas, but it is unclear whether these kinds of enactments are “state AI laws.” Arguably, a person could use technologies more primitive than generative-AI to create a digital image of a person. If these statutes are preempted only to the extent they apply to AI-generated digital replicas, then people seeking to exploit other people’s faces and voices for commercial gain without authorization would be incentivized to use AI to do it.

Child Pornography

Several states have enacted new laws, or amended existing ones, to bring AI-generated images of what appear to be real children within the prohibition against child pornography. See, e.g., N.D. Cent. Code § 12.1-27.2-01. The Executive Order exempts “child safety protections,” but because real children do not have to be used to create AI-generated images, this kind of statute arguably would not come within the meaning of a “child safety protection.”

Health Care Oversight

California’s Physicians Make Decisions Act requires a human being to oversee health care decisions about medical necessity, to ensure that medical care is not left entirely up to an AI bot. The law was enacted with the support of the California Medical Association to ensure that patients receive adequate health care. If the law is nullified, it would seem that hospitals would be free to replace doctors with AI chatbots.

Chatbots

Some states prohibit the deceptive use of a chatbot, such as by falsely representing to people who interact with one that they are interacting with a real person. In addition, some states have enacted laws requiring disclosure to consumers when they are interacting with a non-human AI. See, e.g., the Colorado Artificial Intelligence Act.

Privacy

Some states have enacted either stand-alone laws or amended existing privacy laws to ensure they protect the privacy of personally identifiable information stored by AI systems. See, e.g., Utah Code 13-721-201, -203 (regulating the sharing of a person’s mental health information by a chatbot); and amendments to the California Consumer Privacy Act making it applicable to information stored in an AI system.

Disclosure

California’s Generative AI Training Data Transparency Act requires disclosure of training data used in developing generative-AI technology.

The Texas Responsible Artificial Intelligence Governance Act

Among other things, the Texas Responsible AI Governance Act prohibits the use of AI to restrict constitutional rights, to discriminate on the basis of race, or to encourage criminal activity. These seem like reasonable proscriptions.

Trump’s “AI czar,” venture capitalist David Sacks, has said the administration is not going to “push back” on all state laws, only “the most onerous” ones. It is unclear which laws will be deemed “onerous.”

State AI Laws are Not Preempted

News media headlines are trumpeting that the Executive Order preempts state AI laws. This is not true. It directs this administration to try to strike down some state AI laws. It contemplates working with Congress to formulate and enact preemptive legislation. It is doubtful that a President could constitutionally preempt state laws by executive order.

Postscript

Striving for uniformity in the regulation of artificial intelligence is not a bad idea. There should be room, though, for both federal and state legislation. Rather than abolishing state laws, a uniform code or model act for states might be a better idea. Moreover, if we are going to start caring about an onerous complex of differing state laws, and feeling a need to establish a national framework, perhaps the President and Congress might wish to address the sprawling morass of privacy and data security regulations in the United States.


Generative-AI as Unfair Trade Practice

While Congress and the courts grapple with generative-AI copyright issues, the FTC weighs in on the risks of unfair competition, monopolization, and consumer deception.


While Congress and the courts are grappling with the copyright issues that AI has generated, the federal government’s primary consumer watchdog has made a rare entry into the realm of copyright law. The Federal Trade Commission (FTC) has filed a Comment with the U.S. Copyright Office suggesting that generative-AI could be (or be used as) an unfair or deceptive trade practice. The Comment was filed in response to the Copyright Office’s request for comments as it prepares to begin rule-making on the subject of artificial intelligence (AI), particularly generative-AI.

Monopolization

The FTC is responsible for enforcing the FTC Act, which broadly prohibits “unfair or deceptive” practices. The Act protects consumers from deceptive and unscrupulous business practices. It is also intended to promote fair and healthy competition in U.S. markets. The Supreme Court has held that all violations of the Sherman Act also violate the FTC Act.

So how does generative-AI raise monopolization concerns? The Comment suggests that incumbents in the generative-AI industry could engage in anti-competitive behavior to ensure continuing and exclusive control over the use of the technology. (More on that here.)

The agency cited the usual suspects: bundling, tying, exclusive or discriminatory dealing, mergers, acquisitions. Those kinds of concerns, of course, are common in any business sector; they are not unique to generative-AI. The FTC also described some matters of special concern in the AI space, though.

Network effects

Because positive feedback loops improve the performance of generative-AI, it gets better as more people use it. This results in concentrated market power in incumbent generative-AI companies with diminishing possibilities for new entrants to the market. According to the FTC, “network effects can supercharge a company’s ability and incentive to engage in unfair methods of competition.”

Platform effects

As AI users come to depend on a particular incumbent generative-AI platform, the company that owns the platform could take steps to lock its customers into using that platform exclusively.

Copyrights and AI competition

The FTC Comment indicates that the agency is not only weighing the possibility that AI unfairly harms creators’ ability to compete. (The use of pirated materials, or the misuse of copyrighted materials, can be an unfair method of competition under Section 5 of the FTC Act.) It is also considering that generative-AI may deceive, or be used to deceive, consumers. Specifically, the FTC expressed a concern that “consumers may be deceived when authorship does not align with consumer expectations, such as when a consumer thinks a work has been created by a particular musician or other artist, but it has been generated by someone else using an AI tool.” (Comment, page 5.)

In one of my favorite passages in the Comment, the FTC suggests that training AI on protected expression without consent, or selling output generated “in the style of” a particular writer or artist, may be an unfair method of competition, “especially when the copyright violation deceives consumers, exploits a creator’s reputation or diminishes the value of her existing or future works….” (Comment, pages 5 – 6).

Fair Use

The significance of the FTC’s injection of itself into the generative-AI copyright fray cannot be overstated. During their legislative and rule-making deliberations, both Congress and the Copyright Office are likely to focus the lion’s share of their attention on the fair use doctrine. They will most likely try to allow generative-AI outfits to continue to infringe copyrights (it is already a multi-billion-dollar industry, after all, and one with obvious potential political value), while imposing at least some limitations to preserve a few shards of the copyright system. Maybe they will devise a system of statutory licensing, as they did when online streaming — and the widespread copyright infringement it facilitated — became a thing.

Whatever happens, the overarching question for Congress is going to be: “What kinds of copyright infringement should be considered ‘fair’ use?”

Copyright fair use normally is assessed using a four-prong test set out in the Copyright Act. Considerations about unfair competition arguably are subsumed within the fourth factor in that analysis – the effect the infringing use has on the market for the original work.

The other objective of the FTC Act – protecting consumers from deception — does not neatly fit into one of the four statutory factors for copyright fair use. I believe a good argument can be made that it should come within the coverage of the first prong of the four-factor test: the purpose and character of the use. The task for Congress and the Copyright Office, then, should be to determine which particular purposes and kinds of uses of generative-AI should be thought of as fair. There is no reason the Copyright Office should avoid considering Congress’s objectives, expressed in the FTC Act and other laws, when making that determination.


Case Update: Andersen v. Stability AI


Artists Sarah Andersen, Kelly McKernan, and Karla Ortiz filed a class action lawsuit against Stability AI, DeviantArt, and MidJourney in federal district court, alleging causes of action for copyright infringement, removal or alteration of copyright management information, and violation of publicity rights. (Andersen, et al. v. Stability AI Ltd., et al., No. 23-cv-00201-WHO (N.D. Cal. 2023).) The claims relate to the defendants’ alleged unlicensed use of their copyright-protected artistic works in generative-AI systems.

On October 30, 2023, U.S. district judge William H. Orrick dismissed all claims except for Andersen’s direct infringement claim against Stability. Most of the dismissals, however, were granted with leave to amend.

The Claims

McKernan’s and Ortiz’s copyright infringement claims

The judge dismissed McKernan’s and Ortiz’s copyright infringement claims because they did not register the copyrights in their works with the U.S. Copyright Office.

I criticized the U.S. requirement of registration as a prerequisite to the enforcement of a domestic copyright in a U.S. court in a 2019 Illinois Law Review article (“Copyright Enforcement: Time to Abolish the Pre-Litigation Registration Requirement.”) This is a uniquely American requirement. Moreover, the requirement does not apply to foreign works. This has resulted in the anomaly that foreign authors have an easier time enforcing the copyrights in their works in the United States than U.S. authors do. Nevertheless, until Congress acts to change this, it is still necessary for U.S. authors to register their copyrights with the U.S. Copyright Office before they can enforce their copyrights in U.S. courts.  

Since there was no claim that McKernan or Ortiz had registered their copyrights, the judge had no real choice under current U.S. copyright law but to dismiss their claims.

Andersen’s copyright infringement claim against Stability

Andersen’s complaint alleges that she “owns a copyright interest in over two hundred Works included in the Training Data” and that Stability used some of them as training data. Defendants moved to dismiss this claim because it failed to specifically identify which of those works had been registered. The judge, however, determined that her attestation that some of her registered works had been used as training images sufficed, for pleading purposes.  A motion to dismiss tests the sufficiency of a complaint to state a claim; it does not test the truth or falsity of the assertions made in a pleading. Stability can attempt to disprove the assertion later in the proceeding. Accordingly, Judge Orrick denied Stability’s motion to dismiss Andersen’s direct copyright infringement claim.

Andersen’s copyright infringement claims against DeviantArt and MidJourney

The complaint alleges that Stability created and released a software program called Stable Diffusion and that, to create it, Stability downloaded billions of copyrighted images (known as “training images”) without permission. Stability allegedly used the services of LAION (Large-scale Artificial Intelligence Open Network) to scrape the images from the Internet. Further, the complaint alleges, Stable Diffusion is a “software library” providing image-generating services to the other defendants named in the complaint. DeviantArt offers an online platform where artists can upload their works. In 2022, it released a product called “DreamUp” that relies on Stable Diffusion to produce images. The complaint alleges that artwork the plaintiffs uploaded to the DeviantArt site was scraped into the LAION database and then used to train Stable Diffusion. MidJourney is also alleged to have used the Stable Diffusion library.

Judge Orrick granted the motion to dismiss the claims of direct infringement against DeviantArt and MidJourney, with leave to amend the complaint to clarify the theory of liability.

DMCA claims

The complaint makes allegations about unlawful removal of copyright management information in violation of the Digital Millennium Copyright Act (DMCA). Judge Orrick found the complaint deficient in this respect, but granted leave to amend to clarify which defendant(s) are alleged to have done this, when it allegedly occurred, and what specific copyright management information was allegedly removed.

Publicity rights claims

Plaintiffs allege that the defendants used their names in their products by allowing users to request the generation of artwork “in the style of” their names. Judge Orrick determined the complaint did not plead sufficient factual allegations to state a claim. Accordingly, he dismissed the claim, with leave to amend. In a footnote, the court deferred to a later time the question whether the Copyright Act preempts the publicity claims.

In addition, DeviantArt filed a motion to strike under California’s Anti-SLAPP statute. The court deferred decision on that motion until after the Plaintiffs have had time to file an amended complaint.

Unfair competition claims

The court also dismissed plaintiffs’ claims of unfair competition, with leave to amend.

Breach of contract claim against DeviantArt

Plaintiffs allege that DeviantArt violated its own Terms of Service in connection with their DreamUp product and alleged scraping of works users upload to the site. This claim, too, was dismissed with leave to amend.

Conclusion

Media reports have tended to overstate the significance of Judge Orrick’s October 30, 2023 Order. Reports of the death of the lawsuit are greatly exaggerated. It would have been nice if greater attention had been paid to the registration requirement during the drafting of the complaint, but the lawsuit nevertheless is still very much alive. We won’t really know whether it will remain that way unless and until the plaintiffs amend the complaint – which they are almost certainly going to do.

Need help with copyright registration? Contact attorney Tom James.

The Top 3 Generative-AI Copyright Issues

What are your favorite generative-AI copyright issues? In this capsule summary, Cokato attorney Tom James shares what his three favorites are.

Black hole consuming a star. Photo credit: NASA.


Generative artificial intelligence refers collectively to technologies capable of generating new text, images, audio/visual and possibly other content in response to a user’s prompts. These systems are trained by feeding them mass quantities of ABC (already-been-created) works. Some of America’s biggest mega-corporations have invested billions of dollars in this technology. They are now facing a barrage of lawsuits, most of them asserting claims of copyright infringement.

Issue #1: Does AI Output Infringe Copyrights?

Copyrights give their owners an exclusive right to reproduce their copyright-protected works and to create derivative works based on them. If a generative-AI user prompts the service to reproduce the text of a pre-existing work, and it proceeds to do so, this could implicate the exclusive right of reproduction. If a generative-AI user prompts it to create a work in the style of another work and/or author, this could implicate the exclusive right to create derivative works.

To establish infringement, it will be necessary to prove copying. Two identical but independently created works may each be protected by copyright. Put another way, a person is not guilty of infringement merely by creating a work that is identical or similar to another if he/she/it came up with it completely on his/her/its own.

Despite “training” their protégés on existing works, generative-AI outfits deny that their tools actually copy any of them. They say that any similarity to existing works, living or dead, is purely coincidental. Thus, OpenAI has stated that copyright infringement “is an unlikely accidental outcome.”

The “accidental outcome” defense seems to me like a hard one to swallow in those cases where a prompt involves creating a story involving a specified fictional character that enjoys copyright protection. If the character is distinctive enough — and a piece of work in and of itself, so to speak — to enjoy copyright protection (such as, say, Mr. Spock from the Star Trek series), then any generated output would seem to be an unauthorized derivative work, at least if the AI tool is any good.

If AI output infringes a copyright in an existing work, who would be liable for it? Potentially, the person who entered the prompt might be held liable for direct infringement. The AI tool provider might arguably be liable for contributory infringement.

Issue #2: Does AI Training Infringe Copyrights?

AI systems are “trained” to create works by exposing a computer program to large numbers of existing works downloaded from the Internet.

When content is downloaded from the Internet, a copy of it is made. This process will “involve the reproduction of entire works or substantial portions thereof.” OpenAI, for example, acknowledges that its programs are trained on “large, publicly available datasets that include copyrighted works” and that this process “involves first making copies of the data to be analyzed….” Making these copies without permission may infringe the copyright holders’ exclusive right to make reproductions of their works.

Generative-AI outfits tend to argue that the training process is fair use.

Fair use claims require consideration of four statutory factors:

  • the purpose and character of the use;
  • the nature of the work copied;
  • the amount and substantiality of the portion copied; and
  • the effect on the market for the work.

OpenAI relies on the precedent set in Authors Guild v. Google for its invocation of “fair use.” That case involved Google’s copying of the entire text of books to construct its popular searchable database.

A number of lawsuits currently pending in the courts raise the question whether, and how, the AI training process is “fair use.”

Issue #3: Are AI-Generated Works Protected by Copyright?

The Copyright Act affords copyright protection to “original works of authorship.” The U.S. Copyright Office recognizes copyright only in works “created by a human being.” Courts, too, have declined to extend copyright protection to nonhuman authors. (Remember the monkey selfie case?) A recent copyright registration applicant has filed a lawsuit against the U.S. Copyright Office alleging that the Office wrongfully denied registration of an AI-generated work. A federal court has now rejected his argument that human authorship is not required for copyright ownership.

In March 2023, the Copyright Office released guidance stating that when AI “determines the expressive elements of its output, the generated material is not the product of human authorship.” Moreover, an argument might be made that a general prompt, such as “create a story about a dog in the style of Jack London,” is an idea, not expression. It is well settled that only expression gets copyright protection; ideas do not.

In September 2023, the Copyright Office Review Board affirmed the Office’s refusal to register a copyright in a work that was generated by Midjourney and then modified by the applicant, on the basis that the applicant did not disclaim the AI-generated material.

The Office also has the power to cancel improvidently granted registrations. (Words to the wise: Disclose and disclaim.)

These are my favorite generative-AI legal issues. What are yours?

AI Legislative Update

Congressional legislation to regulate artificial intelligence (“AI”) and AI companies is in the early formative stages. Just about the only thing that is certain at this point is that federal regulation in the United States is coming.


In August 2023, Senators Richard Blumenthal (D-CT) and Josh Hawley (R-MO) introduced a Bipartisan Framework for U.S. AI Act. The Framework sets out five bullet points identifying Congressional legislative objectives:

  • Establish a federal regulatory regime implemented through licensing AI companies, to include requirements that AI companies provide information about their AI models and maintain “risk management, pre-deployment testing, data governance, and adverse incident reporting programs.”
  • Ensure accountability for harms through both administrative enforcement and private rights of action, where “harms” include private or civil right violations. The Framework proposes making Section 230 of the Communications Decency Act inapplicable to these kinds of actions. (Section 230 is the provision that generally grants immunity to Facebook, Google and other online service providers for user-provided content.) The Framework identifies the harms about which it is most concerned as “explicit deepfake imagery of real people, production of child sexual abuse material from generative A.I. and election interference.” Noticeably absent is any mention of harms caused by copyright infringement.
  • Restrict the sharing of AI technology with Russia, China or other “adversary nations.”
  • Promote transparency: The Framework would require AI companies to disclose information about the limitations, accuracy and safety of their AI models to users; would give consumers a right to notice when they are interacting with an AI system; would require providers to watermark or otherwise disclose AI-generated deepfakes; and would establish a public database of AI-related “adverse incidents” and harm-causing failures.
  • Protect consumers and kids. “Consumers should have control over how their personal data is used in A.I. systems and strict limits should be imposed on generative A.I. involving kids.”

The Framework does not address copyright infringement, whether of the input or the output variety.

The Senate Judiciary Committee Subcommittee on Privacy, Technology, and the Law held a hearing on September 12, 2023. Witnesses called to testify generally approved of the Framework as a starting point.

The Senate Commerce, Science, and Transportation Subcommittee on Consumer Protection, Product Safety and Data Security also held a hearing on September 12, titled “The Need for Transparency in Artificial Intelligence.” One of the witnesses, Dr. Ramayya Krishnan of Carnegie Mellon University, did raise a concern about the use of copyrighted material by AI systems and the economic harm it causes for creators.

On September 13, 2023, Sen. Chuck Schumer (D-NY) held an “AI Roundtable.” Invited attendees present at the closed-door session included Bill Gates (Microsoft), Elon Musk (xAI, Neuralink, etc.), Sundar Pichai (Google), Charlie Rivkin (MPA), and Mark Zuckerberg (Meta). Gates, whose Microsoft company, like those headed by some of the other invitees, has been investing heavily in generative-AI development, touted the claim that AI could be used to combat world hunger.

Meanwhile, Dana Rao, Adobe’s Chief Trust Officer, penned a proposal that Congress establish a federal anti-impersonation right to address the economic harms generative-AI causes when it impersonates the style or likeness of an author or artist. The proposed law would be called the Federal Anti-Impersonation Right Act, or “FAIR Act,” for short. The proposal would provide for the recovery of statutory damages by artists who are unable to prove actual economic damages.

A Recent Exit from Paradise

Over a year ago, Stephen Thaler filed an application with the United States Copyright Office to register a copyright in an AI-generated image called “A Recent Entrance to Paradise.” In the application, he listed a machine as the “author” and himself as the copyright owner. The Copyright Office refused registration on the grounds that the work lacked human authorship. Thaler then filed a lawsuit in federal court seeking to overturn that determination. On August 18, 2023, the court upheld the Copyright Office’s refusal of registration. The case is Thaler v. Perlmutter, No. CV 22-1564 (BAH), 2023 WL 5333236 (D.D.C. Aug. 18, 2023).

Read more about the history of this case in my previous blog post, “A Recent Entrance to Complexity.”

The Big Bright Green Creativity Machine

In his application for registration, Thaler had listed his computer, referred to as “Creativity Machine,” as the “author” of the work, and himself as a claimant. The Copyright Office denied registration on the basis that copyright only protects human authorship.

Taking the Copyright Office to court

Unsuccessful in securing a reversal through administrative appeals, Thaler filed a lawsuit in federal court claiming the Office’s denial of registration was “arbitrary, capricious, an abuse of discretion and not in accordance with the law….”

The court ultimately sided with the Copyright Office. In its decision, it provided a cogent explanation of the rationale for the human authorship requirement:

The act of human creation—and how to best encourage human individuals to engage in that creation, and thereby promote science and the useful arts—was thus central to American copyright from its very inception. Non-human actors need no incentivization with the promise of exclusive rights under United States law, and copyright was therefore not designed to reach them.

Id.

A Complex Issue

As I discussed in a previous blog post, the issue is not as simple as it might seem. There are different levels of human involvement in the use of an AI content generating mechanism. At one extreme, there are programs like “Paint,” in which users provide a great deal of input. These kinds of programs may be analogized to paintbrushes, pens and other tools that artists traditionally have used to express their ideas on paper or canvas. Word processing programs are also in this category. It is easy to conclude that the users of these kinds of programs are the authors of works that may be sufficiently creative and original to receive copyright protection.

At the other end of the spectrum are AI services like DALL-E and ChatGPT. These tools are capable of generating content with very little user input. If the only human input is a user’s directive to “Draw a picture,” then it would be difficult to claim that the user contributed any creative expression. That is to say, it would be difficult to claim that the user authored anything.

The difficult question – and one that is almost certain to be the subject of ongoing litigation and probably new Copyright Office regulations – is exactly how much, and what kind of, human input is necessary before a human can claim authorship. Equally perplexing is the question of how much, if at all, the Copyright Office should involve itself in ascertaining and evaluating the details of the process by which a work was created. And, of course, there is the question of what consequences should flow from an applicant’s failure to disclose complete details about the nature and extent of machine involvement in the creative process.

Conclusion

The court in this case did not dive into these issues. The only thing we can safely take away from this decision is the broad proposition that a work is not protected by copyright to the extent it is generated by a machine.