AI Legislative Update

Congressional legislation to regulate artificial intelligence (“AI”) and AI companies is in the early formative stages. Just about the only thing that is certain at this point is that federal regulation in the United States is coming.

In August 2023, Senators Richard Blumenthal (D-CT) and Josh Hawley (R-MO) introduced a Bipartisan Framework for U.S. AI Act. The Framework sets out five bullet points identifying Congressional legislative objectives:

  • Establish a federal regulatory regime implemented through licensing AI companies, to include requirements that AI companies provide information about their AI models and maintain “risk management, pre-deployment testing, data governance, and adverse incident reporting programs.”
  • Ensure accountability for harms through both administrative enforcement and private rights of action, where “harms” include private or civil right violations. The Framework proposes making Section 230 of the Communications Decency Act inapplicable to these kinds of actions. (Section 230 is the provision that generally grants immunity to Facebook, Google and other online service providers for user-provided content.) The Framework identifies the harms about which it is most concerned as “explicit deepfake imagery of real people, production of child sexual abuse material from generative A.I. and election interference.” Noticeably absent is any mention of harms caused by copyright infringement.
  • Restrict the sharing of AI technology with Russia, China or other “adversary nations.”
  • Promote transparency: The Framework would require AI companies to disclose information about the limitations, accuracy and safety of their AI models to users; would give consumers a right to notice when they are interacting with an AI system; would require providers to watermark or otherwise disclose AI-generated deepfakes; and would establish a public database of AI-related “adverse incidents” and harm-causing failures.
  • Protect consumers and kids. “Consumers should have control over how their personal data is used in A.I. systems and strict limits should be imposed on generative A.I. involving kids.”

The Framework does not address copyright infringement, whether of the input or the output variety.

The Senate Judiciary Committee Subcommittee on Privacy, Technology, and the Law held a hearing on September 12, 2023. Witnesses called to testify generally approved of the Framework as a starting point.

The Senate Commerce, Science, and Transportation Subcommittee on Consumer Protection, Product Safety and Data Security also held a hearing on September 12, called The Need for Transparency in Artificial Intelligence. One of the witnesses, Dr. Ramayya Krishnan of Carnegie Mellon University, raised a concern about the use of copyrighted material by AI systems and the economic harm it causes creators.

On September 13, 2023, Sen. Chuck Schumer (D-NY) held an “AI Roundtable.” Invited attendees at the closed-door session included Bill Gates (Microsoft), Elon Musk (xAI, Neuralink, etc.), Sundar Pichai (Google), Charlie Rivkin (MPA), and Mark Zuckerberg (Meta). Gates, whose company Microsoft, like those headed by some of the other invitees, has been investing heavily in generative-AI development, touted AI’s potential to help combat world hunger.

Meanwhile, Dana Rao, Adobe’s Chief Trust Officer, penned a proposal that Congress establish a federal anti-impersonation right to address the economic harms generative AI causes when it impersonates the style or likeness of an author or artist. The proposed law would be called the Federal Anti-Impersonation Right Act, or “FAIR Act” for short. The proposal would allow artists who are unable to prove actual economic damages to recover statutory damages.

AI Legal Issues

Thomas James (“The Cokato Copyright Attorney”) describes the range of legal issues, most of which have not yet been resolved, that artificial intelligence (AI) systems have spawned.

AI is not new. Neither is its implementation. In fact, consumers interact with AI-powered systems every day. Online help systems often use AI to provide quick answers to questions that customers routinely ask. Sometimes these are designed to give users the impression that they are communicating with a person.

AI systems also perform discrete functions such as analyzing a credit report and rendering a decision on a loan or credit card application, or screening employment applications.

Many other uses have been found for AI and new ones are being developed all the time. AI has been trained not just to perform customer service tasks, but also to perform analytics and diagnostic tests; to repair products; to update software; to drive cars; and even to write articles and create images and videos. These developments may be helping to streamline tasks and improve productivity, but they have also generated a range of new legal issues.

Tort liability

While there are many different kinds of tort claims, the elements of tort claims are basically the same: (1) The person sought to be held liable for damages or ordered to comply with a court order must have owed a duty to the person who is seeking the legal remedy; (2) the person breached that duty; (3) the person seeking the legal remedy experienced harm, i.e., real or threatened injury; and (4) the breach was the actual and proximate cause of the harm.
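The four elements operate as a conjunctive test: if any one of them fails, the claim fails. As a toy sketch only (not legal advice; the field names are illustrative shorthand, not statutory language), the test can be modeled like this:

```python
# Toy sketch of the four-element tort test described above.
# Field names are illustrative assumptions, not drawn from any statute.
from dataclasses import dataclass

@dataclass
class TortClaim:
    duty_owed: bool           # (1) defendant owed plaintiff a duty
    duty_breached: bool       # (2) defendant breached that duty
    harm_suffered: bool       # (3) plaintiff suffered real or threatened injury
    breach_caused_harm: bool  # (4) breach was the actual and proximate cause

    def is_viable(self) -> bool:
        # All four elements must be present for the claim to proceed.
        return all([self.duty_owed, self.duty_breached,
                    self.harm_suffered, self.breach_caused_harm])

# A claim missing causation fails, no matter how clear the other elements are.
claim = TortClaim(duty_owed=True, duty_breached=True,
                  harm_suffered=True, breach_caused_harm=False)
print(claim.is_viable())  # False
```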

The kind of harm that must be demonstrated varies depending on the kind of tort claim. For example, a claim of negligent driving might involve bodily injury, while a claim of defamation might involve injury to reputation. For some kinds of tort claims, the harm might involve financial or economic injury. 

The duty may be specified in a statute or contract, or it may be judge-made (“common law”). It may take the form of an affirmative obligation (such as a doctor’s obligation to provide a requisite level of care to a patient), or a negative one, such as the common law duty to refrain from assaulting another person.

The advent of AI does not really require any change in these basic principles, but they can be more difficult to apply to scenarios that involve the use of an AI system.

Example. Acme Co. manufactures and markets Auto-Doc, a machine that diagnoses and repairs car problems. Mike’s Repair Shop lays off its automotive technician employees and replaces them with one of these machines. Suzie Consumer brings her VW Jetta to Mike’s Repair Shop for service because she has been hearing a sound that she describes as being a grinding noise that she thinks is coming from either the engine or the glove compartment. The Auto-Doc machine adds engine oil, replaces belts, and removes the contents of the glove compartment. Later that day, Suzie’s brakes fail and her vehicle hits and kills a pedestrian in a crosswalk. A forensic investigation reveals that her brakes failed because they were badly worn. Who should be held liable for the pedestrian’s death – Suzie, Mike’s, Acme Co., some combination of two of them, all of them, or none of them?

The allocation of responsibility will depend, in part, on the degree of autonomy the AI machine possesses. If it can be shown that Suzie knew or should have known that her brakes were bad, then she most likely could be held responsible for causing the pedestrian’s death. But what about the others? If the machine is completely autonomous, then Acme might be held responsible for failing to program it to test for and detect worn brake pads even when a customer expresses an erroneous belief that the sound is coming from the engine or the glove compartment. On the other hand, if the machine is designed only to offer suggestions of possible problems and solutions, leaving it up to a mechanic to accept or reject them, then Mike’s might be held responsible for negligently accepting the machine’s recommendations.

Assuming the Auto-Doc machine is fully autonomous, should Mike’s be faulted for relying on it to correctly diagnose car problems? Is Mike’s entitled to rely on Acme’s representations about Auto-Doc’s capabilities, or would the repair shop have a duty to inquire about and/or investigate Auto-Doc’s limitations? Assuming Suzie did not know, and had no reason to suspect, her brakes were worn out, should she be faulted for relying on a fully autonomous machine instead of taking the car to a trained human mechanic?  Why or why not?

Criminal liability

It is conceivable that an AI system might engage in activity that an applicable jurisdiction’s criminal laws prohibit. Email address harvesting is an example. In the United States, the CAN-SPAM Act makes it a crime to send a commercial email message to an email address that was obtained by automated scraping of Internet websites. Of course, if a person intentionally uses an AI system for scraping, then liability should be clear. But what if an AI system “learns” to engage in scraping?

AI-generated criminal output may also be a problem. Some countries have made it a crime to display a Nazi symbol, such as a swastika, on a website. Will criminal liability attach if a website or blog owner uses AI to generate illustrated articles about World War II and the system generates and displays articles that are illustrated with World War II era German flags and military uniforms? In the United States, creating or possessing child pornography is illegal. Will criminal liability attach if an AI system generates it?

Some of these kinds of issues can be resolved through traditional legal analysis of the intent and scienter elements of the definitions of crimes. A jurisdiction might wish to consider, however, whether AI systems should be regulated to require system creators to implement measures that would prevent illegal uses of the technology. This raises policy and feasibility questions, such as whether and what kinds of restraints on machine learning should be required, and how to enforce them. Further, would prior restraints on the design and/or use of AI-powered expressive-content-generating systems infringe on First Amendment rights?  

Product liability

Related to the problem of allocating responsibility for harm caused by the use of an AI mechanism is the question whether anyone should be held liable for harm caused when the mechanism is not defective, that is to say, when it is operating as it should.

Example. Acme Co. manufactures and sells Auto-Article, a software program that is designed to create content of a type and kind the user specifies. The purpose of the product is to enable a website owner to generate and publish a large volume of content frequently, thereby improving the website’s search engine ranking. It operates by scouring the Internet and analyzing instances of the content the user specifies to produce new content that “looks like” them. XYZ Co. uses the software to generate articles on medical topics. One of these articles explains that chest pain can be caused by esophageal spasms but that these typically do not require treatment unless they occur frequently enough to interfere with a person’s ability to eat or drink. Joe is experiencing chest pain. He does not seek medical help, however, because he read the article and therefore believes he is experiencing esophageal spasms. He later collapses and dies from a heart attack. A medical doctor is prepared to testify that his death could have been prevented if he had sought medical attention when he began experiencing the pain.

Should either Acme or XYZ Co. be held liable for Joe’s death? Acme could argue that its product was not defective. It was fit for its intended purposes, namely, a machine learning system that generates articles that look like articles of the kind a user specifies. What about XYZ Co.? Would the answer be different if XYZ had published a notice on its site that the information provided in its articles is not necessarily complete and that the articles are not a substitute for advice from a qualified medical professional? If XYZ incurs liability as a result of the publication, would it have a claim against Acme, such as for failure to warn it of the risks of using AI to generate articles on medical topics?

Consumer protection

AI system deployment raises significant health and safety concerns. There is the obvious example of an AI system making incorrect medical diagnoses or treatment recommendations. Autonomous (“self-driving”) motor vehicles are also examples. An extensive body of consumer protection regulations may be anticipated.

Forensic and evidentiary issues

In situations involving the use of semi-autonomous AI, allocating responsibility for harm resulting from the operation of the AI system may be difficult. The most basic question in this respect is whether an AI system was in use or not. For example, if a motor vehicle that can be operated in either manual or autonomous mode is involved in an accident, and fault or the extent of liability depends on that (see the discussion of tort liability, above), then a way of determining the mode in which the car was being driven at the time will be needed.

If, in the case of a semi-autonomous AI system, tort liability must be allocated between the creator of the system and a user of it, the question of fault may depend on who actually caused a particular tortious operation to be executed – the system creator or the user. In that event, some method of retracing the steps the AI system used may be essential. This may also be necessary in situations where some factor other than AI contributed, or might have contributed, to the injury. Regulation may be needed to ensure that the steps in an AI system’s operations are, in fact, capable of being ascertained.
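By way of illustration only – the class and field names below are hypothetical, not drawn from any existing standard or regulation – an append-only, hash-chained operation log is one way an AI system’s steps and operating mode could be made reconstructible after the fact:

```python
# Hypothetical sketch of an append-only operation log that would let
# investigators reconstruct which steps an AI system took, and in which
# mode (manual vs. autonomous) it was operating at a given time.
# All names here are illustrative assumptions.
import hashlib
import json

class OperationLog:
    def __init__(self):
        self.entries = []
        self._prev_hash = "0" * 64

    def record(self, timestamp, mode, action):
        entry = {"timestamp": timestamp, "mode": mode, "action": action,
                 "prev_hash": self._prev_hash}
        # Chain each entry to the previous one so after-the-fact edits
        # are detectable during a forensic review.
        digest = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        entry["hash"] = digest
        self._prev_hash = digest
        self.entries.append(entry)

    def mode_at(self, timestamp):
        # Return the last recorded mode at or before the given time.
        mode = None
        for e in self.entries:
            if e["timestamp"] <= timestamp:
                mode = e["mode"]
        return mode

log = OperationLog()
log.record(100, "manual", "driver engaged steering")
log.record(250, "autonomous", "autopilot engaged")
print(log.mode_at(300))  # autonomous
```

A record of this kind would speak to both questions raised above: whether the system was in use at the relevant moment, and which steps it executed.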

Transparency problems also fall into this category. As explained in the Journal of Responsible Technology, people might be put on no-fly lists, denied jobs or benefits, or refused credit without knowing anything more than that the decision was made through some sort of automated process. Even if transparency is achieved and/or mandated, contestability will also be an issue.

Data Privacy

To the extent an AI system collects and stores personal or private information, there is a risk that someone may gain unauthorized access to it. Depending on how the system is designed to function, there is also a risk that it might autonomously disclose legally protected personal or private information. Security breaches can cause catastrophic problems for data subjects.

Publicity rights

Many jurisdictions recognize a cause of action for violation of a person’s publicity rights (sometimes called “misappropriation of personality”). In these jurisdictions, a person has an exclusive legal right to commercially exploit his or her own name, likeness or voice. To what extent, and under what circumstances, should liability attach if a commercialized AI system analyzes the name, likeness or voice of a person that it discovers on the Internet? Will the answer depend on how much information about a particular individual’s voice, name or likeness the system uses, on one hand, or how closely the generated output resembles that individual’s voice, name or likeness, on the other?

Contracts

The primary AI-related contract concern is about drafting agreements that adequately and effectively allocate liability for losses resulting from the use of AI technology. Insurance can be expected to play a larger role as the use of AI spreads into more areas.

Bias, Discrimination, Diversity & Inclusion

Some legislators have expressed concern that AI systems will reflect and perpetuate biases and perhaps discriminatory patterns of culture. To what extent should AI system developers be required to ensure that the data their systems use are collected from a diverse mixture of races, ethnicities, genders, gender identities, sexual orientations, abilities and disabilities, socioeconomic classes, and so on? Should developers be required to apply some sort of principle of “equity” with respect to these classifications, and if so, whose vision of equity should they be required to enforce? To what extent should government be involved in making these decisions for system developers and users?

Copyright

AI-generated works like articles, drawings, animations, music and so on, raise two kinds of copyright issues:

  1. Input issues, i.e., questions like whether AI systems that create new works based on existing copyright-protected works infringe the copyrights in those works
  2. Output issues, such as who, if anybody, owns the copyright in an AI-generated work.

I’ve written about AI copyright ownership issues and AI copyright infringement issues in previous blog posts on The Cokato Copyright Attorney.

Patents and other IP

Computer programs can be patented. AI systems can be devised to write computer programs. Can an AI-generated computer program that meets the usual criteria for patentability (novelty, utility, etc.) be patented?

Is existing intellectual property law adequate to deal with AI-generated inventions and creative works? The World Intellectual Property Organization (WIPO) apparently does not think so. It is formulating recommendations for new regulations to deal with the intellectual property aspects of AI.

Conclusion

AI systems raise a wide range of legal issues. The ones identified in this article are merely a sampling, not a complete listing of all possible issues. Not all of these legal issues have answers yet. It can be expected that more AI regulatory measures, in more jurisdictions around the globe, will be coming down the pike very soon.

Contact attorney Thomas James

Contact Minnesota attorney Thomas James for help with copyright and trademark registration and other copyright and trademark related matters.

Does AI Infringe Copyright?

A previous blog post addressed the question whether AI-generated creations are protected by copyright. This could be called the “output question” in the artificial intelligence area of copyright law. Another question is whether using copyright-protected works as input for AI generative processes infringes the copyrights in those works. This could be called the “input question.” Both kinds of questions are now before the courts. Minnesota attorney Tom James describes a framework for analyzing the input question.

The Input Question in AI Copyright Law

by Thomas James, Minnesota attorney

In a previous blog post, I discussed the question whether AI-generated creations are protected by copyright. This could be called the “output question” in the artificial intelligence area of copyright law. Another question is whether using copyright-protected works as input for AI generative processes infringes the copyrights in those works. This could be called the “input question.” Both kinds of questions are now before the courts. In this blog post, I describe a framework for analyzing the input question.

The Cases

The Getty Images lawsuit

Getty Images is a stock photograph company. It licenses the right to use the images in its collection to those who wish to use them on their websites or for other purposes. Stability AI is the creator of Stable Diffusion, which is described as a “text-to-image diffusion model capable of generating photo-realistic images given any text input.” In January 2023, Getty Images initiated legal proceedings in the United Kingdom against Stability AI. Getty Images claims that Stability AI violated its copyrights by using its images and metadata to train AI software without a license.

The independent artists lawsuit

Another lawsuit raising the question whether AI-generated output infringes copyright has been filed in the United States. In this case, a group of visual artists are seeking class action status for claims against Stability AI, Midjourney Inc. and DeviantArt Inc. The artists claim that the companies use their images to train computers “to produce seemingly new images through a mathematical software process.” They describe AI-generated artwork as “collages” made in violation of copyright owners’ exclusive right to create derivative works.

The GitHub Copilot lawsuit

In November 2022, a class action lawsuit was filed in a U.S. federal court against GitHub, Microsoft, and OpenAI. The lawsuit claims the GitHub Copilot and OpenAI Codex coding assistant services use existing code to generate new code. By training their AI systems on open source programs, the plaintiffs allege, the defendants have infringed the rights of developers who posted code under open-source licenses that require attribution.

How AI Works

AI, of course, stands for artificial intelligence. Almost all AI techniques involve machine learning. Machine learning, in turn, involves using a computer algorithm to make a machine improve its performance over time, without having to pre-program it with specific instructions. Data is input to enable the machine to do this. For example, to teach a machine to create a work in the style of Vincent van Gogh, many instances of van Gogh’s works would be input. The AI program contains numerous nodes that focus on different aspects of an image. Working together, these nodes piece together common elements of a van Gogh painting from the images the machine has been given to analyze. After going through many images of van Gogh paintings, the machine “learns” the features of a typical van Gogh painting. The machine can then generate a new image containing these features.

In the same way, a machine can be programmed to analyze many instances of code and generate new code.
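To make the “learning” idea concrete, here is a toy sketch in Python. It is purely illustrative – real generative systems such as Stable Diffusion use far more sophisticated neural architectures – but it shows the same input/output relationship: the program records statistical patterns from training material and then generates new output exhibiting those patterns.

```python
# Toy illustration of machine "learning": a first-order Markov chain that
# learns word-transition statistics from training text, then generates new
# text with similar characteristics. Real generative AI is vastly more
# sophisticated; this only conveys the training-data/output relationship.
import random
from collections import defaultdict

def train(text):
    # Record which words follow which in the training data.
    transitions = defaultdict(list)
    words = text.split()
    for a, b in zip(words, words[1:]):
        transitions[a].append(b)
    return transitions

def generate(transitions, start, length=8, seed=0):
    # Produce new text by repeatedly sampling a learned follower word.
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = transitions.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)

model = train("the starry night over the rhone and the starry sky")
print(generate(model, "the"))
```

Note that every word the program emits was taken from its training input – a miniature version of the question at the heart of the input problem.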

The input question comes down to this: Does creating or using a program that causes a machine to receive information about the characteristics of a creative work or group of works for the purpose of creating a new work that has the same or similar characteristics infringe the copyright in the creative work(s) that the machine uses in this way?

The Exclusive Rights of Copyright Owners

In the United States, the owner of a copyright in a work has the exclusive rights to:

  • reproduce (make copies of) it;
  • distribute copies of it;
  • publicly perform it;
  • publicly display it; and
  • make derivative works based on it.

(17 U.S.C. § 106). A copyright is infringed when a person exercises any of these exclusive rights without the copyright owner’s permission.

Copyright protection extends only to expression, however. Copyright does not protect ideas, facts, processes, methods, systems or principles.

Direct Infringement

Infringement can be either direct or indirect. Direct infringement occurs when somebody directly violates one of the exclusive rights of a copyright owner. Examples would be a musician who performs a copyright-protected song in public without permission, or a cartoonist who creates a comic based on the Batman and Robin characters and stories without permission.

The kind of tool an infringer uses is not of any great moment. A writer who uses word-processing software to write a story that is simply a copy of someone else’s copyright-protected story is no less guilty of infringement merely because the actual typewritten letters were generated using a computer program that directs a machine to reproduce and display typographical characters in the sequence a user selects.

Contributory and Vicarious Infringement

Infringement liability may also arise indirectly. If one person knowingly induces another person to infringe or contributes to the other person’s infringement in some other way, then each of them may be liable for copyright infringement. The person who actually committed the infringing act could be liable for direct infringement. The person who knowingly encouraged, solicited, induced or facilitated the other person’s infringing act(s) could be liable for contributory infringement.

Vicarious infringement occurs when the law holds one person responsible for the conduct of another because of the nature of the legal relationship between them. The employment relationship is the most common example. An employer generally is held responsible for an employee’s conduct, provided the employee’s acts were performed within the course and scope of the employment. Copyright infringement is not an exception to that rule.

Programmer vs. User

Direct infringement liability

Under U.S. law, machines are treated as extensions of the people who set them in motion. A camera, for example, is an extension of the photographer. Any images a person causes a camera to generate by pushing a button on it are considered the creation of the person who pushed the button, not of the person(s) who manufactured the camera, much less of the camera itself. By the same token, a person who uses the controls on a machine to direct it to copy elements of other people’s works should be considered the creator of the new work so created. If using the program entails instructing the machine to create an unauthorized derivative work of copyright-protected images, then it would be the user, not the machine or the software writer, who would be at risk of liability for direct copyright infringement.

Contributory infringement liability

Knowingly providing a device or mechanism to people who use it to infringe copyrights creates a risk of liability for contributory copyright infringement. Under Sony Corp. v. Universal City Studios, however, merely distributing a mechanism that people can use to infringe copyrights is not enough for contributory infringement liability to attach, if the mechanism is capable of substantial noninfringing uses. Arguably, AI has many such uses. For example, it might be used to generate new works from public domain works. Or it might be used to create parodies. (Creating a parody is generally fair use; it should not result in infringement liability.)

The situation is different if a company goes further and induces, solicits or encourages people to use its mechanism to infringe copyrights. Then it may be at risk of contributory liability. As the United States Supreme Court has said, “one who distributes a device with the object of promoting its use to infringe copyright, as shown by clear expression or other affirmative steps taken to foster infringement, is liable for the resulting acts of infringement by third parties.” Metro-Goldwyn-Mayer Studios Inc. v. Grokster, Ltd., 545 U.S. 913, 919 (2005). (Remember Napster?)

Fair Use

If AI-generated output is found to either directly or indirectly infringe copyright(s), the infringer nevertheless might not be held liable, if the infringement amounts to fair use of the copyrighted work(s) that were used as the input for the AI-generated work(s).

Ever since some rap artists began using snippets of copyright-protected music and sound recordings without permission, courts have embarked on a treacherous expedition to articulate a meaningful dividing line between unauthorized derivative works, on one hand, and unauthorized transformative works, on the other. Although the Copyright Act gives copyright owners the exclusive right to create works based on their copyrighted works (called derivative works), courts have held that an unauthorized derivative work may be fair use if it is “transformative.” This has caused a great deal of uncertainty in the law, particularly since the U.S. Copyright Act expressly defines a derivative work as one that transforms another work. (See 17 U.S.C. § 101: “A ‘derivative work’ is a work based upon one or more preexisting works, . . . or any other form in which a work may be recast, transformed, or adapted.” (emphasis added).)

When interpreting and applying the transformative use branch of Fair Use doctrine, courts have issued conflicting and contradictory decisions. As I wrote in another blog post, the U.S. Supreme Court has recently agreed to hear and decide Andy Warhol Foundation for the Visual Arts v. Goldsmith. It is anticipated that the Court will use this case to attempt to clear up all the confusion around the doctrine. It is also possible the Court might take even more drastic action concerning the whole “transformative use” branch of Fair Use.

Some speculate that the questions the Justices asked during oral arguments in Warhol signal a desire to retreat from the expansion of fair use that the “transformativeness” idea spawned. On the other hand, some of the Court’s recent decisions, such as Google v. Oracle, suggest the Court is not particularly worried about large-scale copyright infringing activity, insofar as Fair Use doctrine is concerned.

Conclusion

To date, there does not appear to be any direct legal precedent in the United States classifying the use of mass quantities of works as AI training material as “fair use.” It seems, however, that there soon will be precedent on that issue, one way or the other. In the meantime, users of generative AI systems should proceed with caution.

Photographers’ Rights: Warhol Case Tests the Limits of Transformative Use

The U.S. Supreme Court will soon hear Andy Warhol Foundation for the Visual Arts v. Goldsmith. Attorney Thomas James explains what is at stake for photographers.

In a previous post, I identified the Second Circuit’s decision in Andy Warhol Foundation for the Visual Arts v. Goldsmith as one of the three top copyright cases of 2021. It has since been appealed to the United States Supreme Court. Oral argument is scheduled for October 12, 2022.

The dispute

The underlying facts in the case, in a nutshell, are these:

Lynn Goldsmith took a photograph of Prince in her studio in 1981. Later, Andy Warhol created a series of silkscreen prints and pencil illustrations based on it. The Andy Warhol Foundation sought a declaratory judgment that the artist’s use of the photograph was “fair use.” Goldsmith counterclaimed for copyright infringement. The district court ruled in favor of Warhol and dismissed the photographer’s infringement claim.

The Court of Appeals reversed, holding that the district court misapplied the four “fair use” factors and that the derivative works Warhol created do not qualify as fair use.

The United States Supreme Court granted the Warhol Foundation’s certiorari petition.

The issue

In this case, the U.S. Supreme Court is being called upon to provide guidance on the meaning and scope of “transformative use” as an element of fair use analysis. At what point does an unauthorized, altered copy of a copyrighted work stop being an infringing derivative work and become a “transformative” fair use?

The Conundrum

In the chapter on copyright in my book, E-Commerce Law, I predicted a case like this would be coming before the Supreme Court at some point. As I noted there, a tension exists between the Copyright Act’s grant of the exclusive right to authors (or their assignees and licensees) to make modified versions of their works (called “derivative works”), on one hand, and the idea that making modified versions of copyrighted works is transformative fair use, on the other. The notion that making changes to a work that “transform” it into a new work qualifies as fair use obviously threatens to swallow the rule that only the owner of the copyright in a work has the right to make new works based on the work.

Lower courts have not been consistent in their interpretations and approaches to the transformative use concept. The Warhol case presents a wonderful opportunity for the Supreme Court to provide some guidance.

Campbell v. Acuff-Rose Music

The “transformative use” saga really begins with Campbell v. Acuff-Rose Music, Inc., 510 U.S. 569 (1994). Unable to secure a license to include “samples” (copies of portions) of the Roy Orbison song “Oh, Pretty Woman” in a new version they recorded, 2 Live Crew proceeded to record and distribute their version with the unauthorized sampling anyway, invoking “fair use.”

In a decision that took many attorneys and legal scholars by surprise, the Supreme Court held that 2 Live Crew did not need permission to copy and distribute the work even though the work they created involved substantial copying of the Orbison song. To reach this conclusion, the Court propounded the notion that copying portions of another work — even substantial portions of it — may be permissible if the resulting work is “transformative.” This, the Court held, could hold true sometimes even if the newly created work is not a parody of the original.

In the years since Campbell, courts have struggled to distinguish “transformative” modifications of a work from non-transformative ones. Some courts have demonstrated a willingness to apply the doctrine in such a way as to nearly nullify the exclusivity of an author’s right to make modified versions of his or her works.

Courts have also demonstrated a lack of consistency with respect to how they incorporate and apply “transformativeness” within the four-factor test for fair use set out in 17 U.S.C. § 107.

Why It Matters

This might seem like an arcane legal issue of little practical significance, but it really isn’t. People are already pushing the transformative use idea into new realms. For example, some tattoo artists have claimed in court filings that they do not need permission to make stencils from photographs because copying a photograph onto skin is a “transformative use.”

Of course, making and distributing exact copies of a photograph for sale in a stream of commerce that directly competes with the original photograph should not be susceptible to a transformative fair use claim. But how far can the claim be carried? If copying a photograph onto somebody’s skin is “transformative” use, would copying it onto somebody’s shirt also be “transformative”?

Clarity and guidance in this area are sorely needed. Hopefully the Supreme Court will take this opportunity to furnish it.

Contact Cokato copyright attorney Thomas James

Need help with registering a copyright or with a copyright problem? Contact attorney Thomas James.

The Internet Archive Lawsuit

Thomas James (“The Cokato Copyright Attorney”) explains how Hachette Book Group et al. v. Internet Archive, filed in the federal district court for the Southern District of New York on June 1, 2020, tests the limits of authors’ and publishers’ digital rights in their copyright-protected works.

The gravamen of the complaint is that Internet Archive (“IA”) allegedly scanned books and made them freely available to the public via an Internet website without the permission of copyright rights-holders. Book publishers filed this lawsuit alleging that IA’s activities infringe their exclusive rights of reproduction and distribution under the United States Copyright Act.

As of this writing, the case is at the summary judgment stage, with briefing currently scheduled to end in October, 2022. Whatever the outcome, an appeal seems very likely. Here is an overview to bring you up to speed on what the case is about.

The undisputed facts

Per the parties’ stipulation, the following facts are not disputed:

The case involves numerous published books in which the publishers who filed this lawsuit (Hachette Book Group, HarperCollins, Penguin Random House, and John Wiley & Sons) hold exclusive rights, under the United States Copyright Act, to reproduce and distribute the works.

Internet Archive and Open Library of Richmond are nonprofit organizations the IRS has classified as 501(c)(3) public charities. These organizations purchased print copies of certain books identified in the lawsuit.

The core allegations

The plaintiffs allege that IA obtains print books that are protected by copyright, scans them into a digital format, uploads them to its servers, and then distributes these digital copies to members of the public via a website – all without a license and without any payment to authors and publishers. Plaintiffs allege that IA has already scanned 1.3 million books and plans to scan millions more. The complaint describes this as “willful digital piracy on an industrial scale.”

Defenses?

First sale doctrine

One justification that is sometimes advanced for making digital copies of a work available for free online without paying the author or publisher is the so-called “first sale” doctrine. This is an exception to copyright infringement liability that essentially allows the owner of a lawfully acquired copy of a work to sell, transfer or lend it to other people without incurring copyright infringement liability. For example, a person who buys a print edition of a book may lend it to a friend or sell it at a garage sale without having to get the copyright owner’s permission. More to the point, a library may purchase a copy of a print version of a book and proceed to lend it to library patrons without fear of incurring infringement liability for doing so.

The doctrine does not apply to all kinds of works, but it generally does apply to print books.

The first sale doctrine only provides an exception to infringement liability for the unauthorized distribution of a work, however. It does not provide an exception to liability for unauthorized reproduction of a work. (See 17 U.S.C. § 109.) Scanning books to make digital copies is an act of reproduction, not distribution. Accordingly, the first sale doctrine does not appear to be a good fit as a defense in this case.

“Controlled digital lending”

Public libraries lend physical copies of the books in their collections to library patrons for no charge. Based on this practice, a white paper published by David R. Hansen and Kyle K. Courtney makes the case for treating the distribution of digitized copies of books by libraries as fair use, where the library maintains a one-to-one ratio between the number of physical copies of a book it owns and the number of digital “check-outs” of the digital version it allows at any given time.

The theory, known as controlled digital lending (“CDL”), relies on an assumption that the distribution of a work electronically is the functional equivalent of distributing a physical copy of it, so long as the same limitations on the ability to “check out” the work from the library are imposed.

Publishers dispute this assumption. They take the position that there are important differences between e-books and print books. They maintain that these differences justify the distribution of e-books under a licensing program separate and distinct from their print book purchasing programs. They also question whether e-books are, in fact, distributed subject to the same limitations that apply to a print version of the book.

Fair use

Whether a particular kind of use of a copyright-protected work is “fair use” or not requires consideration of four factors: (1) the purpose and character of the use; (2) the nature of the copyrighted work; (3) the amount and substantiality of the portion copied; and (4) the effect of the use on the market for the work.

Supporters of free access to copyrighted works for all tend to focus on the “purpose and character” factor, arguing that free access to literary works greatly benefits the public. Authors and publishers tend to focus on the other factors. In this case, it seems possible that the factors relating to the amount copied and the effect of the use on the market for the work could weigh against a finding of fair use.

The federal district court in this case is being called upon to evaluate those factors and decide whether they weigh in favor of treating CDL – or at least, CDL as IA has applied and implemented it – as fair use or not.

Subscribe to The Cokato Copyright Attorney

The Cokato Copyright Attorney (Minnesota lawyer Thomas B. James) will be following this case closely. Subscribe for updates as the case makes its way through the courts.

Contact attorney Thomas James

Need help registering a copyright or trademark, or with a copyright or trademark problem? Contact Cokato, Minnesota attorney Tom James.
