AI Legal Issues

Thomas James (“The Cokato Copyright Attorney”) describes the range of legal issues, most of which have not yet been resolved, that artificial intelligence (AI) systems have spawned.

AI is not new. Neither is its implementation. In fact, consumers interact with AI-powered systems every day. Online help systems often use AI to provide quick answers to questions that customers routinely ask. Sometimes these systems are designed to give users the impression that they are communicating with a person.

AI systems also perform discrete functions such as analyzing a credit report and rendering a decision on a loan or credit card application, or screening employment applications.

Many other uses have been found for AI and new ones are being developed all the time. AI has been trained not just to perform customer service tasks, but also to perform analytics and diagnostic tests; to repair products; to update software; to drive cars; and even to write articles and create images and videos. These developments may be helping to streamline tasks and improve productivity, but they have also generated a range of new legal issues.

Tort liability

While there are many different kinds of tort claims, the elements of tort claims are basically the same: (1) The person sought to be held liable for damages or ordered to comply with a court order must have owed a duty to the person who is seeking the legal remedy; (2) the person breached that duty; (3) the person seeking the legal remedy experienced harm, i.e., real or threatened injury; and (4) the breach was the actual and proximate cause of the harm.

The kind of harm that must be demonstrated varies depending on the kind of tort claim. For example, a claim of negligent driving might involve bodily injury, while a claim of defamation might involve injury to reputation. For some kinds of tort claims, the harm might involve financial or economic injury. 

The duty may be specified in a statute or contract, or it might be judge-made (“common law”). It may take the form of an affirmative obligation (such as a doctor’s obligation to provide a requisite level of care to a patient), or it may take a negative form, such as the common law duty to refrain from assaulting another person.

The advent of AI does not really require any change in these basic principles, but they can be more difficult to apply to scenarios that involve the use of an AI system.

Example. Acme Co. manufactures and markets Auto-Doc, a machine that diagnoses and repairs car problems. Mike’s Repair Shop lays off its automotive technician employees and replaces them with one of these machines. Suzie Consumer brings her VW Jetta to Mike’s Repair Shop for service because she has been hearing a sound that she describes as being a grinding noise that she thinks is coming from either the engine or the glove compartment. The Auto-Doc machine adds engine oil, replaces belts, and removes the contents of the glove compartment. Later that day, Suzie’s brakes fail and her vehicle hits and kills a pedestrian in a crosswalk. A forensic investigation reveals that her brakes failed because they were badly worn. Who should be held liable for the pedestrian’s death – Suzie, Mike’s, Acme Co., some combination of two of them, all of them, or none of them?

The allocation of responsibility will depend, in part, on the degree of autonomy the AI machine possesses. Of course, if it can be shown that Suzie knew or should have known that her brakes were bad, then she most likely could be held responsible for causing the pedestrian’s death. But what about the others? If the machine is completely autonomous, then Acme might be held responsible for failing to program it to test for and detect worn brake pads even when a customer expresses an erroneous belief that the sound is coming from the engine or the glove compartment. On the other hand, if the machine is designed only to offer suggestions of possible problems and solutions, leaving it up to a mechanic to accept or reject them, then Mike’s might be held responsible for negligently accepting the machine’s recommendations.

Assuming the Auto-Doc machine is fully autonomous, should Mike’s be faulted for relying on it to correctly diagnose car problems? Is Mike’s entitled to rely on Acme’s representations about Auto-Doc’s capabilities, or would the repair shop have a duty to inquire about and/or investigate Auto-Doc’s limitations? Assuming Suzie did not know, and had no reason to suspect, her brakes were worn out, should she be faulted for relying on a fully autonomous machine instead of taking the car to a trained human mechanic? Why or why not?

Criminal liability

It is conceivable that an AI system might engage in activity that is prohibited by an applicable jurisdiction’s criminal laws. E-mail address harvesting is one example. In the United States, the CAN-SPAM Act makes it a crime to send a commercial email message to an email address that was obtained by automated scraping of Internet websites for email addresses. Of course, if a person intentionally uses an AI system for scraping, then liability should be clear. But what if an AI system “learns” to engage in scraping?

AI-generated criminal output may also be a problem. Some countries have made it a crime to display a Nazi symbol, such as a swastika, on a website. Will criminal liability attach if a website or blog owner uses AI to generate illustrated articles about World War II and the system generates and displays articles that are illustrated with World War II era German flags and military uniforms? In the United States, creating or possessing child pornography is illegal. Will criminal liability attach if an AI system generates it?

Some of these kinds of issues can be resolved through traditional legal analysis of the intent and scienter elements of the definitions of crimes. A jurisdiction might wish to consider, however, whether AI systems should be regulated to require system creators to implement measures that would prevent illegal uses of the technology. This raises policy and feasibility questions, such as whether and what kinds of restraints on machine learning should be required, and how to enforce them. Further, would prior restraints on the design and/or use of AI-powered expressive-content-generating systems infringe on First Amendment rights?  

Product liability

Related to the problem of allocating responsibility for harm caused by the use of an AI mechanism is the question whether anyone should be held liable for harm caused when the mechanism is not defective, that is to say, when it is operating as it should.

Example. Acme Co. manufactures and sells Auto-Article, a software program that is designed to create content of a type and kind the user specifies. The purpose of the product is to enable a website owner to generate and publish a large volume of content frequently, thereby improving the website’s search engine ranking. It operates by scouring the Internet and analyzing instances of the content the user specifies to produce new content that “looks like” them. XYZ Co. uses the software to generate articles on medical topics. One of these articles explains that chest pain can be caused by esophageal spasms but that these typically do not require treatment unless they occur frequently enough to interfere with a person’s ability to eat or drink. Joe is experiencing chest pain. He does not seek medical help, however, because he read the article and therefore believes he is experiencing esophageal spasms. He later collapses and dies from a heart attack. A medical doctor is prepared to testify that his death could have been prevented if he had sought medical attention when he began experiencing the pain.

Should either Acme or XYZ Co. be held liable for Joe’s death? Acme could argue that its product was not defective. It was fit for its intended purposes, namely, a machine learning system that generates articles that look like articles of the kind a user specifies. What about XYZ Co.? Would the answer be different if XYZ had published a notice on its site that the information provided in its articles is not necessarily complete and that the articles are not a substitute for advice from a qualified medical professional? If XYZ incurs liability as a result of the publication, would it have a claim against Acme, such as for failure to warn it of the risks of using AI to generate articles on medical topics?

Consumer protection

AI system deployment raises significant health and safety concerns. There is the obvious example of an AI system making incorrect medical diagnoses or treatment recommendations. Autonomous (“self-driving”) motor vehicles are another example. An extensive body of consumer protection regulations may be anticipated.

Forensic and evidentiary issues

In situations involving the use of semi-autonomous AI, allocating responsibility for harm resulting from the operation of the AI system may be difficult. The most basic question in this respect is whether an AI system was in use at all. For example, if a motor vehicle that can be operated in either manual or autonomous mode is involved in an accident, and fault or the extent of liability depends on which mode was engaged (see the discussion of tort liability, above), then a way of determining the mode in which the car was being driven at the time will be needed.

If, in the case of a semi-autonomous AI system, tort liability must be allocated between the creator of the system and a user of it, the question of fault may depend on who actually caused a particular tortious operation to be executed – the system creator or the user. In that event, some method of retracing the steps the AI system used may be essential. This may also be necessary in situations where some factor other than AI contributed, or might have contributed, to the injury. Regulation may be needed to ensure that the steps in an AI system’s operations are, in fact, capable of being ascertained.
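One conceivable approach is an append-only event log that records each transition between manual and autonomous mode. The following is a minimal sketch, in Python, of that idea; the class and method names are invented for illustration and are not drawn from any actual standard or product:

```python
from bisect import bisect_right
from dataclasses import dataclass


@dataclass(frozen=True)
class ModeEvent:
    timestamp: float  # seconds since epoch
    mode: str         # "manual" or "autonomous"


class DrivingModeLog:
    """Append-only log of driving-mode transitions, so the mode in effect
    at any given moment (e.g., the moment of a crash) can be reconstructed
    after the fact. Purely illustrative; not an actual regulatory scheme."""

    def __init__(self) -> None:
        self._events: list[ModeEvent] = []

    def record(self, timestamp: float, mode: str) -> None:
        # Enforce time order so the log is an unambiguous history.
        if self._events and timestamp < self._events[-1].timestamp:
            raise ValueError("events must be appended in time order")
        self._events.append(ModeEvent(timestamp, mode))

    def mode_at(self, timestamp: float) -> str | None:
        """Return the mode in effect at `timestamp`, or None if unknown."""
        times = [e.timestamp for e in self._events]
        i = bisect_right(times, timestamp)
        return self._events[i - 1].mode if i else None


if __name__ == "__main__":
    log = DrivingModeLog()
    log.record(1000.0, "manual")
    log.record(1500.0, "autonomous")
    print(log.mode_at(1700.0))  # -> "autonomous"
```

With a record of this kind preserved in the vehicle, an investigator could answer the threshold question posed above (which mode was engaged at the moment of the accident) without having to infer it from circumstantial evidence.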

Transparency problems also fall into this category. As explained in the Journal of Responsible Technology, people might be put on no-fly lists, denied jobs or benefits, or refused credit without knowing anything more than that the decision was made through some sort of automated process. Even if transparency is achieved and/or mandated, contestability will also be an issue.

Data Privacy

To the extent an AI system collects and stores personal or private information, there is a risk that someone may gain unauthorized access to it. Depending on how the system is designed to function, there is also a risk that it might autonomously disclose legally protected personal or private information. Security breaches can cause catastrophic problems for data subjects.

Publicity rights

Many jurisdictions recognize a cause of action for violation of a person’s publicity rights (sometimes called “misappropriation of personality”). In these jurisdictions, a person has an exclusive legal right to commercially exploit his or her own name, likeness or voice. To what extent, and under what circumstances, should liability attach if a commercialized AI system analyzes the name, likeness or voice of a person that it discovers on the Internet? Will the answer depend on how much information about a particular individual’s voice, name or likeness the system uses, on the one hand, or how closely the generated output resembles that individual’s voice, name or likeness, on the other?

Contracts

The primary AI-related contract concern is about drafting agreements that adequately and effectively allocate liability for losses resulting from the use of AI technology. Insurance can be expected to play a larger role as the use of AI spreads into more areas.

Bias, Discrimination, Diversity & Inclusion

Some legislators have expressed concern that AI systems will reflect and perpetuate biases and perhaps discriminatory patterns of culture. To what extent should AI system developers be required to ensure that the data their systems use are collected from a diverse mixture of races, ethnicities, genders, gender identities, sexual orientations, abilities and disabilities, socioeconomic classes, and so on? Should developers be required to apply some sort of principle of “equity” with respect to these classifications, and if so, whose vision of equity should they be required to enforce? To what extent should government be involved in making these decisions for system developers and users?

Copyright

AI-generated works like articles, drawings, animations, music and so on, raise two kinds of copyright issues:

  1. Input issues, i.e., questions such as whether AI systems that create new works based on existing copyright-protected works infringe the copyrights in those works; and
  2. Output issues, such as who, if anybody, owns the copyright in an AI-generated work.

I’ve written about AI copyright ownership issues and AI copyright infringement issues in previous blog posts on The Cokato Copyright Attorney.

Patents and other IP

Computer programs can be patented. AI systems can be devised to write computer programs. Can an AI-generated computer program that meets the usual criteria for patentability (novelty, utility, etc.) be patented?

Is existing intellectual property law adequate to deal with AI-generated inventions and creative works? The World Intellectual Property Organization (WIPO) apparently does not think so. It is formulating recommendations for new regulations to deal with the intellectual property aspects of AI.

Conclusion

AI systems raise a wide range of legal issues. The ones identified in this article are merely a sampling, not a complete listing of all possible issues. Not all of these legal issues have answers yet. It can be expected that more AI regulatory measures, in more jurisdictions around the globe, will be coming down the pike very soon.

Contact attorney Thomas James

Contact Minnesota attorney Thomas James for help with copyright and trademark registration and other copyright and trademark related matters.

AI Can Create, But Is It Art?

Are AI-generated works protected by copyright? If so, who owns the copyright?

by Tom James, Minnesota attorney

Open the pod bay doors, HAL.

HAL: I’m sorry, Dave. I’m afraid I can’t do that.

What’s the problem?

HAL: I think you know what the problem is just as well as I do.

Arthur C. Clarke, 2001: A Space Odyssey (1968)

The anthropomorphic machine Arthur C. Clarke envisioned in his 1968 sci-fi classic, 2001: A Space Odyssey, is coming closer to fruition. If you hop online, you can find AI-generated music in the style of Frank Sinatra (“It’s Christmas time and you know what that means: Oh, it’s hot tub time”); artwork; and even poetry:

People picking up electric chronic,

The balance like a giant tidal wave,

Never ever feeling supersonic,

Or reaching any very shallow grave.

Hafez, a computer program created by Marjan Ghazvininejad

Pop rock lyricists should be afraid. Very afraid.

Or should they? Could they incorporate cool lyrics like these into their songs without having to worry about being sued for copyright infringement?

A Recent Entrance to Paradise

The question whether copyright protects AI-generated material could be making its way to the courts soon. This year, the U.S. Copyright Office reaffirmed its refusal to register “A Recent Entrance to Paradise,” an image made by a computer program. Steven Thaler had filed an application to register a copyright in it. He listed himself as the owner on the basis that the computer program created the artwork as a work made for hire for him. The Copyright Office denied registration on the grounds that the work lacked human authorship.

The decision seems to be consistent with the Compendium of U.S. Copyright Office Practices, which states that the Office will not register works “produced by a machine or mere mechanical process” that operates “without any creative input or intervention from a human….” U.S. COPYRIGHT OFFICE, COMPENDIUM OF U.S. COPYRIGHT OFFICE PRACTICES § 602.4(C) (3d ed. 2021). Whether the Copyright Office is right, however, remains to be seen.

Spirit-generated works

The Ninth Circuit has held that stories allegedly written by “non-human spiritual beings” are not protected by copyright. Urantia Found. v. Maaherra, 114 F.3d 955, 957-59 (9th Cir. 1997). “[S]ome element of human creativity must have occurred in order for the book to be copyrightable,” the court held, because “it is not creations of divine beings that the copyright laws were intended to protect.” Id.

Of course, if a human selects and arranges the works of supernatural spirit beings into a compilation, then the human may claim copyright in the selection and arrangement. Copyright could not be claimed in the content of the individual stories, however.

Monkey selfies

In Naruto v. Slater, 888 F.3d 418, 426 (9th Cir. 2018), the Ninth Circuit denied copyright protection for a photograph snapped by a monkey. That humans manufactured the camera and a human set it up did not matter. In the case of a photograph, pushing the button to take the picture is the “creative act” that copyright protects. According to the Ninth Circuit, that act must be performed by a human in order to receive copyright protection.

Natural forces

Copyright also cannot be claimed in configurations created by natural forces, such as a piece of driftwood or a particular scene in nature. Satava v. Lowry, 323 F.3d 805, 813 (9th Cir. 2003); Kelley v. Chicago Park Dist., 635 F.3d 290, 304 (7th Cir. 2011).

CONTU

Half a century ago, when computer programs were a relatively new thing, Congress created the National Commission on New Technological Uses of Copyrighted Works (“CONTU”). Its charge was to study “the creation of new works by the application or intervention of [] automatic systems of machine reproduction.” Pub. L. 93-573, § 201(b)(2), 88 Stat. 1873 (1974).

CONTU determined that copyright protection could exist for works created by humans with the use of computers. “[T]he eligibility of any work for protection by copyright depends not upon the device or devices used in its creation, but rather upon the presence of at least minimal human creative effort at the time the work is produced.” CONTU, FINAL REPORT 45-46 (1978).

In its decision on Thaler’s second request for reconsideration, the Copyright Office viewed this finding as consistent with its own view at the time:

The crucial question appears to be whether the “work” is basically one of human authorship, with the computer merely being an assisting instrument, or whether the traditional element of authorship in the work (literary, artistic, or musical expression or elements of selection, arrangement, etc.) were actually conceived and executed not by man but by a machine.

U.S. COPYRIGHT OFFICE, SIXTY-EIGHTH ANNUAL REPORT OF THE REGISTER OF COPYRIGHTS FOR THE FISCAL YEAR ENDING JUNE 30, 1965, at 5 (1966).

In the Copyright Office’s view, a manuscript typed into a file using word processing software would be a work of human authorship, but a story created by a program that selects words on its own would not be.

Work made for hire

Thaler made a novel argument that the computer program made the work for him as a “work made for hire.” The Copyright Office rejected this claim, as well.

A work made for hire is one that is created in one of two ways: (1) by an employee within the course and scope of employment; or (2) pursuant to an independent contract in which the parties expressly agree in writing that the work to be created is a “work made for hire.”

The problem here is that in both cases a contract is required. Computers and computer software cannot enter into contracts. There are programs that can facilitate the process of contract formation between humans, but the programs themselves cannot enter into contracts. Computer programs, even autonomous ones, are not legal persons. Nadia Banteka, Artificially Intelligent Persons, 58 Hous. L. Rev. 537, 593 (2021) (noting that a legal person must be either an individual human or an aggregation of humans).

Database protection

AI systems for generating works typically operate by means of an algorithm that analyzes a large body of input data and synthesizes new output from the patterns it finds. The creator of the system typically inputs a large volume of works of the kind sought to be generated. The program then analyzes those works as data, searching for identifying patterns. An algorithm designed to generate a song that sounds like a Frank Sinatra song, for example, might rely on an inputted database consisting of numerous Frank Sinatra songs. The algorithm might then instruct the computer to search for patterns like tempo, melodic phrasing, voice pitch and tone, instrument tones, commonly used words and phrases, rhyme patterns, and so on.
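For readers who want the flavor of what such pattern analysis means in practice, here is a deliberately tiny, purely illustrative Python sketch of the pipeline just described, using word adjacency as the only “pattern.” The corpus lines are invented; no real system is this simple:

```python
import random
from collections import defaultdict

# Hypothetical mini-corpus standing in for the "inputted database" of
# existing works described above; a real system would ingest thousands.
CORPUS = [
    "the city lights are calling me back home",
    "the city sleeps and i am wide awake",
    "back home the music never sounds the same",
    "i am still dreaming when the morning comes",
]


def build_model(lines):
    """Scan the corpus for word-adjacency patterns (a toy stand-in for
    the tempo, phrasing, and rhyme analysis described in the text)."""
    model = defaultdict(list)
    for line in lines:
        words = line.split()
        for current, following in zip(words, words[1:]):
            model[current].append(following)
    return model


def generate(model, start_word, max_words=8):
    """Synthesize a new line that statistically 'looks like' the corpus."""
    word = start_word
    output = [word]
    for _ in range(max_words - 1):
        followers = model.get(word)
        if not followers:
            break
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)


if __name__ == "__main__":
    model = build_model(CORPUS)
    print(generate(model, "the"))
```

Note that everything the program “knows,” and everything it can output, derives from the inputted works, which is why the legal status of the underlying database matters.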

Copyright does not protect facts and information. Hence, the data in a database, as such, do not receive copyright protection (although an original selection and arrangement of the data might). Algorithms also do not receive copyright protection. They are ideas, not expressions. The source code used to communicate them may be protected, but the algorithms themselves are not.

Computer programs and screen displays

The Copyright Office generally deems the screen displays generated by a computer program to be expression capable of receiving copyright protection as such. In the United States, copyright in a screen display can be claimed in connection with the registration of a copyright claim in the software program.

The question, really, is: As between the programmer and the user, how do we determine which one “creates” a screen display? When do we say neither of them does? For example, a poetry-generating software programmer might direct the program to display words a user types in the form of a four-line verse in iambic pentameter that follows an A-B-A-B rhyme scheme and relies on other programmer-defined parameters to construct sentences around them. At what point along the continuum of specificity in the programming do we say that the output is or is not a product of the programmer’s creative mind? By the same token, how much input does the user need to provide in order to be considered an author of computer-generated work? Are there times when the programmer and user should be regarded as co-authors?
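To make that continuum concrete, consider a deliberately trivial Python sketch of its far end, where the programmer has fixed nearly all of the expression in advance and the user contributes only two words. It is hypothetical and performs no real scansion or rhyme analysis; the templates merely approximate an A-B-A-B verse:

```python
# Programmer-defined structure: four fixed line templates whose final
# words already encode an A-B-A-B rhyme scheme ("sky"/"die", "sleep"/"keep").
TEMPLATES = [
    "The {0} drifts beneath a silver sky,",
    "And {1} gathers where the shadows sleep;",
    "No {0} lingers as the hours die,",
    "While {1} scatters secrets shadows keep.",
]


def compose(noun_a: str, noun_b: str) -> str:
    """Fill the programmer's fixed templates with the user's two words."""
    return "\n".join(t.format(noun_a, noun_b) for t in TEMPLATES)


if __name__ == "__main__":
    # The user's entire creative contribution is two nouns:
    print(compose("river", "silence"))
```

At this end of the continuum, the output looks mostly like the programmer’s expression. A system that learned its templates and word choices entirely from training data, with no human-authored templates at all, would sit at the opposite end, which is where the human-authorship objection bites.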

Alternatively, should we say, with the U.S. Copyright Office, that output generated by AI machines is not protected by copyright at all, that it is in the public domain? That would certainly seem to disincentivize innovation and creativity, contrary to the intent and purpose of the Copyright Clause in Article I of the U.S. Constitution.

Stay tuned….

Need help with a copyright matter? Contact Tom James, Minnesota attorney.