AI Legislative Update

Congressional legislation to regulate artificial intelligence (“AI”) and AI companies is in the early formative stages. Just about the only thing that is certain at this point is that federal regulation in the United States is coming.

In August 2023, Senators Richard Blumenthal (D-CT) and Josh Hawley (R-MO) introduced a Bipartisan Framework for U.S. AI Act. The Framework sets out five bullet points identifying Congressional legislative objectives:

  • Establish a federal regulatory regime implemented through licensing AI companies, to include requirements that AI companies provide information about their AI models and maintain “risk management, pre-deployment testing, data governance, and adverse incident reporting programs.”
  • Ensure accountability for harms through both administrative enforcement and private rights of action, where “harms” include privacy or civil rights violations. The Framework proposes making Section 230 of the Communications Decency Act inapplicable to these kinds of actions. (Section 230 is the provision that generally grants immunity to Facebook, Google and other online service providers for user-provided content.) The Framework identifies the harms about which it is most concerned as “explicit deepfake imagery of real people, production of child sexual abuse material from generative A.I. and election interference.” Noticeably absent is any mention of harms caused by copyright infringement.
  • Restrict the sharing of AI technology with Russia, China or other “adversary nations.”
  • Promote transparency: The Framework would require AI companies to disclose information about the limitations, accuracy and safety of their AI models to users; would give consumers a right to notice when they are interacting with an AI system; would require providers to watermark or otherwise disclose AI-generated deepfakes; and would establish a public database of AI-related “adverse incidents” and harm-causing failures.
  • Protect consumers and kids. “Consumers should have control over how their personal data is used in A.I. systems and strict limits should be imposed on generative A.I. involving kids.”

The Framework does not address copyright infringement, whether of the input or the output variety.

The Senate Judiciary Committee Subcommittee on Privacy, Technology, and the Law held a hearing on September 12, 2023. Witnesses called to testify generally approved of the Framework as a starting point.

The Senate Commerce, Science, and Transportation Subcommittee on Consumer Protection, Product Safety and Data Security also held a hearing on September 12, titled “The Need for Transparency in Artificial Intelligence.” One of the witnesses, Dr. Ramayya Krishnan of Carnegie Mellon University, did raise a concern about the use of copyrighted material by AI systems and the economic harm it causes for creators.

On September 13, 2023, Sen. Chuck Schumer (D-NY) held an “AI Roundtable.” Invited attendees present at the closed-door session included Bill Gates (Microsoft), Elon Musk (xAI, Neuralink, etc.), Sundar Pichai (Google), Charlie Rivkin (MPA), and Mark Zuckerberg (Meta). Gates, whose Microsoft company, like those headed by some of the other invitees, has been investing heavily in generative-AI development, touted AI’s potential to help combat world hunger.

Meanwhile, Dana Rao, Adobe’s Chief Trust Officer, penned a proposal that Congress establish a federal anti-impersonation right to address the economic harms generative-AI causes when it impersonates the style or likeness of an author or artist. The proposed law would be called the Federal Anti-Impersonation Right Act, or “FAIR Act,” for short. The proposal would provide for the recovery of statutory damages by artists who are unable to prove actual economic damages.

Generative AI: Perfect Tool for the Age of Deception

For many reasons, the new millennium might well be described as the Age of Deception. Cokato Copyright Attorney Tom James explains why generative-AI is a perfect fit for it.

Illustrating generative AI. Image by Gerd Altmann on Pixabay.

What is generative AI?

“AI,” of course, stands for artificial intelligence. Generative AI is a variety of it that can produce content such as text and images, seemingly of its own creation. I say “seemingly” because these tools are not really creating images and lines of text independently. Rather, they are “trained” to emulate existing works created by humans. Essentially, they are derivative-work generation machines, enabling the creation of derivative works based on potentially millions of human-created works.
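For readers who want a concrete sense of what “emulating existing works” means, the toy Python sketch below illustrates the basic idea in deliberately oversimplified form. It “trains” on a tiny sample of text by recording which word tends to follow which, then generates “new” text by replaying those patterns. Real generative-AI systems use far more sophisticated statistical models and vastly larger training sets, but the point the sketch illustrates is the same: the output is assembled from patterns found in the human-created training material, not conjured out of nothing.

```python
import random
from collections import defaultdict

# A tiny "training set" standing in for the millions of human-created
# works a real generative-AI system is trained on.
corpus = (
    "the quick brown fox jumps over the lazy dog "
    "the lazy dog sleeps while the quick fox runs"
).split()

# "Training": record which words follow which in the source text.
transitions = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word].append(next_word)

# "Generation": produce new text by replaying the recorded patterns.
def generate(start_word, length=10):
    word = start_word
    output = [word]
    for _ in range(length):
        followers = transitions.get(word)
        if not followers:
            break
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

print(generate("the"))
# One possible output:
# "the lazy dog sleeps while the quick brown fox jumps over"
```

Every word this sketch produces comes from the sample text, and every two-word sequence it produces appears somewhere in that text. Modern systems operate at a far higher level of abstraction, but they, too, derive their output from the works on which they were trained.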

AI has been around for decades. It wasn’t until 2014, however, that the technology began to be refined to the point that it could generate text, images, video and audio so similar to real people and their creations that it is difficult, if not impossible, for the average person to tell the difference.

Rapid advances in the technology in the past few years have yielded generative-AI tools that can write entire stories and articles, seemingly paint artistic images, and even generate what appear to be photographic images of people.

AI “hallucinations” (aka lies)

In the AI field, a “hallucination” occurs when an AI tool (such as ChatGPT) generates a confident response that is not justified by the data on which it has been trained.

For example, I queried ChatGPT about whether a company owned equally by a husband and wife could qualify for the preferences the federal government sets aside for women-owned businesses. The chatbot responded with something along the lines of “Certainly” or “Absolutely,” explaining that the U.S. government is required to provide equal opportunities to all people without discriminating on the basis of sex. When I cited the provision of federal law that contradicts what the chatbot had just asserted, it replied with an apology and something to the effect of “My bad.”

I also asked ChatGPT if any U.S. law imposes unequal obligations on male citizens. The chatbot cheerily reported back to me that no, no such laws exist. I then cited the provision of the United States Code that imposes an obligation to register for Selective Service only upon male citizens. The bot responded that while that is true, it is unimportant and irrelevant because there has not been a draft in a long time and there is not likely to be one anytime soon. I explained to the bot that this response was beside the point: young men can be, and are, denied government employment and other civic rights and benefits if they fail to register, regardless of whether a draft is in place and regardless of whether they are prosecuted criminally. At this point, ChatGPT announced that it would not be able to continue the conversation with me, offering some made-up excuse; I don’t remember exactly what it was, but it was something like too many users being logged on at the time.

These are all examples of AI hallucinations. If a human being were to say them, we would call them “lies.”

Generating lie after lie

AI tools regularly concoct lies. For example, when asked to generate a financial statement for a company, a popular AI tool falsely stated the company’s revenue, apparently having simply made the number up. According to a Slate article, “The Alarming Deceptions at the Heart of an Astounding New Chatbot,” users of large language models like ChatGPT have been complaining that these tools randomly insert falsehoods into the text they generate. Experts now consider frequent “hallucination” (aka lying) to be a major problem in chatbots.

ChatGPT has also generated fake case precedents, replete with plausible-sounding citations. This phenomenon made the news when attorney Steven Schwartz submitted six fake ChatGPT-generated case precedents in a brief to the federal district court for the Southern District of New York in Mata v. Avianca. Schwartz reported that ChatGPT continued to insist the fake cases were authentic even after their nonexistence was discovered. In the wake of the incident, U.S. District Judge Brantley Starr of the Northern District of Texas began requiring attorneys to certify either that no portion of a filing was generated by AI or that any AI-generated content was reviewed by a human, explaining that generative-AI tools

are prone to hallucinations and bias…. [T]hey make stuff up – even quotes and citations. Another issue is reliability or bias. While attorneys swear an oath to set aside their personal prejudices,… generative artificial intelligence is the product of programming devised by humans who did not have to swear such an oath. As such, these systems hold no allegiance to…the truth.

Judge Brantley Starr, Mandatory Certification Regarding Generative Artificial Intelligence.

Facilitating defamation

Section 230 of the Communications Decency Act generally shields Facebook, Google and other online services from liability for providing a platform for users to publish false and defamatory information about other people. That has been a real boon for people who like to destroy other people’s reputations by means of spreading lies and misinformation about them online. It can be difficult and expensive to sue an individual for defamation, particularly when the individual has taken steps to conceal and/or lie about his or her identity. Generative AI tools make the job of defaming people even simpler and easier.

More concerning than the malicious defamatory liars, however, are the many people who earnestly rely on AI as a research tool. In July 2023, Mark Walters filed a lawsuit against OpenAI, claiming its ChatGPT tool provided false and defamatory misinformation about him to journalist Fred Riehl. I wrote about this lawsuit in a previous blog post. Shortly after this lawsuit was filed, a defamation lawsuit was filed against Microsoft, alleging that its AI tool, too, had generated defamatory lies about an individual. Generative-AI tools can generate false and defamatory statements about individuals even if no one has any intention of defaming anyone or ruining another person’s reputation.

Facilitating false light invasion of privacy

Generative AI is also highly effective in portraying people in a false light. In one recently filed lawsuit, Jack Flora and others allege, among other things, that Prisma Labs’ Lensa app generates sexualized images from images of fully-clothed people, and that the company failed to notify users about the biometric data it collects and how it will be stored and/or destroyed. Flora et al. v. Prisma Labs, Inc., No. 23-cv-00680 (N.D. Cal. February 15, 2023).

Pot, meet kettle; kettle, pot

“False news is harmful to our community, it makes the world less informed, and it erodes trust. . . . At Meta, we’re working to fight the spread of false news.” Meta (née Facebook) published that statement back in 2017. Since then, it has engaged in what is arguably the most ambitious campaign in history to monitor and regulate the content of conversations among humans. Yet it has also joined other mega-organizations Google and Microsoft in investing multiple billions of dollars in what is the greatest boon to fake news in recorded history: generative AI.

Toward a braver new world

It would be difficult to imagine a more efficient method of facilitating widespread lying and deception (not to mention false and hateful rhetoric) – and therefore propaganda – than generative-AI. Yet, these mega-organizations continue to sink more and more money into further development and deployment of these lie-generators.

I dread what the future holds in store for our children and theirs.

Another AI lawsuit against Microsoft and OpenAI

Last June, Microsoft, OpenAI and others were hit with a class action lawsuit involving their AI data-scraping technologies. On Tuesday (September 5, 2023) another class action lawsuit was filed against them. The gravamen of both of these complaints is that these companies allegedly trained their AI technologies using personal information from millions of users, in violation of federal and state privacy statutes and other laws.

Among the laws alleged to have been violated are the Electronic Communications Privacy Act, the Computer Fraud and Abuse Act, the California Invasion of Privacy Act, California’s unfair competition law, Illinois’s Biometric Information Privacy Act, and the Illinois Consumer Fraud and Deceptive Business Practices Act. The lawsuits also allege a variety of common law claims, including negligence, invasion of privacy, conversion, unjust enrichment, breach of the duty to warn, and such.

This is just the most recent lawsuit in a growing body of claims against big AI. Many involve allegations of copyright infringement, but privacy is a growing concern. This particular suit is asking for an award of monetary damages and an order that would require the companies to implement safeguards for the protection of private data.

Microsoft reportedly has invested billions of dollars in OpenAI and its app, ChatGPT.

The case is A.T. v. OpenAI LP, U.S. District Court for the Northern District of California, No. 3:23-cv-04557 (September 5, 2023).

Is Microsoft “too big to fail” in court? We shall see.

Sham Books: The latest generative-AI scam

Copyright issues raised by generative AI (artificial intelligence) have been receiving extensive coverage and discussion lately. Generative AI has given rise to another kind of problem as well: people are generating books “in the style of” books by well-known authors and marketing them to the public as if they were written by those authors when in fact they were not.

Fake books

Jane Friedman was one of the first to report the problem of AI-generated fake books.

The way it works is this: A person asks a generative-AI tool to write a book in the style of a particular named author. Usually it is a well-known author and/or one whose books sell well. The person then creates a listing on Amazon or another online marketplace for the book, misrepresenting it to be the work of the named author rather than AI-generated. Proceeds from sales of these unauthorized knock-offs are then shared between the marketplace provider (Amazon, eBay, etc.) and the fraudster.

Removal difficulties

It can be difficult for an author to get these knock-offs removed. Of course, if you are able to prove that one of these sham books infringes the copyright in one of your works, that should provide a basis for removal. In many cases, however, it can be difficult to prove that an AI-generated book actually copied from any particular book. A book “in the style of” so-and-so might have a completely different setting, plot, characters and so on. Generative-AI tools can generate a book on a theme that a named author commonly writes about, but copyright cannot be claimed in themes.

Trademark law is not necessarily of much help, either. Publishing under a name under which someone else is already publishing is not illegal. In fact, it is quite common. For example, five different people named Scott Adams publish under that name.

Because these sham books are not pirated or counterfeit copies of any existing work, and because an author may not have secured a trademark registration in his or her name (something that is not always possible), it can be difficult to get a title removed on the basis of copyright or trademark infringement.

The Lanham Act

The Lanham Act, sometimes called the Trademark Act, is a federal law that prohibits a wider range of activity than merely trademark infringement. It also prohibits false or misleading designations of origin and false advertising, including attempts to pass off a product as somebody else’s. No trademark registration is necessary for these kinds of Lanham Act claims.

These provisions offer a small glimmer of hope. Unfortunately, these kinds of claims are not as easy for marketplace providers like Amazon to sort out, as compared with a claim that someone is using a trademark that is confusingly similar to one that has been registered.

Other legal remedies

The Copyright Act and Lanham Act are not the only possible sources of legal recourse. Book authorship fraud is likely unlawful under state unfair competition and deceptive trade practices laws. In many jurisdictions, a claim for damages for misappropriation of name or likeness, or of exclusive publicity rights, may be viable.

As a practical matter, though, these rights may be difficult to enforce. Marketplace providers are equipped to handle claims where someone is able to produce a trademark or copyright registration certificate to support their claims, but they are not courts. They are not equipped to decide the kinds of fact issues that typically need to be decided in order to resolve competing claims to rights in a work, or likelihood of confusion and so on.

This seems to me to be yet another aspect of generative-AI that is ripe for legislation.


Photograph by Martin Vorel, https://creativecommons.org/licenses/by-sa/4.0, via Wikimedia Commons. The image has not been modified. No suggestion is made that the licensor endorses this author or this use.

AI Legal Issues

Thomas James (“The Cokato Copyright Attorney”) describes the range of legal issues, most of which have not yet been resolved, that artificial intelligence (AI) systems have spawned.

AI is not new. Its implementation also is not new. In fact, consumers regularly interact with AI-powered systems every day. Online help systems often use AI to provide quick answers to questions that customers routinely ask. Sometimes these are designed to give a user the impression that s/he is communicating with a person.

AI systems also perform discrete functions such as analyzing a credit report and rendering a decision on a loan or credit card application, or screening employment applications.

Many other uses have been found for AI and new ones are being developed all the time. AI has been trained not just to perform customer service tasks, but also to perform analytics and diagnostic tests; to repair products; to update software; to drive cars; and even to write articles and create images and videos. These developments may be helping to streamline tasks and improve productivity, but they have also generated a range of new legal issues.

Tort liability

While there are many different kinds of tort claims, the elements of tort claims are basically the same: (1) The person sought to be held liable for damages or ordered to comply with a court order must have owed a duty to the person who is seeking the legal remedy; (2) the person breached that duty; (3) the person seeking the legal remedy experienced harm, i.e., real or threatened injury; and (4) the breach was the actual and proximate cause of the harm.

The kind of harm that must be demonstrated varies depending on the kind of tort claim. For example, a claim of negligent driving might involve bodily injury, while a claim of defamation might involve injury to reputation. For some kinds of tort claims, the harm might involve financial or economic injury. 

The duty may be specified in a statute or contract, or it might be judge-made (“common law”). It may take the form of an affirmative obligation (such as a doctor’s obligation to provide a requisite level of care to a patient), or it may take a negative form, such as the common law duty to refrain from assaulting another person.

The advent of AI does not really require any change in these basic principles, but they can be more difficult to apply to scenarios that involve the use of an AI system.

Example. Acme Co. manufactures and markets Auto-Doc, a machine that diagnoses and repairs car problems. Mike’s Repair Shop lays off its automotive technician employees and replaces them with one of these machines. Suzie Consumer brings her VW Jetta to Mike’s Repair Shop for service because she has been hearing a sound she describes as a grinding noise that she thinks is coming from either the engine or the glove compartment. The Auto-Doc machine adds engine oil, replaces belts, and removes the contents of the glove compartment. Later that day, Suzie’s brakes fail and her vehicle hits and kills a pedestrian in a crosswalk. A forensic investigation reveals that her brakes failed because they were badly worn. Who should be held liable for the pedestrian’s death – Suzie, Mike’s, Acme Co., some combination of two of them, all of them, or none of them?

The allocation of responsibility will depend, in part, on the degree of autonomy the AI machine possesses. Of course, if it can be shown that Suzie knew or should have known that her brakes were bad, then she most likely could be held responsible for causing the pedestrian’s death. But what about the others? If the machine is completely autonomous, then Acme might be held responsible for failing to program it in such a way that it would test for and detect worn brake pads even if a customer expresses an erroneous belief that the sound is coming from the engine or the glove compartment. On the other hand, if the machine is designed only to offer suggestions of possible problems and solutions, leaving it up to a mechanic to accept or reject them, then Mike’s might be held responsible for negligently accepting the machine’s recommendations.

Assuming the Auto-Doc machine is fully autonomous, should Mike’s be faulted for relying on it to correctly diagnose car problems? Is Mike’s entitled to rely on Acme’s representations about Auto-Doc’s capabilities, or would the repair shop have a duty to inquire about and/or investigate Auto-Doc’s limitations? Assuming Suzie did not know, and had no reason to suspect, her brakes were worn out, should she be faulted for relying on a fully autonomous machine instead of taking the car to a trained human mechanic?  Why or why not?

Criminal liability

It is conceivable that an AI system might engage in activity that is prohibited by an applicable jurisdiction’s criminal laws. E-mail address harvesting is an example. In the United States, the CAN-SPAM Act makes it a crime to send a commercial email message to an email address that was obtained by automated scraping of Internet websites for email addresses. Of course, if a person intentionally uses an AI system for scraping, then liability should be clear. But what if an AI system “learns” to engage in scraping?

AI-generated criminal output may also be a problem. Some countries have made it a crime to display a Nazi symbol, such as a swastika, on a website. Will criminal liability attach if a website or blog owner uses AI to generate illustrated articles about World War II and the system generates and displays articles that are illustrated with World War II era German flags and military uniforms? In the United States, creating or possessing child pornography is illegal. Will criminal liability attach if an AI system generates it?

Some of these kinds of issues can be resolved through traditional legal analysis of the intent and scienter elements of the definitions of crimes. A jurisdiction might wish to consider, however, whether AI systems should be regulated to require system creators to implement measures that would prevent illegal uses of the technology. This raises policy and feasibility questions, such as whether and what kinds of restraints on machine learning should be required, and how to enforce them. Further, would prior restraints on the design and/or use of AI-powered expressive-content-generating systems infringe on First Amendment rights?  

Product liability

Related to the problem of allocating responsibility for harm caused by the use of an AI mechanism is the question whether anyone should be held liable for harm caused when the mechanism is not defective, that is to say, when it is operating as it should.

Example. Acme Co. manufactures and sells Auto-Article, a software program that is designed to create content of a type and kind the user specifies. The purpose of the product is to enable a website owner to generate and publish a large volume of content frequently, thereby improving the website’s search engine ranking. It operates by scouring the Internet and analyzing instances of the content the user specifies to produce new content that “looks like” them. XYZ Co. uses the software to generate articles on medical topics. One of these articles explains that chest pain can be caused by esophageal spasms but that these typically do not require treatment unless they occur frequently enough to interfere with a person’s ability to eat or drink. Joe is experiencing chest pain. He does not seek medical help, however, because he read the article and therefore believes he is experiencing esophageal spasms. He later collapses and dies from a heart attack. A medical doctor is prepared to testify that his death could have been prevented if he had sought medical attention when he began experiencing the pain.

Should either Acme or XYZ Co. be held liable for Joe’s death? Acme could argue that its product was not defective. It was fit for its intended purposes, namely, a machine learning system that generates articles that look like articles of the kind a user specifies. What about XYZ Co.? Would the answer be different if XYZ had published a notice on its site that the information provided in its articles is not necessarily complete and that the articles are not a substitute for advice from a qualified medical professional? If XYZ incurs liability as a result of the publication, would it have a claim against Acme, such as for failure to warn it of the risks of using AI to generate articles on medical topics?

Consumer protection

AI system deployment raises significant health and safety concerns. There is the obvious example of an AI system making incorrect medical diagnoses or treatment recommendations. Autonomous (“self-driving”) motor vehicles are also examples. An extensive body of consumer protection regulations may be anticipated.

Forensic and evidentiary issues

In situations involving the use of semi-autonomous AI, allocating responsibility for harm resulting from the operation of the AI system may be difficult. The most basic question in this respect is whether an AI system was in use or not. For example, if a motor vehicle that can be operated in either manual or autonomous mode is involved in an accident, and fault or the extent of liability depends on which mode was engaged (see the discussion of tort liability, above), then a way of determining the mode in which the car was being driven at the time will be needed.

If, in the case of a semi-autonomous AI system, tort liability must be allocated between the creator of the system and a user of it, the question of fault may depend on who actually caused a particular tortious operation to be executed – the system creator or the user. In that event, some method of retracing the steps the AI system used may be essential. This may also be necessary in situations where some factor other than AI contributed, or might have contributed, to the injury. Regulation may be needed to ensure that the steps in an AI system’s operations are, in fact, capable of being ascertained.

Transparency problems also fall into this category. As explained in the Journal of Responsible Technology, people might be put on no-fly lists, denied jobs or benefits, or refused credit without knowing anything more than that the decision was made through some sort of automated process. Even if transparency is achieved and/or mandated, contestability will also be an issue.

Data Privacy

To the extent an AI system collects and stores personal or private information, there is a risk that someone may gain unauthorized access to it. Depending on how the system is designed to function, there is also a risk that it might autonomously disclose legally protected personal or private information. Security breaches can cause catastrophic problems for data subjects.

Publicity rights

Many jurisdictions recognize a cause of action for violation of a person’s publicity rights (sometimes called “misappropriation of personality”). In these jurisdictions, a person has an exclusive legal right to commercially exploit his or her own name, likeness or voice. To what extent, and under what circumstances, should liability attach if a commercialized AI system analyzes the name, likeness or voice of a person that it discovers on the Internet? Will the answer depend on how much information about a particular individual’s voice, name or likeness the system uses, on one hand, or how closely the generated output resembles that individual’s voice, name or likeness, on the other?

Contracts

The primary AI-related contract concern is about drafting agreements that adequately and effectively allocate liability for losses resulting from the use of AI technology. Insurance can be expected to play a larger role as the use of AI spreads into more areas.

Bias, Discrimination, Diversity & Inclusion

Some legislators have expressed concern that AI systems will reflect and perpetuate biases and perhaps discriminatory patterns of culture. To what extent should AI system developers be required to ensure that the data their systems use are collected from a diverse mixture of races, ethnicities, genders, gender identities, sexual orientations, abilities and disabilities, socioeconomic classes, and so on? Should developers be required to apply some sort of principle of “equity” with respect to these classifications, and if so, whose vision of equity should they be required to enforce? To what extent should government be involved in making these decisions for system developers and users?

Copyright

AI-generated works like articles, drawings, animations, music and so on raise two kinds of copyright issues:

  1. Input issues, i.e., questions like whether AI systems that create new works based on existing copyright-protected works infringe the copyrights in those works
  2. Output issues, such as who, if anybody, owns the copyright in an AI-generated work.

I’ve written about AI copyright ownership issues and AI copyright infringement issues in previous blog posts on The Cokato Copyright Attorney.

Patents and other IP

Computer programs can be patented. AI systems can be devised to write computer programs. Can an AI-generated computer program that meets the usual criteria for patentability (novelty, utility, etc.) be patented?

Is existing intellectual property law adequate to deal with AI-generated inventions and creative works? The World Intellectual Property Organization (WIPO) apparently does not think so. It is formulating recommendations for new regulations to deal with the intellectual property aspects of AI.

Conclusion

AI systems raise a wide range of legal issues. The ones identified in this article are merely a sampling, not a complete listing of all possible issues. Not all of these legal issues have answers yet. It can be expected that more AI regulatory measures, in more jurisdictions around the globe, will be coming down the pike very soon.

Contact attorney Thomas James

Contact Minnesota attorney Thomas James for help with copyright and trademark registration and other copyright and trademark related matters.
