Trump’s Executive Order on AI

On December 11, 2025, President Trump issued another Executive Order. This one is intended to promote “national dominance” in “a race with adversaries for supremacy.” To “win,” the Order says, AI companies should not be encumbered by state regulation. “The policy of the United States,” the Order says, is “to sustain and enhance the United States’ global AI dominance through a minimally burdensome national policy framework for AI.” It sets up an AI Litigation Task Force to challenge state AI laws that allegedly do not do that.

Excepted from the Order are state laws on child safety protections, data center infrastructure, and state government use of AI.

Which State AI Laws?

The Order speaks generally about “state AI laws,” but does not define the term. Here are some examples of state AI laws:

Stalking and Harassment

A North Dakota statute criminalizes using a robot to frighten or harass another person. It defines a robot to include a drone or other system that uses AI technology. (N.D. Cent. Code § 12.1-17-07(1), (2)(f)). This appears to be a “state AI law.” North Dakota statutes also prohibit stalking accomplished by using either a robot or a non-AI form of technology. (N.D. Cent. Code § 12.1-17-07.1(1)(d)). Preempting this statute would produce an anomalous result: stalking somebody would be a crime unless you used an AI-powered device to do it.

Political Deepfakes

Several states have enacted laws prohibiting the distribution of political deepfakes to influence an election. These regulations range from prohibiting the distribution of a deepfake within a specified period before an election to requiring disclosure that the content is AI-generated. Minn. Stat. § 609.771 is an example. The need for this kind of statute was highlighted in 2024, when someone used AI to clone Joe Biden’s voice and generate an audio file that sounded like Mr. Biden himself was urging people not to vote for him.

Sexual Deepfakes

Both state and federal governments have enacted laws aimed at curbing the proliferation of “revenge porn.” The TAKE IT DOWN Act is an example. Minn. Stat. § 604.32 is another example (deepfakes depicting intimate body parts or sexual acts).

State and federal laws in this area cover much of the same ground. The principal difference is that the federal crime must involve interstate commerce; state crimes do not. The only practical effect of preempting this kind of state AI law, therefore, would be to eliminate state prohibitions of wholly intrastate sexual deepfakes. If the Executive Order succeeds in its objectives, then state laws that prohibit the creation or distribution of sexual deepfakes wholly within a single state, as some do, would be preempted. The result: making and distributing sexual deepfakes would be lawful so long as they are transmitted only to people within the creator’s own state, not to anyone in a different state.

Digital Replicas

Many states have enacted laws prohibiting or regulating the unauthorized creation and exploitation of digital replicas. The California Digital Replicas Act and Tennessee’s ELVIS Act are examples. AI is used in the creation of digital replicas, but it is unclear whether these kinds of enactments are “state AI laws.” Arguably, a person could use technologies more primitive than generative AI to create a digital image of a person. If these statutes are preempted only to the extent they apply to AI-generated digital replicas, then those who exploit other people’s faces and voices for commercial gain without authorization would be incentivized to use AI to do it.

Child Pornography

Several states have either enacted laws or amended existing laws to bring AI-generated images of what look like real children within the prohibition against child pornography. See, e.g., N.D. Cent. Code § 12.1-27.2-01. The Executive Order exempts “child safety protections,” but real children do not necessarily have to be used in AI-generated images. This kind of state statute arguably would not come within the meaning of a “child safety protection.”

Health Care Oversight

California’s Physicians Make Decisions Act requires a human being to oversee health care decisions about medical necessity, ensuring that medical care is not left entirely up to an AI bot. The law was enacted with the support of the California Medical Association to protect patients’ access to adequate health care. If the law is nullified, then it would seem that hospitals would be free to replace doctors with AI chatbots.

Chatbots

Some states prohibit the deceptive use of a chatbot, such as by falsely representing to people who interact with one that they are interacting with a real person. In addition, some states have enacted laws requiring disclosure to consumers when they are interacting with a non-human AI. See, e.g., the Colorado Artificial Intelligence Act.

Privacy

Some states have enacted either stand-alone laws or amended existing privacy laws to ensure they protect the privacy of personally identifiable information stored by AI systems. See, e.g., Utah Code 13-721-201, -203 (regulating the sharing of a person’s mental health information by a chatbot); and amendments to the California Consumer Privacy Act making it applicable to information stored in an AI system.

Disclosure

California’s Generative AI Training Data Transparency Act requires disclosure of training data used in developing generative-AI technology.

The Texas Responsible Artificial Intelligence Governance Act

Among other things, the Texas Responsible AI Governance Act prohibits the use of AI to restrict constitutional rights, to discriminate on the basis of race, or to encourage criminal activity. These seem like reasonable proscriptions.

Trump’s “AI czar,” venture capitalist David Sacks, has said the administration is not going to “push back” on all state laws, only “the most onerous” ones. It is unclear which of these will be deemed “onerous.”

State AI Laws are Not Preempted

News media headlines are trumpeting that the Executive Order preempts state AI laws. This is not true. It directs this administration to try to strike down some state AI laws. It contemplates working with Congress to formulate and enact preemptive legislation. It is doubtful that a President could constitutionally preempt state laws by executive order.

Postscript

Striving for uniformity in the regulation of artificial intelligence is not a bad idea. There should be room, though, for both federal and state legislation. Rather than abolishing state laws, a uniform code or model act for states might be a better idea. Moreover, if we are going to start caring about an onerous complex of differing state laws, and feeling a need to establish a national framework, perhaps the President and Congress might wish to address the sprawling morass of privacy and data security regulations in the United States.

 

Joint Custody and Equal Shared Parenting Laws

Yes, this is off-topic. It is, however, the reason I haven’t been posting to this blog lately. In addition to finishing out some cases, I have been working on developing this 90-minute program for the past few months.

In what seems like a lifetime ago, I practiced family law. During that time, I witnessed first-hand the havoc the sole-custody regime wreaked on families, both parents and children. I’ve always believed there had to be a better way.

In this webinar, I will be presenting a brief overview of the joint custody and equal shared parenting laws of the fifty U.S. states. Professor Daniel Fernandez-Kranz will join me to talk about how equal shared parenting has been working in Spain. Kentucky family law attorney Carl Knochelmann, Jr. will discuss the impact of Kentucky’s statute, the first-ever presumptive equal shared parenting time law. Professor Donald Hubin will round things out with a look at what can be learned from Ohio’s experiences with both equal shared parenting and the traditional sole custody model. He will also present findings about the interplay of equal shared parenting laws and domestic violence, based on data gathered from Kentucky and Ohio.

California has approved the webinar for 90 minutes of MCLE and LSCLE (family law specialist) continuing legal education credits. Continuing legal and mediator education credits are available in many other states as well.

The live webinar is on October 24, 2024. There will be a video replay on November 8, 2024.

If you have an interest, you can find more information, and registration links, at EchionCLE.com

I promise I will get back to copyright and trademark issues soon.

Top IP Developments of 2023

2023 was a big year for U.S. intellectual property law. Major developments occurred in every area. Here are the highlights.

Copyright

Fair Use

Andy Warhol Foundation for the Visual Arts, Inc. v. Goldsmith et al.

I’ve written about this case before here and here. The Supreme Court issued a ruling in the case in May. The decision is significant because it finally reined in the “transformative use” doctrine that the Court first announced in Campbell v. Acuff-Rose Music back in 1994. In that case, 2 Live Crew had copied key parts of the Roy Orbison song “Oh, Pretty Woman” to make a parody of the song in its own rap style. The Court held that the 2 Live Crew version, although reproducing portions of both the original song and the original recording of it without permission, transformed it into something else. Because the copying served a transformative purpose, it was protected as fair use.

In the thirty years since Campbell, lower courts applied the “transformative use” principle in diverse and divergent ways, and some interpretations eviscerated the copyright owner’s exclusive right to make derivative works. The interpretations often conflicted: what one circuit called transformative “fair use” another circuit called actionable infringement. Hence the need for Supreme Court intervention.

In 1984, Vanity Fair licensed one of photographer Lynn Goldsmith’s photographs of Prince to illustrate a magazine article about him. Per the agreement, Andy Warhol made a silkscreen using the photograph for the magazine, and Vanity Fair credited the original photograph to Goldsmith. Unknown to her, however, Warhol proceeded to make 15 additional works based on Goldsmith’s photograph without her permission. In 2016, the Andy Warhol Foundation for the Visual Arts licensed one of them to Condé Nast as an illustration for one of its magazines. The Foundation received a cool $10,000 for it, with neither payment nor credit given to Goldsmith. The Foundation then filed a lawsuit seeking a declaration that its use of the photograph was a protected fair use under 17 U.S.C. § 107. The district court granted declaratory judgment in favor of the Foundation. The Second Circuit Court of Appeals reversed, ruling that the four-factor “fair use” analysis favored Goldsmith. The Supreme Court sided with the Court of Appeals.

Noting that it was not ruling on whether Warhol’s making of works using the photograph was fair use, the Court limited its analysis to the narrow question whether the Foundation’s licensing of the Warhol work to Condé Nast was fair use. On that point, the Court determined that the use of the photograph to illustrate a story about Prince was identical to the use Goldsmith had made of it (i.e., to illustrate a magazine article about Prince). Unlike 2 Live Crew’s use of “Oh, Pretty Woman,” the purpose of the use in this case was not to mock or parody the original work.

The case is significant for vindicating the Copyright Act’s promise to copyright owners of an exclusive right to make derivative works. While Warhol put his own artistic spin on the photograph – and that might have been sufficient to sustain a fair use defense if he had been the one being sued – the Warhol Foundation’s and Condé Nast’s purpose was no different from Goldsmith’s, i.e., as an illustration for an article about Prince. Differences in the purpose or character of a use, the Court held, “must be evaluated in the context of the specific use at issue.” Had the Warhol Foundation been sued for displaying Warhol’s modifications of the photograph for purposes of social commentary in its own gallery, the result might have been different.

Although the holding is a seemingly narrow one, the Court did take the opportunity to disapprove the lower court practice of ending a fair use inquiry at the moment an infringer asserted that an unauthorized copy or derivative work was created for a purpose different from the original author’s.

Statute of Limitations and Damages

Warner Chappell Music, Inc. v. Nealy

The U.S. Supreme Court has granted certiorari to review this Eleventh Circuit decision. At issue is whether a copyright plaintiff may recover damages for infringement that occurred outside of the limitations period, that is, infringement occurring more than three years before a lawsuit was filed.

The circuits are split on this question. According to the Second Circuit, damages are recoverable only for acts of infringement that occurred during the 3-year period preceding the filing of the complaint. The Ninth and Eleventh Circuits, on the other hand, have held that as long as the lawsuit is timely filed, damages may be awarded for infringement that occurred more than three years prior to the filing, at least when the discovery rule has been invoked to allow a later filing. In Nealy, the Eleventh Circuit held that damages may be recovered for infringement occurring more than three years before the claim is filed if the plaintiff did not discover the infringement until some time after it first began.

A decision will be coming in 2024.

Artificial Intelligence

Copyrightability

Thaler v. Perlmutter, et al.

This was an APA proceeding initiated in the U.S. District Court for the District of Columbia for review of the United States Copyright Office’s refusal to register a copyright in an AI-generated work. In August, the district court upheld the Copyright Office’s decision that an AI-generated work is not protected by copyright, asserting that “human creativity is the sine qua non at the core of copyrightability….” For purposes of the Copyright Act, only human beings can be “authors.” Machines, non-human animals, spirits and natural forces do not get copyright protection for their creations.

An appeal of the decision is pending in the D.C. Circuit Court of Appeals.

Infringement

Many cases filed in or still pending during 2023 allege that using copyrighted works to train AI, or creating derivative works using AI, infringes the copyrights in the works so used. Most of these cases make additional claims as well, such as claims of unfair competition, trademark infringement, or violations of publicity and DMCA rights.

I have been blogging about these cases throughout the year. Significant rulings on the issues raised in them are expected to be made in 2024.

Trademark

Parody Goods

Jack Daniel’s Properties, Inc. v. VIP Products LLC

For more information about this case, read my blog post about it here.

This is the “parody goods” case. VIP Products used the “Bad Spaniels” name to market its dog toys, which were patterned on the distinctive shape of a Jack Daniel’s whiskey bottle. VIP filed a lawsuit seeking a declaratory judgment that its product did not infringe the Jack Daniel’s brand. Jack Daniel’s counterclaimed for trademark infringement and dilution. Regarding infringement, VIP claimed First Amendment protection. Regarding dilution, VIP claimed the use was a parody of a famous mark and therefore qualified for protection as trademark fair use. The district court granted summary judgment to VIP.

The Supreme Court reversed. The Court held that when an alleged infringer uses the trademark of another (or something confusingly similar to it) as a designation of source for the infringer’s own goods, it is a commercial, not an expressive, use. Accordingly, the First Amendment is not a consideration in such cases.

Rogers v. Grimaldi had held that when the title of a creative work (in that case, a film) makes reference to a trademark for an artistic or expressive purpose (in that case, Fred Astaire and Ginger Rogers), the First Amendment shields the creator from trademark liability. In the Jack Daniel’s case, the Court distinguished Rogers, holding that it does not insulate the use of trademarks as trademarks (i.e., as indicators of the source or origin of a product or service) from ordinary trademark scrutiny. Even though the dog toys may have had an expressive purpose, VIP admitted it used Bad Spaniels as a source identifier. Therefore, the First Amendment does not apply.

The Court held that the same rule applies to dilution claims. The First Amendment does not shield parody goods from a dilution claim when the alleged diluter uses a mark (or something confusingly similar to it) as a designation of source for its own products or services.

International Law

Abitron Austria v. Hetronic International

Here, the Supreme Court held that the Lanham Act does not have extraterritorial reach. Specifically, the Court held that Sections 1114(1)(a) and 1125(a)(1) extend only to those claims where the infringing use in commerce occurs in the United States. They do not extend to infringement occurring solely outside of the United States, even if consumer confusion occurs in the United States.

The decision is a reminder to trademark owners that if they want to protect their trademark rights in other countries, they should take steps to protect their rights in those countries, such as by registering their trademarks there.

Patents

Patents are beyond the scope of this blog. Even so, a couple of developments are worth noting.

Enablement

Amgen v. Sanofi

In this case, the Supreme Court considered the validity, under the Patent Act’s enablement requirement (35 U.S.C. § 112(a)), of certain patents on antibodies used to lower cholesterol. At issue was whether Amgen could patent an entire genus of antibodies without disclosing sufficient information to enable a person skilled in the art to create the potentially millions of antibodies in it. The Court basically said no.

If a patent claims an entire class of processes, machines, manufactures, or compositions of matter, the patent’s specification must enable a person skilled in the art to make and use the entire class. In other words, the specification must enable the full scope of the invention as defined by its claims.

Amgen v. Sanofi, 598 U.S. ____ (2023)

Executive Power

In December, the Biden administration asserted that it can cite “excessive prices” to justify the exercise of Bayh-Dole march-in rights. The Biden Administration also has continued to support a World Trade Organization TRIPS patent waiver for COVID-19 medicines. These developments are obviously of some concern to pharmaceutical companies and members of the patent bar.

Conclusion

My vote for the most significant IP case of 2023 is Andy Warhol Foundation v. Goldsmith. Lower courts had all but allowed the transformative use defense to swallow up the exclusive right of a copyright owner to create derivative works. The Supreme Court provided much-needed correction. I predict that in 2024, the most significant decisions will also be in the copyright realm, but they will have to do with AI.

Copyright Registration and Management Services

There is now an inexpensive, intelligent alternative to “copyright mills” that is creator-friendly as well as a time-saver for attorneys.

In the United States, as in most countries, it is possible to own a copyright without registering it. Copyright registration is not a prerequisite to copyright protection. Rather, a copyright comes into being when a human being fixes original, creative expression in a tangible medium (or when it is fixed in a tangible medium at a human being’s direction). Nevertheless, there are important reasons why you should register a copyright in a work you’ve created, particularly if you live in the United States.

Reasons for registering copyrights

If you live in the United States, the most important reason for registering a copyright is that you will not be able to enforce it unless you do. As a condition of filing an infringement claim in court, the United States Copyright Act requires a copyright owner to have applied for registration and received either a certificate of registration or a denial of registration from the U.S. Copyright Office. Registration is not a prerequisite to serving a cease-and-desist letter or a DMCA take-down notice. If you want to enforce your copyright in court, though, then you will need to register it.

This is true, as well, of infringement claims filed in the new Copyright Claims Board (sometimes called the “copyright small claims court” or “CCB”). It is not necessary to have received either a registration certificate or denial letter from the Copyright Office before filing a claim with the CCB. It is necessary, however, to have at least applied for registration before filing a claim.

Prompt registration is also important. You may not be able to recover statutory damages and attorney fees in an action for copyright infringement unless you registered the copyright either within three months after first publication or before the infringement began.

Registration made before or within five years after first publication also gives you the benefit of a legal presumption that the copyright is valid, that you own it, and that the other facts stated in the certificate (date of creation, etc.) are true.

Registration is not only critical to enforcement; it is also an important defensive strategy. If someone else registers the work and you do not, then they get all of the benefits described above and you do not. As the original creator of a work, you do not want to find yourself in the position of being sued for “infringing” your own work.

Registration workarounds that aren’t

The “poor man’s copyright”

One dangerous myth that has been circulating for years is that simply mailing yourself a copy of your work will be enough to establish your rights in it. This is not true. Anybody can make a copy of someone else’s work and mail it to himself or herself. Even if doing that could establish a person’s rights in a work, it is still going to be necessary to register the copyright in the work in order to enforce it in the U.S. legal system. And you won’t get any of the other benefits of registration, either, unless you do.

Posting to YouTube or another Internet website

Posting a copy of a work to YouTube or another Internet website is a modern-day version of the “poor man’s copyright” myth. The best this will do, however, is provide a date and time of posting. That proves nothing about authorship, and it does not provide any of the benefits of registration.

Notary

Notarization only verifies the validity of a signature; it does not prove anything about authorship.

Having an agent, distributor or licensing agency

Having an agent or a distributor, or listing with ASCAP, for example, does not prove authorship, nor does it provide any of the benefits of registration.

Registries and databases

Some websites offer to list your work in a “registry” or other database, supposedly as a means of protecting your copyright in the work. Some of these websites border on fraud. “Registering” your work with a private company or service will not prove authorship and will not give you any of the other benefits of registration. In the United States, the benefits of registration flow only to those who register the copyrights in their works with the United States Copyright Office.

True copyright registration services

Not all online copyright registration services are scams. Some of them will actually get a customer’s copyright registered with the United States Copyright Office. It is still necessary to proceed with caution when using them, however. Here are some things to watch out for.

Per-work vs. per-application

Pay careful attention to whether service fees are charged “per work,” on one hand, or “per application,” on the other.

If you have more than one work to register, it may sometimes be possible to register them with the Copyright Office as a group rather than individually. For example, a group of up to ten unpublished works by the same author may be registered with the Office using one application and paying one filing fee. Similarly, up to 750 photographs can sometimes be registered together as a group using only one application and paying only one filing fee.

An online service that offers to register copyrights at the rate of $100 “per work” might not inform users about the Copyright Office’s group registration options. Imagine paying $75,000 in service fees, plus $33,750 in filing fees, to register copyrights in 750 photographs when you might have done it yourself, using one application, for a single $55 filing fee.

Single, standard or group application

Once you’ve selected a service whose rates are “per application” rather than “per work,” you will want to ensure that the service includes group registration options. If a service indicates that it will prepare a “single” or “standard” application, then this may mean that it will not prepare applications for group registrations. Find that out before proceeding.

GRAM and GRUW applications

If you are a musician or composer, you may be able to qualify for a significant discount on Copyright Office filing fees by filing a GRAM or GRUW application. These are special application forms that allow the registration of up to 10 unpublished songs, or up to 20 published songs on an album, using one application and paying one filing fee. They are relatively new additions to the Copyright Office’s application forms repertoire. Some registration services will not, or do not yet, work with them.

Fees

First, understand the difference between a service fee and the Copyright Office filing fee. The Copyright Office filing fee is usually going to be between $45 and $85, depending on the kind of application. When a website quotes a fee for the service it provides, the fee it quotes normally does not include the Copyright Office filing fee — unless, of course, the website expressly says so.

Online registration service companies charge different rates for their services. One attorney website I saw quoted a $500 flat fee “per work.” Presumably, that means he would charge $5,000 to register a group of 10 works.

Other services quote a much lower fee, typically somewhere between $100 and $250, either per work or per application.

These services typically are limited to filing a registration application, and nothing more. Some of them stand behind their work. Others charge additional fees if an application is rejected and they need to do additional work to fix the problem.

RightsClick™

A new online copyright service entered the scene last year. Called RightsClick™, it boasts processing fees that are 85% lower than those of most other registration services. Rather than charging $100 to $500 plus the Copyright Office filing fee, RightsClick charges $15 plus the Copyright Office filing fee.

It is also one of the few services that processes applications for group registration, and is up-front and clear about the cost. A group of up to 10 unpublished works, for example, can be registered for $15, that is to say, the same low processing fee that is charged for a single application.

There are monthly subscription charges, but even adding these into the mix does not bring the cost up to anything near to what many online services are charging.

The services provided include more than copyright registration, and additional features are planned for the future.

Learn more

Because I believe this innovative new service can be a great time and money saver for attorneys who work with authors and other copyright owners, I am hosting a continuing legal education (CLE) course through EchionCLE. It will be presented by Steven Tepp and David Newhoff, the developers of RightsClick. It will include an update on registration law and a demonstration of what RightsClick can do and how it works.

This program is FREE and is open to both attorneys and non-attorneys.

EchionCLE has applied to the Minnesota Board of Continuing Legal Education for 1.0 Standard CLE credit.

The live webinar will be held on May 17, 2023.

There will be a video replay on June 1, 2023.

For more detailed information, or to register, click here.

Disclosure statement

I do not own or have any interest in RightsClick. I have not been paid and have not received anything of value in connection with this post. This post is not an endorsement or advertisement for RightsClick or the services it offers. It is simply an announcement of what appears to me to be a service that could be of considerable benefit to authors, creators, publishers and attorneys.

AI Legal Issues

Thomas James (“The Cokato Copyright Attorney”) describes the range of legal issues, most of which have not yet been resolved, that artificial intelligence (AI) systems have spawned.

AI is not new, and neither is its implementation. In fact, consumers interact with AI-powered systems every day. Online help systems often use AI to provide quick answers to questions that customers routinely ask. Sometimes these are designed to give users the impression that they are communicating with a person.

AI systems also perform discrete functions such as analyzing a credit report and rendering a decision on a loan or credit card application, or screening employment applications.

Many other uses have been found for AI and new ones are being developed all the time. AI has been trained not just to perform customer service tasks, but also to perform analytics and diagnostic tests; to repair products; to update software; to drive cars; and even to write articles and create images and videos. These developments may be helping to streamline tasks and improve productivity, but they have also generated a range of new legal issues.

Tort liability

While there are many different kinds of tort claims, the elements of tort claims are basically the same: (1) The person sought to be held liable for damages or ordered to comply with a court order must have owed a duty to the person who is seeking the legal remedy; (2) the person breached that duty; (3) the person seeking the legal remedy experienced harm, i.e., real or threatened injury; and (4) the breach was the actual and proximate cause of the harm.

The kind of harm that must be demonstrated varies depending on the kind of tort claim. For example, a claim of negligent driving might involve bodily injury, while a claim of defamation might involve injury to reputation. For some kinds of tort claims, the harm might involve financial or economic injury. 

The duty may be specified in a statute or contract, or it might be judge-made (“common law”). It may take the form of an affirmative obligation (such as a doctor’s obligation to provide a requisite level of care to a patient), or it may take a negative form, such as the common law duty to refrain from assaulting another person.

The advent of AI does not really require any change in these basic principles, but they can be more difficult to apply to scenarios that involve the use of an AI system.

Example. Acme Co. manufactures and markets Auto-Doc, a machine that diagnoses and repairs car problems. Mike’s Repair Shop lays off its automotive technician employees and replaces them with one of these machines. Suzie Consumer brings her VW Jetta to Mike’s Repair Shop for service because she has been hearing what she describes as a grinding noise that she thinks is coming from either the engine or the glove compartment. The Auto-Doc machine adds engine oil, replaces belts, and removes the contents of the glove compartment. Later that day, Suzie’s brakes fail and her vehicle hits and kills a pedestrian in a crosswalk. A forensic investigation reveals that her brakes failed because they were badly worn. Who should be held liable for the pedestrian’s death – Suzie, Mike’s, Acme Co., some combination of two of them, all of them, or none of them?

The allocation of responsibility will depend, in part, on the degree of autonomy the AI machine possesses. Of course, if it can be shown that Suzie knew or should have known that her brakes were bad, then she most likely could be held responsible for causing the pedestrian’s death. But what about the others? If the machine is completely autonomous, then Acme might be held responsible for failing to program it to test for and detect worn brake pads even when a customer expresses an erroneous belief that the sound is coming from the engine or the glove compartment. On the other hand, if the machine is designed only to offer suggestions of possible problems and solutions, leaving it up to a mechanic to accept or reject them, then Mike’s might be held responsible for negligently accepting the machine’s recommendations.

Assuming the Auto-Doc machine is fully autonomous, should Mike’s be faulted for relying on it to correctly diagnose car problems? Is Mike’s entitled to rely on Acme’s representations about Auto-Doc’s capabilities, or would the repair shop have a duty to inquire about and/or investigate Auto-Doc’s limitations? Assuming Suzie did not know, and had no reason to suspect, her brakes were worn out, should she be faulted for relying on a fully autonomous machine instead of taking the car to a trained human mechanic? Why or why not?

Criminal liability

It is conceivable that an AI system might engage in activity that is prohibited by an applicable jurisdiction’s criminal laws. E-mail address harvesting is an example. In the United States, the CAN-SPAM Act makes it a crime to send a commercial email message to an email address that was obtained by automated scraping of Internet websites for email addresses. Of course, if a person intentionally uses an AI system for scraping, then liability should be clear. But what if an AI system “learns” to engage in scraping?

AI-generated criminal output may also be a problem. Some countries have made it a crime to display a Nazi symbol, such as a swastika, on a website. Will criminal liability attach if a website or blog owner uses AI to generate illustrated articles about World War II and the system generates and displays articles that are illustrated with World War II era German flags and military uniforms? In the United States, creating or possessing child pornography is illegal. Will criminal liability attach if an AI system generates it?

Some of these kinds of issues can be resolved through traditional legal analysis of the intent and scienter elements of the definitions of crimes. A jurisdiction might wish to consider, however, whether AI systems should be regulated to require system creators to implement measures that would prevent illegal uses of the technology. This raises policy and feasibility questions, such as whether and what kinds of restraints on machine learning should be required, and how to enforce them. Further, would prior restraints on the design and/or use of AI-powered expressive-content-generating systems infringe on First Amendment rights?  

Product liability

Related to the problem of allocating responsibility for harm caused by the use of an AI mechanism is the question whether anyone should be held liable for harm the mechanism causes when it is not defective, that is to say, when it is operating as it should.

Example. Acme Co. manufactures and sells Auto-Article, a software program that is designed to create content of a type the user specifies. The purpose of the product is to enable a website owner to generate and publish a large volume of content frequently, thereby improving the website’s search engine ranking. It operates by scouring the Internet, analyzing instances of the content the user specifies, and producing new content that “looks like” them. XYZ Co. uses the software to generate articles on medical topics. One of these articles explains that chest pain can be caused by esophageal spasms but that these typically do not require treatment unless they occur frequently enough to interfere with a person’s ability to eat or drink. Joe is experiencing chest pain. He does not seek medical help, however, because he read the article and therefore believes he is experiencing esophageal spasms. He later collapses and dies from a heart attack. A medical doctor is prepared to testify that his death could have been prevented if he had sought medical attention when he began experiencing the pain.

Should either Acme or XYZ Co. be held liable for Joe’s death? Acme could argue that its product was not defective. It was fit for its intended purpose, namely, generating articles that look like articles of the kind a user specifies. What about XYZ Co.? Would the answer be different if XYZ had published a notice on its site that the information provided in its articles is not necessarily complete and that the articles are not a substitute for advice from a qualified medical professional? If XYZ incurs liability as a result of the publication, would it have a claim against Acme, such as for failure to warn it of the risks of using AI to generate articles on medical topics?

Consumer protection

AI system deployment raises significant health and safety concerns. There is the obvious example of an AI system making incorrect medical diagnoses or treatment recommendations. Autonomous (“self-driving”) motor vehicles are another example. An extensive body of consumer protection regulations may be anticipated.

Forensic and evidentiary issues

In situations involving the use of semi-autonomous AI, allocating responsibility for harm resulting from the operation of the AI system may be difficult. The most basic question in this respect is whether an AI system was in use or not. For example, if a motor vehicle that can be operated in either manual or autonomous mode is involved in an accident, and fault or the extent of liability depends on that (see the discussion of tort liability, above), then a way of determining the mode in which the car was being driven at the time will be needed.

If, in the case of a semi-autonomous AI system, tort liability must be allocated between the creator of the system and a user of it, the question of fault may depend on who actually caused a particular tortious operation to be executed – the system creator or the user. In that event, some method of retracing the steps the AI system used may be essential. This may also be necessary in situations where some factor other than AI contributed, or might have contributed, to the injury. Regulation may be needed to ensure that the steps in an AI system’s operations are, in fact, capable of being ascertained.

Transparency problems also fall into this category. As explained in the Journal of Responsible Technology, people might be put on no-fly lists, denied jobs or benefits, or refused credit without knowing anything more than that the decision was made through some sort of automated process. Even if transparency is achieved and/or mandated, contestability will also be an issue.

Data Privacy

To the extent an AI system collects and stores personal or private information, there is a risk that someone may gain unauthorized access to it. Depending on how the system is designed to function, there is also a risk that it might autonomously disclose legally protected personal or private information. Security breaches can cause catastrophic problems for data subjects.

Publicity rights

Many jurisdictions recognize a cause of action for violation of a person’s publicity rights (sometimes called “misappropriation of personality”). In these jurisdictions, a person has an exclusive legal right to commercially exploit his or her own name, likeness or voice. To what extent, and under what circumstances, should liability attach if a commercialized AI system analyzes the name, likeness or voice of a person that it discovers on the Internet? Will the answer depend on how much information about a particular individual’s voice, name or likeness the system uses, on one hand, or how closely the generated output resembles that individual’s voice, name or likeness, on the other?

Contracts

The primary AI-related contract concern is about drafting agreements that adequately and effectively allocate liability for losses resulting from the use of AI technology. Insurance can be expected to play a larger role as the use of AI spreads into more areas.

Bias, Discrimination, Diversity & Inclusion

Some legislators have expressed concern that AI systems will reflect and perpetuate biases and perhaps discriminatory patterns of culture. To what extent should AI system developers be required to ensure that the data their systems use are collected from a diverse mixture of races, ethnicities, genders, gender identities, sexual orientations, abilities and disabilities, socioeconomic classes, and so on? Should developers be required to apply some sort of principle of “equity” with respect to these classifications, and if so, whose vision of equity should they be required to enforce? To what extent should government be involved in making these decisions for system developers and users?

Copyright

AI-generated works like articles, drawings, animations, music and so on, raise two kinds of copyright issues:

  1. Input issues, i.e., questions like whether AI systems that create new works based on existing copyright-protected works infringe the copyrights in those works; and
  2. Output issues, such as who, if anybody, owns the copyright in an AI-generated work.

I’ve written about AI copyright ownership issues and AI copyright infringement issues in previous blog posts on The Cokato Copyright Attorney.

Patents and other IP

Computer programs can be patented. AI systems can be devised to write computer programs. Can an AI-generated computer program that meets the usual criteria for patentability (novelty, utility, etc.) be patented?

Is existing intellectual property law adequate to deal with AI-generated inventions and creative works? The World Intellectual Property Organization (WIPO) apparently does not think so. It is formulating recommendations for new regulations to deal with the intellectual property aspects of AI.

Conclusion

AI systems raise a wide range of legal issues. The ones identified in this article are merely a sampling, not a complete listing of all possible issues. Not all of these legal issues have answers yet. It can be expected that more AI regulatory measures, in more jurisdictions around the globe, will be coming down the pike very soon.

Contact attorney Thomas James

Contact Minnesota attorney Thomas James for help with copyright and trademark registration and other copyright and trademark related matters.

The Top Copyright Cases of 2021

by Minnesota attorney Thomas James

I initially had set out to put together a “Top 10” list. Really, though, I think the list can be boiled down to three. Admittedly, this is only my personal opinion. Time will tell. Nevertheless, for what it’s worth, here is my list of the 3 Top Copyright Cases of 2021.

Google v. Oracle America

Google v. Oracle America, __ U.S. __, 141 S. Ct. 1183 (2021)

This United States Supreme Court decision is the culmination of many years of litigation between tech giants Google and Oracle.

At issue was Google’s copying of 11,500 lines of code of the Java SE API. Illustrating the murkiness of the “fair use” concept, the United States Supreme Court declared that this was fair use.

The case highlights the relatively weak protection that copyright offers for computer programs. The functional aspects of a computer program are better protected by patent than copyright.

It is dangerous to read too much into the decision, though. It does not mean that computer program copyrights are worthless. To the contrary, the case was decided on the basis of fair use. Google’s copying of the code was infringement. “Fair use” simply means that a court came to the conclusion that a particular defendant should not be held liable for a particular kind or instance of infringement. Another court could come to a different conclusion in a different case involving different parties, a different kind of computer program, and a different kind of use of it.

Warhol v. Goldsmith

Andy Warhol Foundation for the Visual Arts v. Goldsmith, No. 19-2420 (2nd Cir. 2021).

This case is notable primarily because of the celebrities involved. Lynn Goldsmith took a photograph of Prince in her studio in 1981. Andy Warhol created a series of silkscreen prints and pencil illustrations based on it. Goldsmith sued for infringement of the copyright in the photograph. The district court found in favor of Warhol, citing the transformative use doctrine. The Court of Appeals reversed, asserting that the district court misapplied the four “fair use” factors.

Reversals of “fair use” findings on appeal are not uncommon. They illustrate the nebulous nature of the four-factor test that courts use to evaluate fair use claims.

Design Basics v. Signature Construction

Design Basics v. Signature Construction, No. 19-2716 (7th Cir. 2021).

Design Basics holds registered copyrights in thousands of floor plans for single-family homes. The company attempts to secure “prompt settlements” of infringement claims. The court ruled against the company on an infringement claim, finding that its designs consisted mainly of unprotectable stock elements, many of which were dictated by functional considerations and existing design conventions.

Architectural designs are protected by copyright, but the protection is thin. Only a “strikingly similar” work can give rise to an infringement claim. In other words, infringement of an architectural work requires a showing of extremely close copying.

Need help with a copyright registration or a copyright matter? Contact the Cokato Copyright Attorney Tom James.

“Sales and Use Tax Nexus: The Way Forward for Legislation” by Tom James

By Tom James, Published in 2020

Read the full article at: open.mitchellhamline.edu

“Sales and Use Tax Nexus: The Way Forward for Legislation,” Mitchell-Hamline Law Journal of Public Policy & Practice, vol. 41, no. 1 (2020)