Balancing the First Amendment on Whiskey and Dog Toys

The US Supreme Court has heard oral arguments and will soon decide the fate of the “Bad Spaniels” dog toy.

The United States Supreme Court has weighed First Amendment rights in the balance against many things: privacy, national security, the desire to protect children from hearing a bad word on the radio, to name a few. Now the Court will need to balance them against trademark interests. The Court heard oral arguments in Jack Daniel’s Props. v. VIP Prods., No. 22-148, on March 22, 2023.

I’ve written about this case before. To summarize, it is a dispute between whiskey manufacturer Jack Daniel’s and dog-toy maker VIP Products. The dog toy in question is shaped like a bottle of Jack Daniel’s whiskey and has a label that looks like the famous whiskey label. Instead of “Jack Daniel’s,” though, the dog toy is called “Bad Spaniels.” Instead of “Old No. 7 Brand Tennessee sour mash whiskey,” the dog toy label reads, “Old No. 2 on your Tennessee carpet.”

VIP sued for a declaratory judgment to the effect that this does not amount to trademark infringement or dilution. Jack Daniel’s filed a counterclaim alleging that it does. The trial court ruled in favor of the whiskey maker, finding a likelihood of consumer confusion existed. The Ninth Circuit Court of Appeals, however, reversed. The appeals court held that the dog toys came within the “noncommercial use” exception to dilution liability. Regarding the infringement claim, the court held, basically, that the First Amendment trumps private trademark interests. A petition for U.S. Supreme Court review followed.

Rogers v. Grimaldi

Rogers v. Grimaldi, 875 F.2d 994 (2d Cir. 1989), is a leading case on collisions between trademark rights and the First Amendment. In that case, Ginger Rogers, Fred Astaire’s famous dance partner, brought suit against the makers of a movie called “Ginger and Fred.” She claimed that the title created the false impression that the movie was about her or that she sponsored, endorsed or was affiliated with it in some way. The Second Circuit affirmed the trial court’s ruling against her, on the basis that the title of the movie was artistic expression, protected by the First Amendment as such.

When the medium is the message

Some commentators have suggested that the balance struck in favor of the First Amendment in Rogers v. Grimaldi should apply only in cases involving traditional conveyors of expressive content, i.e., books, movies, drawings, and the like. They would say that when the product involved has a primarily non-expressive purpose (such as an object for a dog to chew), traditional trademark analysis focused on likelihood of confusion should apply, without a First Amendment override.

Does this distinction hold water, though? True, commercial speech receives a lower level of protection than artistic or political speech does, but books, movies, music, artwork, video games, software, and many other items containing expressive content are packaged and marketed commercially, just as dog toys are. Moreover, if a banana taped to a wall is a medium of artistic expression, on what basis can we logically differentiate a case where a dog toy is used as the medium of expression?

A decision is expected in June.

Is Jazz Confusingly Similar to Music?

Is jazz confusingly similar to music? No, that wasn’t the issue in this case. It was a contest between APPLE JAZZ and APPLE MUSIC involving tacking.

In Bertini v. Apple Inc., No. 21-2301 (Fed. Cir. 2023), Apple attempted to register the trademark APPLE MUSIC for both sound recordings and live musical performances (among other things). Bertini, a professional musician, filed an opposition, claiming to have used the mark APPLE JAZZ in connection with live performances since 1985, and to have started using it in connection with sound recordings in the 1990s. Bertini argued that APPLE MUSIC would likely cause confusion with APPLE JAZZ.

The first question that popped into my head, of course, was whether a consumer would really be likely to confuse jazz with music. I mean, come on.

Sadly, however, that was not the legal issue in this case. The legal issue was whether, and in what circumstances, priority of use can be established by “tacking” a new trademark registration onto an earlier one for a mark in a different category of goods or services.

The Opposition Proceeding

Apple applied to register APPLE MUSIC as a trademark in several categories of services in IC 41, including the production and distribution of sound recordings, and organizing and presenting live musical performances. Bertini, a professional jazz musician, filed a notice of opposition to Apple’s application, on the basis that he has used the mark APPLE JAZZ in connection with live performances since 1985. In the mid-1990s, Bertini began using APPLE JAZZ to issue and distribute sound recordings. Bertini opposed Apple’s registration of APPLE MUSIC on the ground that it would likely cause confusion with Bertini’s common law trademark APPLE JAZZ.

The Trademark Trial and Appeal Board (TTAB) dismissed the opposition. Bertini appealed to the Federal Circuit Court of Appeals.

The Appeal

On appeal, the parties did not dispute that there was a likelihood that consumers would confuse Bertini’s use of APPLE JAZZ with Apple’s use of APPLE MUSIC. The only dispute on appeal was priority of use.

Apple, Inc. began using APPLE MUSIC in 2015, when it launched its music streaming service, nearly thirty years after Bertini had begun using APPLE JAZZ. Apple, however, argued that it was entitled to an earlier priority dating back to a 1968 registration of the mark APPLE for “Gramophone records featuring music” and “audio compact discs featuring music.” (The company had purchased the registration from Apple Corps, the Beatles’ record company.)

The TTAB agreed with Apple’s argument, holding that Apple was entitled to tack its 2015 use of APPLE MUSIC onto Apple Corps’ 1968 use of APPLE and therefore had priority over Bertini’s APPLE JAZZ.

The appellate court reversed, holding that Apple cannot tack its use of APPLE MUSIC for live musical performances onto Apple Corps’ use of APPLE for gramophone records, and that its application to register APPLE MUSIC must therefore be denied.

The Court of Appeals construed the tacking doctrine narrowly. Framing the question as “[W]hether a trademark applicant can establish priority for every good or service in its application merely because it has priority through tacking in a single good or service listed in its application,” the Court answered no. While Apple might have been able to use tacking to claim priority in connection with the production and distribution of sound recordings, it could not use that priority to establish priority with respect to other categories of services, such as organizing and presenting live performances.

Contact attorney Thomas James for help with trademark registration.

Copyright Registration and Management Services

There is now an inexpensive, intelligent alternative to “copyright mills” that is creator-friendly as well as a time-saver for attorneys.

In the United States, as in most countries, it is possible to own a copyright without registering it. Copyright registration is not a prerequisite to copyright protection. Rather, a copyright comes into being when a human being fixes original, creative expression in a tangible medium (or when it is fixed in a tangible medium at a human being’s direction). Nevertheless, there are important reasons why you should register a copyright in a work you’ve created, particularly if you live in the United States.

Reasons for registering copyrights

If you live in the United States, the most important reason for registering a copyright is that you will not be able to enforce it unless you do. As a condition of filing an infringement claim in court, the United States Copyright Act requires a copyright owner to have applied for registration and received either a certificate of registration or a denial of registration from the U.S. Copyright Office. Registration is not a prerequisite to serving a cease-and-desist letter or a DMCA take-down notice. If you want to enforce your copyright in court, though, then you will need to register it.

This is true, as well, of infringement claims filed in the new Copyright Claims Board (sometimes called the “copyright small claims court” or “CCB”). It is not necessary to have received either a registration certificate or denial letter from the Copyright Office before filing a claim with the CCB. It is necessary, however, to have at least applied for registration before filing a claim.

Prompt registration is also important. You may not be able to recover statutory damages and attorney fees in an action for copyright infringement unless you registered the copyright either within three months after first publication or before the infringement began.
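The timing rule described above can be sketched as a simple check. This is purely an illustration of the rule as summarized in this post, not legal advice; the function name is my own, the three-month window is approximated as 90 days, and edge cases (unpublished works, pre-registration, etc.) are ignored.

```python
from datetime import date

def statutory_damages_available(registered, published, infringement_began):
    """Sketch of the timing rule summarized above: statutory damages
    and attorney fees are generally available only if the copyright
    was registered before the infringement began, or within three
    months after first publication.
    Illustrative only; the three-month window is approximated as
    90 days, and edge cases are ignored."""
    if registered <= infringement_began:
        return True
    return (registered - published).days <= 90

# Registered 45 days after first publication, so the grace period
# applies even though the infringement began before registration:
print(statutory_damages_available(date(2023, 2, 15),
                                  date(2023, 1, 1),
                                  date(2023, 2, 1)))  # True
```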

Registration gives you the benefit of a legal presumption that the copyright is valid. It also gives rise to a presumption of ownership, and that all of the other facts stated in the certificate (date of creation, etc.) are true.

Registration is not only critical to enforcement; it is also an important defensive strategy. If someone else registers the work and you do not, then they get all of the benefits described above and you do not. As the original creator of a work, you do not want to find yourself in the position of being sued for “infringing” your own work.

Registration workarounds that aren’t

The “poor man’s copyright”

One dangerous myth that has been circulating for years is that simply mailing yourself a copy of your work will be enough to establish your rights in it. This is not true. Anybody can make a copy of someone else’s work and mail it to himself or herself. Even if doing that could establish a person’s rights in a work, it is still going to be necessary to register the copyright in the work in order to enforce it in the U.S. legal system. And you won’t get any of the other benefits of registration, either, unless you do.

Posting to YouTube or another Internet website

Posting a copy of a work to YouTube or another Internet website is a modern-day version of the “poor man’s copyright” myth. The best this will do, however, is provide a date and time of posting. That proves nothing about authorship, and it does not provide any of the benefits of registration.

Notary

Notarization only verifies the validity of a signature; it does not prove anything about authorship.

Having an agent, distributor or licensing agency

Having an agent or a distributor, or listing with ASCAP, for example, does not prove authorship, nor does it provide any of the benefits of registration.

Registries and databases

Some websites offer to list your work in a “registry” or other database, supposedly as a means of protecting your copyright in the work. Some of these websites border on fraud. “Registering” your work with a private company or service will not prove authorship and will not give you any of the other benefits of registration. In the United States, the benefits of registration flow only to those who register the copyrights in their works with the United States Copyright Office.

True copyright registration services

Not all online copyright registration services are scams. Some of them will actually get a customer’s copyright registered with the United States Copyright Office. It is still necessary to proceed with caution when using them, however. Here are some things to watch out for.

Per-work vs. per-application

Pay careful attention to whether service fees are charged “per work,” on one hand, or “per application,” on the other.

If you have more than one work to register, it may sometimes be possible to register them with the Copyright Office as a group rather than individually. For example, a group of up to ten unpublished works by the same author may be registered with the Office using one application and paying one filing fee. Similarly, up to 750 photographs can sometimes be registered together as a group using only one application and paying only one filing fee.

An online service that offers to register copyrights at the rate of $100 “per work” might not inform users about the Copyright Office’s group registration options. Imagine paying $75,000 in service fees plus $33,750 in filing fees to register copyrights in 750 photographs when you might have done it yourself, using one application, for a single $55 filing fee.
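To make the arithmetic concrete, here is a small sketch comparing the two pricing models. The function names are mine, and the figures are the hypothetical ones used above ($100 per-work service fee, $45 per-work filing fee, $55 group filing fee).

```python
def per_work_total(num_works, service_fee, filing_fee):
    """Cost when each work is registered separately: the service fee
    and the Copyright Office filing fee are each paid once per work."""
    return num_works * (service_fee + filing_fee)

def group_total(group_filing_fee, service_fee=0):
    """Cost when all works go on a single group application:
    one filing fee, plus any one-time service fee."""
    return group_filing_fee + service_fee

# 750 photographs at $100 per work plus a $45 filing fee each:
print(per_work_total(750, 100, 45))  # 108750 ($75,000 + $33,750)

# The same 750 photographs, self-filed as one group application:
print(group_total(55))  # 55
```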

Single, standard or group application

Once you’ve selected a service whose rates are “per application” rather than “per work,” you will want to ensure that the service includes group registration options. If a service indicates that it will prepare a “single” or “standard” application, then this may mean that it will not prepare applications for group registrations. Find that out before proceeding.

GRAM and GRUW applications

If you are a musician or composer, you may be able to qualify for a significant discount on Copyright Office filing fees by filing a GRAM or GRUW application. These are special application forms that allow the registration of up to 10 unpublished songs, or up to 20 published songs on an album, using one application and paying one filing fee. They are relatively new additions to the Copyright Office’s application forms repertoire. Some registration services will not, or do not yet, work with them.

Fees

First, understand the difference between a service fee and the Copyright Office filing fee. The Copyright Office filing fee is usually going to be between $45 and $85, depending on the kind of application. When a website quotes a fee for the service it provides, the fee it quotes normally does not include the Copyright Office filing fee — unless, of course, the website expressly says so.

Online registration service companies charge different rates for their services. One attorney website I saw quoted a $500 flat fee “per work.” Presumably, he would charge $5,000 to register a group of 10 works.

Other services quote a much lower fee, typically somewhere between $100 and $250, either per work or per application.

These services typically are limited to filing a registration application, and nothing more. Some of them stand behind their work. Others charge additional fees if an application is rejected and they need to do additional work to fix the problem.

RightsClick™

A new online copyright service entered the scene last year. Called RightsClick™, it boasts processing fees that are 85% lower than most other registration services. Rather than charging $100 to $500 plus the Copyright Office filing fee, RightsClick charges $15 plus the Copyright Office filing fee.

It is also one of the few services that processes applications for group registration, and it is up-front and clear about the cost. A group of up to 10 unpublished works, for example, can be registered for a $15 processing fee, the same fee that is charged for a single application.

There are monthly subscription charges, but even adding these into the mix does not bring the cost up to anything near to what many online services are charging.

The services provided include more than copyright registration, and additional features are planned for the future.

Learn more

Because I believe this innovative new service can be a great time and money saver for attorneys who work with authors and other copyright owners, I am hosting a continuing legal education (CLE) course through EchionCLE. It will be presented by Steven Tepp and David Newhoff, the developers of RightsClick. It will include an update on registration law and a demonstration of what RightsClick can do and how it works.

This program is FREE and is open to both attorneys and non-attorneys.

EchionCLE has applied to the Minnesota Board of Continuing Legal Education for 1.0 Standard CLE credit.

The live webinar will be held on May 17, 2023.

There will be a video replay on June 1, 2023.

For more detailed information, or to register, click here.

Disclosure statement

I do not own or have any interest in RightsClick. I have not been paid and have not received anything of value in connection with this post. This post is not an endorsement or advertisement for RightsClick or the services it offers. It is simply an announcement of what appears to me to be a service that could be of considerable benefit to authors, creators, publishers and attorneys.

Copyright owners prevail in Internet Archive lawsuit

A federal district court has ruled in favor of book publishers in their copyright infringement lawsuit against Internet Archive.

In June 2020, four book publishers filed a copyright infringement lawsuit against Internet Archive. The publishers asserted that the practice of scanning books and lending digital copies of them to online users infringed their copyrights in the books. On Friday, March 24, 2023, a federal district court judge agreed, granting the publishers’ motion for summary judgment.

The Internet Archive operation

Internet Archive is a nonprofit organization that has undertaken several archiving projects. For example, it created the “Wayback Machine,” an online archive of public webpages. This lawsuit involves another of its projects, namely, the creation of a digital archive of books. Some of these are in the public domain. Also included in this archive, however, are over 3 million books that are protected by copyright. The judge determined that 33,000 of them belong to the plaintiffs in the lawsuit.

According to the Order granting summary judgment, after scanning the books, Internet Archive made them publicly available online for free, without the permission of the copyright owners.

“Fair Use”

According to the Order, Internet Archive did not dispute that it violated copyright owners’ exclusive rights to reproduce the works, to make derivative works based on them, to distribute their works, to publicly perform them (Internet Archive offered a “read aloud” function on its website), and to display them (in this case, on a user’s browser). In short, the Order determined that the operation violated all five of the exclusive rights of copyright owners protected by the United States Copyright Act (17 U.S.C. sec. 106).

Internet Archive asserted a “fair use” defense.

In previous cases involving massive operations to scan and digitize millions of books, Authors Guild v. Google, Inc. and Authors Guild v. HathiTrust, judicial analyses resulted in “fair use” determinations unfavorable to copyright owners. Internet Archive, of course, invited the judge to do the same thing here. The judge declined the invitation.

The judge distinguished this case from its predecessors by ruling that unlike the uses made of copyrighted works in those cases, the use in this case was not transformative. For example, Google had digitized the entire text of books in order to create a searchable index of books. “There is nothing transformative,” however, about copying and distributing the entire texts of books to the public, the judge declared.

The judge observed that Google reproduces and displays to the public only enough context surrounding the searched term to help a reader evaluate whether the book falls within the range of the reader’s interest. The Court of Appeals in Google had warned that “[i]f Plaintiff’s claim were based on Google’s converting their books into a digitized form and making that digitized version accessible to the public,” then the “claim [of copyright infringement] would be strong.”

The judge also determined that the alleged benefit to the public of having access to the entire text of books without having to pay for them “cannot outweigh the market harm to the publishers.”

Ultimately, the judge concluded that all four “fair use” factors (character and purpose of the use, nature of the work, amount and substantiality of the portion copied, and the effect on the market for the work) weighed against a finding of fair use.

What’s next?

Internet Archive apparently intends to appeal the decision. In the meantime, it appears that it will continue other kinds of digitized book services, such as interlibrary loans, citation linking, access for the print-disabled, text and data mining, purchasing e-books, and receiving and preserving books.

New AI Copyright Guidance

The Copyright Office is providing guidance to copyright applicants who wish to register works with AI-generated content in them.

On Thursday, March 16, 2023, the United States Copyright Office published new guidance in the Federal Register regarding the registration of copyrights in AI-generated material. Here is the tl;dr version.

The Problem

Artificial intelligence (AI) technologies are now capable of producing content that would be considered expressive works if created by a human being. These technologies “train” on mass quantities of existing human-authored works and use patterns detected in them to generate like content. This creates a thorny question about authorship: To what extent can a person who uses AI technology to generate content be considered the “author” of such content?

It isn’t a hypothetical problem. The Copyright Office has already started receiving applications for registration of copyrights in works that are either wholly or partially AI-generated.

The U.S. Copyright Act gives the Copyright Office power to determine whether and what kinds of additional information it may need from a copyright registration applicant in order to evaluate the existence, ownership and duration of a purported copyright. On March 16, 2023, the Office exercised that power by publishing Copyright Registration Guidance: Works Containing Material Generated by Artificial Intelligence, 88 Fed. Reg. 16190 (March 16, 2023), in the Federal Register.

Sorry, HAL, No Registration for You

Consistent with judicial rulings, the U.S. Copyright Office takes the position that only material that is created by a human being is protected by copyright. In other words, copyrights only protect human authorship. If a monkey can’t own a copyright in a photograph and an elephant can’t own a copyright in a portrait it paints, a computer-driven technology cannot own a copyright in the output it generates. Sorry, robots; it’s a human’s world.

As stated in the Compendium of Copyright Office Practices:

The Copyright Office “will not register works produced by a machine or mere mechanical process that operates randomly or automatically without any creative input or intervention from a human author.”

U.S. Copyright Office, Compendium of U.S. Copyright Office Practices sec. 313.2 (3d ed. 2021)

Partially AI-Generated Works

A work that is the product of a human being’s own original conception, to which s/he gave visible form, clearly has a human author. A work that is entirely the result of mechanical reproduction clearly does not. Things get murkier when AI technology is used to generate content to which a human being applies some creativity.

According to the new guidance, merely prompting an AI technology to generate a poem, drawing or the like, without more, is not enough to establish human authorship if the AI technology determines the expressive elements of its output. This kind of content is not protected by copyright and a registration applicant therefore will need to disclaim it in the application.

On the other hand, if a human being selects and arranges AI-generated content, the selection and arrangement may be protected by copyright even if the content itself is not. Similarly, if a human being makes significant modifications to AI-generated content, then those modifications may receive copyright protection. In all cases, of course, the selection, arrangement or modification must be sufficiently creative in order to qualify for copyright protection.

Disclosure required

The new guidance imposes a duty on copyright registration applicants to disclose the inclusion of AI-generated content in any work submitted for registration.

Standard application

If you use AI technology to any extent in creating the work, you will need to use the Standard application, not the Single application, to register the copyright in it.

Claims and disclaimers

The applicant will need to describe the human author’s contributions to the work in the “Author Created” field of the application. The copyright claim should extend only to this human-authored material.

Any significant AI-generated content must be explicitly excluded (disclaimed), in the “Limitations of the Claim” section of the application, in the “Other” field, under the “Material Excluded” heading.

Previously filed applications

If you have already filed an application for a work that includes AI-generated material, you will need to make sure that it makes an adequate disclosure about that. The newly issued guidance says you should contact the Copyright Office’s Public Information Office and report that you omitted AI information from the application. This will cause a notation to be made in the record. When an examiner sees the notation, s/he may contact you to obtain additional information if necessary.

If a registration has already been issued, you should submit a supplementary registration form to correct it. Failing to do that could result in your registration being cancelled, if the Office becomes aware that information essential to its evaluation of registrability has been omitted. In addition, a court may ignore a registration in an infringement action if it concludes that you knowingly provided the Copyright Office with false information.


Need help with a copyright application or registration?

Contact attorney Tom James.

The CCB’s First 2 Determinations

The Copyright Claims Board (CCB) has issued its first two determinations. Here is what they were about and what the CCB did with them.

The United States Copyright Claims Board (CCB), an administrative tribunal established to resolve small copyright claims, began accepting case filings on June 16, 2022. Eight months later, it has issued its first two determinations. Here is a summary of them.

Flores v. Mitrakos, 22-CCB-0035

This was a DMCA case.

Michael Flores filed the claim against Michael Mitrakos. He alleged that Mitrakos filed a knowingly false takedown notice. The parties negotiated a settlement. On February 3, 2023 they submitted a joint request for a final determination dismissing the proceeding. It included a request to include findings that the respondent submitted false information in a takedown notice, resulting in the wrongful removal of the claimant’s material. The parties also agreed the respondent would inform Google that he was rescinding the takedown notice. The CCB incorporated the parties’ agreement into its final determination.

No damages were sought and the CCB did not award any.

Issued on February 15, 2023, this was the CCB’s first Final Determination. You can read it here.

Oppenheimer v. Prutton, 22-CCB-0045

While Flores v. Mitrakos was the first Final Determination the CCB issued, Oppenheimer v. Prutton was its first Final Determination on the merits. It is also the first copyright infringement case the Board has resolved.

The case involved alleged infringement of a copyright in a photograph. The facts, as reported in the CCB’s Final Determination, are as follows:

David G. Oppenheimer owns the copyright in a photograph he took of a federal building in Oakland, California. He registered the copyright in the photograph on July 29, 2017. On June 4, 2018, he discovered it was being displayed on the business website of attorney Douglas A. Prutton. Prutton admitted reproducing and displaying it without permission. He stated that his adult daughter found it on the Internet and put it on his website, in an effort to help improve his website, and that he removed it in 2019 upon receiving a letter from Oppenheimer objecting to the use. Oppenheimer sought an award of statutory damages for the unauthorized use of the photograph.

Prutton asserted two defenses: fair use and unclean hands.

The asserted defenses

Fair use

A person asserting fair use as a defense must address and discuss four factors: (1) purpose and character of the use; (2) nature of the work; (3) amount and substantiality of the portion copied; and (4) effect on the market for the work. Prutton only addressed the fourth factor. The failure to address the first three factors, the CCB ruled, was fatal to this defense.

Unclean hands

Prutton alleged that Oppenheimer was a copyright troll, earning revenue mostly from copyright litigation rather than from sales or licensing of his works. The CCB ruled that this is not a sufficient basis for a finding of unclean hands.

Damages

The CCB refused to reduce damages to $200 on the basis of “innocent infringement.” The CCB ruled that Prutton should have known the photograph was protected by copyright, emphasizing the fact that he was an attorney.

Oppenheimer requested statutory damages of $30,000. The CCB, however, is limited by statute to awarding no more than $15,000 per work, so the Board construed the request as one for the maximum amount it can award. The CCB declined to award maximum damages.

While the amount of statutory damages does not have to be tied to the amount of actual damage, an award of statutory damages “must bear a plausible relationship to . . . actual damages.” Stockfood Am., Inc. v. Sequoia Wholesale Florist, Inc., 2021 WL 4597080, at *6 (N.D. Cal. June 22, 2021). Oppenheimer did not submit evidence of actual loss.

In the absence of any evidence of actual damage or harm, statutory damages will normally be set at $750 per work infringed. One member of the Board voted to do just that in this case. The other two members, however, believed a small increase from the minimum was justified for various reasons, such as that it was a commercial use and it had lasted for more than a year. The Board ultimately awarded Oppenheimer $1,000 statutory damages.

You can read the CCB’s Final Determination here.

Contact Thomas B. James, attorney

Need help with a CCB claim or defense? Contact Thomas B. James, Minnesota attorney.

A Recent Entrance to Complexity

The United States Copyright Office recently reaffirmed its position that it will not register AI-generated content, because it is not created by a human. The rule is easy to state; the devil is in the details. Attorney Thomas James explains.

Last year, the United States Copyright Office issued a copyright registration to Kristina Kashtanova for the graphic novel Zarya of the Dawn. A month later, the Copyright Office issued a notice of cancellation of the registration, along with a request for additional information.

The Copyright Office, consistent with judicial decisions, takes the position that copyright requires human authorship. The Office requested additional information regarding the creative process that resulted in the novel because parts of it were AI-generated. Kashtanova complied with the request for additional information.

This week, the Copyright Office responded with a letter explaining that the registration would be cancelled, but that a new, more limited one would be issued. The Office explained that its concern related to the author’s use of Midjourney, an AI-powered image generating tool, to generate images used in the work:

Because Midjourney starts with randomly generated noise that evolves into a final image, there is no guarantee that a particular prompt will generate any particular visual output.

U.S. Copyright Office letter

The Office concluded that the text the author wrote, as well as the author’s selection, coordination and arrangement of written and visual elements, are protected by copyright, and therefore may be registered. The images generated by Midjourney, however, would not be registered because they were “not the product of human authorship.” The new registration will cover only the text and editing components of the work, not the AI-generated images.

A Previous Entrance to Paradise

Early last year, the Copyright Office refused copyright registration for an AI-generated image. Steven Thaler had filed an application to register a copyright in an AI-generated image called “A Recent Entrance to Paradise.” He listed himself as the copyright owner. The Copyright Office denied registration on the grounds that the work lacked human authorship. Thaler filed a lawsuit in federal court seeking to overturn that determination. The lawsuit is still pending. It is currently at the summary judgment stage.

The core issue

The core issue, of course, is whether a person who uses AI to generate content such as text or artwork can claim copyright protection in the content so generated. Put another way, can a user who deploys artificial intelligence to generate a seemingly expressive work (such as artwork or a novel) claim authorship?

This question is not as simple as it may seem. There can be different levels of human involvement in the use of an AI content generating mechanism. At one extreme, there are programs like “Paint,” in which users provide a great deal of input. These kinds of programs may be analogized to paintbrushes, pens and other tools that artists traditionally have used to express their ideas on paper or canvas. Word processing programs are also in this category. It is easy to conclude that the users of these kinds of programs are the authors of works that may be sufficiently creative and original to receive copyright protection.

At the other end of the spectrum are AI services like DALL-E and ChatGPT. Text and images can be generated by these systems with minimal human input. If the only human input is a user’s directive to “Write a story” or “Draw a picture,” then it would be difficult to claim that the user contributed any creative expression. That is to say, it would be difficult to claim that the user authored anything.

Peering into the worm can

The complicating consideration with content-generative AI mechanisms is that they have the potential to allow many different levels of user involvement in the generation of output. The more details a user adds to the instructions s/he gives to the machine, the more it begins to appear that the user is, in fact, contributing something creative to the project.

Is a prompt to “Write a story about a dog” a sufficiently creative contribution to the resulting output to qualify the user as an “author”? Maybe not. But what about, “Write a story about a dog who joins a traveling circus”? Or “Write a story about a dog named Pablo who joins a traveling circus”? Or “Write a story about a dog with a peculiar bark that begins, ‘Once upon a time, there was a dog named Pablo who joined a circus,’ and ends with Pablo deciding to return home”?

At what point along the spectrum of user-provided detail does copyright protectable authorship come into existence?

A question that is just as important to ask is: How much, if at all, should the Copyright Office involve itself with ascertaining the details of the creative process that were involved in a work?

In a similar vein, should copyright registration applicants be required to disclose whether their works contain AI-generated content? Should they be required to affirmatively disclaim rights in elements of AI-generated content that are not protected by copyright?

Expanding the Rule of Doubt

Alternatively, should the U.S. Copyright Office adopt something like a Rule of Doubt when copyright is claimed in AI-generated content? The Rule of Doubt, in its current form, is the rule that the U.S. Copyright Office will accept a copyright registration of a claim containing software object code, even though the Copyright Office is unable to verify whether the object code contains copyrightable work. In effect, if the applicant attests that the code is copyrightable, then the Copyright Office will assume that it is and will register the claim. Under 37 C.F.R. § 202.20(c)(2)(vii)(B), this may be done when an applicant seeks to register a copyright in object code rather than source code. The same is true of material that is redacted to protect a trade secret.

When the Office issues a registration under the Rule of Doubt, it adds an annotation to the certificate and to the public record indicating that the copyright was registered under the Rule of Doubt.

Under the existing rule, the applicant must file a declaration stating that material for which registration is sought does, in fact, contain original authorship.

This approach allows registration but leaves it to courts (not the Copyright Office) to decide on a case-by-case basis whether material for which copyright is claimed contains copyrightable authorship.  

Expanding the Rule of Doubt to apply to material generated at least in part by AI might not be the most satisfying solution for AI users, but it is one that could result in fewer snags and delays in the registration process.

Conclusion

The Copyright Office has said that it will soon develop registration guidance for works created in part using material generated by artificial intelligence technology. Public notices and events relating to this topic may be expected in the coming months.


Need help with a copyright matter? Contact attorney Thomas James.

Getty Images Litigation Update

Getty Images has now filed a lawsuit for copyright infringement in the United States.

In a previous post, I reported on a lawsuit that Getty Images had filed in the United Kingdom against Stability AI. Now the company has filed similar claims against Stability AI in the United States.

The complaint, which has been filed in federal district court in Delaware, alleges claims of copyright infringement; providing false copyright management information; removal or alteration of copyright management information; trademark infringement; trademark dilution; unfair competition; and deceptive trade practices. Both monetary damages and injunctive relief are being sought.

An interesting twist in the Getty litigation is that AI-generated works allegedly have included the Getty Images trademark.

Getty Images logo on AI-generated image
(Reproduction of a portion of the Complaint filed in Getty Images v. Stability AI, Inc. in the U.S. District Court for the District of Delaware, case no. 1:23-cv-00135-UNA (2023). The image has been cropped to avoid reproducing the likenesses of persons appearing in the image and to display only what is needed here for purposes of news reporting and commentary.)

Getty Images, which is in the business of collecting and licensing quality images, alleges (among other things) that affixing its trademark to poor quality AI-generated images tarnishes the company’s reputation. If proven, this could constitute trademark dilution, which is prohibited by the Lanham Act.

AI Legal Issues

Thomas James (“The Cokato Copyright Attorney”) describes the range of legal issues, most of which have not yet been resolved, that artificial intelligence (AI) systems have spawned.

AI is not new. Its implementation also is not new. In fact, consumers regularly interact with AI-powered systems every day. Online help systems often use AI to provide quick answers to questions that customers routinely ask. Sometimes these are designed to give a user the impression that s/he is communicating with a person.

AI systems also perform discrete functions such as analyzing a credit report and rendering a decision on a loan or credit card application, or screening employment applications.

Many other uses have been found for AI and new ones are being developed all the time. AI has been trained not just to perform customer service tasks, but also to perform analytics and diagnostic tests; to repair products; to update software; to drive cars; and even to write articles and create images and videos. These developments may be helping to streamline tasks and improve productivity, but they have also generated a range of new legal issues.

Tort liability

While there are many different kinds of tort claims, the elements of tort claims are basically the same: (1) The person sought to be held liable for damages or ordered to comply with a court order must have owed a duty to the person who is seeking the legal remedy; (2) the person breached that duty; (3) the person seeking the legal remedy experienced harm, i.e., real or threatened injury; and (4) the breach was the actual and proximate cause of the harm.

The kind of harm that must be demonstrated varies depending on the kind of tort claim. For example, a claim of negligent driving might involve bodily injury, while a claim of defamation might involve injury to reputation. For some kinds of tort claims, the harm might involve financial or economic injury. 

The duty may be specified in a statute or contract, or it might be judge-made (“common law.”) It may take the form of an affirmative obligation (such as a doctor’s obligation to provide a requisite level of care to a patient), or it may take a negative form, such as the common law duty to refrain from assaulting another person.

The advent of AI does not really require any change in these basic principles, but they can be more difficult to apply to scenarios that involve the use of an AI system.

Example. Acme Co. manufactures and markets Auto-Doc, a machine that diagnoses and repairs car problems. Mike’s Repair Shop lays off its automotive technician employees and replaces them with one of these machines. Suzie Consumer brings her VW Jetta to Mike’s Repair Shop for service because she has been hearing a sound that she describes as being a grinding noise that she thinks is coming from either the engine or the glove compartment. The Auto-Doc machine adds engine oil, replaces belts, and removes the contents of the glove compartment. Later that day, Suzie’s brakes fail and her vehicle hits and kills a pedestrian in a crosswalk. A forensic investigation reveals that her brakes failed because they were badly worn. Who should be held liable for the pedestrian’s death – Suzie, Mike’s, Acme Co., some combination of two of them, all of them, or none of them?

The allocation of responsibility will depend, in part, on the degree of autonomy the AI machine possesses. Of course, if it can be shown that Suzie knew or should have known that her brakes were bad, then she most likely could be held responsible for causing the pedestrian’s death. But what about the others? Their liability, or share of liability, is affected by the degree of autonomy the AI machine possesses. If it is completely autonomous, then Acme might be held responsible for failing to program the machine in such a way that it would test for and detect worn brake pads even if a customer expresses an erroneous belief that the sound is coming from the engine or the glove compartment. On the other hand, if the machine is designed only to offer suggestions of possible problems and solutions,  leaving it up to a mechanic to accept or reject them, then Mike’s might be held responsible for negligently accepting the machine’s recommendations. 

Assuming the Auto-Doc machine is fully autonomous, should Mike’s be faulted for relying on it to correctly diagnose car problems? Is Mike’s entitled to rely on Acme’s representations about Auto-Doc’s capabilities, or would the repair shop have a duty to inquire about and/or investigate Auto-Doc’s limitations? Assuming Suzie did not know, and had no reason to suspect, her brakes were worn out, should she be faulted for relying on a fully autonomous machine instead of taking the car to a trained human mechanic?  Why or why not?

Criminal liability

It is conceivable that an AI system might engage in activity that is prohibited by an applicable jurisdiction’s criminal laws. E-mail address harvesting is an example. In the United States, for example, the CAN-SPAM Act makes it a crime to send a commercial email message to an email address that was  obtained  by automated scraping of Internet websites for email addresses. Of course, if a person intentionally uses an AI system for scraping, then liability should be clear. But what if an AI system “learns” to engage in scraping?

AI-generated criminal output may also be a problem. Some countries have made it a crime to display a Nazi symbol, such as a swastika, on a website. Will criminal liability attach if a website or blog owner uses AI to generate illustrated articles about World War II and the system generates and displays articles that are illustrated with World War II era German flags and military uniforms? In the United States, creating or possessing child pornography is illegal. Will criminal liability attach if an AI system generates it?

Some of these kinds of issues can be resolved through traditional legal analysis of the intent and scienter elements of the definitions of crimes. A jurisdiction might wish to consider, however, whether AI systems should be regulated to require system creators to implement measures that would prevent illegal uses of the technology. This raises policy and feasibility questions, such as whether and what kinds of restraints on machine learning should be required, and how to enforce them. Further, would prior restraints on the design and/or use of AI-powered expressive-content-generating systems infringe on First Amendment rights?  

Product liability

Related to the problem of allocating responsibility for harm caused by the use of an AI mechanism is the question whether anyone should be held liable for harm caused when the mechanism is not defective, that is to say, when it is operating as it should.

Example. Acme Co. manufactures and sells Auto-Article, a software program that is designed to create content of a type and kind the user specifies. The purpose of the product is to enable a website owner to generate and publish a large volume of content frequently, thereby improving the website’s search engine ranking. It operates by scouring the Internet and analyzing instances of the content the user specifies to produce new content that “looks like” them. XYZ Co. uses the software to generate articles on medical topics. One of these articles explains that chest pain can be caused by esophageal spasms but that these typically do not require treatment unless they occur frequently enough to interfere with a person’s ability to eat or drink. Joe is experiencing chest pain. He does not seek medical help, however, because he read the article and therefore believes he is experiencing esophageal spasms. He later collapses and dies from a heart attack. A medical doctor is prepared to testify that his death could have been prevented if he had sought medical attention when he began experiencing the pain.

Should either Acme or XYZ Co. be held liable for Joe’s death? Acme could argue that its product was not defective. It was fit for its intended purposes, namely, a machine learning system that generates articles that look like articles of the kind a user specifies. What about XYZ Co.? Would the answer be different if XYZ had published a notice on its site that the information provided in its articles is not necessarily complete and that the articles are not a substitute for advice from a qualified medical professional? If XYZ incurs liability as a result of the publication, would it have a claim against Acme, such as for failure to warn it of the risks of using AI to generate articles on medical topics?

Consumer protection

AI system deployment raises significant health and safety concerns. There is the obvious example of an AI system making incorrect medical diagnoses or treatment recommendations. Autonomous (“self-driving”) motor vehicles are also examples. An extensive body of consumer protection regulations may be anticipated.

Forensic and evidentiary issues

In situations involving the use of semi-autonomous AI, allocating responsibility for harm resulting from the operation of the AI  system  may be difficult. The most basic question in this respect is whether an AI system was in use or not. For example, if a motor vehicle that can be operated in either manual or autonomous mode is involved in an accident, and fault or the extent of liability depends on that (See the discussion of tort liability, above), then a way of determining the mode in which the car was being driven at the time will be needed.

If, in the case of a semi-autonomous AI system, tort liability must be allocated between the creator of the system and a user of it, the question of fault may depend on who actually caused a particular tortious operation to be executed – the system creator or the user. In that event, some method of retracing the steps the AI system used may be essential. This may also be necessary in situations where some factor other than AI contributed, or might have contributed, to the injury. Regulation may be needed to ensure that the steps in an AI system’s operations are, in fact, capable of being ascertained.

Transparency problems also fall into this category. As explained in the Journal of Responsible Technology, people might be put on no-fly lists, denied jobs or benefits, or refused credit without knowing anything more than that the decision was made through some sort of automated process. Even if transparency is achieved and/or mandated, contestability will also be an issue.

Data Privacy

To the extent an AI system collects and stores personal or private information, there is a risk that someone may gain unauthorized access to it. Depending on how the system is designed to function, there is also a risk that it might autonomously disclose legally protected personal or private information. Security breaches can cause catastrophic problems for data subjects.

Publicity rights

Many jurisdictions recognize a cause of action for violation of a person’s publicity rights (sometimes called “misappropriation of personality.”) In these jurisdictions, a person has an exclusive legal right to commercially exploit his or her own name, likeness or voice. To what extent, and under what circumstances, should liability attach if a commercialized AI system analyzes the name, likeness or voice of a person that it discovers on the Internet? Will the answer depend on how much information about a particular individual’s voice, name or likeness the system uses, on one hand, or how closely the generated output resembles that individual’s voice, name or likeness, on the other?

Contracts

The primary AI-related contract concern is about drafting agreements that adequately and effectively allocate liability for losses resulting from the use of AI technology. Insurance can be expected to play a larger role as the use of AI spreads into more areas.

Bias, Discrimination, Diversity & Inclusion

Some legislators have expressed concern that AI systems will reflect and perpetuate biases and perhaps discriminatory patterns of culture. To what extent should AI system developers be required to ensure that the data their systems use are collected from a diverse mixture of races, ethnicities, genders, gender identities, sexual orientations, abilities and disabilities, socioeconomic classes, and so on? Should developers be required to apply some sort of principle of “equity” with respect to these classifications, and if so, whose vision of equity should they be required to enforce? To what extent should government be involved in making these decisions for system developers and users?

Copyright

AI-generated works like articles, drawings, animations, music and so on, raise two kinds of copyright issues:

  1. Input issues, i.e., questions like whether AI systems that create new works based on existing copyright-protected works infringe the copyrights in those works
  2. Output issues, such as who, if anybody, owns the copyright in an AI-generated work.

I’ve written about AI copyright ownership issues and AI copyright infringement issues in previous blog posts on The Cokato Copyright Attorney.

Patents and other IP

Computer programs can be patented. AI systems can be devised to write computer programs. Can an AI-generated computer program that meets the usual criteria for patentability (novelty, utility, etc.) be patented?

Is existing intellectual property law adequate to deal with AI-generated inventions and creative works? The World Intellectual Property Organization (WIPO) apparently does not think so. It is formulating recommendations for new regulations to deal with the intellectual property aspects of AI.

Conclusion

AI systems raise a wide range of legal issues. The ones identified in this article are merely a sampling, not a complete listing of all possible issues. Not all of these legal issues have answers yet. It can be expected that more AI regulatory measures, in more jurisdictions around the globe, will be coming down the pike very soon.

Contact attorney Thomas James

Contact Minnesota attorney Thomas James for help with copyright and trademark registration and other copyright and trademark related matters.

Does AI Infringe Copyright?

A previous blog post addressed the question whether AI-generated creations are protected by copyright. This could be called the “output question” in the artificial intelligence area of copyright law. Another question is whether using copyright-protected works as input for AI generative processes infringes the copyrights in those works. This could be called the “input question.” Both kinds of questions are now before the courts. Minnesota attorney Tom James describes a framework for analyzing the input question.

The Input Question in AI Copyright Law

by Thomas James, Minnesota attorney

In a previous blog post, I discussed the question whether AI-generated creations are protected by copyright. This could be called the “output question” in the artificial intelligence area of copyright law. Another question is whether using copyright-protected works as input for AI generative processes infringes the copyrights in those works. This could be called the “input question.” Both kinds of questions are now before the courts. In this blog post, I describe a framework for analyzing the input question.

The Cases

The Getty Images lawsuit

Getty Images is a stock photograph company. It licenses the right to use the images in its collection to those who wish to use them on their websites or for other purposes. Stability AI is the creator of Stable Diffusion, which is described as a “text-to-image diffusion model capable of generating photo-realistic images given any text input.” In January, 2023, Getty Images initiated legal proceedings in the United Kingdom against Stability AI. Getty Images claims that Stability AI violated its copyrights by using its images and metadata to train AI software without a license.

The independent artists lawsuit

Another lawsuit raising the question whether AI-generated output infringes copyright has been filed in the United States. In this case, a group of visual artists are seeking class action status for claims against Stability AI, Midjourney Inc. and DeviantArt Inc. The artists claim that the companies use their images to train computers “to produce seemingly new images through a mathematical software process.” They describe AI-generated artwork as “collages” made in violation of copyright owners’ exclusive right to create derivative works.

The GitHub Copilot lawsuit

In November, 2022, a class action lawsuit was filed in a U.S. federal court against GitHub, Microsoft, and OpenAI. The lawsuit claims the GitHub Copilot and OpenAI Codex coding assistant services use existing code to generate new code. By training their AI systems on open source programs, the plaintiffs claim, the defendants have infringed the rights of developers who have posted code under open-source licenses that require attribution.

How AI Works

AI, of course, stands for artificial intelligence. Almost all AI techniques involve machine learning. Machine learning, in turn, involves using a computer algorithm to make a machine improve its performance over time, without having to pre-program it with specific instructions. Data is input to enable the machine to do this. For example, to teach a machine to create a work in the style of Vincent van Gogh, many instances of van Gogh’s works would be input. The AI program contains numerous nodes that focus on different aspects of an image. Working together, these nodes will then piece together common elements of a van Gogh painting from the images the machine has been given to analyze. After going through many images of van Gogh paintings, the machine “learns” the features of a typical van Gogh painting. The machine can then generate a new image containing these features.
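The two steps just described can be sketched in miniature. The following toy Python sketch is my own illustration, not the actual workings of any AI product: a “training” step extracts what many examples have in common, and a “generation” step builds a new image from those learned features rather than copying any single training image.

```python
import random

rng = random.Random(0)

# Stand-ins for many van Gogh paintings: lists of 16 brightness values
# that share an underlying "style" pattern plus per-painting variation.
style = [((i * 7) % 16) / 16.0 for i in range(16)]
training_images = [
    [s + rng.gauss(0.0, 0.1) for s in style] for _ in range(50)
]

# "Training": extract the features the examples have in common by
# averaging across the whole training set.
learned = [sum(img[i] for img in training_images) / 50 for i in range(16)]

# "Generation": a new image built from the learned features, with its
# own variation, rather than reproduced from any one training image.
new_image = [f + rng.gauss(0.0, 0.1) for f in learned]

# The learned features track the shared style closely even though no
# single training image was copied.
max_error = max(abs(f - s) for f, s in zip(learned, style))
print(max_error < 0.1)
```

This is, of course, a drastic simplification of what real machine-learning systems do, but it captures the legally relevant structure: the training images are consumed as input, while the output is assembled from statistical features rather than from any one source work.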

In the same way, a machine can be programmed to analyze many instances of code and generate new code.

The input question comes down to this: Does it infringe copyright to create or use a program that feeds a machine information about the characteristics of a creative work or group of works so that the machine can generate a new work with the same or similar characteristics?

The Exclusive Rights of Copyright Owners

In the United States, the owner of a copyright in a work has the exclusive rights to:

  • reproduce (make copies of) it;
  • distribute copies of it;
  • publicly perform it;
  • publicly display it; and
  • make derivative works based on it.

(17 U.S.C. § 106). A copyright is infringed when a person exercises any of these exclusive rights without the copyright owner’s permission.

Copyright protection extends only to expression, however. Copyright does not protect ideas, facts, processes, methods, systems or principles.

Direct Infringement

Infringement can be either direct or indirect. Direct infringement occurs when somebody directly violates one of the exclusive rights of a copyright owner. Examples would be a musician who performs a copyright-protected song in public without permission, or a cartoonist who creates a comic based on the Batman and Robin characters and stories without permission.

The kind of tool an infringer uses is not of any great moment. A writer who uses word-processing software to write a story that is simply a copy of someone else’s copyright-protected story is no less guilty of infringement merely because the actual typewritten letters were generated using a computer program that directs a machine to reproduce and display typographical characters in the sequence a user selects.

Contributory and Vicarious Infringement

Infringement liability may also arise indirectly. If one person knowingly induces another person to infringe or contributes to the other person’s infringement in some other way, then each of them may be liable for copyright infringement. The person who actually committed the infringing act could be liable for direct infringement. The person who knowingly encouraged, solicited, induced or facilitated the other person’s infringing act(s) could be liable for contributory infringement.

Vicarious infringement occurs when the law holds one person responsible for the conduct of another because of the nature of the legal relationship between them. The employment relationship is the most common example. An employer generally is held responsible for an employee’s conduct,  provided the employee’s acts were performed within the course and scope of the employment. Copyright infringement is not an exception to that rule.

Programmer vs. User

Direct infringement liability

Under U.S. law, machines are treated as extensions of the people who set them in motion. A camera, for example, is an extension of the photographer. Any image a person causes a camera to generate by pushing a button on it is considered the creation of the person who pushed the button, not of the person(s) who manufactured the camera, much less of the camera itself. By the same token, a person who uses the controls on a machine to direct it to copy elements of other people’s works should be considered the creator of the new work so created. If using the program entails instructing the machine to create an unauthorized derivative work of copyright-protected images, then it would be the user, not the machine or the software writer, who would be at risk of liability for direct copyright infringement.

Contributory infringement liability

Knowingly providing a device or mechanism to people who use it to infringe copyrights creates a risk of liability for contributory copyright infringement. Under Sony Corp. v. Universal City Studios, however, merely distributing a mechanism that people can use to infringe copyrights is not enough for contributory infringement liability to attach, if the mechanism has substantial uses for which copyright infringement liability does not attach. Arguably, AI has many such uses. For example, it might be used to generate new works from public domain works. Or it might be used to create parodies. (Creating a parody is generally fair use; it should not result in infringement liability.)

The situation is different if a company goes further and induces, solicits or encourages people to use its mechanism to infringe copyrights. Then it may be at risk of contributory liability. As the United States Supreme Court has said, “one who distributes a device with the object of promoting its use to infringe copyright, as shown by clear expression or other affirmative steps taken to foster infringement, is liable for the resulting acts of infringement by third parties.” Metro-Goldwyn-Mayer Studios Inc. v. Grokster, Ltd., 545 U.S. 913, 919 (2005). (Remember Napster?)

Fair Use

If AI-generated output is found to either directly or indirectly infringe copyright(s), the infringer nevertheless might not be held liable, if the infringement amounts to fair use of the copyrighted work(s) that were used as the input for the AI-generated work(s).

Ever since some rap artists began using snippets of copyright-protected music and sound recordings without permission, courts have embarked on a treacherous expedition to articulate a meaningful dividing line between unauthorized derivative works, on one hand, and unauthorized transformative works, on the other. Although the Copyright Act gives copyright owners the exclusive right to create works based on their copyrighted works (called derivative works), courts have held that an unauthorized derivative work may be fair use if it is “transformative.” This has caused a great deal of uncertainty in the law, particularly since the U.S. Copyright Act expressly defines a derivative work as one that transforms another work. (See 17 U.S.C. § 101: “A ‘derivative work’ is a work based upon one or more preexisting works, . . . or any other form in which a work may be recast, transformed, or adapted.” (emphasis added).)

When interpreting and applying the transformative use branch of Fair Use doctrine, courts have issued conflicting and contradictory decisions. As I wrote in another blog post, the U.S. Supreme Court has recently agreed to hear and decide Andy Warhol Foundation for the Visual Arts v. Goldsmith. It is anticipated that the Court will use this case to attempt to clear up all the confusion around the doctrine. It is also possible the Court might take even more drastic action concerning the whole “transformative use” branch of Fair Use.

Some speculate that the questions the Justices asked during oral arguments in Warhol signal a desire to retreat from the expansion of fair use that the “transformativeness” idea spawned. On the other hand, some of the Court’s recent decisions, such as Google v. Oracle, suggest the Court is not particularly worried about large-scale copyright infringing activity, insofar as Fair Use doctrine is concerned.

Conclusion

To date, it does not appear that there is any direct legal precedent in the United States for classifying the use of mass quantities of works as training tools for AI as “fair use.” It seems, however, that there soon will be precedent on that issue, one way or the other. In the meantime, AI generating system users should proceed with caution.
