Stability AI is looking to escape the copyright infringement, unfair competition, and right-of-publicity lawsuit waged against it, DeviantArt, and Midjourney early this year. In the lawsuit, a trio of artists accuses Stability AI and co. of engaging in “blatant and enormous infringement” by using their artworks – without authorization – to enable AI-image generators, including Stable Diffusion, to create what are being characterized as “new” images but what are really “infringing derivative works.” In the motion to dismiss that it filed with a federal court in San Francisco on Tuesday, Stability AI urges the court to toss out the claims of Sarah Andersen, Kelly McKernan, and Karla Ortiz (the “plaintiffs”) essentially on the basis that it did not actually copy any of their works.
Setting the stage in its filing, Stability AI asserts that its Stable Diffusion product is “an open-source generative AI text-to-image model” that enables users to “input prompts of their choosing to generate creative, entirely novel images.” To enable this functionality, Stable Diffusion was “trained on billions of images that were publicly available on the Internet.” Stability AI asserts that “to be clear, training a model does not mean copying or memorizing images for later distribution. Indeed, Stable Diffusion does not ‘store’ any images.” (All emphasis courtesy of Stability AI.)
Instead of copying and/or storing images, the training of such machine learning models “involves development and refinement of millions of parameters that collectively define—in a learned sense—what things look like,” namely, “lines, colors, shades, and other attributes associated with innumerable subjects and concepts,” per Stability AI, which notes that “the purpose of doing so is not to enable the models to reproduce copies of training images.” (“If someone wanted to engage in wholesale copying of images from the Internet, there are far easier methods to do so,” the company states.) Rather, Stable Diffusion “enables users to create entirely new and unique images utilizing simple word prompts,” the defendant asserts, pointing to the complaint, which it says concedes that “none of the Stable Diffusion output images provided in response to a particular Text Prompt is likely to be a close match for any specific image in the training data.” As such, Stability AI “enables creation; it is not a copyright infringer.”
Against that background, Stability AI states that the plaintiffs’ claims fail as follows …
Copyright Infringement – Primarily, Stability AI takes issue with the plaintiffs’ copyright claims, as they lack the necessary copyright registrations for the majority of the allegedly infringed images, making it so that “McKernan and Ortiz’s copyright claims—and any claim Andersen may present with respect to unregistered works—fail for [this] reason alone.” Even still, the plaintiffs’ direct copyright infringement claim based on the Stable Diffusion output images fails, according to Stability AI, because they “do not allege a single act of direct infringement, let alone any output that is substantially similar to the plaintiffs’ artwork.” Much to the contrary, Stability AI states that the plaintiffs “affirmatively plead that ‘in general, none of the Stable Diffusion output images provided in response to a particular Text Prompt is likely to be a close match for any specific image in the training data.’”
The plaintiffs’ theory that all output images are somehow “necessarily … derivative work[s]” does not save their claim, Stability AI argues, since that would require “finding that any work is a ‘derivative work’ under the Copyright Act simply because it makes reference in any way whatsoever to a prior work.” The problem: “The Ninth Circuit has rejected this ‘novel proposition,’ reiterating that ‘substantial similarity’ is required to show infringement.”
The plaintiffs’ vicarious copyright infringement claim independently fails on multiple fronts, as well, Stability AI maintains.
DMCA Violations – The plaintiffs’ Digital Millennium Copyright Act (“DMCA”) claim also fails “multiple times over because the plaintiffs do not allege a single work from which copyright management information was allegedly altered or removed, explain what CMI was allegedly removed, or allege any facts to support the double-scienter requirement.”
Right-of-Publicity Claims – The plaintiffs’ right-of-publicity claims are “expressly preempted by the Copyright Act because they are simply efforts to recast copyright claims under other legal rubrics,” Stability AI argues. And even if they were not preempted, the plaintiffs “offer no more than a bare recitation of the elements of such claims with cursory allegations that ‘Defendants’ (collectively, but neither individually nor with any specificity) somehow violated their rights of publicity.”
Additionally, Stability AI asserts that the plaintiffs fail here, as: (1) they do not sufficiently allege that Stability AI used their identities; (2) they do not allege sufficient identity use “on or in products” or for purposes of advertising, selling, or soliciting purchases; (3) they do not sufficiently allege that the defendants knowingly used their identities in a manner directly connected to a commercial purpose; and (4) their alleged injuries show that they were not injured by identity misappropriation.
Unfair Competition Claims – These claims are similarly preempted, as the “unfair acts” that the plaintiffs allege are “expressly the same alleged acts of copyright infringement and DMCA violation for which the plaintiffs seek redress pursuant to their affirmative copyright and DMCA claims.”
Declaratory Judgment Claim – Finally, Stability AI claims that the plaintiffs’ “duplicative claim for declaratory relief is improper and serves no useful purpose” since it sees the plaintiffs seeking “a declaration that the defendants violated certain statutes and is facially duplicative of [their other claims], which pursue causes of action under each of those statutes.”
THE BIGGER PICTURE: The plaintiffs’ suit – which centers on their allegation that Stability AI and co. “are using copies of the training images interconnected with their AI image [generators] to generate digital images and other output that are derived exclusively from the training images, and that add nothing new” – is among a number of cases targeting the companies behind headline-making AI generators. Does v. GitHub, et al. and Getty Images v. Stability AI come to mind on the copyright front, as does Thaler v. Perlmutter, the latter of which centers on the Copyright Office’s refusal to register an AI-generated artwork.
“These cases, along with others that are likely to emerge, will have a significant impact on the future of generative AI and its relationship with human creators,” Brooks Kushman PC’s Benjamin Stasa stated in a note. As the debate over generative AI and copyright continues, “There are several possible paths forward to resolve the tension between human creators and technology.” What is likely to be one of the primary paths forward (based on past business precedent) is the creation of “licensing deals negotiated between generative AI platforms and creators [under which] creators can have rights in negotiating how they want their intellectual property to be used and how they will be compensated and generative AI platforms can continue to innovate and create new works.”
“Ultimately, finding common ground through licensing agreements or other solutions will be essential for ensuring that generative AI and human creators can coexist, while still protecting the value and integrity of existing intellectual property,” Stasa asserts. “As the technology continues to evolve, it will be important for all stakeholders to work together to find a way forward that balances innovation with the rights of creators.”
The case is Sarah Andersen, et al., v. Stability AI LTD., et al., 3:23-cv-00201 (N.D. Cal.).