The Federal Trade Commission has shed light on some of its concerns over generative AI. In response to the Copyright Office’s call for comments on the law and policy issues posed by AI, the FTC suggested that regardless of whether generative AI-created works are – or are not – above board from a copyright perspective, there may be competition & consumer protection issues at play. Specifically, the FTC asserted in its Oct. 30 comment, “Conduct that may violate copyright laws – such as training an AI tool on protected works without the creator’s consent or selling output generated from such an AI tool, including by mimicking the creator’s writing style, vocal or instrumental performance, or likeness – may also constitute an unfair method of competition or an unfair or deceptive practice.”
This is especially true, according to the FTC, “when the copyright violation deceives consumers, exploits a creator’s reputation or diminishes the value of her existing or future works, reveals private information, or otherwise causes substantial injury to consumers.”
Beyond that, the U.S. consumer protection agency states that “conduct that may be consistent with the copyright laws nevertheless may violate Section 5,” which prohibits “unfair or deceptive acts or practices in or affecting commerce.” For instance, the FTC maintains that “many large technology firms possess vast financial resources that enable them to indemnify the users of their generative AI tools or obtain exclusive licenses to copyrighted (or otherwise proprietary) training data, potentially further entrenching the market power of these dominant firms.” These types of issues “not only touch on copyright law & policy but also implicate consumer protection and competition concerns across a wide range of industries.”
Right of Publicity: Elsewhere in its comment, the FTC touched on right of publicity issues arising in connection with generative AI, stating that participants in its “Creative Economy and Generative AI” roundtable said that “when generative AI tools use artists’ faces, voices, and performances without permission to make digital impersonations, it can not only create consumer confusion, but it also can cause serious harm to both fans and artists.”
Disclosure: And in another area of interest, the FTC addressed disclosure, stating that roundtable participants “expressed concerns about transparency and disclosure with respect to both the data used to train AIs and the provenance of new works generated by AI tools.”
On the input side, participants said it is difficult for them to tell whether their work has been included in AI training data. They asked for mandatory disclosure of the contents of training data sets, and also expressed a desire for clear, specific disclosures about the intended uses of their work.
On the output side, AI-generated content can easily flood markets, making it difficult for customers and other stakeholders to discern whether content is AI-generated. Such content can mimic the style of specific creators, and users of generative AI tools can exploit a creator’s name and reputation to gain sales and potentially compete with that creator.
The Big Tech-VC View: The FTC’s comment (and the Copyright Office’s study more broadly) seems particularly striking amid big tech and VCs’ apparent pushback against holding companies to established copyright/licensing standards when it comes to the training of AI models. Business Insider reported this week that Andreessen Horowitz, for one, is “warning that billions of dollars in AI investments could be worth a lot less if companies developing the technology are forced to pay for the copyrighted data that makes it work.”
– adidas v. Thom Browne: adidas filed a reply memo of law in support of its motion for a new trial, arguing that four “bad faith” emails (in which Thom Browne employees addressed the likelihood that its 4-stripe mark might cause confusion with adidas’ 3-stripe mark, and which were allegedly withheld from discovery ahead of the parties’ trial) are admissible and “likely would have changed the outcome of trial.”
– Kadrey v. Meta: In a motion hearing, N.D. Cal. Judge Vince Chhabria said that he will dismiss the bulk of the plaintiffs’ copyright claims (with leave to amend) – save for their training-stage copyright claim, which accuses Meta of engaging in infringement by using their copyright-protected books as training materials for its LLaMA model. (More about that case here.)
– Lontex v. Fashion Nova: Lontex – which has done business as SWEAT IT OUT and manufactured and sold athletic apparel since the 1990s – is suing Fashion Nova for allegedly co-opting its mark to sell fast fashion apparel. (That complaint is right here.)
As of Oct. 31, the Philadelphia Eagles are looking to register KELLY GREEN for use on “education and entertainment services in the nature of professional football games and organizing exhibitions for sporting purposes; providing sports and entertainment information via a global computer network,” etc. (Class 41). Among the specimens provided by the Eagles? A screenshot from the team’s website …
– UK-based fashion e-comm marketplace Cult Mia has raised £2.5M in a Seed round.
– AI-powered virtual try-on and styling platform Zelig has raised $15M in a Series A round.
– Llama – an access control & governance platform for smart contracts – has raised $6M in a Seed round.
– Cloud-based retail inventory management, visual merchandising & pricing analysis solutions provider Trax has received $50M in venture debt.
– Boucheron has acquired a Paris-based High Jewelry workshop “to reinforce [its] production capacity.”
– Octarine – which creates sustainable dyeing technologies – has raised €4.35M ($4.6M) in new funding.
– Risk Ledger, a collaborative platform for supplier due diligence, has raised £6.25M in a Series A round.