Holding off on all things Farfetch for a moment, the Federal Trade Commission released a report this week with takeaways from a virtual roundtable that it held in October to “examine how generative artificial intelligence (tools that can generate outputs like text, images, and audio on command) is being used and is affecting professionals in music, filmmaking, software development, and other creative fields.”
Among the key issues raised by the speakers (creative professionals, including artists, writers, actors, and musicians, as well as representatives of other fields, such as the modeling industry, and a Policy Fellow at the Software Freedom Conservancy) were: (1) concerns about how their work was being collected and used to train generative AI models; (2) the impact that generative AI outputs are already having on their industries and livelihoods; (3) issues associated with solutions being proposed by AI companies to address creators’ concerns; and (4) alternative approaches that creators are pursuing to protect themselves and their industries, including by enshrining their right to choose whether to use AI in their work through union contracts. Additionally, the speakers pointed to the following as potential drawbacks of the widespread adoption of generative AI …
Legislation: The FTC notes that throughout the event, participants mentioned “specific pieces of legislation they supported and hoped would help creative professionals and add guardrails for how generative AI is built and used.”
> On the state level, Sara Ziff of the Model Alliance said her organization supported the Fashion Workers Act in New York, which would establish basic labor protections for models and content creators in the state’s fashion industry. Ziff said the bill would help address the lack of transparency around how models’ body scans are being used.
> Jen Jacobsen, the executive director of the Artist Rights Alliance, referenced the Protect Working Musicians Act of 2023, which Jacobsen said would give small and independent musicians an antitrust exemption to negotiate collectively with AI developers and streaming platforms.
> At least one participant mentioned the idea of a federal right of publicity. State-level laws have been passed in places like New York and California, but not every state has its own version. Right of publicity laws generally protect a person’s likeness from being misused for commercial purposes without their consent and could potentially give creative professionals greater control over how things like their voices or personal styles are being used. Since the event took place, a bipartisan group of senators released a discussion draft of the No Fakes Act of 2023, which would create such federal protections specifically addressing misuse of generative AI.
The FTC’s Conclusion: “Although many of the concerns raised at the roundtable lay beyond the scope of the Commission’s jurisdiction, targeted enforcement under the FTC’s existing authority in AI-related markets can help to foster fair competition and protect people in creative industries and beyond from unfair or deceptive practices.” The bottom line from the FTC, it seems, is this: “There is no ‘AI exemption’ from the laws on the books. The FTC will continue to vigorously use the full range of its authorities to protect Americans from deceptive and unfair conduct and maintain open, fair, and competitive markets.”
As for the pushback at play here … at least some have taken issue with the creative-heavy makeup of the FTC’s roundtable, with University of California, Berkeley Law professor (and Co-Director of the Berkeley Center for Law & Technology) Pamela Samuelson, for one, stating on Twitter that the FTC “should hear from many different sources & voices before publishing a report on the ‘creative economy.’” While “individual creators should of course have a voice in policy debates about generative AI, it would be better to do listening sessions like the Copyright Office did last spring to hear from people with different perspectives,” she asserted.
NYU Law professor Chris Sprigman echoed these concerns, stating that “the FTC report does not provide a balanced (or nearly complete) account of the issues presented by AI training on copyrighted works,” noting that “11 of the 12 witnesses at FTC hearing appeared to be or to represent individual creators; one represented open-source software developers objecting to AI training on their code. No witness provided perspectives from technologists who have developed or work with AI agents.”
The Bigger Picture: The roundtable and corresponding report come after the FTC asserted in the Oct. 30 comment that it submitted in response to a call for comments from the Copyright Office that infringing AI activities may also run afoul of the FTC Act. Specifically, the FTC stated, “Conduct that may violate copyright laws – such as training an AI tool on protected works without the creator’s consent or selling output generated from such an AI tool, including by mimicking the creator’s writing style, vocal or instrumental performance, or likeness – may also constitute an unfair method of competition or an unfair or deceptive practice.” This is especially true, according to the FTC, “when the copyright violation deceives consumers, exploits a creator’s reputation or diminishes the value of her existing or future works, reveals private information, or otherwise causes substantial injury to consumers.”