General Purpose Artificial Intelligence – often referred to interchangeably as generative AI – has been prominent in the news with the release of large language models like OpenAI’s ChatGPT and Google’s Bard. At the same time, the technology has made headlines in the field of copyright law, as Getty Images announced in January that it had “commenced legal proceedings” in the High Court of Justice in London against Stability AI, the creator of AI image generator Stable Diffusion, for copyright infringement. (It filed a copyright and trademark infringement lawsuit against Stability AI in the U.S. in February.)
With the draft European Union Artificial Intelligence Act (“AI Act”) currently before the EU Parliament, recent speculation suggests that lawmakers are considering adding generative AI to the category of “high risk” AI systems. It may be that generative AI will default to the transparency category but that its ultimate uses in products could be high-risk. In other words, the determination would be based on the final use rather than on the model itself.
Generative AI refers to a category of algorithms that generate new outputs based on the data they have been trained on. The EU Council’s draft of the AI Act added a new Title IA. Although it contains only three articles, it was added to account for situations where AI systems can be used for many different purposes, and for circumstances where generative AI technology is integrated into another system which may itself become high-risk. The EU Council’s text specifies in Article 4b(1) that certain requirements for high-risk AI systems would also apply to generative AI.
The draft AI Act defines a “general purpose AI system” as one that is intended by the provider to perform generally applicable functions, such as image and speech recognition, audio and video generation, pattern detection, question answering, translation and others. This definition applies irrespective of how the system is placed on the market or put into service, including as open-source software. A generative AI system may also be one that is used in a plurality of contexts and is integrated in a plurality of other AI systems.
Providers of generative AI systems will face significant compliance documentation requirements, including the production of technical documentation in accordance with Article 11 of the AI Act, which requires that technical documentation of a high-risk AI system “be drawn up before that system is placed on the market or put into service” and “be kept up-to date.” This technical documentation must demonstrate that the high-risk AI system complies with the requirements set out in the Act. In addition, the documentation must provide national competent authorities – and notified bodies – with all the information necessary to assess the AI system’s compliance with those requirements, in a clear and comprehensive form. This documentation must be kept at the disposal of the national competent authorities for a period ending 10 years after the general purpose AI system is placed on the Union market or put into service in the Union.
In terms of exemptions, Article 4b of the AI Act will not apply where the provider has explicitly excluded all high-risk uses in the instructions for use or information accompanying the general purpose AI system.
LOOKING AHEAD: We await the final text of the AI Act to see whether generative AI will be classified in the same category as “high risk” AI systems. Recent reports suggest that lawmakers may default generative AI to the transparency category but that its ultimate uses in products could be high-risk. The EU Parliament is expected to finalize its position on the AI Act in the coming weeks. The intense nature of the negotiations and the constant reports of progress and compromises suggest a willingness on the part of European Union lawmakers to meet the projected deadlines for the passage of this law through Parliament and on to the Commission.
Brian McElligott is a Partner on Mason Hayes & Curran LLP’s Technology Law team and is Head of the firm’s Artificial Intelligence (AI) team.