A New EU Law is Targeting AI & Other Countries Can Learn From It

August 20, 2024 - By Rita Matulionyte

Around the world, governments are grappling with how best to manage the increasingly unruly beast that is artificial intelligence (“AI”). This fast-growing technology promises to boost national economies and make completing menial tasks easier. But it also poses serious risks, such as AI-enabled crime and fraud, increased spread of misinformation and disinformation, increased public surveillance, and further discrimination against already disadvantaged groups.

The European Union has taken a world-leading role in addressing these risks. In recent weeks, its Artificial Intelligence Act came into force. It is the first law in the world designed to comprehensively manage AI risks – and other countries can learn much from it as they, too, try to ensure AI is safe and beneficial for everyone.

AI: A Double-Edged Sword

AI use is already widespread; it is the basis of the algorithms that recommend music, films, and television shows on applications, such as Spotify or Netflix. It is in cameras that identify people in airports and shopping malls. And it is increasingly used in retail, hiring, education, and healthcare services.

But AI is also being used for more troubling purposes. It can create deepfake images and videos, facilitate online scams, fuel massive surveillance, and violate our privacy and human rights. For example, in November 2021, the Australian Information and Privacy Commissioner, Angelene Falk, ruled that a facial recognition tool, Clearview AI, breached privacy laws by scraping people’s photographs from social media sites for training purposes. However, a Crikey investigation earlier this year found the company is still collecting photos of Australians for its AI database.

Cases like this underscore the urgent need for better regulation of AI technologies. Indeed, AI developers have even called for laws to help manage AI risks.

The EU Artificial Intelligence Act

The European Union’s new AI law came into force on August 1. Crucially, it sets requirements for different AI systems based on the level of risk they pose. The more risk an AI system poses to people’s health, safety, or human rights, the stronger the requirements it must meet. The AI Act contains a list of prohibited high-risk systems. This list includes AI systems that use subliminal techniques to manipulate individual decisions. It also includes unrestricted, real-time facial recognition systems used by law enforcement authorities – similar to those currently used in China.

Other AI systems, such as those used by government authorities or in education and healthcare, are also considered high risk. Although these are not prohibited, they must comply with many requirements. For example, these systems must have their own risk management plan, be trained on quality data, meet accuracy, robustness, and cybersecurity requirements, and ensure a certain level of human oversight.

Lower-risk AI systems, such as various chatbots, need to comply with only certain transparency requirements. For example, individuals must be told they are interacting with an AI bot and not an actual person. AI-generated images and text also need to contain an explanation that they were generated by AI, not by a human. Designated EU and national authorities will monitor whether AI systems used in the EU market comply with these requirements and will issue fines for non-compliance.

Other Countries Are Following Suit

The EU is not alone in taking action to tame the AI revolution; other countries and lawmaking bodies are looking to rein in the risks associated with AI. Earlier this year, the Council of Europe, an international human rights organization with 46 member states, adopted the first international treaty requiring AI to respect human rights, democracy, and the rule of law. Canada is also discussing its AI and Data Bill. Like the EU law, this would set rules for various AI systems, depending on their risks. Meanwhile, instead of a single law, the U.S. government has proposed a number of different laws addressing different AI systems in various sectors.

In Australia, the federal government ran a public consultation on safe and responsible AI. It then established an AI expert group, which is currently working on the country’s first proposed legislation on AI. The government also plans to reform laws to address AI challenges in healthcare, consumer protection, and creative industries.

The risk-based approach to AI regulation, used by the EU and other countries, is a good start when thinking about how to regulate diverse AI technologies. However, a single law on AI will never be able to address the complexities of the technology in specific industries. For example, AI use in healthcare will raise complex ethical and legal issues that will need to be addressed in specialized healthcare laws. A generic AI Act will not suffice.

Regulating diverse AI applications in various sectors is not an easy task, and there is still a long way to go before all countries have comprehensive and enforceable laws in place. Policymakers will have to join forces with industry and communities to ensure AI brings the promised benefits to society – without the harms.


Rita Matulionyte is an Associate Professor in Law at Macquarie University.
