The AI Act: The World's First Comprehensive Law to Regulate AI

24.04.2024

The European Parliament has adopted a legislative proposal on the use of artificial intelligence ('AI') submitted by the European Commission in 2021. This AI Act ('Act') represents a significant step towards strengthening trust in AI, promoting innovation, and protecting fundamental rights, democracy, the rule of law, and environmental sustainability from high-risk AI. According to Dr. Marco Buschmann, the Federal Minister of Justice in Germany, the Act aims to balance innovation and risk protection.


As the world's first comprehensive regulation designed to ensure that AI is safe and reliable while safeguarding people's health, safety and fundamental rights, the Act imposes a number of obligations on both providers and users. In this respect, the Act adopts a risk-based approach and classifies AI systems into four levels of risk: 'unacceptable', 'high-risk', 'limited-risk', and 'minimal-risk'.


First of all, AI systems that manipulate people psychologically or employ social scoring are considered to pose an unacceptable risk and are therefore prohibited. For instance, remote biometric identification ('RBI') systems used for law enforcement in public places, AI systems that scrape facial images from the internet or video footage, and AI systems for emotion recognition in the workplace and educational institutions are, with some exceptions, prohibited due to their unacceptable risk.


Secondly, AI systems that have a negative impact on safety or fundamental rights are considered high-risk and fall into two categories. The first category includes AI systems used in products falling under the EU's product safety legislation. The second category includes AI systems used in certain specified areas, which must be registered in an EU database. For example, AI systems used in educational institutions, workplaces (such as a CV-screening tool that ranks job applicants), law enforcement, essential private and public services (such as credit scoring that denies citizens the opportunity to obtain credit), immigration, justice, and electoral processes will be classified as high-risk. High-risk AI systems will be subject to strict obligations before they can be deployed.


The use of RBI systems for law enforcement purposes in public places, classified as unacceptably risky as described above, is prohibited in principle. Exceptions apply where such systems are necessary to prosecute specific serious crimes, such as human trafficking or terrorism. In addition, such use is subject to authorisation by a judicial or other independent body and to appropriate limitations in terms of time, geographical scope and the databases searched.


It is important to note that this applies only to live surveillance: 'real-time' RBI systems may be used only under strict safeguards, such as limits on their temporal and geographical scope and prior judicial or administrative authorisation.


The third category, limited risk, covers the risks arising from a lack of transparency in the use of AI. The Act imposes specific transparency obligations to ensure that individuals are informed and can trust the technology. For instance, when individuals interact with AI-based chatbots, they must be notified that they are interacting with an AI system.


Finally, minimal-risk AI systems include applications such as AI-enabled video games or spam filters. The Act permits the unrestricted use of such systems.


In line with the risk-based approach of AI legislation, more stringent obligations are imposed as the level of risk increases. AI systems deemed to pose an unacceptable risk are prohibited entirely, while high-risk AI systems are subject to strict technical and organisational requirements. Low-risk applications, on the other hand, are only subject to certain transparency and information obligations.


The Act applies beyond the risk categories described above. 'Foundation models', a form of generative AI such as the models underlying ChatGPT, are also regulated under the Act, and transparency obligations apply to them as well. Companies that use applications built on foundation models must disclose the test procedures and training data used to train the model, unless the model is open-source. Furthermore, companies must provide a summary of the copyrighted material used to train their models. The Act also introduces rules for AI models that generate content such as text and images, and providers must identify AI-generated content. Any AI-generated text published with the intention of informing the public on matters of public interest must carry a label indicating that it was generated by AI. Providers are likewise required to disclose when images, audio or video content have been artificially generated or manipulated, i.e. 'deepfakes'. It is crucial that viewers can distinguish an AI-generated image from a genuine photograph, as this differentiation is necessary to prevent misunderstanding or misrepresentation.


Some issues remain unregulated by the Act, such as the use of AI systems for military applications and national security. Autonomous AI systems pose a significant threat in the military sector because they can make decisions independently of humans; it is therefore crucial to subject them to specific rules.


The EU Parliament's adoption of the AI Act is a significant step towards the regulation of AI and places Europe at the forefront of AI regulation. The EU AI Act has the potential to set a global standard, much as the EU's General Data Protection Regulation did in 2018.


The Act will be published in the Official Journal of the European Union after approval by the Council of the European Union and will enter into force twenty days after its publication, whereas the prohibitions will apply gradually. The European AI Office is expected to produce voluntary codes of practice, adherence to which would create a presumption of conformity and facilitate the adaptation process for providers and users. Providers and users are strongly recommended to review their obligations under the Act before it enters into application, so that they can bring their AI systems into compliance in due time and avoid potential legal consequences.
