Artificial Intelligence Revolution from the EU: AI Act Approved
The European Union (EU) set out to bring this revolutionary technology within a comprehensive legal framework through the Artificial Intelligence Act (AI Act). Accordingly, the European Parliament initiated inter-institutional negotiations on the regulation on 14 June 2023.
On 13 March 2024, the European Parliament approved the AI Act, taking one of the most significant steps yet in shaping the future of artificial intelligence. The AI Act is critical not only because it regulates the use of artificial intelligence technologies, but also in terms of safety and innovation.
In terms of its scope, the AI Act will apply to (i) providers placing artificial intelligence systems on the EU market, regardless of whether they are established in the EU, (ii) users of artificial intelligence systems located in the EU, and (iii) providers and users of artificial intelligence systems located outside the EU, where the output produced by those systems is used in the EU.
We have compiled the key provisions introduced by the AI Act below:
Respect for sensitive data and privacy protection: The AI Act introduces rules to ensure that artificial intelligence systems used in EU countries are safe and respect fundamental rights.
Risk-based approach and regulations: The AI Act sets out rules based on the risk level of artificial intelligence systems; in other words, the higher the risk, the stricter the rules that apply. Examples of high-risk AI use cases include critical infrastructure, education and vocational training, employment, essential private and public services, certain law enforcement systems, and areas that closely concern public order, such as immigration.
Regulations to encourage and support innovation: The AI Act provides for regulatory sandboxes and real-world testing arrangements to be established at the national level and made accessible to small and medium-sized enterprises (SMEs) and entrepreneurs. Accordingly, the AI Act will enable innovative artificial intelligence products to be developed and tested within an ethical framework.
Prohibited practices and ethical principles: The AI Act prohibits a number of practices in order to promote the ethical use of AI-based applications. Some of these prohibited practices are as follows:
- Emotion recognition of individuals in workplaces and educational institutions,
- Predicting criminal behaviour based solely on profiling or an assessment of a person's characteristics,
- Use of applications aimed at manipulating human behaviour,
- Biometric categorisation systems based on sensitive characteristics (e.g. religious or philosophical beliefs, sexual orientation, race), and
- Untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases.
In case of non-compliance with the rules introduced by the AI Act, the fines imposed on companies will vary depending on the nature of the infringement and the size of the company.
The AI Act will enter into force 20 days after its publication in the Official Journal of the EU. It will become fully applicable 24 months after its entry into force, subject to the following staggered deadlines:
- 6 months after entry into force, prohibited AI practices must be phased out;
- 12 months after entry into force, the obligations for general-purpose AI systems will apply;
- 24 months after entry into force, the obligations for high-risk systems defined in the list of high-risk use cases will apply; and
- 36 months after entry into force, the obligations for high-risk systems that are already subject to other EU legislation will apply.
In summary, the European Parliament's approval of the AI Act is one of the most significant steps taken to date in the regulation of AI-based technologies. Beyond being a concrete measure to ensure that AI technologies are developed and used in a safe and ethical manner, the AI Act also sets out a comprehensive compliance roadmap. For this reason, it is critical for companies operating in the field of artificial intelligence to begin aligning their processes with the framework envisaged by the AI Act.
You can access the European Parliament's announcement of 13 March 2024 regarding the approval of the AI Act here.