The Landmark Agreement On EU AI Act

Article by Tsaaro


In a ground-breaking development, the European Union (EU) has reached a provisional agreement on the first-ever comprehensive rules for artificial intelligence (AI). Following intensive negotiations between the Council presidency and the European Parliament, the agreement marks a significant step forward in regulating AI systems to ensure safety, respect for fundamental rights, and the promotion of innovation within the European Union. The EU AI Act represents a comprehensive legislative framework aimed at regulating artificial intelligence technologies within the European Union, emphasizing transparency, accountability, and ethical considerations.

The AI Act stands as a leading legislative endeavour poised to promote the growth and adoption of secure and reliable AI within the European Union’s single market, encompassing participation from both private and public entities. The core concept involves the regulation of AI, guided by its potential to pose harm to society.  

While additional revisions may be made to the provisions, and co-legislators are anticipated to finalize the text in the first quarter of 2024 through a forthcoming vote, there is a consensus at the political level regarding several crucial aspects related to artificial intelligence. In this article, we will see some of the features of this provisional agreement and what comes next.  


The provisional agreement comprehensively covers various aspects of AI regulation and strikes a delicate balance between fundamental rights and innovation. It covers the following aspects: 

  1. Definition and Scope: 

The provisional agreement draws a clear distinction between AI and simpler software systems. It excludes free and open-source software from the regulatory scope, unless it falls under the categories of a high-risk AI system, a prohibited application, or an AI system with the potential to cause manipulation.

The definition of AI aligns with the one proposed by the OECD, which reads, “an AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that [can] influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.” 

Furthermore, the provisional agreement specifies that the regulation does not apply to areas outside the scope of EU law and should not, under any circumstances, impinge on the national security competences of member states or of entities assigned responsibilities in this realm. The AI Act will likewise not apply to systems used exclusively for military or defence purposes. Similarly, the agreement stipulates that the regulation does not extend to AI systems used solely for research and innovation, or to individuals using AI for non-professional purposes. 

  2. Categorization of AI systems into high-risk categories and identification of prohibited AI practices: 

The agreement establishes a broad protective framework built around a high-risk categorization. AI systems that present only a low likelihood of causing serious violations of fundamental rights or other significant risks fall outside the high-risk regime. Such minimal-risk systems are subject only to limited transparency obligations, such as disclosing that content is AI-generated, allowing users to make informed decisions about further use.

High-risk AI systems are permitted access to the EU market contingent on meeting specified requirements and obligations. Because AI systems are developed and distributed through intricate value chains, the agreement introduces modifications that clarify the allocation of responsibilities and roles among the various actors in those chains, in particular providers and users of AI systems. It also delineates the relationship between responsibilities under the AI Act and those existing under other legislation, such as relevant EU data protection or sector-specific laws. 

Certain applications of AI are considered to carry unacceptable risks, leading to the prohibition of these systems in the EU. The provisional agreement, for instance, prohibits cognitive behavioural manipulation, indiscriminate scraping of facial images from the internet or CCTV footage, emotion recognition in workplaces and educational institutions, social scoring, biometric categorization to deduce sensitive data like sexual orientation or religious beliefs, and specific instances of predictive policing for individuals. 

  3. Law enforcement exceptions: 

Recognizing the distinctive needs of law enforcement authorities and the importance of preserving their capacity to deploy AI in crucial operations, the provisional agreement incorporates specific exceptions for these authorities. For instance, it introduces an emergency procedure enabling law enforcement agencies to deploy a high-risk AI tool in cases of urgency, even if it has not yet undergone the conformity assessment procedure. Nevertheless, a specific mechanism ensures that fundamental rights remain adequately protected against potential misuse of AI systems. These exceptions, accompanied by appropriate safeguards, also acknowledge the need to respect the confidentiality of sensitive operational data related to law enforcement activities. 

  4. A revised framework for governance: 

National competent authorities will ensure that AI systems comply with the rules set out in the Act. They will coordinate at EU level through the European Artificial Intelligence Board, which comprises representatives from the member states and advises the Commission. The AI Office will oversee general-purpose AI models, advised by a panel of independent experts on such models. 

  5. General purpose AI systems and foundation models: 

The regulation establishes general obligations for all general-purpose AI models and imposes more stringent standards on particularly powerful ones that pose broader risks; the AI Office will identify these models. Specific transparency and copyright requirements, including the obligation to disclose information about the data used to train models, apply to all general-purpose AI models. 

The agreement also introduces a fundamental rights impact assessment and increased transparency measures, including the obligation to inform individuals when they are exposed to emotion recognition systems.

  6. Fines: 

The AI Act sets penalties for violations as either a percentage of the offending company's global annual turnover in the preceding financial year or a fixed amount, whichever is higher. For offenses involving banned AI applications, fines reach €35 million or 7% of turnover; violations of the AI Act's other obligations incur €15 million or 3%; and supplying incorrect information results in €7.5 million or 1.5%. Notably, the provisional agreement introduces more proportionate caps on administrative fines for small and medium-sized enterprises (SMEs) and startups in the event of non-compliance with the AI Act. 
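The "whichever is higher" rule above can be illustrated with a short sketch. The tier amounts and percentages are those described in this article's account of the provisional agreement; the function itself is purely illustrative and not part of any official tooling:

```python
def max_fine(global_turnover_eur: float, tier: str) -> float:
    """Illustrative maximum administrative fine under the provisional
    agreement: a fixed amount or a percentage of global annual
    turnover, whichever is higher."""
    tiers = {
        "prohibited_practice": (35_000_000, 0.07),    # banned AI applications
        "obligation_breach": (15_000_000, 0.03),      # other AI Act obligations
        "incorrect_information": (7_500_000, 0.015),  # supplying incorrect information
    }
    fixed_amount, pct = tiers[tier]
    return max(fixed_amount, pct * global_turnover_eur)

# A company with €1 billion in global annual turnover that deploys a
# prohibited application faces up to €70 million, since 7% of turnover
# exceeds the €35 million floor.
print(max_fine(1_000_000_000, "prohibited_practice"))  # 70000000.0
```

For smaller companies the fixed amount dominates: at €100 million turnover, 3% is only €3 million, so a breach of other obligations is still capped at €15 million (subject to the more proportionate limits for SMEs and startups noted above).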

Additionally, the compromise agreement explicitly sets out the right of both natural and legal persons to lodge a complaint with the relevant market surveillance authority in cases of AI Act non-compliance, and ensures that such complaints will be handled in accordance with that authority's dedicated procedures.


The provisional agreement paves the way for the finalization of details in the coming weeks. Member states’ representatives will endorse the compromise text, followed by confirmation and legal-linguistic revision before formal adoption by the co-legislators.

The AI Act will apply two years after its entry into force, with ongoing technical work ensuring a smooth transition into a new era of AI governance within the EU. As the regulatory landscape evolves, the world watches with keen interest, recognizing the potential of this framework to shape global standards for responsible AI development.


From the exploration of the EU’s landmark agreement on the AI Act, it is evident that a new era of responsible AI governance is on the horizon. The comprehensive provisional agreement, shaped through intensive negotiations, demonstrates the EU’s commitment to balancing innovation with safeguards for fundamental rights. 

With its meticulous categorization of AI systems, prohibition of applications posing unacceptable risks, and exceptions for law enforcement, the agreement outlines a robust framework. As the provisions undergo potential revisions and await the final vote in early 2024, there is a shared understanding among co-legislators regarding the critical aspects of AI regulation. As the EU continues to lead in shaping the future of AI governance, the next two years will unfold a transformative journey toward a more secure and innovative digital landscape. 
