
First Rules of the EU AI Act Come into Effect: What Does It Mean?

Article by Tsaaro

7 min read

The evolving digital landscape of the 21st century has posed a persistent challenge for governments and organizations as they attempt to keep pace with technological advancement since the internet’s inception. From social media regulation to data protection and consumer rights in e-commerce, legal frameworks have struggled to stay relevant. In the present decade, artificial intelligence (AI) has emerged as an integral part of daily life, influencing industries from healthcare to finance. However, the absence of comprehensive regulation has left the technology largely unchecked. Crafting meaningful legislation for AI is particularly challenging: it requires not only a deep understanding of the technology but also extensive input from diverse stakeholders, as the scope of AI keeps evolving at an astonishing pace.

The European Union (EU) has taken a pioneering step in this direction by introducing the AI Act, the first law of its kind enacted by any governmental or intergovernmental organization. Passed in 2024, the Act sets a precedent for governing AI systems, emphasizing safety, transparency, and fundamental rights. As of February 2, 2025, two of its provisions have come into effect, marking the beginning of a new regulatory era for AI technologies in Europe. The Act aims to balance innovation with ethical deployment, setting the stage for a safer and more transparent digital future, while the initial provisions now in force seek to raise AI literacy and immediately prohibit the most exploitative AI practices.

Key Rules That Have Come into Effect

The EU has set a timeline for the Act to be enforced in phases over the next two years. This has been done specifically so that both organizations and governmental agencies can first grasp the concepts of AI before attempting to govern it or ensure compliance with the Act. Accordingly, Article 4, which imposes an AI literacy requirement and mandates that organizations deploying AI systems ensure their personnel possess adequate knowledge and understanding of AI technologies, is among the first parts of the Act to come into effect. This rule aims to promote the informed and responsible deployment of AI systems by equipping staff with the skills needed to recognize the opportunities and risks associated with AI. It applies to all providers and deployers of AI systems, irrespective of the risk level of the system in question. Organizations must tailor AI literacy programs to the technical knowledge, experience, and roles of their staff. Recital 20 of the AI Act emphasizes the broader scope of AI literacy, suggesting that it should extend beyond staff to all relevant actors in the AI value chain, including affected individuals.

Article 5, which prohibits certain AI practices deemed fundamentally unethical or dangerous, has also come into effect alongside Article 4. This includes a ban on systems that deploy subliminal techniques to manipulate users or that exploit vulnerable groups. Notably, this prohibition is grounded in safeguarding fundamental rights as enshrined in the EU Charter of Fundamental Rights, ensuring AI applications do not compromise human dignity, freedom, or democracy. The AI Act classifies AI systems into four risk categories, summarized below and illustrated in a short sketch after the list:

Unacceptable risk: AI systems posing unacceptable risks to fundamental rights and Union values are prohibited under Article 5 of the AI Act.

High risk: AI systems posing high risks to health, safety, and fundamental rights are subject to a set of requirements and obligations. These systems are classified as ‘high-risk’ in accordance with Article 6 of the AI Act in conjunction with Annexes I and III.

Transparency risk: AI systems posing limited transparency risks are subject to transparency obligations under Article 50 of the AI Act.

Minimal to no risk: AI systems posing minimal to no risk are not regulated, but providers and deployers may adhere to voluntary codes of conduct.
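To make the tiered structure concrete, the short Python sketch below models the four categories and the headline obligation attached to each. It is purely illustrative and not part of the Act itself; the tier names, the mapping, and the `headline_obligation` helper are our own shorthand, and classifying a real system requires the legal tests of Article 6 and the Annexes.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers of the EU AI Act (illustrative labels, not legal terms)."""
    UNACCEPTABLE = "unacceptable"   # prohibited outright (Article 5)
    HIGH = "high"                   # requirements and obligations (Article 6, Annexes I and III)
    TRANSPARENCY = "transparency"   # disclosure obligations (Article 50)
    MINIMAL = "minimal"             # unregulated; voluntary codes of conduct

# Hypothetical one-line summaries; the real obligations are far more
# detailed and depend on the specific system and its intended purpose.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: "Deployment is prohibited in the EU.",
    RiskTier.HIGH: "Conformity assessment, documentation, post-market monitoring.",
    RiskTier.TRANSPARENCY: "Users must be told they are dealing with an AI system.",
    RiskTier.MINIMAL: "No mandatory obligations; voluntary codes of conduct apply.",
}

def headline_obligation(tier: RiskTier) -> str:
    """Return the one-line obligation summary for a given risk tier."""
    return OBLIGATIONS[tier]

for tier in RiskTier:  # prints the tiers in order of decreasing risk
    print(f"{tier.value:>13}: {headline_obligation(tier)}")
```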

AI systems categorized as posing an “unacceptable risk” are those that present a clear threat to fundamental rights and Union values; banning them ensures that fundamental rights, democratic values, and public safety are safeguarded. The practices banned by Article 5 are:

Subliminal Manipulation: AI systems designed to influence individuals’ behaviour without their conscious awareness are prohibited. This includes manipulative advertising techniques that exploit cognitive biases.

Exploitation of Vulnerabilities: AI systems that exploit the vulnerabilities of specific groups, such as children, the elderly, or socio-economically disadvantaged individuals, are banned. This measure aims to safeguard sensitive populations from targeted manipulation.

Social Scoring Systems: AI systems that evaluate or classify individuals based on their social behaviour, personal characteristics, or socio-economic status are prohibited. This ban is inspired by concerns over discriminatory practices associated with social credit systems.

Predictive Policing and Profiling: AI systems that assess personality traits or predict criminal behaviour based on profiling techniques are banned due to potential biases and ethical concerns.

Untargeted Scraping for Facial Recognition Databases: The untargeted collection of facial images from public sources to build identification databases is prohibited.

Biometric Data Collection: Real-time biometric identification in publicly accessible spaces for law enforcement purposes is prohibited in the absence of stringent oversight.

Biometric Categorization Based on Sensitive Characteristics: AI systems that categorise individuals based on gender, ethnicity, race, religion, or sexual orientation are prohibited.

Assessing the Emotional State of a Person: Any AI-based assessment of a person’s emotions at a workplace or educational institution is banned. Exceptions exist for safety features, such as in-car warning systems that monitor a driver’s attention level.

Furthermore, Article 3 provides definitions of all key terms, including AI systems, providers, and deployers, setting the scope of the regulation. It defines an AI system as a machine-based system designed to operate with varying levels of autonomy, which may exhibit adaptive behaviour after deployment. The European Commission’s Guidelines on the Definition of an AI System, released on February 2, 2025, elaborate on this definition, highlighting seven key elements, including machine-based operation, autonomy, and adaptive capabilities. This definition is crucial for identifying which systems fall under the Act’s jurisdiction.

Practical Implications

The enforcement of these rules is expected to have a profound impact on various industries:

General-Purpose AI Models: Providers of general-purpose AI models, such as those powering ChatGPT, will have to comply with strict transparency obligations as outlined in Article 53. This includes maintaining technical documentation of the model’s training and testing processes, enabling downstream providers to understand its capabilities and limitations. Providers are also required to disclose information about the training data used, enhancing transparency and accountability. This could lead to more informed usage while fostering trust among users. The next phases of implementation will bring general-purpose AI models into compliance, as the obligations placed on them by Article 53 come into force on 2 August 2025. By the same date, EU member states must designate competent authorities to oversee the AI Act.

High-Risk AI Systems: AI systems used in sectors like healthcare, finance, and law enforcement are categorized as high-risk under Annex III of the AI Act. These systems must undergo rigorous conformity assessments to ensure they meet safety and accuracy standards. For instance, AI diagnostic tools in healthcare will require detailed technical documentation and post-market monitoring as per Articles 16 and 72, ensuring continuous compliance with safety regulations.

Consumer-Facing AI Systems: Direct interaction between natural persons and AI has increased as organizations deploy more advanced AI systems in their businesses. These organizations will face obligations to be transparent about any deployment of AI when Article 50 of the Act comes into force on 2 August 2026. Providers will have to inform users that they are interacting with an AI system, ensuring clarity and reducing the risk of manipulation. This rule has significant implications for e-commerce platforms and social media companies that rely on personalized recommendations.

AI in Financial Services: Financial institutions leveraging AI for risk assessment or fraud detection must comply with both the AI Act and existing EU financial regulations. Article 74(6) designates financial market surveillance authorities as the primary regulators for AI systems in this sector, ensuring consistency and alignment with existing financial laws.

Data Privacy and GDPR Compliance: The AI Act complements the General Data Protection Regulation (GDPR) by enforcing stricter data usage guidelines for AI systems. Article 59 permits the use of personal data in AI regulatory sandboxes under stringent conditions, ensuring that privacy is maintained while fostering innovation. This synergy between the AI Act and GDPR strengthens data protection frameworks, particularly in sectors dealing with sensitive personal information.

Workforce Training: Companies must invest in comprehensive training programs to enhance their workforce’s AI literacy. This is especially crucial for sectors heavily reliant on AI, such as Fintech, Healthcare, and E-commerce.

Public Awareness: Organizations may need to engage in public awareness campaigns to educate consumers about AI’s role in decision-making processes, ensuring transparency and trust.

Vendor Engagement: The AI literacy requirement impacts interactions with third-party vendors, necessitating transparency and compliance checks throughout the AI supply chain.

Legal Basis and Enforcement Mechanisms

The legal basis for the prohibitions under Article 5 is rooted in safeguarding fundamental rights as outlined in the EU Charter of Fundamental Rights. The prohibitions specifically address practices that exploit vulnerabilities or manipulate users subliminally, thereby protecting human dignity, autonomy, and democratic values. This legal grounding reinforces the ethical deployment of AI technologies across the EU.

To ensure effective enforcement, the AI Act empowers market surveillance authorities with robust monitoring and evaluation capabilities. Article 92 grants the AI Office the power to conduct evaluations and request access to AI systems, including source code, for compliance assessments. This is particularly significant for general-purpose AI models, ensuring they adhere to transparency and safety standards. The next phase of implementation will require all EU member states to designate authorities to oversee the use of AI within their jurisdictions.

Moreover, Article 101 introduces stringent penalties for non-compliance by providers of general-purpose AI models, with fines reaching up to 3% of a company’s total worldwide annual turnover or EUR 15 million, whichever is higher. These penalties are designed to be effective, proportionate, and dissuasive, ensuring that companies prioritize compliance. The enforcement mechanisms laid down in these Articles will come into effect over a period of two years, giving both member states and the providers and deployers of AI time to achieve compliance.
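As a worked example of the “whichever is higher” formula, the sketch below computes the Article 101 fine ceiling for a given turnover. It is a simplified illustration, not legal guidance: the function name is our own, and actual fines are set case by case, with this figure only as the cap.

```python
def article_101_fine_ceiling_eur(worldwide_annual_turnover_eur: float) -> float:
    """Maximum fine for a general-purpose AI model provider under Article 101:
    3% of total worldwide annual turnover or EUR 15 million, whichever is higher.
    Simplified illustration; actual fines are determined case by case."""
    return max(0.03 * worldwide_annual_turnover_eur, 15_000_000.0)

# A provider with EUR 200 million turnover hits the EUR 15 million floor,
# since 3% of 200 million is only EUR 6 million.
print(article_101_fine_ceiling_eur(200_000_000))    # 15000000.0

# A provider with EUR 2 billion turnover faces a EUR 60 million ceiling,
# since 3% of turnover exceeds the EUR 15 million floor.
print(article_101_fine_ceiling_eur(2_000_000_000))  # 60000000.0
```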

Challenges and Future Outlook

While the AI Act sets a global precedent for responsible AI regulation, its implementation presents challenges. Compliance costs, particularly for SMEs and startups, can be significant due to the need for AI literacy programs and thorough compliance checks. Additionally, striking a balance between fostering innovation and ensuring ethical AI use remains a critical challenge. Harmonizing the AI Act with international AI regulations is also essential for facilitating cross-border operations and maintaining global competitiveness.

The Act’s risk-based approach ensures that the most impactful AI systems are subject to the highest level of scrutiny, balancing innovation with ethical deployment. As the first comprehensive AI regulation worldwide, the EU AI Act can serve as a blueprint for other nations, potentially influencing global AI governance frameworks.

Conclusion

The EU AI Act represents a monumental step in governing AI, balancing innovation with ethical and safe deployment. The rules that came into effect on February 2, 2025, mark the beginning of a new regulatory landscape, emphasizing transparency, accountability, and the protection of fundamental rights. Implementing Articles 4 and 5 in the very first phase also signals the EU’s clear priorities: allowing time for AI literacy to grow while immediately prohibiting highly exploitative AI practices that interfere with citizens’ fundamental rights. As industries adapt to these changes, the Act is set to shape the digital future in Europe and beyond, fostering a trustworthy and innovative AI ecosystem.

With its approach, the EU AI Act not only protects users but also sets a global benchmark for AI governance. As the digital landscape continues to evolve, this legislation ensures that technological advancements align with societal values, paving the way for a safer and more ethical digital future.
