Introduction
South Korea’s AI Basic Act represents a groundbreaking step in the country’s efforts to both foster rapid AI innovation and ensure the responsible, ethical development of artificial intelligence. Enacted by the National Assembly on December 26, 2024, and set to take effect in January 2026, the Act consolidates a range of previously fragmented regulatory proposals into a unified framework. It reflects South Korea’s ambition to become a global leader in AI technology while safeguarding human rights and societal values.
A Unified Vision for AI Development
The primary objective of the AI Basic Act is to create a coherent legal framework that not only promotes AI research and industry growth but also establishes robust measures for risk management and user protection. By consolidating 19 separate regulatory proposals into a single legislative instrument, the Act aims to reduce corporate uncertainty and stimulate large-scale public and private investment in AI. Authorities, including the Ministry of Science and ICT, will be responsible for developing detailed subordinate regulations and policies to support this vision. This unified approach is intended to streamline compliance obligations for both domestic and international AI developers operating in South Korea.
A Risk-Based Regulatory Approach
Central to the Act is its risk-based categorization of AI systems. The legislation differentiates between “high-impact AI” and “generative AI,” among other classifications. High-impact AI systems are those used in critical sectors such as healthcare, public safety, and employment, where failures or misjudgements could have significant consequences on human rights, safety, and public welfare. Providers of these systems must implement comprehensive risk management measures, including advance user notification, human oversight, and rigorous impact assessments. In contrast, generative AI systems—which are capable of producing creative outputs like text, images, or sound—are subject to labelling requirements to ensure that users are aware of the AI-generated nature of outputs. This nuanced approach allows the government to balance innovation with the need to mitigate risks associated with AI technologies.
Comparison with the EU AI Act
Like its European counterpart, the South Korean law adopts a risk-based framework and imposes specific obligations on high-impact AI systems, such as mandatory risk assessments, transparency in operations, and user notification.
The EU AI Act establishes detailed risk tiers that classify AI systems into distinct categories (unacceptable risk, high risk, limited risk, and minimal risk) and specifies tailored obligations for various roles in the AI value chain, including providers, deployers, importers, and distributors. These provisions mandate specific requirements in risk management, transparency, and human oversight. In contrast, South Korea’s AI Basic Act primarily distinguishes between high-impact AI (defined as AI systems that pose significant risks to, or impacts on, human life, physical safety, or fundamental rights) and generative AI (defined as AI systems designed to mimic the structure and characteristics of input data to generate outputs such as text, sound, images, videos, and other creative content). Rather than prescribing exhaustive, role-specific obligations, the Korean statute sets broad operative principles designed to be refined over time through subordinate regulations, consolidating several earlier regulatory proposals into a single law.
Implications for Domestic and International Stakeholders
One of the Act’s distinctive features is its extraterritorial reach. Non-Korean AI providers that exceed certain user or revenue thresholds will be required to designate a domestic representative in South Korea. This measure ensures that foreign companies remain accountable to local regulatory standards and can be directly engaged by Korean authorities for compliance and reporting purposes. For domestic companies, the Act provides a stable regulatory environment that could spur investment in AI research and infrastructure, including data centres and talent development initiatives.
Non-compliance with the AI Basic Act may result in fines of up to ₩30 million. The law thus signals the government’s commitment to creating a safe and competitive ecosystem for AI, one expected to enhance the country’s competitiveness in the global market.
Looking Ahead: A Catalyst for Change
The AI Basic Act is poised to be a catalyst for significant change in both the legal and industrial landscapes of South Korea. By promoting transparency, fostering innovation, and setting up risk management protocols, the Act aims to ensure that AI technologies are developed and deployed in a manner that is safe, ethical, and aligned with societal needs. The transitional period before enforcement—spanning one year from the Act’s promulgation—will likely see the emergence of detailed guidelines and additional regulations that refine and operationalize the law’s provisions.
The law may also serve as a model for other jurisdictions seeking a balanced approach between regulation and innovation. The collaborative efforts between government bodies, industry leaders, and academic experts will be crucial in navigating the challenges ahead and ensuring that AI remains a force for good.
South Korea’s AI Basic Act combines ambitious industry promotion with essential safeguards, setting a regulatory framework that reflects both global influences and local priorities. As future subordinate legislation is enacted, the Act’s impact on the AI sector is expected to grow, shaping the way technology is developed and utilized in a rapidly changing digital era.
If your organization is dealing with copious amounts of data, do visit www.tsaaro.com.
Tsaaro Consulting, in collaboration with PSA Legal Counsellors and the Advertising Standards Council of India, has authored a whitepaper titled ‘Navigating Cookies: Recalibrating Your Cookie Strategy in Light of the DPDPA’. If you want to learn more about cookie consent management, read the whitepaper by clicking here.
The Ministry of Electronics and Information Technology (MeitY) has released the Draft DPDP Rules, 2025 for Public Consultation!
News of the Week
1. Health Data Protection in New York to Get Bolstered

In a major development for businesses handling health data, New York lawmakers have introduced the New York Health Information Privacy Act (NYHIPA). The legislation seeks to restrict the sale of sensitive health information, covering data from reproductive health applications, menstrual tracking, and other digital health tools. Proponents argue that the measure is essential to shield consumers from exploitation by tech giants and misuse by law enforcement, particularly where data might be used to prosecute or stigmatize patients. The law also imposes stricter consent requirements and improved data handling protocols. If enacted, it will mark a significant shift toward stronger digital privacy standards for citizens of New York State.
2. UK Government Orders Apple to Unlock Encrypted Cloud Data

The UK government has mandated that Apple unlock its encrypted iCloud service, forcing the tech giant to create a back door into its cloud storage. This controversial order, issued under the UK Investigatory Powers Act, aims to aid law enforcement in accessing critical data for criminal investigations. Critics warn that such measures risk weakening global security and setting a dangerous precedent for privacy violations. Privacy advocates argue that forced decryption undermines consumer trust, while industry experts caution it may expose billions of users to cyber threats. The order has ignited debate over balancing national security with individual privacy and public safety.
3. India to Host Next AI Summit

India is set to host the next international artificial intelligence summit, following its successful co-chairing of the current AI Action Summit in Paris. Prime Minister Narendra Modi confirmed the announcement, emphasizing India’s commitment to fostering inclusive, sustainable AI governance. During the summit, Modi highlighted India’s achievements in building affordable digital infrastructure for 1.4 billion people and its growing status as a global AI talent hub. The next summit aims to address critical issues such as ethical standards, security, and the needs of the Global South, ensuring that AI development drives innovation and benefits all segments of society worldwide.
4. Italy Fines Trento €50,000 for AI Surveillance Violations

Italy’s Data Protection Authority fined the city of Trento €50,000 for using AI-driven surveillance systems in violation of privacy rules. The fine concerns two EU-funded projects that employed advanced artificial intelligence for urban monitoring. The authority found that Trento failed to adequately anonymize collected video, audio, and social media data, and improperly shared sensitive information with third parties. Moreover, the municipality did not perform a required data protection impact assessment. Trento, the first Italian city penalized for AI surveillance misuse, is considering an appeal, highlighting the ongoing challenge of regulating emerging technologies while effectively protecting citizen privacy.
5. EU Abandons ePrivacy Regulation and AI Liability Directive

On February 12, 2025, the European Commission announced it would abandon two key proposals: the ePrivacy Regulation and the AI Liability Directive. The abandoned ePrivacy Regulation, intended to modernize privacy protections for online communications, would have replaced the outdated 2002 directive but failed to secure sufficient legislative support. Similarly, the AI Liability Directive, aimed at establishing a framework for compensating harms caused by artificial intelligence, was withdrawn amid predictions of persistent deadlock among EU lawmakers and intense industry lobbying.