As artificial intelligence adoption grows globally, governments are turning their attention to regulation. In October 2023, President Biden signed the Executive Order (EO) on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence to enhance AI safety and security in the U.S. The order defines AI broadly as any machine-based system that makes predictions, recommendations, or decisions, a definition that extends well beyond generative AI and neural networks. The EO outlines eight key principles, including safety, privacy, and civil rights, affecting both private industry and the federal government, including research conducted or funded by the government.
Key Guiding Principles and Priorities:
1. AI Safety and Security: AI must be thoroughly tested to mitigate risks, with a focus on securing systems in critical areas like cybersecurity and ensuring they comply with federal laws.
2. Promoting Innovation and Competition: The U.S. seeks AI leadership by investing in education and research, fostering a competitive ecosystem that supports small businesses while curbing unfair practices.
3. Supporting American Workers: AI should enhance job opportunities without compromising workers’ rights. The government will update job training and involve workers in AI development to ensure it benefits all and improves workplace conditions.
4. Advancing Equity and Civil Rights: AI policies must advance equity and civil rights, preventing discrimination. The government will enforce regulations to ensure responsible AI use, especially in areas like justice and housing.
5. Protecting Consumers: AI products must follow consumer protection laws. The government will enforce safeguards, especially in critical sectors, while promoting responsible AI use that benefits consumers.
6. Safeguarding Privacy and Civil Liberties: The government will safeguard privacy as AI advances, ensuring lawful and secure data use with privacy-enhancing technologies to protect personal data and civil liberties.
7. Managing AI in Government: The federal government will enhance its AI capabilities by attracting professionals, modernizing infrastructure, and providing training to ensure safe and effective AI use in public services.
8. Global Leadership in AI: The U.S. seeks global leadership in responsible AI by collaborating internationally to manage risks, promote safety, and ensure AI benefits are shared equitably, without worsening inequities or harming human rights.
Regulatory Framework and Collaborative Initiatives:
The EO applies to both private industry and the federal government, including government-funded research. While it does not impose direct regulations on private industry, it requires the U.S. Department of Commerce to set detailed reporting requirements for companies developing certain AI models. Additionally, the order directs the National Science Foundation (NSF) to establish a National AI Research Resource (NAIRR) to promote collaboration between the government and the private sector. In line with the NAIRR Task Force's recommendations, the program will pilot an integration of distributed computational, data, model, and training resources to support AI-related research and development.
Conclusion:
The Executive Order on AI reflects the U.S. government's commitment to fostering safe, secure, and equitable AI development. By establishing comprehensive guiding principles and promoting collaboration between the government and private sector, the EO aims to balance innovation with critical safeguards.
If your organization handles large volumes of data, visit www.tsaaro.com.
News of the Week
1. SDAIA Updates Data Transfer Regulations to Strengthen Protection
On September 1, 2024, the Saudi Data & Artificial Intelligence Authority (SDAIA) introduced amendments to the Data Transfer Regulations to enhance the protection of personal data transferred outside Saudi Arabia. The changes set out standards for assessing foreign data protection levels, including rules on personal data rights, supervisory authorities, and data disclosure. Controllers may now rely on Standard Contractual Clauses (SCCs), Binding Common Rules (BCRs), or accreditation as alternative safeguards where certain PDPL adequacy requirements are not met. Additionally, controllers must conduct risk assessments when transferring personal or sensitive data internationally. SDAIA will review these safeguards every two years or as necessary.
2. Netherlands Fines Clearview AI for Illegal Facial Recognition Database
The Dutch data protection authority fined U.S.-based Clearview AI €30.5 million ($33.7 million) for creating an illegal database of billions of facial images taken from the internet without consent. The watchdog accused Clearview of using this biometric data to sell services to law enforcement without transparency or legal authorization. Despite Clearview’s claim that it operates outside of EU jurisdiction, the authority highlighted the dangers of such databases and emphasized that facial recognition should only be used in exceptional cases, overseen by regulators. Clearview has faced multiple sanctions in Europe for GDPR violations.
3. CrowdStrike Executive to Testify on Faulty Software Update Before U.S. House Subcommittee
Adam Meyers, a senior executive at CrowdStrike, will appear before a U.S. House of Representatives subcommittee on September 24 to address a faulty software update that resulted in a global IT outage. Meyers, the Senior Vice President for Counter Adversary Operations, will testify before the House Homeland Security Cybersecurity and Infrastructure Protection subcommittee, according to the panel’s statement on Friday.
4. Brazil’s Data Transfer Regulation and the Role of ANPD
On August 23, 2024, Brazil's data protection authority, the Autoridade Nacional de Proteção de Dados (ANPD), published its much-anticipated International Data Transfer Regulation. The regulation was developed through a public consultation process, following a call for contributions in May 2022. It provides detailed guidelines on international data transfers, including contractual instruments such as standard contractual clauses (SCCs). The ANPD aims to ensure cross-border data transfers comply with Brazil's General Data Protection Law (LGPD) by assessing the adequacy of foreign protections and promoting transparency in international data handling practices.
5. Meta Faces $3.62 Million Fine After Losing Lawsuit to Brazilian Retailer Havan
Meta Platforms could face a fine of up to $3.62 million after losing a lawsuit filed by Brazilian department store chain Havan. The lawsuit accused Meta of allowing fraudulent paid advertisements that misuse Havan's name to deceive consumers. A judge in Santa Catarina ruled on Monday that Meta must block, within 48 hours, unauthorized ads mentioning either Havan or its owner, billionaire Luciano Hang. Failure to comply could result in fines of up to 20 million reais.