The National Institute of Standards and Technology (NIST) has emerged as a leader in addressing the potential risks associated with Artificial Intelligence (AI). In response to the National Artificial Intelligence Initiative Act of 2020, NIST developed the AI Risk Management Framework (AI RMF). This framework serves as a voluntary resource for organizations involved in the design, development, deployment, or use of AI systems. Its primary objective is to equip these organizations, or “AI actors” as defined by the OECD, with a structured approach to managing risks and fostering the responsible development and use of AI.
Core Principles and Design
The AI RMF is built on several key principles. First, it is voluntary. Organizations are free to adopt the framework in its entirety or adapt specific elements to their unique needs and AI systems. Second, it prioritizes the preservation of rights. The framework acknowledges the importance of protecting privacy, civil liberties, and other fundamental rights throughout the AI lifecycle. Third, the framework is non-sector-specific. This means it can be applied by organizations of all sizes and across various industries. Finally, the AI RMF is designed to be adaptable to different use cases. It acknowledges the diverse applications of AI and provides a flexible structure for managing risks specific to each scenario.
The framework itself is designed around four core functions: Govern, Map, Measure, and Manage (GMMM). These functions provide a cyclical process for organizations to implement and continually improve their AI risk management practices.
The GMMM Process
1. Govern: This function establishes the foundation for responsible AI development and deployment. It emphasizes the importance of leadership commitment, clear governance structures, and robust policies for managing AI risks. Here, organizations define their risk tolerance levels, establish accountability mechanisms, and ensure alignment with relevant legal and ethical principles.
2. Map: The Map function focuses on understanding the AI system and its potential risks. This involves creating a detailed profile of the AI system, including its purpose, data sources, algorithms used, and intended outputs. Organizations then identify potential risks associated with the system, such as bias, security vulnerabilities, or privacy concerns.
3. Measure: Following the identification of risks, the Measure function delves into assessing the likelihood and potential impact of each risk. This may involve employing various techniques like risk scoring, threat modeling, or vulnerability assessments. By quantifying risks, organizations can prioritize their mitigation efforts and allocate resources effectively.
4. Manage: The Manage function centers on implementing controls to mitigate identified risks. This might involve using bias detection techniques in the training data, implementing robust security protocols, or developing clear communication strategies to address potential societal impacts. The framework emphasizes the importance of continual monitoring and improvement, encouraging organizations to revisit and adapt their risk management practices as needed.
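To make the Measure function concrete, here is a minimal risk-scoring sketch of the kind the framework alludes to. The AI RMF does not prescribe any particular scoring method; the risk names and the 1–5 likelihood/impact scales below are hypothetical examples.

```python
# Illustrative risk scoring: rate each identified risk by likelihood and
# impact on a 1-5 scale, multiply to get a score, then rank for triage.
# All risk entries and scale values here are invented for illustration.

risks = [
    {"name": "training-data bias", "likelihood": 4, "impact": 5},
    {"name": "model extraction attack", "likelihood": 2, "impact": 4},
    {"name": "PII leakage in outputs", "likelihood": 3, "impact": 5},
]

for r in risks:
    r["score"] = r["likelihood"] * r["impact"]

# Highest-scoring risks get mitigation attention (and resources) first.
prioritized = sorted(risks, key=lambda r: r["score"], reverse=True)
for r in prioritized:
    print(f'{r["name"]}: {r["score"]}')
```

Even a simple multiplicative score like this gives organizations a defensible basis for allocating mitigation effort, which is the point of the Measure function.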
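As a small illustration of the bias-detection techniques mentioned under the Manage function, the sketch below computes a demographic parity gap: the difference in positive-outcome rates between two groups. This is one of many possible fairness checks, not one the AI RMF mandates, and the group outcomes shown are synthetic.

```python
# Illustrative bias check: compare the rate of positive outcomes (1s)
# between two groups. A gap far from 0 flags a disparity worth
# investigating before deployment. The data below is synthetic.

def positive_rate(outcomes):
    """Fraction of outcomes that are positive (coded as 1)."""
    return sum(outcomes) / len(outcomes)

group_a = [1, 1, 0, 1, 0, 1]  # e.g. approvals for applicants in group A
group_b = [1, 0, 0, 0, 1, 0]  # e.g. approvals for applicants in group B

gap = positive_rate(group_a) - positive_rate(group_b)
print(f"demographic parity gap: {gap:.2f}")
```

In practice such a check would run over real model outputs as part of the continual monitoring the framework encourages, with thresholds set by the organization's risk tolerance (defined under Govern).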
Companion Resources and Benefits
The AI RMF is accompanied by a set of valuable resources designed to facilitate its implementation. These include the AI RMF Playbook, which offers practical guidance on applying the framework’s functions, and an explainer video providing a concise overview. Additionally, NIST maintains a Trustworthy and Responsible AI Resource Center that offers further guidance, case studies, and best practices.
Conclusion
By adopting the AI RMF, organizations can reap significant benefits. The framework promotes the development of trustworthy AI systems, minimizing the potential for negative societal impacts. It fosters transparency and accountability, building trust with stakeholders and users. Furthermore, the framework can contribute to improved decision-making by organizations, allowing them to identify and address risks proactively. Ultimately, the AI RMF serves as a valuable tool for organizations navigating the evolving landscape of AI, empowering them to harness the power of this technology responsibly.
If your organisation deals with copious amounts of data, do visit www.tsaaro.com.
Privacy News
1. New Zealand tightens Privacy Regulations
Michael Webster, the Privacy Commissioner of New Zealand, has advocated for the tightening of privacy legislation and a boost in financial resources to address rising privacy issues, according to RNZ. He pointed out that organizations currently face minimal repercussions for breaches of privacy laws, noting that there has been a 79% surge in complaints to the Privacy Commissioner’s Office over the last year.
2. CSRB Recommends All Providers of Cloud Services to Strengthen Security Structures
After conducting an independent investigation into the Microsoft Exchange Online data breach during the summer of 2023, which led to the exposure of emails from officials in the EU and U.S. government, the U.S. Cyber Safety Review Board (CSRB) suggested that all providers of cloud services should enhance their security measures and strengthen their defenses against possible cyber-attacks. Additionally, the Cybersecurity and Infrastructure Security Agency (CISA) intends to create guidelines for significant cloud service providers, aligning with the recommendations made by the CSRB.
3. Ransomware Gang Demanded $30 Million From a Las Vegas Casino After Data Breach
The Wall Street Journal disclosed that the ransomware collective Star Fraud demanded a ransom of USD 30 million for data they pilfered in a cyberattack on MGM Resorts International in September 2023. This cyber incursion compromised the personal information of consumers and disrupted the casino’s operations, resulting in a revenue shortfall of approximately USD 100 million.
https://www.wsj.com/tech/cybersecurity/mgm-hack-casino-hackers-group-0366c641
4. MLB experiments with facial recognition-based ticketing system
ESPN has reported that certain Major League Baseball teams are set to pilot a facial recognition technology for entry, offering fans the option to use their face for verification instead of a physical ticket. This innovative system works by examining a fan’s facial features and transforming the selfie into a distinct numerical code. This code is then linked to the tickets they’ve bought and matched against the numerical code produced by cameras at the stadium.
https://www.espn.in/mlb/story/_/id/39827008/mlb-facial-recognition-admission-privacy-technology
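The article does not describe the MLB system's underlying algorithm, but a common approach to this kind of matching is to reduce a face image to a numeric vector (an embedding) at enrollment, then compare it against the vector produced by the camera at the gate. The sketch below is purely illustrative; the vectors and the acceptance threshold are toy values, not anything from the actual system.

```python
# Illustrative sketch of face-code matching via cosine similarity.
# The embedding vectors and threshold below are invented toy data;
# the real MLB system's method is not public.
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

enrolled = [0.12, 0.80, 0.55, 0.10]  # code stored when the fan registered
at_gate = [0.11, 0.78, 0.57, 0.12]   # code produced by the stadium camera

THRESHOLD = 0.95  # hypothetical acceptance threshold
match = cosine_similarity(enrolled, at_gate) >= THRESHOLD
print("admit" if match else "reject")
```

Storing only a numeric code rather than the photo itself is often presented as a privacy measure, though the code still functions as a biometric identifier, which is why such systems draw regulatory attention.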
5. Michael McEvoy's Term as Information and Privacy Commissioner Extended
Michael McEvoy, the Information and Privacy Commissioner for British Columbia, will continue in his role for a temporary period until a new commissioner takes over. McEvoy, who has completed a six-year tenure as the IPC, had been scheduled to finish his term on March 31.