AI Security Simplified: Understanding the CSA Guidelines on Securing Artificial Intelligence Systems

Article by Tsaaro

7 min read

At Singapore International Cyber Week 2024, the Cyber Security Agency of Singapore (CSA) released the Guidelines on Securing Artificial Intelligence Systems (the Guidelines), accompanied by a Companion Guide (the Guide). Recognising how rapidly the AI industry is evolving, the CSA issued the Guidelines to help system owners adopt AI securely by addressing cybersecurity risks, such as adversarial attacks and data breaches, that could lead to harmful outcomes. The Guide complements the Guidelines by providing practical security measures and detailing optional practices to support system owners in managing AI securely.

Scope

The Guidelines aim to support system owners who are adopting or considering AI by identifying risks specific to AI systems and suggesting mitigation measures across the entire lifecycle of the AI system. The Guide elaborates on the principles of the Guidelines with practical suggestions and security measures. Neither document is mandatory or universally applicable; the recommendations should be adapted to each organisation's specific use case and stage of AI development.

The scope of the Guidelines and the Guide is restricted to cybersecurity risks associated with AI systems and excludes other aspects like AI safety, transparency and ethics. Furthermore, the Guide is structured to be a living document that will be regularly updated to reflect new developments and expert insights in the field of AI.

How to Secure AI

Lifecycle Approach

The Guidelines highlight five key stages in the lifecycle of an AI system: planning and design, development, deployment, operations and maintenance, and end of life. They emphasise that system owners should consider each stage separately and take a lifecycle approach when assessing security risks.

Risk Assessment

The Guidelines recommend starting with a thorough risk assessment so that security measures can be tailored to the specific AI system and use case. Organisations should integrate continuous monitoring and feedback into their AI security strategy.

The CSA has suggested a four-step framework to identify risks and customise security or control measures.

  • Step 1: Conduct risk assessments focused specifically on the security risks associated with the AI system, drawing on industry best practices or the organisation's existing risk assessment frameworks.
  • Step 2: After a comprehensive risk assessment, prioritise the areas that must be addressed, based on the identified risks, their impact and/or the available resources (a minimal scoring sketch follows this list).
  • Step 3: Identify and implement measures to secure the AI system, applying the relevant control measures across the AI lifecycle. The Guide provides an extensive list of possible control measures for each stage of the lifecycle.

For example: at planning and design, raise awareness of security risks and conduct risk assessments; during development, secure the supply chain, weigh model security trade-offs, protect AI assets, and secure the development environment; at deployment, secure the infrastructure, establish incident management, and release responsibly; during operations, monitor inputs and outputs, apply secure-by-design updates, and establish vulnerability disclosure processes; and at end of life, ensure proper disposal of data.

  • Step 4: After implementing the relevant security measures, evaluate the residual risks and decide whether they are acceptable or require further treatment.
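
As a minimal, illustrative sketch of Steps 1 and 2 (not a method prescribed by the CSA), the snippet below scores hypothetical AI-specific risks by likelihood and impact and sorts them for prioritisation. The risk entries and the 1-to-5 scales are assumptions for demonstration only.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    likelihood: int  # 1 (rare) to 5 (almost certain) -- illustrative scale
    impact: int      # 1 (negligible) to 5 (severe)   -- illustrative scale

    @property
    def score(self) -> int:
        # Simple likelihood x impact scoring, a common risk-matrix convention
        return self.likelihood * self.impact

# Hypothetical AI-specific risks; real entries would come from the
# organisation's own assessment (Step 1).
register = [
    Risk("Training-data poisoning via third-party dataset", 3, 5),
    Risk("Prompt injection against a deployed model endpoint", 4, 4),
    Risk("Model theft through unsecured artefact storage", 2, 4),
]

# Step 2: prioritise by score, highest first
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"{risk.score:>2}  {risk.name}")
```

A real register of this kind would then feed Step 3, mapping each prioritised risk to the lifecycle controls listed in the Guide.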

Guidelines for Each Lifecycle Stage

  • Planning and Design
    • Raise Awareness and Competency: Organisations should educate and train their employees about the security risks associated with AI so that informed decisions are made about its adoption, use and deployment.
    • Conduct Security Risk Assessments: Implement a security risk management system aligned with industry standards and best practices to identify key risks, prioritise them and address them appropriately.
  • Development
    • Secure the Supply Chain: Assess and monitor security risks across the AI supply chain, which includes training data, APIs and AI models, and ensure that suppliers adhere to adequate security policies and risk management practices.
    • Consider Security Benefits and Trade-offs When Selecting an AI Model: Before an AI system is developed and deployed, evaluate the characteristics and risks of each candidate model type and select the one best suited to the use case, weighing factors such as explainability, complexity, sensitivity of the training data and other risk factors.
    • Identify, Track and Protect AI-related Assets: Implement processes to track, authenticate and secure AI-related assets such as models, data and prompts, recognising their strategic value and protecting data and intellectual property from potential threats (a minimal integrity-check sketch follows this list).
    • Secure the Development Environment: Apply industry-standard infrastructure and security principles, such as access controls, monitoring, environment segregation and secure-by-default configurations, to the development environment to prevent security breaches.
  • Deployment
    • Secure Deployment Infrastructure: As with the development environment, apply industry-standard security practices to secure the environment in which the AI system is deployed.
    • Establish Incident Management Procedures: Develop incident response plans, including escalation and remediation strategies, to address the range of unpredictable behaviours an AI system may exhibit.
    • Responsibly Release AI: Before releasing an AI system, ensure that its known vulnerabilities have been duly considered and addressed, and establish post-deployment monitoring procedures.
  • Operations and Maintenance
    • Once an AI system has been deployed, it and its operating procedures must be continuously monitored and updated to detect anomalies, adjust operational protocols and address evolving threats.
  • End of Life
    • When an AI system is discontinued or retired, it is essential to secure the disposal or decommissioning of the system and its related data to prevent unauthorised access or data leaks. Data must be disposed of in accordance with applicable data protection laws, such as Singapore's Personal Data Protection Act (PDPA), and related regulations and guidelines.
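
To make the asset-protection point above concrete (see the "Identify, Track and Protect" item), here is a minimal sketch that verifies model artefacts against a manifest of known-good SHA-256 hashes before loading. The manifest format and file names are illustrative assumptions; the Guide recommends protecting AI assets but does not prescribe a specific mechanism.

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream the file through SHA-256 so large model files fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_artifacts(manifest_path: Path) -> bool:
    """Compare each artefact's hash against the manifest; flag any mismatch."""
    # Hypothetical manifest format: {"model.bin": "<hex digest>", ...}
    manifest = json.loads(manifest_path.read_text())
    ok = True
    for name, expected in manifest.items():
        if sha256_of(Path(name)) != expected:
            print(f"Tampered or corrupted artefact: {name}")
            ok = False
    return ok

if __name__ == "__main__":
    # Hypothetical manifest committed alongside the model artefacts
    if not verify_artifacts(Path("model_manifest.json")):
        raise SystemExit("Refusing to load unverified model assets")
```

The same check applies at the supply-chain stage: hashing a model or dataset downloaded from a third party against a publisher-supplied digest gives early warning of tampering in transit.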

Technical Testing and System Validation

The Guide, through its annexures, provides an insight into AI testing, which is essential for security by design and privacy by design and for ensuring that the AI system meets the needs and expectations of end users. Testing helps expose vulnerabilities so that safeguards can be implemented to mitigate them.

There are three main categories of AI testing, each with different levels of access to the internal workings of the AI system:

  • White-Box Testing: Full access to source code and internal logic, allowing for detailed testing.
  • Grey-Box Testing: Partial access to algorithms, enabling focused testing of functionality without deep code analysis.
  • Black-Box Testing: Treats the AI system as a complete unit, focusing on inputs, outputs and expected behaviours without knowledge of internal workings (illustrated in the sketch below).
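
As an illustration of the black-box approach, the sketch below probes a model purely through its input/output interface, measuring how often small random perturbations change its prediction. The `predict` function is a hypothetical stand-in for whatever inference API the deployed system actually exposes.

```python
import numpy as np

def predict(x: np.ndarray) -> int:
    """Hypothetical black-box inference call; in practice this would be
    an HTTP request or SDK call to the deployed model."""
    return int(x.sum() > 0)  # toy stand-in model

def perturbation_stability(x: np.ndarray, trials: int = 100, eps: float = 0.05) -> float:
    """Fraction of small random perturbations that leave the prediction unchanged."""
    rng = np.random.default_rng(0)
    baseline = predict(x)
    unchanged = sum(
        predict(x + rng.uniform(-eps, eps, size=x.shape)) == baseline
        for _ in range(trials)
    )
    return unchanged / trials

sample = np.array([0.2, -0.1, 0.05])
print(f"Stability under noise: {perturbation_stability(sample):.0%}")
```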

Furthermore, the Guide lists AI testing tools in three categories: offensive AI testing tools, which identify vulnerabilities; defensive AI testing tools, which aim to enhance system resilience; and governance AI testing tools, which assess the trustworthiness, fairness and transparency of AI systems.

Defending AI Models 

The Guide emphasises that AI models are inherently fragile and vulnerable to adversarial attacks, and it recommends key techniques for enhancing their robustness, which can be applied individually or in combination (a sketch of adversarial training follows this list):

  • Adversarial Training: Incorporating adversarial samples into training datasets to improve resilience.
  • Ensemble Models: Using multiple models to enhance detection and prevent bypass attacks.
  • Defensive Distillation: Training a “student” model using soft labels from a “teacher” model, which makes it harder for adversarial attacks to succeed.
  • Explainability: Utilising Explainable AI (XAI) techniques to clarify the rationale behind model predictions, which can help identify vulnerabilities and improve model robustness.
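
As a hedged sketch of the first technique, adversarial training, the PyTorch loop below mixes FGSM-perturbed samples into each training batch. The model, data and perturbation budget are illustrative assumptions; the Guide names the technique but does not prescribe an implementation.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
eps = 0.1  # FGSM perturbation budget (illustrative)

def fgsm(x, y):
    """Fast Gradient Sign Method: perturb inputs along the loss gradient sign."""
    x = x.clone().detach().requires_grad_(True)
    loss_fn(model(x), y).backward()
    return (x + eps * x.grad.sign()).detach()

# Toy data standing in for a real dataloader
x = torch.randn(64, 10)
y = torch.randint(0, 2, (64,))

for epoch in range(5):
    x_adv = fgsm(x, y)                   # craft adversarial samples
    batch_x = torch.cat([x, x_adv])      # train on a clean + adversarial mix
    batch_y = torch.cat([y, y])
    optimizer.zero_grad()
    loss = loss_fn(model(batch_x), batch_y)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```

Training on the clean-plus-adversarial mix typically trades some clean accuracy for robustness, which is exactly the kind of security trade-off the Guidelines ask system owners to weigh when selecting a model.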

Furthermore, beyond defending the models themselves, organisations should focus on the following to protect AI systems from further breaches and attacks:

  • Continuous monitoring and threat intelligence (a monitoring sketch follows this list)
  • Implementing industry security best practices
  • Conducting user awareness training to counter social engineering
  • Regular security testing and vulnerability assessments
  • Investing in security automation to streamline processes
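
As a minimal sketch of the continuous-monitoring point (the first item above), the snippet below flags inference inputs whose features drift beyond a z-score threshold from a training-time baseline. The baseline statistics and threshold are illustrative assumptions; in practice such alerts would feed the incident management procedures established at deployment.

```python
import numpy as np

# Baseline statistics captured from training data (illustrative values)
baseline_mean = np.array([0.0, 5.0, 1.2])
baseline_std = np.array([1.0, 2.0, 0.4])
Z_THRESHOLD = 4.0  # alert when any feature drifts this many std devs away

def check_input(x: np.ndarray) -> bool:
    """Return True if the input looks anomalous relative to the baseline."""
    z = np.abs((x - baseline_mean) / baseline_std)
    return bool((z > Z_THRESHOLD).any())

for incoming in [np.array([0.1, 4.8, 1.1]), np.array([9.0, 5.0, 1.2])]:
    if check_input(incoming):
        print(f"ALERT: anomalous input {incoming}")  # escalate per incident plan
```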

Conclusion

The CSA Guidelines on Securing Artificial Intelligence Systems provide a crucial framework for organisations aiming to adopt or deploy AI securely and responsibly. By emphasising a lifecycle approach to risk management, the Guidelines ensure that security risks are adequately addressed at every stage. Through thorough risk assessments, continuous monitoring and tailored security measures, organisations can protect their AI systems from security threats and other potential risks.

Moreover, the Companion Guide enriches these Guidelines by offering practical security measures and strategies to mitigate risks specific to AI technologies. The Guide additionally simplifies the principles and suggestions by providing detailed walkthrough examples of how the suggested control and security measures can be effectively implemented.
