The Cyber Security Agency of Singapore (“CSA”) has launched Guidelines on Securing Artificial Intelligence (“AI”) Systems (“Guidelines”) and a Companion Guide on Securing AI Systems (“Companion Guide”).
A key point raised by the Guidelines is the vulnerability of AI systems to novel attacks, particularly adversarial machine learning. These attacks can distort the behaviour of AI models, leading to inaccurate, biased, or harmful outputs. Common adversarial machine learning attacks include injecting malicious data into training datasets to corrupt the model (data poisoning), manipulating inputs to deceive the model into making incorrect predictions (evasion attacks), and probing the model to expose sensitive data or steal the model itself (inference and extraction attacks). The Guidelines emphasise that securing AI systems requires more than traditional cybersecurity measures: it calls for a specialised approach that addresses AI-specific risks.
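To make the evasion category concrete, the sketch below mounts a textbook Fast Gradient Sign Method (FGSM) attack against a toy PyTorch classifier: an imperceptibly small, gradient-guided change to an input can flip the model's prediction. Everything here (the model, the random input, the epsilon bound) is an illustrative assumption, not anything prescribed by the Guidelines.

```python
# A minimal sketch of an evasion attack using the Fast Gradient Sign
# Method (FGSM). The toy model, random "image", and epsilon value are
# illustrative stand-ins, not anything drawn from the Guidelines.
import torch
import torch.nn as nn

def fgsm_perturb(model, x, label, epsilon=0.03):
    """Return x plus a small perturbation that pushes the model
    towards misclassifying it."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), label)
    loss.backward()
    # Step in the direction that increases the loss the most.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0, 1).detach()

# Illustrative usage with a toy classifier.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(1, 1, 28, 28)    # stand-in for a normalised image
label = torch.tensor([3])       # its true class
x_adv = fgsm_perturb(model, x, label)
print((x_adv - x).abs().max())  # perturbation is bounded by epsilon
```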
Lifecycle Approach to AI Security
The CSA recommends addressing security risks at each stage of the AI lifecycle to ensure a holistic defence against threats. The five key stages are:
- Planning and Design:
  - Raise Awareness and Competency: Provide training and guidance on AI security risks to all personnel, including developers, system owners, and senior leaders.
  - Conduct Security Risk Assessments: Perform comprehensive risk assessments to identify key risks and priorities, using industry standards and best practices.
- Development:
  - Secure the Supply Chain: Assess and monitor the security of the AI supply chain, including training data, models, APIs, and software libraries. Use tools such as software bills of materials (SBOMs) and vulnerability databases (see the first sketch after this list).
  - Evaluate Security Benefits and Trade-offs: Consider the security implications of different AI models, such as machine learning, deep learning, and generative models.
  - Identify, Track, and Protect AI-related Assets: Implement processes to track, authenticate, version control, and secure AI assets, including models, data, prompts, and logs (see the second sketch after this list).
  - Secure the Development Environment: Apply infrastructure security principles, such as access controls, logging/monitoring, and secure-by-default configurations.
- Deployment:
  - Secure the Deployment Infrastructure: Implement standard infrastructure security measures, including access controls, logging/monitoring, and firewalls.
  - Establish Incident Management Procedures: Develop incident response plans tailored to AI systems, covering a range of potential incidents from minor malfunctions to critical disruptions.
- Operations and Maintenance:
  - Monitor AI System Inputs and Outputs: Continuously monitor and log inputs and outputs to detect anomalies and potential attacks (see the third sketch after this list).
  - Adopt a Secure-by-Design Approach to Updates: Ensure that updates and continuous learning processes consider and manage associated risks.
  - Establish a Vulnerability Disclosure Process: Implement a feedback mechanism for users to report potential vulnerabilities.
- End of Life: Securely dispose of data and model artefacts in accordance with industry standards and regulations to prevent unauthorised access.
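To make a few of these recommendations concrete, the three sketches below are generic Python illustrations; none of the file names, digests, thresholds, or tooling choices come from the Guidelines or the Companion Guide. The first sketch, for the supply-chain step, verifies a downloaded model artefact against a pinned SHA-256 digest before it is loaded:

```python
# Sketch: verify a downloaded model artefact against a pinned SHA-256
# digest before loading it. The file name and digest are placeholders.
import hashlib
from pathlib import Path

PINNED_SHA256 = "0" * 64  # placeholder; pin the publisher's real digest

def verify_artifact(path: Path, expected: str) -> None:
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    if digest != expected:
        raise RuntimeError(f"Integrity check failed for {path}")

artifact = Path("model.safetensors")  # placeholder path
if artifact.exists():
    verify_artifact(artifact, PINNED_SHA256)
```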
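The second sketch, for the asset-tracking step, records a timestamped, hashed manifest of AI assets (model weights, prompt templates, and so on) so that later tampering can be detected; the asset paths are again placeholders:

```python
# Sketch: record a timestamped, hashed manifest of AI assets so later
# tampering can be detected. Asset paths are placeholders.
import hashlib
import json
import time
from pathlib import Path

def build_manifest(assets):
    return {
        "created": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "assets": {str(p): hashlib.sha256(p.read_bytes()).hexdigest()
                   for p in assets},
    }

assets = [p for p in (Path("model.safetensors"), Path("prompts.txt"))
          if p.exists()]
Path("asset-manifest.json").write_text(
    json.dumps(build_manifest(assets), indent=2))
```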
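The third sketch, for the monitoring step, wraps model inference so that every input/output pair is logged and low-confidence predictions, one possible sign of an evasion attempt or distribution shift, are flagged for review; the confidence threshold is an illustrative assumption:

```python
# Sketch: wrap inference so inputs/outputs are logged and low-confidence
# predictions are flagged. The threshold is an illustrative assumption.
import logging
import torch

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-monitor")

CONFIDENCE_FLOOR = 0.6  # tune per system; illustrative only

def monitored_predict(model, x):
    with torch.no_grad():
        probs = torch.softmax(model(x), dim=-1)
    confidence, predicted = probs.max(dim=-1)
    log.info("input_shape=%s predictions=%s min_confidence=%.3f",
             tuple(x.shape), predicted.tolist(), confidence.min().item())
    if (confidence < CONFIDENCE_FLOOR).any():
        # Possible evasion attempt or distribution shift; escalate.
        log.warning("Low-confidence prediction flagged for review")
    return predicted

# Illustrative usage with a toy classifier.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))
monitored_predict(model, torch.rand(2, 1, 28, 28))
```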
As a leading hub for AI innovation, Singapore is at the forefront of integrating advanced technologies across various sectors. For companies operating in this dynamic environment, ensuring the security of AI systems is crucial to maintaining trust with clients, safeguarding sensitive data, protecting investments, enhancing reputation, and ensuring compliance with industry standards, thereby fostering a safe and thriving AI ecosystem.