Skip to the main content.

3 min read

Securing Artificial Intelligence (SAI)- ETSI TS 104 223 V1.1.1

Securing Artificial Intelligence (SAI)- ETSI TS 104 223 V1.1.1

The recently published ETSI TS 104 223 V1.1.1 document, "Securing Artificial Intelligence (SAI), Baseline Cyber Security Requirements for AI Models and Systems," represents a significant step forward in addressing the unique security challenges of AI. As practitioners, we recognize the importance of this framework and want to provide an in-depth look at its key principles, offering practical insights and analysis.

The ETSI specification is organized into five phases: Secure Design, Secure Development, Secure Deployment, Secure Maintenance, and Secure End of Life. Within each phase, specific security principles are outlined.

Let's explore each of these principles and how we can adopt them into building trustworthy AI systems.

Secure Design

  • Principle 1: Raise awareness of AI security threats and risks. This principle emphasizes the fundamental need for ongoing education and awareness. AI security is not a static field; new threats emerge constantly. Organizations must invest in tailored training programs and ensure that all stakeholders, from developers to end-users, are informed about potential vulnerabilities and mitigation strategies.  
  • Principle 2: Design the AI system for security as well as functionality and performance. Security cannot be an afterthought. This principle advocates for a "security by design" approach, where security considerations are integrated ("baked-in") into the AI system's architecture from the outset. This includes conducting thorough risk assessments, designing systems to withstand adversarial attacks, and establishing audit trails for accountability.  
  • Principle 3: Evaluate the threats and manage the risks to the AI system. AI systems are susceptible to unique threats such as data poisoning and model inversion. This principle emphasizes the need for proactive threat modeling and risk management, with continuous monitoring and updates throughout the AI lifecycle. It also highlights the importance of communicating threats to stakeholders and establishing clear risk tolerance levels.  
  • Principle 4: Enable human responsibility for AI systems.   While AI can automate tasks, human oversight remains crucial. This principle stresses the need to design AI systems that allow for human intervention and assessment, ensuring that humans can understand AI outputs and maintain control. It also calls for making end-users aware of prohibited use cases to prevent misuse of AI systems.  

Secure Development

  • Principle 5: Identify, track and protect the assets. AI systems involve a complex interplay of data, models, and algorithms. This principle underscores the importance of maintaining a comprehensive inventory of these assets, along with robust version control and security measures. It also highlights the need to protect sensitive data and implement proper data sanitization techniques.  
  • Principle 6: Secure the infrastructure. The underlying infrastructure that supports AI systems must be secure. This principle calls for strong access control frameworks, secure APIs, and dedicated development environments. It also emphasizes the need for vulnerability disclosure policies and incident management plans to respond effectively to security breaches.  
  • Principle 7: Secure the supply chain. AI systems often rely on components from various sources, making supply chain security critical. This principle advocates for secure software supply chain processes and careful evaluation of external models or components. It also stresses the importance of documenting decisions and communicating them to end-users.  
  • Principle 8: Document data, models and prompts.  Transparency and accountability are essential for AI security. This principle requires detailed documentation of system design, training data sources, model limitations, and other relevant information. This documentation is crucial for risk assessment, incident response, and ongoing maintenance.  
  • Principle 9: Conduct appropriate testing and evaluation.  Rigorous testing is essential to identify vulnerabilities and ensure the security of AI systems. This principle emphasizes the need for security assessments, independent security testers, and sharing of testing results. It also highlights the importance of evaluating model outputs to prevent reverse engineering and unintended influence.  

Secure Deployment

  • Principle 10: Communication and processes associated with End-users and Affected Entities. Clear communication with end-users is vital for building trust and ensuring responsible AI use. This principle stresses the need to inform users about how their data will be used, provide guidance on system usage, and communicate security updates promptly. It also addresses the importance of supporting users and affected entities during security incidents.  

Secure Maintenance

  • Principle 11: Maintain regular security updates, patches and mitigations. AI systems, like any software, require ongoing maintenance to address vulnerabilities. This principle emphasizes the need for developers to provide security updates and for system operators to deploy them promptly. It also highlights the importance of having mitigation plans and treating major AI system updates with the same rigor as new releases.  
  • Principle 12: Monitor the system's behavior. Continuous monitoring is essential for detecting anomalies and security breaches in AI systems. This principle calls for logging system and user actions and analyzing logs to identify deviations from expected behavior. It also suggests monitoring internal states and overall performance to proactively address security threats.  

Secure End of Life

  • Principle 13: Ensure proper data and model disposal. Securely disposing of data and models is crucial to prevent unauthorized access and potential misuse. This principle emphasizes the need to involve data custodians in disposal processes and to ensure that data and configuration details are securely deleted when a model or system is decommissioned.  

Your Partner in AI Security

Implementing these principles effectively requires a blend of cybersecurity expertise and AI/ML knowledge. Palindrome Technologies is uniquely positioned to help organizations navigate this complex landscape. Our services are designed to provide:

  • Deep Expertise: We possess specialized knowledge in AI security, enabling us to address the unique challenges outlined in the ETSI standard.
  • Comprehensive Assessments: We go beyond traditional security assessments to provide in-depth evaluations of your AI systems.
  • Tailored Solutions: We deliver customized mitigation strategies aligned with your specific needs and risk tolerance.
  • Proactive Approach: We help you build a proactive security posture to anticipate and mitigate emerging AI threats.

By partnering with Palindrome Technologies, you can confidently secure your AI initiatives, ensuring they are robust, reliable, and trustworthy.

 

Learn more about Auditing and Penetration Testing your AI system.

Managing Risk in Artificial Intelligence Systems-A Practitioners Approach 2025

Overview of AI security frameworks and recommendations for practitioners

Read More

Managing AI Risk the Smart Way — Why ISO/IEC 42001 can be a Game Changer

As artificial intelligence (AI) rapidly integrates into nearly every aspect of business, from customer service to data analytics, organizations face...

Read More

Build Secure, Trusted IoT Systems with the IEEE IoT Sensor Devices Cybersecurity Framework

In today's hyperconnected world, every connection introduces new risk and securing your IoT sensor devices is no longer optional, it’s a competitive...

Read More