Global Framework to Secure AI Systems

The European Telecommunications Standards Institute (ETSI) has published the first globally applicable cybersecurity standard for artificial intelligence systems, establishing baseline security requirements across the full AI lifecycle.

ETSI EN 304 223, published on 15 January 2026, provides a structured framework to protect AI models and systems from sophisticated cyber threats including data poisoning, model obfuscation and indirect prompt injection. The standard has been formally approved by a vote of National Standards Organisations, giving it broader international scope and authority across global markets.

The European Standard defines 13 principles and requirements across five phases: secure design, secure development, secure deployment, secure maintenance and secure end of life. Each phase aligns with internationally recognised AI lifecycle models, ensuring consistency and interoperability with existing standards and guidance.

The standard provides a practical baseline for securing AI systems throughout their lifecycle. Organisations implementing AI-driven workflows for compliance processes, records management or business automation can use the standard to ensure security is embedded by design.

ETSI standards provide an internationally recognised framework that aligns with the standards-led approaches adopted by both Australia and New Zealand for AI governance.

Australia participates actively in ETSI through several government bodies, including the Australian Signals Directorate, the Department of Home Affairs and CSIRO, which are ETSI members. Australia frequently adopts or adapts ETSI standards for local use, with ETSI standards forming the backbone of several critical cybersecurity regulations.

The most significant recent development is the Cyber Security Act 2024, which mandates specific security requirements for smart devices. The Cyber Security (Security Standards for Smart Devices) Rules 2025 take effect on 4 March 2026, requiring manufacturers and suppliers of internet-connected consumer devices to meet mandatory security standards based on ETSI EN 303 645.

"ETSI EN 304 223 represents an important step forward in establishing a common, rigorous foundation for securing AI systems," said Scott Cadzow, Chair of ETSI's Technical Committee for Securing Artificial Intelligence.

"At a time when AI is being increasingly integrated into critical services and infrastructure, the availability of clear, practical guidance that reflects both the complexity of these technologies and the realities of deployment cannot be underestimated."

The standard builds on ETSI's earlier Technical Specification TS 104 223 but carries greater weight as a European Standard. It acknowledges that AI represents a distinct cybersecurity challenge compared to traditional software, requiring cyber defences that account for unique AI characteristics and vulnerabilities.

ETSI EN 304 223 covers AI systems incorporating deep neural networks, including generative AI, and is designed for systems intended for real-world deployment. The standard provides stakeholders throughout the AI supply chain - from vendors to integrators and operators - with a clear baseline for AI security.

An upcoming Technical Report, ETSI TR 104 159, will apply the ETSI EN 304 223 principles specifically to generative AI, focusing on deepfakes, misinformation, disinformation, confidentiality risks and copyright concerns.