HomeRisk ManagementsETSI Unveils New Baseline Requirements for AI Security

ETSI Unveils New Baseline Requirements for AI Security

Published on

spot_img

European Standards Organization Sets New Global Benchmark for Securing AI

The European Telecommunications Standards Institute (ETSI) has launched a groundbreaking set of technical specifications aimed at establishing an international benchmark for securing artificial intelligence (AI) models and systems. This new specification, titled ETSI TS 104 223 and subtitled Securing Artificial Intelligence (SAI); Baseline Cyber Security Requirements for AI Models and Systems, is poised to make a significant impact across various industries that rely on AI technology.

At its core, this specification outlines a comprehensive framework consisting of 13 fundamental principles that expand into a total of 72 trackable principles. These guidelines span five critical lifecycle phases: secure design, development, deployment, maintenance, and end-of-life. Such a structure enables stakeholders to ensure that security is factored into every stage of an AI system’s life, providing a holistic view of risk management throughout the system’s operational timeline.

The implications of this new specification extend to all parties involved in the AI supply chain. Whether they are developers, vendors, integrators, or operators, each group stands to benefit from a unified approach to securing AI systems. ETSI claims that adopting these standards will enhance trust and reliability in AI technologies, fostering a more secure environment for AI innovation.

ETSI’s initiative comes in the wake of increasing concerns surrounding the security of AI models. The organization has addressed not only tried-and-true security best practices but also innovative techniques designed to tackle novel challenges associated with AI systems. These include threats like data poisoning, model obfuscation, indirect prompt injection, and vulnerabilities related to complex data management. Such a comprehensive approach allows organizations to prepare for the multifaceted risks presented by AI technologies.

Scott Cadzow, the chair of ETSI’s Technical Committee for Securing Artificial Intelligence, has emphasized the necessity of this specification in today’s rapidly evolving cyber threat landscape. He characterized it as a “global first” aimed at establishing a baseline that protects AI from malicious attacks and unwanted inferences. Cadzow remarked, “In an era where cyber-threats are growing in both volume and sophistication, it is vital that the design, development, deployment, and operation and maintenance of AI models are protected.”

His assertion underlines the importance of making security a core requirement not just during the development phase but throughout the entire lifecycle of an AI system. Cadzow expressed that this specification will serve as a valuable resource not only in Europe but also globally.

Controversy Surrounding Attribution

Interestingly, while ETSI has claimed that this specification was jointly developed by its Technical Committee on Securing Artificial Intelligence, which includes representatives from international organizations, government bodies, and cybersecurity experts, there remains ambiguity regarding the role of the UK in this initiative.

At first glance, ETSI TS 104 223 appears to be indistinguishable from the UK government’s AI Code of Practice, published earlier in February. Both documents present identical sets of 13 principles as well as the five lifecycle phases, which raises questions about originality and ownership.

Notably, the UK government articulated at the time of its code’s publication that it aimed to form the basis of a global ETSI standard, creating expectations around the interplay between the two sets of guidelines.

In light of this, an ETSI spokesperson subsequently provided a statement addressing the collaboration with UK authorities, asserting that the technical specification would not have been achievable without input from the UK’s Department for Science, Innovation and Technology (DSIT) and the National Cyber Security Centre. The spokesperson highlighted the foundational role that the UK’s precursor guidance played in shaping the ETSI technical specification and expressed gratitude for the partnership in developing this standard.

As the discourse surrounding AI security standards continues, the establishment of ETSI TS 104 223 is a step towards a more secure and resilient AI landscape. This initiative reflects not only the urgency of addressing cybersecurity risks associated with AI but also the necessity of collaboration across borders and sectors to develop comprehensive strategies that protect against evolving threats.

With the rapid advancement of AI technologies, the establishment of these standards will undoubtedly play a crucial role in shaping how stakeholders approach security, making it an essential topic for ongoing discussion and action within the global community.

Source link

Latest articles

Mature But Vulnerable: Pharmaceutical Sector’s Cyber Reality

In a digital world where every click can open a door for attackers,...

The Hidden Lag Killing Your SIEM Efficiency

 If your security tools feel slower than they should, you’re not imagining it....

AI-fueled cybercrime may outpace traditional defenses, Check Point warns

 As AI reshapes industries, it has also erased the lines between truth and...

When Your “Security” Plugin is the Hacker

Source: The Hacker NewsImagine installing a plugin that promises to protect your WordPress...

More like this

Mature But Vulnerable: Pharmaceutical Sector’s Cyber Reality

In a digital world where every click can open a door for attackers,...

The Hidden Lag Killing Your SIEM Efficiency

 If your security tools feel slower than they should, you’re not imagining it....

AI-fueled cybercrime may outpace traditional defenses, Check Point warns

 As AI reshapes industries, it has also erased the lines between truth and...