
AI and cyber security: what you need to know



The Guidelines for Secure AI System Development, published by the NCSC and developed with the US’s Cybersecurity and Infrastructure Security Agency (CISA) and agencies from 17 other countries, advise on the design, development, deployment and operation of AI systems. The guidelines help organisations deliver secure outcomes, rather than providing a static list of steps for developers to apply. By thinking about the overall security of systems containing AI components, stakeholders at all levels of an organisation can prepare to respond to system failure, and appropriately limit the impact on users and systems that rely on them.

Crucially, keeping AI systems secure is as much about organisational culture, process, and communication as it is about technical measures. Security should be integrated into all AI projects and workflows in your organisation from inception. This is known as a ‘secure by design’ approach, and it requires strong leadership that ensures security is a business priority, and not just a technical consideration.

Leaders need to understand the consequences for the organisation if the integrity, availability or confidentiality of an AI system were to be compromised. There may be operational and reputational consequences, and your organisation should have an appropriate response plan in place. As a manager, you should also be particularly aware of AI-specific concerns around data security. You should understand whether your organisation is legally compliant and adhering to established best practice when handling data related to these systems.

It’s also important to note that the burden of using AI safely should not fall on individual users of AI products; customers typically won’t have the expertise to fully understand or address AI-related risks. Instead, developers of AI models and systems should take responsibility for the security outcomes of their customers.

In addition to the AI Guidelines, the NCSC’s Principles for the security of machine learning (published in 2022) provide context and structure to help organisations make educated decisions about where and how it is appropriate to use ML, and the risks this may entail. Some of the principles are particularly relevant to those in senior decision-making and executive or board-level roles; these are highlighted in the quick-reference table on the front page of the principles.

Reference: https://www.ncsc.gov.uk/guidance/ai-and-cyber-security-what-you-need-to-know#section_2 

AH 


