
AI and cyber security: what you need to know

The Guidelines for Secure AI System Development, published by the NCSC and developed with the US’s Cybersecurity and Infrastructure Security Agency (CISA) and agencies from 17 other countries, advise on the design, development, deployment and operation of AI systems. The guidelines help organisations deliver secure outcomes, rather than providing a static list of steps for developers to apply. By thinking about the overall security of systems containing AI components, stakeholders at all levels of an organisation can prepare to respond to system failure, and appropriately limit the impact on users and systems that rely on them.

Crucially, keeping AI systems secure is as much about organisational culture, process, and communication as it is about technical measures. Security should be integrated into all AI projects and workflows in your organisation from inception. This is known as a ‘secure by design’ approach, and it requires strong leadership that ensures security is a business priority, and not just a technical consideration.

Leaders need to understand the consequences for the organisation if the integrity, availability or confidentiality of an AI system were to be compromised. There may be operational and reputational consequences, and your organisation should have an appropriate response plan in place. As a manager, you should also be particularly aware of AI-specific concerns around data security. You should understand whether your organisation is legally compliant and adhering to established best practice when handling data related to these systems.

Importantly, the burden of using AI safely should not fall on the individual users of AI products; customers typically won’t have the expertise to fully understand or address AI-related risks. In other words, developers of AI models and systems should take responsibility for the security outcomes of their customers.

In addition to the AI Guidelines, the NCSC’s Principles for the security of machine learning (published in 2022) provide context and structure to help organisations make educated decisions about where and how it is appropriate to use ML, and the risks this may entail. Some of the principles are particularly relevant to those in senior decision-making, executive or board-level roles. These are highlighted in the quick reference table on the front page of the principles.

Reference: https://www.ncsc.gov.uk/guidance/ai-and-cyber-security-what-you-need-to-know#section_2 
