CyberSecurity SEE

Apple refuses to support suggested changes to UK’s Investigatory Powers Act. Colombia to establish a nationwide cybersecurity organization. Nominee for DIRNSA gives testimony. Principles for AI in the US.

The White House has announced that seven major tech companies have signed on to the government’s new policy on artificial intelligence (AI). The policy aims to preserve safety, security, and trust in the AI systems developed by these companies. The companies, which are significant players in the field of AI research and development, have agreed to work under certain principles outlined by the White House.

One of the key principles outlined in the policy is the commitment to ensuring the safety of AI products before they are released to the public. The companies have agreed to conduct internal and external security testing of their AI systems, with the help of independent experts. This testing is intended to guard against significant risks, such as those to biosecurity and cybersecurity, as well as broader societal harms.

Another important principle is building AI systems that prioritize security. The companies have committed to investing in cybersecurity and insider threat safeguards to protect proprietary and unreleased model weights. These model weights are crucial components of AI systems, and it is essential that they be released only when intended and only after security risks have been considered.

The companies also commit to facilitating third-party discovery and reporting of vulnerabilities in their AI systems, ensuring that issues which persist even after the systems are released can be quickly identified and fixed.

Earning the public’s trust is another key aspect of the policy. The companies commit to developing technical mechanisms to ensure that users are aware when content is AI-generated. This helps prevent fraud and deception. The companies also commit to publicly reporting the capabilities, limitations, and appropriate and inappropriate uses of their AI systems. This includes reporting on security risks and societal risks, such as fairness and bias.

The companies also commit to prioritizing research on the societal risks posed by AI systems, such as harmful bias and discrimination, and protecting privacy. The goal is to mitigate these risks and ensure that AI is used in a way that contributes to the prosperity, equality, and security of all.

The White House has emphasized that this is an international effort, with active consultation with partners in countries such as Australia, Brazil, Canada, France, Germany, Japan, and the UK.

Experts in the field have highlighted the need for regulation and ethical considerations when it comes to AI. Mike Britton, CISO of Abnormal Security, pointed out the importance of transparency, ethics, and human intervention in AI systems. He also acknowledged the risk of AI being used maliciously but expressed confidence in the big players in the industry to address security concerns.

Rob Vamosi, Senior Security Analyst at ForAllSecure, stressed the urgency of addressing fundamental concerns around AI. He highlighted the risk of data leaks and the need for more guidance on safeguarding AI systems. He also noted the potential for AI to be used both offensively and defensively, and the importance of watermarks for verifying the integrity of AI-generated results.

James Campbell, CEO of Cado Security, discussed the complex issues surrounding voluntary standards for AI. He mentioned the potential privacy issues and the need for companies to conduct risk assessments before releasing AI-enabled technologies. He also highlighted the impact of cybersecurity controls on the development process.

Overall, the White House’s new policy on AI aims to address safety, security, and trust concerns associated with AI systems. The involvement of major tech companies in this initiative shows a commitment to responsible development and deployment of AI. By working together with governments, civil society, and academia, these companies aim to mitigate the risks and maximize the benefits of AI technology.
