
AI Governance: A Global Challenge


The United Nations (UN) has recently established an advisory board focused on the regulation of artificial intelligence (AI). The board comprises thirty-nine members from six continents, including government officials, experts from a range of academic and professional backgrounds, and executives from technology companies such as Sony, OpenAI, and Microsoft. UN Secretary-General António Guterres emphasized the importance of AI governance, noting that while AI holds great potential for good, it can also be used maliciously and pose risks to institutions, social cohesion, and democracy itself.

The primary objectives of the advisory board include establishing a global scientific consensus on the risks and challenges presented by AI and enhancing international cooperation in governing AI-supported technology. The board held its first meeting last week, and it is expected to issue preliminary recommendations for global collaboration in addressing the risks associated with AI by the end of 2023, with final recommendations scheduled for summer 2024.

Meanwhile, in the UK, Prime Minister Rishi Sunak is hosting an AI Safety Summit with approximately one hundred world leaders, tech executives, and academic experts. The summit aims to discuss how to harness the benefits of AI while minimizing risks such as bioterrorism, cyberattacks, and deepfakes. Prime Minister Sunak's goal is to position the UK as a global leader in AI safety. The summit has drawn support from many in the private sector who see AI safety as essential to a secure and competitive Britain. However, some critics argue that the summit pays too little attention to more immediate risks such as job loss and energy strain.

In the United States, President Joe Biden has signed an executive order focused on the secure use of AI technology. The order aims to make the US a leader in managing the risks and benefits of AI by establishing new standards for AI safety and security, protecting privacy and civil rights, promoting innovation and competition, and more. It provides guidelines on privacy, civil rights, consumer protections, scientific research, and worker rights, and calls for the creation of new government offices and task forces focused on applying AI in areas such as healthcare, housing, trade, and education. Some in the tech industry are concerned that the order could hinder innovation and that the government oversight it imposes is overreaching.

In the realm of cybersecurity, the International Counter Ransomware Initiative (CRI) has committed to refusing ransom demands made by ransomware gangs in cyberattacks. The CRI, which consists of forty-eight countries, the European Union, and Interpol, aims to counter the illicit finance that sustains the ransomware ecosystem. The pledge is intended to deter future attacks by cutting off the financial incentive behind them, particularly since paying a ransom offers no guarantee that stolen data will be returned. While not all members have agreed to the pledge, efforts are underway to secure commitments from all participants. The CRI meeting also examined the use of AI and blockchain analysis to combat ransomware, and plans were made to share a list of blacklisted cryptocurrency wallets associated with ransomware operations.

Furthermore, the US Federal Trade Commission (FTC) has amended the breach reporting requirements of the Safeguards Rule. Under the amended rule, non-banking financial institutions, including mortgage brokers, motor vehicle dealers, and payday lenders, must report data breaches affecting more than five hundred individuals, along with other security events, to the agency. The change is intended to strengthen cybersecurity and protect consumers' sensitive information.

Overall, the establishment of the UN advisory board, the UK's AI Safety Summit, the US executive order on AI, the commitments made by the International Counter Ransomware Initiative, and the amendments to the Safeguards Rule all highlight the growing global focus on the risks and challenges posed by AI and cybersecurity threats. Together, these initiatives reflect a recognition that governing AI technologies securely and responsibly will require international collaboration, regulation, and oversight.


