US President Joe Biden signed an executive order on Monday focused on the safe and secure use of AI technology, signaling a more aggressive approach to AI regulation. The order aims to position the US as a leader in managing both the risks and the promise of artificial intelligence. It establishes new standards for AI safety and security, protects privacy, advances equity and civil rights, promotes innovation and competition, and strengthens American leadership globally.
The United Nations has also recognized the need for AI governance, establishing an advisory body to address similar challenges at the international level. UN Secretary-General António Guterres emphasized the transformative potential of AI for good while acknowledging the risks posed by malicious use of the technology.
In the UK, Prime Minister Rishi Sunak hosted an AI Safety Summit, bringing together government leaders, tech executives, and scholars. The summit aimed to make the UK a global leader in AI safety and resulted in the Bletchley Declaration, which outlined directions for further work on AI safety risks and risk-based policies.
The executive order offers guidelines on privacy, civil rights, consumer protections, scientific research, and worker rights. It also prioritizes immigration of highly skilled individuals with expertise in critical areas, along with the creation of new government offices and task forces focused on harnessing AI's potential in healthcare, housing, trade, and education. Federal agencies are directed to set standards for data privacy, cybersecurity, and anti-discrimination in the AI industry.
The order also invokes the emergency federal powers of the Defense Production Act, requiring major AI companies to notify the government when developing systems that pose a serious risk to national security, economic security, or public health and safety. It further calls on developers of advanced AI products to submit safety test results to ensure the technology cannot be used to manufacture biological or nuclear weapons.
The Department of Commerce has already taken steps to establish the U.S. Artificial Intelligence Safety Institute (USAISI), which will lead the government’s efforts on AI safety and trust, particularly in evaluating advanced AI models.
The order received a positive response from tech interest groups, cybersecurity experts, and Democratic lawmakers. However, some members of the tech industry are concerned about government oversight and potential limitations on innovation.
Experts have highlighted the need for skilled professionals to enforce the mandates outlined in the order and questioned whether the government has the technical knowledge and manpower to keep pace with the rapid proliferation of AI programs.
The swift adoption of AI technology poses challenges, and the order aims to get ahead of the disruptive nature of AI by establishing new standards, preventing cyber vulnerabilities, and prioritizing privacy and the public good.
Moving forward, AI governance will require collaboration among government, business, and academic institutions. The executive order is just the beginning; ongoing effort will be needed to address the risks and potential of AI technology effectively.
