Researchers from OpenAI, Cambridge University, Harvard University, and the University of Toronto have offered insights into how to regulate AI chips and hardware, and discussed how security policies could prevent the abuse of advanced AI systems. Their recommendations aim to establish guidelines for measuring and auditing the development and use of sophisticated AI systems and the chips that power them. The proposed enforcement measures include limiting the performance of these systems and introducing security features that can remotely disable rogue chips.
According to the researchers, training highly capable AI systems currently requires accumulating and orchestrating thousands of AI chips. If such systems are potentially hazardous, limiting the accumulation of computing power could help restrict the production of dangerous AI systems. This stands in contrast to governments' AI policy focus on software; the paper serves as a companion piece addressing the hardware side of the discussion, as noted by Nathan Brookwood, principal analyst at Insight 64.
However, attempts to make AI safe through hardware could face resistance from the industry. Brookwood warned that the industry may not be receptive to security features that reduce AI performance. While acknowledging that ensuring AI safety through hardware is a noble aspiration, he expressed doubts about the feasibility of implementing such measures, given the prevalence and rapid advancement of AI technology.
One of the researchers' proposals is to impose a cap on the computing capacity available to AI models. This would enable authorities to identify abuses of AI systems and subsequently cut off or restrict the use of the chips involved. Specifically, the researchers recommend a targeted approach: reducing the bandwidth between memory and chip clusters. The paper acknowledged that determining the optimal bandwidth limit for external communication requires further research.
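To make the cap idea concrete, here is a minimal illustrative sketch (not from the paper) of how a training-side guard might track cumulative compute against a hypothetical regulatory threshold; the class name, cap value, and per-step FLOP figures are all assumptions for illustration.

```python
# Illustrative sketch: track cumulative compute spent during training
# against a hypothetical regulatory cap, halting when it is exceeded.

class ComputeBudget:
    """Accumulates compute spent and raises when a cap is exceeded."""

    def __init__(self, cap_flops: float):
        self.cap_flops = cap_flops   # hypothetical cap, in FLOPs
        self.spent_flops = 0.0

    def record(self, step_flops: float) -> None:
        """Record the compute used by one training step."""
        self.spent_flops += step_flops
        if self.spent_flops > self.cap_flops:
            raise RuntimeError(
                f"compute cap exceeded: {self.spent_flops:.3e} FLOPs "
                f"> cap {self.cap_flops:.3e}"
            )


budget = ComputeBudget(cap_flops=1e9)     # placeholder cap
for _ in range(3):
    budget.record(2.5e8)                  # e.g. FLOPs for one step
print(f"spent so far: {budget.spent_flops:.1e}")  # → spent so far: 7.5e+08
```

A real enforcement mechanism would of course live in hardware or firmware rather than in cooperative software, which is precisely why the paper focuses on on-chip features.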
The paper also discussed disabling chips remotely, a capability Intel has already incorporated into its latest server chips. That feature, known as On Demand, lets Intel customers activate on-chip features such as AI extensions through a subscription service, much like activating certain features in a Tesla car. Additionally, the researchers proposed an attestation scheme that would allow only authorized parties to access AI systems via cryptographically signed digital certificates, a concept akin to confidential computing, which secures applications on chips.
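The attestation idea can be sketched as follows. This is an illustrative stand-in only: the paper envisions signed digital certificates, whereas this sketch uses an HMAC over a shared secret (Python's standard library has no asymmetric signatures); the key, chip ID, and workload names are all hypothetical.

```python
# Illustrative sketch of attestation: a regulator issues a signed claim
# that a given chip may run a given workload, and the chip verifies the
# signature before running it. HMAC stands in for a real certificate
# signature (e.g. Ed25519) purely to keep the sketch self-contained.
import hashlib
import hmac
import json

REGULATOR_KEY = b"hypothetical-shared-secret"  # placeholder key material


def issue_attestation(chip_id: str, workload: str) -> dict:
    """Regulator signs a claim authorizing this chip for this workload."""
    claim = json.dumps({"chip_id": chip_id, "workload": workload},
                       sort_keys=True)
    tag = hmac.new(REGULATOR_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return {"claim": claim, "tag": tag}


def chip_allows(cert: dict) -> bool:
    """On-chip check: permit the workload only if the tag verifies."""
    expected = hmac.new(REGULATOR_KEY, cert["claim"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, cert["tag"])


cert = issue_attestation("chip-42", "approved-training-run")
print(chip_allows(cert))   # True: valid certificate
cert["tag"] = "00" * 32
print(chip_allows(cert))   # False: tampered certificate is rejected
```

In a deployed scheme the verification key would be burned into the chip and the private signing key held by the certifying authority, so the chip can check certificates without being able to forge them.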
However, the researchers also observed that remotely enforcing policies carries risks. According to their findings, remote enforcement mechanisms could come with significant downsides and may be warranted only if the expected harm from AI is exceptionally high. Brookwood echoed these concerns, noting that artificial constraints could prove ineffective against bad actors seeking ways to bypass such security measures.
It is clear from the recommendations put forth by these esteemed research institutions that the regulation of AI chips and hardware, as well as the enforcement of security policies, is an area of paramount importance as AI continues to evolve and permeate various facets of modern life. These insights provide a foundation for ongoing discussions and initiatives aimed at ensuring the responsible and secure development and use of advanced AI systems.
