Deploying artificial intelligence (AI) at the edge brings substantial benefits, but it also introduces new security vulnerabilities. If exploited by adversaries, these can lead to serious consequences: models intercepted in transit, inputs manipulated to degrade performance, or AI systems reverse-engineered and turned against their creators.
In a recent interview with Help Net Security, Jags Kandasamy, CEO at Latent AI, shed light on the technical and strategic measures required to protect AI models, the delicate balance between security and performance in constrained environments, and the lessons professionals can glean as they navigate the deployment of AI in high-risk sectors.
Deploying AI at the edge, especially in military and critical infrastructure environments, exposes a network to various security challenges. Even devices assumed to be disconnected could intermittently connect to transmit data, expanding the security footprint. Ensuring the security and trustworthiness of every component deployed at the edge, including the edge device itself, is crucial to safeguarding against potential threats.
With edge AI, moving the trained model, runtime engine, and application from a central location to the edge opens up the possibility of a person-in-the-middle attack. Once the model and application reside on edge devices, the risk of theft, reverse engineering, and misuse by adversaries escalates. This risk is amplified in both commercial and military settings, where cyber exploits could lead to dire consequences.
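One common defense against tampering in transit is to verify a model artifact's integrity before loading it on the device. The sketch below is illustrative, not a description of any specific vendor's mechanism: it assumes a pre-shared key provisioned on the device and an HMAC-SHA256 tag distributed out of band, and all names (`verify_model_artifact`, the key and model bytes) are hypothetical. Production systems would more likely use asymmetric signatures (e.g., Ed25519) so the device never holds a signing secret.

```python
import hashlib
import hmac

def verify_model_artifact(model_bytes: bytes, expected_tag: str, key: bytes) -> bool:
    """Check an HMAC-SHA256 tag computed over the model file.

    The expected tag travels out of band (e.g., baked into the device
    image at provisioning), so an artifact swapped in transit fails
    verification and is never loaded.
    """
    tag = hmac.new(key, model_bytes, hashlib.sha256).hexdigest()
    # compare_digest avoids timing side channels on the comparison
    return hmac.compare_digest(tag, expected_tag)

# Hypothetical deployment flow: refuse to load an unverified model.
key = b"pre-shared-device-key"          # provisioned at manufacture (illustrative)
model = b"\x00fake-model-weights\x01"   # stands in for the real artifact
good_tag = hmac.new(key, model, hashlib.sha256).hexdigest()

assert verify_model_artifact(model, good_tag, key)             # untampered: accepted
assert not verify_model_artifact(model + b"x", good_tag, key)  # tampered: rejected
```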
Traditional AI systems, reliant on cloud-based service architectures, face limitations in terms of availability and latency. Edge computing offers a solution by deploying models closer to the data source, reducing bandwidth consumption, minimizing latency, and optimizing resource utilization. This localized approach not only enhances security but also improves performance in critical applications.
Despite the risks involved, there are compelling reasons to deploy AI at the edge. In military operations, where real-time decision-making is crucial, edge computing enables immediate responses to vast amounts of data generated by sensors. Similarly, in commercial settings with unreliable connectivity, edge AI minimizes the need for constant communication with the cloud, thereby enhancing security and efficiency.
Implementing security measures without compromising performance is a delicate balance on edge devices with limited computational resources. Techniques such as watermarking, encryption, and version control can enhance the security of AI models while minimizing the impact on performance. These built-in protections help preserve the integrity and ownership of the model, even in the face of potential cyber threats.
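Of those techniques, version control and integrity checking are the cheapest to run on a constrained device. The sketch below, a minimal illustration rather than any product's actual mechanism, pairs each model with a manifest recording its version and a SHA-256 digest, so a stale or altered artifact is rejected at load time; every name here is hypothetical, and full encryption or watermarking would require additional machinery (e.g., a cryptography library for AES) that is omitted.

```python
import hashlib
import json

def make_manifest(model_bytes: bytes, version: str) -> str:
    """Record the model's version alongside a SHA-256 digest of its weights."""
    return json.dumps({
        "version": version,
        "sha256": hashlib.sha256(model_bytes).hexdigest(),
    })

def check_manifest(model_bytes: bytes, manifest_json: str, min_version: str) -> bool:
    """Accept the model only if it matches its manifest and is not stale."""
    manifest = json.loads(manifest_json)
    digest_ok = hashlib.sha256(model_bytes).hexdigest() == manifest["sha256"]
    # String comparison works for these fixed-width examples;
    # real systems would parse a semantic version properly.
    version_ok = manifest["version"] >= min_version
    return digest_ok and version_ok

model = b"quantized-weights-v2"        # stands in for the real artifact
manifest = make_manifest(model, "2.1.0")

assert check_manifest(model, manifest, "2.0.0")             # current, intact: accepted
assert not check_manifest(model, manifest, "3.0.0")         # stale version: rejected
assert not check_manifest(model + b"!", manifest, "2.0.0")  # tampered: rejected
```

Rejecting stale versions matters as much as rejecting tampered bytes: a rollback to an older, already-compromised model is itself an attack vector.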
In critical infrastructure environments, cybersecurity strategies must be seamlessly integrated into the core technology to minimize computational overhead and inefficiencies. By embedding security measures directly into the model’s architecture, the performance impact can be mitigated while ensuring robust protection against cyberattacks.
For militaries relying on AI systems in operational environments, a combination of system controls and human oversight policies is essential to maintain the trustworthiness of AI systems. Regular human reviews of outputs and vigilant policies are crucial in ensuring the accuracy and reliability of AI decisions, especially in high-stakes scenarios.
Professionals deploying or managing AI systems in these environments should approach each edge device as an isolated island connected to a network of bridges. Robust security measures at both the individual device and network levels are necessary to protect sensitive data and prevent breaches that could compromise the entire network’s security. The interconnected nature of the edge continuum underscores the importance of secure communication channels and defensive strategies to safeguard against potential threats.
