CyberSecurity SEE

The Risks of Generative AI: The Sky Isn’t Falling


Generative artificial intelligence (GenAI) and large language models (LLMs) are becoming increasingly prominent in the business world, raising questions about how they will shape the future of human-computer interactions. Amidst the ongoing debate surrounding the social and cultural impact of AI, a recent report from the Israeli venture firm Team8 titled “Generative AI and ChatGPT Enterprise Risk” sheds light on the technical, compliance, and legal risks that GenAI and LLMs pose to corporations and cybersecurity personnel.

The report acknowledges the potential operational and regulatory vulnerabilities of GenAI, but it also dismisses certain concerns as premature. For example, the report rejects the worry that private data submitted to a GenAI application such as ChatGPT could be exposed to others in real time, noting that current LLMs cannot update themselves in real time and therefore cannot surface one user's inputs in another user's response. However, the report does caution that this may not hold true for future iterations of these models.

The Team8 report identifies several high-risk areas, including the privacy, confidentiality, and security of nonpublic enterprise and personal data, AI behavioral vulnerabilities, and legal and regulatory compliance. Medium-risk areas outlined in the report include threats such as phishing, fraud, and social engineering, copyright and ownership vulnerabilities, insecure code generation, bias and discrimination, and trust and corporate reputation.

Gadi Evron, CISO-in-residence at Team8 and one of the report’s authors, highlights that the CISO (Chief Information Security Officer) plays a crucial role in addressing these risks. He suggests that upcoming European Union regulations may require the CISO to take responsibility for AI-related processes, elevating their position to that of an “Ambassador of Trust.”

Chris Hetner, cybersecurity advisor at Nasdaq, emphasizes the importance of conducting an initial risk assessment to identify potential issues related to access, data integration with existing applications, and possible exposure of proprietary information. Once these decisions are made, companies can move forward while mitigating potential risks.

While the threats posed by GenAI are not entirely new, they do accelerate the speed at which private data can reach a wider audience. Richard Bird, Chief Security Officer at Traceable AI, points out that many companies were already struggling to protect their data even before the rise of generative AI technologies. He warns that employees’ increasing use of AI with minimal security controls accentuates the risk to companies and highlights human behavior as a more significant threat than AI itself.

In analyzing the interaction between users and GenAI, it is crucial to consider their existing habits and experiences. Andrew Obadiaru, CISO for Cobalt Labs, notes that iPhone users who already have experience with AI through Siri will likely adapt more quickly to GenAI. However, this familiarity may also make them more susceptible to misusing applications and inputting data that should remain within an organization’s direct control. Personal device usage outside of IT department oversight or treating GenAI like a personal digital assistant could potentially compromise confidential data.
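One practical mitigation for the data-leakage scenario Obadiaru describes is screening text before it leaves the organization's control. The sketch below is a hypothetical illustration, not part of the Team8 report or any vendor's product: the pattern names and regexes are assumptions chosen for the example, and a real deployment would use a proper data-loss-prevention policy.

```python
import re

# Hypothetical pre-submission filter: flags text an employee is about to
# paste into a GenAI tool. The categories and patterns below are example
# assumptions, not an exhaustive DLP policy.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "api_key": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b"),
    "internal_marker": re.compile(r"\b(?:CONFIDENTIAL|INTERNAL ONLY)\b",
                                  re.IGNORECASE),
}

def flag_sensitive(prompt: str) -> list[str]:
    """Return the categories of sensitive content detected in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

def safe_to_submit(prompt: str) -> bool:
    """A prompt is cleared for submission only if nothing matched."""
    return not flag_sensitive(prompt)
```

Even a crude screen like this addresses the human-behavior risk Bird highlights: it intervenes at the moment an employee treats GenAI like a personal assistant, rather than relying on after-the-fact audits.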

Sagar Samtani, an assistant professor at the Indiana University Kelley School of Business, raises concerns about the vulnerabilities present in AI models that are shared via open source software. CISOs must be aware of these vulnerabilities and update their software development workflows accordingly. Samtani also highlights the importance of asset management and automated tools for detecting and categorizing data and assets to effectively manage corporate networks. LLMs, in particular, can aid in creating inventory and priority lists, developing vulnerability management strategies, and formulating incident response plans.
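The inventory-and-priority list Samtani describes can be as simple as scoring each asset by exposure and criticality, then sorting. The sketch below is a minimal illustration under assumed field names and weights; it is not a standard methodology, and a real program would fold in vulnerability-scanner output and business context.

```python
from dataclasses import dataclass

# Hypothetical asset record for an inventory-and-priority list.
# The fields and scoring weights are assumptions for this sketch.
@dataclass
class Asset:
    name: str
    internet_facing: bool       # reachable from outside the network?
    holds_nonpublic_data: bool  # stores confidential enterprise data?
    known_vulns: int            # count of unpatched known vulnerabilities

def priority_score(asset: Asset) -> int:
    """Simple additive score: higher means review or patch first."""
    score = asset.known_vulns
    if asset.internet_facing:
        score += 5
    if asset.holds_nonpublic_data:
        score += 3
    return score

def triage(assets: list[Asset]) -> list[Asset]:
    """Order the inventory so the riskiest assets come first."""
    return sorted(assets, key=priority_score, reverse=True)
```

A list produced this way gives a vulnerability-management program a defensible starting order, which is also the kind of structured input an LLM can help draft or refine.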

In conclusion, while GenAI and LLMs offer immense potential for enterprises, they also come with risks that must be understood and addressed. Corporations need to prioritize data privacy, compliance, and security, while also considering the impact of human behavior on these technologies. By conducting thorough risk assessments, implementing proper security measures, and leveraging the capabilities of AI, organizations can navigate the evolving landscape and ensure responsible and effective use of these disruptive technologies.
