CISOs face significant challenges when it comes to keeping up with the rapid advancements in security capabilities, and the introduction of generative AI only adds to the complexity. As Jason Revill, head of Avanade’s Global Cybersecurity Center of Excellence, points out, CISOs are often a few steps behind the curve due to skill shortages, regulatory demands, and the exponential growth of security concerns. To stay ahead in the realm of generative AI, businesses may need to enlist external expertise early on instead of simply relying on internal resources.
One crucial aspect of implementing generative AI in a secure manner is data control. Businesses must establish internal policies that govern what type of information can be used with generative AI tools. The risks of sharing sensitive business data with self-learning AI systems are well-documented. To mitigate these risks, appropriate guidelines and controls should be put in place to determine what data generative AI systems may use, and how.
Data encryption methods, anonymization techniques, and other data security measures should be implemented to prevent unauthorized access, usage, or transfer of data. Brian Sathianathan, co-founder and CTO of Iterate.ai, emphasizes the importance of strong data security measures to protect the significant quantities of data that AI systems typically handle.
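One common anonymization control is to redact personally identifiable information from text before it ever reaches an external generative AI service. The following is a minimal sketch of that idea; the regex patterns and placeholder labels are illustrative assumptions, and a production deployment would rely on a vetted data-classification tool rather than hand-written patterns alone.

```python
import re

# Hypothetical patterns for a few common PII types; real policies
# would cover far more categories and use dedicated tooling.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\b\d{3}[\s.-]\d{3}[\s.-]\d{4}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace detected PII with typed placeholders so the original
    values never leave the organization's boundary."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

prompt = "Contact Jane at jane.doe@example.com or 555-123-4567."
safe_prompt = redact(prompt)
```

Only `safe_prompt`, with placeholders in place of the address and number, would then be forwarded to the generative AI tool.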
In addition to data control, security policies should also address the content produced by generative AI. Large language models (LLMs) used by generative AI chatbots, such as ChatGPT, can sometimes produce inaccurate information. This becomes a significant risk when organizations rely on this output for critical decision-making. Therefore, security policies must include clear processes for manually reviewing the accuracy of generated content, ensuring its reliability and preventing adverse consequences.
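A manual-review requirement like this can be enforced in code as a human-in-the-loop gate. The sketch below, with hypothetical names throughout, shows one way to queue generated output and block publication until a reviewer signs off; it is an illustration of the policy pattern, not a prescribed implementation.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class GeneratedContent:
    text: str
    approved: bool = False  # flipped only by a human reviewer

@dataclass
class ReviewQueue:
    """Holds AI-generated output until a human reviewer signs off."""
    pending: List[GeneratedContent] = field(default_factory=list)

    def submit(self, text: str) -> GeneratedContent:
        item = GeneratedContent(text)
        self.pending.append(item)
        return item

    def approve(self, item: GeneratedContent) -> None:
        item.approved = True
        self.pending.remove(item)

def publish(item: GeneratedContent) -> str:
    # Policy gate: nothing reaches downstream systems without review.
    if not item.approved:
        raise PermissionError("Generated content requires human review")
    return item.text
```

In use, any attempt to publish a draft straight from the model raises an error; only after `approve` is called does `publish` release the text.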
It is also essential to consider the potential for generative AI-enhanced attacks and how businesses should respond to them. These attacks can make fake content indistinguishable from reality, posing a threat to organizations. Traditional social engineering controls such as detecting spelling mistakes or malicious links in emails may be rendered ineffective. Therefore, security policies need to be updated to address the enhanced social engineering threats that generative AI introduces.
Effective communication and training are crucial for the success of any security policy, particularly regarding generative AI. Stakeholders must be well-informed about the policy, and CISOs must present it from a business perspective. Additionally, employees should receive training on how to use generative AI responsibly and understand the associated risks.
Supply chain and third-party management are also important considerations when implementing generative AI security policies. Organizations must assess the generative AI usage, risk levels, and policies of external parties that they collaborate with. Due diligence should be performed on all third-party suppliers, including cloud service providers, to ensure that their generative AI usage aligns with the organization’s security requirements.
While implementing security policies for generative AI may seem daunting, it is worth making them as engaging and interactive as possible. By showcasing the benefits of generative AI and its potential to boost productivity and make employees' lives easier, businesses can encourage their workforce to use the technology responsibly. This approach not only protects the business as a whole but also fosters a culture of both innovation and security.
In conclusion, CISOs must proactively tackle the challenges posed by generative AI by establishing comprehensive security policies. These policies should address data control, content accuracy, generative AI-enhanced attacks, communication and training, supply chain management, and employee engagement. By doing so, businesses can harness the power of generative AI while maintaining a strong security posture.

