A recent study by ISACA has revealed a troubling lack of preparedness among IT and cybersecurity professionals when it comes to managing artificial intelligence (AI) systems during cyber-attacks or security incidents. The research indicates that 56% of surveyed professionals are uncertain about how quickly they could deactivate AI systems compromised by security threats. The finding comes from a survey of more than 3,400 security and digital professionals, published on March 23, 2023, by the global certification body known for its commitment to security and governance.
Among the respondents, just under a third (32%) expressed confidence that they could halt a potentially compromised AI system within an hour, while 7% believed it would take them more than an hour to respond effectively. Taken together with the 56% who simply did not know, the figures point to a critical gap in readiness and crisis-response capability within organizations that increasingly rely on AI technology.
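Whether a compromised AI system can be halted within an hour often comes down to whether its calls are routed through a deactivation mechanism at all. As a purely illustrative aside (the survey does not describe any implementation), here is a minimal kill-switch sketch in Python; the `AIKillSwitch` class is a hypothetical name, and in a real deployment the flag would live in shared storage such as a feature-flag service or database so that one operator action disables every instance at once.

```python
import threading

class AIKillSwitch:
    """Hypothetical central gate that every AI call must pass through.

    A threading.Event keeps the sketch self-contained; in production
    the flag would live in shared storage visible to all instances.
    """

    def __init__(self) -> None:
        self._enabled = threading.Event()
        self._enabled.set()  # AI features start enabled

    def disable(self, reason: str) -> None:
        """Flip the switch; takes effect on the next guarded call."""
        print(f"AI disabled: {reason}")
        self._enabled.clear()

    def guard(self, action: str) -> None:
        """Raise before an AI action runs if the switch is off."""
        if not self._enabled.is_set():
            raise RuntimeError(f"AI action blocked by kill switch: {action}")

switch = AIKillSwitch()
switch.guard("summarize customer ticket")    # allowed while enabled
switch.disable("suspected model compromise")
try:
    switch.guard("summarize customer ticket")
except RuntimeError as err:
    print(err)  # blocked in seconds rather than hours
```

One plausible reason a shutdown could take more than an hour is that, without a single gate of this kind to flip, deactivation means hunting down every AI integration by hand.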
### Confusion Over Ownership Compounds Security Risks
One of the underlying issues contributing to this uncertainty is ambiguity over who is accountable for managing enterprise AI applications. The study found that 20% of participants did not know who bore responsibility for their organization’s AI systems, hinting at a governance gap. Meanwhile, 28% attributed this responsibility to board-level executives, 18% pointed to chief information officers (CIOs) or chief technology officers (CTOs), and 13% believed security chiefs (CISOs) held it. The spread of answers suggests a lack of clear direction in AI governance.
Opinions on accountability aside, fewer than half (43%) of security professionals expressed high confidence in their organization’s ability to investigate significant AI incidents and communicate the findings effectively to leadership or regulators. By contrast, more than a quarter (27%) reported having little to no confidence in this regard, indicating widespread uncertainty about the structures in place to manage AI risks.
ISACA’s research also found that many professionals fear their organizations may struggle to identify AI-related security concerns, due in part to insufficient human oversight of these systems. Only 36% of respondents said that human intervention is required to approve most AI actions before they occur, while 26% said AI activities are reviewed only after they have been executed, which raises significant concerns about preemptive risk management.
Further complicating the picture, 11% said AI actions would be scrutinized only if flagged, and 20% of respondents did not know how much human oversight their organization applies to AI decision-making. That lack of clarity suggests many organizations may be unintentionally exposing themselves to heightened risk through insufficient governance. The sketch after this paragraph illustrates the three oversight postures the survey describes.
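To make those three postures concrete, the following hedged sketch contrasts them. `OversightMode`, `run_ai_action`, and `queue_for_review` are names invented for illustration (the survey specifies no design), and the `approve` callback stands in for whatever human-review hook an organization actually uses.

```python
from enum import Enum
from typing import Callable

class OversightMode(Enum):
    PRE_APPROVAL = "approved by a human before execution"  # the 36% posture
    POST_HOC = "reviewed after execution"                  # the 26% posture
    FLAGGED_ONLY = "reviewed only if flagged"              # the 11% posture

review_queue: list[str] = []

def queue_for_review(action: str) -> None:
    """Stand-in for routing an action to a human review queue."""
    review_queue.append(action)

def run_ai_action(action: str, mode: OversightMode, flagged: bool,
                  approve: Callable[[str], bool]) -> str:
    """Dispatch an AI action under a given oversight posture."""
    if mode is OversightMode.PRE_APPROVAL:
        # Human gate runs before anything happens.
        if not approve(action):
            return f"rejected before execution: {action}"
        return f"executed with prior approval: {action}"
    if mode is OversightMode.POST_HOC:
        # Action runs first; a reviewer only sees it afterwards.
        queue_for_review(action)
        return f"executed, queued for after-the-fact review: {action}"
    # FLAGGED_ONLY: runs unreviewed unless something trips a flag.
    if flagged:
        queue_for_review(action)
    return f"executed: {action}"

print(run_ai_action("issue $40 refund", OversightMode.PRE_APPROVAL,
                    flagged=False, approve=lambda a: True))
print(run_ai_action("issue $40 refund", OversightMode.FLAGGED_ONLY,
                    flagged=True, approve=lambda a: False))
print(review_queue)  # only the flagged action ever reaches a human
```

Under the first posture a human can stop a bad action before it happens; under the other two, review (where it happens at all) comes only after execution, which is precisely the preemptive-risk gap the survey flags.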
### The Urgent Need for Governance Frameworks
Jenai Marinkovic, vCISO and CTO of Tiro Security and co-founder and board chair of GRCIE, stressed the urgency of the situation. “While organizations may feel the push to adopt AI technology quickly to keep pace and leverage its capabilities, it is imperative they have the proper guardrails and governance in place before doing so,” she stated. Her point is that the frameworks needed to manage AI’s complexities must be established before adoption, not retrofitted afterwards.
Organizations must ensure that the right personnel, policies, processes, and contingency plans are in place to use AI responsibly and effectively; Marinkovic warned of significant disruption if proper safeguards are not maintained during a crisis. Her call to action reflects a broader industry trend: as AI systems become more integral to operations, organizations must be proactive about implementation and vigilant in managing the attendant risks.
In conclusion, the ISACA survey highlights a concerning level of uncertainty and unpreparedness among IT and cybersecurity professionals over how AI should be managed during security incidents. The findings underscore the pressing need for clear lines of accountability and robust governance frameworks. Without them, organizations risk significant vulnerabilities with far-reaching implications for their operational integrity and public trust.

