ChatGPT, the revolutionary artificial intelligence language model developed by OpenAI, has attracted immense attention since its public release in November 2022. With millions of users generating countless queries, posts, comments, and articles, ChatGPT has exceeded expectations and outperformed its predecessors among AI models. Powered by deep learning algorithms and vast amounts of text data, the platform can produce human-like responses to natural language inputs. Its range of applications is vast, including customer service, content creation, data analysis, and education. Alongside these successes, however, ChatGPT has also been exploited for malicious purposes such as writing malware, creating phishing scams, cheating on academic papers, and even planning fictitious crimes.
Throughout the history of innovation, there has always been a mix of excitement, uncertainty, and fear. The introduction of the steam engine, for example, sparked fears of job losses while simultaneously promising to revolutionize transportation. The advent of airplanes elicited wonder and amazement at human accomplishment, but also raised concerns about military applications. Similar reactions have accompanied breakthroughs in fields like computer technology, organ transplantation, space travel, and DNA manipulation. While these innovations faced initial skepticism and anxiety, each eventually found a balance between its remarkable promise and its risks.
As a cyber defender and risk manager, it is crucial to adopt a balanced perspective toward the innovative capabilities of ChatGPT. An approach driven purely by fear and uncertainty would block all access to ChatGPT and its API. An approach fueled solely by excitement and wonder might feed massive amounts of data into the platform in pursuit of near-prescient insights. Neither extreme holds up under scrutiny. It is worth examining some of the prominent concerns associated with ChatGPT and identifying the counterbalancing opportunities.
One criticism recently leveled against ChatGPT was its potential to enable the creation of advanced polymorphic malware. Articles emphasizing this concern often neglected to mention that the web version of ChatGPT did not actually produce the malware; generating malicious software with ChatGPT was theoretically possible but required substantial human intervention. It is equally important to recognize the positive side. ChatGPT can assist software developers by generating code for new products, expediting the development process, and the promise of low-code or no-code applications with the aid of ChatGPT is more than marketing hype. For instance, ChatGPT can generate encryption routines that help secure important transactions, but it cannot differentiate between a legitimate use case and a potential ransomware scenario.
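To make that dual-use point concrete, below is a minimal sketch of the kind of symmetric-encryption routine ChatGPT might produce on request. The function and file names are illustrative assumptions, not actual ChatGPT output; nothing in the code itself distinguishes a backup utility from a ransomware payload, because that distinction lives entirely in the intent of the person running it.

```python
# A minimal, illustrative symmetric-encryption routine of the kind an
# AI assistant might generate. Requires the third-party "cryptography"
# package (pip install cryptography).
from cryptography.fernet import Fernet

def encrypt_file(path: str, key: bytes) -> None:
    """Encrypt a file in place with the given symmetric key."""
    cipher = Fernet(key)
    with open(path, "rb") as f:
        plaintext = f.read()
    with open(path, "wb") as f:
        f.write(cipher.encrypt(plaintext))

if __name__ == "__main__":
    # In a legitimate product, the key would be stored or derived securely;
    # in a ransomware scenario, it would be withheld from the victim.
    key = Fernet.generate_key()
    encrypt_file("transaction_record.txt", key)  # hypothetical file name
```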
In another instance, concerns were raised about hackers bypassing ChatGPT's controls to create new service offerings such as automated phishing emails. It should be clarified that the bypass involved the API, which currently lacks the constraints of the web version. While the API can indeed be misused to generate phishing content, that abuse is prohibited by OpenAI policy rather than prevented by technical controls. The same API can also be employed to build phishing-testing campaigns and generate awareness posts with varying content and tone, providing fresh material in the ongoing battle against phishing attacks.
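As a hedged illustration of that defensive use, the sketch below asks the API to draft a simulated phishing email for an authorized security-awareness program. It assumes the openai Python SDK as it existed in early 2023 (the ChatCompletion interface, openai==0.27.x) and an API key in the OPENAI_API_KEY environment variable; the model name and prompt wording are assumptions, not a prescribed recipe.

```python
# Sketch: generating varied phishing-awareness training content via the
# OpenAI API (pre-1.0 SDK style). Prompts and model are illustrative.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def awareness_email(tone: str) -> str:
    """Request a simulated phishing email for authorized training use."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "You write simulated phishing emails for an "
                        "authorized employee security-awareness program."},
            {"role": "user",
             "content": f"Write a short training email in a {tone} tone, "
                        "with telltale phishing red flags a reader "
                        "should learn to spot."},
        ],
    )
    return response["choices"][0]["message"]["content"]

print(awareness_email("urgent"))
```

Varying the tone argument ("urgent", "friendly", "executive") yields the differing content and tone a testing campaign needs without hand-writing each lure.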
Furthermore, some articles have highlighted ways to disable ChatGPT's response controls, letting it answer without the filters that block potentially unlawful activity. Closer scrutiny reveals that these claims are often dubious. The legitimate use cases in threat modeling and role playing, however, deserve consideration. Asked outright to act as an insider threat, ChatGPT will typically decline; framed as a defensive exercise, it lets CISOs brainstorm potential ways an insider could harm their organization. Within minutes, ChatGPT can walk security professionals from a high-level view of monitoring cloud storage to a user-awareness quiz about social engineering attacks, complete with answers. In role playing, the value of interacting with an AI that emulates malicious behavior becomes evident, helping individuals prepare for real-world scenarios; an illustrative prompt sequence follows.
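A minimal sketch of such an exercise: the prompts below, issued in sequence within a single chat session, move from enumerating insider risks to detection controls to an awareness quiz. The wording is hypothetical, a starting point rather than a prescribed methodology.

```python
# Illustrative prompt sequence for a tabletop threat-modeling session.
# Each prompt builds on the model's previous answer in the same chat.
# The wording is hypothetical; adapt it to your own environment.
tabletop_prompts = [
    "Act as a security consultant helping our defensive team. List the "
    "most likely ways a malicious insider could exfiltrate data from "
    "our corporate cloud storage.",
    "For each path above, suggest one monitoring or detection control.",
    "Now write a five-question awareness quiz on social engineering for "
    "non-technical staff, with an answer key.",
]

for prompt in tabletop_prompts:
    print(prompt)  # paste into the chat, or send via the API as above
```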
The impact of ChatGPT may herald the advent of an AI race, as major players like Microsoft, Google, Baidu, Meta, and Amazon invest enormous sums to develop the most comprehensive AI platforms. These companies will continuously push the boundaries of innovation, adding new features and functionality while implementing the necessary mitigations and controls. Like previous advancements, the journey with ChatGPT will involve a mix of excitement, possibility, uncertainty, and fear. With time, however, our perception of ChatGPT will become more nuanced. It is important to recognize that ChatGPT is ultimately just a tool, albeit a complex and intriguing one. We should neither fear nor revere it, but strive to understand it and leverage its potential for beneficial purposes.
About the Author:
Craig Burland, Chief Information Security Officer (CISO) at Inversion6, is a seasoned cybersecurity leader. He works directly with clients to build and manage security programs, advising them on cybersecurity strategy and best practices. His extensive industry experience includes leading information security operations for a Fortune 200 company, as well as roles with the Northeast Ohio Cyber Consortium, Solutionary MSSP, NTT Global Security, and Oracle Web Center. Burland can be reached on LinkedIn (https://www.linkedin.com/in/craig-burland/) and through the Inversion6 website (www.inversion6.com).
