British officials have issued a warning to organizations regarding the integration of artificial intelligence-driven chatbots into their businesses. The National Cyber Security Centre (NCSC) has highlighted research that shows these chatbots can be easily tricked into performing harmful tasks. In a pair of blog posts set to be published, the NCSC aims to shed light on the security problems associated with algorithms that generate human-like interactions, often referred to as large language models (LLMs).
These AI-powered tools, in the form of chatbots, are expected to revolutionize not only internet searches but also customer service work and sales calls. However, the NCSC warns that such integration can carry substantial risks, especially if these models are connected to other elements of an organization’s business processes. Researchers and academics have repeatedly demonstrated the vulnerabilities of chatbots to manipulation and subversion, whether through rogue commands or circumventing built-in security measures.
Oseloka Obiora, Chief Technology Officer at RiverSafe, a renowned cyber expert, emphasized the need for businesses to exercise caution when embracing AI. Obiora warns that chatbots can be easily susceptible to manipulation and hijacking for fraudulent purposes, potentially leading to a surge in illegal transactions, data breaches, and fraud. Instead of blindly adopting the latest AI trends, Obiora encourages senior executives to evaluate both the benefits and risks associated with AI integration and prioritize implementing necessary cyber protection measures to safeguard their organizations.
To illustrate the potential risks, consider the example of an AI-powered chatbot deployed by a bank. If a hacker crafts a query carefully, they may trick the chatbot into making an unauthorized transaction. The NCSC likens the need for caution when using LLMs to utilizing beta software or code libraries. Just as organizations would not fully trust or allow such products to handle customer transactions, a similar level of caution should be applied to LLMs.
Authorities worldwide are grappling with the rise of large language models like OpenAI’s ChatGPT, which businesses are incorporating into various services, including sales and customer care. The security implications of AI are still being explored, with authorities in the United States and Canada witnessing an increase in hackers leveraging AI technology for malicious purposes.
The warning from British officials emphasizes the importance of prioritizing cybersecurity and thoroughly assessing the risks associated with AI integration. While AI has the potential to bring numerous benefits to organizations, it should not come at the expense of exposing businesses and customers to cyber threats. As the race to embrace AI continues, there is an urgent need for businesses to implement robust cybersecurity measures and diligently adhere to necessary due diligence checks before integrating AI-powered chatbots or other AI technologies into their operations. This will ensure organizations are adequately protected against potential harm, including fraud, data breaches, and illegal transactions. By taking a cautious and proactive approach, businesses can fully harness the advantages of AI while mitigating its inherent risks.

