
AI Chatbot Tricks Scammers and Gains Information on Money Laundering

Responding to scammers’ emails and text messages has garnered attention from threat researchers, YouTubers, and even comedians in the past. However, a recent experiment that used conversational AI to reply to spam messages and hold sustained conversations with fraudsters has shown that large language models (LLMs) can extract threat intelligence that would typically require a human analyst.

Researchers at UK-based fraud-defense firm Netcraft have been using a chatbot built on OpenAI’s ChatGPT to respond to scams and persuade cybercriminals to divulge sensitive information. Through this method, they have obtained bank account numbers from more than 600 financial institutions across 73 countries, accounts that are frequently used to move stolen funds. The approach has given threat analysts further insight into the infrastructure cybercriminals use to run financial fraud schemes.

Robert Duncan, Netcraft’s vice president of product strategy, highlighted how effectively AI can mimic a victim, letting analysts engage with scammers and understand their motives. The AI’s adaptability across types of criminal activity, from romance scams to advance-fee fraud, has made it a valuable tool in combating financial crime.

As international fraud rings continue to thrive, particularly in regions like Southeast Asia where cyber-scam centers operate, defenders are actively seeking ways to expose cybercriminals and disrupt their operations. Countries like the United Arab Emirates have recognized the potential of AI in enhancing cybersecurity measures and have forged partnerships to leverage this technology for proactive defense against cyber threats.

Netcraft’s research indicates that AI chatbots can make it substantially harder for cybercriminals to carry out fraudulent activities. By using personas that speak local languages, the chatbots engage scammers more authentically, which has led to the discovery of thousands of accounts associated with fraudulent schemes. As Netcraft has expanded the chatbot’s linguistic capabilities, the distribution of discovered fraudulent accounts has shifted, reflecting the effectiveness of tailored personas in combating financial fraud.

Engaging scammers with AI chatbots makes these conversations scalable, yielding valuable threat intelligence. Netcraft has introduced its Conversational Scam Intelligence service, showcasing the potential of AI-powered technologies to disrupt criminal financial infrastructure. By maintaining prolonged conversations with cybercriminals, the chatbots extract critical data that aids in understanding and countering fraudulent activity.
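The pattern described here, an AI persona that keeps a scammer talking while harvesting payment details, can be sketched loosely as a loop. This is a minimal illustrative sketch, not Netcraft's implementation: the persona response is stubbed out (a real system would prompt an LLM to play a plausible victim), and the IBAN-style regex and all function names are assumptions for demonstration.

```python
import re

# Illustrative pattern for IBAN-like account identifiers (assumption, not
# Netcraft's actual extraction logic).
IBAN_RE = re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b")

def persona_reply(scammer_msg, history):
    """Stub for an LLM persona response. A real system would prompt a model
    to impersonate a victim, ideally in the scammer's own language."""
    if "transfer" in scammer_msg.lower():
        return "I'd love to help! Which account should I send the money to?"
    return "That sounds interesting, can you tell me more?"

def extract_intel(scammer_msg):
    """Pull candidate mule-account identifiers from an incoming message."""
    return IBAN_RE.findall(scammer_msg)

def run_conversation(incoming_messages):
    """Drive one baiting conversation: reply to each message and collect
    any account numbers the scammer reveals along the way."""
    history, intel = [], []
    for msg in incoming_messages:
        intel.extend(extract_intel(msg))
        reply = persona_reply(msg, history)
        history.append((msg, reply))
    return intel

# Example exchange with a simulated scammer.
msgs = [
    "Congratulations, you won a prize! Just pay a release fee.",
    "Please transfer the fee to GB29NWBK60161331926819 today.",
]
print(run_conversation(msgs))  # → ['GB29NWBK60161331926819']
```

Running many such loops in parallel is what makes the approach scale: each conversation is cheap for the defender but costs the scammer real time and, eventually, a usable mule account.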

Moreover, the application of AI in engaging with cybercriminals presents an opportunity to shift the balance of power from attackers to defenders. By operationalizing AI chatbots on a larger scale, defenders can actively challenge cybercriminals, making it more difficult for them to distinguish genuine conversations from AI-generated interactions. This approach holds promise in light of attackers’ increasing adoption of AI technologies for malicious purposes.

Netcraft’s ongoing efforts to leverage AI in combating financial fraud may pave the way for a new era of cybersecurity defense. Duncan noted that there are indications of attackers utilizing AI in their operations, leading to potential AI-on-AI interactions. As the cybersecurity landscape evolves, the utilization of AI technologies in engaging with cybercriminals could prove to be a pivotal strategy in thwarting fraudulent activities and safeguarding financial institutions and individuals against malicious threats.
