AI Researchers Address SIEM Migration Challenges

System Translates Detection Rules Across Security Platforms

Switching between security monitoring platforms, or absorbing the IT infrastructure of an acquired company, poses a significant challenge for threat-detection rules. Because each platform typically processes detection rules in its own language, a migration can render an existing rule library useless, forcing teams to rebuild their rules from scratch in a manual effort that can take months. To address this, a research team from the National University of Singapore and Fudan University has developed an artificial intelligence agent that accelerates the process while minimizing errors.

The team tested its system, dubbed ARuleCon, on roughly 1,500 rule conversions spanning five widely used security information and event management (SIEM) platforms: Splunk, Microsoft Sentinel, IBM QRadar, Google Chronicle, and RSA NetWitness. Each platform ingests logs, correlates security events, and raises alerts when potentially harmful activity is detected, but each vendor uses its own proprietary query language. The differences among these languages go well beyond syntax, so converting rules from one platform to another requires more than mechanical translation.
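To illustrate why translation is not a token-level substitution, here is a hypothetical brute-force-login detection written by hand for two of the platforms named above, embedded as string literals. The queries, field names, and thresholds are illustrative assumptions, not rules from the study; note how field names, operators, and even the grouping model differ between the dialects.

```python
# Hypothetical: the same detection intent in Splunk SPL and Sentinel KQL.
SPLUNK_SPL = (
    "index=auth action=failure "
    "| stats count by user "
    "| where count > 5"
)

SENTINEL_KQL = (
    "SigninLogs "
    "| where ResultType != 0 "
    "| summarize FailedCount = count() by UserPrincipalName "
    "| where FailedCount > 5"
)

# Only generic pipeline tokens and the numeric threshold survive a
# naive word-level comparison; every field name and aggregation differs.
shared_tokens = set(SPLUNK_SPL.split()) & set(SENTINEL_KQL.split())
print(shared_tokens)
```

A find-and-replace mapping cannot bridge these dialects, which is the gap ARuleCon targets.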

To facilitate this conversion, ARuleCon operates in three distinct stages. Initially, the system reads a source rule and extracts all platform-specific elements, resulting in a plain-language description that captures what the original rule aims to accomplish—this includes filters, time frames, thresholds, and grouping conditions. This plain text description is then fed into a large language model (LLM), which generates an equivalent rule structured in the target platform’s query language.
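The first two stages can be sketched in a few lines of Python. All names, the description schema, and the hard-coded extraction result below are hypothetical; the article does not publish ARuleCon's actual prompts or intermediate format.

```python
# Minimal sketch (hypothetical names) of stages 1-2: extract a
# platform-neutral description of a rule, then build an LLM prompt
# asking for an equivalent rule in the target query language.

def extract_neutral_description(source_rule: str) -> dict:
    """Stage 1 (sketch): a real extractor would parse the source query
    language; here the result is hard-coded for one hypothetical
    Splunk-style brute-force rule."""
    return {
        "intent": "alert on repeated authentication failures per user",
        "filters": ["event is an authentication failure"],
        "grouping": ["user"],
        "threshold": "more than 5 events",
        "time_window": "default search window",
    }

def build_llm_prompt(description: dict, target_platform: str) -> str:
    """Stage 2 (sketch): render the neutral description as a prompt."""
    lines = [f"Write a {target_platform} detection rule that does the following:"]
    lines.append(f"- Intent: {description['intent']}")
    for f in description["filters"]:
        lines.append(f"- Filter: {f}")
    lines.append(f"- Group by: {', '.join(description['grouping'])}")
    lines.append(f"- Threshold: {description['threshold']}")
    return "\n".join(lines)

desc = extract_neutral_description(
    "index=auth action=failure | stats count by user | where count > 5"
)
print(build_llm_prompt(desc, "Microsoft Sentinel (KQL)"))
```

The key design point is that the intermediate description contains no source-platform syntax, so the LLM generates the target rule from intent rather than attempting a literal translation.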

Subsequently, two automated checking agents come into play to enhance the drafted rule. One agent leverages the official vendor documentation to ensure that all operators and field names align correctly with the target platform’s requirements. The other agent tests both the original and converted rules through Python code, applying them to synthetic log data to verify that their outputs align perfectly. In instances of discrepancies, the system initiates a repair loop to rectify any errors.
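The differential check described above can be sketched as follows. The rules are modeled as plain Python predicates and the schema mapping is a hypothetical assumption; the real system generates and runs actual query-language rules.

```python
# Sketch of the consistency-checking agent (hypothetical helpers):
# run the original and converted rules over the same synthetic logs
# and compare their verdicts event by event.

def rule_original(event: dict) -> bool:
    # Stand-in for the source rule's per-event filter.
    return event.get("action") == "failure"

def rule_converted(event: dict) -> bool:
    # Stand-in for the converted rule's filter on the target schema.
    return event.get("ResultType", 0) != 0

def normalize(event: dict) -> dict:
    # Map target-platform field names onto the source schema so both
    # rules can be evaluated against one synthetic log stream.
    return {
        "action": "failure" if event["ResultType"] != 0 else "success",
        "ResultType": event["ResultType"],
    }

synthetic_logs = [{"ResultType": rt} for rt in (0, 50126, 0, 50126, 50126)]

mismatches = [
    e for e in (normalize(ev) for ev in synthetic_logs)
    if rule_original(e) != rule_converted(e)
]
# An empty mismatch list means the rules agreed on this test data;
# any mismatch would trigger the repair loop.
print(len(mismatches))
```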

In tests across three language models (GPT-5, DeepSeek-V3, and LLaMA-3), ARuleCon improved performance by roughly 15% over each model working alone, and the gain held across measures of structural, semantic, and logical consistency, suggesting the improvement comes from the system's design rather than from any single model. Many conversions executed flawlessly on the target platforms, with success rates above 90% and particularly high accuracy for Google Chronicle and Splunk. IBM QRadar and RSA NetWitness proved harder, largely because of their sparser documentation and more complex grammars.

The research team acknowledges the system's limitations. The Python-based consistency check tests rules against logs that ARuleCon generates itself, which may not reflect the noisy, dynamic data streams of real-world security operations. Ming Xu, a co-author of the study, said confidence in the system is strongest for rules that the generated test cases can cover well, and weaker for rules involving rare behaviors, custom schemas, or complex temporal correlations.

The neutral template at the core of ARuleCon has its own constraints. It handles standard detection logic well but struggles with rules that require stateful processing, vendor-specific data enrichment, or behavior that is not expressed in the rule text itself.

The research team advises a staged validation approach before the full deployment of converted rules. Critical steps in this process should include testing the rules against historical logs and known attack patterns, followed by running them in a monitoring-only mode before their activation in a live environment. Currently, this validation process operates offline and is recognized as an area for future development.
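The staged rollout above can be sketched as a small harness. The helper names and the monitor-only return shape are hypothetical; the point is that a converted rule is backtested and shadowed before it is ever allowed to page anyone.

```python
# Sketch of staged validation (hypothetical names): backtest a
# converted rule on historical logs, then shadow it on live data
# in a monitor-only mode that records hits without alerting.

def evaluate_rule(rule, events):
    return [e for e in events if rule(e)]

def staged_validation(rule, historical_logs, live_stream, monitor_only=True):
    # Step 1: replay against historical logs and known attack patterns.
    backtest_hits = evaluate_rule(rule, historical_logs)
    # Step 2: shadow the live stream; count hits, fire nothing.
    shadow_hits = evaluate_rule(rule, live_stream)
    if monitor_only:
        return {"backtest": len(backtest_hits), "shadow": len(shadow_hits)}
    raise NotImplementedError("live activation only after human review")

rule = lambda e: e.get("failed_logins", 0) > 5
hist = [{"failed_logins": n} for n in (1, 9, 3)]
live = [{"failed_logins": n} for n in (7, 2)]
print(staged_validation(rule, hist, live))
```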

ARuleCon also relies on vendor documentation as the authoritative reference for refining its conversions. This introduces a risk, since the system has limited ability to recognize when that documentation is incorrect or incomplete. The researchers argue such cases are rare, as industry specifications are generally reliable, and the design allows the documentation to be updated.

ARuleCon is efficient but not instantaneous. Converting a single rule with GPT-5 takes about 140 seconds and uses roughly ten times the compute of a direct language-model translation, so the system is aimed at batch workloads such as platform migrations, rule onboarding, and periodic maintenance rather than real-time alerting. Xu noted that "spending tens of seconds or even longer on a high-quality conversion can be acceptable, especially when compared with the manual effort required from detection engineers."
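A back-of-envelope calculation from the figures above, assuming fully sequential conversion with no parallelism, shows why this is framed as batch work:

```python
# Rough estimate: ~140 s/rule (figure above) over the paper's
# ~1,500-rule test set, run one rule at a time.
seconds_per_rule = 140
rules = 1500
hours = seconds_per_rule * rules / 3600
print(round(hours, 1))  # prints 58.3
```

About 58 hours of sequential machine time, versus the months of manual engineering the article describes for a rebuild from scratch.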

The source code for ARuleCon has been publicly released on GitHub, and the team's industrial partner, Singtel's NCS Group in Singapore, is commercializing a prototype. "We view this as a key reason why ARuleCon should augment analysts rather than replace them," Xu concluded, underlining the continuing role of human expertise in cybersecurity operations.
