Individual Vulnerability Severity Not Always a Good Measure of Risk Exposure

The Common Vulnerability Scoring System (CVSS) has long served as a fundamental component of IT security programs worldwide, yet it faces significant challenges when applied to operational technology (OT). This specialized realm requires a distinct understanding of risk factors that diverges from conventional expectations surrounding vulnerability assessments. Experts in the OT community have consistently criticized CVSS for its inadequacies, and scrutiny has only heightened since November 2023, when the Forum of Incident Response and Security Teams (FIRST), the body responsible for maintaining CVSS, attempted to address those criticisms with the introduction of CVSS 4.0.
However, a growing faction of OT security specialists is skeptical of the new version's efficacy. Many now believe that merely refining the measurement of individual vulnerabilities is insufficient. Rather, they argue for an entirely new methodology, one that emphasizes the cascading consequences of security breaches, acknowledges sector interdependencies, and integrates effective consequence management strategies.
CVSS 4.0: An Imperfect Fix
Patrick Miller, the president and CEO of Ampyx Cyber, offered an analogy to illustrate the shortcomings of CVSS. He likened traditional vulnerability assessments to reading a thermometer: the number alone does not convey how the temperature actually feels, which depends on environmental factors like humidity. CVSS 4.0 introduces more contextual metrics that make the score more meaningful, but the process of incorporating those variables has proven laborious for OT operators.
Some industry leaders acknowledge the enhancements in CVSS 4.0, noting improvements such as the integration of safety impacts into base scores and the consideration of environmental metrics that account for subsequent system effects. However, as former U.S. Cybersecurity and Infrastructure Security Agency (CISA) Senior Advisor Allan Friedman pointed out, the effort to contextualize vulnerabilities is not something that can easily be standardized or disseminated. Organizations must provide their unique data, which is often neither easily accessible nor formatted for machine readability. As a result, the task becomes overwhelmingly time-consuming, posing challenges especially for smaller entities with limited resources.
Compounding this issue is the fact that many contextual factors require separate assessments for each asset impacted by a vulnerability. Large enterprises often oversee numerous assets, each functioning within distinct segments of a broader network. This makes it exceedingly complex for security teams to aggregate and analyze the required contextual information, further delaying the adoption of the new standard.
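The per-asset burden described above can be made concrete with a small sketch. The asset attributes, weighting scheme, and numbers below are hypothetical illustrations, not the official CVSS environmental formula; the point is simply that the same base score yields different priorities depending on each asset's criticality and network exposure, and that this adjustment must be repeated for every affected asset.

```python
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    criticality: float   # 0.0 (non-essential) .. 1.0 (safety-critical); assumed scale
    exposure: float      # 0.0 (isolated segment) .. 1.0 (internet-facing); assumed scale

def contextual_score(base_score: float, asset: Asset) -> float:
    """Scale a base score by asset context, capped at 10.0.

    Illustrative weighting only -- NOT the CVSS environmental equation.
    """
    weight = 0.5 + 0.25 * asset.criticality + 0.25 * asset.exposure
    return round(min(base_score * weight, 10.0), 1)

# The same critical-severity flaw (base 9.8) lands differently per asset.
assets = [
    Asset("engineering-workstation", criticality=0.4, exposure=0.7),
    Asset("safety-plc", criticality=1.0, exposure=0.1),
    Asset("historian-server", criticality=0.6, exposure=0.9),
]
for a in assets:
    print(a.name, contextual_score(9.8, a))
```

Multiply this small loop by thousands of assets spread across segmented networks, with criticality and exposure data that must first be gathered by hand, and the scale of the contextualization problem becomes apparent.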
The non-profit FIRST, responsible for CVSS, openly admits to the lack of comprehensive feedback from OT system owners about the new version. Although the modifications have generally been well received, the system serves as just one component of a more extensive vulnerability management strategy.
Correlations and Cross-Checks: Additional Dimensions of Vulnerability and Threats
Historically, CVSS scores have often provided overly broad vulnerability assessments that have little relevance to OT defenders. The complexity intensifies because OT systems are rarely updated or patched. Sean Tufts, field CTO at Claroty, noted that the infrastructures commonly used in OT environments tend to remain static, making even minor changes potentially hazardous. For instance, a security patch applied to a desktop monitoring an industrial oven has previously caused system failures, underscoring the precarious balance between cybersecurity and operational reliability.
In light of these challenges, Tufts advocates a more nuanced approach to vulnerability management. By correlating CVSS scores with real-time intelligence and exploit databases, organizations can prioritize vulnerabilities according to their true risk levels. Nevertheless, this still leaves fundamental questions unanswered about the actionable insights organizations actually need.
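The correlation Tufts describes can be sketched in a few lines. The findings, the known-exploited set, and the exploit probabilities below are hard-coded stand-ins; in practice they would come from feeds such as CISA's Known Exploited Vulnerabilities catalog or an exploit-prediction service (assumed data sources, not shown here).

```python
# Hypothetical scan findings with CVSS base scores.
findings = [
    {"cve": "CVE-2024-0001", "cvss": 9.8},
    {"cve": "CVE-2024-0002", "cvss": 6.5},
    {"cve": "CVE-2024-0003", "cvss": 8.1},
]
# Stand-in threat intelligence: which flaws are exploited in the wild,
# and an estimated probability of exploitation for each.
known_exploited = {"CVE-2024-0002"}
exploit_probability = {"CVE-2024-0001": 0.02,
                       "CVE-2024-0002": 0.89,
                       "CVE-2024-0003": 0.10}

def priority(finding):
    # Active exploitation outranks raw severity; within each tier,
    # sort by predicted exploitability, then by CVSS score.
    return (finding["cve"] in known_exploited,
            exploit_probability.get(finding["cve"], 0.0),
            finding["cvss"])

for f in sorted(findings, key=priority, reverse=True):
    print(f["cve"], f["cvss"])
```

Note how the medium-severity CVE-2024-0002 jumps to the top of the queue ahead of the 9.8-rated flaw: severity alone would have ranked the list in exactly the wrong order for a defender who cannot patch everything.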
The Future for Vulnerability Scores: Automation or Alternatives?
Experts like Miller assert that automation will be critical for reworking these processes, enabling organizations to manage vulnerabilities comprehensively across their operations. While larger enterprises have begun using AI tools for data processing, the lack of universally applicable solutions remains a challenge. Miller highlights the importance of standardized communication formats, such as the Common Security Advisory Framework (CSAF), for efficiently disseminating this information across the board.
Furthermore, Friedman recommends that vendors should offer clear alternatives to patching as a mitigation strategy, streamlining guidance for clients on implementing security measures suited to their specific systems.
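Machine-readable advisories make Friedman's recommendation actionable: an operator who cannot patch can automatically pull out the non-patch guidance a vendor publishes. The snippet below is a minimal sketch that filters a CSAF 2.0-style advisory for mitigations and workarounds; the advisory fragment is hand-made for illustration, not a real vendor document.

```python
import json

# Hand-made fragment in the shape of a CSAF 2.0 advisory (illustrative only).
advisory = json.loads("""
{
  "vulnerabilities": [
    {
      "cve": "CVE-2024-0001",
      "remediations": [
        {"category": "vendor_fix", "details": "Apply firmware 2.1.4."},
        {"category": "mitigation", "details": "Restrict port 502/tcp to the engineering VLAN."},
        {"category": "workaround", "details": "Disable the embedded web server."}
      ]
    }
  ]
}
""")

# Surface only the non-patch options for operators who cannot take downtime.
for vuln in advisory["vulnerabilities"]:
    for rem in vuln.get("remediations", []):
        if rem["category"] in ("mitigation", "workaround"):
            print(vuln["cve"], "-", rem["details"])
```

Because the format is standardized, the same filter works on advisories from any vendor that publishes CSAF, which is precisely the economy of scale Miller argues automation needs.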
Moving forward, many in the industry recognize the shortcomings of merely enhancing CVSS. They are advocating alternative approaches, such as the cross-sector prioritization methodology developed by the Atlantic Council. This strategy emphasizes the holistic evaluation of vulnerabilities, concentrating on elements such as infrastructure significance and systemic dependencies across different sectors, rather than primarily focusing on specific software flaws.
Danielle Jablanski, one of the methodology’s key architects, has underscored the localized nature of emergency scenarios, advocating for an emphasis on the regional impacts of cybersecurity events rather than purely numerical assessments of vulnerability severity. This refocusing seeks to prioritize potential scenarios that could trigger widespread consequences, thereby enhancing overall preparedness and response capabilities in critical infrastructure settings.

