Center for Internet Security
Security Metrics v1.1.0 (2010)
CIS has been consistently promoting good network/cybersecurity practices for a couple of decades, publishing detailed technical security configuration advice.
The CIS Security Metrics v1.1.0 (2010) is a ‘consensus set’ of ~28 security metrics definitions developed by a team of 150 industry experts who set out to create “a collection of unambiguous, logically defensible outcome and practice metrics measuring: the frequency and severity of security incidents; incident recovery performance; and the use of security practices that were generally regarded as effective.”
The ~28 metrics described in ~150 pages do not cover the entire information security metrics landscape but are technology-centric, covering certain aspects within IT (now ‘cyber’) security. Each metric is specified in a standardized and explicit manner (in the style of a technical specification for a software function or perhaps an electronic component), and is accompanied by paragraphs briefly describing its objectives, uses and limitations, plus references. This gets a little tedious and repetitive, particularly for simple metrics such as “Number of Applications” which takes a page and a half to describe. However, since the metrics are intended “to be used across organizations to collect and analyze data on security process performance and outcomes”, the specifications are explicitly detailed in order to encourage consistency and comparability between organizations.
As with most other lists-of-security-things-that-can-be-measured, there is precious little attempt to justify the selection or really explain the value of the chosen metrics to readers. The development team’s discussions around each metric are not provided or summarized for example. We don’t know why or on what basis these particular 28 metrics were selected from the dozens that were presumably considered and rejected. Why would you want to measure “cost of incidents” and “mean cost of incidents” separately, for example, when both are derived from the same base data? There may be legitimate reasons for including these two metrics separately but if so we are left guessing.
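To illustrate the point about derived metrics, both figures come from the same base data set, a list of per-incident costs. A minimal sketch with hypothetical figures:

```python
# Illustrative only: "cost of incidents" and "mean cost of incidents"
# both derive from the same base data (hypothetical per-incident costs).
incident_costs = [12_000, 4_500, 30_000, 8_250]

cost_of_incidents = sum(incident_costs)                        # total cost
mean_cost_of_incidents = cost_of_incidents / len(incident_costs)

print(cost_of_incidents)       # 54750
print(mean_cost_of_incidents)  # 13687.5
```

Given the total and the incident count, the mean adds no new information, which is why the separate inclusion of both metrics invites the question.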
Although the CIS Security Metrics have not been updated since 2010, CIS offers and maintains more topical advice:
The CIS benchmarks are extremely detailed and specific.
The CIS Microsoft Windows 10 Enterprise (Release 21H1 or older) Benchmark, for instance, has steadily accumulated over 1,200 pages since its initial publication at the end of 2015. Many of the configuration settings can be checked programmatically, suggesting the idea of automatically gathering security status data from an organization's population of IT systems, supplementing that with periodic checks/audits for the remaining settings - a bottom-up form of technical security metric that may be useful for operational or tactical reasons.
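The bottom-up checking idea can be sketched as a comparison of each system's observed settings against benchmark-recommended values. The setting names and values below are illustrative placeholders, not actual CIS benchmark items:

```python
# Hypothetical sketch of a programmatic configuration check: compare a
# system's observed settings against a baseline of expected values.
# Setting names/values are illustrative, not real CIS recommendations.
EXPECTED = {
    "PasswordHistorySize": 24,
    "MinimumPasswordLength": 14,
    "LockoutBadCount": 5,
}

def check_settings(observed: dict) -> dict:
    """Return per-setting pass/fail against the expected baseline."""
    return {name: observed.get(name) == value
            for name, value in EXPECTED.items()}

results = check_settings({"PasswordHistorySize": 24,
                          "MinimumPasswordLength": 8,
                          "LockoutBadCount": 5})
print(results)
# {'PasswordHistorySize': True, 'MinimumPasswordLength': False, 'LockoutBadCount': True}
```

Run across a whole population of systems, per-setting results like these become the raw data feeding the operational/tactical metrics mentioned above.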
Accompanying system security audit tools (CIS-CAT) can check systems for compliance with the CIS benchmark recommendations, generating a simple compliance score - a single metric giving an overall indication of a rather complex security configuration.
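An overall compliance score of this kind can be sketched as the percentage of benchmark checks that pass. This is in the spirit of, not necessarily identical to, the score CIS-CAT reports; the check results below are hypothetical:

```python
# Hedged sketch: overall compliance score as the fraction of checks
# that passed. Check names and results are hypothetical.
def compliance_score(results: dict) -> float:
    """Percentage of checks (True = pass) that passed."""
    passed = sum(results.values())
    return 100.0 * passed / len(results)

score = compliance_score({"check_1": True, "check_2": False,
                          "check_3": True, "check_4": True})
print(round(score, 1))  # 75.0
```

Collapsing hundreds of pass/fail checks into one number inevitably loses detail, which is why such a score works better as a trend indicator than as an absolute measure of security.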
The CIS Critical Security Controls are numerous cybersecurity controls within 18 categories, all of which are deemed ‘critical’ for various organizations by consensus of a panel of experts within or consulted by CIS. Again, an organization’s status against these recommendations could be determined by a systematic audit/review process, perhaps partly automated.