CSA metrics

Cloud Security Alliance
Continuous Audit Metrics

Critiqued Dec 2021. The CSA has published a catalog of cloud security metrics, accompanied by a very brief (~2 page) ‘code of practice’ overview specifying the following 8+ characteristics of good metrics:

  1. Strategic and aligned: metrics should align with (support the achievement of) corporate/business objectives.
  2. Simple: ‘key metrics’ (also known as KPIs) must be understandable which, according to the paper, means stakeholders should be trained to comprehend and make use of metrics. Good luck with that!
  3. Establish RACI: metrics need to be owned by someone held accountable for their outcomes, and recipients (audiences) must be identified.
  4. Actionable: so “if a metric trends downward, employees should know the corrective action process to facilitate the decision on what corrective actions to take to improve performance”.  Hmmm.  Furthermore, metrics apparently need targets (defined objectives) and thresholds that trigger mandatory root-cause analysis.
  5. Frequency: the description concerns the need for timely information, with frequencies matching the audiences’ information needs.
  6. Referenceable and relevant: the description concerns understanding the origins/purpose of a metric, and notes that metrics may become less relevant over time (they have a finite lifecycle).
  7. Accurate and correlated: the description for this criterion states that “The measurement system needs to be Repeatable and Reliable” and “A good assessment should be reliable, valid, and free from bias, i.e. [produce] stable and consistent results”. It also mentions that metrics should “correlate and drive the desired outcomes”.
  8. Tamper-proof: testing should confirm that metrics are incapable of being ‘circumvented’, and the data sources plus data collection and analytical processes should be consistent, with any process changes being proactively managed.
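
Characteristic 4’s targets and thresholds (and characteristic 3’s accountable owner) can be sketched in code. The following Python fragment is purely illustrative - the metric name, owner, values and threshold logic are our own assumptions, not drawn from the CSA papers:

```python
from dataclasses import dataclass

@dataclass
class Metric:
    """A metric with an accountable owner, a target and a trigger threshold."""
    name: str
    owner: str          # accountable person (characteristic 3: RACI)
    target: float       # defined objective, e.g. 99.0 (% patched on time)
    threshold: float    # value below which root-cause analysis is mandatory

    def assess(self, value: float) -> str:
        """Classify a measurement against the threshold and target."""
        if value < self.threshold:
            return "BREACH: trigger root-cause analysis"
        if value < self.target:
            return "BELOW TARGET: corrective action needed"
        return "ON TARGET"

# Hypothetical example metric
patching = Metric("Patching SLA compliance", "IT Ops Manager",
                  target=99.0, threshold=95.0)
print(patching.assess(93.5))   # BREACH: trigger root-cause analysis
print(patching.assess(97.2))   # BELOW TARGET: corrective action needed
print(patching.assess(99.4))   # ON TARGET
```

The point of characteristic 4 is that each of those three outcomes should map to a known corrective-action process, not merely a coloured cell on a dashboard.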

The characteristics seem straightforward enough although the structure is curious (e.g. several distinct criteria are lumped into number 7) and there is no real basis stated for choosing these specific characteristics rather than others (such as predictiveness and cost-effectiveness - two of the most important PRAGMATIC criteria). Furthermore, and despite the title, the 2-pager is short of guidance on implementing and maintaining the metrics, and says virtually nothing about designing and using them (e.g. “Key metrics measure progress, which means there needs to be room for improvement.”)  All in all, the paper is disappointing and of little practical value except possibly as a prompt for thought and debate. It appears to have been hurriedly drafted to accompany the metrics catalog ...

The Continuous Audit Metrics Catalog (version 1.0) has been compiled by the CSA’s Continuous Audit Metrics Working Group. Its 60 pages include a better, more practical introduction to cloud security metrics than the troublesome ‘code of practice’ paper, albeit with several references to ‘continuous auditing’ (meaning real-time metrics reporting, I think) and to the frequency of measurement (although relevant, I’m not sure why that particular aspect is so prominent). In the CSA/cloud context, ‘continuous auditing’ alludes to increasing pressure on Cloud Service Providers from their customers to monitor and disclose security metrics on an ongoing basis, as opposed to (or in addition to) periodic supplier security audits. [Presumably such information demands would be mandated through clauses in the supply contracts with formally-defined security-related aspects of the cloud services, or would be covered by CSA’s Security Trust Assurance and Risk certification scheme - implying strong compliance and transparency drivers. Using the metrics to support an ongoing CSP security certification scheme is mentioned.]

Section 2.2 describes three specific benefits of metrics:

  1. Measuring the Effectiveness of an Information System: effectiveness is an important aspect of processes, activities, relationships, contracts, technologies, decisions etc., not just IT systems, but maybe the term ‘Information System’ is intended to encompass all that! If not, this item betrays the myopic IT-focused perspective typical of the cyber security realm.
  2. Increasing the Maturity of an Organization’s Governance and Risk Management Approach: the text states that “metrics are a key tool for fostering the maturity of the organization’s risk management program” and mentions resource allocation and benchmarking. It also acknowledges that implementing a sound approach to security metrics is itself an indication of the organization’s maturity in information security management, driving a stronger security posture.  Good point!  Unfortunately, it then asserts that metrics should be specific, measurable (!), achievable, relevant (to what?) and time-bound, i.e. the classical SMART approach, without explaining or expanding on those requirements.
  3. Increasing Transparency, Fostering Accountability, and Enabling Continuous Auditing and Compliance: this benefit clearly concerns the commercial relationships between CSPs and their customers, and is phrased primarily from the perspective of the suppliers (who are, after all, the main driving force within CSA) - for instance, it mentions the need to maintain confidentiality on the fine details of a CSP’s internal security policies, while at the same time being sufficiently transparent on compliance with those policies. That’s a tricky balancing act to pull off. Having said that, the text makes it clear that there are significant drawbacks to manual supplier audits conducted every 6 to 12 months, so there is a legitimate need to adopt a more efficient and effective approach. Cue: suitable continuous security assurance metrics!

Each of the 34 proposed metrics in the catalog is described using a standard template that:

  • References the primary and other controls from the Cloud Control Matrix, described as “CSA’s flagship cybersecurity framework for cloud computing”. Consequently, the metrics are intended to quantify certain aspects of the CSP’s technical security control performance - as opposed to security metrics supporting achievement of the CSP and its customers’ business objectives.
  • Briefly describes the metric and its ‘expression’ (a mathematical formula - mostly simple proportions expressed as percentages - or a description of the measurement conditions and ‘rules’, which in practice is further explanation about the source data that qualifies to be reported)
  • Indicates “Industry best-practice recommended objectives ...” (e.g. a “minimum expected level” or targets for the metric).
  • Provides implementation guidelines - notes about how to interpret the rules and so generate the measurements.
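
To illustrate how the template’s ‘expression’ and ‘rules’ elements fit together, here is a hedged Python sketch of one typical percentage-style metric. The metric name, data fields and qualification rule are invented for illustration and are not drawn from the catalog:

```python
from datetime import date, timedelta

# Hypothetical source data: vulnerability findings with detection and
# remediation dates (None = still open).
findings = [
    {"id": "V-1", "detected": date(2021, 11, 1), "remediated": date(2021, 11, 10)},
    {"id": "V-2", "detected": date(2021, 11, 5), "remediated": None},
    {"id": "V-3", "detected": date(2021, 11, 8), "remediated": date(2021, 12, 20)},
]

SLA = timedelta(days=30)  # illustrative 'rule': remediation due within 30 days

def percent_remediated_within_sla(findings, sla=SLA):
    """Expression: 100 * (findings remediated within the SLA) / (all findings).

    The 'rules' determine which source records qualify for the numerator:
    only findings that have been remediated, within the SLA window.
    """
    qualifying = [f for f in findings
                  if f["remediated"] is not None
                  and f["remediated"] - f["detected"] <= sla]
    return 100.0 * len(qualifying) / len(findings)

print(round(percent_remediated_within_sla(findings), 1))  # 33.3
```

Note how much of the metric’s meaning lives in the qualification rule rather than the formula itself - which is precisely why the catalog devotes template space to rules and implementation guidelines, not just expressions.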

You are invited to PRAGMATICally-evaluate the 34 CSA metrics, or find some other way to determine and assess their value in your organization, whether CSP or client, or for that matter using private in-house cloud systems. Hint: consider your organization’s strategic and business objectives relating to cloud computing services, particularly the information risk and security aspects, to determine your measurement requirements independently of the metrics suggested by the CSA for its members. Possibly the CSA metrics partly address your needs, but what’s missing, and are any of those critical to your organization? For example, a commercial CSP faces information risks like any other similar business: do the CSA metrics concern those risks and the expected controls?

Copyright © 2021 Gary Hinson & Krag Brotby