FAQ

This FAQ (Frequently Asked Questions) concerns the selection and use of information security metrics. It draws on our own inspiration and experience, plus questions and issues raised and addressed by colleagues on the Forum. The FAQ is minimalist right now, but watch for changes as it grows, plus our fast-start metrics tips in due course. By all means raise further questions and join the discussion on the Forum, or contact us directly for specific advice (within reason - we are consultants, after all!).


1. The purpose of security metrics

1.1 What is the point of security metrics? Why bother?

    The primary purpose of security metrics is to provide pertinent information supporting decisions about information risks and security. Normally they do so by addressing questions such as:

    • What information risks are we facing?
    • How big or how serious are those risks (both in isolation and relative to other kinds of risk such as strategic, financial, market and people risks)?
    • What are the most significant security issues we need to address urgently?
    • Which information security investments will deliver the best value?
    • How strong are our information security arrangements?

    ‘Pertinent information’ is an important point. Given the ready availability of security-related data from many IT systems, there is a tendency for technical people to gather and forward it en masse to management in the vain hope that some of it might be useful. By the same token, managers often don’t know what metrics to ask for, and aren’t sure what information might be available, so their requirements are unclear.

    Both ‘sides’ really need to get together and thrash it out!

    From another perspective, good security metrics tell us how far off-track we are, and which way we need to steer to get back on-track. The track in this analogy could be a path towards a business or technical goal, a project plan, a budget, or simply a desire to head in the general direction of a broad objective such as “being compliant” or “being secure”. This kind of analogy can be a powerful tool to set people thinking about (a) the target and (b) the route, which in turn helps define (c) the direction and (d) the speed leading to an understanding of (e) the metrics and hence (f) the measuring instruments needed.

    For example, imagine that the organization is migrating its payroll system into the cloud. In information security terms, what are its objectives? One objective might be to ensure that the cloud service pays the right employees the correct amounts; in other words, data integrity is a key business and information security requirement in this case. How would management tell whether the cloud service was in fact paying the right employees the correct amounts? One way would be to keep an eye on complaints by employees that they have been under-paid; another would be to track and compare the pay totals each period against prior periods to identify unexpected changes that might indicate under- or over-payments. These suggest (at least) two metrics: (1) the employee pay complaints rate, and (2) the variance between current period totals and those projected from previous periods.

    In both cases, it ought to be possible to figure out tolerances within which the figures are acceptable but beyond which they indicate issues that need to be explored further. Generating the numbers, and perhaps analyzing them, calculating the trends and creating exception reports for follow-up checks, may be something that gets specified in the contract with the cloud service provider, while those ‘follow-up checks’ and other matters should be fleshed out into procedures for using and acting on the metrics. Thinking it through, ‘acting on the metrics’ could involve getting additional reports, cross-checking pay amounts, validating unanticipated discrepancies, checking/adjusting tolerance limits and delegated authorities etc. - in other words, a whole bunch of decisions and actions arising from, or triggered by, the metrics.

    In this example, the metrics have operational value for various junior and middle managers throughout the organization, as well as Finance and HR, while senior management may feel the need for higher-level overview/summary metrics to confirm that the payroll system and the associated processes are working well ... and pretty soon we have figured out a suite of information security metrics. This is hardly rocket surgery.
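    To make metric (2) concrete, here is a minimal sketch in Python of the kind of exception check involved. The projection method (a simple mean of recent periods), the 5% tolerance and the figures are purely illustrative assumptions, not recommendations:

        # Flag pay-run totals that deviate from a simple projection based on
        # recent periods. Tolerance and window size are illustrative only.
        def payroll_variance_alert(period_totals, tolerance=0.05, window=6):
            """Return (variance_ratio, alert) for the latest pay-run total."""
            history, current = period_totals[:-1], period_totals[-1]
            recent = history[-window:]              # most recent periods
            projected = sum(recent) / len(recent)   # naive projection: the mean
            variance_ratio = (current - projected) / projected
            return variance_ratio, abs(variance_ratio) > tolerance

        totals = [1_020_400, 1_031_900, 1_018_750, 1_042_300, 1_025_600, 1_188_900]
        ratio, alert = payroll_variance_alert(totals)
        if alert:
            print(f"Pay-run total deviates {ratio:+.1%} from projection - investigate")

    In real life the projection would allow for headcount changes, pay rises and seasonality, of course - the point is simply that the metric, its tolerance and the triggered follow-up are all explicit.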

 

 

2. Selecting security metrics

2.1 How do we identify potential metrics?

    There are loads of sources of inspiration if you are hunting for security metrics - this is covered at length in the book. For now, here is a shortlist:

    • Consider existing security metrics, plus other kinds of metrics already used by your organization (e.g. financial metrics, risk metrics, performance and capacity metrics ...).
    • Use standards such as ISO/IEC 27004 or NIST SP 800-55.
    • Use books such as Debra Herrmann’s tome outlining hundreds of security and privacy metrics.
    • Take advice from professional bodies such as ISACA and the Information Security Forum.
    • Use your social networks: ask professional colleagues and peers, raise metrics at the next ISSA meeting or security conference, or discuss it online through social media.
    • Brainstorm. Think creatively. Use the PRAGMATIC approach in workshops or collaborative projects to understand and evaluate current and potential security metrics, identifying useful refinements and stimulating fresh new ideas.

    Finding potential metrics - information-security-related things that could be measured - is very much the easy part. Deciding which of the thousands of candidate security metrics are actually worth measuring, reporting and using is a different matter entirely ...

 

2.2 How do we choose between possible metrics?

    Easy: pick the best ones and toss the rest aside!

    For a less facetious and far more explicit answer, we’re afraid you’ll have to read the book. Sifting through a bunch of metrics to determine which, if any, are going to be worthwhile is the core challenge we address. In practice there is no simple answer: this is a tough problem, complicated and difficult even to frame, but extremely important nevertheless.

    By the way, we recommend a more environmentally-sound approach: rather than simply disposing of unwanted metrics, recycle them. Keep notes on metrics that don’t make the grade because one day your needs may change. With additional experience of the PRAGMATIC method, you will soon find yourself seizing opportunities to adapt or rework existing, proposed and previously-discarded metrics, using the PRAGMATIC ratings to identify and drive improvements.

    Imagine, for instance, that your security awareness metrics are proving somewhat unsatisfactory and unpopular with management, leading to a pent-up demand for better awareness metrics. Check your metrics notebook to remind yourself about awareness metrics you have discounted, plus others that you have heard about in the meantime. Apply the PRAGMATIC method, thinking hard about their individual PRAGMATIC ratings and value relative to the metrics you are currently using. Then talk through the options with management, drawing on the PRAGMATIC analysis to help them reach an objective, sensible decision - since, at the end of the day, they are the ones in most need of the information. Having analyzed and explained the options, choosing which metrics to adopt becomes a collective decision - in some cases, a no-brainer.

     

2.3 Which metrics are the best?  What do you recommend? What security metrics are most other organizations using?

    We wish we could give you a straightforward, easy answer to questions of that nature but alas you are going to have to work things out for yourself.

    The thing is, your situation is unique. Your information risks are unique. The maturity of your approach to information security management and measurement is unique. Your organization and its management are unique. Your goals and objectives for information security are unique ...

    Consequently, without having first researched your information needs, we could at best offer generic guidance on which security metrics seem ‘quite good’ to us, but we’d be guessing based on our experience and situation, not yours. What’s best for us or for others is probably not best for you.

    Do you always rush out to buy the best-selling pop album topping the charts? What about, say, the top jazz, rap or classical album? Would you not even consider the #2 or #3? What if you only like one track? Do your friends and family share your tastes in music? Metrics are a bit like that.

    The book provides a more radical and valuable solution (to metrics, not music!): the PRAGMATIC method is a tool you can use systematically to determine your measurement requirements, assess candidate metrics, score the metrics, and so end up with a shortlist of metrics that are worth further consideration. From there, we offer more advice on selecting metrics that complement and support each other, taking a systems approach and in time developing an information security measurement system that suits your unique circumstances, and becomes an integral, essential part of your organization’s approach to information risk and security management.

    That said, the individual PRAGMATIC criteria and ratings are reasonably consistent in a given context thanks to the defined scoring norms, and more importantly the overall PRAGMATIC scores turn out to be a reliable guide to the relative merits of different metrics. The scoring process may not be entirely mechanistic and objective, but it’s definitely more rational than the ‘finger in the air’ methods that we used to use, and a tremendous advance over the purely subjective approach to choosing security metrics according to gut feel and someone else’s (usually unstated and seldom rationalized and defined) criteria.

     

2.4 Which security metrics are the most Predictive?

    If you have or can create a list of all your potential and existing security metrics, we could sit down together to consider first of all your context and need for metrics, then to PRAGMATIC-score them. We would assign initial Predictiveness scores and then, by contemplating their merits and ranking them on that criterion relative to each other, spread out the scores across the range. Then, simply by sorting the scoring spreadsheet on the Predictiveness score, we could easily answer your question ... but you’ll notice that there’s quite a lot of work involved to get to that point.
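    To illustrate the mechanics (this is a toy sketch, not the book’s worked example), suppose each metric has been rated 0-100% against each criterion - we expand the acronym here as Predictive, Relevant, Actionable, Genuine, Meaningful, Accurate, Timely, Independent and Cost-effective, and take the overall score as a simple mean, both assumptions made for illustration:

        # Rank candidate metrics on their Predictiveness rating alone.
        # Criterion expansions, ratings and metrics are invented for illustration.
        CRITERIA = ["Predictive", "Relevant", "Actionable", "Genuine", "Meaningful",
                    "Accurate", "Timely", "Independent", "Cost-effective"]

        ratings = {
            "Patch half-life":         [75, 80, 70, 65, 60, 70, 55, 60, 50],
            "Awareness survey rating": [45, 85, 80, 70, 90, 60, 75, 55, 85],
        }

        scored = {name: dict(zip(CRITERIA, vals)) for name, vals in ratings.items()}
        for name, crits in sorted(scored.items(),
                                  key=lambda kv: kv[1]["Predictive"], reverse=True):
            overall = sum(crits.values()) / len(crits)
            print(f"{name:25} Predictive={crits['Predictive']}%  overall={overall:.0f}%")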

    If you were simply hoping that we’d tell you our top-scoring most Predictive security metric, you’re out of luck. Sorry.

    The security metrics that scored highly in the context of Acme Enterprises Inc, the fictional organization used as a case-study in the book, would not score exactly the same in any real-world organization, facing actual problems, practical constraints and with genuine needs for management information. In short, our most Predictive security metric may not be yours.

     

2.5 Shouldn’t metrics be SMART?

    You may have come across the idea that metrics should be S.M.A.R.T., usually meaning something like Specific, Meaningful, Attainable, Relevant and Timely ... although in fact different people often interpret the mnemonic in markedly different ways. Take a look at the Wikipedia page for some of the variants.

    At a superficial level, you may think PRAGMATIC is just a fancy-schmancy version of SMART. Three of the letters are the same, for starters. However, there are subtle but important differences in their interpretations. Take M, for instance: in SMART, the M can mean Meaningful, Motivational, Manageable, Measurable ... or something else entirely, depending on who is using the term. The M in PRAGMATIC is specifically a measure of how Meaningful the metric is to its intended audience, expressed as a percentage on a pre-defined scale.

    As well as explicitly defining and describing the PRAGMATIC criteria, we have developed a structured way to assess and measure or rate metrics against the criteria, generating their PRAGMATIC scores. SMART, in contrast, is generally used as a rough guide or objective in a non-specific and unmeasured way. Someone might argue that “metric X is SMARTer than metric Y,” but that is usually a highly subjective opinion based on a load of unstated assumptions (not least, what SMART means!). It would be pointless, perhaps literally impossible, to compare multiple metrics using SMART without first being explicit about the criteria, and second providing a sensible and repeatable way to measure the metrics against each criterion in turn.

    Highly PRAGMATIC metrics are also likely to be SMART metrics, but the converse does not necessarily hold true: SMART metrics may not be highly PRAGMATIC.

     

2.6 What makes a metric ‘actionable’ - or not?

    Good question! Highly Actionable metrics:

    • Clearly differentiate between acceptable and unacceptable levels, falling either side of an explicit threshold or clearly in ‘the green or red zones’ (see the sketch after this list).
    • Trigger, prompt or stimulate responses to patently unacceptable values, by the appropriate people. They are hard to ignore or overlook. They have impact, resonating specifically with their intended audience.
    • Indicate or at least suggest what should be done to bring the metric back ‘into the green’.
    • Give some basis for determining how significant the matter is, hence a degree of urgency for the response.
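    As a toy illustration of the threshold point in the first bullet, here is a sketch in Python; the example metric, the thresholds and the response are invented:

        # Classify a measured value into green/amber/red and route unacceptable
        # values to a response. Thresholds and the metric are illustrative only.
        def rag_status(value, green_max, red_min):
            """Green below green_max, red at or above red_min, amber between."""
            if value < green_max:
                return "GREEN"
            return "RED" if value >= red_min else "AMBER"

        incidents_per_month = 42
        status = rag_status(incidents_per_month, green_max=20, red_min=40)
        if status != "GREEN":
            print(f"Incident rate {incidents_per_month}/month is {status}: "
                  f"notify the owner and open a follow-up action")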

    In contrast, lame metrics merely sit there on the screen or page. Nobody feels personally responsible for doing anything about the levels. Issues are unclear, responses uncertain. Although the metrics themselves may be misunderstood (an aspect covered more directly by the Meaningful criterion), the key point is that they fail to trigger appropriate action - hence the name of this PRAGMATIC criterion.

    Note that the way a given metric is presented, discussed and interpreted has some bearing on this: an inherently Actionable metric can be buried in a report, lost in the noise, languishing unloved and forgotten in some dark corner of the detailed screens beneath the glittery corporate security metrics dashboard. It may even be deliberately hidden by someone hoping to conceal or deflect attention from bad news and/or their part in the situation. This possibility raises governance concerns, emphasizing the design of appropriate measurement processes, the assignment of accountability, compliance activities and so on - particularly for ‘key metrics’ relating to ‘key controls’ and ‘key risks’.

     

2.7 What are rich and complementary metrics?

    Many aspects of information security that would be good to measure are quite complex. There are often numerous factors involved, and various facets of concern. Take ‘security culture’ for example: it is fairly straightforward to measure employees’ knowledge of and attitudes towards information security using a survey approach, and that is a useful metric in its own right. It becomes more valuable if we broaden the scope to compare and contrast different parts of the organization, using the same survey approach and the same survey data but analyzing the numbers in more depth. We might discover, for instance, that one business unit or department has a very strong security culture, whereas another is relatively weak. Perhaps we can learn something useful from the former and apply it to the latter. This is what we mean by ‘rich’ metrics. Basically, it involves teasing out the relevant factors and getting as much useful information as we can from individual metrics, analyzing and presenting the data in ways that facilitate and suggest security improvements.
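    As a sketch of that deeper analysis, the very same survey responses can simply be re-grouped per department (the departments and scores below are invented):

        # Group one set of survey responses per department to expose strong
        # and weak security cultures. Data is invented for illustration.
        from collections import defaultdict
        from statistics import mean

        responses = [("Finance", 82), ("Finance", 76), ("IT", 68),
                     ("IT", 74), ("Sales", 51), ("Sales", 47)]

        by_dept = defaultdict(list)
        for dept, score in responses:
            by_dept[dept].append(score)

        for dept, scores in sorted(by_dept.items(),
                                   key=lambda kv: mean(kv[1]), reverse=True):
            print(f"{dept:8} mean culture score {mean(scores):.0f}%")

    A wide gap between the top and bottom departments points at both where to focus and where to look for practices worth copying.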

    ‘Complementary’ metrics, on the other hand, are sets of distinct but related metrics that, together, give us greater insight than any individual metric taken in isolation. Returning to the security culture example, we might supplement the employee cultural survey with metrics concerning security awareness and training activities, and compliance metrics that measure actual behaviors in the workplace. These measure the same problem space from different angles, helping us figure out why things are the way they are.

    Complementary metrics are also useful in relation to critical controls, where control failure would be disastrous. If we are utterly reliant on a single metric, even a rich metric, to determine the status of the control, we are introducing another single point of failure. And, yes, metrics do sometimes fail. An obvious solution (once you appreciate the issue, that is!) is to make both the controls and the metrics more resilient and trustworthy, for instance through redundancy. Instead of depending on, say, a single technical vulnerability scanner tool to tell us how well we are doing on security patching, we might use scanners from different vendors, comparing the outputs for discrepancies. We could also measure patching status by a totally different approach, such as patch latency or half-life (the time taken from the moment a patch is released to apply it successfully to half of the applicable population of systems), or a maturity metric looking at the overall quality of our patching activities, or metrics derived from penetration testing. Even if the vulnerability scanner metric is nicely in the green zone, an amber or red indication from one of the complementary metrics should raise serious questions, hopefully in good time to avert disaster.
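    Since patch half-life is defined precisely above, it is straightforward to compute; in this sketch the release date, install timestamps and population size are invented:

        # Time from patch release until half the applicable systems are patched.
        from datetime import datetime, timedelta

        def patch_half_life(released, patched_at, population):
            """Return a timedelta, or None if coverage never reached 50%."""
            needed = population / 2
            for count, when in enumerate(sorted(patched_at), start=1):
                if count >= needed:
                    return when - released
            return None

        released = datetime(2021, 6, 1)
        installs = [released + timedelta(days=d) for d in (1, 2, 2, 3, 5, 8, 13)]
        print(patch_half_life(released, installs, population=10))  # 5 days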

    A natural extension of this concept would be to design an entire suite of security metrics using a systems engineering approach. We expand on this idea in the book, describing an ‘information security measurement system’ as an essential component of, or complement to, an effective ‘information security management system’.

     

3. Using security metrics

3.1 How should security metrics be reported?

    “Reporting” metrics often implies a rather tedious written management report stuffed with graphs and tables, but there are sound reasons for being far more creative in your approach.

    For a start, think about who you are reporting to. What do they want from you? What type or types of communication do they prefer - written reports with all the gory details, short executive summaries, web pages, presentations, rough notes discussed over coffee or something else?

    Often it is better to discuss metrics with the recipients rather than simply submitting a report. Discussion gives everyone the chance to explain things, ask questions, provide feedback, and generally mull over the information. Given that the prime purpose of metrics is decision support, discussion and persuasion seem far more likely to facilitate sensible decisions than passively providing written information in a report, although it makes sense to provide the figures, graphs etc. on paper or on screen as well as discussing them - the best of both worlds.

    A bonus to presenting and discussing metrics is the opportunity to get instant feedback on the metrics themselves, and to make sure that everyone understands exactly what is being measured and, if necessary, why and how. Simple metrics are generally self-evident, but more complicated or convoluted ones deserve, and may in fact require, explanation. Since Meaningfulness is one of the PRAGMATIC criteria, you will naturally avoid the most confusing security metrics, but occasionally there is little alternative: information security is inherently complex.

     

3.2  What if the metrics are bad?

    We’re not entirely sure what you mean here: metrics that are inherently bad (in the sense of having low PRAGMATIC scores, low utility, low value) should not be used. If however the measurements themselves - the numbers - are bad, that is a different matter entirely.

    Good metrics sometimes do show bad numbers for two reasons: either the subjects of measurement have turned bad in some measurable way (for example, the actual rate and/or severity of security incidents has markedly increased) or the measurement process has gone wrong (for example, security incidents are occurring at about the same rate as ever but the reporting of incidents has dramatically improved, or new sources of information such as additional classes of incident reports have been incorporated into the metric).  Either way, that is potentially useful information provided it can be explained and understood - and to do that will probably require additional analysis and information. This is where the ability to dig deeper, going beneath the headline figures to identify the specific factors involved, pays off. Interpreting security metrics combines science with art!

    Reporting really bad numbers may not seem a sensible move - indeed, in extreme cases, it could be career-limiting. On the other hand, not reporting those numbers could have severe consequences if the information in question, or the fact that it was withheld, eventually comes out. On top of that, the recipients of metrics may well smell a rat if a regular report is late or doesn’t show up, or the figures appear suspiciously good, or the written analysis and/or verbal description paint a rosier picture than the numbers (discordant reporting). It takes guts to report really bad numbers.

    Just remember that bad numbers focus attention on issues and present ‘improvement opportunities’. Good numbers tend to just wash over us, having little impact and hence limited information value. In fact, the most useful metrics tend to highlight and provide some explanation for changes in values rather than absolute numbers. If the numbers are consistently good, why bother reporting them when there are doubtless other issues that deserve attention? [This is a common complaint about those voluminous Service Level Accounting reports often delivered by service providers to their customers. Is the real reason why so many numbers are presented simply to hide or divert attention from the few bad ones?]

     

3.3  How do metrics support decision-making?

    While decisions can be made on a whim, many business decisions have serious consequences, hence the risk of making wrong decisions, or indeed not making necessary decisions in time, can be substantial. Gathering and assessing information that is relevant and timely for a decision could therefore be deemed a risk management activity, and naturally metrics are a key source of relevant information.

    Consider, for example, the use of Ishikawa (“fishbone”) diagrams in quality assurance and process engineering to assess the factors that contribute to or cause some effect on a process. While approaches vary, a popular method involves analyzing the possible causes of a problem with a process along six lines radiating out from the backbone, each covering one of the six Ms:

  1. Manpower: the people performing activities - are they suitably trained and competent? Are they over- or under-worked?  Are they well motivated and energetic?
  2. Machines: including machine tools, computer systems etc. - are they working efficiently and effectively? Are they functional and reliable?
  3. Materials: raw materials, supplies and other process inputs - are they of suitable quantity, quality and reliability? Do they always arrive in time or sometimes cause delays? Are they within specifications?
  4. Methods: how people use the machines to perform activities on the materials - are they doing the right things, and doing things right?  Are the procedures suitable and efficient, or are there better ways?
  5. Mother Nature (the Mvironment): the surroundings in which activities are performed - are they conducive to good work?  Is the workplace comfortable and safe, or is it an impediment?
  6. Metrics: measures relating to the process - do we know what is going on and what might be going wrong? Do we have the information necessary to plan, direct, control and improve the process?

    Metrics, then, provide information about the people, the machinery, the inputs, the processes, and the environment, both statically (e.g. in planning/designing the process, or reviewing the start-of-day situation in a morning quality meeting) and dynamically (e.g. monitoring and where necessary adjusting the process during the course of the day according to events and feedback).  Metrics don’t replace the other Ms - they complement and support them, enabling management to get more out of the available resources and cope with perturbations. Just as importantly, metrics don’t exist in isolation. They have negligible inherent value (information that is just nice-to-know) but immense value for managing business - or indeed other - activities. They have a purpose in life.

 

 

4. Improving security metrics

4.1 Our security metrics are rotten. What ever shall we do?

  • Believe it or not, you have already made a start: not only do you appreciate that your metrics are rotten, but you have begun looking for A Better Way!
     
  • Back up a step or two. In what way are your existing security metrics rotten? What makes them so bad? It’s no good glibly stating “They stink” or “They don’t work” - dig a bit deeper to understand the issues. An excellent way to do this is to review your existing security metrics systematically using the PRAGMATIC approach. Take any one of your rotten metrics and run it through the mill. How does it rate for Predictiveness, Relevance, Actionability and so on? Assess its PRAGMATIC score against the following scale:
     
    • 80% or more: according to the PRAGMATIC criteria, this is undoubtedly a strong metric. If you are convinced it is rotten, however, is that perhaps more to do with the way it is being presented and used, rather than an inherent problem with the metric itself? If so, look for guidance on statistics and reporting techniques. It may be possible to tweak the metric to achieve an even higher PRAGMATIC score, but you are almost certainly better advised to work on improving any lower-scoring metrics first. Further improving this metric is probably not worth the effort.

    • 65 to 80%: this is a reasonably good score for a security metric, but the PRAGMATIC analysis may have indicated weaknesses in some respects. Focus on the PRAGMATIC criteria with the lowest ratings and consider whether changes to the metric’s definition, source data or analysis might improve those ratings, but be wary of making changes that improve those criteria (e.g. gathering and assessing additional data for each measurement point) while worsening others (e.g. making the metric less cost-effective).

    • 40 to 65%: a mediocre score generally indicates deficiencies in one or a few PRAGMATIC criteria. The PRAGMATIC analysis probably made you aware of specific concerns with the metric, so you have a simple choice: either revise the metric to address the issue/s or retire it. Revising the metric involves some creative thinking and preferably an open discussion with the metric’s primary audience.

    • Less than 40%: you’re right - this is a rotten metric with serious deficiencies in several PRAGMATIC criteria. You might be able to change things in order to make it more PRAGMATIC but, let’s face it, you are probably flogging a dead horse. Put it out of its misery. Retire the metric in favor of one with a better PRAGMATIC score. There are plenty of ideas in the book on where to find new security metrics if you don’t have any in mind already.

    Retiring rotten metrics is easier thanks to PRAGMATIC since not only do you have a rational basis on which to explain and discuss their deficiencies, but you also have the mechanism to identify suitable replacements (if appropriate).
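    For readers who prefer it explicit, the triage bands above restate naturally as a small lookup (the advice strings are paraphrased from the bullets):

        # Map an overall PRAGMATIC score (0-100) to the suggested action.
        def triage(score):
            if score >= 80:
                return "Strong metric: review how it is presented and used"
            if score >= 65:
                return "Good: improve the lowest-rated criteria, mind the trade-offs"
            if score >= 40:
                return "Mediocre: revise to fix specific weaknesses, or retire it"
            return "Rotten: retire it in favor of a higher-scoring metric"

        for s in (85, 72, 50, 25):
            print(s, "->", triage(s))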

     

4.2 Where can we turn for more specific advice about our security metrics?

    Ask about our training and consulting services. We are the definitive reference on PRAGMATIC security metrics: we wrote the book on it. Literally.

    We offer both public and in-house/custom courses, workshops, briefings, presentations, keynotes and so forth.  We can help you develop a ‘measurement strategy’ in support of your information risk and security strategy, broadly or more specifically aligned with your business strategies. Or how about elaborating on a set of policies, procedures and guidelines in this area? A critique of your existing security metrics, maybe, with creative suggestions to plug the gaps?

 

.... To be continued. Help us build, elaborate and improve on this FAQ. If you have questions, answers or additional information on security metrics that you are willing to share with the community, please raise them on the Forum or contact us directly. If you disagree with our suggestions or wish to point out alternative perspectives, do get in touch.

Copyright © 2021 Gary Hinson & Krag Brotby