This FAQ (Frequently Asked Questions) concerns the selection and use of information security metrics. The FAQ draws on our own inspiration and experience, plus questions and issues raised and addressed by colleagues on the Forum. The FAQ is minimalist right now but watch out for changes as it grows plus our fast-start metrics tips in due course. By all means raise further questions and join the discussion on the Forum, or contact us directly for specific advice (within reason! We are consultants, after all!).
Security Metrics FAQ quick links
1. Selecting security metrics
2. Using security metrics
1. Selecting security metrics
1.1 How do we identify potential metrics?
There are loads of sources of inspiration if you are hunting for security metrics - this is covered at length in the book. For now, here is a shortlist:
Consider existing security metrics, or indeed other kinds of metrics already used by your organization
Use standards such as ISO/IEC 27004 (not the best example, admittedly!)
Take advice from professional bodies such as ISACA and the Information Security Forum
Use your social networks: ask professional colleagues and peers, raise metrics at the next ISSA meeting or security conference, or discuss it online (e.g. on the Forum)
Finding potential metrics - information-security-related things that could be measured - is very much the easy part. Deciding which of the thousands of candidate security metrics are actually worth measuring, reporting and using is a different matter entirely ...
1.2 How do we choose between possible metrics?
Easy: pick the best ones and toss the rest aside!
For a less facetious and far more explicit solution, I’m afraid you’ll have to read the book. Sifting through a bunch of metrics to determine which, if any, are going to be worthwhile is the core challenge we address. In reality there is no simple answer. This is a tough problem, complicated and difficult even to frame, but extremely important nevertheless.
By the way, we recommend a more environmentally-sound approach: rather than simply disposing of unwanted metrics, recycle them. Keep notes on metrics that don’t make the grade because one day your needs may change. With additional experience of the PRAGMATIC method, you will soon find yourself seizing opportunities to adapt or rework existing, proposed and previously-discarded metrics, using the PRAGMATIC ratings to identify and drive improvements.
Imagine, for instance, that your security awareness metrics are proving somewhat unsatisfactory and unpopular with management, leading to a pent-up demand for better awareness metrics. Check your metrics notebook to be reminded about awareness metrics you have discounted, plus others that you have heard about in the meantime (such as this one for example). Apply the PRAGMATIC method, thinking hard about their individual PRAGMATIC ratings and value relative to the metrics you are currently using. Then talk through the options with management, drawing on the PRAGMATIC analysis to help them reach an objective, sensible decision - since, at the end of the day, they are the ones in most need of the information.
1.3 Which metrics are the best?
We wish we could give you a straightforward, easy answer to that but alas you are going to have to work it out for yourself. The thing is, your situation is unique. Your security risks are unique. The maturity of your approach to information security management is unique. Your organization and its management are unique. Your goals and objectives for information security are unique ... Consequently, without having first researched your information needs, we could at best offer generic guidance on which security metrics seem ‘quite good’ to us, but we’d be guessing based on our experience and situation, not yours. What’s best for us is probably not best for you.
On the other hand, the book provides a more radical and valuable solution: the PRAGMATIC method is a tool you can use to determine your requirements, assess candidate metrics, score the metrics, and so end up with a shortlist of metrics that are worth further consideration. From there, we offer more advice on selecting metrics that complement and support each other, taking a systems approach and in time developing an information security measurement system that suits your unique circumstances.
If you need more help from an independent source, ask about our consultancy and training services. We are the definitive reference on PRAGMATIC security metrics: we wrote the book on it. Literally.
1.4 Which security metrics are the most Predictive?
If you have or can create a list of all your potential and existing security metrics, we could sit down together to consider first of all your context and need for metrics, then to PRAGMATIC-score them. We would assign initial Predictiveness scores and then, by contemplating their merits and ranking them on that criterion relative to each other, spread out the scores across the range. Then, simply by sorting the scoring spreadsheet on the Predictiveness score, we could easily answer your question ... but you’ll notice that there’s quite a lot of work involved to get to that point. If you were simply hoping that we’d tell you our top-scoring most Predictive security metric, you’re out of luck.
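The sorting step described above is trivial once the scores exist. As a minimal sketch (the metric names and Predictiveness scores here are invented for illustration, not taken from the book's case study), ranking candidates on the P criterion might look like this:

```python
# Hypothetical scoring sketch: each candidate metric has been rated 0-100
# on Predictiveness (P). Sorting on that one criterion answers "which of
# OUR metrics are the most Predictive?" - the scores themselves are the
# hard part, and are invented here purely for illustration.

candidates = {
    "Patch half-life": {"P": 85},
    "Raw incident count": {"P": 40},
    "Awareness survey score": {"P": 60},
}

# Rank descending on the Predictiveness score alone.
by_predictiveness = sorted(candidates.items(),
                           key=lambda item: item[1]["P"], reverse=True)

for name, scores in by_predictiveness:
    print(f"{name}: P={scores['P']}")
```

The same one-line sort, re-keyed on the overall score or any other criterion, answers the equivalent question for the rest of the PRAGMATIC acronym.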
The security metrics that scored highly in the context of Acme Enterprises Inc, our fictional case-study organization, would not score exactly the same in any real-world organization, facing actual problems, practical constraints and with genuine needs for management information. In short, our most Predictive security metric may not be yours.
That said, the individual PRAGMATIC criteria are reasonably consistent in a given context thanks to the defined scoring norms, and more importantly the overall PRAGMATIC scores turn out to be a reliable guide to the relative merits of the candidate metrics. The scoring process may not be entirely mechanistic and objective, but it’s definitely more rational than the ‘finger in the air’ methods that we used to use, and a tremendous advance over the purely subjective approach to choosing security metrics according to gut feel and someone else’s (usually unstated and seldom rationalized and defined) criteria.
1.5 Shouldn’t metrics be SMART?
You may have come across the idea that metrics should be SMART, usually meaning something like Specific, Meaningful, Attainable, Relevant and Timely ... although in fact different people often use the mnemonic in markedly different ways. Take a look at the Wikipedia page for some of the variants.
At a superficial level, you may think PRAGMATIC is just a fancy shmancy version of SMART. Three of the letters are the same, for starters. However, there are subtle but important differences in their interpretations. Take M for instance: in SMART, the M can mean Meaningful, Motivational, Manageable, Measurable ... or something else entirely, depending on who is using the term. The M in PRAGMATIC is a measure of how Meaningful the metric is to its intended audience. The qualification makes the point that certain metrics that appear complex or confusing to the layman are entirely appropriate and suitable for particular people. Some financial and technical/IT metrics fall into this camp: whether they make much sense to everyone is irrelevant, so long as the professionals who are going to be using them understand them.
As well as explicitly defining and describing the PRAGMATIC criteria, we have developed a structured way to assess and measure or rate metrics against the criteria, generating their PRAGMATIC scores. SMART, in contrast, is generally used as a rough guide or objective in a non-specific and unmeasured way. Someone might argue that “metric X is SMARTer than metric Y,” but that is usually a highly subjective opinion based on a load of unstated assumptions (not least, what SMART means!). It would be pointless, perhaps literally impossible, to compare multiple metrics using SMART without first being explicit about the criteria, and second providing a sensible and repeatable way to measure the metrics against each criterion in turn.
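To make the contrast with SMART concrete, here is a minimal sketch of generating an overall PRAGMATIC score. It assumes each of the nine criteria is rated 0-100 and that the overall score is a simple unweighted mean of the nine ratings; the example metric and its ratings are invented:

```python
from statistics import mean

# The nine PRAGMATIC criteria. The two A's are distinguished here as
# "A1" (Actionable) and "A2" (Accurate) purely as dictionary keys.
CRITERIA = ["P", "R", "A1", "G", "M", "A2", "T", "I", "C"]

def pragmatic_score(ratings):
    """Overall score as the unweighted mean of the nine criterion
    ratings (each 0-100). A weighted mean is an easy variation if
    some criteria matter more in your particular context."""
    missing = [c for c in CRITERIA if c not in ratings]
    if missing:
        raise ValueError(f"missing criterion ratings: {missing}")
    return mean(ratings[c] for c in CRITERIA)

# Invented ratings for a hypothetical "metric X".
metric_x = {"P": 70, "R": 80, "A1": 60, "G": 90, "M": 75,
            "A2": 85, "T": 65, "I": 55, "C": 80}
print(round(pragmatic_score(metric_x), 1))  # 73.3
```

The point is not the arithmetic, which is trivial, but that each of the nine inputs is explicitly defined and separately rated, which is exactly what an off-the-cuff "metric X is SMARTer than metric Y" claim lacks.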
Highly PRAGMATIC metrics are also likely to be SMART metrics, but the converse does not necessarily hold true: SMART metrics may not be highly PRAGMATIC.
1.6 What are rich and complementary metrics?
Many aspects of information security that would be good to measure are quite complex. There are often numerous factors involved, and various facets of concern. Take ‘security culture’ for example: it is fairly straightforward to measure employees’ knowledge of and attitudes towards information security using a survey approach, and that is a useful metric in its own right. It becomes more valuable if we broaden the scope to compare and contrast different parts of the organization, using the same survey approach and the same survey data but analyzing the numbers in more depth. We might discover, for instance, that one business unit or department has a very strong security culture, whereas another is relatively weak. Perhaps we can learn something useful from the former and apply it to the latter. This is what we mean by ‘rich’ metrics. Basically, it involves teasing out the relevant factors and getting as much useful information as we can from individual metrics, analyzing and presenting the data in ways that facilitate and suggest security improvements.
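Enriching a single survey metric by slicing the same data per business unit can be sketched in a few lines. The departments and scores below are invented; the point is that one dataset yields both the headline number and the unit-by-unit comparison:

```python
# 'Rich' metric sketch: one security-culture survey, analyzed twice -
# once overall, once per department. All names and scores are invented.
from collections import defaultdict
from statistics import mean

responses = [  # (department, awareness score on a 1-5 scale)
    ("Finance", 4.5), ("Finance", 4.0),
    ("Operations", 2.5), ("Operations", 3.0),
]

# Headline figure: organization-wide mean.
overall = mean(score for _, score in responses)

# Richer view: the same data, grouped by department.
by_dept = defaultdict(list)
for dept, score in responses:
    by_dept[dept].append(score)

print(f"Overall: {overall:.2f}")
for dept, scores in sorted(by_dept.items()):
    print(f"{dept}: mean awareness = {mean(scores):.2f}")
```

Here the gap between the strongest and weakest units is the actionable finding that the headline average alone would have hidden.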
‘Complementary’ metrics, on the other hand, are sets of distinct but related metrics that, together, give us greater insight than any individual metric taken in isolation. Returning to the security culture example, we might supplement the employee cultural survey with metrics concerning security awareness and training activities, and compliance metrics that measure actual behaviors in the workplace. These measure the same problem space from different angles, helping us figure out why things are the way they are.
Complementary metrics are also useful in relation to critical controls, where control failure would be disastrous. If we are utterly reliant on a single metric, even a rich metric, to determine the status of the control, we are introducing another single point of failure. And, yes, metrics do sometimes fail. An obvious solution (once you appreciate the issue, that is!) is to make both the controls and the metrics more resilient and trustworthy, for instance through redundancy. Instead of depending on, say, a single technical vulnerability scanner tool to tell us how well we are doing on security patching, we might use scanners from different vendors, comparing the outputs for discrepancies. We could also measure patching status by a totally different approach, such as patch latency or half-life (the time taken from the moment a patch is released to apply it successfully to half of the applicable population of systems), or a maturity metric looking at the overall quality of our patching activities, or metrics derived from penetration testing. Even if the vulnerability scanner metric is nicely in the green zone, an amber or red indication from one of the complementary metrics should raise serious questions, hopefully in good time to avert disaster.
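The patch half-life measure defined above is easy to compute once you have the per-system timestamps. A minimal sketch, with invented timestamps (the function name and data layout are ours, not from any particular tool):

```python
# Patch half-life sketch: the time from a patch's release until at least
# half of the applicable systems have successfully applied it.
# All timestamps below are invented for illustration.
from datetime import datetime, timedelta

release = datetime(2024, 3, 1)

# When each applicable system was successfully patched (None = not yet).
patch_times = [
    release + timedelta(days=1),
    release + timedelta(days=2),
    release + timedelta(days=5),
    None,  # one system still unpatched
]

def half_life(release, patch_times):
    """Return the timedelta until >=50% of systems were patched,
    or None if fewer than half have been patched so far."""
    done = sorted(t for t in patch_times if t is not None)
    needed = (len(patch_times) + 1) // 2  # at least half, rounded up
    if len(done) < needed:
        return None
    return done[needed - 1] - release

print(half_life(release, patch_times))  # 2 days, 0:00:00
```

Tracking this figure per patch cycle, rather than a scanner's point-in-time count of unpatched hosts, gives a genuinely independent second view of the same control.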
A natural extension of this concept would be to design an entire suite of security metrics using a systems engineering approach. We expand on this idea in the book, describing an ‘information security measurement system’ as an essential component of, or complement to, an effective ‘information security management system’.
2. Using security metrics
2.1 How should security metrics be reported?
“Reporting” metrics often implies a rather tedious written management report stuffed with graphs and tables, but there are sound reasons for being far more creative in your approach.
For a start, think about who you are reporting to. What do they want from you? What type or types of communication do they prefer - written reports with all the gory details, short executive summaries, web pages, presentations, rough notes discussed over coffee or something else?
Often it is better to discuss metrics with the recipients rather than simply submitting a report. Discussion gives everyone the chance to explain things, ask questions, provide feedback, and generally mull over the information. Given that the prime purpose of metrics is decision support, discussion and persuasion seem far more likely to facilitate sensible decisions than passively providing written information in a report, although it makes sense to provide the figures, graphs etc. on paper or on screen as well as discussing them - the best of both worlds.
A bonus to presenting and discussing metrics is the opportunity to get instant feedback on the metrics themselves, and to make sure that everyone understands exactly what is being measured, why and how if necessary. Simple metrics are generally self-evident but more complicated or convoluted ones deserve and may in fact require explanation. Since Meaningfulness is one of the PRAGMATIC criteria, you will naturally avoid the most confusing security metrics, but occasionally there is little alternative. Information security is inherently complex.
2.2 What if the metrics are bad?
We’re not entirely sure what you mean here: metrics that are inherently bad (in the sense of having low PRAGMATIC scores, low utility, low value) should not be used. If however the measurements themselves - the numbers - are bad, that is a different matter entirely.
Good metrics sometimes do show bad numbers, for one of two reasons: either the subjects of measurement have turned bad in some measurable way (for example, the actual rate and/or severity of security incidents has markedly increased) or the measurement process has changed (for example, security incidents are occurring at about the same rate as ever but the reporting of incidents has dramatically improved, or new sources of information such as additional classes of incident reports have been incorporated into the metric). Either way, that is potentially useful information provided it can be explained and understood - and to do that will probably require additional analysis and information. This is where the ability to dig deeper, going beneath the headline figures to identify the specific factors involved, pays off. Interpreting security metrics combines science with art!
Reporting really bad numbers may not seem a sensible move - indeed, in extreme cases, it could be career-limiting. On the other hand, not reporting those numbers could have severe consequences if the information in question, or the fact that it was withheld, eventually comes out. On top of that, the recipients of metrics may well smell a rat if a regular report is late or doesn’t show up, or the figures appear suspiciously good, or the written analysis and/or verbal description paint a rosier picture than the numbers (discordant reporting). It takes guts to report really bad numbers.
Just remember that bad numbers focus attention on issues and present ‘improvement opportunities’. Good numbers tend to just wash over us, having little impact and hence limited information value. In fact, the most useful metrics tend to highlight and provide some explanation for changes in values rather than absolute numbers. If the numbers are consistently good, why bother reporting them when there are doubtless other issues that deserve attention? [This is a common complaint about those voluminous Service Level Accounting reports often delivered by service providers to their customers. Is the real reason why so many numbers are presented simply to hide or divert attention from the few bad ones?]
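Reporting changes rather than absolute numbers can be sketched as a simple filter: surface only the metrics that moved materially since the last period, instead of re-presenting a wall of steady-state figures. The metric names, values and 25% threshold below are invented for illustration:

```python
# Change-highlighting sketch: flag only the metrics whose values changed
# materially since last period. All names, values and the threshold are
# invented for illustration.
last = {"open incidents": 12, "patch half-life (days)": 3, "phishing clicks %": 8}
this = {"open incidents": 13, "patch half-life (days)": 9, "phishing clicks %": 8}

THRESHOLD = 0.25  # report relative changes of 25% or more

def material_changes(last, this, threshold=THRESHOLD):
    """Return {metric: (old, new)} for metrics that moved by at least
    the threshold, relative to their previous value."""
    changes = {}
    for name, old in last.items():
        new = this.get(name, old)
        if old and abs(new - old) / old >= threshold:
            changes[name] = (old, new)
    return changes

print(material_changes(last, this))  # {'patch half-life (days)': (3, 9)}
```

A report built this way leads with the tripling of patch half-life and stays silent on the numbers that barely moved - the opposite of the voluminous service-level reports complained about above.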
2.3 How do metrics support decision-making?
While decisions can be made on a whim, many business decisions have serious consequences, hence the risk of making wrong decisions, or indeed not making necessary decisions in time, can be substantial. Gathering and assessing information that is relevant and timely for a decision could therefore be deemed a risk management activity, and naturally metrics are a key source of relevant information.
Consider for example the use of Ishikawa (“fishbone”) diagrams in quality assurance and process engineering to assess the factors that contribute to or cause some effect on a process. While approaches vary, a popular method involves analyzing the possible causes of problems in a process along six lines radiating out from the backbone, each covering one of the Ms:
Manpower: the people performing activities - are they suitably trained and competent? Are they over- or under-worked? Are they well motivated and energetic?
Machines: including machine tools, computer systems etc. - are they working efficiently and effectively? Are they functional and reliable?
Materials: raw materials, supplies and other process inputs - are they of suitable quantity, quality and reliability? Do they always arrive in time or sometimes cause delays? Are they within specifications?
Methods: how people use the machines to perform activities on the materials - are they doing the right things, and doing things right? Are the procedures suitable and efficient, or are there better ways?
Mother Nature (the Mvironment): the surroundings in which activities are performed - are they conducive to good work? Is the workplace comfortable and safe, or is it an impediment?
Metrics: measures relating to the process - do we know what is going on and what might be going wrong? Do we have the information necessary to plan, direct, control and improve the process?
Metrics, then, provide information about the people, the machinery, the inputs, the processes, and the environment, both statically (e.g. in planning/designing the process, or reviewing the start-of-day situation in a morning quality meeting) and dynamically (e.g. monitoring and where necessary adjusting the process during the course of the day according to events and feedback). Metrics don’t replace the other Ms - they complement and support them, enabling management to get more out of the available resources and cope with perturbations. Just as importantly, metrics don’t exist in isolation. They have negligible inherent value (information that is just nice-to-know) but immense value for managing business - or indeed other - activities. They have a purpose in life.
.... To be continued. Help us build, elaborate and improve on this FAQ. If you have questions, answers or additional information on security metrics that you are willing to share with the community, please raise them on the Forum or contact us directly. If you disagree with our suggestions or wish to point out alternative perspectives, do get in touch.