
How to Measure Anything in Cybersecurity Risk

Book written by Douglas W. Hubbard and Richard Seiersen

Book review by Steve Winterfeld

Executive Summary

How to Measure Anything in Cybersecurity Risk is a book that reads like a college statistics textbook (but the good kind you highlight a lot). Anyone who is responsible for measuring risk, developing metrics, or determining return on investment should read it. It is grounded in classic quantitative analysis methodologies and provides a good balance of background and practical examples. This book belongs in the Cybersecurity Canon under Governance, Risk and Compliance (GRC).

Review

As I said, this book reads like an education in quantitative modeling and how to apply that methodology to cybersecurity. It directly challenges the expert-opinion-based risk frameworks in common use today. Here is a snippet from the book:

"So let’s be clear about our position on current methods: They are a failure. They do not work. A thorough investigation of the research on these methods and decision-making methods in general indicates the following: There is no evidence that the types of scoring and risk matrix methods widely used in cybersecurity improve judgment. On the contrary, there is evidence these methods add noise and error to the judgment process. Any appearance of “working” is probably a type of “analysis placebo.” That is, a method may make you feel better even though the activity provides no measurable improvement in estimating risks (or even adds error). There is overwhelming evidence in published research that quantitative, probabilistic methods are effective. Fortunately, most cybersecurity experts seem willing and able to adopt better quantitative solutions. But common misconceptions held by some—including misconceptions about basic statistics—create some obstacles for adopting better methods. How cybersecurity assesses risk, and how it determines how much it reduces risk, are the basis for determining where cybersecurity needs to prioritize the use of resources. And if this method is broken—or even just leaves room for significant improvement—then that is the highest-priority problem for cybersecurity to tackle!”

The authors lay out the book in three sections:

  • Part I sets the stage for reasoning about uncertainty in security. It defines terms such as security, uncertainty, measurement and risk management, argues against toxic misunderstandings of those terms, and explains why we need a better approach to measuring cybersecurity risk and, for that matter, measuring the performance of cybersecurity risk analysis itself. Finally, it introduces a simple quantitative method that could serve as a starting point for anyone, no matter how averse the person may be to complexity.
  • Part II delves further into evolutionary steps we can take with a simple quantitative model. It explains how to add further complexity to a model and how to use even minimal amounts of data to improve those models.
  • Part III describes what is needed to implement these methods in the organization. It addresses the implications of this book for the entire cybersecurity “ecosystem,” including standards organizations and vendors.

The cybersecurity community suffers from not having standard evaluation metrics, like earnings before interest, taxes, depreciation and amortization (EBITDA). The authors try to bring some discipline to the terminology by offering standard definitions drawn from the quantitative analytics field. From the book:

Definitions for Uncertainty and Risk, and Their Measurements:

  • Uncertainty: The lack of complete certainty, that is, the existence of more than one possibility. The “true” outcome/state/result/value is not known.
  • Measurement of Uncertainty: A set of probabilities assigned to a set of possibilities. For example: “There is a 20% chance we will have a data breach sometime in the next five years.”
  • Risk: A state of uncertainty where some of the possibilities involve a loss, catastrophe, or other undesirable outcome.
  • Measurement of Risk: A set of possibilities, each with quantified probabilities and quantified losses. For example: “We believe there is a 10% chance that a data breach will result in a legal liability exceeding $10 million.”
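
To make the “Measurement of Risk” definition concrete, here is a minimal sketch (not from the book, with invented risk names and figures) that records possibilities with quantified probabilities and quantified losses and rolls them up into an expected annual loss:

```python
# Minimal sketch (hypothetical figures): a risk register expressed as
# possibilities with quantified probabilities and quantified losses.
risks = [
    # (possibility, annual probability, loss in USD if it occurs)
    ("Data breach with legal liability", 0.10, 10_000_000),
    ("Ransomware outage",                0.05,  2_500_000),
]

# A simple roll-up: expected annual loss = sum of probability * loss.
expected_annual_loss = sum(p * loss for _, p, loss in risks)
print(f"Expected annual loss: ${expected_annual_loss:,.0f}")
```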

They also walk the reader through established methodologies such as Monte Carlo simulations, Bayesian interpretation, the risk matrix, the loss exceedance curve, heat maps, the chain rule tree, beta distributions, regression model predictions, the analytics maturity model, the power law distribution, subjective probability, calibration, dimensional modeling, expected opportunity loss, the “bunch of guys sitting around talking” approach, expected value of perfect information, NIST and ISO. They explain how to implement these in Excel, so the guidance is truly practical. They also present survey results on attitudes toward quantitative methods, the Global Information Security Workforce Study, and studies of statistics literacy and acceptance.
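
The loss exceedance curve lends itself to a quick illustration. The book builds its Monte Carlo models in Excel; the sketch below is a rough Python equivalent, assuming each risk has an annual probability of occurring and a 90% confidence interval on its loss modeled as a lognormal distribution. The risk names and figures here are invented for illustration.

```python
import math
import random

# Hypothetical risk register: (name, annual probability, 90% CI on loss in USD).
risks = [
    ("Data breach", 0.10, (500_000, 10_000_000)),
    ("Ransomware",  0.05, (250_000,  5_000_000)),
]

def sample_loss(low, high):
    """Sample a loss from a lognormal distribution whose 90% CI is (low, high)."""
    mu = (math.log(low) + math.log(high)) / 2
    sigma = (math.log(high) - math.log(low)) / (2 * 1.645)  # 90% CI spans +/- 1.645 sd
    return random.lognormvariate(mu, sigma)

def simulate_year():
    """One Monte Carlo trial: total loss across the risks that occur this year."""
    return sum(sample_loss(*ci) for _, p, ci in risks if random.random() < p)

trials = [simulate_year() for _ in range(10_000)]

# Loss exceedance: the chance that annual losses exceed each threshold.
for threshold in (1_000_000, 5_000_000, 10_000_000):
    exceed = sum(loss > threshold for loss in trials) / len(trials)
    print(f"P(annual loss > ${threshold:,}): {exceed:.1%}")
```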

This book follows earlier work such as Factor Analysis of Information Risk (FAIR), a well-recognized value-at-risk (VaR) framework; the authors outline another Monte Carlo–based methodology alongside tools like those developed by Jack Jones and Jack Freund. Another related work is The Wisdom of Crowds by James Surowiecki.

Finally, the book points to some great online resources: you can find eight sample downloads of the methods explained, as well as webinar and blog information.

Conclusion

How to Measure Anything in Cybersecurity Risk is an extension of Hubbard’s successful first book, How to Measure Anything: Finding the Value of “Intangibles” in Business. It lays out why statistical models consistently outperform expert judgment alone. It is a book anyone who is responsible for measuring risk, developing metrics, or determining return on investment should read. It provides a strong foundation in quantitative analytics with practical application guidance.

Bottom line: The authors lay out a solid case for why other industries facing similar challenges, such as a lack of quantifiable, standardized or actuarial-table-like historical data, are able to use classic statistical modeling and methodologies to measure risk in a quantified, repeatable way. Definitely worth considering.
