Liars & Outliers: Enabling the Trust that Society Needs to Survive
I don’t recommend this nonfiction book for the Cybersecurity Canon Hall of Fame, but if you are interested in the topic, this is a good one to read.
Any good book editor makes sure there’s a hook somewhere on the front or back cover, designed to draw the would-be reader in, and for this book it’s prominently placed at the top center of the back cover: “How does society function when you can’t trust anyone?” You might not recognize this as your standard cybersecurity fare, but it absolutely pertains to that (our!) world. And that central question is clearly answered over the course of this excellent book.
Earlier in his career, the author, a public-interest technologist, devoted his book-length work primarily to the field of cryptography. But over the past two decades, he has delivered more than a dozen books for much wider audiences, addressing security-centric and security-adjacent topics such as connecting computer security to risk management for a business audience, pervasive technology-enabled surveillance, and the risks presented by the myriad ways we are all interconnected with one another - each of which is reviewed elsewhere right here in the Cybersecurity Canon.
And here, the topic is trust. It’s hard enough to find people in your life to trust, especially outside your family - how do we know whom to trust? You may have pondered this question yourself, if only indirectly - Dunbar’s number, a concept from the field of primatology, suggests that each of us primates is capable of maintaining relationships with an average of only about 150 “known” people. Beyond that, we’re simply not wired to succeed. And yet the number of individuals who comprise our companies, our towns, and our countries easily dwarfs 150, and these large collections of individuals somehow still manage to exist as cohesive entities.
And so figuring out how to scale trust is essential, especially when we think about the implications for commerce, which cannot succeed without trust.
Every society - or, as more broadly defined here, every ecosystem - contains parasites. Rule-breakers. Individuals who purposefully ignore established norms. These are the actors who pose the greatest threat to trust within an ecosystem, and these “defectors” are the central actors throughout the book.
Defectors are rational beings - they know that there are costs to their actions, and they know there are benefits which (in their mind) outweigh those costs. That’s why they defect in the first place. The book examines the controls in place to combat defection, from social norms designed to drive all of us toward cooperation, to penalties and incentives meant to encourage the “right” behavior.
Designing incentives with few side effects is no easy task in a complex system. Why do some companies view fines simply as a cost of doing business? Along the same lines, why do some parents unexpectedly embrace late pickup fees for day-care when they are first introduced? In some cases, purpose-built incentives and natural pressures can even encourage, not discourage, defection.
Defections are not binary, all-or-nothing events; a defector can defect slowly, starting small, as they work their way up to something more recognizable as an action against an established norm. There’s an important lesson here when thinking about your insider threat program - there are almost always clues ahead of a negative insider event, if only you are looking for them.
This book samples widely from the fields of social science, psychology, decision science, game theory, and others, including (yes!) security. Beyond the already-mentioned Dunbar’s number, we are also walked through the Prisoner’s Dilemma (how two rational people may not cooperate when they should), the Tragedy of the Commons (placing personal gain over the well-being of society as a whole), the Hawk-Dove game (two people should both yield to avoid conflict, but may not), the Red Queen effect (adapting faster and faster just to survive), the Principal-Agent problem (an asset owner and the asset controller may not be in sync), Prospect Theory (individuals value gains and losses differently), Goodhart’s law (when a measure becomes a target, it ceases to be a good measure) and other models, rules or frameworks that bear directly on the question of how to scale trust.
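The Prisoner’s Dilemma mentioned above can be sketched numerically. The payoff values below are a common textbook illustration, not figures from the book - they simply show why two rational actors each defect even though mutual cooperation would leave both better off:

```python
# Illustrative Prisoner's Dilemma payoffs (years in prison; higher, i.e.
# closer to zero, is better). These numbers are a standard textbook
# example, not taken from the book.
# Key: (my_choice, their_choice) -> (my_payoff, their_payoff)
PAYOFFS = {
    ("cooperate", "cooperate"): (-1, -1),   # both stay silent: light sentences
    ("cooperate", "defect"):    (-10, 0),   # I stay silent, they betray me
    ("defect",    "cooperate"): (0, -10),   # I betray them, they stay silent
    ("defect",    "defect"):    (-5, -5),   # mutual betrayal: heavy sentences
}

def best_response(their_choice):
    """Return the choice that maximizes my own payoff, given the other's choice."""
    return max(("cooperate", "defect"),
               key=lambda mine: PAYOFFS[(mine, their_choice)][0])

# Defecting is the dominant strategy - it is the best response no matter
# what the other player does - even though mutual cooperation (-1, -1)
# beats mutual defection (-5, -5) for both players.
print(best_response("cooperate"))  # defect
print(best_response("defect"))     # defect
```

This is the tension the book keeps returning to: individually rational choices can undermine the cooperative outcome that everyone would prefer.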
Often, but not always, driven by the rapid pace of technology change, society innovates, and defectors innovate in turn. But don’t jump to the conclusion that defectors must be stamped out and eradicated in order for society to function. In most cases, society accepts a certain amount of defection - wiping it out completely would be both cost-prohibitive and damaging to the non-defectors (through widespread infringement of rights or other actions). The author goes even further, pointing out that society needs defectors. Defectors often drive innovation. They definitely drive new business models - models that might not have worked fifty, or twenty, or ten years ago, but are now ripe.
This is a very well-structured but somewhat dense read - don’t be afraid to pick it up and chew on it a chapter or two at a time. If you are a reader who checks to see how long books are before committing to read them, note that the final ~90 pages (about one-quarter of the book’s length) are devoted to endnotes and additional references.
As he closes in on the end of the final chapter of the book, the author quotes the philosopher Sissela Bok: “...trust is a social good to be protected just as much as the air we breathe or the water we drink. When it is damaged, the community as a whole suffers; and when it is destroyed, societies falter and collapse.” A strong closing thought from a strong, relevant and timely book.