This Is How They Tell Me the World Ends

Book written by Nicole Perlroth
Book review by Rick Howard
Executive Summary
I don't recommend this nonfiction book for the Cybersecurity Canon Hall of Fame, but if you're interested in the evolution of the zero day exploit market, it is a good one to read. For exploit developers, though, it is a must-read. The book also contains an account of the Chinese government's infiltration of Google's networks back in 2010, to my knowledge the only detailed public account and the catalyst for Google redesigning its security architecture to fully incorporate the Zero Trust philosophy. There is also a detailed account of the NSA's Project Gunman, the classified six-month operation in 1984 to remove every piece of electronic equipment from the U.S. embassy in Moscow, bring it back to Fort Meade for examination, and replace it with equipment the agency could guarantee was not bugged. If you are interested in those two historic moments in our collective cybersecurity history, this book may be valuable to you for those stories alone.
Review
The exploitation market isn't something that many security professionals focus on in their day-to-day work. They mostly react to the downstream effects after bad guys use those exploits in offensive cyber campaigns somewhere in the world. That's why I believe this is not a Hall of Fame type of book. Most security practitioners don't need to know the details of exploitation markets. But the market's history and evolution since the early 2000s are fascinating.
One question at the center of Perlroth's thesis is whether the exploitation market is evil and should be treated as such by "good" actors on the world stage (governments, independent contractors, etc.), or whether it is benign in that the market simply sells tools and its operators bear no responsibility for how those tools are used.
It's not illegal to be in the exploitation market. But that doesn't mean the resulting activity isn't bad. One take-away is that these groups are in the same category as gun makers (like Smith & Wesson, Colt, and Remington). Most Americans don't say those companies are bad just because, as of 2018, the US had averaged, over the previous five years, almost four school shootings a month, with just over seven people shot and just over two killed each month. (Source: The New York Times)
In the security space, one parallel example is the NSO Group, the maker of the zero-click mobile device exploitation tool called Pegasus. NSO Group leaders say that they run a legitimate business that sells tools and that they are not responsible for how those tools are used. To argue against that logic, you would have to say that Facebook supports Russian influence operations or that Google facilitates child pornographers. That said, there's no arguing that tools like Pegasus, EternalBlue (the NSA-developed exploit later used by other nation-states like Russia and North Korea), and others have caused serious damage to the world (NotPetya, WannaCry, etc.). Those are all bad things. If that wasn't the toolmaker's intent, does that relieve them of responsibility? If I'm being honest with myself, I don't think it does, even if you can outline legitimate uses for the tools (Smith & Wesson - hunting and self-protection; the NSA - tracking terrorists). I'm assuming that's one of the reasons AWS pulled the plug on NSO.
Another take-away is that these groups, these purveyors of zero day exploits, are in the same category as independent contractors who sell exploits to governments in an effort to help those governments fight bad guys. That seems like a noble cause. But who are the bad guys in this world and who gets to decide?
One crystallizing moment in Perlroth's book is a discussion she had with a greybeard exploit developer named Ivan Arce. She interviewed him at an exploitation conference (Ekoparty) in Argentina. Apparently the Argentines are the most advanced exploit developers in the world right now. Who knew? Anyway, in a weak moment, she asked, “So will they [Argentine exploit developers] only sell their exploits to good Western governments?” Arce replied, “Good Western governments? You need to dispose of your view, Nicole. In Argentina, who is good? Who is bad? The last time I checked, the country that bombed another country into oblivion wasn’t China or Iran.”
Ouch! That hurts a bit. And it's absolutely true. Most of the world's 195 sovereign countries don't view the US as the good guy anymore, if they ever did. The only people who think the Americans are the good guys are the Americans.
Still another take-away comes from Eric Soulliage, a security practitioner at the National Bank of Canada. He says that "AWS and [other giant tech companies] are industrializing the weaponization of our enemies, by providing service to anybody that pays. [It's no different] with the arms dealer serving both sides of a conflict." I don't disagree with that. It's one thing to provide destructive tools in support of your government. It's quite another to sell to both sides, which perpetuates your market and almost guarantees escalation from all customers.
Full transparency: my last military assignment (2002-2004) was as Commander of the Army Computer Emergency Response Team (ACERT), where I coordinated offensive and defensive operations for the US Army. One of my duties was to keep 30 zero day exploits in my back pocket, ready to go at any given time, which meant I was in the business of buying zero day exploits from government contractors. After I retired from the military, I was the intelligence director, and eventually the General Manager, of iDefense, a Verisign-owned company and the very company that started the exploitation market back in 2002. I arrived just as the original founders were leaving Verisign, having served their mandatory two years after the acquisition. While I was there, we actively built zero day exploits and sold them to the US government. The point is that I have been on both sides of this market, and if it is evil, I had a hand in it, if ever so slightly.
The question Perlroth made me struggle with is whether, 20 years after I actively participated in the exploitation market, I regret my actions. I have to admit, I'm struggling with this one. Now that I'm older and more cynical, and I start to think about how these tools are being used, I am much more torn. I want to help our government, but I know our government has slipped over the line many times when it comes to surveillance of American citizens. And, according to Perlroth, the NSO Group and others sell to governments that don't aspire to the same lofty and self-correcting principles that we Americans, and other democracies, do. Those governments use tools like Pegasus to spy on their own citizens, harass journalists, and conduct other nefarious activities that sometimes lead to death or incarceration.
As Perlroth points out, some exploit researchers have decided not to play in that admittedly lucrative game. They've seen what those kinds of tools can do and don't want to be part of it. They have decided to stay in the white hat lane, selling bugs to companies for a much smaller price. Other researchers have said it's not their fault how the tools are used.
For me, it comes down to choice. If you're an exploit developer, you can make a lot of fast cash selling your work to the highest bidder. According to Perlroth, offers for completely reliable exploits start at $1 million and only go up from there. That's a big temptation. But if you're like me and are conflicted about how your exploit tools might be used in the world, you could go the white hat route and sell them to vendors through their bug bounty programs. You won't make nearly as much money and it's a hassle, but you won't have to sell your soul to do it. And I know that's easy for me to say. I sold these things to the government for six years and never batted an eye. But I think if I had to do it over again, I would pass and find another way to bring in revenue.
One last thing: when Perlroth published this book, many in the tech community, including many women, came out strongly against it, claiming numerous technical errors that she refused to acknowledge. My spidey-sense immediately started to tingle. Writing a book is hard, and regardless of how good the author is, there are always little things that creep in that could be more precise or, even if they are wrong, are so small compared to the overall story that they make no difference. As a reader, you note them and move on. Perlroth is a veteran New York Times reporter; this is not her first rodeo with fact checking. My worry was that you don't typically see this kind of scrutiny when men like David Sanger, Peter Singer, and Andy Greenberg write these kinds of books. I suspected a bit of security tech misogyny, but I wasn't sure. So I collected all of the alleged errors:
1: Mischaracterized Dave Aitel's title when he was at the NSA.
2: Got the number of Stuxnet zero days wrong.
3: Got the date wrong for when tools (like EternalBlue) were exfiltrated from the NSA.
4: Often conflates encryption backdoors, zero-days, vulnerabilities, bugs, and exploits, and she calls any backdoor into a system a “zero-day.”
5: Incorrectly claimed that the NSA had control of the zero day market and then lost it.
6: Describes malware that changes to avoid detection (Sandworm in 2015) as exceptional when it is routine.
7: Charlie Miller says that she got the details wrong about why his family moved. Perlroth says that they moved because his wife got a job teaching Anthropology at Washington University. In reality it was Sociology at Webster University.
8: Claims that Morgan Marquis-Boire, accused of sexual assault by multiple women on two continents over a period of more than a decade, saved the day for Google during the Aurora Campaign in 2010.
9: Claims that Baltimore was attacked with EternalBlue. Perlroth's response was to claim that they were fixating on a “technical detail to avoid responsibility.”
My Opinion
1-7: Except for the Charlie Miller item, these are not factual errors but fuzzy interpretations that depend on how you define things. Could the language be tighter? Maybe. But these are not errors. For #4, I didn't see her claiming anything like that. As for the Charlie Miller point, she botched it, but it wasn't essential to the story she was telling.
8: She addresses this in the book. She used him as a source before he was discovered, before anybody in the public knew he was a low-life. And then he disappeared. And she doesn't claim that he saved the day, just that he was an integral part of the investigation.
9: This seems to be the biggest sticking point. Technically, she doesn't say that the ransomware gang used EternalBlue, just that Microsoft analysts found evidence of the tool present on the Baltimore networks during their investigation. Did she imply that it was used? Maybe. But that is a far cry from what the critics accuse her of.
Here's the thing: I think a lot of people didn't read the book. They heard the criticism and glommed on. And I love the tech community, but sometimes it can be so rigid. If something isn't perfect, it's horrible and shouldn't be touched. There's no middle ground. And that's a shame, because the book has some amazing detail. I didn't agree with everything she said, but the last time I looked, that wasn't a requirement for deciding whether a book is good. In my mind, I'm looking for an exchange of ideas. I got that in spades here.
So with all of that, I recommend this book for anybody interested in the evolution of the exploitation market. It's not a must-read for the general security community by any means, but if you are interested in the topic, I think you will get a lot out of it.