Constructing a 'Theory of Sabotage'
When you think about the word “sabotage,” what comes to mind? Do historical examples occur to you, like the German government's attempt to sabotage American defense equipment production in World War II? Do you think about cyber espionage or hackers tapping into sensitive databases?
In a new article published in the French journal Études françaises de renseignement et de cyber [French Studies in Intelligence and Cyber], SIS professor Joshua Rovner lays out a theory of sabotage, explains why sabotage appeals to militaries and governments, and discusses sabotage in cyberspace. We asked Rovner to answer a few questions about his new article, “Theory of Sabotage.”
- Your article seeks to articulate what a theory of sabotage could entail. You write, “To my knowledge, no one has written a theory of sabotage,” despite recent scholarly interest in covert action and international relations theory. What were some of the key considerations as you articulated a theory of sabotage?
- I started with three questions: Who are the saboteurs? Who are their targets? And what is the point? The last question was most important. We have a lot of theories about the logic of military force, economic sanctions, and so on, but not about this peculiar tool of statecraft. This is surprising, given how much the fear of sabotage has been in the news. Just think of the warnings we hear about cyber threats to public utilities and voting machines. A lot of smart analysts have explored the technical aspects of these operations, real and hypothetical. I thought it was time we looked harder at the underlying logic.
- You’ve written that the “goal of sabotage is to make [friction] intolerable.” You define friction as the “routine hiccups that affect any organization’s performance,” and explain that sabotage has different effects in peacetime and war. Are there specific examples in history that informed your view of sabotage in peacetime and war?
- During World War II the Office of Strategic Services, the forerunner of the CIA, wrote a fascinating little guide for saboteurs operating behind enemy lines. I've assigned it for years in my classes on intelligence, and I returned to it when I started sketching the outlines for a theory. The manual is striking because it talks about the ways that quiet saboteurs can weaponize ordinary bureaucratic friction. Sabotage is less about spectacular violence and more about the cumulative effect of on-the-job frustration. Saboteurs usually succeed not by blowing things up but by eroding organizational efficiency and morale, forcing rival policymakers and bureaucrats to divert their time and resources to unseen threats. And by working carefully and quietly, they can carry out sabotage without being captured.
- But this case also says something about the limits of sabotage. There is a trade-off between secrecy and effects: the larger the effects of any given operation, the more likely it will be discovered. For this reason, sabotage may be most effective as a small-scale enabler for larger policy instruments. Sabotage didn't defeat the Nazis, but it probably diverted their attention in advance of military force. I suspect the same is true today, despite the very different technologies involved.
- You dedicated a section of the article to discussing “Sabotage in Cyberspace.” You wrote, “Although the goals and methods of each case [of offensive cyberspace operations] are unique, the logic is the same: they weaponize friction, reduce efficiency, and cause frustration to accumulate.” You also discuss three examples of cyberspace sabotage: the Stuxnet campaign, Russia’s information campaign in the 2016 election, and the 2021 ransomware attack on the Colonial Pipeline. Briefly, in your view, what do the outcomes of each of these operations tell us about the future of cyberspace sabotage?
- Cyberspace sabotage will continue to appeal to policymakers. The domain itself seems tailor-made for such operations: if you want to increase friction in someone else's bureaucracy, you need to attack the information and communications systems on which it depends. Moreover, the growing connections between the cyber and physical worlds hint at a future in which cyberspace operations can have big physical consequences. And even if they don't, it’s a low-risk proposition, because it's a lot safer to deploy malware than human agents.
- That said, recent experience suggests that even sophisticated operations may not have lasting effects. Iran actually increased the amount of uranium it enriched during the Stuxnet campaign. Russia was not able to affect the actual vote count in 2016. And while the Colonial Pipeline operation caused some panic among gas buyers, it was short-lived and no harm was done to the energy infrastructure. Still, cyberspace sabotage is clearly an unsettling experience for the targets. The interesting question is how the lingering memory affects their organizational and strategic choices in the aftermath.
- The article ends with three propositions concerning the effects of sabotage: “the effects of sabotage depend on the bureaucratic characteristics of the target”; “the practice and psychological effects of sabotage depend on bureaucratic culture”; and “the broader political consequences of sabotage depend on preexisting political circumstances.” In addition to these three propositions, you recognize in the article that “there are almost certainly other ways in which states and non-state actors may seek to undermine their rivals.” What did you consider when formulating these three propositions?
- I want to know what kind of evidence would prove me wrong. That means coming up with theoretical propositions that we can convert into testable claims. I think these fit the bill.
- I also want to encourage a conversation among scholars with different interests (cybersecurity, intelligence, strategy, etc.) and from different disciplinary backgrounds (political science, history, economics, psychology, organization theory, etc.). The article draws on a pretty eclectic mix of scholarship. This was probably inevitable, given that sabotage implicates so many different fields. I've learned a lot from scholars who approach the problem from different directions. Synthesizing their views helped me generate some ideas about how sabotage works. I hope more attention to this question will help put those ideas to the test.
- Finally, the propositions are relevant to ongoing policy interest in "resiliency." The issue is how the United States bounces back from various kinds of attacks, presuming it can't stop all of them. By thinking about sabotage as a bureaucratic issue as much as a technical threat, officials may be able to implement modest but useful bureaucratic processes that limit the damage and help avoid an overreaction.