The moral case for military drones is not like the moral case for modern medicine, where there is a broad consensus that its use results in a clear increase in human welfare. At the risk of understatement, the case for combat-ready unmanned aerial vehicles is more vexed. Similarly, the case against combat drones is not like the argument against nuclear holocaust or firebombing (as in Dresden or Tokyo). When it comes to warfare, the age of seemingly easy moral decision making is over.
Perhaps this comes as a disappointment.
Amid our iPads, microwaves, and industrialized agriculture, it is tempting to think that everything, including the task of acting morally in war, should come easily. This is the temptation of our technological age. In response, we offer an unpopular argument: When it comes to war, if it’s easy, it’s probably not moral.
More carefully put, the ease of a particular action (such as the killing of 15 people in North Waziristan in June by a U.S. drone strike) should give us moral pause for at least three reasons. First, easy actions are often carried out habitually, without the reflection that’s required for moral responsibility. Complacency is not the stuff of moral decisions. Second, easy actions often reflect an imbalance in power (think of how easy it is to exploit oppressed individuals, who are unable to resist their oppression), and these same actions are often unjust (think of the injustice involved in this oppression). The institutions of oppression, those that create problematic power differentials, are self-perpetuating and opaque. Third, as philosophers since Plato have observed, actions undertaken in the name of self-interest are often the easiest to accomplish. Self-interest, however, should not be confused with moral justification—even if it is easy to do so.
“We must catch up morally and internationally with the machine age. We must catch up with it, and we must catch up with it in such a way as to create peace in the world, or it will destroy us and everybody else.” When President Harry S. Truman spoke those words on April 17, 1947, he did so at a unique moment in military history. America was in the early phases of developing thermonuclear weapons, culminating in the testing of the first hydrogen bomb in 1952. The Soviets followed suit, and in 1961 tested what became known as the “Tsar Bomba,” the most powerful nuclear device ever detonated, estimated at 3,800 times the force of the bomb dropped on Hiroshima.
Truman called for a new type of moral decision making that would guard against nuclear holocaust. As it turned out, thermonuclear conflagration served as its own safeguard. Mutual assured destruction, the informal policy accepted to a greater or lesser extent throughout the Cold War, rested not on a moral rationale but on the principle of self-preservation, a principle with questionable moral status. Think of two barroom fighters who, after sizing each other up, decide that it’s best to call it a day. That judgment would be prudent but not moral in any meaningful sense of the word. The logic of MAD was simple: We would not bomb someone who could bomb us back. Cost-benefit analysis stood in rather nicely for moral deliberation.
For better or worse, that time is over. We have entered the drone age.
Never has the array of strategic choices been so expansive for modern militaries; never have modern militaries faced the question of ethics in such a pointed way. In the words of Franklin D. Roosevelt, “great power involves great responsibility.” We can now see that this is not some finger-wagging moralism. Roosevelt is stating an ethical truism: Only those who have power, who have freedom of choice, can be deemed truly good or blameworthy. Drones and precision-guided weapons give the United States unprecedented control over the battlefield. Truman’s plea to “catch up morally” has therefore begun to make sense. The question of waging a truly just war—one that has preoccupied philosophers for nearly a thousand years—can finally be asked in a meaningful way.
Interestingly and disturbingly, the technologies that allow us to uphold just-war principles also obscure the moral and legal complexities of modern warfare. The surveillance capabilities of drones and the accuracy of precision-guided weapons are purported to allow militaries to uphold the just-war tenet of “distinction,” which insists on the separate treatment of combatants and noncombatants. Indeed, the mere use of these technologies is often seen as proof that the distinction has been observed. In truth, however, targeting has become ever trickier. This difficulty is masked by catch-all expressions like “terrorist,” “suspected militant,” “contingency threats,” or “covered person”—all of which have been used to describe legitimate targets in U.S. drone strikes. But none of these vague terms proves the legitimacy of targets. What exactly is a covered person?
The rhetoric and moral thinking about war have become woollier as our weaponry has become more precise. And our reliance on precision weaponry has become a stand-in for making hard moral or legal distinctions. But our trust in technology is dangerously misplaced. Combatant status cannot be determined by an algorithm. Instead, we should recognize the unshakably human character of war, and identify new ethical and legal resources to regulate armed conflict.
The concept of just war has been around for centuries, but until now, warfare resembled nasty brawls or barroom standoffs that rarely afforded aggressors the time to reflect on the concept of justice without endangering their troops or themselves. The moment of calm reflection—the moment that defines the business of ethics—is upon us. Thanks to the technological advantage of precision-guided munitions and drones, we now have that moment, and we are responsible if we squander it.