Archive | April, 2012

Obama Awards Billions in Government Contracts to Labor Law Violators

25 Apr

Josh Eidelson, one of the best up-and-coming labor reporters around, writes at Salon:

A 2010 report from the Government Accountability Office found that the federal government had awarded over $6 billion in contracts in fiscal 2009 to contractors that had been cited for violating federal labor laws, from wage and hour rules to organizing rights. Earlier in 2010, the New York Times reported that the White House was planning to implement a “high road” contracting policy that would direct more government contracts to companies with better labor and environmental records. But by 2011, Obama OMB nominee Heather Higginbottom told senators in a confirmation hearing that there were no such plans afoot.

Imagine the outcry if the government was giving big contracts to companies that violated anti-terrorism laws.

Read more here.

The American Creed: You give us a color, we’ll wipe it out.

25 Apr

George Carlin:

This country was founded by slave owners who wanted to be free. Am I right? A group of slave owners who wanted to be free! So they killed a lot of white English people in order to continue owning their black African people, so they could wipe out the rest of the red Indian people, and move west and steal the rest of the land from the brown Mexican people, giving ‘em a place to take off and drop their nuclear weapons on the yellow Japanese people. You know what the motto for this country ought to be? “You give us a color, we’ll wipe it out.”

 

h/t Greg Grandin

Ex-Cons Make the Best Workers!

24 Apr

A reader of the blog sends me to this article from SmartMoney:

Halloran says the former convicts are among his best employees. “They never miss a day, get drug tested and will work any shift,” he says.

Hiring ex-felons is an experiment that hundreds of business owners have tried — and one that state and federal governments have supported with tax breaks. Uncle Sam offers businesses a credit of up to 40 percent of income taxes on the first $6,000 of wages paid to each former inmate they hire, a deal similar to those offered for hiring from other targeted categories, like welfare recipients and the disabled….

For the most part, the ex-cons are humbled by circumstances and grateful for any job they can get. “‘Oh, thank you for giving me this job!’ isn’t something you hear from the general population,” says Karim Khowaja, who operates 16 Dunkin’ Donuts in downtown Chicago and has hired at least six ex-cons in the past 18 months. “They are very humble.” Apparently, working a coffee counter, sweeping floors or doing anything useful is better than being restricted to a half-way house — a step up from prison, but not a leap. What’s more, keeping a steady job is generally the only way an inmate can leave transitional housing and earn, say, a weekend pass to visit family.

. . .

Bob Strauss has owned Chicago convenience stores since the mid-1970s. Over the years, Strauss has hired as many as 80 employees qualified for the favorable tax treatment, including ex-cons. “It isn’t altruistic,” says Strauss; he’s reaped thousands of dollars in tax credits each year.

. . .

“You would be ridiculous not to [use] the program,” agrees Sherri Modrak. Modrak manages nine Chicago-area McDonald’s franchises; they employ about 15 tax-qualified employees a year in each of their restaurants and save, on average, $70,000 a year in payroll taxes. “It’s free money,” she enthuses.

Boss to Worker: Thanks for Your Kidney. And, Oh, You’re Fired!

23 Apr

From today’s New York Post:

A “kind and generous” Long Island mom donated a kidney to save the life of her boss — who then turned around after she got what she wanted and helped fire the poor woman, according to an explosive new legal complaint.

Then, two months later, in January 2011, Stevens told The Post, Brucia “called me into her office and said, ‘My donor was denied. Were you serious when you said that?’ I said, ‘Sure, yeah.’ She was my boss, I respected her. It’s just who I am. I didn’t want her to die.’’

“I felt I was giving her life back,’’ Stevens told The Post. “My kidney ended up going to St. Louis, Missouri, and hers came from San Francisco.”

Stevens said she did not realize that she was in for serious pain, discomfort in her legs and digestive problems after the surgery on Aug. 10, 2011. She said she felt pressured to return to work Sept. 6, before she was ready — even while her boss was still recovering at home. When Stevens went home sick three days after her return, she said, Brucia actually called her from home to berate her.

“She . . . said, ‘What are you doing? Why aren’t you at work?’ I told her I didn’t feel good,’’ Stevens told The Post. “She said, ‘You can’t come and go as you please. People are going to think you’re getting special treatment.’ ”

After Brucia returned to work, she’d yell at Stevens in front of co-workers over alleged mistakes, Stevens said.

Stevens said that her office and overtime were eventually taken away and that she was demoted to a dealership 50 miles from her home in a high-crime neighborhood that co-workers jokingly called “Siberia.’’ Experiencing mental anguish, she consulted a psychiatrist, and her lawyers wrote a letter to the company — after which Stevens was quickly fired, the papers state.

h/t Bryan Becker

Fighting Them There Rather than Here: From Hitler to Bush

23 Apr

George W. Bush: “It’s better to fight them there than here.” (May 24, 2007; also see September 22, 2003)

Adolf Hitler: “We are fighting on such distant fronts to protect our own homeland, to keep the war as far away as possible, and to forestall what would otherwise be the fate of the nation as a whole.” (November 8, 1942)


Protocols of Machismo, Part 2: On the Hidden Connection Between Henry Kissinger and Liza Minnelli

22 Apr

Yesterday, I posted Part 1 of this excerpt from Chapter 9 of The Reactionary Mind. Today, I post Part 2.

• • • •

Liza Minnelli
What is it about being a great power that renders the imagining of its own demise so potent? Why, despite all the strictures about the prudent and rational use of force, are those powers so quick to resort to it?

Perhaps it is because there is something deeply appealing about the idea of disaster, about manfully confronting and mastering catastrophe. For disaster and catastrophe can summon a nation, at least in theory, to plumb its deepest moral and political reserves, to have its mettle tested, on and off the battlefield. However much leaders and theorists may style themselves the cool adepts of realpolitik, war remains the great romance of the age, the proving ground of self and nation.

Henry Kissinger

Exactly why the strenuous life should be so attractive is anyone’s guess, but one reason may be that it counters what conservatives since the French Revolution have believed to be the corrosions of liberal democratic culture: the softened mores and weakened will, the subordination of passion to rationality, of fervor to rules. As an antidote to the deadening effects of contemporary life—reason, bureaucracy, routine, anomie, ennui—war is modernity’s great answer to itself. “War is inescapable,” Yitzhak Shamir declared, not because it ensures security but “because without this, the life of the individual has no purpose.” Though this sensibility seeps across the political spectrum, it is essentially an ideal of the conservative counter-Enlightenment, which found its greatest fulfillment during the years of Fascist triumph (“war is to men,” Mussolini said, “as maternity is to women”)—and is once again, it seems, prospering in our own time as well.

Nowhere in recent memory has this romanticism been more apparent than in the neoconservative arguments during the Bush years about prewar intelligence, how to prosecute the wars in Afghanistan and Iraq, and whether or not to use torture. Listening to the neocon complaints about U.S. intelligence during the run-up to the war, one could hear distant echoes of Carlyle’s assault on the “Mechanical Age” (“all is by rule and calculated contrivance”) and Chateaubriand’s despair that “certain eminent faculties of genius” will “be lost, and imagination, poetry and the arts perish.” Richard Perle was not alone in his impatience with what Seymour Hersh calls the intelligence community’s “susceptibility to social science notions of proof.” Before he became secretary of defense, Donald Rumsfeld criticized the refusal of intelligence analysts to use their imaginations, “to make estimates that extended beyond the hard evidence they had in hand.” Once in office, he mocked analysts’ desire to have “all the dots connected for us with a ribbon wrapped around it.” His staffers derided the military quest for “actionable intelligence,” for information solid enough to warrant assassinations and other preemptive acts of violence. Outside the government, David Brooks blasted the CIA’s “bloodless compilations of data by anonymous technicians” and praised those analysts who make “novelistic judgments” informed by “history, literature, philosophy and theology.”

Rumsfeld’s war on the rule-bound culture and risk aversion of the military revealed a deep antipathy to law and order—not something stereotypically associated with conservatives but familiar enough to any historian of twentieth-century Europe (and, indeed, any historian of conservative thought more generally). Issuing a secret directive that terrorists should be captured or killed, Rumsfeld went out of his way to remind his generals that the goal was “not simply to arrest them in a law-enforcement exercise.” Aides urged him to support operations by U.S. Special Forces, who could conduct lightning strikes without approval from generals. Otherwise, they warned, “the result will be decision by committee.” One of Rumsfeld’s advisers complained that the military had been “Clintonized,” which could have meant anything from becoming too legalistic to being too effeminate. (Throughout the Bush years, there was an ongoing struggle within the security establishment over the protocols of machismo.) Geoffrey Miller, the man who made “Gitmo-ize” a household word, relieved a general at Guantanamo for being too “soft—too worried about the prisoners’ well-being.”

By now it seems self-evident that the neocons were drawn into Iraq for the sake of a grand idea: not the democratization of the Middle East, though that undoubtedly had some appeal, or even the creation of an American empire, but rather an idea of themselves as a brave and undaunted army of transgression. The gaze of the neocons, like that of America’s perennially autistic ruling classes, does not look outward nearly as much as it looks inward: at their restless need to prove themselves, to demonstrate that neither their imagination nor their actions will be constrained by anyone or anything—not even by the rules and norms they believe are their country’s gift to the world.

If Torture, Sanford Levinson’s edited collection of essays, is any indication of contemporary sensibilities, neocons in the Bush White House are not the only ones in thrall to romantic notions of danger and catastrophe. Academics are too. Every scholarly discussion of torture, and the essays collected in Torture are no exception, begins with the ticking-time-bomb scenario. The story goes something like this: a bomb is set to go off in a densely populated area in the immediate future; the government doesn’t know exactly where or when, but it knows that many people will be killed; it has in captivity the person who planted the bomb, or someone who knows where it is planted; torture will yield the needed information; indeed, it is the only way to get the information in time to avert the catastrophe. What to do?

It’s an interesting question. But given that it is so often posed in the name of realism, we might consider a few facts before we rush to answer it. First, as far as we know, no one at Guantanamo, Abu Ghraib, or any of the other prisons in America’s international archipelago has been tortured in order to defuse a ticking time bomb. Second, at the height of the war in Iraq, anywhere between 60 and 90 percent of American-held prisoners there either were in jail by mistake or posed no threat at all to society. Third, many U.S. intelligence officials opted out of torture sessions precisely because they believed torture did not produce accurate information.

These are the facts, and yet they seldom, if ever, make an appearance in these academic exercises in moral realism. The essays in Torture pose one other difficulty for those interested in reality: none of the writers who endorse the use of torture by the United States ever discusses the specific kinds of torture actually used by the United States. The closest we get is an essay by Jean Bethke Elshtain, in which she writes:

Is a shouted insult a form of torture? A slap in the face? Sleep deprivation? A beating to within an inch of one’s life? Electric prods on the male genitals, inside a woman’s vagina, or in a person’s anus? Pulling out fingernails? Cutting off an ear or a breast? All of us, surely, would place every violation on this list beginning with the beating and ending with severing a body part as forms of torture and thus forbidden. No argument there. But let’s turn to sleep deprivation and a slap in the face. Do these belong in the same torture category as bodily amputations and sexual assaults? There are even those who would add the shouted insult to the category of torture. But, surely, this makes mincemeat of the category.

Distinguishing the awful from the acceptable, Elshtain never mentions the details of Abu Ghraib or the Taguba Report, making her list of do’s and don’ts as unreal as the ticking time bomb itself. Even her list of taboos is stylized, omitting actually committed crimes for the sake of repudiating hypothetical ones. Elshtain rejects stuffing electric cattle prods up someone’s ass. What about a banana [pdf]? She rejects cutting off ears and breasts. What about “breaking chemical lights and pouring the phosphoric liquid on detainees”? She condemns sexual assault. What about forcing men to masturbate or wear women’s underwear on their heads? She endorses “solitary confinement and sensory deprivation.” What about the “bitch in the box,” where prisoners are stuffed in a car trunk and driven around Baghdad in 120° heat? She supports “psychological pressure,” quoting from an article that “the threat of coercion usually weakens or destroys resistance more effectively than coercion itself.” What about threatening prisoners with rape? When it comes to the Islamists, Elshtain cites the beheading of Daniel Pearl. When it comes to the Americans, she muses on Laurence Olivier’s dentistry in Marathon Man. Small wonder there’s “no argument there”: there is no there there.

 

The unreality of Elshtain’s analysis is not incidental or peculiar to her. Even writers who endorse torture but remain squeamish about it can’t escape such abstractions. The more squeamish they are, in fact, the more abstractions they indulge in. Sanford Levinson, for example, tentatively discusses Alan Dershowitz’s proposal that government officials should be forced to seek warrants from judges in order to torture terrorist suspects. Hoping to make the reality of torture, and the pain of its victims, visible and concrete, Levinson insists that “the person the state proposes to torture should be in the courtroom, so that the judge can take no refuge in abstraction.” But then Levinson asks us to consider “the possibility that anyone against whom a torture warrant is issued receives a significant payment as ‘just compensation’ for the denial of his or her right not to be tortured.” Having just counseled against abstraction, Levinson resorts to the greatest abstraction of all—money—as payback for the greatest denial of rights imaginable.

If the unreality of these discussions sounds familiar, it is because they are watered by the same streams of conservative romanticism that coursed in and out of the White House during the Bush years. Notwithstanding Dershowitz’s warrants and Levinson’s addenda, the essays endorsing torture are filled with hostility to what Elshtain variously calls “moralistic code fetishism” and “rule-mania” and what we might simply call “the rule of law.” But where the Bush White House sought to be entirely free of rules and laws—and here the theoreticians depart from the practitioners—the contemplators of torture seek to make the torturers true believers in the rules.

There are two reasons. One reason, which Michael Walzer presents at great length in a famous essay from 1973, reprinted in Torture, is that the absolute ban on torture makes possible—or forces us to acknowledge—the problem of “dirty hands.” Like the supreme emergency, the ticking time bomb forces a leader to choose between two evils, to wrestle with the devil of torture and the devil of innocents dying. Where other moralists would affirm the ban on torture and allow innocents to die, or adopt a utilitarian calculus and order torture to proceed, Walzer believes the absolutist and the utilitarian wash their hands too quickly; their consciences come too clean. He wishes instead “to refuse ‘absolutism’ without denying the reality of the moral dilemma,” to admit the simultaneous necessity for—and evil of—torture.

Why? To make space for a moral leader, as Walzer puts it in Arguing about War, “who knows that he can’t do what he has to do—and finally does” it. It is the familiar tragedy of two evils, or two competing goods, that is at stake here, a reminder that we must “get our hands dirty by doing what we ought to do,” that “the dilemma of dirty hands is a central feature of political life.” The dilemma, rather than the solution, is what Walzer wishes to draw attention to. Should torturers be free of all rules save utility, or constrained by rights-based absolutism, there would be no dilemma, no dirty hands, no moral agon. Torturers must be denied their Kant and Bentham—and leave us to contend with the brooding spirit of the counter-Enlightenment, which insists that there could never be one moral code, one set of “eternal principles,” as Isaiah Berlin put it, “by following which alone men could become wise, happy, virtuous and free.”

But there is another reason some writers insist on a ban on torture they believe must also be violated. How else to maintain the frisson of transgression, the thrill of Promethean criminality? As Elshtain writes in her critique of Dershowitz’s proposal for torture warrants, leaders “should not seek to legalize” torture. “They should not aim to normalize it. And they should not write elaborate justifications of it . . . . The tabooed and forbidden, the extreme nature of this mode of physical coercion must be preserved so that it never becomes routinized as just the way we do things around here.” What Elshtain objects to in Dershowitz’s proposal, then, is not that torture would be routinized; it is routinization itself, the possibility of reverting to the “same moralistic-legalism” she hoped violations of the torture taboo would shatter. This argument too is redolent of the conservative counter-Enlightenment, which always suspected, again quoting Berlin, that “freedom involves breaking rules, perhaps even committing crimes.”

But if the ban on torture must be maintained, what is a nation to do with the torturers who have violated it, who have, after all, broken the law? Naturally the nation must put them on trial; “the interrogator,” in Elshtain’s words, “must, if called on, be prepared to defend what he or she has done and, depending on context, pay the penalty.” In what may be the most fantastic move of an already fantastic discussion, several of the writers on torture—even Henry Shue, an otherwise steadfast voice against the practice—imagine the public trial of the torturer as similar to that of the civil disobedient, who breaks the law in the name of a higher good, and throws himself on the mercy or judgment of the court. For only through a public legal proceeding, Levinson writes, will we “reinforce the paradoxical notion that one must condemn the act even if one comes to the conclusion that it is indeed justified in a particular situation,” a notion, he acknowledges, that is little different from the comment of Admiral Mayorga, one of Argentina’s dirtiest warriors: “The day we stop condemning torture (although we tortured), the day we become insensitive to mothers who lose their guerrilla sons (although they are guerrillas) is the day we stop being human beings.”

By now it should be clear why we use the word “theater” to denote the settings of both stagecraft and statecraft. Like the theater, national security is a house of illusions. Like stage actors, political actors are prone to a diva-like obsession, gazing in the mirror, wondering what the next day’s—or century’s—reviews will bring. It might seem difficult to imagine Liza Minnelli playing Henry Kissinger, but I’m not sure the part would be such a stretch. And what of the intellectuals who advise these leaders or the philosophers who analyze their dilemmas? Are they playwrights or critics, directors or audiences? I’m not entirely sure, but the words of their greatest spiritual predecessor might give us a clue. “I love my native city more than my own soul,” cried Machiavelli, quintessential teacher of the hard ways of state. Change “native city” to “child,” replace “my own soul” with “myself,” and we have the justification of every felonious stage mother throughout history, from the Old Testament’s rule-breaking Rebecca to Gypsy’s ball-busting Rose.

Protocols of Machismo: On the Fetish of National Security, Part I

22 Apr

As part of my ongoing series of short takes from The Reactionary Mind, I excerpt here chapter 9, “Protocols of Machismo.” This chapter originally appeared as a review essay in the London Review of Books in 2005. Because that piece remains behind the firewall, I’ve decided to reproduce the chapter here in its entirety: Part 1 today, Part 2, I hope, tomorrow.

In the last several months, I’ve spent much time defending the state against both libertarians and anarchists. In this chapter, however, I go after the state and one of its most powerful and primary fetishes: the doctrine of national security. I also expand beyond my analysis of conservative intellectuals, taking on prominent liberal theorists like Michael Walzer and, in Part 2 to come, constitutional law scholar Sanford Levinson.

• • • • •

The twentieth century, it’s often said, taught us a simple lesson about politics: of all the motivations for political action, none is as lethal as ideology. The lust for money may be distasteful, the desire for power ignoble, but neither will drive its devotees to the criminal excess of an idea on the march. Whether the cause is the working class or a master race, ideology leads to the graveyard.

Although moderate-minded intellectuals have repeatedly mobilized some version of this argument against the “isms” of right and left, they have seldom mustered a comparable skepticism about that other idée fixe of the twentieth century: national security. Some writers criticize this war, others that one, but has anyone ever penned, in the spirit of Daniel Bell, a book titled “The End of National Security”? Millions have been killed in the name of security; Stalin and Hitler claimed to be protecting their populations from mortal threats. Yet no such book exists.

Consider the less than six degrees of separation between the idea of national security and the lurid crimes of Abu Ghraib. Each of the reasons the Bush administration gave for going to war against Iraq—the threat of weapons of mass destruction (WMD), Saddam’s alleged links to Al Qaeda, even the promotion of democracy in the Middle East—referred in some way to protecting the United States. Getting good intelligence from informers is a critical element in defeating any insurgency. U.S. military intelligence believed (perhaps still does believe) that sexual humiliation is an especially useful instrument for extracting information from recalcitrant Muslim and Arab prisoners.

Many critics have protested Abu Ghraib, but few have traced its outrages back to the idea of national security. Perhaps they believe such an investigation is unnecessary. After all, many of these individuals opposed the war on the grounds that U.S. security was not threatened by Iraq. Some of national security’s most accomplished practitioners, such as Brent Scowcroft and Zbigniew Brzezinski, as well as theoreticians like Stephen Walt and John Mearsheimer, claimed that a genuine consideration of U.S. security interests militated against the war. The mere fact, these critics could argue, that some politicians misused or abused the principle of national security need not call that principle into question. But when an idea routinely accompanies, if not induces, atrocities—Abu Ghraib was certainly not the first instance of a country committing torture in the name of security—second thoughts would seem to be in order. Unless, of course, defenders of the idea wish to join that company of ideologues they so roundly condemn, affirming their commitment to an ideal version of national security while disowning its actually existing variant.


In its ideal version, national security requires a clear-eyed understanding of a nation’s interests and a sober assessment of the threats to them. Force, a counselor might say to his prince, is a tool a leader may use in response to those threats, but he should use it prudently and without emotion. Just as he should not trouble himself with questions of human rights or international law, he should not be excited by his use of violence. Analysts may add international norms to a leader’s toolkit, but they are quick to point out, as Joseph Nye does in The Paradox of American Power, that these rules may have to give way to “vital survival interests,” that “at times we will have to go it alone.” National security demands a monkish self-denial, where officials forego the comforts of conscience and pleasures of impulse in order to inflict when necessary the most brutal force and abstain from or abandon that force whenever it becomes counterproductive. It’s an ethos that bears all the marks of a creed, requiring a mortification of self no less demanding than that expected of the truest Christian.

The first article of this creed, the national interest, gives leaders great wiggle room in identifying threats. What, after all, is the national interest? According to Nye, “the national interest is simply what citizens, after proper deliberation, say it is.” Even if we assume that citizens are routinely given the opportunity to deliberate about the national interest, the fact is that they seldom, if ever, reach a conclusion about it. As Nye points out, Peter Trubowitz’s exhaustive study of the way Americans defined the national interest throughout the twentieth century determined that “there is no single national interest. Analysts who assume that America has a discernible national interest whose defense should determine its relations with other nations are unable to explain the failure to achieve domestic consensus on international objectives.” This makes a good deal of sense: if an individual finds it difficult to determine his or her own interest, why should we expect a mass of individuals to do any better? But if a people cannot decide on its collective interest, how can it know when that interest is threatened?

Faced with such confusion, leaders often fall back on the most obvious definition of a threat: imminent, violent assault from an enemy, promising to end the independent life of the nation. Leaders focus on cataclysmic futures, if for no other reason than that these are a convenient measure of what is or is not a threat, what is or is not security. But that ultimate threat often turns out to be no less illusory than the errant definition of security that inspired the invocation of the threat in the first place.

Hovering about every discussion of war and peace are questions of life and death. Not the death of some or even many people, but, as Michael Walzer proposes in Arguing about War, the “moral as well as physical extinction” of an entire people. True, it is only rarely that a nation will find its “ongoingness”—its ability “to carry on, and also to improve on, a way of life handed down” from its ancestors—threatened. But at moments of what Walzer, following Winston Churchill, calls “supreme emergency,” a leader may have to commit the most obscene crimes in order to avert catastrophe. The deliberate murder of innocents, the use of torture: the measures taken will be as many and almost as terrible as the evils a nation hopes to thwart.

For obvious reasons, Walzer maintains that leaders should be wary of invoking the supreme emergency, that they must have real evidence before they start speaking Churchillese. But a casual reading of the history of national security suggests not only that the rules of evidence will be ignored in practice, but also that the notion of catastrophe encourages, even insists on, these rules being flouted. “In normal affairs,” Cardinal Richelieu declared at the dawn of the modern state system, “the administration of Justice requires authentic proofs; but it is not the same in affairs of state . . . . There, urgent conjecture must sometimes take the place of proof; the loss of the particular is not comparable with the salvation of the state.” As we ascend the ladder of threats, in other words, from petty crime to the destruction or loss of the state, we require less and less proof that each threat is real. The consequences of underestimating serious threats are so great, Richelieu suggests, that we may have no choice but to overestimate them. Three centuries later, Learned Hand invoked a version of this rule, claiming that “the gravity of the ‘evil’” should be “discounted by its improbability.” The graver the evil, the higher degree of improbability we demand in order not to worry about it. Or, to put the matter another way, if an evil is truly terrible but not very likely to occur, we may still take preemptive action against it.

Neither statement was meant to justify great crimes of state, but both suggest an inverse relationship between the magnitude of a danger and the requirements of facticity. Once a leader starts pondering the nation’s moral and physical extinction, he enters a world where the fantastic need not give way to the factual, where present benignity can seem like the merest prelude to future malignancy. So intertwined at this point are fear and reason of state that early modern theorists, less shy than we about such matters, happily admitted the first as a proxy for the second: a nation’s fear, they argued, could serve as a legitimate rationale for war, even a preventive one. “As long as reason is reason,” Francis Bacon wrote, “a just fear will be a just cause of a preventive war.” That’s a fairly good description of the logic animating the Cold War: fight them there—in Vietnam, Nicaragua, Angola—lest we must stop them here, at the Rio Grande, the Canadian border, on Main Street. It’s also a fairly good description of the logic animating the Nazi invasion of the Soviet Union:

We are fighting on such distant fronts to protect our own homeland, to keep the war as far away as possible, and to forestall what would otherwise be the fate of the nation as a whole and what up to now only a few German cities have experienced or will have to experience. It is therefore better to hold a front 1,000 or if necessary 2,000 kilometers away from home than to have to hold a front on the borders of the Reich.

These are by no means ancient or academic formulations. While liberal critics claim that the Bush administration lied about or deliberately exaggerated the threat posed by Iraq in order to justify going to war, the fact is that the administration and its allies were often disarmingly honest in their assessment of the threat, or at least honest about how they were going about assessing it. Trafficking in the future, they conjured the worst—“we don’t want the smoking gun to be a mushroom cloud”—and left it to their audience to draw the most frightful conclusions.

In his 2003 state of the union address, one of his most important statements in the run-up to the war, Bush declared: “Some have said we must not act until the threat is imminent. Since when have terrorists and tyrants announced their intentions, politely putting us on notice before they strike? If this threat is permitted to fully and suddenly emerge, all actions, all words and all recriminations would come too late.” Bush does not affirm the imminence of the threat; he implicitly disavows it, ducking behind the past, darting to the hypothetical, and arriving at a nightmarish, though entirely conjectured, future. He does not speak of “is” but of “if” and “could be.” These words are conditional (which is why Bush’s critics, insisting that he take his stand in the realm of fact or fiction, never could get a fix on him). He speaks in the tense of fear, where evidence and intuition, reason and speculation, combine to make the worst-case scenario seem as real as fact.

After the war had begun, the television journalist Diane Sawyer pressed Bush on the difference between the assumption, “stated as a hard fact, that there were weapons of mass destruction,” and the hypothetical possibility that Saddam “could move to acquire those weapons.” Bush replied: “So what’s the difference?” No offhand comment, this was Bush’s most articulate statement of the entire war, an artful parsing of a distinction that has little meaning in the context of national security.

Probably no one in or around the administration better understood the way national security blurs the line between the possible and the actual than Richard Perle. “How far Saddam’s gone on the nuclear weapons side I don’t think we really know,” Perle said on one occasion. “My guess is it’s further than we think. It’s always further than we think, because we limit ourselves, as we think about this, to what we’re able to prove and demonstrate . . . . And, unless you believe that we have uncovered everything, you have to assume there is more than we’re able to report.”

Like Bush, Perle neither lies nor exaggerates. Instead, he imagines and projects, and in the process reverses the normal rules of forensic responsibility. When someone recommends a difficult course of action on behalf of a better future, he invariably must defend himself against the skeptic, who insists that he prove his recommendation will produce the outcome he anticipates. But if someone recommends an equally difficult course of action to avert a hypothetical disaster, the burden of proof shifts to the skeptic. Suddenly she must defend her doubt against his belief, her preference for politics as usual against his politics of emergency. And that, I suspect, is why the Bush administration’s prewar mantra, “the absence of evidence is not evidence of absence”—laughable in the context of an argument for, say, world peace—could seem surprisingly cogent in an argument for war. “Better to be despised for too anxious apprehensions,” Burke noted, “than ruined by too confident a security.”

As Walzer suggests, an entire people can face annihilation. But the victims of genocide tend to be stateless or powerless, and the world has difficulty seeing or acknowledging their destruction, even when the evidence is undeniable. The citizens and subjects of great powers, on the other hand, rarely face the prospect of “moral as well as physical extinction.” (Walzer cites only two cases.) Yet their leaders seem to imagine that destruction with the greatest of ease.

We get a taste of this indulgence of the state and its concerns—and a corresponding skepticism about non-state actors and their concerns—in Walzer’s own ruminations on war and peace. Throughout Arguing about War, Walzer wrestles with terrorists who claim that they are using violence as a last resort and antiwar activists who claim that governments should go to war only as a last resort. Walzer is dubious about both claims. But far from revealing a dogged consistency, his skepticism about the “last resort” suggests a double standard. It sets the bar for using force much higher for non-state actors than it does for state actors—not because terrorists target civilians while the state does not, but because Walzer refuses to accept the terrorist’s “last resort” while he is ready to lend credence to the government’s, or at least is ready to challenge critics of the government who insist that war truly be a last resort.

For Walzer, the last resort argument of antiwar activists is often a ruse designed to make a government’s going to war impossible—and a muddy ruse at that. For “lastness,” he says, “is a metaphysical condition, which is never actually reached in real life; it is always possible to do something else, or to do it again, before doing whatever it is that comes last.” We can always ask for “another diplomatic note, another United Nations resolution, another meeting,” we can always dither and delay. Though Walzer acknowledges the moral power of the last resort argument—“political leaders must cross this threshold [going to war] only with great reluctance and trepidation”—he suspects that it is often “merely an excuse for postponing the use of force indefinitely.” As a result, he says, “I have always resisted the argument that force is a last resort.”

But when non-state actors argue that they are resorting to terrorism as a last resort, Walzer suspects them of bad faith. For such individuals, “it is not so easy to reach the ‘last resort.’” To get there, one must indeed try everything (which is a lot of things) and not just once. Even “under conditions of oppression and war,” he insists, “it is by no means clear when” the oppressed or their spokespersons have truly “run out of options.” Walzer acknowledges that a similar argument might be applied to government officials, but the officials he has in mind are those who “kill hostages or bomb peasant villages”—not those who claim they must go to war. Thus, Walzer entertains the possibility that governments, with all their power, may find themselves racing against time, while insisting that terrorists, and the people they claim to represent, invariably will have all the time in the world.

To be continued…here!
