March 3rd, 2003

The following is reposted from the Interesting People list.


John Perry Barlow describes himself as a cognitive dissident. He is the Co-Founder & Vice Chairman of the Electronic Frontier Foundation and a Berkman Fellow at Harvard Law School.

Sympathy for the Devil

John Perry Barlow

I remember a time years ago when I was as convinced that Dick Cheney was obscenely wrong about something as I am now. Subsequent events raised the possibility that he might not have been so wrong after all. With this in mind, I’ve given some thought lately to how all this might look to the Vice President (who is, I remain convinced, as much the real architect of American policy as he was while Gerald Ford’s Chief of Staff or George the First’s Secretary of Defense).

As I’ve mentioned, I once knew Cheney pretty well. I helped him get elected to his first public office as Wyoming’s lone congressman. I conspired with him on the right side of environmental issues. Working closely together, we were instrumental in closing down a copper smelter in Douglas, Arizona, the grandfathered effluents of which were causing acid rain in Wyoming’s Wind River mountains. We were densely interactive allies in creating the Wyoming Wilderness Act. He used to go fishing on my ranch. We were friends.

With the possible exception of Bill Gates, Dick Cheney is the smartest man I’ve ever met. If you get into a dispute with him, he will take you on a devastatingly brief tour of all the weak points in your argument. But he is a careful listener and not at all the ideologue he appears at this distance. I believe he is personally indifferent to greed. In the final analysis, this may simply be about oil, but I doubt that Dick sees it that way. I am relatively certain that he is acting in the service of principles to which he has devoted megawatts of a kind of thought that is unimpeded by sentiment or other emotional overhead.

Here is the problem I think Dick Cheney is trying to address at the moment: How does one assure global stability in a world where there is only one strong power? This is a question that his opposition, myself included, has not asked out loud. It’s not an easy question to answer, but neither is it a question to ignore. Historically, there have only been two methods by which nations have prevented the catastrophic conflict which seems to be their deepest habit.

The more common of these has been symmetrical balance of power. This is what kept another world war from breaking out between 1945 and 1990. The Cold War was the ultimate Mexican stand-off, and though many died around its hot edges –  in Viet Nam, Korea, and countless more obscure venues – it was a comparatively peaceful period. Certainly, the global body count was much lower in the second half of the Twentieth Century than it was in the first half. Unthinkable calamity threatened throughout, but it did not occur.

The other means by which long terms of peace – or, more accurately, non-war – have been achieved is the unequivocal domination by a single ruthless power. The best example of this is, of course, the Pax Romana, a “world” peace which lasted from about 27 BCE until 180 CE. I grant that the Romans were not the most benign of rulers. They crucified dissidents for decoration, fed lesser humans to their pets, and generally scared the bejesus out of everyone, including Jesus Himself. But war, of the sort that racked the Greeks, Persians, Babylonians, and indeed, just about everyone prior to Julius Caesar, did not occur. The Romans had decided it was bad for business. They were in a military position to make that opinion stick. There was a minority view of the Pax Romana, well stated at its height by Tacitus: “To plunder, to slaughter, to steal, these things they misname empire; and where they make a wilderness, they call it peace.” It would be well to keep that admonition in mind now.

There are other, more benign, examples of lengthily imposed peace. One could argue that the near absence of major international wars in the Western Hemisphere results from the overwhelming presence of the United States which, while hardly a dream neighbor, has at least stopped most of the New World wars that it didn’t start. The Ottoman Empire had a pretty good run, about 700 years, after drawing its borders in blood. The Pharaohs kept the peace, at least along the Nile, for over 2800 years until Alexander the Great showed up. If one takes the view that war is worse than tyranny and that the latter doesn’t necessarily beget the former, there is a case to be made for global despotism. That case is unfortunately stronger, in the light of history, than the proposition that nations will coexist peacefully if we all try really, really hard to be nice to each other. It is certainly unlikely at the moment that geopolitical stability can be achieved by the formation of some new detente like the one that terrified us into peace during the Cold War. Europe, old and new, is furious with the United States at the moment (if my unscientific polls while there in January are at all accurate), but they are a very long way from confronting us with any military threat we’d find credible.

I’m pretty sure that, soon enough, hatred of our Great Satanic selves will provide the Islamic World with a unity it has lacked since the Prophet’s son-in-law twisted off and started Shi’ism. But beyond their demonstrated capacity to turn us into a nation of chickenshits and control freaks, I can’t imagine them erecting a pacifying balance of force against our appalling might.

I believe that Dick Cheney has thought all these considerations through in vastly greater detail than I’m providing here and has reached the following conclusions: first, that it is in the best interests of humanity that the United States impose a fearful peace upon the world and, second, that the best way to begin that epoch would be to establish dominion over the Middle East through the American Protectorate of Iraq. In other words, it’s not about oil, it’s about power and peace.

Well, alright. It is about oil, I guess, but only in the sense that the primary goal of the American Peace is to guarantee the Global Corporations reliable access to all natural resources, wherever they may lie. The multinationals are Cheney’s real constituents, regardless of their stock in trade or their putative country of origin. He knows, as the Romans did, that war is bad for business. But what’s more important is that he also knows that business is bad for war. He knows, for example, that there has never been a war between two countries that harbored McDonald’s franchises.

I actually think it’s possible that, however counter-intuitive and risky his methods for getting it, what Dick Cheney really wants is peace. Though much has been made of his connection to Halliburton and the rest of the Ol’ Bidness, he is not acting in the service of personal greed. He is a man of principle. He is acting in the service of intentions that are to him as noble as mine are to me – and not entirely different.

How can this be? Return with me now to the last time I was convinced he was insanely endangering life on earth. This was back in early 1983, when Dick Cheney was, at least by appearances, a mere congressman. He was also Congressional point man for the deployment of the MX missile system in our mutual home state of Wyoming. (The MX was also called the “Peacemaker,” a moniker I took at the time to be the darkest of ironies.)

The MX was, and indeed still is, a Very Scary Thing.  A single MX missile could hit each of 10 different targets, hundreds of miles apart, with about 600 kilotons of explosive force. For purposes of comparison, Hiroshima was flattened by a 17 kiloton nuclear blast. Thus, each of the MX’s warheads could glaze over an area 35 times larger than the original Ground Zero. Furthermore, 100 MX missiles were to lie beneath the Wyoming plains, Doomsday on the Range.

Any one of the 6000 MX warheads would probably incinerate just about every living thing in Moscow. But Cheney’s plan – cooked up with Brent Scowcroft, Don Rumsfeld, Richard Perle, and other familiar suspects – was not about targeting cities, as had been the accepted practice of MAD (Mutually Assured Destruction). The MX was to be aimed instead at the other side’s missile emplacements.

The problem with this “counter-force strategy,” as it was called, was that it was essentially a first-strike policy. The MX was to be placed in highly vulnerable Minuteman silos. In the event of a Soviet first strike, all of the Peacemakers would have been easily wiped out. Thus, they were either to be launched preemptively or they were set to “launch on warning.” The MX was to be either an offensive weapon or the automated hair-trigger was to be pulled on all hundred of them within a very few minutes after the first Soviet missile broke our radar horizon.

In either case, the logic behind it appeared to call for fighting and winning a nuclear war. Meanwhile, President Reagan was bellowing about “the Evil Empire” and issuing many statements that seemed to consider Armageddon a plausible option.

I spent a lot of time on Capitol Hill during the winter of 1981-82. I lobbied over a hundred Congressmen and Senators against a policy that seemed to me the craziest thing that human beings had ever proposed. The only member of Congress who knew more about it than I did was Dick Cheney.

Veteran Washington Post columnist Mary McGrory accompanied me on one of my futile visits to his office, where she spent better than an hour listening to us argue about “circular errors probable” and “MIRV decoys” and the other niceties of nuclear nightmare. When we were leaving, she, who had seen a lot of politicians in her long day, turned to me and said, “I think your guy Cheney is the most dangerous person I’ve ever seen up here.” At that point, I agreed with her. What I was not thinking about, however, was the technique I once used to avoid being run off the road by Mexican bus drivers, back when their roads were narrower and their bus drivers even more macho. Whenever I saw a bus barrelling down the centerline at me, I would start driving unpredictably, weaving from shoulder to shoulder as though muy borracho. As soon as I started to radiate dangerously low regard for my own preservation, the bus would slow down and move over. As it turned out, this is more or less what Cheney and his phalanx of Big Strategic Thinkers were doing, if one imagined the Soviet Union as a speeding Mexican bus. They were determined to project such a vision of implacable, irrational lethality that the Soviet leaders would decide to capitulate rather than risk universal annihilation.

It worked. While I think that rock and roll and the systemic failures of central planning had as much to do with the collapse of communism as did Dick’s mad gamble, I have to confess that, by 1990, he didn’t look quite so nuts to me after all. The MX, along with Star Wars and Reagan’s terrifying rhetoric, had been all along a weapon for waging psychological rather than nuclear warfare.

I’m starting to wonder if we aren’t watching something like the same strategy again. In other words, it’s possible Cheney and company are actually bluffing. This time, instead of trying to terrify the Soviets into collapse, the objective is even grander. If I’m right about this, they have two goals. Neither involves actual war, any more than the MX missile did.

First, they seek to scare Saddam Hussein into voluntarily turning his country over to the U.S. and choosing safe exile or, failing that, they want to convince the Iraqi people that it’s safer to attempt his overthrow or assassination than to endure an invasion by American ground troops.

Second, they are trying to convince every other nation on the planet that the United States is the Mother of All Rogue States, run by mad thugs in possession of 15,000 nuclear warheads they are willing to use and spending, as they already are, more on death-making capacity than all the other countries on the planet combined. In other words, they want the rest of the world to think that we are the ultimate weaving driver. Not to be trusted, but certainly not to be messed with either.

By these terrible means, they will create a world where war conducted by any country but the United States will seem simply too risky and the Great American Peace will begin. Unregulated Global Corporatism will be the only permissible ideology, every human will have access to McDonald’s and the Home Shopping Network, all “news” will come through some variant of AOLTimeWarnerCNN, the Internet will be run by Microsoft, and so it will remain for a long time. Peace. On Prozac.

If I were in charge, this is neither the flavor of peace I would prefer nor the way I would achieve it. But if I’d been in charge back in 1983, there might still be a Soviet Union and we might all still be waiting for the world to end in fifteen nuclear minutes. Of course, I could be completely wrong about this. Maybe they actually are possessed of a madness to which there is no method. Maybe they really do intend to invade Iraq and for no more noble reason than giving American SUVs another 50 years of cheap gas. We’ll probably know which it’s going to be sometime in the next fortnight.

By then,  I expect to be dancing in Brazil, far from this heart of darkness and closer to the heart itself.



 
John Perry Barlow’s Home Page

Thanks to Flemming Funch for the Link

March 2nd, 2003

The following 1998 article is reposted from Great Change.


William R. Catton, Jr. is Professor of Sociology Emeritus, Washington State University.  A distinguished educator,  he has taught courses in sociology and human ecology at universities in the U.S., Canada and New Zealand.  Dr. Catton has authored or co-authored over 100 articles in journals or as chapters of books and four books on sociology, the environment and human ecology, including his 1980 classic Overshoot: The Ecological Basis for Revolutionary Change.

MALTHUS: More Relevant Than Ever

William R. Catton

In 1798, Thomas Robert Malthus tried to inform people that a human population, like a population of any other species, had the potential to increase exponentially were it not limited by finite support from its resource base.  He warned us that growth of the number of human consumers and their demands will always threaten to outrun the growth of sustenance.  When Charles Darwin read Malthus, he recognized more fully than most other readers that the Malthusian principle applied to all species.  And Darwin saw how reproduction beyond replacement can foster a universal competitive relationship among a population’s members, as well as how expansion by a population of one species may be at the expense of populations of other species. 

Others were not so perceptive.  When I was in high school, the textbook used in my biology class listed “Over-production of individuals” first among “the chief factors assigned by Darwin to account for the development of new species from common ancestry through natural selection” (Moon and Mann, 1933:457), but it did not cite Malthus nor discuss his concerns about population pressure.  That neglect was typical because, for a while, “it was argued widely that developments had disproved Malthus, that the problem was no longer man’s propensity to reproduce more rapidly than his sustenance, but his unwillingness to reproduce adequately in an industrial and urban setting” (Taeuber, 1964:120). 

Malthus in the age of exuberance

Most of us can remember learning in school to dismiss Malthus as “too pessimistic.” Technological progress and the economic growth resulting therefrom, we learned to assume, can always provide the essential consumables (or substitutes) that have permitted exuberant population growth.  One of my college textbooks put it this way: “For conditions as they existed in 1798, Malthus was reasonably sound in his doctrines; but scientific and technological changes in the interval since his day have made Malthusian principles, in large part, an intellectual curiosity in our era” (Barnes, 1948:51). 

In graduate school one of my textbooks acknowledged that “Man’s tendency to multiply up to the maximum carrying capacity of the land is superficially evident in many parts of the world” (Hawley, 1950:150-151).  Its eminent author, who has been called the “dean” of American human ecologists, conceded the likelihood that most lands at most historic times “have been populated to capacity in view of the particular modes of life of their occupants” but insisted (pp. 160ff) changes in such modes of life had made “the Malthusian interpretation of population problems decreasingly useful.” The article about Malthus in the International Encyclopedia of the Social Sciences called his theory of population “a perfect example of metaphysics masquerading as science” (Blaug, 1968:551).

Reassessing Malthus inappropriately

When co-authoring an introductory sociology text my colleagues and I began to dissent from these disparaging evaluations of Malthus, but for not quite the right reasons (Lundberg et al., 1968:682):  “Despite his inadequate data,” we said Malthus “was nevertheless correct in arguing that the food supply fixes an upper limit beyond which the population cannot go at any given time.”  And we gave him credit for having taken into account “certain social and psychological factors, such as celibacy and moral restraint, which might keep population below that theoretical limit, and in doing this he focused attention,” we supposed, “on factors which were frequently overlooked at the time.” 

Looking back, I now see both of those sentences of ours as inaccurate or misleading.  His essay did not fully succeed in directing most people’s attention to all the relevant factors, i.e., those checks that would prevent a human population from expanding to its full potential.  Further, and more importantly, Malthus’s confidence that no population could overshoot carrying capacity, but would only press miserably against the limit, precluded foreseeing the prodigality-based affluence we achieved by running up carrying capacity deficits that would be disastrous later on.

Overshooting Carrying Capacity

Drawing down resources from the future

Contrary to our partial endorsement, (1) Food is not the only component of “sustenance” for modern human living; industrialized human societies rely on continuing flows of many other resources, and a cessation of supply of any essential commodity can be devastating. (2) By drawing down “savings accounts” (i.e., using resources faster than their rates of renewal), populations can (and do) temporarily exceed carrying capacity.  When the stockpile runs out, the once-thriving population finds itself in dire straits. 

Misunderstanding checks and balances

With respect to our second appraisal sentence, although Malthus meant to focus attention on factors that check population growth, the effort didn’t always succeed.  Readers’ attention seems to have persistently strayed back to the notion that Malthus believed populations would inevitably doom themselves to starvation by growing exponentially, so populations that burgeoned and prospered have been seen as supposed refutations of Malthus. 

What most of us just didn’t see was that a relatively short feedback loop was assumed by Malthus because of his 18th century perspective on technology.  He was not mistaken in attributing exponential growth potential to all populations, nor was he mistaken in recognizing the unlikelihood that required resource supplies would grow apace.  He did err in supposing population could never grow significantly beyond a key resource limit.  Populations can, and often do, exceed carrying capacity, and come to grief only after a delay.  Malthus was writing not only before there was a developed science of ecology but also before there were full-blown industrial societies making prodigal use of fossil energy and other nonrenewable resources.

Delayed feedback from the environment

Unlike animals with much shorter maturation times and no technology, humans may be curbed by the ultimate adverse consequences of over-reproduction much less promptly than Malthus assumed.  Two facts make the feedback loop dangerously longer for us than for most nonhuman species.  First, humans have an unusually long period of maturation compared to other species.  The lag between birth and the age of maximum resource consumption hardly mattered in 1798.  Then as now, people’s offspring made small resource demands as infants, and in 1798 their adult demands exceeded those of their infancy by a ratio not much greater than the adult-to-infant resource demand ratio for other animal species (which grow to maturity in only a year or so).  Second, a mere eight human generations after Malthus, today’s technology and our colossal reliance as adults on exosomatic energy sources (Cottrell, 1955; Catton, 1980; Price, 1995) have enormously magnified that ratio, putting it far out of adjustment with the ecosystem processes that supplied the modest demands of our ancestors. 

So continuing to suppose the world can afford all the precious progeny we may produce leads now to serious problems.  Babies grow up.  In an industrial society, as adults they expect to live lifestyles that involve taking from the environment enormous per capita resource withdrawals and dumping into it vast amounts of life’s toxic by-products. 

It was no fault of Malthus that in 1798 he did not foresee this magnification.  Even today, parents seldom if ever base their decisions about sexual activity on calculations of the lifetime resource demands and environmental impacts of each prospective child that may result.  Our affluence, technology, and extraordinary period of maturation combine to obscure and delay but do not avert negative feedback from the environment.

Criticism ignores human capacity for overshoot

Malthus was not wrong in the ways commonly supposed.  From his 18th century perspective he simply had no basis for seeing the human ability to “overshoot” carrying capacity.  It was inconceivable to Malthus that human societies could, by taking advantage of favorable conditions (new technology, abundant fossil fuels),  temporarily increase human numbers and appetites above the long-term capacity of environments to provide needed resources and services.  But it is inexcusable today not to recognize the way populations can sometimes overshoot sustainable carrying capacity and what happens to them after they have done it. 

Human economic growth and technology have only created the appearance that Malthus was wrong (in the way we used to learn in school).  What our technological advances have actually done is allow human loads to grow precariously beyond the earth’s long-term carrying capacity by drawing down the planet’s stocks of key resources accumulated over 4 billion years of evolution.

Competition and Overshoot

Human population growth and inter-species competition

Nearly everyone (but not Darwin) ignored crucial parts of the Malthus message.  Darwin (1859:63) stands out for understanding Malthus correctly.  Just after those two famous sentences about geometric increase of population versus arithmetic increase of food, Malthus ([1798] 1976:20) had said, “Necessity, that imperious all pervading law of nature, restrains them [all species] within the prescribed bounds.  The race of plants and the race of animals shrink under this great restrictive law.  And the race of man cannot, by any efforts of reason, escape from it.  Among plants and animals its effects are waste of seed, sickness, and premature death.  Among mankind, misery and vice.” 

In the third chapter of On the Origin of Species, Darwin (1859:60-79) spelled out how checks on the growth of any one species population are exerted by populations of other species associated with it in the web of life.  Because every population is part of what we have since learned to call an ecosystem, when a particular species is “fortunate” enough to expand its numbers phenomenally, catastrophic reduction of other species populations must result.  “We suck our sustenance from the rest of nature . . . reducing its bounty as ours grows” (Leakey and Lewin, 1995:233).  But the “prosperity” of an irrupting population is fatefully precarious, as its own future is imperiled by nature’s disrupted balance.

Environmental feedback: mass extinction poses a major threat

We have trebled the human load upon this planet in my lifetime by using the planet unsustainably, and this has caused a new era of extinction.  According to a recent survey, a majority of American biologists regard the mass extinction of plant and animal species now resulting from human domination of the earth as a grave threat to humans in the next century (Warrick, 1998).  We live in a world losing biodiversity at an unprecedented rate (Koopowitz and Kaye, 1983; Wilson, 1992:215ff; Tuxill, 1998).  It is high time to see that this consequence was implicit in the 1798 essay by Malthus. 

Mankind is not only depleting essential mineral stocks.  We are also diminishing the plant and animal resources available to future human generations, and destroying biological buffers against the effects of global climate change (Suplee, 1998).  We are stealing from the human future.  Had the “moral restraint” of our parents and grandparents been enhanced by understanding Malthus as cogently as Darwin did, a less ominous future might have been their legacy to us (and ours to our descendants).

© Copyright 1998 by NPG


NOTES:

Barnes, Harry Elmer. 1948. “Social Thought in Early Modern Times.” Pp. 29-78 in Harry Elmer Barnes (ed.), An Introduction to the History of Sociology. Chicago: University of Chicago Press.

Blaug, Mark. 1968. “Malthus, Thomas Robert.” Pp. 549-552 in vol. 9, David L. Sills (ed.), International Encyclopedia of the Social Sciences. New York: The Macmillan Co. & The Free Press.

Catton, William R., Jr. 1980. Overshoot: The Ecological Basis of Revolutionary Change. Urbana: University of Illinois Press.

Cottrell, Fred. 1955. Energy and Society. New York: McGraw-Hill.

Darwin, Charles. 1859. On the Origin of Species By Means of Natural Selection. London: John Murray.

Hawley, Amos H. 1950. Human Ecology: A Theory of Community Structure. New York: The Ronald Press Company.

Koopowitz, Harold, and Hilary Kaye. 1983. Plant Extinction: A Global Crisis. Washington, DC: Stone Wall Press, Inc.

Leakey, Richard, and Roger Lewin. 1995. The Sixth Extinction: Patterns of Life and the Future of Humankind. New York: Anchor Books Doubleday.

Lundberg, George A., Clarence C. Schrag, Otto N. Larsen, and William R. Catton, Jr. 1968. Sociology (Fourth edition). New York: Harper & Row, Publishers.

Malthus, Thomas Robert. [1798] 1976. An Essay on the Principle of Population. Pp. 15-130 in Philip Appleman (ed.), An Essay on the Principle of Population: Text Sources and Background Criticism. New York: W. W. Norton & Company, Inc.

Moon, Truman J., and Paul B. Mann. 1933. Biology for Beginners. New York: Henry Holt and Company.

Price, David. 1995. “Energy and Human Evolution.” Population and Environment: A Journal of Interdisciplinary Studies 16 (March): 301-319.

Suplee, Curt. 1998. “1 in 8 Plants in Global Study Threatened: 20-Year Project Warns of Major Diversity Loss.” The Washington Post, April 8, p. A01.

Taeuber, Irene B. 1964. “Population and Society.” Pp. 83-126 in Robert E. L. Faris (ed.), Handbook of Modern Sociology. Chicago: Rand McNally & Company.

Tuxill, John. 1998. Losing Strands in the Web of Life: Vertebrate Declines and the Conservation of Biological Diversity. Worldwatch Paper 141. Washington, DC: Worldwatch Institute.

Warrick, Joby. 1998. “Mass Extinction Underway, Majority of Biologists Say.” The Washington Post, April 21, p. A04.

Wilson, Edward O. 1992. The Diversity of Life. Cambridge, MA: The Belknap Press of Harvard University Press.

 

February 28th, 2003

Flemming Funch writes: Robert Wright, who is the author of Nonzero, wrote a series of excellent articles last September titled “A Real War on Terrorism”. It is the best thing I’ve read on the subject. Wright possesses a sense of logic, which seems to be peculiarly absent in the people we elect to run our countries. What we mostly hear is talking monkeys who repeat half-baked political and religious ideas, but who somehow have avoided developing the skill of thinking the whole situation through logically.

Wright describes very well how it will only become easier and easier for small groups of motivated, angry, intelligent people to create major grief and death for large numbers of people. It is no longer a matter of which governments support it, or what public support exists for such actions. A small group can, all on its own, in complete isolation, concoct some very bad things in a garage, from ingredients that can be bought openly, that might kill hundreds of thousands. No way you can just cut off the supply. There isn’t necessarily anybody to bomb. Doesn’t matter if large numbers of people support it or fund it or not.

The inevitable answer is that we’ll need to change the things that different geographical or cultural groups are likely to become extremely angry about. That is more about memes, about the contagious ideas that travel through cultures, than it is about what really goes on. If certain relatively small and apparently sensible actions, like arresting some troublemaker, make many more people angry, the net result easily becomes more terrorism, not less. The amount of discontent in the world is becoming a highly significant national-security variable.

There will keep being reasons for terrorism as long as there are tyrannies and major economic inequalities anywhere on the planet. The United States will be a major target of terrorism as long as it keeps being a major force behind perpetuating these. The answer is democracy, freedom and a free economic market that actually works for most people in most areas. As Wright explains:

“A few decades from now, there will need to be a ‘global civilization’ in which both words are literally accurate – a planetwide community of mutually cooperative nations, bound by interdependence and international law, whose citizens are accorded freedom and economic opportunity. This is the goal we’re forced toward by some of the creepier aspects of technological evolution: ever-more-compact, ever-more-accessible, ever-more-lethal munitions, and the ever-more-efficient crystallization of interest groups, including hateful ones, via information technology.”

In other words, the only way out is to make a world that works for most everybody, no matter where they live, so that there is no good reason for anybody being pissed at some unfairly privileged and parasitic group of people living in some other area. I’d probably also go further and say that we need to go beyond the idea of ‘nations’ altogether. But that’s gonna take more work. In the meantime I wish somebody would listen to what this man is saying.


The following is the complete nine part series as originally posted at MSN Slate Magazine beginning on September 03, 2002.

A Real War on Terrorism

Robert Wright

After the attacks of Sept. 11, the Bush administration depicted the war on terrorism as something that, like past wars, would have a definite ending. Secretary of State Colin Powell said we would get terrorism “by its branch and root.” And President Bush’s pledges of clear-cut victory weren’t confined to his memorably ambitious vow to “rid the world of evil-doers.” Even in less exuberant moments, he said his goal was to “rout out and destroy global terrorism.” The war would be complex and multifaceted, and it might not be brief, but “its outcome is certain,” Bush said. “This will not be an age of terror.”

By the spring of 2002, the message had changed. Gone was the theme of certain triumph, replaced by an official sense of perpetual dread. In May, climaxing a cascade of spooky administration pronouncements, Secretary of Defense Donald Rumsfeld said that anti-American terrorists would “inevitably” obtain weapons of mass destruction and use them.

Some people thought the new pessimism was tactical, a pre-emptive strike against charges that any coming terrorism had gone unforeseen. And maybe it was. But it was also acknowledgment of the truth: Wars on terrorism have very little in common with regular wars. The initial, sheerly military phase–which the Bush administration had handled capably–was just the beginning. Now, a year after 9/11, pretty much everyone realizes that we’d better have a very good, very long-run strategy.

I don’t think we do. I think the Bush administration’s long-run plan, to the extent that one can be discerned, is at best inadequate and at worst disastrous. So, what’s my long-run plan? (Or, as a Slate reader put it via e-mail, after one of my carping columns about Bush policy, “OK, big shot … What’s the solution?”) Over the next two weeks, in daily installments, I’ll lay out my answer: a long-term strategy for America’s war on terrorism.

My argument will come in readily attackable form. It will be organized around a series of propositions–conveniently printed in boldface–that, I claim, describe the mess we’re in. Interspersed with these descriptive propositions will be policy prescriptions in italics. To refute me, all you have to do is either show that the bold-faced sentences are wrong or show that the italicized sentences don’t follow from them.

Warning: Some of the propositions will be a bit cosmic, dealing with large-scale social, technological, and historical trends. I believe we’re standing at a genuine threshold in history, rivaled in significance by only a few past thresholds, and that any diagnosis of our plight that doesn’t include some ambitious observations about, say, the future of information technology or the history of the nation-state isn’t up to the challenge.

Seven years ago, I wrote an article for the New Republic about the growing threat of terrorists using weapons of mass destruction. It would be an exaggeration to say that the piece spurred an overhaul of American policy–or even to say that it had any discernible impact, aside from briefly freaking out my wife. After Sept. 11–and the subsequent anthrax episode, and reports that al-Qaida was in the market for nukes–I thought: Well, at least now Washington will take more seriously the increasingly precarious world we live in. In a sense, Washington did. For example, Rumsfeld offered the aforementioned assurance that someday an American city would get decimated. Further, there was heightened vigilance and plans to institutionalize “homeland security.” But still, virtually nobody–and certainly nobody with great influence in Washington–got what I considered to be the picture.

The picture is this: If you look back over history, you will see enduringly disastrous phases–decades if not centuries of lethal contagious disease, of ruinous war, of societal collapse, of imperial decline. Sometimes these things “just happen,” but sometimes they happen because of momentous technological and social changes whose import humankind fails to reckon with. The premise of this series is that right now we’re undergoing such change, and so far we’re failing to reckon with it. These are dramatic times, and tomorrow I’ll start with my dramatic propositions. The first one will be: Al-Qaida and radical Islam are not the problem.

 

TWO: The Threat of Terrorism Naturally Grows

Obviously, Al-Qaida and radical Islam are a problem, and a big one. We’ll have to find a way to neutralize the specific threat they pose–and in the coming days, I’ll spend lots of time on the roots of Muslim rage, the structure of Islamist terrorism, and so on. Still, if we’re going to treat the war on terrorism as the long-term struggle that it is, we have to first understand that the threat posed by radical Islam is just a wave that signifies a deeper, even more menacing current.

The current, driven by technological change, is described by Proposition No. 2: For the foreseeable future, smaller and smaller groups of intensely motivated people will have the ability to kill larger and larger numbers of people. They won’t have to claim that they speak on behalf of a whole religion. They’ll just have to be reasonably intelligent, modestly well-funded, and really pissed off. It may be hard to imagine a few radical environmentalists, or Montana militiamen, or French anti-globalization activists, or Basque separatists, or Unabomber-style Luddites, killing 100,000 people. Yet what makes this plausible is exactly what makes radical Islam such a formidable long-term threat: two enduring aspects of the evolution of technology.

First, there is the much-discussed growing accessibility of massively lethal munitions–in particular, nuclear weapons and biological weapons. (Chemical weapons, though labeled “weapons of mass destruction,” really aren’t. They’re horrible, yes; but a chemical attack by a dozen terrorists can’t kill hundreds of thousands of people, as the nuclear or biological equivalent can.)

Of the two, biological weapons are in a sense spookier because the threat is so deeply ingrained in commercial progress. The things it takes to make biological weapons–fermenters, centrifuges, and the like–are in buildings you drive by routinely: hospitals, universities, pharmaceutical plants. Every year they grow in number, along with the number of people who know how to use them. And, as if it weren’t scary enough that these things are essentially unregulated, the march of progress keeps creating new regulatory challenges. In July scientists announced they’d created a polio virus using mail-order DNA and a recipe available on the Internet. Hmmm … maybe someone in the government should look into this mail-order DNA business!

If last fall’s anthrax attacks were indeed, as some speculated, perpetrated by an American trying to sound a useful alarm, he/she chose a lousy germ for the job. Anthrax, though scary, is a pale harbinger of impending bio-disaster. It isn’t contagious, so it’s basically the equivalent of a time-release chemical weapon. Smallpox, Ebola–not to mention as-yet-unknown designer plagues–could kill millions, even tens of millions.

I could go on about the various advances that are making massively lethal attacks a layperson’s sport, ranging from the already available poor man’s cruise missile to the nanotechnology in Bill Joy’s fevered-but-not-entirely-crazy nightmare. But the basic problem is widely recognized–Thomas Friedman called it the “superempowered angry man” in his 1999 book The Lexus and the Olive Tree–even if its magnitude is underestimated and a solution to it remains unarticulated.

The second technological force behind Proposition No. 2 is less widely understood: the diverse threat posed by information technology. For starters, there is the obvious value of infotech in orchestrating a terrorist attack, both in the planning and execution phases. (Mohamed Atta, while awaiting takeoff on American Airlines Flight 11, used a cell phone to keep in touch with his troops.) Less obvious but more important, there is the use of ever-cheaper, ever-more-powerful information technologies to mobilize constituencies.

One example is Osama Bin Laden’s recruiting videos, deftly edited–complete with special effects–to maximize emotional impact. Twenty years ago, before cheap desktop editing, making such films was beyond the capacity of a rag-tag terrorist group–and, anyway, distributing them was hopeless since almost nobody had VCRs. Twenty years from now, distributing them will be much cheaper and easier, thanks to the emerging broadband Internet. (If you have broadband, check out Bin Laden’s videos–complete with expert commentary–at www.ciaonet.org. Try to imagine yourself as an alienated Saudi or Palestinian teenager, looking for a way to channel your discontent, as you watch the powerful images of starving Iraqi babies and of a Palestinian woman being manhandled by Israeli troops.)

This high-tech mobilization of radical constituencies needn’t be centrally orchestrated. Since 9/11, American pundits have griped about the propaganda issuing from TV channels run by Arab governments. But take a look at the free market at work: The new, unregulated satellite TV channels–notably Al Jazeera, founded in 1996–haven’t exactly been a sedative for irate Muslims. The uncomfortable fact is that a free press often fuels antagonisms because people choose channels that bolster their biases. (Which is the most popular American cable news channel? The most ideological one–Fox.) Increasingly, “tribes”–interest groups of any kind, including radical ones–will be, in effect, self-organizing. 

All of this applies to all potentially violent interest groups. Those paranoid-nationalist videotapes full of fiery Waco imagery have already instilled fear and loathing in some Americans, but the efficiency with which they reach vulnerable minds will grow as the Internet goes broadband. So, too, for the sermons of radical environmentalists or rabid animal-rights activists. All are becoming more powerful by virtue of information technology. The sudden emergence of anti-globalization demonstrators wasn’t due to the sudden emergence of globalization–which, actually, hadn’t emerged all that suddenly. It was due largely to the Internet, the medium by which demonstrations are cheaply publicized and organized.

True, we haven’t seen much lethal terrorism from these mainly Western, well-educated groups. Then again, the fact that they’re Western and well-educated means that a small number of them could turn very lethal very easily. (Remember Timothy McVeigh?) So, whatever the conversion factor by which highly hateful Muslim adolescents become terrorists–one in 10,000; one in 100,000–the conversion factor for these Western groups is scarier. (Suicidal terrorism, the thing that has made Islamic doctrine so distinctively frightening, will be less and less a prerequisite for massive atrocity as time goes on and munitions technology evolves.)

Again, the point isn’t to minimize radical Islam, which is probably the biggest single threat to American security of the next decade, if not longer. But as we address that threat on its own terms, we should be building a policy framework that will apply to the larger, more generic threat as well. This is especially true in light of the fact that the current phase of rapid change–info revolution, globalization, etc.–is hardly over, and periods of rapid change tend to spawn intensely aggrieved groups. Indeed, this point is important enough to deserve official proposition status. Proposition No. 3: The number of intensely aggrieved groups will almost certainly grow in the coming decades of rapid technological, and hence social, change.

Propositions 2 and 3 together give us our first italicized policy principle: Policy Prescription No. 1: Take your bitter medicine early. Often in the course of human events–or in the course of just living your life–you can either bite the bullet now or bite it later. In the stock market, for example, America enjoyed a wild ride in the 1990s and is now paying the price; alternatively, it could have shown more discipline and circumspection then and enjoyed more stable prosperity now. Who’s to say which is better? Not me. But in the case of terrorism, I have a decided preference because in 10 or 20 years, terrorism will have much more lethal potential than it has now. So, if there are burdens we can bear now–in money, even in lives–that will dampen future terrorism, they’re probably worth it.

This is a crucial principle, for the menu of policy options in the war on terrorism is loaded with short-term/long-term trade-offs. And democracy–like most other human systems of decision-making–is naturally biased toward short-term gratification.

I’m not saying, by the way, that the growing lethality of terrorism is a universal constant, immune to human influence. There are things we can do to cut access to munitions–in fact, we’ll have to do some things that are beyond the imagining of the Bush administration, a point I’ll address by the end of this series. But, even if these things are quite successful, scenarios of horrific death and destruction will still be more plausible in 20 years than now.

We’ll get to the first of our short-term/long-term policy trade-offs later this week. But I want to close this installment by addressing an obvious question: Who cares whether a channel like Al Jazeera helps Bin Laden “mobilize his constituency”–if, after all, it takes just a handful of al-Qaida staffers to set off a nuclear bomb? So long as 19 hijackers will get the job done, why does it matter whether al-Qaida has a thousand supporters or a hundred million? It matters for several reasons, chief among them the fact that today’s angry adolescents are tomorrow’s terrorists. Sure, only one in 10,000, or in 100,000, of these adolescents stays angry enough to become a true terrorist, especially a suicidal one–and of that subset, only a fraction is smart, well-educated, and disciplined, and thus as dangerous as a Mohamed Atta. But it doesn’t take many Mohamed Attas to markedly lower the planet’s quality of life. So, keeping hundreds of thousands of adolescents from getting hateful today could save hundreds of thousands of Americans 10 or 20 years from now.

Besides, it isn’t just a question of terrorist “recruits.” Hesham Mohamed Hadayet, who went on a shooting spree in the Los Angeles airport on July 4, had never been to an al-Qaida training camp. But he had in some sense been tuned in to al-Qaida’s wavelength, imbibing the same resentments and hatreds as al-Qaida recruits. As time goes by, and the Internet goes broadband, and satellite channels keep proliferating, wavelengths of this sort will get more powerfully enthralling.

That there was only one anti-American terrorist evident on a holiday that America-haters would love to ruin tells us that hatred, and its expression, remain at low enough levels that there’s still time to salvage a reasonably peaceful future. (On July 5, the stock market breathed a sigh of relief.) At the same time, July 4 was a warning about the price of American inaction. It wouldn’t take many Hadayets–walking into an airport and killing a few people before being killed–to have a major effect on American travel habits.

All of this points to Proposition No. 4: The amount of discontent in the world is becoming a highly significant national-security variable. 

 

THREE: Why the World’s Opinion of Us Matters

Of course, there’s never been a time when seething worldwide discontent was good for America’s security. But in the past, for the discontent to really hurt Americans, it had to first find expression via some national government. That’s why 50 years ago the basic goal of American foreign policy was simple: Make sure all national governments either like us or fear us. As we approach an age when a small group of free-lancers can traumatize a nation, the rules of foreign policy change.

The problem isn’t that Washington has been wholly oblivious to this development. On the contrary: For years it’s been hard to make it past the front desk of a foreign-policy think tank without noting the growing significance of “non-state actors.” But chanting the “non-state” mantra isn’t tantamount to getting the picture. The disconnect between mantra and picture lies with the phrase “non-state actors.” Though technically accurate, it suggests the image of a finite number of enemies, lurking in dark corners, whose elimination would spell lasting security. As President Bush puts it, we’ll “smoke out” the terrorists, hunt them down, and that will be that. “We will starve terrorists of funding, turn them one against another, drive them from place to place until there is no refuge or no rest.”

This sort of rhetoric acknowledges one of the two technologically driven trends behind Proposition No. 2 but ignores the other. Bush sees that, thanks to advancing munitions technology, a few well-organized terrorists can now do lots of damage. But he gives short shrift to the fact that, thanks to advancing information technology, intense anti-Americanism is more and more likely to coalesce into clusters of well-organized terrorists.

Once you emphasize both trends, you see what a pickle we’re in. Many things you would do to “smoke out” terrorists could increase the amount and intensity of anti-Americanism in the Muslim world and elsewhere. Yes, it’s nice to hunt down the few remaining al-Qaida troops in Afghanistan. But if every once in a while you accidentally bomb a Muslim wedding and kill 50 civilians–providing Al Jazeera with a week’s worth of programming, fanning hatred of America across the Arab world–is the prize really worth the price?

From the beginning of the Afghanistan campaign, Secretary of Defense Donald Rumsfeld dismissed reporters’ questions about civilian casualties: “When one is engaged militarily … there are going to be unintended loss of life. It has always been the case, it certainly will be the case in this instance.” In other words: Why make a big deal about what has been a feature of all past American wars? Answer: Because something basic has changed. Back during World War II, when Rumsfeld came of age, enemy civilian casualties had essentially no bearing on America’s national security. Now they increase the chances of American civilians dying in the future. (Obviously, military action that risks “collateral damage” can make sense even in light of this fact; the initial liberation of Afghanistan from Taliban control was extremely valuable from the standpoint of both the average American and the average Afghan–and, in fact, it was accomplished with fewer civilian casualties than many had feared, though arguably more than was necessary.)

Even when American foreign policy is concerned with old-fashioned political actors–prime ministers, presidents, kings–public opinion abroad matters as never before. The wave of democratization over the past few decades has made many foreign governments more responsive to their citizens. Even non-democratic governments–notably some in the Islamic world–have to pay more attention to public sentiment as the information revolution proceeds; their ability to shape that sentiment via centralized control of the media is fading, while the ability of dissidents to organize grows. More and more, how governments treat America–including how thoroughly they cooperate in the war on terrorism–will depend on how their people feel about America.

To the extent that people in Washington have, since Sept. 11, seen the growing significance of public sentiment abroad, they’ve tended to depict the problem as one of public relations. Congressman Henry Hyde asks, “How is it that the country that invented Hollywood and Madison Avenue has such trouble promoting a positive image of itself overseas?” In July the Bush administration replied to such concerns by announcing the creation of an “Office of Global Communications.” The office, an official explained, would do things like broadcast top-40 songs to Muslim youth and punctuate them with, for example, quotes from President Bush.

Well, I suppose it can’t hurt. Or at least it can’t hurt much. And certainly public relations matters. But it will have to be public relations of a subtle and creative sort, given the subzero credibility that information emanating from the American government carries in much of the Muslim world. And, anyway, image isn’t everything. In the end there will be no substitute for Policy Prescription No. 2: The substance of policies should be subjected to a new kind of appraisal, one that explicitly accounts for the discontent and hatred the policies arouse.

To put it another way: We have to understand that terrorism is fundamentally a “meme”–a kind of “virus of the mind,” a set of beliefs and attitudes that spreads from person to person. One way to squelch terrorism is to kill or arrest the people whose brains are infected with the meme, and the Bush administration has done some of that effectively. But some forms of killing and arresting–especially the kinds that get us bad publicity–do so much to spread the meme that our enterprise suffers a net loss. So, Policy Prescription No. 2, in some contexts, can be more precisely stated as Policy Prescription No. 3: The ultimate target is memes; killing or arresting people is useful only to the extent that it leads to a net reduction in terrorism memes.

Rephrased in these terms, the point I’ve been trying to drive home is that, for technological reasons, memes are getting faster and slipperier. The information age is doing for these “viruses of the mind” what dense urban living and interurban transport did for biological pathogens during the late Middle Ages. (The result of humankind’s failure to reckon with this was the Black Death.) And few things drive terrorism memes farther and faster over their new electronic conduits than doing an ill-thought-out job of neutralizing people already “infected.”

Seen in this light, some American anti-terrorism policies appear if not clearly wrongheaded, at least more dubious than before. After Sept. 11, we sent hundreds of troops to the Philippines to help the government fight Islamic guerrillas. Given that the Americans’ essential function was just to train and guide Philippine troops, one might ask why the Americans had to be uniformed and armed–and photographed and publicized. Mightn’t some locals resent this conspicuous intrusion by their former overlords, the Americans? Especially given that anti-American sentiment had already forced the government to kick Americans out of their Philippine military bases? Ensuing street demonstrations, in which thousands of Filipinos protested the new American presence and were subdued with water cannons, answered the question.

This particular mission–to confront a group known as Abu Sayyaf–had little relevance to the war on terrorism anyway. As the New York Times‘ Nicholas Kristof pointed out, Abu Sayyaf is basically a small group of thugs who kidnap for profit. And the assault on them was hardly an unalloyed success: One of the two Americans they had kidnapped was killed in the rescue attempt. Meanwhile, the Moro Islamic Liberation Front, a larger, more genuinely ideological Philippine guerrilla group that has clearer ties to al-Qaida, was unharmed by the operation–though its leaders presumably enjoyed the demonstrations and may have capitalized on the clumsy American presence to build support. (The dicey business of handling groups like the MILF–deeply ideological Islamic separatists with substantial constituencies–we’ll discuss next week. For now let me just assert that evolving information technology is going to make separatism a more and more powerful force in a manner strikingly analogous to the way the printing press eventually favored the carving of nation-states out of empires. So, sending troops in to quell other nations’ separatist uprisings is not a policy that should be pursued without discernment, unless our goal is to divert the hatred of all the world’s separatists toward America.)

The Philippines escapade resulted from taking the phrase “war on terrorism” literally and thinking of the enemy as a finite group of warriors, rather than a contagious mind-set that may spawn new warriors faster than you kill the old ones. We mounted a “show of force”–something that may work when you’re trying to intimidate a potentially aggressive nation but that may backfire when the enemy is, in part, Muslim resentment of American power and arrogance. This suggests Policy Prescription No. 4: In a war on terrorism, applying force inconspicuously makes sense more often than in regular wars.

The potential for the pursuit of enemies to backfire applies also within America’s borders. The surveillance of mosques, the interrogation of donors to Islamic charities, the detention of Muslim-American citizens for weeks without filing any charges–these things can definitely help prevent terrorist attacks. But to the extent that they make Muslim Americans feel persecuted, they also have a downside, such as making things like the July 4 airport shooting more common. My point isn’t that the downside is clearly outweighing the upside; the upside of the administration’s police work, both at home and abroad, has been considerable, and in most cases the net result is no doubt a gain. My point is just that administration deliberations and public debate should go beyond their present scope–the valid question of whether we’re “violating civil liberties” in a legal or moral sense–and raise the separate question of whether in some cases we’re planting the seeds of our own future suffering. It isn’t in America’s interest for the only check on Attorney General John Ashcroft’s zeal to be negative feedback from judges.

Though 9/11 made Americans aware that in some sense the attitude of the world’s Muslims toward America matters, this fact has yet to enter foreign-policy debate very explicitly. This summer, in a big policy shift, President Bush demanded that Yasser Arafat step aside as Palestinian leader, even if he is elected to office by a majority of Palestinians. Bush made no counterbalancing demand of Israel, even though there is one demand–ending the construction of new settlements in the West Bank–that has the support of roughly every American who thinks about these things. Bush caught some flak on this count, but I’m not aware of a single pundit who put the criticism in its most elemental terms: The speech’s conspicuous asymmetry had in some intangible but real sense reduced America’s national security.

Maybe this possibility never crossed Bush’s mind. Months after 9/11, remember, he was still sticking by his early commitment to avoid involvement in the Israeli-Palestinian mess altogether. He abandoned that pledge only when he realized that, absent some progress on this front, he couldn’t win the support of Arab leaders for a war in Iraq that he deemed vital to American security. He didn’t seem to see that helping to solve the Palestinian issue would inherently add to American security, by denying Islamic anti-Americanism one of its major sources of fuel.

And, likewise, Bush doesn’t seem preoccupied with the reaction of Arab Muslims to an Iraqi war. Has anyone pointed out to him one big difference between this war and his father’s war? Back in 1991 Arab television was largely controlled by Arab governments that didn’t want the war to incite their people. Now Al Jazeera and other alternative broadcasters exist, and I’ve got a feeling that Saddam Hussein will have liberal access policies for their cameramen.

In the past, one common test of a piece of foreign or defense policy was whether it could be sold to the relevant government. If the Saudi government would swallow an ongoing contingent of American troops, as it did after the Persian Gulf War, then the deal was done. Of course, Osama Bin Laden’s reaction to those American troops–undergoing a kind of conversion experience that seems to have led eventually to 9/11–is something no one could have predicted. Still, someone could have pointed out that for foreign troops to be stationed in Islam’s holy land is sacrilege–not just according to Bin Laden, but according to some mainstream clerics. If we had realized this, and had grasped the rapidly growing importance of public opinion abroad, that would have counted heavily against this policy.

After an early terrorist response to America’s presence in Saudi Arabia–the 1996 truck bombing that killed 19 U.S. troops–American elites responded in time-honored fashion. In assessing the implications of this anti-Americanism, they focused largely on whether it could seize control of an actual government. A New York Times analysis concluded, “The consensus among outside experts and American officials is that the royal family maintains a firm grip on power and that Saudi Arabia’s fundamental alignment with the United States is unlikely to change.” That turned out to be true–but 9/11 still happened. 

Remarkably, even after 9/11, conservative pundits were still dismissing concerns about the opinion of “the Arab Street” since the street, however angry, never seemed to boil over and topple a regime. But those 19 hijackers started out on “the Arab Street,” and if “the Arab Street” weren’t full of hatred of America, the Twin Towers would still be standing.

Of course, that hatred has been building for a while. If you listed all the culturally and politically insensitive things America has done over the past two decades, you wouldn’t come close to accounting for all of it. Any good war-on-terror strategy must deal more deeply with “the roots of Muslim rage.”

 

FOUR: How Islamic Democracy Helps Us

Since Sept. 11, Princeton University historian Bernard Lewis has become, for American conservatives, the official interpreter of Islamic discontent. In a way this is ironic. As suggested by the title of his famously prescient 1990 Atlantic Monthly essay, “The Roots of Muslim Rage,” Lewis is interested in root causes, a subject conservatives tend to dislike.

They make an exception in Lewis’ case because of the roots he emphasizes: really old ones. By his lights, the basic problem is that a) Islamic civilization, which only a millennium ago was at the front of the pack, has been soundly beaten by western capitalism, whose modern values it finds alien; b) Islam, dating back to its founding, has been a particularly severe religion–so don’t expect Muslims to be gracious losers. From their point of view, writes Lewis, “what is truly evil and unacceptable is the domination of infidels over true believers.” We are seeing “the perhaps irrational but surely historic reaction of an ancient rival against our Judeo-Christian heritage, our secular present, and the worldwide expansion of both.”

Thus the roots of Muslim rage run so deep–lying in scripture and historical memory–that real-time policy is nearly powerless to affect them. Lewis hopes that moderate Muslims will win the struggle for Islam’s soul, but “we of the West can do little or nothing” to influence the outcome. So much for the insistence of lefties on addressing the various social and political irritants that top their list of root causes.

Yet, in a series of to-be-sure paragraphs preceding his rhetorical climax, Lewis actually grants credence to some items on the lefty list. In fact, parts of his analysis are arguably at odds with his prescription of benign neglect. In the next several installments of this series, I’ll draw on parts of Lewis’ analysis in building a plan of action. At the same time, I’ll take issue with one of his central themes: the enduring, nearly autonomous power of religious doctrine. I don’t think you can trace the origins of 9/11 back to Mohammed, and I don’t think we have to now wait patiently, hoping that the harshness of radical Islam will slowly mellow as the seasons pass. Which is fortunate, because we can’t afford to wait.

One prominent item on the lefty root-causes list is decades of American support for authoritarian regimes in the Muslim world. These regimes are “seen as reactionary by radicals, as impious by conservatives, as corrupt and tyrannical by both,” writes Lewis, and he acknowledges that their ties to America may indeed fuel Islamic anti-Americanism. (If he were writing the Atlantic piece today, he might add that American support of these regimes contrasts awkwardly with President Bush’s justifiable advocacy of democracy in Palestinian territories.)

The career of al-Qaida mastermind Ayman al-Zawahri–sometimes called “Osama Bin Laden’s brain”–illustrates how obliquely authoritarianism’s toxin can spill over onto America. As a teenager in Cairo, Zawahri joined the Muslim Brotherhood, which at the time was, according to the Wall Street Journal, “a relatively moderate but banned organization.” Thrown in jail, he mingled with more radical prisoners and converted to the more violent Egyptian Jihad, later becoming its leader. While floating from country to country after being driven out of Egypt, he linked up with fellow radical exile Bin Laden. In light of American support for Zawahri’s enemy, the Egyptian government, Bin Laden’s anti-American agenda had a natural appeal.

So, why doesn’t America push for freedom and democracy in the Muslim world? Why continue to stand by brutal authoritarians who dishonor our most cherished values and rev up our most hateful enemies?

The standard answer is a watered-down version of Franklin Roosevelt’s famous remark that, though Dominican Republic dictator Rafael Trujillo was an SOB, he was at least our SOB. The Saudi and Egyptian governments aren’t exactly our authoritarians, but they’re at least authoritarians we can do business with; they respond to time-honored incentives like money and power. Who knows how zealously unreasonable, even hostile, their democratically elected successors could be?

It’s true that democratic elections have the unsettling property of unpredictability–and that this unpredictability is especially unsettling, post-9/11, in Muslim nations. Nonetheless, I recommend Policy Prescription No. 5: Support free expression and, ultimately, democratization in authoritarian Arab and other Muslim states.

One reason is Proposition No. 5: The current phase in the evolution of information technology is anti-repression. This is just an extension of the previously noted tendency of plummeting information costs to ease the mobilization of interest groups, including dissidents. Authoritarian governments everywhere are going to find it harder and harder to hold down restless masses. Sooner or later, the Egyptian and Saudi regimes will either graciously usher in democracy or bitterly bite the dust. Why incur the enduring enmity of much of the Islamic world by defending them until their dying day? Besides, we’d be doing them a favor by steering them toward the “graciously usher in democracy” option since it’s less likely to lead to their deaths. (Though Egypt and Saudi Arabia are the standard focal points for such questions, the questions of course apply in other parts of the Islamic world, notably in such authoritarian Central Asian nations as Uzbekistan, which suddenly developed strong military ties to the United States as a result of its proximity to Afghanistan, and Kazakhstan, whose oil reserves are attracting American interest that is eerily reminiscent of the logic behind America’s unholy alliance with the Saudi regime.)

The Bush administration, which has generally resisted Prescription 5, may be in the midst of a turnaround. This summer an Egyptian court imprisoned the democracy advocate Saad Eddin Ibrahim, apparently on trumped-up charges. The administration at first responded meekly–and caught flak from both lefties and neoconservatives–but in August it raised its voice. In a high-profile move, it said the Ibrahim conviction ruled out any new aid for Egypt (beyond the annual aid resulting from the 1978 Camp David agreement with Israel).

That’s a start. We don’t have to demand elections tomorrow. For one thing, we wouldn’t get them; for another, a transition to full-fledged democracy may be easier if there’s time for institutions of civil society to start taking shape first. But so long as we can keep pushing states like Egypt and Saudi Arabia toward free expression and democracy, then a) intense young men will more and more have a non-terrorist outlet for their political energies; and b) to the extent that they do still feel frustrated, America will be less culpable for the frustration.

The great fear, of course, is that when democracy does arrive, it won’t be the final destination: Radical Muslims, once elected, will promptly restore autocracy. Or maybe, before reform even gets to the point of democracy, the radicals will have enough power to stage a revolution. Either way, we’d then be dealing with a zealous theocracy, like the one that took over Iran in 1979.

That’s certainly possible (though as Fareed Zakaria has noted, the most extreme Islamists have actually been losing support in many Muslim nations lately). But is this worst-case scenario–the Iran scenario–really much worse than what we’ve got now in a place like Saudi Arabia?

You can say this much for present-day Iran: It didn’t contribute any hijackers on Sept. 11. That’s ironic, given that the Iranians have a legitimate gripe with us: In 1953 we sponsored a coup that empowered a repressive monster (the Shah) who had numerous well-meaning Iranians tortured or killed.

So, why haven’t any big anti-American terrorist plots been hatched by Iranians? For one thing, thanks to the 1979 revolution, America is no longer backing their repressive monster. Iran, like Egypt and Saudi Arabia, has lots of angry people, but the ones who are angry at their government don’t have the United States to blame for it. In fact, since they want more moderate, perhaps even secular rule, the American way is as close to being the solution as the problem. Iran also has its share of fundamentalists, but since they feel politically empowered, they aren’t consumed by resentment, and what antipathy they have they focus on their moderate fellow Iranians, not on Americans.

Of course, the Iranian government has supported terrorists–Hezbollah and Hamas, for starters. Then again, Saudi Arabia has fomented its share of terrorism, if more circuitously, by massively funding Islamic extremists. True, the Saudis may be starting to clean up their act, post-9/11. But it’s far from clear that Iran couldn’t be coaxed into doing the same if President Bush moved it from the “evil” category into the “people you might be able to deal with” category.

Besides, however frustrating the longevity of Iran’s fanatical government, it a) grants more liberty than it did when founded and b) seems to be on its deathbed, with more modern, moderate government in the offing. As former CIA Director James Woolsey wrote in June in the Wall Street Journal, Iran’s theocracy is today where Soviet communism was in the 1980s: “still in power, but widely recognized as being rigid and unworkable.”

This is almost surely the fate of any theocracy that might emerge in Egypt or Saudi Arabia, because in both cases the new authoritarians would run into the same principle that worked on their behalf during their ascendancy–our old friend Proposition 5: The current phase in the evolution of information technology is anti-repression.

In Iran, this proposition is already showing that it’s a two-edged sword. In the 1970s, it helped Iranian revolutionaries as the Ayatollah Khomeini’s subversive sermons were passed around on audio cassette. This may seem like an archaic information technology now, but it was a revolution at the time, and in historical perspective it was just another step in the ever-cheaper, ever-richer transmission of information, a trend whose current incarnation is the Internet. And that incarnation is now turning on Khomeini’s heirs in Tehran. The number of Iranians online–put at 400,000 only last year–is now approaching 2 million, and many of these users are young, disdainful of government strictures, and inclined to communicate with others of their kind.

That information technology dooms authoritarians is a claim often made–by the “pro-engagement” faction in the China debate, for example–but often misunderstood. The point isn’t that a government can’t do anything to keep infotech from empowering its masses. Witness North Korea: no infotech, no empowerment. Rather, it’s that if a government wants to keep infotech from empowering its masses, it condemns its nation to poverty. Witness North Korea again.

And in the longest run, this is not a tenable policy. Witness the Communists throwing in the towel in China and Russia. Or witness North Korea’s own attempts at engagement with the West–ambivalent and spasmodic, yes, but real. Or witness Iran, where, as Nazila Fathi wrote in the New York Times recently, the government accepts the pluralizing peril of the Internet because it recognizes that “the Internet holds a wealth of scientific and technological information, and therefore promises progress.”

Once a regime concedes that modern information technology is a prerequisite for economic health, the genie is out of the bottle. In both Russia and China, authoritarian impulses endure, and freedom periodically suffers setbacks, but freedom has plainly grown over the past two decades. 

In short: In the modern technological environment, the Ayatollah Khomeini–or the Osama Bin Laden–model of governance is hideously unworkable. And if such a model is put on display in Egypt, Saudi Arabia, or elsewhere, that fact will become clear.

Putting a failed model on display is a proven means of learning. Communism isn’t dead because grievances against capitalism disappeared; communism is dead because it was tried and it failed. True, it took communism a disconcertingly long time to fail–more than half a century. But in the information age, the feedback comes faster.

Could the feedback phase still be long and pernicious, featuring theocratic regimes that funnel cash and weapons to terrorists? Yes. As Thomas Friedman has noted, this is especially true in oil-rich nations, which can fend off acute poverty without fully embracing high-tech capitalism in all its subversiveness. That’s why the transition has taken a while in Iran and could take a while in Saudi Arabia (as opposed to, say, oil-poor Egypt).

Should we have to endure such a transition, there are some rules we would have to make explicit–e.g., if you attack Israel, we attack you. But transgressions that extreme would be unlikely anyway. Attaining actual power has a way of sobering radicals up; upon inheriting an economy that depends partly on staying in the good graces of other nations, and realizing that their own grip on power depends on delivering some prosperity to their people, they often become reluctant to glaringly defy international norms.

In any event, notwithstanding the possibility of a messy and in some cases long transition toward modern, moderate government, this is a bullet we should bite. Remember Prescription No. 1: Take your bitter medicine early. If the Islamic world must go through a period of upheaval, if theocratic authoritarianism must temporarily flourish, and even aid and abet terrorism, let’s get it over with. However scary the thought of a Bin Laden protégé running a government, it will be a much scarier thought two or three decades from now, when bioweapons technology, in particular, has reached whole new levels.

By then any nations that aren’t responsible members of the global community will pose a much bigger threat to the globe than they do now. In the next installment, we’ll look at, among other things, another lever that can be used to move nations toward membership.

 

FIVE: Poverty and the Middle-Class Terrorist

In the aftermath of Sept. 11, various knee-jerk liberal columnists, including me, asserted that “poverty in the Islamic world” had nourished terrorism. Pesky observers highlighted an apparent problem with our theory: Many of the hijackers had come from the middle or upper classes of their societies. When you pursue a graduate degree in urban planning, as Mohamed Atta did, it’s safe to say that you’re not desperately trying to pull yourself up by your bootstraps.

One fallback position for liberals was that, though the terrorists themselves weren’t poor, their local constituents–the masses that exude moral support–were poor. This tactical retreat encountered complications when economists Alan Krueger and Jitka Maleckova published a paper on pro-terrorist sentiment. Using data from the Middle East, they found that not only were terrorists not likely to be especially poor; local supporters of terrorism were, if anything, more affluent and better-educated than average.

So, does this mean that poverty isn’t a big part of the problem? No. It just means it’s time for a second tactical retreat, to a more defensible position. Namely, Proposition No. 6: The problem isn’t poor people; the problem–or at least part of the problem–is poor nations. Terrorists may not be the poorest people in their nations, and they may not draw most of their support from especially poor people in their nations–but the nations they come from tend to be at the bottom of the world’s economic hierarchy.

Even the “poor nations” formulation is in a way misleading. Saudi Arabia, the world’s leading supplier of murderous hijackers, has a GDP that, though far below Western standards, isn’t down there with North Korea’s. Still, like all other nations that 9/11 hijackers grew up in, it is not part of the web of global prosperity. Saudi Arabia has an ossified, statist, protectionist economy whose essential link with the larger world is selling it black goo; it isn’t even a member of the World Trade Organization. Though it could once use oil to sustain a per capita GDP of $28,000, by September 2001, that number had fallen below $7,000, and the unemployment rate stood at 18 percent–higher for recent college graduates. Whether terrorists are middle class, like some of the hijackers, or lower class, like many al-Qaida foot soldiers, the ranks of the unemployed are prime turf for recruiting them.

Lack of opportunity is something Saudi Arabia shares with its poorer (in oil and hence GDP) neighbor Egypt, home of Mohamed Atta. In both nations, the private-sector outlet for creativity is so meager that a bright, ambitious young man might as well do graduate study in urban planning. In fact, by the time Atta the urban-planning-graduate-student turned to radical Islam in Hamburg, Germany, he had tried repeatedly, and failed, to get a good job in Egypt.

What does a smart, well-born guy like Atta do if he grows up in Western Europe or America or some other part of the globalized world? There’s a good chance he’ll wind up flying business class–doing deals with foreigners, and thus finding it hard to sustain the idea that any particular class of foreigners is evil. Even if he stays at some lower level of the business hierarchy, at least he’s off the streets. What’s more, he’ll imbibe the cosmopolitan ethos that trickles down from the top; his nation’s economy is richly interdependent with economies around the world, and it has a credo of intercultural tolerance that flows from this fact. So Proposition 6, in refined form, holds that Part of the problem is poor nations–or, at least, underglobalized nations.

A basic law of nature is that young males will seek status and recognition through locally available channels. The object of the game is to make those channels lead to contentment and, ideally, productive engagement with the world. Atta’s Egyptian channels didn’t. In Germany, he encountered an open and vibrant society, but the cultural barriers proved forbidding. In fact, the encounter may have only fed his inchoate bitterness. Years after he left for Germany, an old friend from Egypt ran into him; the friend, according to Time magazine, gathered that Atta “had made few German friends” and was “depressed about not having a career or a family back home.”

To say that the juxtaposition of Atta’s economically stagnant home and the vibrant but culturally alien West may have fueled his radicalism is to endorse a part of Bernard Lewis’ message. Resentment of Western superiority–economic, technological, military–is central to his explanation of Islamic discontent.

But Lewis is saying something more, and on this some of the policy implications hinge. He’s saying that people who grow up in these relatively poor Muslim nations aren’t just resentful, they are resentful on behalf of their religion. That’s why, in Lewis’ view, we face “a clash of civilizations.” (Yes, he beat Samuel Huntington to that phrase–it’s a subhead in his 1990 Atlantic Monthly piece.) And that’s why there’s little we can do about the problem–it’s rooted in deepest cultural memory.

Is this true? It’s true that the terrorists had come to identify deeply with Islam–and a particularly austere version of it–by the time they became terrorists. But it’s equally clear that they didn’t all start out with that intense identification. So far as anyone can tell, Atta, though already devout, didn’t become radically Islamic until he went to Germany. What seems to have happened is that personal resentments and frustrations, themselves products of economic and political forces, latched on to radical Islam as a congenial, affirming ideology, one that made the West a useful scapegoat. Religious ideas aren’t passed down through the generations inexorably, from one passive brain to another; in each generation they can be rejected, embraced, or amended, depending on how they mesh with people’s socially conditioned needs.

This isn’t to say that Islam isn’t in any sense part of the terrorism problem. Obviously, had radical Islam not been in the air, Atta would have found some other, presumably less lethal, outlet for his frustrations; he might even have vented them productively, by assimilating into the West rather than attacking it. But for radical Islam, he might today be chairman of the Hamburg Chamber of Commerce!

Still, it is wrong, and unduly depressing, to see the problem as being Islam in some large historical sense–to trace the origins of 9/11 all the way back to morally primitive Quranic passages that have supposedly poisoned the minds of men ever since the seventh century AD. The Holy Bible has passages every bit as morally primitive as anything in the Quran. (I’ve assembled a small sample.) The history of Islam, like the history of Christianity, is a history of people in some times and places focusing on morally primitive scriptures and people in other times and places focusing on loftier scriptures. Islam attained its economic and technological dominance during the Middle Ages in part by focusing on the loftier ones, extending a tolerance to Christians and Jews that, at the time, was on the cutting edge of intercultural understanding.

If we want to know why people’s interpretations of their own religious doctrines vary so much from decade to decade, we have to look at what is going on in the world around them. In the case of modern radical Islam, we find no shortage of explanations, ranging from economic stagnation to political repression to an American foreign policy that over the past few decades has paid roughly zero attention to Muslim opinion (unless you count the opinion of Muslims who happened to be in charge of armies or oil wells). What we don’t find is any sense in which religion is an exogenous variable, an autonomous force that floats above the social landscape and, generation after generation, mysteriously bends the minds of men to its will. (Is that last sentence a caricature of Bernard Lewis’ opinion? Well, yes. Click here for my more nuanced characterization and a corresponding critique.)

The view I’m advancing is, broadly speaking, a Marxist view–that religious beliefs are largely a function of underlying economic and political circumstances, as mediated by psychology. It’s also a hopeful view. Because it means we don’t have to figure out how to “change Islam”–a disconcertingly amorphous task, and one that would probably backfire. Lewis is right about the hopelessness of intervening at that level. We can instead intervene at the level of economics and politics, and if we’re successful, then the radical variants of Islam will lose support; radical “memes” will find fewer brains willing to host them. Hence, for example, Policy Prescription No. 6: Draw Islamic nations–and for that matter all nations–into the web of global capitalism.

This would have several benefits: 1) It would give young men an outlet for economic ambition, diverting them from radical pursuits. 2) It would give young men an outlet for political ambition by abetting pluralism; after all, global capitalism brings modern information technologies that are powerful tools of political expression and of interest-group formation. 3) It would expand person-to-person contact with the West in a natural, enduring way; when it comes to nurturing multicultural tolerance, there’s nothing like doing a mutually profitable deal with a foreigner. 4) It would expand the number of affluent Muslims who, by virtue of dependence on trade, have a stake in preserving world order against terrorist disruption, and in nourishing their country’s reputation as a stable place for foreign investment.

This last benefit is especially important. Ultimately, the war against Islamic terrorism has to be conducted within Islamic nations in order to be lastingly successful, and it has to be conducted in an organic, virtually unconscious way. Though there is short-term value in America’s using carrots and sticks to get rulers to fight the obvious kind of war–with wiretaps and arrests and shared intelligence–in the long run the war must be one of ideas, fought via the evolution of political, moral, and religious beliefs. A large chunk of the population has to see its interests aligned with order and international concord.

Even if this analysis is in some ways Marxist, and even if it rejects the implication of “benign neglect” that makes Lewis’ analysis so popular on the right, it will disappoint many leftists. Because one of its implications is Policy Prescription No. 7: Emphasize trade at least as heavily as aid in fighting the kind of economic deprivation that breeds terrorism.

Foreign aid is good for lots of things–like keeping people from starving, and fighting disease, both of which, as Gregg Easterbrook has noted, are their own reward. But one thing aid has generally not done, as the economist William Easterly showed in The Elusive Quest for Growth, is make a clear contribution to lasting economic development–to the market-driven modernization that tends to be lacking in terrorism-exporting countries.

Nor has aid dramatically contributed to the freedom and democracy that are also lacking in terrorism-exporting countries. In fact, when aid passes through the hands of dictators, a large chunk has a way of winding up in the hands of their cronies, consolidating autocratic rule. According to calculations reported by Bruce Bueno de Mesquita and Hilton Root in The National Interest, “every dollar of per capita foreign aid improves an incumbent autocrat’s chance of surviving in office another year by about 4 percent.”

This doesn’t mean aid will have no role in the war on terrorism. For one thing, some people, such as economist Joseph Stiglitz, contend that we’re learning more and more about how to make aid more conducive to growth and less conducive to kleptocracy. For another, one can imagine forms of aid that, regardless of their effect on GDP, specifically help fight terrorism. Those madrassas, the often-radical schools that are the only educational option for many poor parents in Pakistan, are begging for a subsidized alternative–schools that entice parents with free hot meals and medical care and don’t teach hatred (though here American funding, as opposed to funding from non-governmental organizations or multilateral institutions, might carry a counterproductive taint).

Still, if economic modernization is your goal, trade works more reliably than aid. As economists Jeffrey Sachs and Andrew Warner showed years ago, developing nations with the most open, least protectionist economies–the nations most plugged in to the global economy–grow the fastest.

Yet the West makes it hard for poor nations to fully plug in. Heavily subsidized agriculture in the United States and Europe stifles what is an important sector in virtually every developing country. (Egypt pretty much invented cotton farming, but the world won’t give its cotton farmers a fair shake.) Heavily protected textile markets also hurt lots of poor nations. In general, according to the World Bank, economically advanced nations levy tariffs against developing nations that are four times as high as the tariffs they levy against other advanced countries.

In this light, American policy after 9/11 was a study in how not to conduct a long-term war on terrorism. The United States denied General Pervez Musharraf’s pleas to open its textile markets to Pakistan as a reward for his vital support in the Afghanistan war. Instead, we handed him $1 billion in aid–aid that may help an ally maintain control in the short run but that probably won’t help Pakistan become less hospitable to terrorism in the long run and could even do the opposite, by slowing progress toward pluralism and ultimately democracy. Then, in June, Congress passed, and President Bush signed, a $100 billion subsidy to American farmers that a New York Times reporter called “one of the biggest reversals to free trade in decades.”

Now that Bush has been given “fast-track” trade authority, there’s a chance that the United States, and the West more broadly, will lower trade barriers to developing countries. Still, the administration’s focus is on trade deals with Latin America, a worthy goal, but certainly not worthier or more urgent than doing deals with Muslim countries.

Trade is no cure-all. Many Latin American countries that embraced market economics, while seeing real growth, have also seen rising income inequality (though the commonly repeated claim that, as nations globalize, “the rich get richer and the poor get poorer,” seems to be, strictly speaking, a myth). We have a lot to learn about easing developing nations along the path to modernity, as do the developing nations themselves.

Still, whatever the shortcomings of capitalist development, you didn’t see any Brazilian hijackers on Sept. 11. Latin American economies by and large provide economic opportunities for a would-be Mohamed Atta. And if they fail, and enough people get economically frustrated, there are democratic outlets for rage; their leaders are held in power by the ballot box, not by repression and an unholy alliance with the United States.

Of course, there’s always the possibility that the key difference between Brazil and Saudi Arabia is religion, not economics or politics. This explanation might be favored by those who put a Lewisesque emphasis on the power of religious doctrine. But if the problem is Islam per se, then how do you explain Turkey? It’s a Muslim nation, but in terms of exporting terrorists it ranks down around Brazil (even if one does come across the occasional Turkish terrorist). Maybe the explanation is that in terms of economic vibrancy and political freedom Turkey also ranks closer to Brazil than to Saudi Arabia (though Mustafa Kemal Ataturk, who nearly a century ago put Turkey on the path to modernity and showed that the character of a nation’s religion can change sharply within a generation, used a top-down, statist approach to get the ball rolling). This explanation could also apply to the 150 million Muslims in India, who by and large are much less sympathetic to radical Islam than nearby Pakistani Muslims; India is a paradise of economic and political liberty compared to Pakistan.

In light of such examples, it isn’t really necessary to finally resolve the debate about what role Islamic doctrine plays in inspiring terrorism. After Sept. 11, I argued in these pages against the “Islam is the problem” position held by many Lewis admirers (even if that phrase oversimplifies his own position). But suppose, for the sake of argument, that we stipulated that “Islamic doctrine” is a pre-condition for a certain species of anti-American terrorism. Examples such as Turkey and India would still show that, even if Islam is a necessary condition, it’s not a sufficient condition. And the point of these most recent two installments in this series is that, judging by the 9/11 hijackers, there are at least two other conditions that appear to be necessary–a lack of political freedom and a lack of economic opportunity. And by definition, if we successfully address any necessary condition, it won’t matter what other necessary conditions may or may not exist. Focusing on politics and economics will get the job done–and the moderation of radical Islamic doctrine, I maintain, will then take care of itself.

All this suggests that abetting globalization, and its natural concomitants of economic and political liberty, is a big part of any successful war on terrorism. Unfortunately, globalization also has some terrorism-abetting properties, a subject we’ll address in the next installment.

 

SIX: Does Globalization Cause Terrorism or Cure It?

Our unfolding prescription for a war on terrorism would seem to go with the flow of history. To encourage the democratization of Arab and other mainly Muslim nations is to ride in the slipstream of technological evolution, which at the moment has anti-autocrat tendencies. And steering nations toward economic modernization is largely a matter of tearing down trade barriers and letting capitalism do what it naturally does. The ongoing globalization of technology and commerce, it would seem, amounts to an autopilot anti-terrorism machine. Sounds almost too good to be true.

It is! Unfortunately, there’s Proposition No. 7: Globalization, though a large part of the solution, is also a large part of the problem.

As Bernard Lewis and others have pointed out, the modern world–featuring alcohol, satellite-beamed pornography, lapel-wearing alpha females–is an offense to traditional Islamic values. And globalization sticks modernization in the face of Muslims, whether they like it or not. Mohamed Atta didn’t have to go to Germany to see Hollywood movies or the Western skyscrapers that, in his view, scarred the landscape of Islamic architecture.

This clash of cultures, by itself, needn’t be a huge problem. Sure, the encroachment of modern values on traditional culture will create friction, including resentment and even disgust. But we’ve all felt resentment and disgust, yet few of us have killed people. Had it not been for Atta’s other issues, his economic and political frustrations, he probably wouldn’t have either. Right before his final radicalizing phase in Germany–when he apparently decided to go train with al-Qaida–he spent a few months in Egypt, where he failed to find a job and bridled at the government’s oppression of fundamentalist groups. He even saw a link between the two. His own sympathy for such groups, he felt, would keep him from getting hired by an Egyptian firm.

In short: If people everywhere had economic opportunity and political freedom, the clash of cultures that globalization brings would more often be endured without explosion. So, maybe globalization, to the extent that it’s part of the problem, is self-solving. By moving the world toward market economies and democracy, eroding the kinds of frustrations that pushed Atta over the edge, it defangs itself.

This, too, sounds too good to be true–and it, too, is. There are two main reasons.

First, in developing countries globalization can, in the short run, create economic frustrations very much like those that afflicted Atta. When traditional economies modernize, people’s skills become obsolete. Low-tech farmers can’t compete with modern conglomerates, shopkeepers lose business to chain stores, and both can wind up looking for work.

The work is usually there, but it may be a disorientingly different kind of work. The journalist Robert Kaplan has described some workplaces in the developing world as “polluted, grimy factory encampments” where migrants, freed from the norms of the rural village, are “assaulted by the temptations of the pseudo-Western city–luxury cars, night clubs, gangs, pornographic movies.”

Encouragingly, America underwent a similar transition a century ago and lived to tell about it. Back then, young men and women moved en masse from farms and small towns to cities, where they sometimes found dehumanizing workplaces, new corruptions and temptations, and few traditional moral tethers. (See, e.g., The Jungle and Sister Carrie.) But by World War II, Americans had largely equilibrated. They built stable urban neighborhoods and found new forms of community in social clubs and civic groups. And new legislation improved working conditions and empowered workers.

Still, it was a wild ride, featuring enough working-class discontent for genuine revolutionaries to briefly gain a foothold. Today parts of the developing world are taking an even wilder ride, going straight from pre-industrial agrarian lifestyles into the modern, globalized world, covering in decades a jump that the West took centuries to make. With traditional routes to status and community being rapidly redefined, there is bound to be some virulent discontent generated somewhere.

The second reason that globalization isn’t an autopilot anti-terrorism machine takes us back to Propositions 1 and 2, and the fact that big-time terrorist threats could increasingly come from somewhere other than the Muslim world. Like Montana, say. Globalization alienates some Americans and Europeans, especially rural and working-class people who feel victimized by foreign or immigrant labor or by job-killing workplace automation.

The sociologist Michael Kimmel has noted parallels between the frustrations of the American far right and those of Atta and other hijackers. American neo-Nazis and white supremacists feel “downward mobility and economic uncertainty” and are “emasculated by big money and big government.” Three years before bombing Oklahoma City, Timothy McVeigh wrote a letter to a newspaper blaming the American government for killing the American dream, leaving people scrounging to pay their bills. McVeigh, who had already given up on the private sector as his ticket to status (he dropped out of business college), still dreamed of being an elite soldier, but he would soon flunk out of Green Berets camp. The rest is history. Like Atta, he felt that his economy and his government had betrayed him.

In addition to the American far right, whose grievances are loosely tied to modernization and globalization, there are left-wing groups that cite globalization as an explicit grievance: environmentalists, labor activists, and so on. Most of these people are sane and safe, but almost every movement has a lunatic fringe, so antagonizing them further is not a recommended feature of an anti-terrorism strategy, given the increasingly lethal expression of discontent that technology will make possible.

In sum, we have a fact that is widely recognized but not generally linked to the war on terrorism, namely Proposition No. 8: Globalization has doubly bad short-term side effects, bringing transitional alienation to both developing and developed nations.

These transitional problems have no true cure; there’s a reason they call wrenching change wrenching change. But there is one thing that could at least dampen some of the alienation in both the developed and developing worlds. Namely, Policy Prescription No. 8: To blunt some of globalization’s sharper edges, carry political governance beyond the level of the nation-state, to the transnational level.

This approach to handling the turmoil of a modernizing and expanding economy has a good track record. A century ago in America, with industrialization roiling society and economic activity moving from the regional to the national level, progressive political leaders decided to regulate it at that level, with federal laws on interstate commerce–notably ones ensuring the safety of food and drugs–and, eventually, laws that empowered labor unions and brought dignity to the workplace. Among the benefits was convincing consumers that modernization was a good thing and blunting the appeal of Marxist revolution to workers.

Of course, one thing that made national laws feasible was the existence of a national government. When it comes to governing globalization, there’s no comparable entity. Still, the governance of globalization is possible, and in fact it is already starting to happen.

Some of it isn’t governance in a traditional sense. International nongovernmental organizations–such as the International Labor Rights Fund and the Lawyers Committee for Human Rights–have negotiated with Nike, L.L. Bean, and Liz Claiborne over wages, workweeks, and child labor in their overseas clothing factories. The companies agree to the resulting codes so that their clothes can sport labels attesting to humane working conditions.

Governance in a more traditional sense–featuring actual governmental bodies–is also possible at the international level but so far is pretty skeletal. Western labor unions would like to use the leverage of the World Trade Organization to upgrade foreign working conditions–whether with child labor laws or workplace safety standards or a guaranteed right to bargain collectively, or whatever. So far they’ve been foiled, but there’s no reason in principle that the WTO can’t address labor issues and even the transnational environmental issues that concern anti-globalization activists, thus evolving from a right-wing form of governance toward the center. (The Clinton administration set a small precedent by signing a bilateral deal with Jordan that was the first American trade pact to meaningfully address labor standards.)

In favoring better working conditions abroad, American union leaders aren’t being altruistic; they just have a competitive interest in raising foreign production costs. In fact, if they were massively successful, they’d price many foreign workers out of the labor market. But political reality–the clout that corporations carry in trade policy around the world–will preclude that degree of success; foreign production costs would at most grow modestly and incrementally. So, many more foreign workers would be helped than hurt by the improvement in working conditions–just as a slight increase in America’s minimum wage helps many workers while harming a few (the ones who don’t get hired as a result of the higher labor costs).

That many workers in the developed and developing worlds share an interest in elevating developing-world labor standards is promising. It means that the lobbying of transnational governance could someday draw foreign workers, including many Muslims, into international labor coalitions that would give them salutarily friendly contact with Western peers. And higher labor standards abroad–including safer, more humane workplaces–would have the separate virtue of making the face of globalization more appealing, less alienating, in the developing world.

The influence of global governance on Westerners would also be salutary. Letting American labor and environmental activists exert meaningful influence on globalization will help keep them from demonizing it. If those scruffy leaders of anti-globalization demonstrations could be put in suits and turned into lobbyists, that would be a major advance. But they can’t be transnational lobbyists unless there’s transnational policymaking to lobby.

In the early 20th century, when America started regulating its economy at the national level, free-market purists complained that this would slow the wheels of commerce. And it presumably did–but it also helped keep society calm and intact. Today, free-trade purists will similarly complain that even modest transnational regulation would slow the wheels of commerce. And it presumably would–but it, too, would have a stabilizing effect. (Free-trade purists who have read this series so far might add that even slightly slowing the inherently unsettling transition to stable modernity would seem to violate Prescription 1: Take your bitter medicine early. But when taking the medicine more slowly makes it less bitter, that’s another matter, and that may be the case here.)

Besides, it’s pretty much inevitable that some workers and environmentalists in the developed world will find some way to slow globalization down. In the worst case they would do it violently; people on the fringes of the labor and environmental movements would engage in trade-sabotaging terrorism. Alternatively, they would do it the way it has often been done in the past: by pushing for unilateral trade barriers. Preferable to both scenarios is letting them channel their energies into transnational governance. By participating in global politics, acting in concert with peers across cultural borders, they form some of the sinews that will make a true clash of civilizations less likely. 

Full-bodied global economic governance–such as adding meaningful regulatory dimensions to the World Trade Organization–may strike some as far-fetched and in any event unconnected to the here-and-now threat of terrorism. But if this series has one overriding theme, it’s that any good war-on-terrorism strategy has to be long-term, creative, and multi-faceted. The threats that could exist in 20 or 30 years are of a magnitude that today would strike many as unimaginable or at least highly unlikely. If some of the solutions I’m proposing seem unlikely, that may be appropriate.

And, anyway, lots of seemingly unlikely things have become real in the course of a few decades. Three decades ago it seemed highly unlikely that France and Germany would at the turn of the millennium share a single currency, not to mention a fairly dense fabric of common regulation. And three decades before that it was hard to imagine a time when war between France and Germany wouldn’t be a live possibility. But it’s all happened. International commerce on a continental scale–mini-globalization–has drawn implacable foes into a web of common interest, and the web has been sealed by transnational governance.

But enough inspiration! In the next installment we’ll get into another dark side of globalization.

 

SEVEN: The Mindless Altruism of Unilateralism

Why do so many Middle Eastern Muslims aim their dislike at America, when only 50 years ago Britain and France were the preferred targets? The standard foreign-policy explanations–American support for the Shah of Iran, for Israel, for current Arab authoritarians, and so on–have merit. But there is something bigger going on, too. Namely, Proposition No. 9: We are seeing, and will continue to see, the globalization of resentment. Thanks to television and other technologies, the world has become a small town, even a neighborhood, and America is by far the richest kid in it. Do you remember how you felt about the richest kid in your neighborhood?

There’s a chance that you liked him or her. It is not the ineluctable fate of rich kids to be resented by everyone. However, it is their fate to be resented pretty widely unless they comport themselves with careful attention to their inherent resentability. Here America is failing–and failure could really start to chafe as popularity, thanks to technological evolution, becomes more and more essential to national security.

During the 2000 presidential campaign, George W. Bush said something that, post-9/11, sounds prescient. He said the world’s most powerful nation ran the risk of being seen as arrogant; he pledged that under his leadership America would become “a humble nation.” Yet since he took office, America’s reputation for arrogance–painstakingly built up over decades by countless American politicians, tourists, and crybaby tennis stars–has only grown. “Today,” Michael Hirsh wrote recently in Foreign Affairs, “Washington’s main message to the world seems to be, Take dictation.”

Hence Policy Prescription No. 9: Honor President Bush’s pledge–make America a humble nation. Sometimes this will just be a matter of rhetorical fine-tuning. It would be nice if Bush started fewer sentences about the future behavior of other nations with “I expect …” and more with “It is America’s hope …” But for the most part, convincing the world of American humility will be a meatier endeavor. It will mean actually taking into account the views and interests of other nations, just as we expect other nations to take ours into account; it will mean providing cooperation as well as seeking it.  

The technical term for this, of course, is “multilateralism,” and to urge the Bush administration toward it isn’t exactly to stake out virgin op-ed turf. Bush has been scolded for unilateralism on issues ranging from global warming (rejecting the Kyoto Protocol) to war crimes (rejecting the International Criminal Court) to war (seeming to disdain U.N. authorization of an Iraq invasion). But the standard op-ed arguments undersell multilateralism’s virtues.

The routinely cited virtue is that if America cooperates with other nations, and takes their views seriously, they’ll be more likely to do the many things America asks in its war on terrorism, such as surveillance or extradition or freezing dubious funds; unilateralism, in contrast, will eventually leave America stranded, needing friends and having none. But this argument, while valid, overlooks a second problem with Bush’s unilateralism, and a subtle but deep contradiction in his foreign policy.

Bush typically justifies his coolness toward international agreements in terms of strict national interest. America, he says, would bear a disproportionate share of the burdens of these agreements; European and other nations would get off cheap and thus be, as economists say, “free riders.” This is in some cases true. Even Bill Clinton, liberal multilateralist, couldn’t reach a final agreement with Europe over how the costs of the Kyoto Protocol would be distributed. The free rider issue pervades multilateral negotiations, and any responsible president should worry about it.

But Bush’s repeated failure to even suggest alternatives to the high-minded treaties he rejects has the perverse effect of letting Europe be a free rider in a larger sense; he is exacerbating a long-standing American image problem that ultimately works to Europe’s benefit. Increasingly, in the iconography of globalization, America is the robber baron and Europe the conscientious reformer; or America is the bully and Europe the kind constable. So, Europe, though fully enjoying the benefits of globalization, incurs less than its share of the wrath that globalization (rightly or wrongly) arouses. When the French farmer and anti-globalization activist José Bové is looking for a building to vandalize, he chooses a McDonald’s. If his disciples someday inflict destruction on a larger scale, expect them to stick with the same nationality, unless America’s image undergoes the kind of make-over that the Bush administration is emphatically not engineering.

Of course, anti-globalization activists and radical environmentalists are not the terrorism threat du jour. But, as we’ve seen, they could be someday, as could any other group intensely disenchanted with the modern world. Besides, even when hatred and resentment of America don’t turn into terrorism, they still complicate the war against it, to the extent that they influence the policies of foreign governments whose cooperation America needs. And across the globe, mass opinion influences policy more and more powerfully, thanks largely to the democratizing or at least pluralizing effect of information technologies. (Obviously, America shouldn’t swallow a slew of dopey left-wing policies just to be popular. But in the case of global warming, as well as such hot-button issues as the export of hormone-treated beef and genetically modified foods, the current American positions could use at least some revising, even by the lights of mainstream economic theory.)

And, anyway, if this particular image problem–America as globalization’s id, Europe as its superego–sounds only speculatively connected to the war on terrorism, there’s another realm in which Bush’s unilateralism has let Europe play free rider, a realm with undeniable relevance to the war on terrorism. Namely: the war on terrorism. Intent on freedom of action, the Bush team often eschews meaningful alliance in the military part of this war and so winds up sealing America’s status as most hated nation in the Islamic world.

Bush’s idea of a judicious division of labor is that America drops almost all the bombs on Afghanistan–inevitably doing some “collateral damage”–and then, after the war, British troops come in and hand out most of the free food. America is Gen. Sherman, and England is Clara Barton. Similarly, the Bush administration feels that the United States–on its own, if necessary–should invade Iraq to help free the world of the threat of weapons of mass destruction. That’s generous, since doing so makes the United States more likely to be the target of any such weapons that nonetheless find their way into the hands of terrorists.

Hence Policy Prescription No. 10: Share the blame. Invite allies to participate more fully in the conspicuous application of violence. Let their planes drop more bombs. And whenever possible, get formal multilateral approval for military action. If we must invade Iraq, let’s at least try to provide Al Jazeera with some videotape of the French ambassador to the United Nations voting to authorize the invasion.

Administration Iraq hawks might laugh at this prospect: the French voting to back an invasion of Iraq? But if President Bush had taken advantage of the moral and political capital America possessed right after 9/11, he almost surely could have gotten the Security Council to authorize an Iraq attack with at least some degree of explicitness. To be sure, the attack would have been authorized to proceed only in the event that Iraq continued to rebuff U.N. weapons inspections. But let’s face it: As much as many Bush advisers would like to skip an inspections ultimatum and just cut to the regime change, invading Iraq won’t in any event be politically doable if Saddam Hussein unconditionally readmits U.N. inspectors. So, Bush might as well, all along, have cast his war plans as being on behalf of the U.N.-mandated weapons inspections, and thus on behalf of international law. Instead, by insisting on regime change regardless of the regime’s future behavior, and casting the war as part of a new doctrine of pre-emptive invasion, Bush has cast America as an international outlaw.

This sort of public-relations blunder is not what you’d expect from a man who promised to give America a worldwide reputation for humility. It’s what you’d expect from someone who hasn’t truly grasped how the growing importance of world opinion has recast the logic of international cooperation.

Admittedly, with Iraq Bush does face a dicey version of the free rider problem: The free riders don’t acknowledge that they’re free riders; European nations don’t believe–or at least don’t admit to believing–that they’ll benefit from a war against Iraq. But if Saddam Hussein is indeed as clear a threat to the whole world as the Bush administration claims, then this predicament arguably reflects a failure of pedagogy and world leadership on Bush’s part. (Besides, there may be ways of educating free riders even at this late date.)

Of course, we’re assuming here that the administration’s public position is its private one–that it honestly believes that Saddam Hussein is a threat to Europeans and Americans alike. The administration may in truth have a different view: that Europe is right to see the risks of inaction as low because Europe isn’t a likely target of large-scale terrorist attacks, anyway; it’s Americans who will die if Saddam doesn’t.   

In this view, war in Iraq wouldn’t entail a free rider problem; America would be invading Iraq single-handedly because America is terrorist enemy No. 1. But in that event, the logic behind war is a little circular. After all, the reason America is terrorist enemy No. 1 is that we keep doing things like invade Iraq. And even when we proceed in a more ostensibly multilateral fashion, we insist on doing all the conspicuous heavy lifting ourselves. According to standard accounts, it was because Osama Bin Laden saw American troops in Saudi Arabia after the Persian Gulf War–not French troops, not British troops–that he became obsessed with America and wound up destroying the World Trade Center.

This circular logic has a rough parallel in the case of the International Criminal Court. The administration fears the ICC because it thinks that the court would become a channel for worldwide anti-Americanism–that ICC prosecutors would unfairly single out Americans for prosecution. Yet one major source of this anti-Americanism is that America keeps refusing to do things like join the International Criminal Court.

The administration also has a more specific fear about the ICC: that past American officials could be prosecuted for such adventures as supporting the 1973 Chilean coup that ushered in the era of Augusto Pinochet. But here, too, the logic is broadly circular: We fear joining a multilateral legal system because in the past we’ve followed extralegal (a polite term for illegal) unilateral policies.

At some point we have to break the vicious circle and quit citing our past unilateralism, or its consequences, as the reason for avoiding future multilateralism. Because the globalization of resentment, combined with the growing downside of unpopularity, means that, more and more, we’re going to have to use multilateral institutions to diffuse wrath. Allies have many uses, and one of them is to absorb their share of shrapnel.

Suppose, for example, that we ever do find Osama Bin Laden alive. Presumably he’ll be tried in an American court. Where else would the Bush administration have him tried–in the International Criminal Court? But, actually, an ICC trial would better serve American interests. First of all, Bin Laden’s residual band of supporters would then have more trouble convincing potential recruits that his conviction had been an American miscarriage of justice. Second, the security nightmare accompanying the trial–complete with the quite real threat of high-casualty terrorism–would be The Hague’s problem, not New York’s.

And why shouldn’t it be? If indeed the war on terrorism is in the long run a campaign on behalf of global civilization–and it is–why should America shoulder all the burden? This seems like a reasonable question, but there’s no hard evidence that it’s ever occurred to anyone in the Bush administration.

The irony is that this administration prides itself on its cool rationality (an attitude especially evident in the three most influential players on the Bush foreign-policy team–Dick Cheney, Donald Rumsfeld, and Paul Wolfowitz). The Bushies dismiss multilateralism as a feel-good policy favored by large-hearted, woolly-minded liberals, people who would be out of their depth at a Rand Corporation game-theory seminar. Yet it’s the Bushies who are the inept game theorists. They’re failing to defend America against the parasitism of free riders. George W. Bush hasn’t made America humble, but he has certainly made it a gracious host–in the biological sense of the word.

 

EIGHT: Policing Weapons of Mass Destruction

President Bush sometimes casts his aspiration to invade Iraq as part of the war on terrorism. That’s plausible enough. Iraq probably has biological weapons and is presumably trying to make nuclear weapons, both of which could be given to terrorists. What’s not plausible is that this sort of invasion could work as a long-term strategy for keeping such weapons out of the hands of terrorists.

The Bush administration believes that at least six countries have secretly tried to develop biological weapons, including other “axis of evil” members North Korea and Iran. Presumably Bush isn’t planning to invade all six. Maybe he hopes that an Iraq invasion will intimidate other nations into swearing off biological weapons. But surely he wouldn’t trust them to keep their word. (And even if they offered to let Americans come in and verify their compliance, would that be enough for this administration? Various Bush officials, in justifying an attack on Iraq, have suggested that weapons inspections aren’t reliable.)

The problem goes deeper. Biological weapons, much more than nuclear weapons, can be developed without government support, since the equipment it takes to develop them is found in universities, hospitals, and pharmaceutical plants, among other places. In fact, if you’re making small quantities of some bioweapons–or are willing to make bigger quantities very laboriously–you can use hobbyist-sized fermenters and centrifuges that are available on the Internet. So, even if Iran’s government joined the axis of goodness, biological weapons could, unbeknownst to it, be made somewhere in Iran. Or they could be made in countries that already reside on that axis, such as America. The post-9/11 anthrax may well have been made in the United States (though it was too high-grade to have been made with amateur equipment).

So, what exactly does this administration plan to do, in the long run, about the proliferation of biological weapons? Your guess is as good as mine. All I know is that any solution would have to reckon with Proposition No. 10: The lines separating domestic policing and foreign policing, national security and international security, are rapidly blurring.

Consider: 1) Increasingly, as biotechnology advances and expands, foreign attackers can launch their attacks from within America, with weapons made in America. 2) Obviously, so can native-born terrorists. 3) Increasingly, lax policing by foreign governments of their hospitals, universities, pharmaceutical plants, and so on could imperil American national security. 4) There’s also, of course, the old-fashioned threat of foreign governments purposefully developing bioweapons and giving them to terrorists.

Before exploring this problem further, let’s make it slightly more terrifying. In addition to biological weapons, there may someday be “nanotechnological weapons” that share the key properties of the spookiest biological weapons: microscopic, self-replicating, and lethal. But in this analysis I’ll stick with the problem of bioweapons, since any solution to it would, broadly speaking, work for nanotechnology as well. As for nuclear weapons: I’ll skip them entirely (except to echo those who note that we’re doing a really bad job of securing loose nuclear materials around the world). The reason is that nukes–more cumbersome and conspicuous to make and deploy than bioweapons–are not, relatively speaking, all that challenging. If we can figure out a way to control bioweapons, then controlling nukes should be plenty doable.

The first step toward controlling biological weapons is to think big. In light of Proposition 10, it’s hard to imagine a secure America decades from now unless: 1) all the governments in the world are verifiably not making bioweapons that could be given to terrorists; and 2) all nations are being policed effectively enough so that it would be very hard for non-governmental agents to make such weapons. Obviously, this is a long way from the world we have now. In fact, it’s so far from that, and so far beyond the reach of the incrementalist policy proposals that get airtime in Washington, that getting there will require visionary leadership.

If President Bush is a visionary, he is slyly concealing this fact. His administration’s last headline-making initiative in bioweapons control was to annoy roughly the whole world by rejecting an arduously negotiated protocol that would have put some teeth in the Biological Weapons Convention, which bans bioweapons internationally. This rejection isn’t by itself unforgivable; a number of experts consider the protocol deficient, in need of revision if not overhaul. (It focused on the routinized monitoring of big, relatively easy targets, such as government labs and pharmaceutical plants, while doing little that would stop a small band of creative free-lance bioweapons makers.) What’s unforgivable is that the Bush administration–even after Sept. 11–suggested no strengthened version of the protocol. It just suggested invading Iraq.

What path might be followed by an administration more enthusiastic about arms-control agreements? I’d recommend two complementary strategies, one “bottom-up” and one “top-down.” Decades from now, after lots of trial-and-error evolution, these two approaches could converge on a single, coherent international policing structure that would be up to the challenge. Or maybe they wouldn’t, and millions of people would die instead. But I say we give it a shot.

A big part of the “top-down” strategy is Prescription No. 11: Develop a serious international inspection system for biological weapons. For starters, we could pick up the ball where the Bush administration dropped it–with an earnest attempt to put teeth into the toothless Biological Weapons Convention. That would include an unprecedentedly robust inspection regime. Any nation plausibly alleged to have biological weapons would be subject to short-notice inspection by an international body (a basic approach that America already subscribes to by virtue of having ratified the Chemical Weapons Convention).

A second part of the top-down strategy would be rules regulating the international shipment of biological cultures. There are nearly 50 germ banks around the world that make anthrax samples available for sale or exchange or giveaway. What do you have to do to qualify for shipment? Whatever the people with the anthrax say! Except for the 33 nations that belong to the strictly voluntary “Australia Group,” there is no uniform code governing the export of microorganisms, or for that matter the export of equipment that can be used to make bioweapons.

A standard gripe about all international weapons-control regimes is that the nations that participate in them tend to be nations that don’t need watching anyway. Just take a look at who isn’t part of the Chemical Weapons Convention, the one somewhat intrusive international inspection system the planet has mustered to date: Iraq, North Korea, Syria.

In the long run, such nonconformity is literally unacceptable. No nation can be safe decades from now if any nation remains opaque to international scrutiny. America and the world will have to develop a set of carrots and sticks that eventually makes international weapons control planetary in scope. The day may come when one of those sticks has to be war.

In this light, one regrettable thing about the administration’s threat to invade Iraq is that it hasn’t been deployed in the service of this cause. Iraq, in rebuffing weapons inspections mandated by the U.N. Security Council, is in violation not just of international law, but also of exactly the kind of international law whose violation the world, increasingly, can’t afford to tolerate. You wouldn’t know this to listen to the Bush administration’s pronouncements on Iraq (or, I should say, to listen to the dominant themes in the mélange of free-form administration utterances over the past few months). Rather than demand that inspectors be readmitted to Iraq, Bush officials have vowed to effect “regime change” regardless of Iraq’s behavior, bad-mouthing weapons inspections to justify their position. They’ve thus squandered a chance to shore up respect for internationally mandated inspections, which, though imperfect, will have to play a role in the future unless we plan to have wars on an annual basis. (Among the problems with such wars: Is telling a man thought to possess biological weapons that you’ll kill him no matter what he does really the optimal way to shape his incentive structure?)

Encroaching political reality may, even as I write, be forcing the administration to abandon its instincts and issue a weapons-inspections ultimatum before invading Iraq. Better late than never.

Fortunately, war isn’t the only persuasive tool the world can use to corral all nations into a global policing structure. The world also has an international trading system in which more and more nations are embedded and exclusion from which would mean serious trouble for any of them. Conveniently, that system has a formal embodiment–an organization whose members are guaranteed access to international markets. Hence Policy Prescription No. 12: Use the World Trade Organization as the fulcrum for ensuring compliance with international weapons-control law. Refuse to admit nations to the WTO if they don’t sign vital international treaties, and when WTO members violate a treaty–by, say, rebuffing inspectors–impose an automatically escalating set of penalties, in the form of rising tariffs, that culminate in expulsion (with expulsion understood to be a likely prelude to war).

This sort of economic leverage is getting more powerful and will probably keep doing so. More nations (e.g., China) keep joining the WTO, and more nations (e.g., Saudi Arabia) keep vowing to. And the more members the WTO has, the more valuable membership is, since membership brings favorable access to the markets of all members. Meanwhile, it’s getting harder for even the most die-hard Stalinist dictator to resist the lure of prosperity. As the globe becomes more of a village, leaders of backward countries have more trouble concealing from their people how the rest of the world lives. Even in low-tech North Korea, the cat is half out of the bag.

Purists will balk at “corrupting” an essentially economic organization with extraneous functions. Well, a) better corrupt than dead; b) this function isn’t really extraneous. Part of sustaining the international trading system is insulating it from terrorist disruption; if a single nuke makes it into some major port on a commercial barge, commercial barges will become a rarer, slower, costlier form of shipment. Those nations not willing to help keep the trading system secure don’t deserve to benefit from it.

If carrots such as the WTO fail, then sticks such as war will eventually be necessary. That’s how high the stakes will become as Proposition 2 makes its force felt. We have to resolve that, one way or another, we will reach a day, 15 or 20 or 25 years from now, when no nation is outside of the international policing system. Secure in that commitment, we can start now to set up an international inspection system and revise it through trial and error, even if the likeliest criminals are temporarily outside of it.

That same knowledge allows us to proceed with the “bottom up” approach to international weapons control. That is, Policy Prescription No. 13: Imagine how biotechnology would have to be policed in all nations for the United States to feel secure 20 years from now; implement and then continually refine that policing strategy in the United States, while beginning the long, laborious task of getting every other nation on the planet to eventually adopt a comparable system.

Here we should feel free to experiment with ideas that, at the moment, wouldn’t stand a chance of adoption on a global scale. For one thing, effective biotech policing in the United States would be its own reward, even if never emulated abroad. For another thing, the chances of emulation abroad will grow as the threat of biological terrorism becomes clearer. Today’s radical American proposals are tomorrow’s global op-ed staples. 

For example:

1) Recombinant DNA technology–whether in government or university labs or the private sector–will need to be heavily regulated. This is the kind of equipment you would use to create a nightmarish designer pathogen. If, for example, you took the Ebola virus, for which there is no vaccine, and made it as contagious as smallpox (for which there is, thankfully, a vaccine) and seeded several cities with the resulting germ, you could make the carnage of 9/11 seem trivial. This kind of genetic manipulation is not science fiction, a fact that makes the currently loose state of regulation in recombinant DNA labs a little scandalous. In the future, gene-sequencers and other recombinant DNA technology could be equipped with sensors and computers that reliably identify and indelibly record every user and every use. (This data could even be sent instantaneously to a remote location.) Possession of equipment that lacked such recorders, or whose recorders had been tampered with, would be a felony.

2) Lower-tech machines that can make such noncontagious bioweapons as anthrax in high volume–big fermenters in pharmaceutical plants, say–might be redesigned to include similar sensors and computers, with these, too, made mandatory.  As for the less sophisticated, less capacious versions of these devices, such as “desktop” fermenters or centrifuges: At a minimum, these should be regulated as heavily as firearms. Government computers could record all purchases, do background checks on purchasers, and look for suspicious patterns of purchase.

I could go on, but you get the picture. Experiment with bioweapons policy at the national level, the eventual goal being to make the best policies, in effect, international law: Every nation would agree to implement them and might even be subjected to “meta-inspections” or “meta-audits” by an international body as a check on national enforcement.

Getting all foreign leaders to accept strict international policing mechanisms may not be the biggest challenge. Getting America’s national leaders to accept them may be–in particular any intrusive short-notice inspection system. After all, America, as a party to any inspection system, would have to itself be open to the inspections. The current ideological cast of the White House, and for that matter the Republican party, doesn’t bode well for this prospect.

One of the major Republican dissenters on an Iraq invasion, House Majority Leader Dick Armey, doesn’t favor weapons inspections instead of invasion; he opposes the inspections for the same reason he opposes the invasion: They would violate Iraq’s sovereignty! And even less extreme sovereigntists, such as Donald Rumsfeld and Dick Cheney, opposed the Chemical Weapons Convention, whose inspection system is less intrusive than effective bioweapons inspections would be.  

What the sovereigntists don’t see is that the option of preserving our sovereignty isn’t on the table. If international inspectors can swoop down and inspect an American medical school, then, yes, America has in some sense lost sovereignty. But if a few well-educated terrorists working out of Amsterdam can easily start an epidemic that kills 500,000 Americans, then America has also in some sense lost sovereignty. Take your choice. I prefer the first kind of lost sovereignty. Some people may prefer the second kind. But anyone who thinks we can skip that choice, and preserve sovereignty in some across-the-board sense, is, in my opinion, confused.

Is President Bush confused? A case could be made. On the one hand, he definitely doesn’t favor lots of Americans dying in a bioweapons attack. On the other hand, his aversion to arms-control treaties is deep and abiding. This year, he reached a nuclear-arms agreement with Russia that was essentially meaningless. It would compel no actual destruction of nuclear arms, but rather their temporary decommissioning, and even that requirement would literally expire the day it took effect, 10 years from now. (I’m not kidding.) Even so, Bush insisted that the agreement be strictly verbal; there was something about the act of signing an arms accord that he couldn’t stomach. Finally, under the prodding of Colin Powell et al., he agreed to actually sign a meaningless document. In a sad testament both to Bush’s congenital unilateralism and to the impotence of token administration multilateralist Powell, a Financial Times op-ed piece later listed that act of persuasion as one of Powell’s most important accomplishments as secretary of State.

International arms control is important both as a specific tool and as a larger metaphor. The subordination of national behavior to international law will have to happen in various policy areas in order for the war on terrorism to succeed. So long as any single nation is a haven for terrorists to park their funds in, or their hackers in, or themselves in, the rest of the world, notably including us, will be in trouble.

The good news for ardent sovereigntists is that often the solidification of international law won’t much affect American law (and, strictly speaking, will never supersede it). As we cajole other nations into tightening their policing of money laundering, or their policing of hackers, we’ll largely be converting them to policies that already exist in America. But even here we’ll need to think big and treat international law as something to be nurtured, not shunned or ridiculed.

So, the Iraq issue is among other things a valuable microcosm. Over the past few months, in President Bush’s aversion to seeking a U.N. mandate, in his aversion to giving Iraq a final weapons-inspection ultimatum, he has revealed an indifference if not a hostility to nurturing a robust, enforceable system of international law–and in exactly the area where victory against terrorism most demands it. That global and domestic politics may now have cornered him into recalibrating his position doesn’t change what this says about his basic attitude. It’s an attitude that is not an ideal feature in someone leading a war on global terrorism. One might even go so far as to suggest that America won’t seriously wrestle with the terrorism problem–in all its dimensions–until there is regime change in Washington.

But that’s for voters to decide. My job is just to write a series of articles on how to fight a real war against terrorism. The final installment will appear tomorrow.

 

NINE: How the War on Terror Can Make Us Better People

One nice thing about the policy prescriptions I’ve laid out in this series (if I do say so myself) is that if we follow them all, and they succeed, we’ll do a lot more than just win the war against terrorism. We’ll give the planet a major upgrade–spread democracy and prosperity and turn all nations into responsible members of the world community.

That’s the good news: Whereas winning regular wars means wreaking death and destruction, winning the war on terrorism means ushering in an era of global concord. The bad news is that, whereas wreaking death and destruction is demonstrably practical, there’s no hard evidence that global concord is possible. 

But however idealistic and unlikely this goal sounds, I’m afraid some close approximation of it is required for anything like true victory. Thirty years from now, if there is a single nation that isn’t carefully regulating biotechnology, and cooperating with the larger international regulation effort, we’ll all be in trouble. And if there is a single nation that provides safe haven to cyber-terrorists or money launderers, that too will be a non-trivial problem. And if some of the governments that are cooperating on these fronts are authoritarians whose repression we tolerate in exchange for their cooperation (sound familiar?), that indulgence may come back to haunt us. After all, given the elusiveness of biological (and possibly nanotechnological) weapons, even the best global policing effort won’t be airtight. So, large pockets of thwarted political aspiration will still have the potential to morph into the occasional burst of massive lethality. Similarly, it will be bad news if large patches of the planet are left out of the global economy; even the poor will more and more be able to electronically view the planet’s privileged classes and conceivably work up explosive resentment.

In short: A few decades from now, there will need to be a “global civilization” in which both words are literally accurate–a planetwide community of mutually cooperative nations, bound by interdependence and international law, whose citizens are accorded freedom and economic opportunity. This is the goal we’re forced toward by some of the creepier aspects of technological evolution: ever-more-compact, ever-more-accessible, ever-more-lethal munitions, and the ever-more-efficient crystallization of interest groups, including hateful ones, via information technology. History seems to be pushing us toward idealism with an awful realism.

This idealism explains the ambitious array of policies I’ve said we should pursue and the large number of traditional interest groups we’d have to resist in the process. If we follow all the prescriptions in this series, we’ll do outrageous things like kill the farm lobby’s subsidies, tell the textile lobby to take a hike, and alienate dictators that our oil companies are fond of. (Among the little things I haven’t had time to mention is that it would also be nice to conserve energy, thus cutting our reliance on these dictators and leaving us freer to alienate them.) We also have to resist the cheaply patriotic rhetoric of sovereignty fanatics, ranging from quasi-isolationists like Pat Buchanan to economic nationalists like Ralph Nader to unilateralists like Dick Cheney and Donald Rumsfeld. All these people oppose at least some part of the interlocking system of transnational governance that could help congeal global civilization.

There are lots of ways to lose the war on terrorism. One, as the previous paragraph suggests, is to proceed normally–gratify the standard interest groups and the easy sentiments. Another is to create a “war of civilizations” by adopting the perspective of people who believe we’re already in one. According to these people, wherever there are terrorists who are Muslims, there are enemies of America, and they should be treated as such. Thus we must stand by China in its war against Muslim separatists in Xinjiang province, even though their separatist aspirations aren’t historically grounded in radical Islam, and even though, in an authoritarian nation like China, it’s hard to imagine how people could express separatist aspirations without breaking the law. If we follow this course, the self-fulfilling prophecy will work like this: As we declare war on various Islamic groups that are only marginally concerned with America, these groups will grow more opposed to America and more united in that opposition, until we indeed have something like a “war of civilizations” on our hands. (Two things make this trap especially seductive: Information technology will increasingly empower separatist groups–a subject that is worth pondering if you have time; and many governments, including China and Russia, would love to get America to help fight their separatists and usefully divert some of their separatists’ wrath–a point that Zbigniew Brzezinski has acutely made.)

What if we do fail in our war on terrorism? What if, for whatever reason, we don’t create an orderly, peaceful, reasonably contented world? What if instead the America-haters only grow in number and intensity? Actually, a fallback strategy would be available, but it’s not very attractive; it’s “homeland defense” with a vengeance.

In principle, technology permits much tighter monitoring of day-to-day life than we’ve seen or contemplated even since 9/11. Anonymous transactions, for example, needn’t be legal. We can do away with cash, thus linking your name to every purchase you make, every toll you pay. This not only would make it easier to catch terrorists after the fact but would also let government computers constantly scrutinize patterns of transaction to pre-emptively single out rich surveillance targets (in which case you should let a decent interval elapse between buying, say, a book by Edward Said and a copy of Soldier of Fortune magazine). We can also turn mail carriers and meter readers into de facto deputies who take a good long look when you answer the door. That this idea has been proposed and rejected doesn’t mean it wouldn’t be embraced should hatred grow and some fraction of it find predictable expression in terrorism.

The point I’m making is a familiar one that is justly considered depressing: The price paid for security is liberty. But there’s a larger point I’ve been trying to make throughout this series, and it’s more upbeat: This famous trade-off between security and liberty isn’t ironclad. There is a third variable that can recalibrate the trade-off: the amount of discontent and hatred in the world. The less of that there is, the more secure we can be without a big sacrifice of liberty. It’s the trade-off among these three things–security, liberty, and antipathy–that is ironclad. This iron triangle is our future predicament, for better and worse.

Another way to say this is that, increasingly, our fortunes are correlated with the fortunes of people around the world. To the extent that these people are intensely unhappy, then we will be less secure, or less free (take your choice)–but in any event less happy.

More than a century ago Herbert Spencer wrote, “No one can be perfectly happy till all are happy.” Even now, that’s an oversimplification. One reason is that there is a small subset of people whose fortunes are inversely correlated with ours: people like Osama bin Laden, people who have already committed their lives to terrorism. The more we frustrate them, the sadder we make them (or the more dead we make them, when we can do that at acceptable cost), the better off we’ll be. In this series I’ve said relatively little about this part of the war on terrorism because it’s a fairly obvious necessity. What’s less obvious is that a) the people who have committed themselves to terrorism are a small subset of the people who potentially could; and b) given how much deadlier the technology of terrorism will be in 20 years, we should work hard to keep these would-be terrorists in the “would-be” category. The happier they are, the happier we’ll be.

In a sense, there is nothing new here. The basic direction of history has been to make the fortunes of people at ever-greater distances more closely correlated, both for better and for worse. The Silk Road meant that merchants in the Middle East and East Asia could both gain through interaction. But such trade routes also meant that an epidemic that started in Asia could be bad news not just for Asia but for Europe as well–as was, indeed, the Black Death. Today the correlation of fortunes spans not just Eurasia but the whole planet: mutually profitable commerce, mutually lethal disease, mutually destructive hatred–whatever. Never before has discontent on some street halfway around the world been so capable of becoming such bad local news so rapidly.

That it has taken this threat for us to start paying attention to the welfare of people halfway around the world isn’t something to be proud of, but it’s not something to be ashamed of, either. That’s just the way people are. In the Flannery O’Connor short story “A Good Man Is Hard To Find,” a crotchety, self-centered old woman is held at gunpoint by an escaped convict. She suddenly becomes sensitive, sympathizing with him and inquiring into the roots of his alienation. After killing her, the convict remarks, “She would of been a good woman, if it had been somebody there to shoot her every minute of her life.” Wouldn’t we all be good–less self-centered, more empathetic–if our lives depended on it? Stay tuned.

Historically, humankind has often managed sooner or later to pick up on this logic. In a great little book called The Expanding Circle, the philosopher Peter Singer documented the moral progress on this planet over the past few millennia. Around 2,500 years ago, Greeks–the very acme of enlightenment at the time–considered non-Greeks essentially subhuman (which was progress; there had been a time when citizens of one Greek city-state considered citizens of another practically subhuman). Today in America we consider people of all races, nationalities, and religions human and deserving of basic human rights.

My explanation (not Singer’s) of this expanding moral circle is that it reflects simple interdependence of various kinds, particularly the economic kind; to do mutually profitable business with the Japanese, we have to accord them basic respect. I think that’s why intercultural tolerance is by and large more common in advanced, globally interdependent economies than in less-developed nations. And that’s one big reason that (to concisely summarize Parts 4 and 5 of this series) I want to make the less-developed nations more developed–more developed economically and politically and hence–in a sense–morally.

In this view, moral progress is directly rooted in technological progress. Technological advances, ever since the Stone Age, have correlated the fortunes of people at ever-greater distances. (That is, technological progress has put people in more long-distance “non-zero-sum” relationships, if you want to describe this historical trajectory technically, as I’ve been known to do). And the result is a growing interdependence that translates enlightened self-interest into an expanding circle of moral consideration. That brings us to Proposition 11: The force is with us.

Isn’t that a load off your mind? Unfortunately, the force sometimes works chaotically. The moral progress that Singer describes has hardly been continuous. And, ominously, one episode of backsliding came in a time, like this one, of revolution in information and munitions technology. In the 16th century the printing press–which, like the Internet, radically lowered the costs of political organization–helped split the Western Christian church in half and, combined with gunpowder, ushered in the “wars of religion” of the late 16th and early 17th centuries. The press would also fuel nationalist movements that eventually managed to wrest independence from empires; and the wresting was often not done peacefully.

In the long run, of course, Europe was knit back together. Today, once-mortal enemies, including mainly Catholic and mainly Protestant nations, are enmeshed in the European Union, and war between them is unthinkable. Indeed, the same technology that had helped tear Europe apart–the printing press–helped mend it, eventually forging a pan-European consciousness and lubricating the international capitalism that can mute historical antagonisms by making states economically interdependent.

But that’s meager consolation to all the Europeans slaughtered in the wars of religion, or for that matter in the two world wars. And to us it is meager consolation that someday, a few centuries from now, “global civilization” will probably deserve that name, however many catastrophes are required to drive home the compelling logic behind it. 

Could Europe have averted some of the chaos brought on by the age of print? Suppose that the pope had grasped the pluralizing import of the printing press back in the 16th century and had gracefully made reforms to accommodate the restive masses. Or suppose that four centuries later, on the eve of World War I, the rulers of the Austro-Hungarian empire had realized that to keep suppressing Balkan nationalism in the age of print wasn’t practical. Could World War I have been averted?

The premise of this series is that the answer to such questions is in principle yes. Proposition 12: Understanding where technology is moving us in the long run can save us lots of short-run turmoil. Or, to put Proposition 11 in refined form: The force is with us, but only so long as we see and respect its power.

I promised at the outset of this series that some of my propositions would be “cosmic.” In this regard, at least, you can’t say I’ve let you down.

 

©2003 MSN Slate Magazine of Microsoft Corporation.


Robert Wright is a visiting scholar at the University of Pennsylvania, and the author of The Moral Animal and Nonzero: The Logic of Human Destiny.

February 27th, 2003

In researching Limits to Growth, I came across this interesting article. It is a brief extract from a much longer article, entitled ‘The Ecology of Sustainable Development’ by William Rees, which originally appeared in The Ecologist.


A Sunshine Limit to Growth

William Rees

The Second Law of Thermodynamics states that in any isolated system, available energy and matter are continuously and irrevocably degraded to the unavailable state. Since the global economy operates within an essentially closed system, the Second Law (the entropy law) is actually the ultimate regulator of economic activity.

‘Any form of economic activity dependent on material resources therefore contributes to a constant increase in global net entropy’

All modern economies are dependent on fixed stocks of non-renewable material and energy resources. The Second Law therefore declares that they necessarily consume and degrade the very resource base which sustains them. Our material economies treat other components of the biosphere as resources, and all the products of economic activity (that is both the by-products of manufacturing and the final consumer goods) are eventually returned to the biosphere as waste. Thus, while we like to think of our economies as dynamic, productive systems, the Second Law states that in thermodynamic terms, all material economic ‘production’ is in fact ‘consumption’. Any form of economic activity dependent on material resources therefore contributes to a constant increase in global net entropy (disorder), through the continuous dissipation of available energy and matter. It follows that contrary to the assumptions of neo-classical theory:

  • There is no equilibrium in the energy and material relationships between industrial economies and the biosphere;

  • Sustainable development based on prevailing patterns of resource use is not even theoretically conceivable.
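Rees’s argument can be stated compactly. The notation below is my own gloss, not his: splitting the total entropy of the (effectively isolated) biosphere–economy system into an economic part and an environmental part, the Second Law reads

```latex
% Second Law: total entropy of an isolated system never decreases.
\[
  \frac{dS_{\text{total}}}{dt} \ge 0,
  \qquad
  \Delta S_{\text{total}}
    = \Delta S_{\text{economy}} + \Delta S_{\text{environment}}
    \ge 0 .
\]
% An economy can build local order (lower its own entropy) only by
% exporting at least as much entropy to its environment:
\[
  \Delta S_{\text{economy}} < 0
  \;\Longrightarrow\;
  \Delta S_{\text{environment}} \ge -\,\Delta S_{\text{economy}} > 0 .
\]
```

In other words, every act of material “production” that creates order inside the economy necessarily dumps at least as much disorder into the biosphere, which is the formal content of the two bullet points above.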

The thermodynamic interpretation of the economic process therefore suggests a new definition of sustainable development which contrasts radically with present practice: sustainable development is development that minimises resource use and the increase in global entropy.

Eco-systems, unlike economic systems, are driven by an external source of energy – the sun. The steady stream of solar energy sustains essentially all biological diversity and makes possible the diversity of life on earth. Through photosynthesis, living systems concentrate simple dispersed chemicals and use them to synthesise the most complex substances known. Thus, in contrast to economic systems, eco-systems steadily contribute to the accumulation of concentrated energy, matter and order within the biosphere. In thermodynamic terms, photosynthesis is the most important materially productive process on the planet and it is the ultimate source of all renewable resources used by the human economy. Moreover, since the flow of solar radiation is constant, steady and reliable, resource production in the ecological sector is potentially sustainable over any time scale relevant to humanity. Ecological productivity is limited, however, by the availability of nutrients, photosynthetic efficiency, and ultimately the rate of energy input (the ‘solar flux’) itself. Eco-systems therefore do not grow indefinitely. Unlike the economy, which expands through resource conversion and positive feedback, eco-systems are held in ‘steady-state’ or dynamic equilibrium by limiting factors and negative feedback.

‘Eco-systems, unlike economic systems, are driven by an external source of energy – the sun’

The consumption of ecological resources everywhere has begun to exceed sustainable rates of biological production. Nearly 40 per cent of terrestrial net primary productivity (photosynthesis) is already being used or co-opted by humans, one species among millions, and the fraction is steadily increasing.

‘Biosphere resources are becoming increasingly scarce and there are no substitutes’

At present, markets do not even recognise such factors as nutrient recycling, soil building, atmosphere maintenance and climate stabilisation as resources. Thus, while market economics can usually price the scarce material inputs to manufacturing, it is virtually silent on the value of biosphere processes. Not surprisingly, it is these more critical resources that are becoming increasingly scarce and there are no substitutes.

‘Any human activity cannot be sustained indefinitely if it uses not only the annual production of the biosphere (the ‘interest’) but also cuts into the standing stock (the ‘capital’)’

Clearly, any human activity dependent on the consumptive use of ecological resources (forestry, fisheries, agriculture, waste disposal, urban sprawl onto agricultural land) cannot be sustained indefinitely if it uses not only the annual production of the biosphere (the ‘interest’) but also cuts into the standing stock (the ‘capital’). Herein lies the essence of our environmental crisis. Persistent trends in key ecological variables indicate that we have not only been living off the interest but also consuming our ecological capital. This is the inevitable consequence of exponential material growth in a finite environment. In short, the global economy is cannibalising the biosphere.

This means that much of our wealth is illusion. We have simply drawn down one account (the biosphere) to add to another (material wealth). It might even be argued that we have been collectively impoverished in the process. Much potentially renewable ecological capital has been permanently converted into machinery, plant and possessions that will eventually wear out and have to be replaced at the cost of additional resources.

Heilbroner has noted that the origin of surplus in the era of industrial capitalism ‘has gradually moved from trade through direct wage labour exploitation toward technological rents, and that modern-day profits consist of combinations of all three.’ We can now add a fourth profit source to Heilbroner’s list: the irreversible conversion of biological resources.

For human society, carrying capacity can be defined as the maximum rate of resource consumption and waste discharge that can be sustained indefinitely without progressively impairing ecological productivity and integrity. The corresponding maximum human population is therefore a function of per capita rates of resource consumption and waste production.

‘Hence we are within one population doubling of the ‘sunshine limit’ to growth and at present rates will reach that limit in 35 years’

Through a thermodynamic analysis of food production, Bryson has estimated that about 900 square metres of cropland are required to produce the average per capita food energy requirements assuming year round cropping. With an average growing season of only 180 days, each hectare of agricultural land will theoretically support about 5.5 people. The present world population density is about 3 persons per arable hectare. Hence we are within one population doubling of the ‘sunshine limit’ to growth and at present rates will reach that limit in 35 years.
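The arithmetic behind the ‘sunshine limit’ is easy to check. Here is a minimal sketch in Python using the figures quoted above (900 square metres per capita with year-round cropping, a 180-day growing season, 3 persons per arable hectare today). The 1.7 per cent annual growth rate is my own illustrative assumption, chosen because it roughly reproduces the article’s 35-year figure; it is not a number from the text.

```python
import math

# Bryson's figure as quoted: 900 m^2 of cropland feeds one person
# if cropping is possible year-round.
M2_PER_PERSON_YEAR_ROUND = 900.0
GROWING_SEASON_DAYS = 180.0      # average growing season
M2_PER_HECTARE = 10_000.0

# With only 180 productive days per year, each person effectively
# needs about twice the land.
effective_m2 = M2_PER_PERSON_YEAR_ROUND * (365.0 / GROWING_SEASON_DAYS)

people_per_hectare = M2_PER_HECTARE / effective_m2
print(round(people_per_hectare, 1))          # 5.5, matching the article

# Present density is about 3 persons per arable hectare, so the limit
# is less than one population doubling away (3 x 2 = 6 > 5.5).
current_density = 3.0

# Years to reach the limit under exponential growth at rate r:
# solve current_density * exp(r * t) = people_per_hectare for t.
# (The 1.7%/yr rate is an illustrative assumption, not from the text.)
r = 0.017
years_to_limit = math.log(people_per_hectare / current_density) / r
print(round(years_to_limit))                 # 35, matching the article
```

The calculation confirms the internal consistency of the quoted figures: the 5.5 persons-per-hectare ceiling and the 35-year horizon both follow from Bryson’s inputs, given a growth rate in the range that prevailed when the article was written.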

It should be understood that while human society depends on many ecological resources and functions for survival, carrying capacity is ultimately determined by the single vital resource or function in least supply. (On the global scale, loss of the ozone layer alone could conceivably lead to the extinction of the human species.)

Such considerations call seriously into question the Brundtland Commission’s route to sustainable development through a five-to-ten-fold increase in industrial activity. Indeed, they force a reconsideration of the entire material growth ethic, the central pillar of industrial society.


William E. Rees, Ph.D., is an Associate Professor of Planning and Resource Ecology, University of British Columbia, School of Community and Regional Planning.

How Many Earths are Enough? by William Rees 

Toward Environmental Citizenship — a presentation by William Rees with synchronized visuals

 

February 26th, 2003

Twenty years after Limits to Growth made such a splash on the intellectual landscape, the authors updated their data and revisited the subject in a new book. The following is the preface to that second book, with an introduction, and, following the preface, a comment on the book from Context Magazine, published in 1992.



” “Grow or die,” goes the old economic maxim. But in 1972 a team of systems scientists and computer modelers challenged conventional wisdom with a ground-breaking study that warned that there were limits – especially environmental limits – to how “big” human civilization and its appetite for resources could get. Beyond a certain point, they said in effect, the maxim could very well be “grow and die.”

“That same team of researchers (minus one) has just released an historic update to The Limits to Growth. The new book – Beyond the Limits: Confronting Global Collapse, Envisioning a Sustainable Future – is instant must-reading. The authors use updated computer models to present a comprehensive overview of what’s happening to the major systems on planet Earth and to explore probable futures, from worst- to best-case scenarios. The book is rigorously scientific, yet very engaging, and it is especially well-suited to educational settings. We strongly recommend it to our readers, and present the Preface here.

“Donella H. Meadows is a systems scientist and journalist who teaches at Dartmouth College, as well as an IN CONTEXT contributing editor. Dennis L. Meadows is a Professor of Systems Management and directs the Institute for Policy and Social Science Research at the University of New Hampshire. Jørgen Randers, a policy analyst and President Emeritus of the Norwegian School of Management, is Chairman of the Norwegian Bank for Industry, the Norwegian Institute for Market Research, and Åke Larson, AS.”


Beyond the Limits

Donella H. Meadows, Dennis L. Meadows, and Jørgen Randers

Twenty years ago we wrote a book called The Limits to Growth. It described the prospects for growth in the human population and the global economy during the coming century. In it we raised questions such as: What will happen if growth in the world’s population continues unchecked? What will be the environmental consequences if economic growth continues at its current pace? What can be done to ensure a human economy that provides sufficiently for all and that also fits within the physical limits of the Earth?

We had been commissioned to examine these questions by The Club of Rome, an international group of distinguished businessmen, statesmen, and scientists. They asked us to undertake a two-year study at the Massachusetts Institute of Technology to investigate the long-term causes and consequences of growth in population, industrial capital, food production, resource consumption, and pollution. To keep track of these interacting entities and to project their possible paths into the future we created a computer model called World3.
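The stock-and-flow dynamics such a model explores can be sketched in miniature. The following toy model is emphatically not World3 — every equation and parameter in it is an invented illustration — but it shows how the basic mechanism works: exponential growth running against a depleting resource base produces a pattern of overshoot and decline.

```python
# A toy stock-and-flow model in the spirit of system-dynamics models
# like World3. This is NOT World3: all equations and parameters below
# are invented for illustration only.

def simulate(years=200, dt=1.0):
    population = 1.0        # arbitrary units
    resource = 100.0        # finite resource stock, arbitrary units
    history = []
    for _ in range(int(years / dt)):
        availability = resource / 100.0          # fraction of stock remaining
        births = 0.03 * population               # constant birth rate
        # Death rate rises as the resource grows scarce (floor caps the rate).
        deaths = 0.01 * population / max(availability, 0.05)
        consumption = 0.5 * population * availability
        population += (births - deaths) * dt
        resource = max(resource - consumption * dt, 0.0)
        history.append((population, resource))
    return history

history = simulate()
peak_population = max(p for p, _ in history)
final_population = history[-1][0]
# Population first grows exponentially, then peaks and declines
# as the shrinking resource stock drives the death rate up.
```

The instructive point is that collapse here is not imposed from outside; it emerges from the feedback between growth and the resource stock, which is the kind of behavior the World3 runs were designed to expose.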

The results of our study were described for the general public in The Limits to Growth. That book created a furor. The combination of the computer, MIT, and The Club of Rome pronouncing upon humanity’s future had an irresistible dramatic appeal. Newspaper headlines announced:

A COMPUTER LOOKS AHEAD AND SHUDDERS

STUDY SEES DISASTER BY YEAR 2100

SCIENTISTS WARN OF GLOBAL CATASTROPHE

Our book was debated by parliaments and scientific societies. One major oil company sponsored a series of advertisements criticizing it; another set up an annual prize for the best studies expanding upon it. The Limits to Growth inspired some high praise, many thoughtful reviews, and a flurry of attacks from the left, the right, and the middle of mainstream economics.

The book was interpreted by many as a prediction of doom, but it was not a prediction at all. It was not about a preordained future. It was about a choice. It contained a warning, to be sure, but also a message of promise. Here are the three summary conclusions we wrote in 1972. The second of them is the promise, a very optimistic one, but our analysis justified it then and still justifies it now. Perhaps we should have listed it first.

1. If the present growth trends in world population, industrialization, pollution, food production, and resource depletion continue unchanged, the limits to growth on this planet will be reached sometime within the next 100 years. The most probable result will be a sudden and uncontrollable decline in both population and industrial capacity.

2. It is possible to alter these growth trends and to establish a condition of ecological and economic stability that is sustainable far into the future. The state of global equilibrium could be designed so that the basic material needs of each person on earth are satisfied and each person has an equal opportunity to realize his or her individual human potential.

3. If the world’s people decide to strive for this second outcome rather than the first, the sooner they begin working to attain it, the greater will be their chances of success. (Meadows et al., 1972)

To us those conclusions spelled out not doom but challenge – how to bring about a society that is materially sufficient, socially equitable, and ecologically sustainable, and one that is more satisfying in human terms than the growth-obsessed society of today.

In one way and another, we’ve been working on that challenge ever since. Millions of other people have been working on it too. They’ve been exploring energy efficiency and new materials, nonviolent conflict resolution and grassroots community development, pollution prevention in factories and recycling in towns, ecological agriculture and international protocols to protect the ozone layer. Much has happened in twenty years to bring about technologies, concepts, and institutions that can create a sustainable future. And much has happened to perpetuate the desperate poverty, the waste of resources, the accumulation of toxins, and the destruction of nature that are tearing down the support capacity of the earth.

When we began working on the present book, we simply intended to document those countervailing trends in order to update The Limits to Growth for its reissue on its twentieth anniversary. We soon discovered that we had to do more than that. As we compiled the numbers, reran the computer model, and reflected on what we had learned over two decades, we realized that the passage of time and the continuation of many growth trends had brought human society to a new position relative to its limits.

In 1971 we concluded that the physical limits to human use of materials and energy were somewhere decades ahead. In 1991, when we looked again at the data, the computer model, and our own experience of the world, we realized that in spite of the world’s improved technologies, the greater awareness, the stronger environment policies, many resource and pollution flows had grown beyond their sustainable limits.

That conclusion came as a surprise to us, and yet not really a surprise. In a way we had known it all along. We had seen for ourselves the leveled forests, the gullies in the croplands, the rivers brown with silt. We knew the chemistry of the ozone layer and the greenhouse effect. The media had chronicled the statistics of global fisheries, groundwater drawdowns, and the extinction of species. We discovered, as we began to talk to colleagues about the world being “beyond the limits,” that they did not question that conclusion. We found many places in the literature of the past twenty years where authors had suggested that resource and pollution flows had grown too far, some of which we have quoted in [our] book.

But until we started updating The Limits to Growth we had not let our minds fully absorb the message. The human world is beyond its limits. The present way of doing things is unsustainable. The future, to be viable at all, must be one of drawing back, easing down, healing. Poverty cannot be ended by indefinite material growth; it will have to be addressed while the material human economy contracts. Like everyone else, we didn’t really want to come to these conclusions.

But the more we compiled the numbers, the more they gave us that message, loud and clear. With some trepidation we turned to World3, the computer model that had helped us twenty years before to integrate the global data and to work through their long-term implications. We were afraid that we would no longer be able to find in the model any possibility of a believable, sufficient, sustainable future for all the world’s people.

But, as it turned out, we could. World3 showed us that in twenty years some options for sustainability have narrowed, but others have opened up. Given some of the technologies and institutions invented over those twenty years, there are real possibilities for reducing the streams of resources consumed and pollutants generated by the human economy while increasing the quality of human life. It is even possible, we concluded, to eliminate poverty while accommodating the population growth already implicit in present population age structures – but not if population growth goes on indefinitely, not if it goes on for long, and not without rapid improvements in the efficiency of material and energy use and in the equity of material and energy distribution.

As far as we can tell from the global data, from the World3 model, and from all we have learned in the past twenty years, the three conclusions we drew in The Limits to Growth are still valid, but they need to be strengthened. Now we would write them this way:

1. Human use of many essential resources and generation of many kinds of pollutants have already surpassed rates that are physically sustainable. Without significant reductions in material and energy flows, there will be in the coming decades an uncontrolled decline in per capita food output, energy use, and industrial production.

2. This decline is not inevitable. To avoid it two changes are necessary. The first is a comprehensive revision of policies and practices that perpetuate growth in material consumption and in population. The second is a rapid, drastic increase in the efficiency with which materials and energy are used.

3. A sustainable society is still technically and economically possible. It could be much more desirable than a society that tries to solve its problems by constant expansion. The transition to a sustainable society requires a careful balance between long-term and short-term goals and an emphasis on sufficiency, equity, and quality of life rather than on quantity of output. It requires more than productivity and more than technology; it also requires maturity, compassion, and wisdom.

These conclusions constitute a conditional warning, not a dire prediction. They offer a living choice, not a death sentence. The choice isn’t necessarily a gloomy one. It does not mean that the poor must be frozen in their poverty or that the rich must become poor. It could actually mean achieving at last the goals that humanity has been pursuing in continuous attempts to maintain physical growth.

We hope the world will make a choice for sustainability. That is why we have written our book. But we do not minimize the gravity or the difficulty of that choice. We think a transition to a sustainable world is technically and economically possible, maybe even easy, but we also know it is psychologically and politically daunting. So much hope, so many personal identities, so much of modern industrial culture has been built upon the premise of perpetual material growth.

A perceptive teacher, watching his students react to the idea that there are limits, once wrote:

When most of us are presented with the ultimata of potential disaster, when we hear that we “must” choose some form of planned stability, when we face the “necessity” of a designed sustainable state, we are being bereaved, whether or not we fully realize it. When cast upon our own resources in this way we feel, we intuit, a kind of cosmic loneliness that we could not have foreseen. We become orphans. We no longer see ourselves as children of a cosmic order or the beneficiaries of the historical process. Limits to growth denies all that. It tells us, perhaps for the first time in our experience, that the only plan must be our own. With one stroke it strips us of the assurance offered by past forms of Providence and progress, and with another it thrusts into our reluctant hands the responsibility for the future. (Vargish, 1980)

We went through that entire emotional sequence – grief, loneliness, reluctant responsibility – when we worked on The Club of Rome project twenty years ago. Many other people, through many other kinds of formative events, have gone through a similar sequence. It can be survived. It can even open up new horizons and suggest exciting futures. Those futures will never come to be, however, until the world as a whole turns to face them. The ideas of limits, sustainability, sufficiency, equity, and efficiency are not barriers, not obstacles, not threats. They are guides to a new world. Sustainability, not better weapons or struggles for power or material accumulation, is the ultimate challenge to the energy and creativity of the human race.

We think the human race is up to the challenge. We think that a better world is possible, and that the acceptance of physical limits is the first step toward getting there. We see “easing down” from unsustainability not as a sacrifice, but as an opportunity to stop battering against the earth’s limits and to start transcending self-imposed and unnecessary limits in human institutions, mindsets, beliefs, and ethics. That is why we finally decided not just to update and reissue The Limits to Growth, but to rewrite it completely and to call it Beyond the Limits.

References

Donella H. Meadows et al., The Limits to Growth (New York: Universe Books, 1972).

Thomas Vargish, “Why the Person Sitting Next to You Hates Limits to Growth,” Technological Forecasting and Social Change 16 (1980), pp. 179-189.


From Context Magazine : “Unlike most scientific books about our relationship to the environment, Beyond the Limits is willing to break several taboos. For one, it is willing to be optimistic – to say that it is possible to overcome the manifold obstacles between here and sustainability. For another, it is willing to use words like “love” and “revolution,” meaning by the latter not a violent uprising but an historical transformation, not unlike that from agricultural to industrial civilization.

“What are the elements of the sustainability revolution? They go beyond good information, new technologies, democratic participation, and sound policy. The authors close their book with a description of five “tools” that are generally not mentioned in most supposedly “serious” studies of what we must do: visioning, networking, truth-telling, learning, and – as they explain in this excerpt – loving.”

The following paragraphs are excerpted from the Conclusion of Beyond the Limits.


Love and the Revolution

One is not allowed in the modern culture to speak about love, except in the most romantic and trivial sense of the word. Anyone who calls upon the capacity of people to practice brotherly and sisterly love is more likely to be ridiculed than to be taken seriously. The deepest difference between optimists and pessimists is their position in the debate about whether human beings are able to operate collectively from a basis of love. In a society that systematically develops in people their individualism, their competitiveness, and their cynicism, the pessimists are the vast majority.

That pessimism is the single greatest problem of the current social system, we think, and the deepest cause of unsustainability. A culture that cannot believe in, discuss, and develop the best human qualities is one that suffers from a tragic distortion of information. “How good a society does human nature permit?” asked psychologist Abraham Maslow. “How good a human nature does society permit?”

… It is difficult to speak of or to practice love, friendship, generosity, understanding, or solidarity within a system whose rules, goals, and information streams are geared for lesser human qualities. But we try, and we urge you to try. Be patient with yourself and others as you and they confront the difficulty of a changing world. Understand and empathize with inevitable resistance; there is some resistance, some clinging to the ways of unsustainability, within each of us. Include everyone in the new world. Everyone will be needed. Seek out and trust in the best human instincts in yourself and in everyone. Listen to the cynicism around you and pity those who believe it, but don’t believe it yourself.

Donella H. Meadows, Dennis L. Meadows, and Jørgen Randers


Beyond the Limits to Growth at Amazon.com

Limits to Growth at Amazon.com

Revisiting Limits to Growth: Could the Club of Rome Have Been Correct? (PDF) by Matthew Simmons

Fair Warning? The Club of Rome Revisited

The Club of Rome