August 2004 to January 11, 2005

January 11, 2005

Two Tyrants Are Not Better Than One

Peace in the Sudan! The end of a long and horrible civil war.

Don’t believe it.

The settlement is basically a power-sharing agreement. John Garang, the leader of the southern insurgency (SPLA), gets to be vice-president. The SPLA gets a formally dedicated share of government posts and representatives.

The Sudanese state itself is fundamentally left intact in its structures, functioning and norms. It’s just that the armed rebels get their place at the already-set table. There’s a commitment to allow the south to vote on independence six years from now, after being governed autonomously in the interim. The south is formally exempted from the imposition of shari’a. There’s a formula for sharing oil revenues.

First, despite a lot of celebratory talk, it should be evident to every observer that this is hardly the most stable arrangement. The hope obviously is that both parties will feel they have too much to lose if power-sharing returns benefits to both, particularly six years down the line when the south is supposed to vote on its long-term future.

Maybe. That sort of calculus has paid off from time to time in places like Mozambique where peace produced economic dividends and the insurgencies rested on relatively shallow social and organizational foundations to start with.

In this case, I don’t see a lot of reason for thinking that the elites of Sudan’s north will willingly share fifty-fifty, and equally little reason for thinking that John Garang and his associates will resign themselves to whatever plunder comes their way.

Which points to the core problem here: that a pact between Ali Osman Taha and John Garang is a bargain between two unprincipled and tyrannical leaders, neither of whom has much interest in the liberty and prosperity of the ordinary people within their territories. The roots of the conflict lie deep in societies that share relatively little sense of being part of a single nation, but also in some rather depressingly similar philosophies of rule, in which states exist substantially as a system for vectoring wealth to autocratic elites and their cronies and not as a guarantor of political, economic and social freedom. Maybe Taha and Garang will be satisfied with their share of the take, and their followers satisfied with whatever distributions of that take they get.

That might provide a measure of peace, but none of justice, and thus no lasting security. All it will take for this agreement to vanish into vapor is one party demanding more than the other thinks it is due, or for some group far down the patronage chain of either side deciding that they’re not getting their fair share and using all-too-plentiful weaponry to demand a larger cut of the plunder. Probably most outside interests will drop a few pennies in the national coffer by way of a reward for the settlement, but that will last only as long as the Sudanese civil war is remembered by anyone but the Sudanese, which is to say probably for another six months or so, until some other conflict or problem commands the restless attention of media, diplomats and publicity-driven NGOs.

The root problem here is shared by both sides: a conception of nation that exists only as a tool for elites to mobilize forces on behalf of their own local parasitism.

Neither Garang nor Taha has anything approaching an authentic vision of society as something that exists beyond their own benefit, something to which they must defer.

Neither is a custodian of something larger than themselves, save for the networks of clientage to which they must answer. Nowhere in this accord will you find a blueprint for the reimagination of state-society relations anywhere in Sudan, north, south or otherwise.

Two pirate ships may come to agreement about which sides of an island they intend to plunder. Two mob families may agree that one of them will extort money out of garbage removal and the other out of construction. Those agreements rest on the honor of thieves, which is a weak foundation in any event. Even when they hold, they don’t bring peace to the victims. They just make their sufferings more predictable and manageable.


January 5, 2005

Reading the Web still brings me so many surprises and discoveries that I can’t imagine finding them any other way. So reading this phobias thread at Obsidian Wings, having been directed there by John and Belle Have a Blog, I’m amazed to find out that quite a few other people share a feeling that I thought was my own peculiarity: I find it very difficult to watch films or television shows that turn centrally around the embarrassment or humiliation of normal or decent people, even (or especially) when it’s done in the context of comedy. I find it really painful most of the time: I’d rather watch videos of surgery. I had no idea that other people felt that way.

There’s a really odd review of Alias by Virginia Heffernan in this morning’s New York Times. It’s really not about Alias: it’s a sort of free-floating expression of unapologetic hatred towards comic books and men who like comic books. I started reading it idly and then more closely, wondering who exactly had pissed in her corn flakes, and where this was all coming from. Ok, yes, it’s good not to just have the same-old boring polite snobbery about popular culture from the Times critics, I suppose, but if the alternative is aggressively bilious snobbery instead, maybe I’ll settle for boring. I want Elvis Mitchell back.

January 5, 2005

Back just in time to go off to the American Historical Association meetings. If I never move again, I'll be happy: each time it gets worse because of accumulating stuff and accumulating decrepitude--I really felt all those boxes of books this time in my lower back. But it feels great to own our house. Yes, it feels great, even when the pipes burst in the garage ten minutes after I moved the last box over. Even when we paid the bill to have the house interior completely repainted. Even when the tub upstairs developed a painfully slow drain and I had to whip out the DIY manual to see what I could do about it. Er, I think it feels great.

Anyway, regular blogging will resume here on Monday when I get back from Seattle. Here's my first post-move entry, however.

January 5, 2005

On the Occasion of Your Catastrophe

It’s a small side story amid the big and horrible details of the Asian tsunami, but I have read with some interest attempts by various experts to assess the social impact of the disaster in the nations affected and to place the catastrophe in global and historical perspective.

Mostly what strikes me is how little an event like this perturbs the established discursive formulations of expertise, how easily it can be plugged into a prepackaged argument or perspective, even including the criticisms of the American government's slow response, which I think are largely fair.

So, for example, there have been social scientists, some experts in the affected nations, others with more global portfolios, predicting that if the various governments in the area did not respond with effective disaster assistance, they could face serious political consequences from angry citizens further down the road. Certainly there are examples of this happening elsewhere, and it's not a wildly incorrect assessment. There’s nothing wrong with a critique of the Indonesian government’s historic corruption, for example, and it’s absolutely correct to argue on behalf of more effective, free and responsive government wherever and whenever the circumstances allow. However, it smacks of canned or opportunistic use of a big news story to just roll out these kinds of statements at this point.

There’s a sort of brutal specificity missing from a lot of the expert assessments circulating in the American media. For example, how much more alienated can the citizens of Banda Aceh and its surrounding province get from the Indonesian state, given that the province has been the center of a long-term separatist conflict? Perhaps it’s an opportunity for the Indonesian government to put that conflict to rest by delivering relief effectively, sure, but the specifics of the situation are such that I wouldn’t care to predict any political outcomes from the effectiveness of aid in that particular place. Nor, as at least one article accurately observed, is there likely to be any major economic impact to almost any of the nations involved, just to the communities that have been destroyed. The honest fact is that the coastal areas hit are relatively peripheral to the national economies they sit within, or that their economies are classic enclave economies, as in the tourist areas of Thailand that were affected. For those who work and live within the tsunami's reach, the impact is enormous. For those who live well beyond it, politically or economically, it's not terribly relevant. If it's an important moment in those nations, or in the history of global society, it has to be important for other reasons.

Many of the historians and environmental scientists trying to look at the big picture don’t fare much better, in my view. There have been quite a few arguments floated that this catastrophe demonstrates how vulnerable to natural disaster modern cities and communities have become because of the places contemporary humans tend to build in and because of the building materials they tend to use. This is a fair comment when you’re talking about hurricanes hitting the coasts of the southeastern United States. It’s a reasonable comment when you’re talking about the shoddy construction standards for modern buildings in some earthquake-prone parts of the world, though if we’re talking about a place like Bam in Iran, that’s a city which was hard-hit in a 2003 earthquake partly because its buildings were as much premodern as modern in design and materials.

In the big picture, however, this whole presentation looks again more like experts using a specific event in the news, any event, to try and bolster established arguments for their own favored policies, in this case policies on zoning, construction and planning, policies that I think they favor for reasons which have relatively little to do with prudent planning for natural disasters.

To say that modern human societies are more vulnerable to catastrophic disaster is empirically wrong. Across the longue durée of the last three millennia of human experience, for example, it was much more common for entire cities to be completely wiped out by fire than it is today. Humans in the past built in floodplains just as much as they do now, perhaps more so. They lived in earthquake-prone areas, and built large and fragile cities in them. They engaged in forms of agriculture and construction with very high environmental risks. Their elites constructed monumental sites and cities that required massive amounts of menial human labor and concomitant death and suffering, and sometimes saw those same monuments abandoned or destroyed when those who were enslaved to build them rebelled. Human history is full of catastrophic destruction, and the finality of catastrophe was far greater in the past than in the present. Modern states, even inefficient ones, possess a host of practical tools for the mitigation of catastrophe that no premodern society, even the wealthiest, could deploy. There were no international relief agencies or concerned journalists or charitable donations to come to the rescue in the vast majority of past disasters. Just the dead and the living.

An argument that surfaces at this point counseling much more stringent controls on construction and habitation so as to minimize vulnerability to disaster is especially inappropriate in response to this particular catastrophe. Hurricanes might arguably be affected by global warming, and our building patterns have made us more extensively vulnerable to them, but earthquakes and tsunamis are one class of catastrophe with no relationship to human alteration of the environment. Coastal communities of fishers and merchants in premodern societies around the Indian Ocean would have been just as devastatingly affected by this tsunami (and were affected by several past ones that we know about). There's one important exception, and that’s tourism. Beach tourism is a distinctively modern phenomenon—but even the advocates of far tighter controls on human construction and living patterns aren’t calling for an end to all leisure travel to locations which might have particular vulnerability to catastrophe.

The difference in many cases is not that modern human societies are more vulnerable to catastrophic destruction or damage, but simply that the numbers of people involved in contemporary disasters are much higher. This tsunami might have killed 25,000 people rather than 150,000 if it had happened in 1650. That has nothing to do with where or how we live; it is simply that there are many more of us living, and so potentially dying, when disaster strikes.

That we feel those numbers so devastatingly has to do with the way that modern states constantly and persistently enumerate their populations—so that the tally of the dead circles like lightning around the globe, to be compared with all the other unimaginably large social quantities that we try to keep in our heads. Premodern societies did not have the mechanisms or the conceptual desire to count people in the same way, and did not understand catastrophe in the numerical terms that we are so accustomed to in the 21st Century.

We feel those numbers also because we live at the other end of the revolutionary impact of liberalism, the inheritors of a belief in the individual meaning and worth of every human life. We are everywhere, even in authoritarian states, enveloped by legal and social institutions which give individual lives at least notionally a structured importance and gravity. Death and suffering have been part of human experience from its beginning to the present day, but the human meanings and felt importance of death in the past were different, sometimes strikingly so. I do not think that many premodern societies conceptualized catastrophe the way that we do, even though they were just as affected by it as we are, often more so. The devastatingly painful stories of individual loss that serve as our collective route into this disaster, that allow us to relate its enormity to our everyday lives, are distinctively modern, a mark of our age. That makes us feel as if catastrophe affects us more, worse, and in a way it does. Not because we build more, or build in the wrong places, or have a flawed relationship to our environment. It affects us more in the 21st Century because of a change in meaning, in sentiment, in consciousness, in the infrastructure of human subjectivity.


December 9, 2004

Apologies for light blogging of late: we are moving in a week and a half to our new house, and this is occupying a lot of my time.

December 9, 2004

Cuckoo's Egg

When I was working on my first book, I spent some time reading a series of 1930s-era reports from “Jeanes teachers”, African men and women who were hired by the Rhodesian state and a private American foundation to train other Africans in agriculture and domesticity. I’m paraphrasing something that one of the teachers wrote with evident frustration after many of her child-care lessons were rejected by the women in her assigned area: “They say that none of their grannies did these things and yet here they all are. What am I to say to that?”

We have to be mindful that such assertions about “tradition” were part of fast-moving, highly mobile cultural and social struggles in colonial southern Africa. My own interpretation of the Jeanes teachers’ reports is that their assigned subjects were not so much objecting to the teachings as much as the teachers, that older men and women resented the authority of young, educated men and women who were not locals. But also the locals were in many cases pragmatically noticing that some of the advice being dispensed was of questionable value. Not just Jeanes teachers but all variety of European and European-educated African missionaries and teachers, for example, persistently suggested that living in square houses was better than living in round houses. The square house was a symbol for them of a “civilizing process”, of a transformation of their subjects. Showing skepticism about that project strikes me as simple common sense: most of the rural villagers must have seen how arbitrary and purely symbolic this counsel was.

There is something about the entire global practice of modern social reform and its relationship to both civil society and the nation-state that inspires or ought to inspire similar appropriate skepticism everywhere. I was initially inclined in my work on these questions in African history to identify this skepticism with colonialism, but once I worked on the history of controversies over children’s television, I began to see some larger patterns, and now see better still.

I’ve been reminded of this by reading Laura at 11D discussing Mary Eberstadt’s Home-Alone America, a book which I’ve now taken a look at myself, though much less diligently than Laura. Laura's excellent critique pretty much speaks for my own reaction. What I’d like to observe is that the sins of Eberstadt are common, and moreover, tend to recur on both the right and left ends of the political spectrum. They are most marked when the discussion is about children, childhood, domesticity and the family. The problem isn’t just the very weak kind of social science that Laura accurately nails—the confusion between correlation and causation—but something deeper as well.

The deeper problem is even found in much more respectable, careful kinds of social science. It’s roughly the same problem that Deirdre McCloskey has identified in a number of writings about economics: that the “secret sin” of economics is its sleight-of-hand when it comes to its claims about the significance of a given problem or finding. Significance gets reduced to statistical significance, but making a philosophically or politically vigorous argument about the relative importance of a particular problem gets outsourced as being somebody else’s problem.

This is in some ways what African men and women in 1930s Southern Rhodesia were saying to the Jeanes teacher. Not necessarily, “I don’t believe you when you say that this or that thing that we do is a problem,” but “I don’t believe that the problem, if it exists, is a very important one. I think that what you want us to do instead is more hassle than it is worth”. When you look carefully at a great many studies and books that claim to diagnose pressing social or public problems, you frequently find that the evidence at hand, even when it is much more carefully arranged than Eberstadt’s, suggests that the “effect size” of the problem is very small.

In many ways, this kind of social criticism aims to tackle huge, complex problems that it acknowledges to be huge and complex by acts of incremental subtraction. Take away one small contributing factor, the implicit argument goes, and you reduce the general problem by that much. But huge social problems, even when almost all of us concede that the problem is real and significant, don’t work that way. They’re not giant agglomerations of smaller problems which can be neatly pulled out of the overall mess.

Besides that, however, most general social critics, left and right, neglect to make the really foundational arguments that they need to make about why we should care about the problems they claim to identify. Let’s suppose Eberstadt is right and day care is a major contributing factor to childhood obesity. I think Laura very clearly demonstrates why we shouldn’t take her too seriously on this point, but let’s be generous and assume it’s so. The real job is telling me why I should care about that. So what if kids get fat? So what if people die ten or twenty years earlier than they should? So what if people are more sedentary when they’re alive? And so on.

Yes, you can make some consequentialist arguments about all those “so whats”, and some of them are pretty substantial. But even the substantial ones require some very profound assumptions about the nature and purpose of society and about the moral obligations we owe one another—or the practical needs we have. Eberstadt, coming from the right, is typically inconsistent in her understanding of the application of such public welfare arguments—but then, so too are many intellectuals and public figures on the left. You cannot simply assume that childhood obesity is a bad thing, or rattle off a bunch of secondary effects (say, for example, more car usage, hence more fuel usage, hence more pollution or dependence on foreign oil) as if those are Q.E.D. on the general point. There are deeper foundations to lay down first. They can be done simply by referencing various bodies of social theory, but you have to do it. Most social critics in the public sphere can’t be bothered, or don’t even seem to know what they’re missing.

That’s one thing that economics can sometimes be awfully good at—asking a question like, “Why do we assume it’s a bad thing if people die from smoking cigarettes? Can we prove that it is?”, questions that other viewpoints have a hard time asking with equanimity. Economics may not be very capable of providing the philosophically coherent argument about why that is bad or good, but at least it can observe that things commonly assumed to be good may not be so even in purely evidentiary terms. Sometimes the change between one set of social practices and another set of social practices can’t be reduced to a simple matter of good and bad. So kids used to walk home and their mommies were waiting for them there and they used to play together in the neighborhood and so on. Let’s say that all is true. So now they go to day care and see their mommies at night and their mommies work. How we experience and evaluate that change, which seems a real change, if exaggerated in some respects by Eberstadt, is nothing that can be boiled down to concretized evidence anyway. You can’t find a moral argument about that change by nattering about with things like obesity or attention-deficit disorder. Those are red herrings. If you want to make a moral argument, make it, and leave the statistics for people who know what statistics are and what they can and cannot be made to do.

Oh, yes, many of us experience regret, longing, confusion, angst about that change—or any of the other changes that end up being superficially dissected by well-meaning social critics on the left and right. That’s why there is a market for such social criticism. Many of us are looking for someone to tell us that our longing for our own history is more than just us, that it is right and proper that our childhoods, our past families, our past worlds, are the ones which should be the model forever forward. That the cultural and social worlds that we have known—even when our own personal familial lives as children were not pleasant—is what should be. We are not prepared to hear that the very legitimate feelings of longing and loss and confusion that we personally experience are just that: our feelings, our lives caught in history, and nothing more. That to become strangers to the present is our inevitable destiny. We should not merely accept that no matter what: some things in the worlds we knew are precious, and must be saved or translated into the future. That effort requires extraordinary arguments, because it takes extraordinary resources and will to deliberately change the forward drift of social change.

Most of the time, we should just accept that what we were is not what we are or will be. That humans are resilient, and children most of all. Our children will be ok in day care, or at home, just as we were ok with the range of things done for us and to us when we were children. That this change, though wrenching and complex to us personally, is likely to be value-neutral when you think about the big picture of how humans survive, thrive and are fulfilled. Just as the mothers and fathers in small African villages in 1930s Rhodesia observed, however they themselves were reared, they were there. Whether a child was bathed this way or that way was a thing to consider, but not at the level of incessant urgency that some well-meaning stranger might vest in it.

So much of this angst, from Eberstadt or many other commentators, is about the most confusing and difficult fact of human life: our children will not be us. Modern middle-class Americans are more confused than most about this fact. We hope our children will be better than us. We hope that they will be us. We fear that they will be worse than us. But we are not prepared to relax and deal with the truth: that most of the time they will be nothing more or less than different. That children are both alienation from as well as connection to the present, and that this is neither good nor bad. It simply is.


December 3, 2004

Once More With Feeling

I’ve largely stayed out of the recent resurgence of conversations about conservatives in academia. I don’t have a lot to add to my previous writing on the subject, and remain frustrated with most of the participants in the extended public conversation.

There is one thing I want to draw out of my earlier writing for further emphasis and exploration. I think that those who complain about “groupthink” in academia are perfectly right to do so, and they’re equally right to suggest that the character of the groupthink in the humanities and some of the social sciences, and possibly in the larger culture of academic life, produces certain kinds of political positions or arguments as assumed orthodoxies.

However, I think the conservative critics who complain about this tendency largely misattribute its causes, and as a result they systematically fail to articulate meaningful solutions or remedies—sometimes descending to the astonishing cognitive dissonance of suggesting crude forms of “affirmative action” or state intervention, solutions which are intellectual betrayals of conservative principle as well as programmatic disasters in the making.

The critique of groupthink in academia has already gone badly astray when it begins by counting up voter registrations and assuming that this is both evidence and cause of the problem. Political partisanship as we conventionally think about it and practice it in the public sphere is only an epiphenomenal dimension of the groupthink issue in academic life. It is telling that those who perceive the issue largely as a matter of Democrats vs. Republicans or liberals vs. conservatives operate either as pundits or as think-tank intellectuals, contexts where those oppositions really do clearly structure how an intellectual operates.

Academics are not motivated to groupthink out of a loyalty to liberal causes, left-wing politics or registration in the Democratic Party, though in many disciplines at the moment, they may end up predominantly having those affiliations in a smug, uninterrogated manner. They’re motivated to groupthink by the institutional organization of academic life. The same forces that help academics to produce knowledge and scholarship are the forces which produce unwholesome close-mindedness and inbred self-satisfied attitudes. These forces would act on conservatives as well were we to magically remove the current professoriate and replace them with registered Republicans. They do act already on academics who operate in disciplines where certain kinds of political conservatism are more orthodox, or in institutional contexts, like religious universities, where conservative values are expressly connected to institutional missions.

When I was briefly at Emory many years ago, I helped organize a one-day event about “interdisciplinarity”. After about six or seven helpings of young snot-nosed punks like myself rattling on about how cool and interdisciplinary we all were, a wise senior scholar named David Hesla finally intervened. “Virtually everybody’s interdisciplinary in some way”, he said. “You guys are unhappy with departments, not disciplines.”

What Hesla was pointing out was that most of the constraints, both hidden and obvious, that produce forms of “groupthink” or suppression of innovation and debate within academia are the consequence of the administrative organization of academic institutions. Groupthink isn’t enforced by partisan plotters: it happens invisibly, cumulatively, pervasively, in the space in between scholars. It happens in department or faculty meetings, in peer reviews. It lives in what has been called “the invisible college”, the pattern of normative judgements that all academics make (including yours truly) about what is cogent, what is original, what is canonical, what is important. Those judgements are formed out of all the things you know already, including those you scarcely consciously know that you know, and the heuristics you use to guide yourself to further knowledge.

That is the heart of the problem. It is one thing to talk about breaking down groupthink, to attack the insularity of academic life, and another thing to figure out how to do that without destroying the productivity and usefulness of scholarship and research altogether. The administrative constraints on my life as a scholar are not just noxious restrictions on what I can and cannot do, should or should not say. They’re also necessary in both practical and philosophical ways.

If tomorrow I persuaded my colleagues that the next job that opened in the humanities in Swarthmore should not be dedicated to any particular discipline or research specialization, but thrown open to the most interesting, fertile intellect we could recruit, I would be persuading my colleagues to join in an impractical catastrophe that would involve trying to winnow a field of 25,000 applicants down to a single person.

This weblog is an exercise in the decomposition of my own authority as a specialist—I don’t write too much about African history here, though my writing as a scholar in that field is an invisible hand that guides much of my thinking here and otherwise—but I wouldn’t necessarily recommend that decomposition for my colleagues or my institution as a whole. The danger beckons very quickly, and has swallowed me up more than once, of just becoming a rootless bloviator. Scholars have to know something through their labors that can’t be known without such labor, whether they’re conservative or liberal. If we get to a point where my classes can be taught just as easily by George Will or Michael Moore, we get to a point where we’re no longer thinking about how to open up academic life to the winds of change, but about how to padlock the doors and call it a day.

The heuristic constraints on any given scholarly project are what make those projects possible. Those same heuristics are what allow scholars to productively collaborate or contribute to a shared body of knowledge. What enables us also defeats us, however. The peer review that instructs me to come inside a canon so that I can be understood by an audience of comparable specialists quickly becomes the peer review that cracks the whip to force me inside a political orthodoxy. The colleague who usefully assumes a shared language about the nature of modern colonial regimes becomes the colleague who stares at me as if I were an incomprehensible freak when I break from that language, assuming they don’t just blithely move ahead without hearing my dissent. We need constraints on what we know and want to know, but we also need to always remember that those constraints are provisional, that they are merely tools. The administrative infrastructure of academic life has a way of ossifying what ought to be provisional into prisons of convention.

This problem would not go away with more conservatives in academic life. It is why I mistrust most of the critics concerned about these issues, even my judicious Cliopatria colleague KC Johnson. I generally do not disagree with Johnson’s particular complaints about particular issues, but he (and many others) seem unable to take the next step from the problems of those cases to a general revision of our expectations about academic work as a whole. Indeed, in his persistent nudging of the University of Michigan’s history department for its particular range of specializations and his related promotional arguments about political and diplomatic history, it seems to me that Johnson is operating well within the norms that are part of the problem, not the solution. We can’t get past the problem of groupthink without getting past the game of dueling specializations altogether.

The question is how to reconstruct the everyday working of scholarly business, to open up the ways in which we legitimate, value and authenticate scholarly work, to change the entire infrastructure of publication, presentation and pedagogy. Academics have to change their internal standards along these lines, but people outside academia also have to work to rethink when and where they need and are willing to respect the advice of experts. More than a few of the current round of complaints from conservatives outside academia contain a general disregard for the entire idea of expertise or scholarly knowledge. This general reconstruction of knowledge and its architecture is the real business, and it can only be tackled well with a scrupulous disinterest in scoring partisan points, with an understanding that the forces which produce a liberal groupthink among academics could easily be reversed in partisan terms without disturbing the more fundamental and difficult issues at hand.


November 23, 2004

The Not-So-Hidden History of the Pacers-Pistons Incident

In a Philadelphia Inquirer front-page story this morning, we get the inevitable take on the Pacers-Pistons-fans brawl: it’s a sign of the times, a mirror to society. Followed by the inevitable declaration by an expert that it’s a consequence of too much violence on television and in the movies, resulting in desensitization.

My grandmother used to snort when she saw certain paintings in a museum, feeling that any child could have painted the modern art she was seeing. Well, any ignoramus could serve up the conclusion that the fan-player brawl was the result of mass-media depictions of violence. I could write some free-floating quotations to that effect and give them to reporters so that they could use them in reference to any current or future incidents of violence. That soldier in the mosque? Video games, I’m sure of it. That hunter in the tree stand? Video games. Or television. One of them. Take your pick, whatever feels good. It’s like the doctor tapping a knee to demonstrate a reflex. Chronicle of an expertise foretold.

Chronicle of an expertise based on nothing.

Instead try starting with some attention to the specifics of the incident and proceed from the assumption that its key triggers rest in the individuals who made sovereign choices about their own actions. Ron Artest. The fan who threw the beer. The fans who came on the court. The players who felt a need (pretty reasonably, in my opinion) to back Artest up.

The problem here for the experts, or the superficial hook that sustains the Inquirer story, is that this approach treats this incident as just an incident, not as a pattern. Which strikes me as a basic responsibility for any “expert” who wants to speak authoritatively about recurring patterns, about social structures. Not, “What does this one isolated incident mean,” but, “Is it happening more often?” No evidence that it is, really. You don’t have to have a systematic explanation for a single idiosyncratic case, or even a handful of them. Human beings see patterns because that's part of the architecture of our intelligence--but an expert has to do better than just give in to his primate brain. Even if you just want to do a rich treatment of the meaning and symbolics of the incident, be mindful of its idiosyncrasy unless you want to demonstrate that it's more than that.

If individual responsibility—still the core conceptual underpinning of our view of criminality and accountability, even after a century of challenges of various kinds—will not suffice, move on to what we know about crowds, complexity and emergence. A human crowd is probably always one tipping point away from a riot or chaos: it is a complex system poised to phase change from one tempo of activity to the next. This isn’t just the abstract insight that complexity studies provide to us, but also the specific insight that a history of human crowds or mobs can provide. You could start a riot in a church with a sufficient sense of the cultural and rhetorical provocations that would move a crowd from one state to another. Sports fans with alcohol in them and testosterone at the ready are easier by far to tip into mass action.

If neither the highly specific, individualized explanation nor the generic systems-based explanation will do, then try a little cultural and social history, specifically of fans and sports. A global perspective on that history is especially devastating to the proposition that there is a tie between recent trends in mass media and violence in fan culture, but let’s bow to American exceptionalism and stay within the borders of the United States. Take a gander at this Sports Illustrated timeline, which doesn’t even cover more local or regional contexts like minor leagues, intramural sports, and so on. Did "Ozzie and Harriet" or "Gunsmoke" spur two fans to attack Jimmy Piersall? Violence within the stands, or even between fans and players, is an old story in the United States. Not an eternal one, but certainly one that has deep and complex connections with 19th Century processes of urbanization, industrialization, and massification.

Thinking about cultural history also allows us to recognize that from small acorns mighty oaks do grow. Every American city has a slightly different fan culture which has complex organic ties to the cultural identity of its home community, incubating slowly over a span of decades. Each successive incident or moment in that history—which may come from a “tipping point”, almost accidentally—becomes a structured memory among local fans, and influences their sense of subsequent identity and resulting practice in a way far more profoundly important than something as generic as “television violence”. Eagles fans sometimes act like assholes because isolated cases of assholery over the years have been lovingly and pridefully recounted by later fans, forming a kind of instructional manual about the culture of being an Eagles fan that guides people who weren’t even alive when some of the storied legacies of local fandom first unfolded. It probably just took a couple of guys chucking ice at Santa Claus to get fifty or sixty guys to do it, but once it was done and commented on and retold a thousand times, it became a legend that constituted cultural practice forever forward. There are definitely some echoes of this in the local cultural history of Detroit fandom and Detroit's cultural history in general. "Disco Demolition Night", anyone? Hell Night, anyone?

Throw in some of the cultural shifts narrowly and specifically within basketball—the underlying posturing of the players, the racial and economic gap between players and fans, the iconography that surrounds figures like Bryant and Iverson vs. the iconography of Bird, Johnson and Jordan—and you begin to build a pretty good composite understanding of what happened that night. Media violence enters that picture in such a peripheral, attenuated way that to bring it up at all is a miscarriage of expertise, a comfort food for the ignorant who want easy scapegoats to point at for everything that discomforts them.


November 22, 2004

Honey, Not In Front of the Kids!



Couldn't resist putting up a photo of this set of Incredibles PVC miniatures now available from The Disney Store. We didn't do anything to produce this particular alignment of Elasti-Girl's hands and Mr. Incredible's...well, you know. Even their facial expressions seem to fit the tableau of their packaging: he's blushing, she has a kind of lustful, playful smile. We looked at the packages and I'd say about half of them, at least, look roughly like this. On the rest, Elasti-Girl's hand is pointing at Mr. Incredible's knee.

The movie, by the way, is possibly the best I've seen all year, and easily my favorite of Pixar's entire oeuvre. It's not just funny, but also passionately invested in the things that it cares about. It's completely authentic about its passions.

As long as I'm on the subject of fun rather than my dreary post-election ramblings, I also played a goodly bit of Half-Life 2 over the weekend. Anyone interested in computer graphics at least needs to see the game in action. It comes the closest of any game I've seen in terms of climbing out of the "uncanny valley" of representations of humans--the faces are eerily life-like, never looking that plastic-CG way. So are the environments. Everything has a physics to it--you can pick up any loose object and throw it.

This only makes me feel the same way that the first Half-Life did. There's another "uncanny valley" here. The more it strives for visual realism, the more that the narrative and representational conventions of the first-person shooter become noxious. Every once in a while, I felt like the action of Half-Life 2 and my emotional experience of its frightfully realistic simulated world became simultaneous, and that was an amazing sensation. I'd feel panic, scramble through an environment desperately like a hunted man, feeling I was Winston Smith (well, Gordon Freeman) in a dystopic world, rather than a jaded gamer who knows how to methodically explore the movement space of a FPS. But then the first time you die and have to repeat a sequence, you find yourself saying, "Eh. Why can't I climb that chain-link fence? I can do all this other stuff," or "Why are those guards just sitting up there with a rocket launcher waiting for me when I come back this way ten minutes later when they have no reason to expect that I will?" At one point, I left a sort of hovercraft behind at a burning barricade because I thought there was no way through for it. I made it on foot to a point on the map where I could go no further without dying, either from a helicopter shooting me or toxic waste in a tunnel. Finally, I went to look at a walkthrough and found out that I still needed the hovercraft and that there was a ramp at the barricade. The immersive magic of the world sort of faded at that point.

Grand Theft Auto: San Andreas mostly gets right what Half-Life 2 gets wrong--it still has the wide open world that the other GTAs did (with a few obnoxious missions that have to be completed in one way and one way only). If only we could squeeze these two together: HL2's astonishing visual realism in a world where you don't have to be a train car riding down the tracks.


November 19, 2004

Unhappy with any of my recent postings? Have you sent me emails, written weblog entries, or mailed me letters about my writings on the election? Then read this very long entry. Even if you're not pissed at me, but just interested in more post-mortems on the election, read it.

Damage Survey...

November 18, 2004

Six Degrees of Condescension

I’m working on a point-by-point overview of some of the interesting and useful arguments in the post-election discussions that I’ve read, but before I get there, I want to address the accusation that some readers have made, either in their own blogs or in emails to me, that I’ve demonstrated a condescending attitude towards Bush voters in general or specific groups of Bush voters in particular.

Some of that complaint may have some teeth in it, but some of it seems to me to misunderstand the meaning of the term “condescend” or to be based in a serious misreading of my own arguments, even the most extreme or passionate of what I’ve written recently.
Let me run down what I’m seeing.

1) From what I can see, “condescend” for a small handful of critics seems to mean, “You disagree with me”. Yes, the Republicans won the election, and the right-wing is in the majority. For a select few—I really want to emphasize that this doesn’t seem to me to be the common sentiment—the view seems to be that any strong continued objection, criticism or dissent offered by the losers (who are after all a very, very large minority faction) is by definition illegitimate and “condescending” to those who stood with the majority. Why? Because to continue to believe that your own views are correct when they have been repudiated by 51% of the voters is to believe that the majority is wrong; to believe in one’s own rightness in the face of the will of the majority is to believe one is better than the majority.

This sentiment I don’t feel much obligation to engage further, given how silly and profoundly anti-democratic it is. Am I therefore condescending to those who hold this view? Oh my, yes, absolutely.

2) For some critics, it seems to be condescending if someone else argues that their actions can be explained in any other way than they themselves would explain those actions. In this instance, let’s say a Bush voter explains his vote by saying that he made a careful, pragmatic, well-reasoned assessment of the likely governing competency of Kerry and compared it to Bush’s established record and then concluded that Kerry posed more danger than Bush has so far. If he then reads me saying that Bush voters made their decision based on moral values, or because they have red-state economic resentments, or because they’re not thinking clearly, or because they aspire to majoritarian tyranny, the response is that I’m being condescending.

This complaint is worth taking seriously, but I want to pick it apart a bit. First, some of these criticisms of my writing on these topics are based on misattributions of my various characterizations of Bush voters. I’ve been pretty careful, even at the height of my anger and frustration, to qualify those characterizations. When I said that some Bush voters are stupid, some Bush voters are intelligent but horribly given to rationalizations and inconsistency, and some Bush voters aspire to “capture the state” and lock in a structurally permanent majoritarian tyranny, those are three subsets that do not describe the whole. If you read that and say, “I’m a Bush voter, and none of the above”, then fine. I didn’t describe you. If you don't think you're stupid, unprincipled, or an aspirant tyrant, then feel free to exempt yourself from the characterization. Exempt your spouse and your parents and your friends and your neighbor while you're at it. Perhaps I didn't talk about you because your reasons for voting for Bush don't place you in the most numerically significant subset of Bush voters, or because those reasons pose no larger systematic danger that I feel called upon to criticize.

The same would go for my arguments about “moral values” voters, red-state voters, or “soft libertarian” voters. None of those categories covers the entirety or maybe even the majority of those who voted for Bush. They’re imprecise, loose, and problematically generalize about a tremendous variety and diversity of motivations for voting a particular way. A particular reader is not obligated to see himself as described by any of those generalizations.

There is a deeper problem here, however, and at least some of those who have responded critically to my writing would have to indict themselves as well in the complaint. The deeper issue is, “Can we ever analyze why people act the way they do in ways that are not part of their own self-conception or explanation of their actions?” From what I can see, at least some of those who complain of condescension are essentially saying that it is impossible or illegitimate to do so, that it is always condescending to attribute motives or explanations to social action that the actors themselves do not offer.

It is possible to defend that extreme position. From time to time, it’s an argument that crops up in anthropology, for example. A less extreme version of the general position is actually quite common, and I tend to favor it myself: that it is at least always important to try and understand how different individuals and groups interpret their own actions and practices in the terms they themselves offer, and to engage those explanations when offering a contrary or divergent explanation of their underlying motives or intentions.

That’s a place where I’ve fallen short in recent writings, and where I think many of those opposed to Bush have fallen short, sometimes grievously so. So yes, it’s important for me to stop and listen more attentively to what diverse Bush voters say about why they acted as they did. Let me stress again, though, that at least some of the critical respondents are making the same error they accuse me of, by conflating the rationale for their own electoral decision with the entire class of people who voted the same way. I’m particularly struck by the number of libertarian or security-minded Bush voters who seem to believe that their own rationale was widely shared by virtually all other Bush voters, and that any criticism of Bush voters that is not focused on libertarian or security issues is therefore “condescending” in its misattribution of motives.

If you accept the strong or extreme version of this complaint, you’re basically saying that all social analysis, liberal and conservative, is by definition condescending. If so, I only ask you to have the courtesy to apply your criticism even-handedly.

3) Strong rhetorical language in the criticism of Bush voters is seen to be condescending. This is a pretty straightforward observation, and I’ll plead a limited mea culpa. When I imperiously declare a conversation to be finished, or imply that everybody who disagrees with me is a damn fool, yeah, that’s the frustration speaking. What can I say: only Eugene Volokh seems to achieve perfect equanimity. Obviously, the conversation is not over, both for practical and philosophical reasons, and assuming the grandiose position of someone who thinks he can declare it over by fiat is certainly an attempt to condescend. At the same time, I won’t bend an inch on the basic point, which goes back to the first claim about condescension. I still think I’m right: a vote for Bush was a very bad decision that will have very bad consequences, regardless of whether that vote was cast stridently or in doubt, carelessly or thoughtfully. The strength of my opinion is proportional to the consequences I envision. This doesn't strike me as particularly exceptional in the context of the public sphere. It's a fairly normal kind of consequentialist reasoning, in fact. When you think the consequences of a given action are likely to prove extremely bad for extremely large numbers of people, you're entitled to blow your bugle and sound the alarm, and to feel scorn towards those who don't see the danger. If you do that on every single position you hold, then yes, your opinion of yourself is so high and your evaluation of consequences is so indiscriminate that you effectively believe that you and you alone are worthy to decide the fate of the world. That is bad. I think for most of the issues I write about here and elsewhere, almost everybody at the table has some valid points to make, and the consequences that flow from different views are sufficiently modest that you can't justify being strident. This election is not one of those issues.

Time will tell. If I am still saying the same things ten, twenty, thirty years from now even though none of the consequences I feared came to pass, then yes, by all means, call me on it then.

4) The general attitude of anti-Bush critics towards the “red-states”, or “middle America”, or “religious Americans” or what have you, is said to be condescending, and I am said by some to share in that. I can only say that the entire substance of my writing here and my commentary elsewhere aims to avoid that sin, which I agree has been unfortunately more common than it ought to be. However, going back to point 2), it is not definitionally condescending to offer an overall characterization of “red-state” communities in the United States, even if it turns out on closer examination that a given generalization doesn’t hold that much water. If it is, then there are a lot of us—including many conservatives—who are in the dock. Feel free to poke holes in my generalizations about the red-state social world. I’ll shortly be joining you all in doing so. But that’s different than accusing me of being condescending.


November 17, 2004

Burn Rate, or How Not To Use The Next Generation of Employees

Lots of discussion out there right now about a number of negative reports on Electronic Arts, one of which seeks to provide advice to undergraduates who might consider an offer to work for EA.

As the author notes, for many 20-somethings with the appropriate skills, working for EA sounds pretty exciting on the surface of it, given EA’s centrality to the making and selling of video games. As I read the report and the other, more harshly critical accounts, I would say that the reality is not at all exciting, only appalling and exploitative.

One of the dirty little secrets that undergraduates at many colleges and universities are not particularly exposed to is just how much the first, and maybe second, and maybe more, jobs that they will get after graduating are going to suck.

Some of that is an inevitable consequence of organizational life, that newcomers, however inherently talented, start at the relative bottom of any organizational structure in terms of benefits and responsibilities, and have to prove themselves before gaining respect and compensation. Some of that suckitude also has to do with the relative extent that educational institutions insulate their students from the world beyond their borders. The essentially positive, largely supportive relationship that many professors and supervisors have with students is not a good preparation for the petty authoritarianism of untalented managers. More pressingly, the arrogance that academic life cultivates in some students doesn’t help them to accurately evaluate their own skills in relationship to others, so it’s not uncommon for the graduates of selective institutions to believe themselves considerably more experienced, capable and skilled than they are.

All of that being said, I would also say that there are some organizations which essentially rely upon and take for granted their ability to parasitically extract tremendous amounts of work from well-meaning, intelligent and reasonably capable graduates in return for poor wages and poor supervision by weak managers whose short-term abuse of new workers allows the entrenched managers to simulate competency and productivity.

This is not a problem restricted to the business world. In fact, I would say based on my own experiences and many reports from alumni and friends that some of the most common abusers of highly motivated and fairly capable recent graduates are non-profit and community organizations with liberal or left political or social missions, where some truly extraordinary exploitation can occur and be justified by the proposition that staff should simply accept exploitation because of their commitment to the cause.

Whether we’re talking about a community group or a big company like Electronic Arts, it’s clear that this is where the short-term perspective of middle managers whose only goal is to protect their own prerogatives can badly damage the interests of the larger organization. EA may shovel product out the door on time, but the costs of 20% or more turnover in staff, widespread disaffection among those who remain, considerable ill-will from those who eventually depart and a pattern of rewarding managerial drones while harming creative or skilled workers is a bad way to run the railroad. It is ultimately unsustainable, a house of cards. The people who can go elsewhere do, and even the gullible or innocent undergraduate who feels excitement at the prospect of working for EA starts to hear enough negative news that he or she looks elsewhere for a first job.

Some of our students, of course, can sense already the likely undesirability of many available jobs after graduation. That’s why so many of them go to law school or medical school or other graduate schools almost immediately after graduating, even though this isn’t necessarily the best strategy in terms of assessing what kinds of careers will prove the most enjoyable or satisfying.

I feel sorry that so many of our undergraduates are going to have a bad experience in the world of work when they finish here. Sometimes, it’s inevitable; sometimes, it’s probably even a valuable comeuppance or life lesson—but sometimes it’s just a sign of an organization that doesn’t know what to do with the resources at hand, and can’t think past the problems of the next week or month to its longer term health and sustainability.


November 11, 2004

Why Is Equality Good?

An old question, and one for which there are many extremely sophisticated, intelligent answers.

Over at Pandagon, Jesse Taylor asks, “What does the Democratic Party stand for?” It’s being asked a lot this week. It’s important not to make too much of blog-comment threads, but I think this is one case where the crazy-quilt hubbub of contradictory answers that Jesse has received is probably pretty closely matched to the reality of the situation.

Some of the answers Jesse receives seem to me to be useful but lacking in some important respect. Some suggest the Democrats stand for realism, or truthfulness, or common sense. I think one could build on that possibility, but it’s also a somewhat transitory foundation that has political legs only as long as the Republicans appear unrealistic, untruthful or lacking in common sense. Not to mention the kind of problems I’ve already discussed here, that particular kinds or flavors of competency, intelligence and experience may be things which are valued in a political leader only by some of the voting population.

One other consistent answer that pops up a lot in that thread, and is echoed at many left-liberal blogs this week, is that the Democrats stand for equality. My problem, as I say in that thread, is that I strongly suspect that many of the people who say that don’t really know what they mean by it, and that at least some of those who do know what they mean are defending the idea of equality from radically different directions.

Are we talking about equality of opportunity, which is basically a complement to meritocratic visions of social hierarchy? Equality of opportunity, strictly speaking, is indifferent to unequal results as long as it is confident that inequality does not derive from unfair initial conditions. Now as it turns out, ensuring equal starting conditions in life for all citizens in a society that has a long history of enforced inequality is clearly a very difficult task, and grows steadily more difficult the shorter the time-frame in which a remedy is sought. If you’re content to work incrementally towards equality of opportunity, or think that removing statutory or structural barriers to equality of opportunity will naturally produce a slow drift towards achieving such equality, then you may not see the need for dramatic interventionary policy that may create short-term inequalities in order to produce long-term equalities. If you think that it’s unfair to ask people presently alive to endure the legacy of unequal opportunity and that immediate and dramatic remedies are necessary, you may require strong interventions.

But the important point is, if this is your idea of “equality”, you are not necessarily going to object to the mere existence of social inequality—in fact, you may expect it and even welcome it as long as you’re assured it comes from the natural talents and efforts of individuals, whatever "natural" might mean.

Contrast that with ideas about equality that seek to lessen the distance between the richest and poorest ends of the social spectrum. There are dramatically different reasons to support this vision of equality: you might argue it is pragmatically good as it makes society more stable. You might argue that it is culturally good as extreme wealth or poverty are unseemly or aesthetically displeasing in some respect. You might argue that it is a moral obligation, that extreme poverty and extreme wealth are morally unacceptable in a just society. Here the emphasis is less meritocratic and more on constraining two possible social outcomes through state intervention—providing a subsidy to the poorest end of the spectrum and a hugely disproportional tax at the upper end whose express purpose is not revenue for the state but the reduction of extreme wealth for the sake of promoting equality.

Or contrast that with much more rigorous ideas about egalitarianism, that promote approximate economic and social equivalency between members of communities or whole societies. Again, there’s a diversity of underlying ideas that might support such a vision, ranging from certain kinds of practical ideas about social stability to certain kinds of moral commandments, though I think any really rigorous argument here is limited either to coherent bodies of analytic and philosophical thought like Marxism or to certain kinds of religious moralities that derive their authority from reference to the will of the divine or some absolute.

I would submit that all of these ideas, and others less well conceptualized, are bouncing around in a poorly digested state within contemporary invocations of the importance of equality by the Democratic base. Some Democratic leaders clearly favor the meritocratic conception of equality, and few party officials or leaders are friendly to strongly egalitarian social philosophies, but the voting base has a range of openness to all of them. Moreover, at least some of those in that base tend to bristle strongly when the commitment to equality is challenged or questioned, and tend to avoid calls to reflectively re-examine the concept and re-articulate its worth or importance. It’s taken as a given by many that equality (of one or more kinds) is self-evidently a good thing.

Certainly I've seen some of this in the campaign for a living wage at Swarthmore: it's hard to even open a conversation with some of its advocates about why it would be a good thing if there was less inequality among the employees of the college. That's taken as so much of an unquestioned given that it doesn't need to be defended as such, though the implications of the argument aren't always followed: if we dissent from the labor market for low-wage employees, why do we accept its dictates elsewhere? Some of the advocates have at least documented the concrete social problems and suffering that result from making less than what they have defined as a living wage, so I don't want to make my complaint too broadly. But even at that level, there are deeper questions whose answers are assumed: why should we care if some people suffer? I grant that you don't want to have to deal with questions that deep every single time you want to make concrete policy recommendations, but you at least need to find out if most of the advocates of a particular initiative aimed at inequality have some kind of reasonable, consistent, coherent answer to that question rather than a purely reactive, unarticulated assumption about it.

I’m sometimes mystified by colleagues or friends who start from the premise that inequality is the first and last concern of political life, because I don’t hear in their ardent commitment any systematic or worked-out idea about the reasons why equality is a good thing or ought to be the central political theme that sustains Democratic voters. I hear people who basically are talking about meritocratic equality talking in seeming agreement with people who want to produce a distribution of wealth that favors the middle without either person recognizing that their visions may actually be antagonistic. I hear people who argue that equality is an absolute moral obligation, but then I either don’t understand the deeper sources and wider extent of the morality to which they allude or I’m mystified by the seeming gap between their own moral behavior and this moral obligation that they lay on society as a whole.

I see policy objectives that are held dearly by most of the Democratic base because of those objectives’ service to equality that really ought to vary hugely depending on which kind of equality they supposedly service. Affirmative action that is seen narrowly as a meritocratic correction to inequality of opportunity is a very different thing from affirmative action that is aimed at the permanent suppression of overly unequal life results through the maintenance of idealized diversity within institutions. The first kind of program has at least a potential future date at which it would be deemed irrelevant or obsolete; the second does not.

Most importantly, if this is the foundational value of the Democratic Party, I see one version where it might indeed sustain the party’s pursuit of electoral success—and other versions that would doom it to being a minority party forever. Meritocratic ideas about equality of opportunity might appeal to many Americans; ideas about blunting both ends of the social spectrum in favor of the middle perhaps less so, particularly those that aim at the suppression of extreme wealth. Strong egalitarianism I think is even less appealing to many American voters.

That the Democrats need a solid foundational narrative and identity as a party seems unquestionable to me, even if they don’t want to pursue the more extreme kinds of reorganization I have elsewhere advocated. If it’s to be equality, then there’s some preparatory work to be done in advance: which kind of equality, and why do we believe in it?


November 5, 2004

And Another Thing...

Ok, so yesterday's lengthy manifesto wasn't quite the last word I have on some of these issues. This morning, two things hit me so strongly that I felt compelled to say a bit more on them.

First, I want to extract out of my long essay one of the points I make along the way, just in case it gets overlooked by the four or five people who might actually read the whole thing. Quite a few commentators have observed that Bush is popular with some voters precisely because of his malapropisms, his anti-intellectual stance, because they see a resemblance to themselves and because that resemblance aligns them with him against educated elites. The reverse is equally true. A lot of us who voted for Kerry are astonished that the simple competence issue didn't carry the day by itself. What I have realized is that seeking competency and a respect for institutional process are cultural values that are parochially confined to educated elites. They're part of the everyday ethics of our work, part of our habitus. But this is not what some other social constituencies are looking for in a leader. We take codes and norms of professionalism as a matter of course and so forget just how class-bound and culture-bound they are as a value system. Yes, probably most everyone wants to "do a good job" at whatever it is that they do. But for some, that is simply working hard, or meeting their basic obligations, or not letting down the team. The union ethos of hard work, for example, tends to collide culturally in some really sharp ways with the ethos of professional meritocracy. Professional meritocracy prizes hierarchically-coded distinctions between particular individual professionals; the union ethos tends to suppress individuals from distinguishing themselves from the group. The upshot of all this is that a Presidential candidate who is unquestionably a better professional in the way he approaches leadership is probably never going to win a general election on that theme.

Second, on Thomas Frank. The more I re-read his book, and now his op-ed in the New York Times this morning, the more I realize that even though I mostly agree with his diagnosis of the red-state, blue-state question, I really, really radically dissent from his solution. In fact, I think his solution is a much more unmitigated disaster than the election this year has been. Frank thinks that all the Democrats need to do is uncompromisingly remind red-staters of their real economic interests, and to articulate those interests in terms of a moral problematic rather than a kind of policy-wonk one. He is right, for the reason given above, that policy-wonk rhetoric is useless, but he is wrong that the key lies in pushing some notion of "real economic interests".

Frank's argument reminds me of an old, deep discussion between intellectuals on the left about peasant rebellions in world history (somewhat echoed by a smaller but similar conversation about some slave rebellions, particularly marronage), a conversation that was most intense during and just after the Vietnam War. The basic question was, "Why do peasants who stage successful rebellions frequently simply reinstate or restore customary social relations with landlords instead of radically restructuring property relations in their own favor?" The parallel question about maroon revolts against slavery was, "Why do the most successful maroon communities strike deals with slaveholders that reinforce the power of slavery outside of the boundaries of maroon society instead of insisting on the abolition of slavery?"

There were roughly three answers to be found in the literature on peasantries (again, echoed in the literature on maroons). One was that peasants do not choose revolutionary answers to their problems even when they successfully revolt because the hegemonic authority of landlord elites and their allies has had such power that these answers do not even occur to them--basically a false consciousness argument. There's a sort of teleological variant to be found in some Marxist writings that says, "Revolutionary strategies did not occur to peasantries prior to the moment in world history in which revolutionary strategies were properly produced out of a dialectic."

The second argument said, more or less, that peasants actually rationally assess their situation and realize that a revolutionary solution may actually be to their economic and social disadvantage in the end, and so settle instead for the most favorable enforcement of traditional landholding reciprocities that they can manage. In this context, revolts are simply a kind of pragmatic rights-enforcement. The "rational peasant" argument also tended to hold that peasants who resist modernization schemes often do so not out of a political or philosophical objection, but because they can see that most modernization schemes are going to cause more harm than benefit to their agricultural productivity.

The third argument said, "Because peasants have a culturally and intellectually particular conception of economic and social relations, because they choose a different ethical norm which does not require the maximization of their own economic interests". This is the "moral economy" argument, the idea that peasants don't mind giving away the surplus of their production to landlords or other elites because they view all accumulative activity with suspicion, because they have a value system which is focused on the deliberate maintenance of sufficiency. In this context, when peasants revolt, it is precisely to reinstitute their deliberately imagined vision of traditional social relations rather than to restructure them.

I think Frank's proposed strategy is likely to come to grief on exactly this issue. What I think he is missing is that red-staters are not dupes of plutocrats. They are not people who've been distracted by the "moral issues" trope from their "true interests". The red-staters are the people who have stayed behind while everyone else has left because they do not want to or cannot live the blue-state way, because they have an idea of moral economy that scorns getting ahead, rejects meritocratic values. They don't mind wealth achieved through pure serendipity, as Jackson Lears has noted in an interesting essay on gambling and fortune in the American imagination. But they do mind wealth achieved through individually differentiated effort, through accumulative aspiration.

You cannot promise to serve the economic interests of such communities if such service is about redirecting accumulative economies in their direction. I'm not even sure you can do it simply through the idea of directing public investment in the infrastructure of their communities in their direction. I think you can only do it one way, legitimately, the same way that nationalist governments in the developing world have done it from time to time, or the way that the National Party in South Africa did it for Afrikaners after 1948, and that is a massive program of public employment in which everyone in red-state communities has assured access to an entry tier of relatively non-hierarchical sinecure jobs within the government. Basically, by promising to make every red-stater who wants it into a postal clerk.

Maybe that is actually what Frank has in mind: it would fit with some of his rhetorical invocations of the old socialist left. It might actually work in some respect: certainly the National Party in South Africa got a powerful stranglehold on white politics after 1948 by basically agreeing to guarantee the employment of whites. If it is what Frank has in mind, though, then I personally--like most professional elites, Democrat or Republican--would want no part of it. If it is not what he has in mind, then I think any other vision of economic empowerment aimed at the red states is likely to lack appeal precisely because the red-state world at this point is a moral economy whose terms are deliberately stressed as oppositional to the blue-state economic and social world, and maintained as such by those who remain within it.


November 4, 2004

The Road To Victory Goes Through the End of the Democratic Coalition

Introduction: I’ve been working on this one all day, stimulated enormously by Russell Arben Fox’s latest entry, which is just terrific, as well as a number of other writings both on blogs and elsewhere since the election.

My initial responses to the election were fairly emotional, like many, and I do not repent of them. I fully concede that there are many of us who voted for Kerry who have long been uncomfortable with the core constituencies that lie behind the Bush Administration. I am certainly not happy with Bush voters today—I do hold them responsible for what I think are likely to be terrible events to come, and thus have some contempt for what I take to be their indifference to consequences, but I think their existence is also a social fact that needs to be accepted and even embraced. Nevertheless, speaking for myself, it is a shocking thing to wake up the next morning and feel that one is really the target of hatred, to recognize that one’s country is now in the hands of people who hate you, disrespect you, and intend to leave little room for you to live the life you prefer on the terms you prefer to live it, that this is not just the mean little words of mean little writers, but a tangible social reality loosed to stomp across the national landscape.

I fully grant that this may be exactly the way that religious and cultural conservatives felt in 1972, that they felt equally besieged, hated, threatened, outgunned, and that they still sometimes legitimately feel that way today. One always hopes that people who have a sense of having suffered at the hands of others will learn not to inflict suffering, but I fear that the more typical response is to return every perceived harm back to the sender magnified a thousandfold. I suspect most bullies think of themselves as victims, and most victims dream of being bullies. In this essay, I make some fairly radical suggestions, but before any of them are even considered, I think an easy, universal first step needs to be what Russell suggests: taking religion and the religious seriously, respecting their authenticity, their meaningful place in the lifeworlds of the faithful and not-so-faithful. I think the richest expression I've seen of this point comes simply but powerfully at the end of my friend Paul Landau's book The Realm of the Word, about the history of conversion to Christianity in Botswana. Paul basically asks why historians and anthropologists so often try to understand faith as something other than itself, to translate it into some sociological commonplace, rather than to take seriously what those who say they have it say it is.

Russell picks up on the fact that some of my post-election responses are continuous with the themes that have dominated much of my writing in this space since I began Easily Distracted. In this essay, I will try even more to bring together many themes and tie them up in a kind of culminating work. I’m sure this will not be my last word on these issues, but after this it will be time to talk about other things, like “Ren and Stimpy” or the nature of ethnographic fieldwork.

"The Road to Victory" is very long, so I've posted it as a .pdf.

Read "The Road to Victory"

November 3, 2004

Moral Values, Divided Universalisms, and Parasitic Anti-Modernities

Kieran Healy writes that the election appears to have been decided by moral values, and wonders why no one was tracking that seriously.

No one? I think that many Americans have been interested in this issue, and keenly aware of it. I think we’ve been talking about it for four years now, perhaps longer. I know that some public intellectuals have been desperately trying to get Americans on the left to understand not only their immense vulnerability in these terms but to adopt some kind of response that goes beyond an uncompromising defense of secular humanism.

Not to renege on my promises about the dispensation of blame—Bush voters are morally responsible for their vote, no one made them do it—but there is a deep causal reason why the culturally conservative minority is as overwhelmingly mobilized and politically aggressive as they are, why they are as determined as they are to gain control of the mechanisms of the state and through it, civil society and popular culture.

From the perspective of social and religious conservatives, their campaign to capture the government is a defensive response to attacks from the late 1960s through to the 1980s on the central mechanisms of their own social and cultural reproduction. Abortion rights, feminism, the expansion of free speech, the increased legal rigidity in interpreting church-state separation, and so on: these are hot-button issues not just in and of themselves, but because each of them has symbolically come to stand in for a perception of a larger and more pervasive attempt to make religious and social conservatism a historical rather than continuing phenomenon.

I think that there is much that is fair about that perception, exaggerated and overwrought though it may be at times. Just as I think it’s pretty fair for those who oppose gun control advocates to suggest that “sensible” restrictions on handguns really are just a first step from the perspective of some gun control proponents, and that the struggle over guns has been in some ways only the veneer of a struggle over culture, lifeways, habitus.

Some of us with other values, who do not share the core orientations of the strongly religious or culturally conservative in the United States, pushed too hard beyond the basic necessary restructuring of social life, the basic enforcement of lowest-common-denominator rights and universal freedoms. Too many pushed forward towards a transformative project and suborned both the mechanisms of the state and civil society to try and accomplish that project.

Having mobilized, however, American social and religious conservatives are now far more sinning than sinned against. They are not and have not been for some time content to simply defend the integrity of their own choices, their own communities. They are paying back tenfold any harm done to their own social worlds, rolling the dice to commandeer American society as a whole.

In alluding to the need for a Fort Sumter in the reconfiguration of the American spirit, I am seriously thinking of the following Great Compromise: a radical embrace of an extreme states’ rights position on all cultural and social issues. We would offer to abandon the argument that the federal government must enforce a singular position on abortion rights and any other similar issue.

Let there be Heartlandia and Bicoastia, two republics with radically different laws. In Bicoastia, you could watch whatever you wanted to watch, read whatever you wanted to read, publish whatever you wanted to publish. Evolution would be taught in the schools. Women would have the right to choose an abortion. Civil unions or even gay marriage would be allowed. Church-state separation would be enforced strongly. In Heartlandia, public schools could mandate prayer time. Creationism could be required. Strict controls on popular culture could be allowed. Restrictions on divorce would be permitted. Abortion could be strictly outlawed.

This wouldn’t resolve the problem of federal mismanagement of foreign policy and so on, but at least the moral question would be taken off the table, or so you’d think. It might also provide an interesting “market” test for both ways of life.

Of course it would not actually, because neither of the antagonistic social coalitions that feel strongly about such issues would be willing to concede to the division of the Union. More importantly, the fact that most of us live in the in-between would suddenly arise as an equally powerful cultural fact. I’m reminded of something a friend of my father once told us about. He lived for some time in an almost entirely Mormon neighborhood in Utah. He would throw out his liquor bottles with the rest of his garbage. He noticed after a couple of months that by the time his garbage was picked up, the number of liquor bottles in it had mysteriously multiplied by a factor of ten, far beyond his own consumption.

The people who want to live in Heartlandia probably quietly live some of the life of the Bicoastians. I've seen the outside of a lot of strip clubs when I've been travelling in Texas, and you know something, I don't think it's liberal elites who are sitting inside. Some of the Bicoastians quietly or not so quietly harbor culturally conservative views on some subjects. I know I have quite strong feelings about marriage and infidelity: though I might intellectually support the right of people to carry on however they like, I am emotionally much more comfortable with monogamous couples (gay or straight). In fact, strip away a lot of my intellectual views, forget the fact that I’m not religious, and just look at me socially: married for almost 20 years, fairly ordinary middle-class life, soon-to-be-a-home-owner, pretty staid in a lot of my cultural tastes. I could at least disguise myself for a while as a Heartlandian.

The real true believers among the Heartlandians don’t want to let the Bicoastians go: they want to impose their entire worldview on them. And to some extent, the reverse is true. Truly fervent, quasi-theological “secular humanists” are a bit harder to find, and smaller in total numbers, than strong Christian evangelicals, but they exist. In autonomous Heartlandia and Bicoastia, both sides would blame the existence of the other for whatever failures and disappointments they suffered. The Bicoastians would be driven to watch for subversive signs of religious intrusion onto the secular, and into stupidities like France's law against the veil; the Heartlandians would have to prowl the backyards of their communities for hidden satellite dishes and smuggled videotapes.

There is a deeper set of asymmetries here, however, and it’s the real reason why Bicoastia and Heartlandia cannot be. One of the strangest things I’ve struggled to understand over the last decade is how neoconservatives and other Wilsonians can take a strong position on the universal necessity of liberty and freedom abroad and then not extend that position to the domestic front. The contradiction works equally well in the opposite direction: many liberals and leftists in the United States take a strong position on the importance of the strong enforcement of universal rights by the federal government within the borders of the United States and then take the position that other societies need to be self-determining and autonomous on questions of values, culture and rights. From the first position, how can someone take a strong position about clitoridectomy abroad and then be agnostic about abortion rights at home? From the second position, how can someone defend the importance of self-determination in non-Western societies and then argue strongly for serious intrusions into the autonomy of rural, religious communities within the United States? But many have and still do exhibit just this pattern of contradiction.

There are those who do not. Fareed Zakaria’s The Future of Freedom does a pretty good job of trying to bridge this gap. Trying to avoid this contradiction is the exact reason that some Marxists or leftists like Norman Geras ended up supporting the Iraq War; it’s the same reason that Francis Fukuyama has found himself on the outs with his old colleagues. But on the whole, there are few who claim the authority of a universalist perspective who live up to its obligations. Some of those who try, like Geras or Andrew Sullivan, ended up the victims of a colossal con game, the dupes of particularists and chauvinists who have absolutely no interest in a universal, globalizing, free world, nor any interest in an authentically devolutionist approach to the size and power of the state.

There are very few particularists or anti-moderns who really let go of the power and capacity of universal, globalizing liberalisms. American cultural and religious conservatives don’t really want autonomy for themselves, to pursue their own views and convictions freely. Like parasitic wasps, they want to lay their particularist eggs inside of the U.S. Constitution, to suborn its increasingly well-realized framework of universal rights and protections for the sake of a universalized anti-modernity. They don't want small government, they don't want federalism. Osama bin Laden does not really want to produce a reversion to a purified Islam of the caliphate: his Islam is a futuristic one, a globalized and unified Islam whose identity is forever defined in opposition to and therefore in reliance upon Western secularism. There are no real anti-modernities left: a real anti-modernity is something which is not against modernity, but perpendicular to it, opaque to the understanding of modern subjectivity, a social or cultural world which retains some untranslatable affect to it.

I cannot give up Heartlandia because I believe in a universal humanity possessing universal rights and freedoms. Giving up Heartlandia means giving up Zimbabwe, means giving up Iraq, means giving up Afghanistan. I wish that those who really believe in the Iraq War because of a commitment to universal human rights would see that their project has been commandeered by a movement which is not in the least bit interested in those universalisms save for the ability of a universal framework to impose their own particularisms.

At the same time, precisely because I recognize the advance of universal human rights as a matter of slow, persuasive and voluntaristic transformation rather than civil or statist enforcement, I know that not giving up on Heartlandia is not the same as forcing it to be what I want it to be. I am against clitoridectomy, but there is no way for my opposition to be anything more than a persuasive argument. Any attempt to enforce it through global institutions will make the localisms that reproduce clitoridectomy stronger and more entrenched. I am for the teaching of evolution in public schools, but I recognize that the more stridently I insist on this as an absolute priority, the more I am likely to have the opposite effect, the more that my insistence will come to symbolize the intrusion of the entire substance of my everyday life into the fabric of someone else's comforts and norms.

There is a bottom floor of basic rights when it comes to ceding to particularisms. Dictators do not have the right to hide behind localisms when it comes to torturing their own people or squashing free speech. Religious conservatives in the United States cannot defend their own values to the point of outlawing divorce outright, or requiring women to stay at home. At some point, we need both international and national structures which guarantee the solidity of that bottom floor. In this, the defenders of the Iraq War have a point: someone must act when the fundamentals of freedom are violated by the rulers of nations. If only they could see that point must be carried home to the United States. If only they could also see that you cannot prosecute a war for the extension of freedom in which you exhibit total contempt for the sanctity of individual rights. You can’t be concerned about the loss of life on 9/11 and utterly, chauvinistically indifferent to the loss of civilian lives in Iraq. On the domestic front, liberals have a point: the federal government must enforce basic rights uncompromisingly, and allow no one to defend their violation in terms of “values”. If only liberals could see that point must be carried abroad as well. If only they could see that concern for the rights of innocents in Iraq has to be always and persistently, consistently matched by equally vehement concern for the rights of people blown up by suicide murderers in Israel or the rights of the dead in the World Trade Center.

Much as I would like to be quit of Heartlandia at the moment, I cannot, and not merely because in practical terms I know they would not be quit of me. I cannot be quit of Heartlandia any more than I would be quit of humanity. Modernity points only forward to a world united in its rights and freedoms, and divided in its diversity of values, practices, lifeways. We need to find a stable new consensus about where the division between our universal obligations and our divergent values can lie: in 2004, it’s become clear that none of us know any longer where that line is, and some of us—today, the majority of voting Americans--seem determined to increasingly violate its necessary sanctity with little compunction and a great deal of contradictory ruthlessness.


November 3, 2004

Ship of Fools

As promised, I do not blame Kerry. I do not blame his campaign managers. I do not even blame Bush.

I blame you, whoever you are, if you voted for Bush. Because I think it’s more or less clear at this point that a narrow majority of voting Americans voted for Bush.

What comes to pass now, whatever it is, is entirely your responsibility. The nation is yours. Let no one in the defeated 49% indulge themselves with talk of how people were deceived or cheated or stolen from. The left (and some conservatives as well) have a ready supply of theories and arguments that allow them to believe that most people do not make bad choices, but have bad choices made for them, bad choices done to them. The weakest, most crybaby versions of these scapegoat the media or mass culture; the most sophisticated try to think seriously about the historical sources of consciousness and identity. Even the most sophisticated versions are security blankets that keep us from facing the hard truth.

With eyes wide open, in full possession of their faculties, acting as sovereign individuals, around 51% of American voters leapt consciously and clearly into the abyss, taking us with them.

Are you one of them? The nation is yours, but you are not stewards generously preserving something precious to us all. You are destroyers.

Four more years, and then the cycle begins again. I still think that there are people out there who may come to realize that the Bush hardcore do not serve their interests, their desires, their wishes, who can choose another path. People who could not be persuaded, would not let themselves be persuaded in 2004, but who may wise up in 2008.

I also know there are people who cannot or will not choose another path, who are seeking and may find an American authoritarianism or theocracy that will extinguish American democracy for our lifetimes. There are people who want me forgotten, silenced, gone, maybe even dead, people who hate me and anyone like me. Their enemies are not the terrorists: their enemy is other Americans. I will not simply lay down before that hatred. I did not start this fight in that spirit but I am prepared to respond to it if cornered and left no other choice.

51% of the voters now share with their President—their President, not mine, not mine: I am not his loyal subject, though I may be ruled by him—his most abominable trait, the inability to recognize and correct mistakes. As long as I can hope that four years from now, some of them might do so, I can still hope for the rekindling of a common American dream, a shared social reality, a remembered pride in our mutual dreams and hopes. That center is there already: we are not so divided, really. But the leaders of our nation and the people who support them want to forget and trample that commonality.

That may be no hope worth having, and the road there is hard regardless. It will take a complete reconstruction of the demographics and nature of opposition, because—and I hope the left, such as it is, will recognize this—the old coalition of unions, minorities, urbanites, is done. It is finished. It cannot break through. The old hope has always been voter turnout, that there were many in that coalition who did not vote, and in not voting, gave the election to highly mobilized social groups on the right. That hope is broken. As many people voted as probably will ever vote in this election, and the old coalition was not enough. Not enough people, not enough general national resonance in its convictions. Some new alliance, with a new mix of issues and convictions, will have to be made by 2008 that can carry any candidate to victory, not just to the White House but the House and Senate as well. If the next four years bring the crop of suffering and failure that I fully expect them to bring, it may be easier to convince suburbanites, married women, the young, to see where their interests and their rational judgement ought to lie.

But perhaps not. Perhaps the hardcore that hates me and everyone like me, that hates the other 49%, that hates New York and California and Boston and Chicago, hates the cities and the educated and the culture-makers and the secularists, perhaps they cannot be turned or changed or persuaded, any more than I can be on the convictions that form the heart of my lifeworld. Perhaps this is a social conflict so deep and so fundamental that its resolution will never be carried out through electoral politics. Four more years may make no difference. If so, then our time is better spent in a quest for the Fort Sumter of our times and our souls, for the path to the figurative dissolution of our contaminated Union.


November 2, 2004

I Don't Want to Go On the Cart

Still alive and kicking, but I was sidelined for about a week and a half by two developments: first we bought a house (scary, but interesting, too!) and then I got pneumonia, which I steadfastly kept willing to be a minor flu until it was clear that it was neither flu nor minor. So now I'm back in the saddle again, and blogging will resume. No shortage of things to blog about, certainly.

November 2, 2004

The Day After

One reason I could never quite summon the outrage that committed Democrats felt about Florida in 2000 was that the battle over a few hundred votes here or there to me overlooked the much more fundamental fact. No matter which system you preferred for the accounting of Florida’s votes, no matter whether you wanted a recount or not, the vote was by any measure exceptionally close. I’m not sure there is any system for allowing millions to vote that’s going to be terribly robust when it’s faced by electoral margins of a few hundred voters. This is not to say that the Republicans weren’t underhanded or that the Supreme Court decision wasn’t a bad piece of jurisprudence. It’s just that the real social fact I took away from the debacle was that the two parties were in a dead heat, that Americans were evenly divided.

That’s even truer now, but the stakes and nature of the contest have been changed enormously. You could vote for George Bush in 2000 and not be sure of what you were getting and tell yourself some pleasant things about what might happen. You might plausibly have been right. I myself didn’t think there would be all that much of a difference, that it would be two flavors of American centrism, two slightly different styles. I was basically okay with George Bush on January 20th, 2001. Not happy, but figuring that he’d be decent enough.

You can’t think that today. I don’t think that much of John Kerry, but the difference between Bush, a proven catastrophe, and Kerry, a probable mediocrity, is huge.

If Bush is the victor tomorrow (assuming we know by tomorrow) then I promise not to blame Kerry or his campaign staff. He’s a weak candidate like most Democratic presidential candidates in the past two decades, a weak candidate from an intellectually and politically confused party. But he’d be good enough. He’d be better than disaster, better than incompetence, better than mendacity, better than bad. Better than a man who has made so many mistakes but can’t admit to even having made one.

I won’t blame Kerry. In an odd way, I won’t blame Bush, either. When someone screws up again and again in a workplace, after a while, you stop blaming the screw-up and start blaming the screw-up’s boss, the person who can’t or won’t see how badly the bungling of their subordinate is damaging the company. Bush and his people are responsible for all their failures, but they’re not responsible if a slim majority of voters choose to put them back in office.

I will have only one accusing finger to point, should the worst happen: the guilty parties will be anyone who pulled the lever for George W. Bush. They’re George W. Bush’s bosses, and by voting for him, they’re saying, “There is nothing you can do that is wrong enough, bad enough, foolish enough, destructive enough, that will lead us to vote for someone else. Here’s your blank check: write in what you like”.

Even if Kerry wins, tomorrow there will still be the problem of the people who voted for Bush. Some of them are dumb as a rock, and use their ignorance like a shield. Some of them are the smartest people I know, and that’s worse: they’re choosing to make Bush into a figment of their own imaginations, to overlook the facts, to cherry-pick the truth, to drift in an opium den dreaming of the world as they wish it. You can forgive an idiot, but you cannot forgive someone who would rather blind himself or herself than see reality.

Some Bush voters, as I’ve noted here many times, are neither ignoramuses nor self-deluding geniuses. They’re people with a plan. The plan is to capture the state and impose their order on the rest of us, to squeeze out all possible hope of a middle, to crush the last embers of an American consensus. They may have already gotten their wish. Because tomorrow they will still be there, undeterred, unbroken, unrepentant, regardless of what happens.

If Bush wins, so much the better for them. If Kerry wins, the Bush hardcore will immediately crank up the machinery of hatred and obstruction. Bush for them is only a symbol, a synecdoche of their larger social aspirations. Kerry, too, is only a placeholder. Behind Kerry are the real targets: all those who would vote for Kerry, the enemies in a barely-undeclared civil war.

There will be no peace on November 3rd. I have given up any hope of that, and with it, much of my desire to try and talk peaceably with people determined to choose ruin and incompetence, no matter how noble or ignoble their aims.

Nothing is really going to change in American life until the pressure we have relentlessly built up under the electoral surface blows catastrophically in some fashion. The election of either candidate will release almost none of that tension. It will now take enormous social trauma, political upheaval, bold leadership: some unforeseeable stroke of genius or idiocy, suffering or joy, to resolve the struggle into something else. Better or worse I cannot say, nor can I say when or what might blow the volcano. I only know that the pressure has risen to a point where an eruption seems inevitable.


October 21, 2004


Swarthmore is in pretty good shape financially, though, like many of even the wealthiest private colleges and universities, it has needed a surprising amount of belt-tightening over the last five years, in small but important ways. We’ve moved from an era where the outer bounds of planning were expansive to an era where any new initiative has to survive some really serious fiscal testing to even make it to the stage where it can be discussed. It’s hard for outsiders to grasp, but this college, like all of its peers, is sustainable at its present rate of expense only if the endowment makes its expected returns, tuition remains about where it is, and alumni give at a fairly predictable rate to both the annual fund and to major capital campaigns. This is not to say that it couldn’t survive if those things changed, but it would have to undergo some fairly serious structural alterations if those sources of operating monies were reduced significantly, among them a significant change in the faculty-student ratio.

I get fairly annoyed when either students or even sometimes colleagues call for some major new program without telling me what they intend to take away in return, or if not, where they intend to get new monies. I don’t care what the program or idea is—I may agree it’s a great idea or initiative—without some responsible and bold attention to the institution’s resources and their current distribution, it’s a non-starter. A few years back, some students wanted an ethnic studies program. Ok, might be an idea worth talking about. What program are we going to get rid of? We could take existing programs that seem to overlap that project and collapse them into one, or eliminate one outright in favor of the new idea. The students weren’t willing to talk about that: they just wanted more: more faculty, more resources, more courses. Where’s it going to come from? Oh, the college is rich, it’ll find a way, there’s money somewhere. Sure there is: it’s just that it’s already being spent. Tell me whose ox is going to get gored: I might even be willing to join in the goring. Just don't sit around waiting for Tinkerbell to sprinkle the proposal with magic fairy dust.

Right now, we’re in our third (fourth?) year of talking very seriously about various proposals for a living wage. The most ardent supporters of the idea on the faculty and among the students have consistently pushed for both a substantial increase in the minimum wage level that the college pays and for changes in our health care, child care and other benefit policies.

The basic thrust behind the campaign strikes me as sound. I’d like to be a part of an institution that takes on an obligation to treat all its employees more fairly and supportively than the general standards of the labor market in its area do.

I’ve been consistently wary about the actual campaign here for a number of reasons. I’ve been mildly annoyed by a few of the most fanatical student supporters, who persist in wildly-over-the-top ad hominem attacks on anybody expressing any doubt about the idea. That can be ignored as noise, though. There are some interesting intellectual and substantive criticisms that some faculty and administrators have made about the actual effects of having minimum wages that are far above the local market minimums, which may not be the effects the campaign is striving for. Another important issue concerns wage compression and its effects on staff who were previously well above the college minimum because of their skills but who will now be close to it. The living wage advocates have been a bit cavalier about a lot of these objections, but they’ve made some responses, some of them substantive, and have learned to take wage compression seriously. We’ve moved towards something that strikes me as both a better idea and a more affordable one, namely, means-tested subsidies in our health care benefits. I’d still like to see a child care subsidy for the lowest paid staff as well, but that’s not presently supported by our administration.

What has frustrated me the most, however, is that the main proponents of the initiative have consistently refused to talk about where to get the money, and for much of the lifespan of the living wage campaign, even refused to talk about how much their proposals would cost. That, they have said, is the job of the administration. Their position—even today, after much discussion—is that we should first declare that the proposal, including a specified minimum wage figure of $10.72/hr, is a transcendent moral obligation—and then, only then, decide how to fund it.

There is nothing that this institution or any institution does which is so morally or even practically necessary that we don’t need to talk concretely about what it costs. Cost-benefit analysis is a basic part of ethics, not a technical appendix to it.

For example:

•Do we have to pay our faculty and top administrators as much as we do? Probably not in order to recruit faculty, given the facts of the academic job market. You could probably pay half what we do and never go wanting for strong candidates in most subject areas, particularly the humanities. Probably, if we want to retain the faculty we most value, we do need attractive salaries—though we could easily replace them with junior candidates, there are nevertheless costs associated with the constant loss of mid-level and senior faculty due to under-market pay. Even the ones you keep might be disgruntled, and given that this college, like most, depends on a certain number of its faculty doing more than they have to or need to, that might be a problem. Balancing that, though, it's not entirely clear to me that students stop coming to a school if the faculty don't teach with enthusiasm or commitment. Just look at some of the most prestigious research universities. But is it a moral obligation to pay the faculty what they’re paid? Nope. Might that have to bow to some other obligation, ethical or practical? Sure.

•Do we have to give tenure? It’s a pretty damn expensive thing to do—it locks you into a thirty-year obligation to a particular field of specialization and a particular employee. Is it a moral obligation to stick with it? The way I see it, probably not: in fact, I think there are complicated collateral ethical problems with tenure as a system that go beyond its costs. But at the same time, it helps you retain a lot of the faculty you want, and it ideally and often practically does protect some of the most creative faculty when they innovate. The political costs of abandoning it would be enormous, as well, and might cost you many of the current faculty you want to retain.

•Do we have to have need-blind financial aid? It’s very expensive. We could probably fill the entire college with paying customers and do a lot of things with that revenue. But then we might not get the students we want, who seek socioeconomic and cultural diversity among their peers, and all the other benefits that we accrue by having a diverse student body. We’d be missing out on vital moral and social obligations we have (at least the way I see it). I’d say the need trumps the cost—but not in such a way that we don’t even have to have the conversation.

There are also things that might arguably be morally or socially positive which we just don’t do. We could make tuition free for every student we admit. We could give every admitted student an iPod. We could make a pledge to guarantee the post-graduate employment of our students who are first-generation college students. We could charge students an extra $1,000 a year and give the money to the nearby public school system in Chester as an outright grant. We could require the faculty of the college to teach in-service training for K-12 teachers in the Philadelphia area, or classes in nearby prisons, or to do some other relevant community service. We could specify a minimum level of accomplishment for potential students and randomly admit each class from that general group of prospectives rather than agonize over each particular decision. We could create more positions in subject areas that directly engage social justice in some fashion. We could invest directly in businesses located in nearby impoverished communities, even ones that might lose money. We could require the college to use only minority-owned contractors. We could close the college, sell its buildings, and give away its endowment, on the logic that closing would monkey-wrench the credentialing of our small fraction of tomorrow's ruling class and that there are people other than ourselves who need the money more anyway. We could get rid of the humanities because they're hopelessly impractical, useless or theoretically incoherent. We could dump the history department because history just opens old wounds to no good end. We could get rid of the sciences because they reproduce the military-industrial complex. We could refuse to buy any journals which were not open-access publications just on principle. Some of those things might save us money; most would cost us a lot.
We presently don’t do them because we recognize that every single one of them comes up against practical and ethical objections and competing priorities, and because most of them cost more than the ethical benefit they would deliver--or perhaps because some of them draw upon a highly questionable, debatable, and in some cases flat-out stupid conception of ethics, politics or institutional philosophy.

Nothing is so important in either moral or practical terms that it gets a free ride. But that’s what the living wage proponents have argued: let the staff figure out how to pay for it. That’s not work you pass on to someone else, as if they’re just filling out your tax forms for you. It’s political work, it’s community work, it’s the heart of the matter. What are you going to stop doing that you presently do in order to do this new and worthy thing? Or if you’re not going to stop doing something, where are you going to get the new funds? Are you going to charge more in tuition?—a decision that also has social justice implications. Are you going to draw more heavily out of the endowment? That’s pretty risky, and impinges on other uses of the endowment. Are you going to beg for alumni to give even more than they already do, to the tune of a fairly significant sum required for the proposal? You can try, if you like: I wouldn’t give much for your chances.

It’s not just that the initiative has been pushed without attention to its costs. It’s also that it has never been stacked up against all the other comparably expensive things that we might do, and judged in relation to them. Does it compete favorably with adding new faculty positions in areas that we don’t address? Yes, I think it’s much better than adding new positions, but it’s not as if we’ve ever talked about it that way. Does it compete favorably with trying to slow the rate of tuition increases in the future? With extending favorable benefits to adjunct faculty or athletics faculty (another issue we’ve talked about more of late)? With improving our general benefits package for all employees, regardless of rank? With ordering books and journals for the library? With improving the information technology infrastructure of the campus? With new buildings or improvement of facilities? With new staff positions? And so on. It’s not as expensive a proposal as some of those, and it’s much more expensive than some others, but anything you choose to do affects other obligations you presently have and some you might take on in the future, and many of those obligations have an ethical component as well as a practical one. It seems to me that every discussion of a new initiative ought always to be conducted with an awareness of the entire universe of potential new initiatives that compare to it.

If you can’t tell me where you want to make sacrifices—or you only have feeble, painless, and inadequate suggestions designed to deflect the political burden of making hard choices—then your proposals aren’t something I can take terribly seriously. These are dilemmas we face at the small scale of institutional life as well as the grand stage of national politics. At the national level, I shortly have to decide between somebody who appears to be modestly irresponsible in his proposals (Kerry) and somebody who has an established track record of catastrophic irresponsibility (Bush). That’s a pretty easy choice. It ought to be easier still in our everyday sociopolitical worlds, but it rarely is: ducking the hard questions is a common political art.


October 20, 2004

Class War: The Republican Party's New Favorite Sport

Everyone’s agog at some choice quotes in Ron Suskind’s interesting piece about the Bush White House from this past Sunday’s New York Times, and rightfully so. I suppose you could write off the most damning, frequently cited comment from one staffer that basically embraces a half-postmodernist, half-fascist conception of social reality as the views of a nutty outlier, but it seems to me to be of a piece with the neocon faith in the transparency of other societies to the application of power, the belief that unyielding will and force alone can conform history to all our desires.

I’m more interested in the views of Bush associate Mark McKinnon, whom Suskind quotes at length. McKinnon confirms the accusation that the Republican Party under Bush’s leadership has developed a balls-to-the-wall, scorched-earth commitment to class warfare, to pitting a lumpenbourgeoisie from Middle America against the bi-coastal elites and chattering classes. It ties in nicely with Thomas Frank’s analysis in What’s the Matter with Kansas?, except that I think Frank doesn’t make nearly enough of the contradiction between the business elites in the Republican Party and what McKinnon calls the “busy working folks who don’t read”. I think they’re cutting their own throats with this strategy: they are not going to be the masters of class warfare, but mastered by it.

It’s absolutely true, as many have noted, that the Red/Blue state division is not terribly descriptive of the actual way people live, relate and think. But then the Hutu/Tutsi division was not a good description of the actual lived identities of Rwandans before the genocide: people intermarried, they effaced or dispersed ethnicity in everyday life, they vested more of themselves elsewhere, they viewed hardcore ethnic ideologues as alien and intrusive. None of which made a damn bit of difference once the state mobilized a political project of genocide through the lens of Hutu and Tutsi identity. Suddenly something subtle and mutable became the divide between the quick and the dead.

We’re not at the edge of anything so drastic, but the “senior White House aide” who apparently believes in the power of the Bush Presidency to remold social reality in defiance of those who occupy the “reality-based community” is really not that far wrong. Push enough political and social projects with high enough consequences through the prism of culture war, and it stops being an amusing side drama composed of Dan Quayle and Murphy Brown and starts being the guillotine that separates the American Republic into two bodies. You might gather as an extended family today and find Republicans and Democrats, liberals and conservatives, atheists and evangelicals, all dining at the table, managing any tensions easily enough. But you might find tomorrow that those choices, those identities, suddenly flash into being incommensurable markers dividing life and death, success and failure.

Not by our choice, none of us, but because a rhetoric of class war that was originally deployed cynically and defensively (Nixon’s “silent majority”), and then both optimistically and opportunistically (Reagan’s demonization of the state as an intrusive elitist actor in the lives of ordinary Americans), has morphed into hardcore militant commitment among Bush loyalists.

As Nixon voiced it, there was even some truth to it: the counterculture and the antiwar movement were never as widespread or evenly distributed in their social and cultural power as they liked to pretend. As Reagan voiced it, there was certainly some truth to it: I think many Americans do experience the state as intrusive or incompetent and controlled by elites (though they also tend to forget when it is helpful and productive).

Nor has Bush arrived at class warfare all on his own. This moment is the bitter fruit of two decades of efforts by liberals and radicals to capture the institutions of civil society not through persuasion, but through a kind of pseudo-Foucauldian or Gramscian conceptualization of those institutions as capable of remaking consciousness, identity and practice, of doing what many on the left had realized the state was unable to do. The hue and cry about “political correctness” is often miscast, exaggerated, or mislocated, but there is something real at the bottom of it, something more complex and pervasive than tangible institutional manifestations like speech codes.

It’s the spark behind the energies that inform “South Park Republicanism”, a reaction to the domestication of the counterculture by educated Baby Boomers into something rather resembling early 20th Century social reformism. The social reformers of the early 20th Century were often people who can be found in the progressive family tree, like Margaret Sanger—but whether they have that pedigree or not, they were part of a general project of middle-class intervention into the intimate lives of other Americans. There is much of the cultural left’s approach to civil society between 1970 and 2000 that echoes that moment, that sought to domesticate and civilize the practice of various demonic Others: white men, the rural, the religious, the housewife. I came to political consciousness under the sign of such a project, and as an educated snob, it suited my sensibilities very well. I was the one who would have the keys to the world of symbols, I was the one who could master the etiquette of self-critique and self-abnegation, I was the one who could carry out exotic projects of self-transformation through ethnographic encounter. If language made consciousness rather than consciousness preceding language, how comforting for the educated masters of language! They found the keys to the kingdom of justice, and surprise, surprise, they were sitting in their pockets all along.

No baby with bathwater here: many of the transformations in American culture and everyday practice since 1970 are entirely good and productive and have even had some of the predicted effects on consciousness and social relations that their chief proponents envisioned. But the slide towards the replay of early 20th Century social reform gave too much authority and capacity to our own generation of Carrie Nations, to a censorious, intolerant, self-righteous streak which was all too easily—and often accurately—identified as the provenance of elite social classes or constituencies. Why many either suddenly lose faith in class as a meaningful social category when they have to think about their own political identities, or worse yet, apologize cravenly for their class background and self-consciously beg for people from other socioeconomic backgrounds to absolve them is no mystery, but it is a painfully predictable tendency. (I am as guilty as anyone of much of this in my own political history.) Political labor within civil society for many progressives in the last three decades really was an intrusive, controlling, and often remarkably graceless affair, and small wonder that it was easy for the class warriors of the Republican Party to first cynically and then increasingly confidently characterize bicoastal elites as the enemies of the Middle American lumpenbourgeoisie.

That was never a fair argument, but it was given legs by clumsiness and smugness, and by an inept tendency to pass off our own uses of liberty as universally powerful transgressions, as political projects rather than cultural preferences. The point ought to be, and ought always to have been, that we recognize with all Americans that being born again in an evangelical baptism stands equal to having sex with someone of your own gender--not in meaning, not in essence, but as manifestations of the freedom we all share. Watching The 700 Club stands equal to watching Tales of the City: we are, or ought to be, united in our freedoms. As soon as somebody regards their own sexual choices as a transgressive attack on the sexual choices of another, as a transformative project, they’ve chosen another path. Yes, yes, the other team did it first and still does it now: that’s important to remember. Every time I see someone screaming about how the left politicized the academy, I’m astonished by the historical dishonesty that requires. Every time I see someone talking about how homosexuality impinges on their own sexuality, I have to ask: why can't you see how that looks on the other side of the mirror? Civil society has always been a site of repression and politicization: it was not made that way by the post-1960s left or counterculture. But the move that we made in the domain of culture and consciousness was a tit-for-tat strategy. We have been repressed; we cannot be free if we do not remove the repression. How do we know repression? It is that which we are not. We like diversity, as long as it's our kind of diversity.

This was in purely practical terms a very bad way to go, because McKinnon is right, in the end, as was Nixon: there are more of “them” than of “us”, though at the same time, there are still more who watch both The Sopranos and Lawrence Welk, or who in various ways refuse categorical choices of this kind. In philosophical terms, it was even worse: it was what Jonathan Rauch has argued is the classic sin of the post-1960s Western left, to choose the creation of equality over the defense of freedom. It handed one group of Republican conservatives a loaded weapon, and they have fired it with cheerful abandon, first at their enemies and now, increasingly, at their own temples. They are now captive to class warfare, having walked in the cage, sniffed at its corners, turned the key on the lock and swallowed it into their own gullets.

The real hope at this point is that most Americans will remember that they are neither Hutu nor Tutsi, neither Red nor Blue, neither politically correct nor ogrish bigots, neither bicoastal Times-reading elites nor Middle American jest-folks. The trope of class war has been spoken before in America, and it has rarely met with an understanding audience, even when it was spoken with some justification. Now is another time when Americans have to unsympathetically look on and let the children who want to play with fire burn themselves up.


October 12, 2004


I am struck at how peevish some of the discussion of Derrida’s legacy has been, both among his supporters and critics.

Whether he was a “real” philosopher or not strikes me as wholly immaterial save for academics who account intellectual influence or legitimacy strictly by whether someone is in the correct department or uses the correct narrow disciplinary forms for citation and publication. He can hardly be blamed for all the various ills attributed, fairly or unfairly, to postmodernism, poststructuralism or even deconstruction, not least the spread of a modality of analytic writing among American academics that tried to imitate translations of Derrida’s prose.

My first encounter with Derrida was as an undergraduate. I’d found Foucault’s writings very interesting and stimulating so I asked a professor I really admired to do a tutorial on modern critical theory with me. Some of what we read I liked a lot. One or two things I shirked on because they were too difficult, and was forced to reencounter them later. (Being and Time was one of them.) Derrida I just sort of shrugged at, and asked, “What’s the big deal?” One of the things that came out of the ensuing conversation was that you sort of had to be there at the right time and place for Derrida’s work to be intellectually transformative, that he was an intervention in the truest sense of the term. I think that’s about right. Just as many of Marx’s critics scarcely recognize the degree to which Marx produced much of the common social and historical frame of reference and vocabulary that the critics themselves use, so too do many of Derrida’s critics fail to recognize how much Derrida and his associates helped to normalize certain propositions about interpretation and communication that we do not specifically attribute any longer to them. Almost all of us take for granted now the permanent imperfection of representation and communicative action, the inevitability of a profound and important slippage between signifier and signified, reader and text, but this wasn’t always a given in humanistic writing.

Derrida’s greatest admirers are right to insist that we recall the importance of that intervention, which was meaningful to different people in different ways, and in its wake, unmemorable and unnoticed by those who lived in its aftermath because it had become common sense for us. Of course binaries were silly and arbitrary! Of course texts have no fixed or final meaning! In the humanities, all of us have inherited a particular armory of critical techniques and powerful gestures that are a commonplace. We all know the art of kicking the feet out from under any text or interpretation we encounter, of destabilizing casually affirmative interpretations about the meanings found in a text.

Perniciously, many of us also came into our scholarly practice (or other practices outside the academy) less and less capable of saying what we ourselves knew, less and less capable of making confident interpretations ourselves, more and more qualified in our own analyses, to the point that we tried to say everything and nothing all at once, more and more reduced to too-clever-by-half gnawing at the work of others, the infantile destabilization of all other claims and arguments.

The greatest problem I have with both Derrida and his most ardent intellectual followers is the absolutism of their favored move. To note the impossibility of perfected communication, the impossibility of ever knowing with certainty what is meant or said, the impossibility of knowing the subjectivity of others, became for Derrida and some others the impossibility of any communication, any knowledge. A sleight-of-hand moved us from trying to discern the singular most correct interpretation of a text to believing that any given text held within it an inexhaustible infinity of possible meanings, none of which could be subjugated to another on the grounds that one interpretation was more right than another, more communicative than another, more authentic to itself and its readers.

In some ways, this is yet another way we can see how much of postmodernism is less “post” and more the fall of a religion from its faith, the bitterness and lingering of a frustrated modernism. Derrida was oddly empiricist, in his way. Frequently, he would enter an ongoing discussion about a text, a turn of speech, a political regime, a sociological construct not by polymorphously opening up meanings and possibilities but by insisting that the one meaning that should be completely and utterly denied to us is that meaning which we are most commonly accustomed to seeing. Counter-intuitively, Derridean deconstruction was not a permissive practice, but an inhibitory one: its favorite word was “No!”: no, this saying does not mean what we think it means; no, this book does not mean what it is said to mean; no, this government does not act as it says it acts; no, there is never male and female alone, always there is much more. You may have any meanings save those you are familiar with, trust in, assume: those are denied to you, because they are untrue.

This back-door empiricism, this authoritative negation, was one component of the interior absolutism of Derrida’s critical method. The other was the cry of all or nothing at all, that if communication could not be perfected, then there was no communication, if texts could not have a correct meaning, they meant everything, anything, nothing in particular. This is the rear-guard modernism of Derrida (and much poststructuralist or postmodernist thought) and it reminds me of nothing so much as Einstein’s later career. Most non-scientists scarcely appreciate the degree to which Einstein, the modern embodiment of science, was in fact wildly, quixotically, bitterly and almost theologically wrong about the key tenets of his science for much of his life. What he rejected was a probabilistic universe (the source of his much-misused “God does not play dice” comment). So too, in an oddly related way, Derrida. If meaning cannot be guaranteed with finality, then there is no use to talking about it at all. If interpretation cannot be absolute, it cannot be done save as a negation of all positive acts of interpretation. The massively excluded middle: that texts are more likely to mean some things than others, that some interpretation is more right than other interpretation, that communication is subject to some but not infinite slippage, that other subjectivities are not perfectly knowable but neither are they perfectly mysterious: all this was not so much denied as evaded by Derrida. Confronted with a probabilistic understanding of communication, subjectivity, meaning, interpretation, even the most devoted Derridean will concede that of course this is true, and likely angrily deny that Derrida ever thought otherwise. 
The problem is that Derrida did not ever move into a mode of critical praxis in which it was possible, once again, to make affirmative statements about what is more and less likely to be true, to chart a course beyond the absolutist “No!” directed at anyone so foolish as to claim a finding about meaning and the proclamation of infinite slippages and endless irresolvable mutability in representation. To move from a critique of knowledge back to the practice of it. That work he left to the rest of us, and so left himself behind in the infinite regression to the moment of his own eruption into the space of humanistic practice.


October 12, 2004

They Call Me Dr. Pangloss

I just started working my way through the Emma Peel megaset of The Avengers, one of the indulgences from my 40th birthday. Emma Peel beats a sports car any day.

I was looking at my Amazon DVD purchases in the last six months or so. The Avengers. Invader Zim. The Battle of Algiers. The fourth season of The Simpsons. Hellboy. A collection of old Felix the Cat shorts. Volume 1 of the Batman animated series. Casablanca. Treasure of the Sierra Madre. The complete series of Firefly. Seasons 1 and 2 of Ren & Stimpy are on their way.

A bit narrow, but that’s the difference between movies I want to own and movies I’ll rent—they tend to be things I’m interested in playing again and again (or that other members of my family might), or things I’m worried might not be available indefinitely, like Invader Zim.

My book orders are much more diverse—I think Amazon’s recommendations system is beginning to find me profoundly confusing. I try to keep up with my books but my to-be-read shelf has grown to about twenty volumes.

Last night, I surfed the Web and played some City of Heroes, after making some risotto for my family and enjoying two glasses of a decent red wine. I channel-surfed a bit before going to bed, hoping to catch another episode of Celebrity Poker Showdown.

I don’t really spend much on clothes (it shows, I’m sure) or durables, but my wife and I are both pretty profligate when it comes to popular culture and books. I’m happy with that. Let someone else worry about beautiful furniture and elegant clothing. That’s cool too.

I don’t need to be reminded of how overwhelmingly privileged a life I lead. I know it very well, and it is my deepest wish that my comfort and prosperity spread to all corners of the Earth as soon as possible.

I also know that some people would look at my cultural consumption and think not so much that I am unfairly privileged but that I am degenerate, an example of an over-saturated, over-busy, perverse age. Cartoons and computer games! The Web! Television! Movies and shows about superheroes and secret agents! O tempora! O mores!

I went through a phase rather common to scholars who get drawn into popular culture where I associated that censoriousness with several traditions of the left—some strains of Western Marxism and postmodernist thought and some more muscular kinds of old-left activism. I still recognize that lineage in people like Juliet Schor, Neil Postman or Thomas Frank, but it’s become clear to me that left and right are not very good markers for sorting out those who generally embrace the cultural present and those who turn away from it in disgust. There’s a more fundamental schism here, between traditionalists who hate the world they live in and wish they lived in another, usually imaginary, past moment, and those of us who embrace the dizzy, glorious excesses of the current cultural dispensation, warts and all.

I look at my DVDs, my television shows, my books, my comics, my computer games, at something like The Avengers and I think to myself, “This is not the best world that ever could be, but it’s a damn sight better than any other historical world that humans have inhabited so far”. Some despair at the size of it all, some despair at its variety, some despair at what they see as the lack of variety. Some bemoan the ironic nostalgia or pastiche of popular culture, others complain of its superficiality, and still others of its immorality or vacuity.

Not what I see. What I see is the unlocking of human imagination, the democratization of creativity, an explosion of meaning and interpretation and possibility. Of course the cultural world is beyond any of us now, too big to know or see or understand. So are all the stars in the sky, but that doesn’t lead anyone to call for a permanent shroud of clouds to blot out that hateful infinity. I love the profligacy of modern popular culture, I’m delighted by the thousands of clever and interesting texts, songs, web pages, comic books, films, television shows, performances, artworks that appear every day, even knowing that I’ll never see or know about most of them.

Embracing the whole doesn’t require you to embrace every part of that whole. You can still hate a particular book, a particular film, or a particular system of cultural production. You can still shake your head at the short-sightedness of the Hollywood system, complain of the glut of dully imitative Top 40 songs, or bitch about massively-multiplayer computer games. It’s just that no act of critique calls into question the phenomenon of the cultural moment itself, the architecture of modern global culture.

I’m very concerned at the danger of a modern enclosures movement, where the quiet eddies and subcultural nooks of global popular culture get dragged inside giant corporate conglomerates and intellectual property law is used to sterilize rather than liberate the work of cultural creation. It’s a real danger we face, a reason for vigilance. The twin dangers of regulatory zeal and monopoly ownership could kill the beautiful profligacy of global popular culture at the cusp of its greatest achievements.

I’m less willing to credit complaints about cultural imperialism, because I don’t see in the outpouring of global popular culture the monolithic, unvarying homogeneity that most of the chief complaints about cultural imperialism attribute to modernity. I don’t see expressive culture as a zero-sum game. But it’s true that those forms of expressive practice which are fundamentally antagonistic to a cultural marketplace—the equivalent of usufruct ownership of land, the kinds of cultural practices that are unowned and unownable, collective and communal, and that require a protected relation to power—are threatened by the explosive force of market-driven popular culture. My feeling about that is the same feeling I have about gemeinschaft in general: good riddance. There is a thermodynamics to hermeneutics: almost no meaning, no idea, is ever truly lost or destroyed forever. The solids which seemingly melt into air are still there, and any sudden cooling of the atmosphere crystallizes them anew, often in surprising or unexpected places and forms. All that is lost are the forms of social power that reserved particular cultural forms as the source of social distinction or hierarchy, all that is lost are the old instrumentalities of texts, performances, rituals. The achievement of liberty loses nothing save the small privileges of intimate tyrannies. Culture, even in the premodern world, is ceaselessly in motion and yet also steady as a rock. In getting more and more of it for more and more people, we lose little along the way. The existence of South Park does not kill opera or gamelan.

Injustice and inequity exist widely in the world we have inherited. They matter, enormously, and we all bear responsibility for their existence, some of us more than others. The luxuriousness of my life against the poverty of many other lives matters. I have no easy answers for this, but I know we must answer to it.

But against the traditionalists, the censors, the snobs, the moralists, the monochromatic, those who want less not just for themselves but all the world, who want only their own vision of what is refined and elegant to propagate, who so fear the authentic popularity of global popular culture that they imagine its successes to be impossible save by conspiracy, subversion and subjugation—against them, I have an answer, from whatever ideological point of origin they hail. The answer is no.


October 5, 2004

From Larry Bowa's Clubhouse to the Streets of Fallujah

The Phillies fired Larry Bowa just before the end of the season.


This being Philly, of course there are some fans whining that Bowa’s not at fault, presumably because they think Bowa’s a genuine South Philly kind of guy and he couldn’t possibly be responsible for the players failing to play up to their abilities or the front office making the wrong trades or deals. Not me. I’ve been sure Bowa was the big problem pretty much from the moment that the Phillies hired him, and I would think that anybody who had followed his career as a manager would recognize that as well.

Then I asked myself when he got fired, “Why am I so sure? What’s my opinion based on?” I’m not in that clubhouse. I’m not a baseball player or a baseball manager or an ESPN anchor. I just watch the game and read the box scores. I don’t even play rotisserie any more—I just couldn’t spare the time in April and May in an ordinary academic year to do the dealing and preparation required.

So how do I know, or think I know? The same way most of us know what we know: a combination of information, theory and intuition. I’ve read a decent number of press reports and interviews about Bowa and the Phillies (and about Bowa’s work with previous teams). There’s a diversity of opinion out there, and some of it comes from obvious axe-grinders like Tyler Houston. But it’s hard to miss the patterns that emerge in that informational architecture. Even someone like John Kruk, who has gone out of his way not to slam Bowa, ends up confirming some of the basics. Bowa has persistently treated professionals like a bunch of kids while running the clubhouse like a caricature of boot camp. I’m sure it’s not that way all the time, or even most of it, but obviously often enough for it to emerge as Bowa’s signature style.

That’s not enough to come to a conclusion, however. Because it’s one thing to feel some confidence that this is the way things were on the inside of Bowa’s tenure, despite the fact that I’m sure Bowa himself and some of his players don’t see it quite that way, and another thing to see this pattern as explanatory. That involves not just information, but a theory of human relationships and even a kind of intuitive emotional intelligence about them. You don’t just have to know that this is how Bowa acted, you have to assume that the way he acted is a primary cause of the team’s underperformance. (Which, by the way, involves another complicated assumption, that the Phillies did indeed underperform, that they plausibly could have been much better than they were.) Here you can’t point to anything concrete. There’s no information that will factually confirm this argument. You can only say, “This is how I think human beings in general work, and how a bunch of male athletes between 18 and 40 in particular work”, to argue that Bowa’s style was very much the wrong kind of leadership. That either resonates with you or it doesn’t, and there’s not that much I can do to convince you if it doesn’t.

I’m going on at length about this because it seems to me this is how a lot of what we know comes into being. There is really very little we know from direct or eyewitness experience. Nor is it clear that being a direct participant yields information or knowledge that absolutely trumps all other kinds of knowledge. We know very well from recent research, for example, that witnesses to crimes frequently get some very basic details of their experience wrong. Eyewitnessing is important, and there are things you can’t know if you’re not directly there. We have to make a lot of judgments every day, some of them of critical importance, based on indirect, reported information and intuition.

Iraq is one of those judgments. I keep being struck in many conversations online and off not by the selectivity that different reasonable individuals exhibit in the information they gather about Iraq—we’re all selective, we have to be—but by the global statements about the nature of information about Iraq that they subsequently make, and how they use these global statements to categorically disregard other arguments or representations of the situation. I’ve seen a number of defenders of the war attack its critics for relying on press reports, or attack the press reports themselves for exhibiting false selectivity, sometimes both. But where are the defenders of the war getting their information, then?

It’s the same problem I have with many of Noam Chomsky’s arguments: he operates from a fundamental presumption that the press is firmly and structurally enmeshed in hegemonic defense of imperialism, but then uses press reports to document many things about the state of the world in general. If you don’t have an explanation for why a given report or fact or document has escaped what you regard as a global problem, you shouldn’t be able to use it to buttress your own understanding of the world. Chomsky either has to develop a much more nuanced view of hegemony or has to restrict himself to sources that are structurally counter-hegemonic. Defenders of the war have to find channels of information that are entirely free of what they claim is a global taint with both the information and with the use of that information, or they have to deal with the total plurality of the infosphere that surrounds the war—and not just shrug it off as if that information is self-evidently untrustworthy by the very fact of it being reportage.

This has been a very large-scale issue with a lot of postmodernist or poststructuralist writing in the humanities and social sciences. Much of it, taken for what it seems to say, ought to make it impossible to make what passes for normal evidentiary use of texts and documents. But I’ve read so many manuscripts now where the author theoretically kicks out the legs of the chair he’s standing on and then tries to float immaculately on air. It might surprise some conservatives and skeptics who probably could uncork a rant about “postmodernist academics” at a moment’s notice, but I think this particular rhetorical gambit has become even more profoundly characteristic of conservative thought and writing than any form of consciously “postmodern” writing.

If I want to say Larry Bowa’s the problem, and good riddance, I’m honor-bound to listen to Bowa’s own publicly expressed views (though so far he hasn't had much to say post-firing) and run them through the same intuitive and rational machinery that I use to process the rest of the information I’ve got. I can say why I think someone who weights pro-Bowa information more heavily than the rest of the information is wrong, but if I were to say, “Eh, it’s all bullshit, none of us are in that clubhouse”, then I’ve deep-sixed my own views and all other views as well.

Not to mention the fact that this is the first step on the road to solipsism: it’s a natural start towards saying that no one knows anything but their own experience, and perhaps not even that. Everyone, from George Bush down to Joe Six-Pack, ought to have the most pluralistic sources of information possible about anything they’re interested in or care about. Everyone ought to specifically look for and solicit information that dissents from or contradicts their own preferred conclusion. You can’t diss someone else for relying on sources of information that you yourself also make use of. You can diss them for coming to different conclusions, but even there you have to have a degree of humility if the other person seems to be making a good faith effort to explore the same ecology of information that you’ve traversed.



October 1, 2004

Stick A Fork In the Road

When I talk with people about contemporary Zimbabwe, they usually have two questions. The first is what I make of the contemporary situation—whereupon I lay out my argument that the international perception that the crisis is primarily about white farmers and government land seizures is a profound misunderstanding of the problem that essentially falls for the Mugabe government’s diversionary tactics.

The second is what I think can be done about it, either by outsiders or Zimbabweans themselves.

The answer I have to that is, “Not much”. I don’t seem to be the only one with that answer. Pius Ncube, the courageous Zimbabwean archbishop who has emerged as one of the outspoken critics of the Mugabe regime, has said much the same thing, that outsiders and Zimbabweans alike are both oddly powerless in the face of misrule and disaster. The popular discontent in Zimbabwe runs deep, particularly in urban areas, but the governing party brutalizes active or vocal dissenters and uses food aid and other public institutions to rein in everyone else. Under those conditions, it’s wise and fair not to expect too much from popular opposition. People do what they can, and often, what they must, but the state has many tools at its disposal and few if any scruples about their use.

Observers have speculated for years that change might come from within the governing party rather than from outside of it, that younger members of the party might mount a challenge in recognition that it is better to take over the ship and right its course than sink with it. The senior figures in the party will fight that to the end, of course: any break in the current autocracy spells disaster for their personal survival. A new reformist regime, even one originating from inside the governing party, is eventually going to have to clean house in order to move forward and reforge a connection to the nation.

This is one of the big reasons why few kleptocrats do what you’d think a rational person might do after plundering their nation, which is to just pack up and leave once they’ve swelled their Swiss bank accounts to sufficient size. Wouldn’t you rather live in obscure comfort abroad than hold onto power and have to run desperately one step ahead of the eventual palace coup or revolutionary uprising? But Mugabe couldn’t quit even if he wanted to: the older party hacks and generals around him wouldn’t allow it. They’re truly a collective interest, a sort of “mini-class”. When he goes, they probably go too.

The young and ambitious members of the governing party might want to save the government from itself and satisfy their own ambitions in one go, and have wanted that for some time. There was a moment where everyone thought articulate, intelligent, courageous Eddison Zvobgo would manage to force Mugabe out of the leadership, for example. But now Zvobgo is dead (after having survived a probable assassination attempt in 1996) and Mugabe is still there, a mummified despot determined to preside over the ruins he made. You can prophesy the coming of a Gorbachev or de Klerk, but I think you’ll always be surprised when he actually shows up. Even if there’s a coup within a party, there’s no guarantee of a real reformer seizing the reins. Sometimes all you get is a younger autocrat.

What Zimbabwe now is, it shall be for many years to come, even after Mugabe is put six feet under. But what it now is, it did not have to become: the men in power chose this future for all the wrong reasons. They did not have to. The fact that most postcolonial states in Africa have tended towards the same gruesome political fate is a parallel conjuncture, not a structural inevitability.

The distinction between those two things is on my mind all the time both as a social scientist and as a concerned citizen deeply worried about the political direction of my own nation. Which is the 2004 election? Conjuncture or structure? When I see the numbers of people who appear ready to vote for George Bush, I’m forced increasingly to think the latter, that there is some social formation that has cohered around the figure of Bush that is effectively immobile and unpersuadable. A vote which is profoundly integrated into and expressive of fundamental social antagonisms between large constituencies in the United States, more a kind of "fuck you" to groups of Americans you hate than an "I think, though I'm open to thinking otherwise, that this is the right thing to do" kind of choice.

I agree that there’s a kind of short-term rationality involved when social conservatives vote for George Bush—even when they vote for someone as unambiguously ill-suited for public office as Alan Keyes. Those men really do represent the primary interests of that constituency. I think there’s a long-term irrationality about that choice, because as I’ve said in this space before, to bid for control of the state to impose a cultural and social revolution on a large plurality opposed to it is at the least counterproductive and at the worst apocalyptically self-destructive. But I understand it.

I still don’t understand fully the other part of the solidly pro-Bush constituency that I encounter online and in everyday life. It’s not so much irrational as arational in my reading. I don’t understand where it’s coming from in social terms--it seems rather heterogeneous and distributed--or whether it is in fact a structurally immobile, deeply fixed political posture whose terms draw from something prior to and unaffected by conjunctural political thought or experience. It doesn't seem economically or politically self-interested to me in the way that Thomas Frank argues it is (I have been thinking a lot about Frank lately, but Michael Berube has said most of what I might say, and far better than I could). Maybe most of Kerry's voters are the same way: using their vote as a communicative act, expressing deep-rooted social identities in antagonism to others, rather than as a reasoned, affirmative choice.

I don’t really know if there’s much of anybody making what I would call a choice this November, voting one way while conceivably holding out the possibility of voting another under some other circumstance. I hear from the non-religious Bush voters that they don’t like or trust John Kerry, but it’s not clear to me that they would like or trust anyone but Bush in 2004, perhaps not even Republicans at the fringes of the party consensus like John McCain or Colin Powell. I begin to think that their feelings about Kerry come from someplace that precedes conscious thought, from the sources and wellsprings of their own social identity and self-perception.

Is there a fork in the road here? There was a fork in the road in Zimbabwe that its ruling elites chose around 1986 or so. Today there are no forks in the road: just the pain of diminishment and loss, a train track through a long, dark tunnel with only the dimmest prospects of a far light at the end to hope for and not yet see.

I think there was a fork in the road in the United States (and the world) on September 12, 2001, a choice within the Bush Administration and the Congress. There was a choice. The choice was made, and now it forecloses many other choices. Is there another fork now? Increasingly I think not. I've often been on the losing side of votes or decision-making processes, whether it's for President of the United States or on an academic committee. In strong communities and nations, where the process of voting is perceived to be fair, and where everyone perceives they have at least a chance to win, e.g., that the outcome is potentially mobile depending on some process of open-ended discovery (of evidence, of facts, of arguments), even losers accept the result and bind themselves more strongly to the institution or community in the process. In a political contest where the results feel structurally predestined, where there is no mobility or movement possible, disaffection and alienation from the process and the larger institutions is the least of the bad outcomes that can follow.

Whether John Kerry or George Bush wins, they may win on the strength of votes which are given to them regardless of who they are or what they’ve done, on the strength of a kind of voting which is ultimately horribly weak. A vote which is cast from who each of us is, and against what we each think we are not. A vote which divides the house against itself rather than resolves out the will of a united nation.


October 1, 2004

Small Invasive Species Addenda

Got a lot of really interesting and intelligent responses to the September 29 entry.

1) Bull trout are actually "char", not true trout.

2) It's actually brook trout that threaten bull trout the most due to competition in preferred spawning habitat.

3) Hatchery trout are also bad because they spread diseases--for example, this may be what has given whirling disease such a leg up in US trout populations.

4) All fishing is an aesthetic experience, and so what is wrong in that context with having an aesthetic preference for clever, well-adapted native trout rather than shabby-looking and frankly stupid hatchery rainbows? I agree with this, but with two rejoinders. First, that the great populations of brown and rainbow trout found in the Bow River south of Banff were originally planted (both of them sort of by accident, actually) and are now anything but shabby or stupid-looking. So there might be a more aesthetically pleasing kind of hatchery practice than dumping truckloads of beaten-up looking rainbows every year, more akin to stewardship. Second, I'm much more comfortable talking about the no-invasives approach if it's couched as a frankly aesthetic one--the problem I have is that it often masquerades as a much more dispassionate and "scientific" argument. When I think about the animals and plants I like and dislike and the environmental tableaux I like them within, I'm often struck at the heterogeneity of sources for my attitudes--a mix of larger rationality, self-centered judgments of utility, personal but defensibly coherent aesthetic preference and frankly irrational biases. In that I suspect I'm little different than policy-makers at Parks Canada or someone like E.O. Wilson.

5) Gary Jones makes some great comments, including some pointers to very specific connections to Nazism and eugenicist kinds of arguments about invasive species and nativism. This is indeed a potent and provocative connection. I also think, however, that there is a much more diffuse kind of idea burbling under the surface, the same kind of quasi-anthropological desire to keep "native human cultures" pristine and unchanging that crops up so often even today, in both offensive and inoffensive ways, but always with troublingly weak and contradictory underlying logics.


September 29, 2004

Of Bull Trout and Purple Loosestrife

I love to travel in September, but normally, like most academics, I can’t. Since I’m on leave this year, I decided to seize the chance and take off with family members for a trip to the Canadian Rockies to celebrate (if that’s the word) my 40th birthday.

I found myself mulling over whether it would be possible to just chuck it all and move to Jasper, Alberta. It was an amazing trip—the first real “vacation” I’ve taken in a long while, where I wasn’t attending a conference or looking at an archive or something similarly productive. I love the mountains of western North America in general, but the landscape between Banff and Jasper trumps everything else I've seen.

One thing that I did find surprising and disappointing was that the fishing wasn’t terribly good. A bit of that was the time of year, and a bit of that was that we were in the wrong place to fish some very productive waters (the Bow River around Banff isn’t nearly as good to fish as the Bow River south of Calgary) or we didn’t have time and inclination to fish the way we ought to fish (we tried shore-fishing at Lake Maligne above Jasper, when you really need to get out in a boat and troll the deep water). Also, the fishing had just closed at a few places.

Still, there were waters that looked to me as if they should have been good trout waters, both small lakes and rivers, but turned out to be pretty well devoid of fish. I did a bit of snooping and found out somewhat to my surprise that Parks Canada is aggressively anti-stocking and has been ever since 1988. This explained in particular why some small lakes that have no outlet or inflow (like Horseshoe Lake near Jasper) were devoid or nearly devoid of trout.

The purpose behind Parks Canada policy appears to be two-fold. First, to remove trout from aquatic environments within the National Parks where non-native predatory fish are deemed destructive in their impact on the ecosystem; second, to protect native species like the bull trout and the cutthroat trout.

The first objective I can see—it’s easy to forget that the introduction or repeated re-stocking of trout into waters that wouldn’t normally support a trout population has a significant impact on other organisms like amphibians, particularly if the water is cold enough for the trout to reproduce.

The second objective I feel a bit more ambivalent about. If rainbow trout elbow out bull trout, then that’s a problem from the standpoint of losing a species of trout, but on the other hand, rainbows pretty well occupy the same niche as bull trout, only more successfully and perhaps more voraciously. The vision here isn’t just the preservation of a species—it’s the larger antipathy towards “invasive species” that’s become an orthodoxy of environmental science.

I do wonder about that attitude a bit, not just in the context of fishing, but as a whole. When I read some of the material on the dangers of invasive species, its rhetoric and tropes sometimes seem uncannily familiar, reminding me very much of ideas about race, miscegenation and nativism in modern colonialism, in post-colonial nationalism, and in identity politics. There are the same desires to stop the forward motion of change, to fix environments (human or natural) in their tracks, the same suspicion of dynamism. What is particularly striking to me is that the arguments against “invasive species” even from scientists sometimes seem not so much technical or scientific (when they are, they usually rest on the relatively weak assertion that there is a burning necessity for general biodiversity that trumps all other possible principles of ecological stewardship) but mostly aesthetic.

There are practical concerns posed by some invasive species, to be sure. Nobody wants zebra mussels in their waterways. More importantly, I readily agree that an introduced species which might appear harmless or inoffensive can have unpredictable effects on an ecosystem. It’s a classic source of emergent change. But what’s interesting to me is that the strongest general attacks on all invasive species frequently concede that it’s impossible in general to predict the full long-term consequences of a species introduction, and indeed in many ways impossible to predict or manage the long-term arc of change in any ecosystem even assuming that all introductions of new species could be prevented. I wonder, then, why there is such certainty about the horror of any and all species introductions.

Breeding populations of animals do move around even without human assistance, after all. This is a basic part of the natural history of life. It’s the pace and scale of the phenomenon that have changed, and that obviously has serious implications. But when ecosystems actually endure what we’re told is a fatal threat, one has to wonder whether we don’t need to be more discriminating and dispassionate about the phenomenon. Say, for example, the way that some places are trying to fight purple loosestrife and predicting environmental disaster if it gets established—but purple loosestrife has been around on the East Coast since the early 19th Century, with some complicated but hardly apocalyptic consequences.

I wonder then for the same reason whether it is really so terrible if rainbow trout displace bull trout in waters that support trout populations. There are consequences to that—loss of genetic resources of the bull trout population, possibly pressure on prey populations due to the more voracious appetites of the rainbow trout, and loss of the unique “character” that bull trout provide, whatever that might be—but the intrinsic, instinctive horror at the idea of a “native” species displaced by a very similar “non-native” one seems to me to come largely from the same place that modern ideas about race, identity and nationality in human beings have come from, somewhere deep in the cultural and ideological foundations of modernity and not from a cleanly rational scientific principle. There’s a rich potential intellectual history lurking in there somewhere—in fact, I strongly suspect that it’s already been written, and I’m simply not aware of it.


September 8, 2004

I’ve joined the group blog Terra Nova, which is a really exciting opportunity. First up, I’ve linked to the essay I mentioned here a few weeks back on conceptions of the state in virtual worlds; very soon, I’m going to take a shot at beating the dead horse of the ludology vs. narratology debate.

September 8, 2004

Goody Two-Shoes, or the Composition of Toughness

There are a lot of things I don’t like about the Bush Administration, as anyone who reads this blog knows. The keystone of my complaint is that they’re incompetent, that they are screwing up their management of America’s global role at a time when incompetence has a uniquely high price. The question of whether the war in Iraq is morally right is a secondary or tertiary one for me. It so happens in this case that I think effectiveness in the war on terrorist groups and particular forms of militant Islamicism is aligned with what most people would regard as moral or ethical principle. That’s because I think such a war has to be won with a combination of cultural understanding, careful demonstration of the authentic attractiveness of modernity and liberalism, reasoned diplomatic persuasion, economic incentive, and military force. I don’t think it can be won simply with military force alone.

In fact, one reason so many people are reduced to sadness and horror by the events in Beslan is because Russia has already done what the “flatten Najaf” brigade has wanted to do in Iraq. Russia was victimized by Chechen banditry and terrorism, so Russia invaded Chechnya and pretty well wiped most of its population centers off the face of the map with heavy bombardment, followed by occupation. That doesn’t seem to have stopped horrific acts of terrorism by Chechens against innocent Russians.

It’s hard to know what would stop such attacks. Not territorial concessions—Chechnya was effectively autonomous before the Russian invasion. Not negotiation: there’s no responsible, authoritative polity left to negotiate with. Not strong internal security and defense by the Russians: their nation is huge and almost necessarily porous, and their economic and material capacity to mount such a security regime is lacking in any event.

There have been many organized groups that have practiced something similar to what we now call terrorism in the past whose names and causes are today nothing but a historical memory.

Some examples:

Early 20th Century anarchism in the US
Various varieties of anarchism and nihilism in pre-1917 Russia
Isolated cases of actions by the African National Congress
The Weather Underground

So how was terrorism in these cases defeated? In some cases, it was not defeated, but instead was subsumed into a victorious cause. The Bolsheviks were not major practitioners of “terrorism” in the overall context of pre-Revolutionary Russia, but post-1917 Soviet historiography was happy to claim such activities as part of the overall history of revolutionary action.

In other cases, an ultimately successful revolutionary or political movement may have dabbled in terrorism, as the ANC did, only to pull back from a few tentative forays in that direction due to intense negative reaction within and outside the movement. For a nationalist movement that seeks or relies on international political legitimacy, terrorism may be too costly.

In some cases, movements practicing terrorism were defeated because they were small, marginal organizations with minimal popular appeal and therefore had limited ability to replace members lost to arrest, conflict with opponents, or other forms of attrition. The Weather Underground, for example, was really nothing more than a few fairly stupid middle-class white kids. That’s no consolation to the people they killed or hurt, but they were never going to be able to accomplish much precisely because they turned to terrorism. Early 20th Century anarchism in the US, while a larger and more sustained movement, was roughly the same.

There are a few cases of groups that practiced terrorism being effectively contained through drastic military action, but these were often followed by political concessions to the causes or interests being pursued initially by the terrorists—say, for example, the British response to the Mau Mau uprising in late colonial Kenya.

The problem in part is that the range of variation in the history of defeated terrorist groups is fairly wide. Boil it down to:

a) terrorist movement wins some or all of the political goals it seeks and stops practicing terrorism, often because it has gained control of the state and society it was attacking
b) terrorist movement stops practicing terrorism because it judges it can accomplish its goals more effectively in some manner and because the cost of terrorism to its interests is too high
c) terrorist movement is held off or contained by security forces and becomes irrelevant or marginal as its members are killed, voluntarily decide to give up being terrorists, or are regarded with such loathing by the rest of their society that they have no source of support
d) terrorist movement defeated conclusively through drastic military action and repression, often followed by political concession to the underlying causes or interests behind terrorist actions.

I think that covers a lot of existing cases, if not all of them. Let’s start from the premise that groups like al-Qaeda cannot be defeated with the first option: that there is nothing which can be conceded to them that would satisfy them, and such concessions are profoundly unacceptable on moral and pragmatic grounds anyway.

Let’s move to the second option. Also not in the cards at the moment: because there can be no major concessions to militant Islamicism, it has nothing to gain by being “respectable” with the West, and its respectability at the moment within the Islamic world is not altered by terrorist activities. This has the potential to change over time, however. This is one of the more sensible propositions underlying the neoconservative argument about Iraq: if a stable, democratic and capitalist Islamic nation appeared and life there was better than in either fundamentalist states like Iran or corrupt autocracies like Egypt, then there would be a real incentive in the Arab world to reject anti-modern movements like al-Qaeda. It’s also possible that a moral consensus within Islamic practice against terrorism (which definitely violates some Qur’anic exhortations) might grow over time, or in some other way the perceived rewards or needs for terrorist action within the Islamic world would weaken dramatically. (For example, if the US were able to broker a stable settlement of the Israel-Palestine conflict.)

Third and fourth options. Can military action by itself succeed, either by practicing strong defensive security or strong offensive operations? No, I don’t think so, in part because movements like al-Qaeda are much larger in their membership and potential membership than historical fringe groups that were easily defeated by strong military or police action. The most important part of #3 is that groups are marginalized when very few people feel much desire to join their cause, either because there is too great a chance of suffering death, injury or imprisonment or because the other side, the targets of the terrorists, seems more moral, more attractive, more admirable, more desirable. This is where the tactics that anti-terrorists use matter: if they are consistently ethical and restrained, they may eventually make a particular terrorist group into moral pariahs, or expose terrorists as the source of suffering even in their home communities, by making the terrorists seem far worse than their opponents. But if anti-terrorist forces reduce the perceived distance between themselves and terrorists, then the terrorists have permanent wellsprings of support and materiel.

If you combine 2, 3 and 4, you could make a good justification for a combined operation that was resolute on defense, aggressive where possible in offensive terms, and which sought to neutralize the perceived rewards and appeal of terrorist action.

To bring this back to the Bush Administration, then. They may be trying to combine all these options, but they’re doing it extremely ineptly, especially in the case of Iraq, which is simply the wrong war to have fought. So why aren’t they paying a more severe price for this incompetence with the electorate? For the most part, ardent supporters of Bush don’t seem to me to strongly disagree with the observation that the Bush Administration’s performance in the war on terror has been poor to date. What they argue is that a Kerry Administration would be much worse.

I’ve been trying to think about that fact. I now think I know why some potentially reasonable people see it that way (leaving aside the pure hacks who would sing Bush’s praises regardless). The problem is, they may have a point.

Strong critics of George Bush, like myself, nevertheless need to give him limited credit for a few things in his foreign policy. Most crucially, I think a declared willingness to pursue unilateral solutions to key threats and a skepticism about existing multilateral institutions, particularly the UN, is important. I also think that a consistent emphasis that the major principle guiding US interests abroad should be the defense of liberty rather than a realpolitik advancement of national security is important, even if the Bush Administration doesn’t itself consistently follow that principle. This all still means that exclusive, aggressive, xenophobic unilateralism is foolish, but the basic shift is a good one.

More to the point, the Bush Administration has established itself as being willing to be publicly or openly ruthless, to make a certain kind of toughness a matter of policy rather than the secret or shadow face of foreign policy. I support American forces killing or capturing al-Qaeda leaders wherever and whenever they can, even if that involves using Special Forces or cruise missiles within the territory of other nations who have not assented to those operations. I support the general proposition that the highest matter of principle in US foreign policy should not be a respect for sovereignty, but a defense of national and global liberty. Discretion and good judgment are still important, but the use of US military and economic power wherever and whenever it produces good results is critically important.

If some people feel uneasy about Kerry, it may be because they feel that Kerry’s perspective on international affairs will be governed more by the need to be virtuous than to be effective. I don’t think this is a fair reading of Kerry or his team, but it is a fair reading of one major lineage of anti-war sentiment. I think it is important for us to act ethically but not just because that’s the right thing to do—I also think it’s the effective thing to do. This is to some extent the accident of this particular struggle. If the war we are now engaged in were a conventional war between two armies battling for the control of territory, and the opportunity to gain an important strategic victory through the use of heavy bombardment even at the cost of civilian lives and property destruction presented itself, I’d say that you go ahead and take the opportunity. That is not what this war is about; that is not the nature of this particular conflict.

You don’t bring a knife to a gun fight, and you don’t act like a clumsy occupier or New Crusader if what you really need to do is marginalize and contain terrorist groups in Islamic societies. But if the necessary approach happens to also look like the most conventionally moral one, then that’s just a fortunate coincidence. In this instance, Vietnam is less the appropriate historical sounding board than Hiroshima. (Not, I hasten to note, because the use of nuclear weapons is advisable in the here and now, merely because of the moral questions that Hiroshima raises about how to conduct warfare.) Hiroshima may not have been the right thing to do, but it was probably the necessary thing to do, or to put it differently, one kind of moral principle trumped another in that decision. Not so absolutely that we can be sure, even now, which was which: it remains, legitimately, a case to debate. But I know how I would want that equation solved myself, and should a similarly tough decision present itself, I know which way I want the painful calculus to go.

At least some critics of the war are more concerned with the promotion of national (or international) virtue, and from collective virtue, their own personal virtue. At least some critics of the war worry more about whether they’re personally good people than about what is good for the United States and the world. The more that Kerry appears to represent that approach, the more that those who believe our government must do what is necessary in war will feel uneasy or be unable to support him, regardless of the demonstrated incompetence of the Bush Administration in the actual conduct of post-9/11 world affairs.

That’s what the subtext of the absurd battle over who was more manly in 1970 is about: not just who can do the right thing, but the necessary thing. If Kerry can’t convince more people that he is ready to do the necessary thing with the hope that it turns out to be the right thing as well, he may lose.


September 3, 2004

A Visual Representation of the Approximate Emotional Sensation That The Speeches at the Republican Convention Have Created in Me


(John Steuart Curry’s “Tragic Prelude”)


September 1, 2004

Population Wrong

There are two ideas so firmly embedded in the minds of the students I teach that I have long since given up any hope of disembedding them through straightforward processes of education. One is the idea that Africa is a country, not a continent. There I’m not sure I even aspire to disabuse them of that idea in any simple fashion any longer. I’m certainly culpable of reinforcing that idea in many ways: I teach African history, my subject matter is Africa, I talk about the future of Africa in some of my writing. If I really wanted to fight the idea, I (and my Africanist colleagues) would have to totally reorganize the basis of my own expertise. I’m also no longer entirely clear on whether or how this mistaken notion is harmful. I can think of some things about it that are really problematic, and connected to deeper conceptual problems (say, failing to see the historical particularity of genocide in Rwanda or conflict in Sierra Leone) but there are positive things that the unity of the African subject or the “idea of Africa” actually makes possible.

The other embedded concept is a different matter. I’d love to be rid of it, not only because it is simply factually wrong but also because it seems to me to have a great many bad effects on the way people think about the world and its future. This is the idea that the future of global society is gravely threatened by a population explosion.

Social scientists have known for some time now that this simply isn’t true, that virtually all of the projections of runaway demographic growth made in the 1960s and 1970s have turned out to be profoundly wrong in almost every respect—not just in the population numbers they predicted, but in their understanding of the underlying nature of population growth in world history. Despite that, I find that most of my students (and frankly most people I know, including some academics) still believe in the imminent threat of the "population bomb" roughly the way that they believe the sun will rise tomorrow morning in the east, as an unshakeable fact of life.

You could be generous and say that the population-bomb bunch was wrong based on an understandable error in their reading of the probable curve of advances in clinical medicine in the latter half of the 20th Century. Most of them figured that scourges like malaria would be defeated by now, leading to a major continuing reduction in mortality in the developing world. Instead, malaria and many other diseases have become even more deadly, and now HIV-AIDS has joined that list. But even here, the demographers who made a living out of alarmist futurism in the 1970s and 1980s weren’t paying much attention to detailed work by demographic historians looking at the roots of global population growth. One thing those historians found was that whatever had led to the initial major spikes in human population in particular localities, it certainly wasn’t changes in clinical medicine and concomitant reduction of mortality, that the transformation of clinical medicine always came well after the major upward spike, wherever and whenever you were studying. The effect of improving medical knowledge was mostly seen later in the extension of the upper bound of human lifespans.

What the population-growth alarmists did not understand because they were ideologically, theologically, fundamentally unable to understand was that population growth would slow as it has not because more people are dying of disease than expected but because of the spread of middle-class consumerist individualism on one hand and the spread of legally and socially guaranteed women’s rights and birth control options on the other. That’s right, two important consequences of liberalism. The former was particularly unanticipated by the major figures shilling for stern population control measures: that in most developed societies and now in many developing ones, people would start to live more for themselves and less for the next generation, and that this would be both a product of values (the spread of liberal ideas about the self, individuality, and material comfort) and a product of social change (the movement from agricultural societies based on lineage to urban societies based on the contractual rights of individual liberal subjects within capitalist societies). Nor did most of the ardent alarmists suspect that either of these changes could happen without the central controlling intervention of the state. Now, on the side of rights-enforcement for women (including the availability of birth control), governments have played a major role, and the population-growth people were right about that. You’ve got to have birth control available as an option, and you need governments to ensure that—but what’s interesting is that even there, it works best when it’s about the freedom of women to make their own choices and less well when it’s about the state dictating an ideal population size and using birth control as a means of enforcement. In the spread of bourgeois consumerism and individualism as both ideals and practice, states have been much less involved, and these may be the most important factors in the plummeting of population growth.

The population-explosion club was one part of a larger tendency that carried over the faith in centralization and statism that was characteristic of one lineage of high modernist thought and practice. I see that lineage today in scholars like Juliet Schor, whose work essentially proceeds from the position that most people don’t know what’s good for them, and that we would all be a lot better off if we consumed less, worked less, and lived lives that closely reflected Schor’s sense of what is good and valuable and meaningful--lives which turn out to be the usual kind of potted faux-gemeinschaft communalism that invariably pops up in these kinds of arguments. It's the same sensibility that infects Neil Postman’s work, an essentially mystical belief that we were all much happier when we lived in small lineage-based village communities and that we need big authoritarian states to force us back to that future. (The mirror of the same desire is the social or religious conservative impulse to restrict the rights of women and de-emphasize individual rights and materialist pleasures.)

For the population control fanatics, there has been nothing more irksome and unexpected than to see that the thing they thought most crucial (the rapid reduction of rates of population increase) largely did not require the authoritarian intervention of the state (China being the very complicated exception here) but instead has derived significantly from consumerism, individualism and arguably even selfishness.

It’s no sin to be wrong as a scholar. It happens. It’s hard to admit you were wrong, and some people do it poorly or gracelessly.

The King of Gracelessness, however, has to be Paul Ehrlich, Mr. Population Bomb himself. Reading his comments in the New York Times this past week is what spurred me to write about this issue. He barely seems able to admit even the basic facts of the matter, and clearly still clings to a vision of authoritarian interventions in human demographics. Now that there isn’t going to be a population bomb, he’s decided instead that the issue is that there are too many people already, and worse yet, the wrong kind of people—the people who want to drive SUVs rather than be “vegetarian saints”. It’s the same idea that’s been percolating in certain environmentalist circles for two decades now, that crops up in Schor’s work, and so on—that the future can be saved only if we find a way to get rid of a lot of the people who are presently alive and somehow compel the ones who remain to behave differently, desire differently, live differently, think differently. At the bare minimum, the story of global population in the last fifty years demonstrates amply that the world doesn’t need those kinds of changes accomplished in those kinds of ways to achieve the positive good of slowed rates of population increase. We’ve seen that the central cherished goal of the Paul Ehrlichs of the world was accomplished through precisely the kinds of mechanisms and phenomena that they despise, not through everyone becoming vegetarian saints and living under a one-child law but through the desire to live well and enjoy living today for our own personal and individual satisfaction.


August 31, 2004

Dear John

I’m sorry to have to write this letter.

You've been honest, passionate, and reasonably fair-minded. You've seemed to care about what’s true and not true. You've stood for the intellectual and ethical side of modern Republicanism that I’ve come to grudgingly respect, a mix of concern for the moral content of American life, wariness about the size and power of the state, and strength abroad where strength is needed.

I would have voted for you for President. We’ve been good for each other. But now is the time for all good men to come to the aid of their country, and for all good Republicans to turn their backs on their party's leadership. You could have begged off, sat on the sidelines, agreed to mumble a few bland compliments when asked directly. But you’ve decided to be George Bush’s ethics beard. You cannot honor the dishonorable in the name of loyalty. You cannot wallow in the bilges and hope to remain untainted.

I’m writing not just to you but to all conservatives and to all Republicans of good faith. I’ve discovered in the past ten years the importance and value of the intellectual and political heritage of modern conservatism. I’ve found that the works of Edmund Burke, Friedrich Hayek, and many other authors promoted by American conservatives contain much more complexity and richness than I once thought. I’ve come to find some of the ideas and politics of libertarianism attractive. Like Rick Perlstein, I now understand the genuine popular rootedness and authenticity of many strains of conservative thought and politics as they have developed in America since Barry Goldwater’s presidential run. I’ve come to admire a number of Republicans and conservatives in political life, in the public sphere, in academia. I recognize the validity of many conservative critiques of the American left as it took shape between 1960 and 2000, to the point that I would say that whatever my political identity is, it no longer resides with “the left”.

But it’s almost over between us, John. It’s over between me and anyone who stands with Bush. I’ve got no polite words, no patient rationalism, no toleration left in me anymore. If you go with that man, if you defend him, it’s over between us. I won’t vote for you in 2008 so you can preside over the steaming ruins that two Bush Administrations will leave behind. Don’t think you can pretend to be faithful to truth and competency and then slink secretly into the voting booth and pull the lever for Bush. I’ll know it when I look at the voting tallies the morning after election day, whether the conservatives have betrayed their ideas. I won’t take you back. And don’t throw it in my face that I’m in bed with others, not really a conservative at all. That’s absolutely true. I'm not. I’m shameless and loose. I’ll go with anyone who cares about America and the world, anyone who cares about ideas and evidence, who cares about honesty and who cares about honor, who cares about competency and who cares, really cares, about effectively spreading freedom and justice throughout the world.

Look at the man you’re fronting for, John, look all you supposed conservatives and honorable Republicans, look at him and the people around him. Let me count his sins, each and every one of them sufficient reason alone to turn your back on Bush.

1. Bush is incompetent. You’ve enjoyed throwing Jimmy Carter’s supposed incompetence in everyone’s face for years. Well, the Bush Administration has scaled new heights on that mountain. Almost everything that could have been mishandled about Iraq has been bungled: the run-up, the intelligence, the occupation, the whole damn idea in the first place. This Administration threw out a multi-million dollar study that would have been an effective aid in the occupation of Iraq just because of internecine administrative struggles. This Administration placed its faith in Ahmed Chalabi, a corrupt would-be kleptocrat who probably also spied for Iran. Iraq is only the beginning of the story of failure in the last three years: it goes on and on and on. We’re in a struggle for our lives and values, John, and these guys are losing. Losing big. Abraham Lincoln knew when to fire his generals, because for him winning the Civil War was what counted. George Bush just stands by his men no matter how disastrous their advice or actions.

2. Bush is the biggest threat to core American principles of liberty in a century. Just look at yesterday’s news if you don’t believe me, John. The President acknowledged that the war on terror is unwinnable in the conventional sense. This is the same President who has claimed extraordinary, extra-Constitutional emergency powers on the grounds that he is a “War President”. Put the two together and what do you get? It’s the nightmare scenario in a constitutional democracy, John, the “permanent emergency”.

3. Bush and the Republican leadership are the most filthily dishonorable and hypocritical politicians of the last 25 years. Don’t forget the outrageous pork barrel betrayals of core Republican convictions like the steel tariffs early in the Administration, but just the last month alone is enough to qualify for first prize in the Goebbels Sweepstakes. There’s the Swift Boat Liars and then there’s the Speaker of the House idly speculating yesterday that George Soros traffics in illegal drugs. The Speaker of the House said that, John. Not B-1 Bob Dornan or some other fool. They’re destroying the political institutions of this country. They’re the New Scum, John, willing to do anything, say anything, smash anything to gain power. They’re not worthy of your loyalty.

4. Bush and his men have no respect for evidence, for truth, for honest process. They don’t look at all the information in front of them, they don’t take advantage of the enormous resources available to them, they don’t believe in considered debate before difficult decisions. They don’t believe in complexity in a complex world, they don’t believe in fairness. They’re not interested in the world as it is, and if you’re not interested in the world as it is, you can’t possibly be interested in remaking the world into what it could be. It used to be that American conservatives mocked liberals who looked at the Soviet Union or Yugoslavia or Nicaragua and saw what they wanted to see rather than what is. Well, what about all the people who look at Iraq and see a fantasyland of their own imaginings, who look at the world and see what they want to see? The worm has turned, John, the shoe’s on the other foot now.

5. Bush and his men are the most fiscally irresponsible administration in the past 25 years. They’re destroying the future as they spend like drunken sailors on liberty in a whorehouse.

6. They’re pushing a moral crusade where it ought not, cannot be allowed to go. Hear me out on this one, John. I know how you feel about these issues—it’s always been the thing I liked least about you. But in every relationship, you have to overlook a fault or two. I know you’re a reasonable person, and I know you can see why it’s wrong to move from supporting a particular idea or belief as moral to making that belief the law of the land. That’s a move that we should only make with great reluctance and care, because it makes government the enforcer of morality, when morality has to come from within, from the private convictions of Americans. I can at least listen—though I will almost certainly permanently disagree—to someone who wants to argue that abortion is one place where that move has to be made. I can’t even tolerate someone who wants to make that move everywhere, with everything, as a matter of course, who proposes constitutional amendments out of pure cynical politics, or, worse yet, out of a fervent desire to bend the law of the land to their private religious beliefs. That way lies theocracy, that way lies the betrayal of everything that America has been and could be.

Don’t tell me you shouldn’t change horses mid-stream, or as the joke going around puts it, horsemen mid-apocalypse.

Loyalty in defense of vice is no virtue.

Don’t send me flowers after it’s all over, John—you or anyone else who wants my respect, who wants to be regarded fairly, who wants an honest exchange of views, who wants to be partners in a truly democratic society. We can be good together, John. There can be an America where people of different convictions and beliefs all join together respectfully within a democratic society to find the way forward towards our common values. Don’t expect me to give you a fair shake again if you can’t do the right thing now. This is your last chance: you get away from that man. He’s no good for you, he’s no good for me, he’s no good for us all. He’s ruining everything. He’ll rob you blind and treat you wrong. I know his type.


August 30, 2004

An F for E-rater

Why do we want K-12 students to learn to write? Why do we test them more and more to find out if they can?

Perhaps it’s just a simple matter of minimal adult competency in a society where writing is a crucial part of adult life. We want people to be able to read and write memos, emails, business letters. We don’t care much about the content of what most people write, we just want the bare minimum mastery of grammar, spelling and basic sentence construction to ensure that all adults who have finished high school are at least at the ground floor when it comes to writing and reading. If that’s all we want, then E-rater, an expert AI system that can grade essays for standardized tests, is just fine.

Grading for grammar, basic construction and spelling, after all, is one of the most mind-numbing exercises a teacher can face. In one sense, it’s easy; in another sense, it’s painfully difficult. Anyone who catches my grammatical errors in these blog entries (there are usually about two per entry, mostly agreement errors resulting from hurried cut-and-paste edits) can see how little I enjoy doing it myself. If I were being asked to evaluate a pile of 100 essays on these measures of assessment alone, I’d gladly turn to an automated system to do it for me and quickly double-check its work later.

Is that really all we want for our children? Is that really the major reason why we hammer kids with five zillion standardized tests before they finish high school? If we believe that writing is the primary way that an educated democratic citizenry expresses its views, if we think writing isn’t so much the issue as writing persuasively is, if we value the ability of adults to communicate effectively with each other through writing as opposed to merely meeting the minimal standard of literacy, then E-rater or anything like it is a complete disaster in the making.

In this respect, the silly mistakes that E-rater or anything like it makes, as described in the Inquirer article, are relatively trivial, though hardly insignificant. It’s been known for a while that automated essay correction (including MS Word’s grammar checker) runs into serious trouble when it comes up against unusual or innovative writing. There have already been cases of K-12 students getting low automated assessments on essays that any human reader would recognize as unusually skilled or innovative.

The flip side of that is, as the article notes, the systems are pretty easy to spoof or fool if you know how they work. Since standardized testing is already largely a case of testing how well a student knows how to take a test rather than testing for real and useful competencies, this will only aggravate the problem. If you don’t think students will find out how to spoof automated essay graders in this way, then you’ve never played a multiplayer computer game. If there’s an algorithm with weak knees, count on people smacking it across the digital kneecaps with a crowbar.
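To make the spoofing point concrete, here is a deliberately naive toy scorer, a hypothetical sketch only: the real E-rater model is proprietary and more sophisticated than this. It rewards the sorts of surface features automated graders are known to weight—length, vocabulary variety, transition words—and a student who knows the weights can pad a weak essay into a high score.

```python
# A toy "surface features" essay scorer (purely illustrative, not E-rater):
# it rewards length, vocabulary variety, and transition words, none of
# which have anything to do with whether an essay is persuasive.

TRANSITIONS = {"however", "moreover", "therefore", "furthermore", "consequently"}

def naive_score(essay: str) -> float:
    words = essay.lower().split()
    if not words:
        return 0.0
    length_score = min(len(words) / 300, 1.0)        # longer looks better
    variety_score = len(set(words)) / len(words)     # varied vocabulary looks better
    transition_score = min(
        sum(w.strip(".,") in TRANSITIONS for w in words) / 5, 1.0
    )
    # Weighted sum on a 0-10 scale; the weights are arbitrary.
    return round(10 * (0.5 * length_score
                       + 0.3 * variety_score
                       + 0.2 * transition_score), 2)

# A student who knows the weights can inflate a weak essay with padding:
weak = "School is good. I like school."
padded = (weak
          + " Moreover, therefore, furthermore, consequently, however, "
          + " ".join(f"filler{i}" for i in range(300)))
assert naive_score(padded) > naive_score(weak)
```

The padded essay says nothing more than the weak one, but the scorer can’t tell; that is the crowbar hitting the algorithm’s kneecaps.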

The real problem is that an automated system can never judge what is persuasive, at least not until such time as AIs are sentient. Talking optimistically about Deep Blue in this context is silly. Deep Blue still doesn’t understand the game of chess—it just so happens that chess, being what it is, is a game that can be mastered through brute force calculation. It is not a matter of persuasive writing simply being a much more complicated instance of the same thing. It is not and can never be: persuasion is an ethic, a philosophy, a choice, as well as a psychological affect that is by necessity a moving target—the rhetorical turns and argumentative structures that persuade a particular reading audience today may not persuade them tomorrow. You have to believe in the value, the meaning, the utility and the ethical centrality of reasoned persuasion and communicative action in order to value it in a student’s writing. There is no algorithm for that, not yet and maybe not ever.


August 30, 2004

A Tale of Two Victims

The African author Ngugi wa Thiong’o was recently severely assaulted and his wife Njeeri raped when they returned to Kenya for a lengthy tour. Ngugi had been living in exile for many years after being imprisoned for his opposition to Daniel arap Moi’s government. With Moi out of office, the time seemed ripe for his return to his native land.

There has been a lot of anger, shock and sadness both in Kenya and abroad about the attack. The African Literature Association (ALA) just issued a statement of support and sympathy for Ngugi and his family. But it’s a statement that I think accidentally reveals some gaps in Africanist (i.e., Western scholars with an interest in Africa) understanding of this horrible event.

The ALA’s statement mostly confines itself to an expression of support and solidarity with the victims. It adds, entirely properly and importantly, that it is crucial to rally around the freedom of expression of all writers everywhere, and to defeat the use of rape as a weapon against women.

There’s more that ought to be said, though, and it’s not said in part because Africanists in the humanities (including history and cultural anthropology) haven’t really known how to say it. In the Kenyan and East African press, there have been two parallel stories unfolding since the attack. The first is speculation that the attack was “political” or retaliatory in some respect, possibly staged by people with shadowy ties to the former (or present) regime. A family member of Ngugi’s was charged on August 27th with involvement in the attack, so there’s very possibly something complicated going on here. Ngugi himself has made statements that he sees the attack as a deliberate attempt to humiliate and intimidate him as a writer rather than a simple act of violent theft. On the other hand, some observers, including the police, continue to report that the attack seems to have been one of the many more ordinary assaults and thefts that occur in Nairobi regularly: three of the men charged so far are guards at the secured apartment building that Ngugi was living in, which is something of a familiar story in the annals of robbery in urban Africa in the past ten years.

The second story that has run through the East African press is a meditation on criminal violence in general, and the ways it has transformed life in most of urban Africa since Ngugi began his exile.

The ALA’s defense of the freedom of expression nods towards the first of these two stories. Even here, though, it’s worth noting the miles traveled between Ngugi’s detention decades ago by a repressive state and this brutal assault, because it encompasses a kind of vicious decomposition of oppressive power in Africa, a mutation of centralized authoritarianism into something that is hard to distinguish from organized crime. What used to be the almost-hidden subtext of the postcolonial African state has become increasingly blatant even as that state has lost its ability to wield more conventional forms of centralized repression.

In almost every African nation, there have been stories of conspiratorial assassinations and brutalizations since independence, or even before independence within liberation movements and nationalist parties. The leadership of Robert Mugabe’s ZANU-PF telegraphed the moral character of its postcolonial rule through almost certain involvement in the assassination of several leading figures within the party as a result of internal struggles. Given the rate of highway fatalities in many African nations, most car accidents are surely just car accidents, but it’s natural to suspect in many cases when a politician or dissident dies in such an accident that it was anything but accidental.

Somehow that has flip-flopped from being the shadow face of the postcolonial state to being its major manifestation in African societies. Many Africanists have written about this fact: we’ve coined a host of terms: the vampire state, the gangster state; we’ve charted the social networks through which the dispersed official capacity for violence flows in increasingly decentralized fashion. Now it gets harder and harder to interpret what the intention or meaning of violence connected to elites or officials might be, perhaps because they themselves no longer know: violence merely proves the ability to mobilize violence, and the networks that can mobilize violence now operate largely to reproduce themselves rather than the abstract idea of “the state”. The Kenyan government, after all, is no longer Daniel arap Moi’s; it is now Ngugi’s ally, not his enemy. Wherever this attack came from, it did not come from the people who sit at the center of Kenyan national power—but maybe sitting at the center of national power is no longer a particularly meaningful relation to power.

Certainly it is not if we follow the old idea that the state is defined by its monopoly on legitimate violence. Which is why it is hard to sort out whether the main story of this attack is of the ordinary, careless violence and sadism of organized criminals looking to steal laptops and other valuable possessions—a story that permeates Nairobi and Johannesburg and Lagos like the air and soil—or a story of something more. Because it is entirely possible that it is both, that violent power in many African states has metamorphosed into a part of the social ecology. Like lightning, it may be drawn towards anything and all things which may ground it: valuable possessions, a dissident author, a woman to be raped. Metal at the high point, waiting to be struck. And it is now as hard to fight as stormclouds, as futile to hold back as the waves were for Canute. A nation of victims looks on helplessly: the criminals are nowhere and everywhere.

Africanist scholars readily acknowledge the presence and importance of quasi-official criminal networks in postcolonial states, both informally in conversations and formally in our scholarship. Cars get stolen in South Africa or Kenya all the time, and most of us know what the citizens of those nations know, the traffic in those cars generally draws in elements of the police or other official figures. We know that the line between gangs that assault people on the streets and corrupt petty officials or soldiers who extort bribes is very thin, even sometimes non-existent.

But all this really wasn’t on our political agenda in the formative years of Africanist scholarship, and the ALA statement suggests that it still really isn’t. We’re comfortable attacking rape as a crime against women. We’re comfortable calling for the freedom of expression. But we weren’t ready to think politically, prescriptively, about crime. The typical indebtedness of Africanists to the nationalist problematic, to the socialist and Marxian left, or to both made that a very difficult thing to do. We didn’t talk much about the issue of crime in 1960 or 1970. South African academics studied the history of criminality extensively in the 1980s, but politically almost no one was looking ahead to or anticipating the problem of criminality as one of the major challenges of the post-apartheid era.

Even Ngugi himself has gone out of his way to restore the usual nationalist framing of such an event after being attacked: he insisted that it says nothing about the Kenyan people, the Kenyan nation. It’s just a few bad apples, just thugs.

Indeed so, that's true. The vast majority of ordinary Kenyans are the victims of crime, not its perpetrators. And yet also not, any more than American military misbehavior at Abu Ghraib is just about some bad apples. It’s about institutions on one hand and about everyday practice on the other. It’s about both the state and civil society in postcolonial Africa, about reaping the bitter and horrible fruit of years of looking the other way or speaking instrumentalist cant when the state claimed for itself unrestrained power. If South Africa today is awash in guns wielded by men who recognize no moral constraints, that has a lot to do with the vicious immorality of the apartheid state, and rather less—but still something—to do with a liberation movement that tortured its own fighters in training camps, looked the other way when tsotsis claimed to be comrades, and rewarded corrupt hacks with ministerial positions after the end of apartheid. If Nairobi is crumbling from neglect and criminal predation, that has a lot to do with a government that existed to enrich itself and rather less—but still something—to do with a widely distributed nationalist sensibility that saw competency and moral rectitude of officials and ordinary citizens alike as the least of its concerns after independence and which continues on occasion to rally around a defensive conception of patriotism that is less about pride in one's birthplace and more about hiding the dirty laundry.

Charles Onyango-Obbo, writing in The East African, says that the attack on Ngugi “started in our libraries”. What he is pointing to is not the reading of books, but the fact that national and academic libraries in many postcolonial African states have lost most of their collections to theft in the past three decades. When books are not outright stolen, notes Onyango-Obbo, they’re often missing pages. The link may seem a tenuous one, but I know what Onyango-Obbo means. It’s the thing Africanists were least prepared to think about and address, the possibility that civic virtue is a matter of everyday practice, and the society where that goes neglected at the top may find that some of the rot comes from below.


August 27, 2004

Smitty's Sodium-and-Water Fun-Time

PZ Myers passes on a story about an undergraduate who drank liquid nitrogen. This, it turns out, is a very bad idea. (No surprise.)

The story made me think a bit about my intermediate school science teacher, “Smitty”. He was an ex-military guy who acted the part, with us as his boot camp privates and himself as a drill sergeant who was drawn from one-third “Beetle Bailey”, one-third “Gomer Pyle” and one-third “Full Metal Jacket”. Most of the time he seemed to be hamming it up, but every once in a while, it seemed to trip over into real (and rather scary) anger over some screwed-up experiment.

The reason I think of him in this context is that once a year he would order a large block of sodium, perhaps about as big as a small brick, maybe 80-100 grams or so (my memory may be exaggerating, but not by much) and then, wearing goggles and gloves, drop it with tongs into a tall bucket full of water and run.

This seemed very cool when I was in the 7th grade. The ceiling in his classroom had extensive pitting from the resulting annual explosions and caustic spray. (Strictly speaking, sodium and water produce sodium hydroxide, which is lye, not acid, along with the hydrogen gas that does the exploding, but we all called the residue “acid” anyway. This is also why I’m sure the amount of sodium was considerably bigger than in that animated .gif I’ve linked to: Smitty’s demonstration was violent enough to spew the results straight up about ten feet.) We all huddled in the back of the room watching the reaction and the acrid fumes with wide eyes and a certain degree of nervousness and then afterwards we helped to clean up the residue by “skating” on the floor on top of rags that he provided. (He picked up the rags himself with his tongs and discarded them.)

I recall hearing some years later that the local fire captain got wind of this and angrily ordered it stopped. When I think on it myself, my feeling is one-half “My god, it’s about time, are you insane for letting that man do that?” and one-half “Come on, nobody ever got hurt and it was interesting, as well as a pretty good demonstration that some adults can be as crazy as bedbugs.” Even as kids, we knew this was not entirely wise, a feeling that I’ve had confirmed by a bit of websurfing this afternoon.

My mixed emotions about this memory echo some other ambivalences. The perennial debate about dodgeball, for example. I find myself pretty sympathetic to the impulse to ban it, having been slammed in the head continuously from 4th to 6th grade. As far as I was concerned, the entire structure of the game was about isolating and humiliating the students who were low in the popularity hierarchy: there was nothing fun or athletic about it. On the other hand, maybe that’s the usefulness of it.

Those hierarchies are going to exist anyway: dodgeball left me with no questions about where I stood with the popular kids. It’s not as if getting rid of dodgeball would have gotten rid of being called “scientific martian” (my personal favorite insult directed at me) or being pushed into metal fences or having my head shoved into the sand or any of the other fun recess activities I recall so fondly. Nor do I have any especial warmth for the pro-social gobbledygook that the professional educators who crusade against dodgeball offer, as if all sports and games for kids should be non-competitive, esteem-building and so on.

I think about this with my 3 year-old daughter starting preschool next week, too. We threw her in the deep end of the pool today, so to speak, by leaving her alone for a “trial hour” at the school. She did great, by all accounts. We’ve already seen that she’s going to run into disappointments, as all kids do—she wanted to play “superhero” with the kids the first time she was at the school playground (we stayed to observe) and one of the other kids told her that was stupid. Considering that my daughter yesterday told my wife and me to sit down in a group after playing hide-and-seek with her so that she could retrospectively analyze and critique our hiding choices (I kid you not: this is almost exactly the language she used), I don’t think this is the last time some little punk is going to tell her she’s stupid. If it happens enough, I know I’ll have a hard time just letting it happen, regardless of whether you learn important social skills by being forced to co-exist with pinheads.

With the schooling of my younger brothers, my mom eventually became a more classically interventionist parent of the kind that is common today, after pretty well letting me negotiate my perils on my own. I don’t know if that benefited them. I don’t know that it was better or worse that my 7th grade science teacher was able to casually carry out a nutty experiment that plausibly could have led to serious injury for himself or his students, or that it’s a good thing that today probably no intermediate school science teacher in the country could get away with doing the same. I don’t know that it’s better or worse to play dodgeball. I know that things have changed, but I don’t quite know how to add it all up.


August 24, 2004

If Wishes Were Fishes, I’d Have Fishbones Stuck In My Throat

I wish that all the disgruntled leftists who want a muscular and purified third political party that was authentically radical, progressive or left could not only have their wish but have still more. I wish that the Democratic Party would agree to provisionally step aside on behalf of this new party for one major campaign season, so that the new True Left Party could run its candidates for President and all Congressional seats against Republicans. I wish furthermore in this miraculous political season that this same party could have total authority over all news broadcasts and major cultural outlets for a period of one year preceding the election. Just to eliminate the usual carping about the mass media. Just so we could see what would happen, just so the ensuing political disaster might actually buy us some peace from the pseudo-Naderite fringes.

I wish that the rigorous new standard for truthfulness and empirical rigor proposed by some conservative sages (because we all know the right wing has philosophical chops that all other political factions are lacking) be generally accepted. The new standard, best enunciated by the famous Instapundit, is that if someone makes one evidentiary claim that is true, no matter how trivial, then everything they say is either true or worth taking seriously. Kerry was in Cambodia in February, not December! Hence all other claims about his service are important and credible as well. So under the new truth regime favored by Instapundit et al., if I say that George Bush used cocaine, murdered ten rivals personally, runs a prostitution ring out of the Oval Office, receives unmarked cash in briefcases from the North Korean government, and has a vice-president named Richard Cheney, everything I say is worth taking seriously. Because one of those things is true.

I wish that all the disgruntled armchair military strategists could get their wish that the United States level Najaf and Fallujah. Utterly wipe them clean with saturation bombing, killing every male over the age of 14 who tries to leave the cities. Just so we could see what would happen next. Because otherwise, I know I’m going to be hearing from these guys forty years from now: they’re going to be sitting in little booths behind some yet-to-be-built memorial in Washington DC talking about how the politicians wouldn’t let them win the war. Pointing out to them that this war, whether you’re a supporter or a critic, was always more about political objectives than military ones, that the military objectives were the easy part, doesn’t seem to do the trick.

I wish that the Swift Boat Veterans and their various supporters would continue their crusade to re-evaluate all medals awarded to American soldiers living and dead, discarding official records authenticating those medals and reinterviewing all surviving witnesses to the actions in question. Obviously the Swift Boat gang themselves will be the first to surrender all their medals, honors and citations given the tough new standards they propose, but that would only be the beginning: most medals given in the past 90 years would obviously need revocation, and there are bodies to be moved out of Arlington as well. What’s that? They’re not proposing any such crusade? How odd.

I wish that the people now beginning the drumbeat for the invasion of Iran would get their wish. I wish we would invade Iran. Syria, too! Throw in North Korea, why not? I wish they would get their wish—as long as I could get my wish for my family and me to be in a fully equipped nanotechnology-supplied mile-long generation spaceship heading for Proxima Centauri with everyone and everything we love about the world on board with us.


August 23, 2004

Here's another longer essay in PDF format on massively-multiplayer online games, this time on the ways sovereignty and governance manifest in them, partly inspired by a recent entry on law-as-code by Richard Bartle at Terra Nova.

Play of State: Sovereignty and Governance in MMOGs

August 20, 2004

The Rule of Four and the Romance of the Professorial Life

I just finished The Rule of Four over the weekend. It was pretty weak stuff, and I’m not clear why it got the reviews or attention that it did. It’s mostly a padded and inauthentic coming-of-age narrative mixed with some alumni nostalgia for the (exaggeratedly portrayed) student culture at Princeton, topped off with a bit of unimaginatively warmed-over Name of the Rose. The inner voice and experiences of the main character didn’t remind me of any undergraduate I’ve ever taught, met, or been.

One thing I did get out of it is that the authors carried away a woefully inaccurate sense of academic culture from their time at Princeton, but one whose inaccuracies are drawn from some deeper archetypical representations of academia and professors.

Academics know which authors get it right, or somewhat right when they write about academia: David Lodge, Jane Smiley, Randall Jarrell. Many of these sorts of satiric or comic treatments of academic life have a wider readership. There are also hysterically wrong portrayals of academics that I don’t think anyone regards as credible or intended as such—did anyone besides me see Lou Gossett Jr. as an anthropologist in a television mystery series a few years back?

But The Rule of Four is another matter. It draws on an older, deeper expectation that people have about academics, that they are engaged in a romantic, eccentric, and often dangerously obsessive quest for truth and knowledge. The humanities professors and students in The Rule of Four are all driven by the thirst for discovery—they’re basically Indiana Jones and Belloq dueling amid the dusty library stacks, trying to be the first person to properly understand a 15th Century manuscript.

There are so few aspects of work in the humanities that are or ever have been like this. There’s an occasional flare-up of that kind of drama-queen “I have the secret to all knowledge locked in my office” stuff around Shakespeare, particularly around the issue of the “true” authorship of his plays. But mostly, it’s not about discovery in that old romantic, explorers-in-unknown-lands sense. Even science isn’t like that operationally: the era of discovery, always something of a tarted-up mythology suitable largely for third-grade hagiographies of Newton, Curie and Edison, has got nothing to do with contemporary scientific research.

The Rule of Four is even worse on the details. One of the key turns in the plot is a nefarious professor plotting to take credit for the work of an undergraduate by preemptively accusing the undergraduate of plagiarism. Now something like this actually does happen, though it’s mostly professors vs. graduate students, and without the florid conspiracies. What’s wrong about it in the book is that the professor in question has been keeping up a long correspondence with an academic journal about a major article that he’s planning to submit (it will be the plagiarized version of the undergraduate’s research). Um, I just don’t think any journals in the humanities are waiting with bated breath for years for the hot scoop that some professor has promised them, no matter how prestigious the academic in question is. There isn’t a humanities equivalent to Nature, a journal that publishes work which is read avidly not just by academics but others waiting to hear announcements of major new discoveries about books, culture or philosophy.

There’s also another character in the book, a graduate student lurking about who is planning on applying for tenure-track posts using the same tactic, claiming responsibility for the undergraduate’s research, and he’s been drafting letters to various prestigious universities telling them that his research is almost done and he will soon be available if they want to hire him. I can tell you where a letter like that would go even if its graduate student author could plausibly claim to have found a new book of the Bible hidden inside some shopping lists written by Thomas Aquinas: way back into a file that would never be looked at again. Or the garbage can.

Now some of this is just part of the general clumsiness of this particular book. But I do think that this is still what a lot of Americans think academics are—basically a combination of the Nutty Professor, Professor Kingsfield from The Paper Chase, Dr. Frankenstein, and various and sundry novelistic alcoholics and lechers. People with secrets, people with strange and monastic passions, people with eccentric manners and esoteric knowledge, people who are sometimes horribly unprincipled but usually in an ethereal and otherworldly way. It’s not utterly wrong, but it’s not especially true either.

I wonder how much of this image, which is in an odd way complimentary—it makes the academic into a kind of liminal creature, a modern shaman—has to do with the savage disappointment that one class of academic hopeful experiences after several years of graduate work. I know I had a touch of this idea in me when I started my Ph.D, that one of the things drawing me to African history was the idea of “discovering” things which were unknown, and one of the things drawing me to academia as a whole was my perception that it combined the aesthetic freedoms and personal expressiveness that we associate with writers or artists with the austere purity of a social institution devoted to knowledge. Naïve, I know, but I do think that was in the back of my mind somewhere.

There are many things that I really do see as inadequate or flawed about contemporary academia, particularly the way it goes about reproducing itself through graduate education. But this one disappointment I don’t hold academia especially responsible for. Most institutions have a romance connected to them that gets shattered in the face of the banal, humdrum reality of their everyday functioning. I hear all the time from undergraduates who’ve been disabused of their hopeful fantasies about politics or government work, about nonprofit and charitable organizations, about development work in the Third World, about K-12 education. I even hear some disappointment from former students who went to work on Wall Street or other big businesses, but at least there is a shorter distance between the exalted image and the grubby reality in those cases.

When someone is bitter about academia for that reason alone—that it isn’t about the pure, passionate, eccentric pursuit of truth and beauty, that being a professor isn’t like being a free-spirit writer or artist who also gets a health care package and a regular salary—I don’t know what to say. I think the image is a lovely one. I had it in mind myself a bit when I chose this. I’d like there to be more freedom, more passion, even more honest eccentricity in academic life. (Though not the murderous rivalries that this leads to in The Rule of Four, of course.) But anyone who has an angry bone to pick with academic institutions that is meant to be a serious call to reform or change has got to have more (and there is more, much more, that could be had) on their bill of particulars.


August 18, 2004

Nothing R Us

Last week, the news came out that Toys R Us is planning to sell off its toy business. This sounds like a business plan by Magritte: Ceci n’est pas un toy store.

This would turn Toys R Us into Babies R Us, which is apparently the only thing they’re doing that’s making a profit. The explanation offered by most analysts was that the company couldn’t keep up with Wal-Mart and other huge superstore discounters.

No doubt some companies do just plain old get beaten at their own game by a new player who does it better, but many just fumble the ball all by themselves. Toys R Us might be about to get out of the toy business simply because they’re really bad at it. I’ve had to spend more time in their hallowed halls than I might like in recent years, and the simple truth is, they suck. Their stores are usually dirty and badly laid out. Finding a clerk to ring you up—don’t even think of asking for help with the merchandise—can be a real chore. Their selection of merchandise is spotty and inconsistent.

Their ecommerce division is even worse. The Toys R Us neighborhood of Amazon is the one domain of Amazon’s that is almost guaranteed to have problems. I’ve given up ordering from them. (I’ve tried getting things for my daughter and also (blush) action figures for my own collection.) They suddenly cancel open orders or allow you to order goods they don’t have and never intend to get, even when you can find the things you ordered if you go to a brick-and-mortar store.

I can think of a lot of other retailers that have sabotaged themselves pretty effectively. Our local supermarket, Genuardi’s, is a great example. When it opened near us, we were very happy. Before that, when I first arrived at Swarthmore, the nearest grocers were uniformly horrible, and then Genuardi's came and saved the day. At first, the Genuardi’s was a smallish chain with high standards, but not a gourmet or boutique grocer like Whole Foods. It was well-kept and well-managed, with good-quality ordinary produce and meats.

Then Safeway bought out Genuardi’s and proceeded to pretty well destroy it. The worst thing they did was to aggressively move in a big line of their own branded generics in every part of the store, removing many brands previously stocked. This unsurprisingly drove many customers away, particularly because the Safeway brand was often inferior. I quickly got to the point that I wouldn’t buy it even when it was actually pretty good, just out of annoyance. The meat and produce standards went way down. Many brands of goods I rely on and buy regularly began to be stocked inconsistently. The whole store was reorganized for no particular reason. Safeway has apologized to its Genuardi’s customers, but I haven’t seen any actual changes to the stores themselves—if anything, standards have gone down even further since 2002. Nor have they brought back many brands that customers want. I actually tried calling their complaints line once to report specific brand absences—a phone number advertised on "apology" banners within the store—and got buried in an automated phone maze, which I took to be deliberate.

I suppose in terms of “competition”, this is what tennis players call an “unforced error”. If Toys R Us were a better toy store, I think they wouldn’t have so much of a problem with Wal-Mart. Since they suck, most people figure, “Why not go to Target or Wal-Mart? It isn’t like Toys R Us is more reliable, or cheaper, or even especially fun to walk around and shop in.” If Safeway had just bought Genuardi’s and left it alone, or added product without subtracting others, they’d be doing great. But some corporate wizard decided that if you dump product lines and replace them with your own, you double-dip your profits. Not if people stop buying altogether: Genuardi’s hasn’t sold a single cold-cut to me since they dumped Boar’s Head for their own inferior Safeway brand. (Yeah, I know the official line: they didn’t dump Boar’s Head, Boar’s Head withdrew when Safeway stuck its own line in. Amounts to the same thing: Safeway chose to push their crap on me at the cost of letting me buy the brand I want.) When you make me go to two or three or four stores in order to get what I feel is minimum acceptable quality, you make me start looking for a comprehensive alternative, besides losing all that business.

I always find it amusing when someone claims that privatizing a government service will make it more efficient and responsive. What they’re confusing is the abstract efficiencies that market mechanisms and competition produce—which I agree they do produce—and the internal organizational character of corporations. Corporations are often inefficient, lumbering bureaucracies no different from government bureaucracies in their ability to make, conceal and mendaciously defend bad decisions, no different in the way they allow middle-managers of few talents to fumble the ball and screw up the lives of thousands of employees, not to mention disrupt the everyday existence of numerous customers. Dilbert was funny for a reason (before it got stale and old): because that’s the way most corporations are, really are. At some moments, “Office Space” is more documentary than fiction. The interesting thing is, capitalist idealism aside, a giant lumbering company can fester to the brim with dysfunctional crap and still survive on a combination of inertia, accumulated capital and crony capitalist manipulation of public policy for decades.

Yes, there are business failures that I think are much more mechanistic, much less contingent. Krispy Kreme is clearly struggling with an overly rapid pace of expansion; the same thing happened to Boston Market a while back. There’s a kind of automatic, unplanned character to that sort of problem: it’s like bread dough rising too much when you put in too much yeast.

Yes, there are smart companies that simply outmaneuver and destroy their rivals by being smarter and better when their rivals haven’t done anything particularly wrong. (Or by being more ruthless and amoral, like Wal-Mart is with their employment policies. That’s something Wal-Mart can’t just wash away by giving money to NPR.)

But I think a lot of business problems incubate in the dysfunctionality and unresponsiveness of corporations as social institutions. There’s no reason to prefer them to government bureaucracies—in fact, they’re much worse, because as “private” institutions, they can hide information about their own incompetence and malfeasance much more effectively.

I’ve thought about this a little in some more formal and scholarly contexts. The business-case method favored in MBA programs pays a lot of attention to failure as well as success, and a lot of business consultation touts reorganization and innovation (sometimes in ways that simply create a new layer of middle-management crap to interfere with common sense). It seems to me that a lot of economic history, business history, history of advertising and related fields could do the same. One of the things we don’t study enough is failure, and when we do, we tend to study huge, messy, Enron-level earth-shaking failures. But I suspect the smaller failures—the ad campaign that falls flat, the marketing decision that goes haywire, the product line that dies on the vine, the business venture that sprouts and goes to mold within two years—are by far the more compelling thing to study, and could introduce a healthy dose of contingency and agency to both the orthodox core of business and economic history and to left-leaning critiques of the history of capitalism. I even think there's considerable room for a quasi-Foucauldian take on the history of the corporation as institution: Dilbert and "Office Space" are already half-way there, as are John Bruce's tales of corporate bureaucracy.


August 13, 2004

Crisis on Earth-Fanboy

Jason Craft does a wonderful job of explaining why DC Comics’ new mini-series Identity Crisis is both really interesting and deeply disturbing for me all at once.

I’ve written before about my wish that the purveyors of what Craft calls “proprietary, persistent, large-scale fiction systems” (I really like his terminology) try to align their fictional conception of ordinary humanity and everyday life with their representations of the fantastic and superhuman.

The author of this new mini-series, Brad Meltzer, is trying to do just that, I think. Be careful what you wish for, because you may get it.

He’s got a very fresh approach to a lot of comic-book tropes: his big fight scene in the third issue of the mini-series is a compelling reconfiguration of the same-old same-old of superheroic battle, professionalizing it in rather the same way a writer of police procedurals might with descriptions of police work.

But Meltzer is also going after one of the deepest tropes of superhero fiction—the secret identity—and positing that the only reason the bad guys haven’t figured out what the secret identities of the superheroes are is that the superheroes have been magically erasing those memories systematically every time such a discovery is made.

It’s not as if this kind of thing hasn’t happened from time to time in comics, but it’s usually because of an accident—the supervillain learns the secret and then immediately does something like fall off a cliff and suffer brain damage, that kind of thing. Occasionally there's the villain who knows the secret but won't tell other bad guys or act on it out of some kind of arrogant anti-hero sense of honor. It’s also not as if clever writers haven’t played with the issue, whether it’s Frank Miller’s run on Daredevil where he showed just how devastating it might be if a bad guy found out who the good guy was, or John Byrne’s somewhat silly if amusing one-shot suggestion that Lex Luthor would arrogantly reject the idea that Superman could want to be an ordinary person and so erase the conclusions of a researcher that Clark Kent and Superman were the same.

Closer to the mark of the current series, James Robinson told a story in Starman that hinted at three heroes murdering a villain who knew their identities and threatened their families. Meltzer is pretty well going balls to the wall with this theme, though. The story posits that villains find out who the good guys really are on a routine basis and then frequently threaten their families, and that the heroes have an organized conspiracy to erase that knowledge on an equally routine basis.

That’s interesting enough. But he goes from there to somewhere that is good, consistent storytelling and yet really squicks me out. So far, the wife and ex-wife of two superheroes have been horribly murdered by an unknown assailant. We’ve also found out that one of the two women was brutally raped by a supervillain in the past, which led the good guys to administer a magical lobotomy to the villain in question.

There’s just something in me that says this is really not a good place to go, that maybe one of the essential fictions of comics is that somehow, for some reason, it’s really hard for most people to guess who a superhero really is. Maybe I’m wrong. I’ve been really interested in how almost all the successful comic-book-inspired films of recent years have essentially used the revelation of the hero’s identity to the villain and/or to friends and loved ones as an almost routinely climactic moment. Both of Tim Burton's Batman films had the hero’s secret revealed. Both Spider-Man films have done the same. The comics have kept pace with this to some extent. In current issues of Batman it’s getting hard to remember which of his enemies doesn’t know the secret. Lois Lane is married to Superman now. Wally West (aka The Flash) has an identity known to everyone. If you couple the secret identity being less of a storytelling fetish with the rise in supervillains whose villainy is much more consciously if hyperexaggeratedly modeled on “real-world” criminality, you have to ask why the bad guys don’t do what the old-style heroes always feared the villains would do, and that’s target the hero’s loved ones.

The line that Meltzer crossed that might be hard for me to accept is rape. It virtually doesn’t ever happen in the standard comics. The few times that writers have tip-toed in that direction—Mike Grell, in a truly ugly and unnecessary part of his Green Arrow mini-series, or Alan Moore in The Killing Joke—there’s been some attempt to keep it out of frame, implied, contained. With Meltzer it is pretty much front-and-center, though not at all voyeuristically depicted. And because his story attempts to normalize and distribute so many of its proposed revisions of the superhero canon, you’re left asking, “Well, why is this the only time that has happened, given how bad these bad guys are?” And then I find myself thinking I just don’t want to pick up a regular, ordinary, non-pornographic superhero comic from Marvel or DC to find that superheroines are getting raped on a regular basis.

It’s a really well-done comic-book story. It seems to speak to some of the problems I have with superhero comics. And yet I find myself sort of wishing that it hadn’t been done, or it had been safely contained as some kind of “imaginary” or “alternate reality” story.


August 10, 2004

Al-Qaeda on the Inside

So far, I haven’t seen much conversation about Alan Cullison’s fascinating article in the current issue of the Atlantic Monthly (available online to subscribers only now, unfortunately) that centers on information he gleaned from an al-Qaeda laptop acquired right after the fall of the Taliban in Afghanistan.

Perhaps that’s because the information in the article tends to be discomforting for both the ardent defenders of the Bush Administration and some of its strongest detractors.

Cullison’s account fills what I feel is an extraordinary gap in our national—indeed, our international—discourse about al-Qaeda in particular and militant Islamists in general. There are some interesting intellectual histories of contemporary Islamic fundamentalism out there, including Paul Berman’s contentious linking of fascism to Islamism. There are some good institutional histories of the spread of particular Islamic educational and ideological projects under Saudi patronage. There are good accounts of the social roots of Islamism in contemporary Arab nations, and of the role of the war against the Soviets in Afghanistan in providing actual military experience to future jihadis.

But there’s almost nothing that really gets ethnographically inside of an organization like al-Qaeda, that gives us a good model of how they think and operate on a day-to-day basis. All we have had so far is a lot of loose talk about Islamofascism from people who have zero curiosity about the enemy they propose to fight, or on the other side of things, a lot of lazy assumptions about the relationship between terrorism and past U.S. hegemony, as if US policy is a kind of “just add water, create terrorism” thing. In one case, we have terrorists as remorselessly unidimensional, in the other case, as people without real agency who exist as a kind of social formation produced automatically and monolithically by events.

Cullison discovered some interesting things on the laptop he acquired that finally begin to flesh out the complex reality more meaningfully. On one hand, it seems to me that defenders of the Bush Administration’s “war on terror” can actually come out of the article armed with some new support for their views. First, it’s very clear that 9/11 was not a strategic aberration, and that the current security alerts may well be warranted and legitimate, that al-Qaeda, whatever it is and however it is constituted, intends to attack the United States, Western Europe and indeed “Western” influences wherever it can, however it can. If 9/11 wasn’t convincing enough, Cullison’s information should convince more: al-Qaeda’s plans for terrorism are serious, substantial and of long-standing.

More to the point, much of what Cullison found tends to confirm something that George Bush and his associates have said since 9/11, and sometimes been mocked for saying, that al-Qaeda’s principal motivations for planning attacks against the West have a great deal to do with abstract hatred for Western freedoms. Cullison found, for example, that news broadcasts from the West were carefully saved and compiled on the laptop by al-Qaeda observers, but that the image of female newscasters was always covered over. More generally, I see considerable evidence in what Cullison describes of a non-negotiable philosophy of total struggle against the West. There’s nothing as tangible and achievable as a simple withdrawal from Saudi Arabia or a simple ending of support for corrupt Arab autocracies here. It might be that those moves would undercut the larger popular enthusiasm for Islamism in parts of the Arab world, but they would do nothing to placate the core of al-Qaeda’s membership as it stood in late fall 2001. There's also some very interesting and sometimes rather funny material that indicates that al-Qaeda has been actively trying to figure out how to obscure the differences between its members and other Muslims or Arabs and has given serious thought to how to move unmolested across borders and through airports.

Now on the other hand, you can’t just take what you want from the article and ignore the rest. If you go to it and find support for the proposition that the fight against al-Qaeda really is total war, and that a tight focus on homeland security is justified, you have to also deal with another fact that the article extensively documents: that the strongest hope that some al-Qaeda members took into planning for 9/11 is that the United States would respond over-aggressively and clumsily to the attack and entrap itself in a no-win war close to where Islamicist insurgents might inflict heavy and continuing damage on the Americans. In other words, what many critics of the Iraq War said before the invasion, that the Iraq War would turn out to serve al-Qaeda’s interests, to grant al-Qaeda's fondest wish, appears to be something that al-Qaeda also believed.

It doesn’t mean that this is necessarily true—another thing that the article does wonderfully is to capture al-Qaeda’s leaders as fallible and capable of serious miscalculation, financial mismanagement and petty in-fighting over small prerogatives—but I think the article, read seriously and honestly, is yet another nail in the coffin of the war in Iraq, and yet more confirmation that anyone serious about the war against terrorism should have been against that war from the outset, and should turn against it now.


August 6, 2004

Well, it looks like the college is going to implement Movable Type on our server, so once it is, I'll do the work of migrating onto that. I'm also informed that someone is kindly aggregating me already over at Bloglines, so if you're looking for RSS before I take care of it myself through MT, there you go.


A brief rant: I was expecting to receive my DVD of Dr. Syn: Alias the Scarecrow this week, which is truly one of the most memorable things I've ever seen on TV. It's an old Wonderful World of Disney special featuring Patrick McGoohan as a Robin-Hoodish smuggler, but it's astonishingly atmospheric, spooky, and compelling. Well, for reasons unknown, Disney abruptly cancelled the scheduled release of the DVD, which had been heavily preordered at Amazon. All I can say is that they'd better get their act together. Get the thing out or give me some detailed explanation of what the hold-up is. Don't make me hold my toddler's taste for Disney stuff hostage, because I'm just crazy enough to do that. Want to sell more toddler underwear with Princess Ariel on it, Eisner? Then get Dr. Syn out the door. NOW.

August 6, 2004

Powerpoint, Presentations and Persuasion

In the past two years, I’ve attended a number of talks, workshops and conferences where scientists or “hard” social scientists were the dominant presence. Up to that point, I’d always had a kind of envy for what I assumed were the advantages of a conference format where PowerPoint and poster sessions ruled the day. I thought that such a format would allow presenters to get to the good stuff quickly and efficiently, to integrate visual material into presentations easily, and to open up the general conversation and mutual learning processes.

Another beautiful hypothesis slain by an ugly fact.

I knew about some of the common criticisms of PowerPoint, and I’ve recently been enlightened further by some guidance from Eric Behrens. Having seen it used intensively, I give a lot of credit to those criticisms, and feel much less envy for meetings dominated by its use. The one thing that I still really like about PowerPoint and software like it is the ability to integrate visual information into a presentation—a skilled user can make images, films, graphs and so on be a part of an argument or presentation, rather than an illustrative sideline to it. But PowerPoint doesn’t seem to me to be any better at engendering discussion or conversation between a presenter and an audience—which seems to me ought to be one of the major reasons to have a conference rather than to simply post fifty presentations on a web site and let people consume them remotely.

It’s not as if I like the norm at a humanities or social science conference any better. Technology rarely enters in any form: papers get read, generally woodenly, by their authors. Few authors bother to write a paper that is meant to be read aloud, instead taking a draft of a journal article or chapter and skipping passages as they go, usually being forced to hurry more and more near the end. Most of the time, there’s as little conversation about the presentation as I’ve seen at science meetings.

It finally did occur to me that even if PowerPoint didn’t have some of the conceptual problems that it does, it would still be a problem for most humanities and social science presentations.

I was recently reading an interesting essay “What Is Originality in the Humanities and Social Sciences?” in the April 2004 issue of the American Sociological Review, by Michèle Lamont, Joshua Guetzkow and Grégoire Mallard, that brought this home to me. (Not available online). The article crossed my desk because I was one of the informants interviewed for it, due to my work with the Social Science Research Council and the American Council of Learned Societies.

The researchers were looking at how academics involved in judging competitive research grants defined and operationalized “excellence”. Among the things they found was that humanists (including most historians) tended to regard the originality of a proposal as a moral attribute that couldn’t be easily distinguished from the character of the author of the proposal, that originality wasn’t a property that could be neutrally disaggregated from the rhetoric and structure of the proposal itself. The “harder” social scientists, in contrast, tended to have a wider set of metrics for understanding originality that included this kind of intertwining of an author and an idea, but which also potentially appreciated an original approach or hypothesis on its own merits.

That struck me as more or less true, and more or less a reasonable description of some of the ways I’ve operated myself as a judge of proposals. It’s rare that I read a proposal by a historian where some hypothesis or evidentiary finding simply stands on its own, valuable by itself. It’s always tied into the craftwork of the author, the ways in which they write and think, the form their arguments take, the integrity of their use of material. I can think of many historical monographs where two authors are making a similar “finding” but where one monograph seems absolutely original to me because it’s written compellingly and confidently and the other seems dull and tedious to me because it’s imitative, derivative and evasive, because the author doesn’t seem to understand why what they’re saying is potentially interesting.

Take this passage from an essay called “’Voyage Through the Multiverse’: Contested Canadian Identities”, recently quoted by John Holbo over at his blog:

"Here, I want to look at the ways in which Canadian rap and dub poetry make and reconfigure the boundaries of Canada and Canadianness - those contested spaces that often lose their intelligibility outside of state managerial apparatus. But I am interested in how both dub poetry and rap music are often positioned as not constituting "Canadianness" given how rap and dub poetry disrupt and contest the category "Canadian." I am also interested in how state administrative practices aid in positioning blackness as both part of and outside of the state's various forms of management and containment. Blackness is then understood as having a diploctical relation to nation in its resistance and complicity; and its performances are also regarded as something otherwise.”

Like John, I wince while reading it. So imagine instead that this passage said something like this instead:

“Canadians know that they live in a multicultural society, but also are conventionally portrayed as The Great White North, a country of Caucasian Mounties and beer-drinkers. Many outsiders—and perhaps some Canadians—might regard “Canadian rap” as a humorous oxymoron. The point is not to protest angrily that there is too Canadian rap, and then demand that it be taken seriously and incorporated wholesomely within an official multiculturalism. Because Canadian rap is itself not entirely sure that it is or wants to be Canadian, or in what ways, and neither is the civil society or state to which it relates. It is a good example of the characteristic ambiguities of much global popular culture: of the nation and outside of it, posed as resistance but also as eager for incorporation and acceptance.”

Same argument, same “finding”, as I see it. But I know which of the two I’d be attracted to if I were handing out the money, and it’s not just because the second passage is my own paraphrase. In either case, the argument isn’t a particularly scintillating one, and the finding is pretty intuitive, but I get no sense of command or mastery over the project from the first passage, no sense that the author really knows or is making sense of what he studies.

I recall very intensely being a part of an interdisciplinary center very early in my career where there was a person who habitually waited until the question/comment time was almost over so that he could make the last remarks. (I’m notorious for being the opposite: I’m like Hermione in Harry Potter or Horshack on Welcome Back, Kotter: the moment a talk is over, I go, "Ooo! Ooo! Over here! Me! I have a question".) This guy would get the last remark and it would be so long and arcane and over-theorized that no one could say anything else, both because time was up and because nobody got it anyway. Then one day Mr. Last Question slipped up and said something early, and we all pounced on it and interpreted it and translated it, and we basically got it down to: “I liked the paper, and I think some people are being too critical of it.” In its first incarnation, the comment had been more like (I’ve always remembered this unusually clearly): “I want to affirm the gestural field being initiated in the discursive economy of the paper, the refusal of incorporative strategies, the reconfiguration of tropes, the simultaneous translation and retranslation of language that it proposes to undertake…” and so on.

The thing of it is, when we got the commenter down to agreeing that his comment amounted to, “I liked the paper and some people are being too critical of it”, I think he was surprised to discover that that’s what he had said. It was a rather innocent thing, and made me like Mr. Last Question much more than I had before. He hadn’t known: he was mastered by academic language rather than the master of it.

I still dream of conference formats that no one uses at major professional meetings. I’d rather that most formal presentations of scholarly work—whether the writing of a humanist or the findings of a scientist—be delivered in ways intended to involve audiences, that make productive use of the face-to-face meeting of scholars. I’d rather there be more small workshops and roundtable sessions scheduled at large meetings. I’d rather that all scholars at conferences be required to give presentations that are meant to be heard, and meant to be responded to.

I don’t have PowerPoint envy any longer, though. The PowerPoint thing is never going to work for humanities scholars. We don’t have highly concretized knowledge that we can deliver in bullet points to an audience where the novelty or contribution of our work is going to be retained at all in that compressed form. Scientists and maybe some hard social scientists really can say, “Ok, we found out something that we didn’t know before, and here’s the facts, in the most efficient form we can deliver them to you”. Humanists almost never can do the same.


August 4, 2004

Calling all readers. I crave some advice about the future development of this weblog. I've gotten enough email asking for an RSS feed of some kind that I feel obligated to do it, and as long as I'm at it, I've decided that it's about time I had my own comments rather than just parasitically waiting for someone else to link to my essays.

This weblog is a very primitive, hand-rolled affair. I write my essays, drop them in Dreamweaver, and update to the college's webserver. I kind of like it that way, but there are obvious hits to its functionality as a result. (And it's more labor-intensive to boot.)

I can see three ways forward:

1) I move the whole operation out of the college's domain into Typepad and maintain the weblog myself entirely with all the standard Typepad gee-gaws and doodads.

2) I handroll an RSS feed to go along with the rest of the handrolled stuff and stay put right where I am. I have some attachment to keeping the weblog inside the college's domain because I'm interested in arguing that this sort of writing is a part of what I legitimately do as an academic, not a private hobby. Looking at the materials on RSS (I don't use it myself to read blogs), that doesn't seem too impossible--but comments are another matter, and something I'd have to continue to forgo.

3) Either with IT support or by my own efforts, I get Movable Type or something similar working on the Swarthmore webserver and get all the extra functionality that provides. MT makes me a little nervous both technically and in terms of licensing, etcetera. Our IT staff is overburdened enough: I don't want to push for something that has the potential to go wrong or be a burden later on (with comment spam, or security issues, or a licensing issue, for example).
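For what it's worth, option 2 is less daunting than it might sound: an RSS 2.0 feed is just a small, rigidly structured XML file. Here is a minimal sketch in Python of what generating one by hand involves; the titles, URLs and dates below are placeholders for illustration, not real pages from this weblog.

```python
# Minimal hand-rolled RSS 2.0 feed generator -- a sketch, not a full tool.
# Per the RSS 2.0 spec, a channel needs <title>, <link>, <description>;
# item dates use the RFC 822 format (e.g. "Wed, 18 Aug 2004 00:00:00 GMT").
from xml.sax.saxutils import escape


def make_rss(title, link, description, items):
    """Build an RSS 2.0 document from (title, link, pub_date) tuples."""
    out = [
        '<?xml version="1.0" encoding="UTF-8"?>',
        '<rss version="2.0"><channel>',
        '<title>%s</title>' % escape(title),
        '<link>%s</link>' % escape(link),
        '<description>%s</description>' % escape(description),
    ]
    for item_title, item_link, pub_date in items:
        out.append(
            '<item>'
            '<title>%s</title>' % escape(item_title)
            + '<link>%s</link>' % escape(item_link)
            + '<pubDate>%s</pubDate>' % pub_date
            + '</item>'
        )
    out.append('</channel></rss>')
    return '\n'.join(out)


# Placeholder example entry (hypothetical URL):
feed = make_rss(
    "Easily Distracted", "http://www.example.edu/blog/",
    "Essays and commentary",
    [("Nothing R Us", "http://www.example.edu/blog/aug2004.html",
      "Wed, 18 Aug 2004 00:00:00 GMT")],
)
```

Dropping the resulting string into a `.xml` file alongside the rest of the hand-rolled pages, and re-running the script on each update, is essentially all a hand-rolled feed amounts to.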

Advice on any of this from people more adept than I with these questions?

August 4, 2004

Indian Guides

Among the many family pictures I scanned while I was at my mother’s was this one:

It was strange to see that image again. My father and I at a meeting of Indian Guides, a kind of pre-Boy Scouts thing for young boys and their fathers. We were in the “Cherokee Tribe”. I was “Little Red Hawk”. He was “Big Red Hawk”.

I only remember fragments about the experience—I was probably in it for only a year or so. They would beat drums at meetings. You wore warpaint. You wore leather hand-made badges of the tribe around your neck. I think we did some kind of craftwork stuff at meetings, whatever it was that six-year-old kids could do that was plausibly “Indian”. We went on some kind of trip up to the Angeles mountains and there was still a good deal of snow up there, to my chagrin, since my mother sent me without any snow gear at all.

It had all the basic embarrassing goofiness of all organizations of its type, that kind of ur-Shriner or Rotary Club manly associational thing, but also the extra absurdity added on top of a bunch of Anglo men and their boys playing (badly and inauthentically) at being Native Americans. Many years later, I reminded my dad of Indian Guides, and he commented, “Of all the dumb things that a dad has to do for his kids, that was the number one dumb thing by a mile”. I couldn’t agree more.

The more complicated thought I have about the experience, however, is that the long-running pressure to change Indian Guides (which is now known as “Y-Guides” as a result) is a good example of where the cultural politics of identity went badly wrong and ended up as the punching bag known as “political correctness”. That journey, from limited, useful claims to free-floating puritan superego and finally to despised omnipresent caricature, is an arc that centers on the culture wars of the 1980s. Within that history, there’s a moment of metastasis, where the practice of identity politics, which had a fairly complicated intellectual genesis, escaped the left and became generically distributed, but remained strongly associated with the left by the general population even after that practice was no longer at home there, even after Americans of all political persuasions learned to use and wield the claims of “political correctness” whenever they provided a tactical advantage. (I’d say that conservatives today are actually among those most prone to deploy “identity politics” type claims on their own behalf.)

Deep down in its foundation, what came to be glossed as “political correctness” drew on two reasonable propositions:

1) Racial or other forms of social identity and associated forms of social inequality seem to have a lasting character, persisting in the United States and other societies even when major legal and political discrimination ends. There must be something about social identities which draws its force unconsciously from everyday practice and culture rather than formal legal and political structures.
2) Language and representation are not “just” words, but acts. The identity of a speaker, the social context in which he or she speaks, the relation of a speaker to an audience, and so on, make an important difference. The same words in two different contexts mean two entirely different things, and the context in which those words are interpreted also changes their meaning.

These two observations had fairly different intellectual histories. Brought together, however, they led to a third proposition:

3) Speech acts and cultural representation are an important part of the maintenance of discrimination and the definition of social identity.

You can argue with either of those foundational propositions. You can certainly argue with the combined argument. But they’re not obviously silly or trivial; they have a lot of validity to them. They’re serious arguments, not intrinsically leading to the kind of schoolmarmish politics that later came out of them and that are today a rhetorical staple of institutional and cultural politics for some groups on the left and the right.

Where did those ideas go so badly wrong? Well, I think it might have something to do with the way something like Indian Guides or the Atlanta Braves tomahawk chop ended up being understood: first, with a lack of proportionality; second, with a lack of proper historical perspective; third, with a lack of interest in intentionality; fourth, with a lack of curiosity about the general phenomenon of impersonation and identity play in American society.

The lack of proportionality is the easiest mistake to catch, and is the chief reason that “political correctness” is now such a punching bag. If Indian Guides was a part of a system of representation connected to the oppression of Native Americans, it was a ragtag, left-over bit of trivial effluvia, not, as some activists put it, the centerpiece of the "dehumanization" of Native Americans. The foundational arguments behind political correctness insisted on the seamlessness and coherency of oppressive systems of representation, claiming that every symbol and sign with even the least visible hint of a stereotypical referent to race, ethnicity or gender is imbricated with equal vigor in lynchings or violence against women or the Trail of Tears. One typical example: we had a “teaching event” here at Swarthmore after a student showed up at a Halloween event in blackface, at which one of the students in the audience managed to leap casually from talking about the history of blackface to being angry that some of his peers mistake him for his brother—a person to whom he had a close genetic relationship.

Part of the reason for this lack of proportional differentiation between the way that different symbols in different contexts are tied to the maintenance of discrimination has to do with a disinterest in the cultural, intellectual and social history of these representations. Indian Guides in the 1960s was a harmlessly stupid thing in part because it was an impotent and discarded leftover of a much more charged, violent and painful history. I don’t doubt that as such it could give pain or offense to a Native American who feels himself a victim in the aftermath of that history, but the fact that a symbol or practice invokes some past practice or representation for an individual does not make it equal to that past practice. I might see an allusive hint of past oppressions in a present-day text, but to collapse the distinction between then and now is to live in a kind of hallucinatory atemporality. We sometimes run into students here who insist that the present condition of African-Americans in the United States is indistinguishable from antebellum slavery, and thus that the acts of representation which seem to them to have a racial component must be equally indistinguishable from the cultural experience of enslavement. That collapsing of distinctions trivializes past suffering while also making it impossible to have a real and tangible politics in the present: it denies any motion or change in the past, and so cannot imagine a condition of change in the future. This collapsing of distinctions is at its worst when it applies itself to culture, speech or representation.

“Political correctness” seems to have most profoundly grated on general sensibilities when it ran (and still often runs) roughshod over intentionality, when it discards any interest in why someone said or did something and takes the determination of meaning in a speech act as entirely dependent on what someone hears or feels it to be. No one involved in the creation or perpetuation of Indian Guides was setting out to create a relation to the social position of Native Americans in the United States in the 1960s. The participants might be said to have been blissfully, foolishly ignorant of and uninterested in how a bunch of white guys calling themselves the “Cherokee Nation” and making lanyards might play in that wider world, or look to the descendants of the Cherokees, but the innocence of the participants is also a material and political fact worth taking seriously before making any criticisms. Audiences at Atlanta Braves games don’t set out to say anything when they do the tomahawk chop: they’re just doing what fans at Braves games do. To tell them that what they are doing means something that none of the audience actually intends to say is unsurprisingly alienating to some. So many of the conflicts and critiques spurred by identity politics borrow the rhetoric of legality and charge people with crimes. But if we’re going to talk about crimes, we have to talk about intentionality: it’s a centerpiece of our ideas about justice and injustice.

To get easily ruffled by Indian Guides, or anything comparable, is also to miss a more complex history underneath it of racial and gendered impersonation, of people playing at being other identities not to mock or hurt, but honestly to explore and make creative use of the experiences of others, however ineptly. Indian Guides isn’t that different from the kind of cultural cross-dressing associated with the legacy of Karl May in Germany, or with any number of other kinds of practices that are at least as complicated politically and culturally as various practices of drag or transgender performance that tend to get exalted rather than attacked within identity politics. To me this is the most important thing that we've lost sight of, the most interesting thing to re-examine.

None of this is to say that Indian Guides shouldn’t have turned into Y-Guides or what have you, or that Braves fans shouldn’t rethink the tomahawk chop. But whether those things happen or not is simply not terribly important: no one’s politics should be built around pressing hard for those kinds of changes. (Or defending strenuously against them, perhaps.) There’s not that much at stake, and a much messier cultural history, with more meanings and possibilities, than conventional identity politics is inclined to credit.