Ancient Blog

November 2002-December 2003


December 18, 2003

Eragroan

Once upon a time, there were fairly straightforward retreads of Tolkien’s Lord of the Rings like Terry Brooks’ first Shannara book. Simple enough: after Lord of the Rings turned into a commercial success, use some of the recognizable motifs, characters and plot devices to bootstrap a ho-hum work to greater commercial success. Nothing to get too exercised about.

Now we are in a different moment. I recently read China Mieville’s all-out assault on JRR Tolkien and Lord of the Rings, and disagreed vehemently with much of it, especially the specific criticism of Tolkien himself. I think, though, that Mieville is right in one respect: the further iterations of Tolkien have led to the withering of imagination, the forging of invisible chains on a generation’s creativity.

That’s what genres are: a set of implicit, understood rules and motifs that organize and typify creative work. There’s nothing wrong with genre: it gives a writer a place to hang his or her hat, a platform on which to stand, a prior organizing principle. If genre becomes too apparently constraining, it can even provide a creative opportunity, by allowing a creator to entertainingly violate its constraints and rewrite the rules, or it can spur a quest for new sources and wellsprings. The proliferation of Tolkienesque fantasy trilogies is clearly what led George Martin to look for a different template in A Song of Ice and Fire, which he found in the Wars of the Roses and similar narratives of late medieval Grand Guignol, much as Guy Gavriel Kay was able to do with the Byzantine Empire.

What worries me is when genre conventions, through obsessive reiteration, begin to dissolve into a grammar of creativity, invidiously naturalized as necessary first principles, so that younger writers come along and never even think to rebel against them, any more than we might rebel against the sun rising or water flowing downhill on any given morning.

You can see this happening now with many computer games and other derivative forms that draw upon fantasy motifs, but the book that really brought this home to me was Christopher Paolini’s Eragon. It’s gotten a lot of press because of the young age and evident drive of its author, and I agree that aspect of the book is really neat.

Paolini is living every young fantasy geek’s fantasy, and it's hard to be too churlish about that. But the fact is, the book on its own merits is pretty terrible, though in the last third or so of the book, it begins hazily to take shape as a minimally adequate work of fantasy in its own right.

In the first two-thirds, though, it is impossible not to be dismayed by the shackles weighing down Paolini’s imagination, not to mention his prose. This is no cynical, calculated imitation. It is more that Paolini has absorbed certain motifs in through his pores. We have the mishmash of Northern European names; we have dwarves who make underground cities, immortal elves who come from oversea, ugly humanoids who are orcs in all but name, dark spirits who could double for ringwraiths; we have dragons who are psychically impressed by riders; we have magic that derives from speaking the primal language of the world. We have a protagonist who is seemingly a boy of humble origins but who in fact is a child of destiny, with mysteriously unknown parents, who is fated to rise to the pinnacle of noble power and authority. The book ends up reading like a Dungeons and Dragons campaign with all the spontaneously and unconsciously derivative gestures that such campaigns invariably entail.

I don’t blame Paolini for any of this, but I think it is time for the readers of fantasy, including teenagers, to step back from the secret language of genre convention and ask why any of those conventions should be so. In the end, Tolkien’s great achievement is that he did an immense amount of scholarly work to create a total world. That’s the key to his success with so many of his readers. If we sense there is more to Tolkien’s world than what we see in the pages of The Hobbit, The Lord of the Rings and The Silmarillion, it’s not because Christopher Tolkien continues to spew out every scrap of paper he can find in the trunks in his father’s attic, it’s because the original works drew upon a vast body of existing, lived mythology and gave those myths a little shove in a new direction.

The more distant fantasy becomes from a rooting in something organic, the more shallow and alienating it feels, unless the author troubles to do something even more difficult, which is to systematically build a largely unanticipated imaginary world from the foundations on up. You can only do that as an aspirant author by interrogating each and every one of the rules and conventions you use to build the world. With Paolini, I felt like I wanted to hop in a time machine and visit him when he was 15 or so and ask him a long series of questions as Eragon started to take shape in his mind:

1. What are dwarves, and why do they live underground? Where did they come from?
2. What are elves, anyway, why should your world have them, and why do they come from ‘oversea’?
3. What exactly are the languages that people in this world speak that yield you names like ‘Galbatorix’, ‘Ra’zac’, ‘Murtagh’, ‘Arya’, 'Saphira' and so on? (I especially find fantasy words with apostrophes in the middle annoying as hell given that they make no linguistic sense whatsoever unless they’re related to some common orthography or pattern of speech in the imaginary world being described.)
4. Why does magic exist in this world?
5. How can a dragon be born with an ‘adult’ consciousness? Does their consciousness exist in some mystical or physical way before the dragon itself is born? If not, how can Saphira so quickly speak with evident experience about the way the world is? Why do dragons have telepathic powers? What do they get out of being associated with human or elf riders?
6. Why do the ‘Urgals’ [the faux-orcs] exist? Why are they intrinsically, genetically brutish and evil?
7. Why are there ‘evil spirits’ waiting to possess humans who use the wrong kind of magic? Where do they come from?
8. Exactly how does the Emperor Galbatorix exert power over his subjects, anyway? Why is his power strong in some places and weak in others? What is his ‘empire’ based on? (As far as I can tell from the book, there are very few civil authorities even in large cities who administratively represent this empire or communicate with its capital.)
9. Why is there a desert in the center of the landmass, a forest in the north, a mountain range in the south? What actually causes deserts to form, for example, in Paolini’s world? (The geography of his imaginary map is a textbook case of the arbitrary scattering of typified geographic features in derivative fantasy works.)

And so on. Note that Tolkien offers answers to every single one of these kinds of questions within the body of his text, and often without a grotesquely obvious “info-dump”, or at least with the expository info-dump credibly disguised as lore. When Tolkien runs into problems, they are often problems of a very deep kind that resemble the basic questions we have about our own spiritual and material existence. (Say, for example, the problem of theodicy: it is no easier to explain why Eru, the high god or primary cause of Tolkien’s universe, permits Melkor’s rebellion and evil than it is for a Christian to explain why an omnipotent and loving God permits evil to occur.) Little is arbitrary in Tolkien’s case.

In Paolini’s case (or that of a host of other creators), much of the substance of his book is entirely arbitrary. That doesn’t make me angry, just sad. He and other young writers have long careers ahead of them and a lot of native tools to work with. Here’s hoping they can spurn their own inheritance and look for other ancestors, or perhaps rechristen themselves entirely.


December 17, 2003

Return of the Blogger

Been busy lately, as the date on the last entry no doubt shows. I’ve got a big backlog of things stored up for here, and I’ll also be contributing to Cliopatria, the new group blog at the History News Network.

But first off, some comments on “Return of the King”, which I saw as part of “Trilogy Tuesday” last night. The whole event was fun, if a little wearying in spots. Certainly the general praise for “Return of the King” is well-deserved. It stands favorably alongside Jackson’s other two installments of “Lord of the Rings”, both carrying forward some of the great creative decisions behind his entire approach to the films and making a couple of great particular choices for this part of the story.

I’ll depart a bit from the general love-fest, however. I actually had some significant issues with the film.

Major spoilers abound, so read on only if you've seen it already or don't care about spoilers.


November 6, 2003

Pay No Attention to The Man Behind the Curtain

I guess I have to go see “The Matrix: Revolutions” this week, but I can’t say that I’m feeling very enthusiastic about doing so (in contrast to “The Return of the King”: yes, I have a ticket to the December 16th showing of all three films, and no, you can’t have it).

My dissatisfaction with “The Matrix Reloaded”, once I finally saw it, was pretty similar to the common disgruntlement. I don’t fault it for the philosophical content, and in fact, I think the centrality of the question of choice was potentially interesting and highly appropriate to a speculative fiction concerned with machines, simulacra and human destiny.

The problem really was that it often chose to tell rather than show. It was a movie with footnotes. This is what sometimes happens when creative people do more than selectively nibble a few tidbits from the smorgasbord of contemporary academic thought in the humanities. When they start to dine in earnest from that table, they usually end up the ones being swallowed. I knew that “Reloaded” and the Wachowski brothers had been so devoured when Cornel West made his cameo appearance.

There are films and television productions that use contemporary academic thought or motifs to jumpstart a clever creative engine, to be sure, most especially productions that draw on the postmodernist or poststructuralist aesthetic to create disjunctive, uncertain or unreliable narratives and characters, to play games with representation: Spike Jonze’s “Adaptation” and “Being John Malkovich”, for example.

When a singular work starts to consciously craft itself as within the discursive space of academic conversation, or when a continuing television series begins too earnestly to respond to the clusters of cultural studies scholars beginning to infest its body, the results are usually not very good.

This is not a new problem—the tension between the work of criticism and the work of creativity runs deep. The conflict is more sharply drawn and permanently antagonistic when it’s Frank Rich versus Broadway, for example. When it’s cultural studies and Chris Carter or Joseph Campbell and George Lucas, the seeming sympathetic resonances between the work of criticism and the work of creativity can lure both sides into imagining they are engaged in one big happy project together. Which often leads to shit like “Willow” in the worst case scenario, and to hampered, sodden, takes-itself-too-seriously if still halfway decent stuff like “The Matrix Reloaded”.

The problem is particularly acute on the academic side of things. By now, I’d say most humanities scholars are acutely aware of the shortcomings of the concept of a “social construction”, or of viewing everything as “text”. But these are plagues from Pandora’s box, unleashed upon the world, irreversible. Once you think about all of the world as a social construction—and of course, it always is, or at least your knowledge of the world is—to actually engage in the labor of constructing, of creating, feels inauthentic, clumsy, manipulative. You find yourself always in the position of the Wizard of Oz, revealed as the man pulling levers: you are stuck on the repeated trauma of everyone pulling back the curtain and exposing your magic tricks. Small wonder we’ve seen latter-day bits of gussied-up vanguardism like “strategic essentialism” wind their way through critical theory: it’s about trying to drop back into a naturalistic stream of cultural production.

If you want to serve as a critical handmaiden to the work of creativity, then I think that requires a frankly utilitarian approach, a conscious desire to render service at the points of absence or frustration in ongoing cultural projects. That is certainly what lies behind my own writing about computer games: I am not interested in being seen as an academic specialist in computer games, and legitimated as such, but as an academic whose scholarly experience bootstraps an experience of games into productive engagement with the act of game design. I want to work within a consciously middlebrow critical practice, like Eric Zimmerman and Katie Salen do in their recent book Rules of Play. I mostly share Intelligent Artifice's feeling about academic game criticism: there is no real reason for anyone in the game industry to look at the vast bulk of it.

If you're actually doing creative work yourself, and trying to get from academic thought to creative output, you can’t think your way there, any more than a baseball player who has lost his swing can, any more than Austin Powers could find his mojo by formally studying mojoness. To actually create requires not strategic essentialism but strategic amnesia. It’s cool for the Wachowskis to do their homework, and it’s cool to make a densely philosophical work of action science-fiction, but actually getting to that point requires a Zen forgetting of the road travelled, an erasure of the footnotes. It means you have to leave Cornel West on the cutting room floor.


October 28, 2003

Caveat Emptor

One thing I hate about blogging sometimes is that bloggers are, whatever their political persuasion, a bit lazy and reactive. They find a source of information and they link to it and treat it more or less as gospel: there’s very little investigative spirit, very little curiosity. More than a few bloggers (and blog readers) have gotten burned by that tendency over time.

I’m seeing that now a bit with the Swarthmore-Diebold story. Let me say at the outset that 1) I’m proud of both the student groups involved for posting the memos in the first place and 2) these memos are of urgent public concern and Diebold has no business trying to suppress their circulation.

In fact, you couldn’t imagine a better confirmation that Diebold is not to be trusted to manage an election for dog catcher, let alone something important, than their conduct at the moment. I don’t care what your corporate interests are: if you’re involved in the business of provisioning technological infrastructure for voting, then everything you do should be one hundred percent transparent at all times—as well as everything you do wrong, which Diebold appears to have done plenty of. (The latest news, that Diebold is maintaining that their filings of DMCA notifications do not necessarily mean that these documents are authentic, is especially hilarious: if that’s so, Diebold has no legal right whatsoever to be filing DMCA notices.)

However, I see a number of websites (Ernest Miller and Sivacracy.net among them) repeating what they’re finding at the Why War? website as if it’s the absolute gospel truth, and exhibiting zero curiosity about the totality of the story. In so doing, I think they’re falling for a very self-conscious bit of agitprop mythmaking by Why War?, agitprop that I think is a good example of a common and characteristic mistake of campus activists in general.

I say that as someone who made that mistake myself once upon a time—which I know is the most infuriating, condescending, insulting thing that a rapidly prunifying aging fart like me can say. Ok, so I’m being a jerk. Sorry. Anyway, it’s still a mistake. The mistake is this: when you're denied easy or straightforward access to the target of your activism, when you're involved in an activist project that requires laying a lot of foundations and settling for methodical achievements, you get impatient and decide to stick it to The Man Upstairs instead, aka the college administration.

Here’s the deal. Posting the Diebold memos: good idea. Everybody should do it: it should be a national fad. So Why War? did it. Good job. Then we got the inevitable takedown request from Diebold.

Swarthmore’s current policy on DMCA takedown requests is as follows (it’s pretty standard). If we get a notification of copyright violation, we notify the offending party that they have to take down the violating materials. The DMCA states that we must promptly notify the notifying party that the material has been removed, which shields us from liability. The person who had the material can file a counternotification. The party who believes they hold copyright then has a set period of time to seek a court order, at which point it’s between the two parties, with Swarthmore shielded from civil liability and uninvolved in the dispute. If the party claiming copyright doesn’t seek a court order, the counternotification entitles the original poster of the material to return it to where it was at the outset.
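
(For anyone who wants that order of operations boiled down, here is a quick sketch of the sequence in Python. The function name, parameters and step descriptions are placeholders of my own, not language from the statute or from the college's actual policy; it is just the back-and-forth described above, laid out mechanically.)

def handle_takedown(counternotification_filed, claimant_goes_to_court):
    # 1. A notification of claimed infringement arrives. The ISP has the poster
    #    take the material down and promptly tells the notifying party it is down,
    #    which is what shields the ISP from liability.
    steps = ["material taken down", "claimant notified of removal"]

    # 2. The poster may file a counternotification.
    if not counternotification_filed:
        steps.append("material stays down")
        return steps

    # 3. The claimant then has a set period of time to seek a court order; if it
    #    does, the dispute proceeds between the two parties and the ISP stays out of it.
    if claimant_goes_to_court:
        steps.append("dispute proceeds between poster and claimant; ISP uninvolved")
        return steps

    # 4. No court action within the window: the counternotification entitles the
    #    poster to put the material back where it was at the outset.
    steps.append("material restored")
    return steps

# Example: a counternotification is filed and the claimant never goes to court.
print(handle_takedown(counternotification_filed=True, claimant_goes_to_court=False))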

Now I am not a fan of the DMCA in a great many ways. But complying with these provisions of the DMCA seems like a pretty good idea to me on two fronts. First, no college administration is going to deliberately eschew compliance if that opens the college to civil liability, nor should it. I don’t care if you’re Leon Botstein. Hell, you can be Kevin Mitnick for all I care: you aren’t going to do it, and you ought to be canned if you do. In fact, you ought to be open to legal action yourself, given that the trustees of a college like this one have a legal obligation to look after the assets of the entity which they oversee. (I might mention that our nice big fat endowment is why we have the generous financial aid that we do, not to mention the low faculty-student ratio, but that's probably laying it on thick).

Second, the safe harbor provisions are one of the few things in the DMCA that have a progressive side to them, in that by shielding ISPs from liability, they allow ISPs to not care about what their customers do up until the moment they receive notification. If ISPs were open to liability for any content a customer might distribute, every web page and email would require prior approval from an ISP to disseminate. Including this blog. Bye-bye Internet. It's true that the notification procedure can be, and often is, scurrilously abused--as Diebold is doing--but the safe harbor part is important, and challenging the abuses of notification is going to take statutory reform, in my opinion.

We have two student groups here interested in this issue. One of them, the Swarthmore Coalition for the Digital Commons, wants to comply with the DMCA (as the college wants to) in the hopes that Diebold will back down, which I think is a reasonable hope borne out by circumstances elsewhere (and it’s a hope that I think at least some in the college administration share). This group is prepared, possibly with the help of the Electronic Frontier Foundation, to follow with a legal challenge should that prove necessary.

And then we have a second group, Why War?, that not only wants to violate the college’s DMCA compliance policy, but that publicly announced at a meeting with administrators that they were going to do so, and has continued to announce this on their web pages. This is a group whose leader asked Slashdot readers to mailbomb the Dean of Students, by the way, which isn’t exactly Brilliant Move #101 in the manual on creating alliances, including with people who care about netiquette and might otherwise be sympathetic. I’m sure the “college administrators side with the forces of darkness” narrative is a comforting and instinctively persuasive one for many reading about the story, but it doesn’t fly in this case. It’s pretty classic activist-martyr stuff.

When you announce you’re going to ignore a compliance procedure that shields your institution from civil liability, I think there’s a pretty good case to be made that ignoring that announcement would reopen the institution to liability. Under those circumstances, I’d do the same thing the college administration is doing, shutting down people inside the swarthmore.edu domain as they post the Diebold documents even before the college gets a notification from Diebold, because the DMCA standard turns on whether the ISP is clearly aware of a violation, whether a violation has been brought to its attention. If Why War? had been discreet enough to keep its movement of the documents sub rosa (as Internet-aided activism brilliantly allows), then that would have been different. That would have been smart mob stuff; right now they’re acting more like an IT-challenged mob.

It’s also very different from passing the material along to the next .edu domain you can find, where the DMCA clock starts all over again. That’s brilliant civil action, because it uses the law in its favor. Activism is not about breaking the law as if the law were always a good thing to break. It’s not about picking fights with the nearest available straw man (the college administration) because you know that the hapless administrators are going to have to meet with you and listen to you—unlike Diebold representatives. Activism is about results. Why War? is, as it sometimes has in the past, preferring high drama to results, and not thinking much about the consequences in the meantime.

It’s also interesting to see how the story mutates as it moves along. Now Sivacracy has the students being “punished” because they link to the documents. No, their Internet access is being disabled until they’re in compliance. I suppose you could gloss that as a punishment if you want to, but it is neither intended as such nor, I think, fairly described that way. We'll find out if the college is "punishing" anyone who merely links to the Why War? site, because I've done so here.

If Swarthmore could do one thing differently, it would be to move to a DMCA interpretation that assumes liability only over content hosted directly on the site. At the moment, the college’s IT administrators are using an interpretation that encompasses direct links to copyright-violating content as well as directly hosted content. I think that’s a mistaken (if common) interpretation of the requirements of the law, and so do a lot of other scholars and observers. But even then, I’d rather we make that change in policy in a considered way, with a full awareness of what we’re doing.


October 24, 2003

No Nothing

Having read some of the stories from the trenches at Erin O’Connor’s site, I continue to feel some of the same ambivalence I experienced in reading David Brooks’ original lament about the lack of conservatives in academia.

All the qualifications that a month’s worth of online conversation have produced still strike me as important: there are institutional variations and disciplinary variations, universities and forms of inquiry that are much more open to or even dominated by conservatism. There is a looseness even in Brooks’ original piece about what is meant by conservatism that clearly needs to be picked apart: the ideas of the religious right are much less welcome in most academic circles than libertarianism or big-business Republicanism, and possibly for much more legitimate reasons, due to deep incompatibilities between the bedrock premises of the modern university and religious fundamentalism.

I would also re-emphasize my original thoughts in response to the stream of unhappy people writing to O’Connor, that thinking about this in terms of conservatism is thinking too small, that what we are seeing here is just one small piece of a much larger pattern of intolerance and narrowness within academic life.

At the same time, I would hold to my original feeling that it's pretty fair to say that conservatives in the humanities and most of the social sciences are rare and tend to be targets of abuse at most institutions when they do exist.

However, like Baraita, I remember being struck by one of O’Connor’s correspondents, who complained of liberal intolerance that was particularly directed at his choice to teach the history of the Renaissance through primary materials. This story really baffled me, and made me worry about how the general story of persecution of conservatives (or any other outlier persuasion within academic life) is told through anecdote, and how any given anecdote, looked at closely, may raise more questions than it resolves.

Taking the case of O’Connor’s correspondent, it is totally incomprehensible to me how practicing historians could find general fault, let alone ideologically malicious fault, with a course syllabus built largely or exclusively around primary materials. I’ve built syllabi like that, and I have colleagues here who’ve done it. I’d go so far as to say it’s a standard pedagogical strategy for historians everywhere, though certainly less common than syllabi built largely around secondary materials. So at least on the surface, there’s a simpler explanation for the mistreatment this writer described: he had the misfortune to be in a department of exceptional idiots.

Appealing as that explanation might be, I began to wonder whether there wasn’t more to the story. Why did O’Connor’s correspondent regard this tale as evidence that his conservatism was the source of the animus towards his syllabus? There’s nothing intrinsically conservative about the pedagogical idea behind it: radical, liberal and politically indifferent historians use the same approach at times.

What I wonder is whether the syllabus was perceived as conservative because it was announced as such, whether the logic behind its design was not a pedagogical one (teach primary texts because it’s an interesting way to teach history) but a polemical one (teach primary texts because the bulk of the scholarship is judged to be politically or intellectually bad, trendy, leftist, etc.). O'Connor's correspondent said he didn't frame his syllabus that way, but he also says that his colleagues trooped down to the bookstore and somehow divined from his texts that he was in fact a conservative, which just seems surreal to me. Let's suppose he said that he was doing things this way because he flatly rejected the current state of things in the historiography on the Renaissance and didn't want to use that scholarship in his classroom. I do feel there were other self-declared conservative voices who spoke up during the Internet-wide debate over Brooks' article and more or less said things of this sort. Certainly some conservative jeremiads against the academy, most notably the Young America's Foundation's surveys of college curricula, pretty much amount to this sort of argument.

If that were the case, I would have a problem with O’Connor’s correspondent, though I hope I’d be polite about it if I were his colleague. At that point, I’d judge him to be an academic who was not living up to an important professional standard. I consider it my obligation to know what is going on in the scholarship in the fields I teach: it’s irresponsible to allow yourself to fall into a know-nothing posture in which you refuse categorically to read or engage prominent, common or normative modes of scholarship, however much you might disagree with their premises, themes, methodologies or arguments.

That’s a venial sin by comparison to doing the same thing in the classroom. Refusing to teach a text to your students merely because you disagree with its scholarly approach is an act of pedagogical malpractice, and to my mind, a pretty serious one at that. It’s one thing to judge a work so shoddy or weak that it’s not worth teaching, or to come to the conclusion that an important book in your own field won’t make good grist for the mill in the classroom because it’s too arcane, difficult, embedded in a scholarly debate or something else of the sort. It’s another thing to say, “I’m going to teach only primary texts because I categorically reject what passes for the state of knowledge in my field”. You’re entitled to feel that way (if you’ve done your homework) for yourself. You’re not entitled to inflict that belief on your students if they’re coming to you to learn about the subject matter as a whole.

In one class or another, I have taught Edward Said’s Orientalism even though I ultimately have enormous problems with both its central argument and its evidentiary logics. I have taught Gann and Duignan’s Burden of Empire even though I find it painfully uninterested in the harmful impact of European colonialism at many points. I have taught Cheikh Anta Diop’s work, even though I categorically, emphatically disagree with Diop’s vision of African history. I teach scholarship I like and scholarship I don’t like, if it is scholarship that many others rely on and regard as important, or that provides an important point-of-view. I may even teach scholarship I find actively reprehensible in some way. Sometimes I say so, sometimes I don’t, depending on the drift of the classroom discussion. But I teach it because the students are entitled to know about it. If I teach from primary texts, it’s because I think that’s a pedagogically exciting, useful thing to do, not because I’m trying to prevent my students from seeing a body of scholarship I disdain.

I partially endorsed Brooks’ original cri de coeur about conservatives in academic life because I think these minimum professional obligations go unmet by many academics, in fact, more often by the academic left. However, I recoil from any conservative expression of dissatisfaction with contemporary academe that simply seems to want to invert the orthodoxies and expel its enemies from the syllabi and the departments, that indulges itself in know-nothingism, in blanket raving against the texts and pedagogies of others. The obligation to cast a wide net has to fall equally on everyone. That's good intellectual practice overall, but it's especially important in the classroom, where you have a sacred obligation to expose your students to the tensions and contours of debate within a given area of knowledge.


October 22, 2003

The Mystery of Star Wars: Galaxies

This essay is really, really long, so I'm not going to post the whole thing on the main page. Proceed if you're interested in computer games or cultural studies, avoid if you're not. Basically, I'm trying to figure out how a massively-multiplayer game that has the rights to the single most popular licensed property of the late 20th Century, the backing of a company with deep pockets, and a dream team of developers can end up being in the absolute best estimation no better than any other game of its kind, and by many accounts, including my own, among the worst.


October 17, 2003

You Can't Always Get What You Want

Did ANYONE want a Marlins-Yankees series? Hell, I know Yankees fans that didn't want it. I love baseball, but this World Series is totally uninteresting to me. Those are some mighty strong curses (or just some sucky managing in the case of Game 7 Sox-Yankees).


October 17, 2003

Please Touch: Choosing the Private

I took my almost-3-year-old daughter to the Franklin Institute Science Museum and the Please Touch Children’s Museum during our fall break this week.

At the Franklin Institute, we found that there were about ten busloads of eight- to twelve-year-olds from the Philadelphia school system swarming over the exhibits. Anything remotely interactive was totally monopolized by them, often by the same five to six kids. Some of the exhibits they were doing their damnedest to break—there’s an exhibit on sports and physics that has a moving surfboard, for example, with a big warning that only one person should be on it, something that the eight or nine kids gleefully jumping on it together ignored.

We tried patiently waiting to use a few of the exhibits, in vain. One hulking twelve-year old even shoved my daughter out of a mock race car designed to measure reaction time and sneered defiantly at me when I objected.

The kids had adult supervisors, teachers I presume, but with one or two exceptions, they basically parked themselves on a bench and stared blankly into space.

At the Please Touch Museum, whose basic design I really love, there were no school groups, and it was mostly a quiet, pleasant afternoon for us there once we left the Franklin Institute. The children in attendance were younger, almost all accompanied by one or two of their parents. Kids shared, and if they didn’t, their parents intervened to make them share.

There was one interesting exception to this. There’s an area of the museum set up as a mock farm that you have to go through a gate to use, and it’s supposed to be for children 3 and under. Sometimes parents escorting several children where one is older allow the older child to come in, and usually keep a careful rein on the older child so he doesn’t overwhelm the toddlers. This time, there was a young woman with two sons, one of them about eight, who ignored her sons, looking fixedly ahead with a thin-lipped, angry expression. The eight-year old proceeded to round up every single play object in the space and sequester them inside a drum he kept in a corner. The whole room was stripped bare. Then he ran around at top speed a few times, almost knocking over children who were just learning to walk. He yanked violently on the one or two moving parts of the room. When my daughter crawled inside a soft mat that curled up, he ran over and tried to seal it up over her head while he leaned heavily on top of it, stopped only by my forcible intervention. The mother broke out of her reverie and in a bored, indifferent voice, said, “Don’t play too rough”. I took my daughter out of the farm area immediately. Later, we found a plastic saw—there’s a big mock-construction area with tools—and my daughter set out to take it back to the construction tool collection. The eight-year old came running full tilt as we approached and violently wrenched the saw from my daughter’s hand. It turned out that it was literally his saw, that he’d brought some of his own toys (and scattered them all over the museum like it was his room). This seemed like a really fantastically dumb idea to me, given that there’s tons of objects for the kids to play with at the museum, all of them to be shared. The mother said little to her son when he grabbed my daughter’s wrist and ripped the saw from her, and nothing to me.

I was thinking a lot after our day about the two experiences. In one museum, we had a terrible time in general. In the other, we had a pleasant, fun afternoon broken up by a single bad experience. What was it fair for me to expect and demand in either case, as a parent or a citizen?

My first reaction in the Franklin Institute was to want to complain to some authority—to the teachers, to the administrators of the museum, to society--to want the restoration of order and control, to seek the enforcement of rules and codes of behavior. The declension narrative--the story that we all tell sometimes where the world is going to hell in a handbasket, where such things did not happen when we were young—came readily to mind.

The more I thought about it, the less I felt I could demand. Who was at fault, after all? The teachers? Could a small set of adults really keep such a large group of kids under tight control in such an environment? The museum? What was I supposing they could do, other than closing the museum entirely to school groups? The parents of the children in question? They weren’t there, and it was clear that what the kids were doing was a collective behavior, that the individual mannerliness of any given child was irrelevant in the maelstrom. Staring the tragedy of the commons in the face, I found it difficult to assign responsibility for it.

I began to wonder at my own reactions. The other children in the Franklin Institute were exuberant, after all, seeking sensation and finding it, enjoying themselves thoroughly, and who knows, maybe even learning a thing or two with the exhibits (though I doubt it). I long ago was given pause by Michael Bowen during an online conversation about loud boom-boxes and loud car stereos in public spaces: for him, they were a sign of the richly vigorous, life-filled exuberance of a healthy urban space; objections to them were only a sign of the narrowly bounded aesthetic of uptight suburban white folks, a waspy preference masquerading as a universal norm.

Maybe that’s all I was being, an uptight white guy unnerved by a mostly (though not entirely) black group of kids enjoying themselves. What was I supposing to be better? A bunch of bored kids being held on a tight leash by controlling authorities, rationed their three minutes of experience with each exhibit and lectured to all the while so that it was kept properly educational?

Here I was preferring a museum that was quiet and peaceful because every kid in it was with (mostly white) middle-class parents who had the economic luxury to take the day off and be with their child. Of course it was better controlled: leave everything else out, a 1-to-1 ratio is going to work better than a 15-to-1 ratio for keeping things in check. The children were younger: that kept things quieter, too. You didn’t need any reference to culture or society or a declension narrative to explain the difference: it was simply a matter of labor power.

Maybe I was just preferring an aesthetic again, a way of life where kids were kept controlled and monitored, where it was more important to restrain them from offending others and intruding on personal space than it was to allow them to play freely. In Please Touch, confronted by bad behavior, I didn’t quietly invoke the local equivalent of the state and imagine the intervention of some system or structure for maintaining order—I just contemplated resolving my dissatisfaction individually, by approaching the mother of the boy and telling her that she was behaving poorly. The solution I imagined (and lacked the nerve to implement) was a private one.

Seen on a larger scale, I think my experience explains a lot of the political and social drift of American society in the last two decades. In the end, I still think I’m right to regard what happened in the Franklin Institute as a tragedy of the commons, not just as a suburban white-guy hangup. I think I had as much right to use those exhibits as anyone else who paid the entrance fee, and that it shouldn’t be up to me to enforce those rights. The school groups may have been having fun, but my daughter and I could not. (I noticed a few other parent-child clusters having the same problem, and not surprisingly, a few of them drifted over to the Please Touch Museum around the same time we did.)

If you think of the Franklin Institute as the public sphere and the Please Touch Museum as a private one, look at how the choices shake out. Remain committed to the public sphere as a middle-class person of privilege and you have to accept that you will always lose out, that you may not even get your small fractional equal share of resources or entitlements without the active presence of a strong interventionist state to maintain order, at the cost of cultural flexibility and spontaneity. You will be left at the end of the day accepting a permanent state of loss and possibly rationalizing that way of being as appropriate or fair or just deserts, as a product of your own cultural shortcomings. That’s just the kind of abnegation that one fraction of the American left indulges itself in.

Or you can buy your way into a private retreat from the public sphere, where you can have as much of a share of the privately bounded always-for-sale commons as you have time and money to claim, and where enforcement of your rights and privileges is a civil, individual matter. A private sphere where it is difficult to tell where your cultural preferences end and some larger democratic norm of behavior begins, because you’ve opted into a space that is culturally, racially, economically homogenous, a space that permits your differentially greater resources to yield differentially greater returns to you and your family. Though at the same time, it is a world lacking in any enforcement of a common set of rules, where if you are confronted with a person who takes more than their share, you’re wholly isolated in dealing with the problem.

Small wonder that the (disproportionately white) American middle class opts for an increasingly manorial, privatized world. The alternative is a public world that at best gives back an equal share of a small pool of resources shared among a very large group of recipients, but more often than not entails losing on almost every struggle of authentic importance and getting no share at all, leaving the loser to accept such losses and even rationalize them as justified in terms of the loser’s own culturally bounded shortcomings and hang-ups.

The current mayoral election in Philadelphia is as illustrative of this dilemma as the two museums. For the professional and managerial middle-classes, voting for the incumbent, John Street, is voting for the public world in which they will likely be permanent losers, voting for an acceptance of corruption and cronyism. It doesn’t matter if you vote for Sam Katz, the Republican: he’s not going to win. In fact, his defeat has probably become more certain as a result of the FBI’s probe into corruption around John Street: it’s about sticking it to the Man now. Street and politicians like Street are almost always going to win. They’re not even going to pretend that they’re fighting corruption: Street practically celebrates it. I heard several expert commentators on the local NPR talk show last week saying that it’s no different than Irish or Italian ethnic politics, and that Street’s practices only get talked about differentially because of racial animus. There’s actually some truth to that: I have relatives in a deeply corrupt small city in New England who excuse their own Italian-American or Irish-American leaders’ misdeeds but not the corruption of leaders like Marion Barry or John Street. But this is also an alibi: it doesn’t make corruption right or appropriate, and it ignores how much it harms urban populations in terms of opportunities lost and good works undone.

So many opt out and choose to retreat into a privatized, suburban world, where even if the local government is corrupt (and it often is: Republican, suburban politics can be just as filthy and mismanaged) it won’t impinge as dramatically on the private social worlds of the inhabitants. At least in that world, you can fantasize that most social struggles can be resolved through meaningful individual connection to or decisive, autonomous action against other individuals.

The American left likes to shrug indifferently at all this, and to view the choice of a private world as a selfish and destructive one. The argument sometimes goes: who cares if you always lose in the public world, if you have the resources to compensate? Most of the school kids in the Franklin Institute don’t have their daddy’s old computer or a television and DVDs or huge numbers of books or tons of toys or a middle-class academic father who has extra time to give. The museum and other public spaces are the only richly satisfying environment they’re going to have. Can’t I be big enough to let them have it all for themselves? Wouldn’t my demand for a local “state” to enforce the even sharing of the commons represent a theft of precious time from the kids who have nothing on behalf of a kid who has almost everything?

Sure, I can be big enough. But then I’m going to pay $36.00 for two tickets and go across to the Please Touch Museum. Don’t ask my daughter and me to stand outside watching the play inside, our noses pressed to the glass, proud of our virtuous losses in the public arena, flagellating ourselves for our white middle-class lack of exuberance and expressiveness.

I think there are a lot of reasons why hostility to the state as an institution has become such a central theme of American political life—and not all of them have to do with these kinds of issues, or originate with the professional and managerial middle-classes who have retreated to the manorial privacy of the suburbs. But this is a big part of it: as long as public life involves a contempt for rules, an acceptance of the tragedy of the commons as inevitable, and a deep tolerance for corruption and cronyism, it is neither rational nor reasonable to expect those who can opt out to opt in.


September 29, 2003

On Ellipses and Theses and Archives

I’m gratified that Ralph Luker responded so positively to my modest critique of his June 2003 remarks about Christine Heyrman’s Southern Cross. I think the only subject where we have a persistent disagreement rather than a consonant conversation is his linkage of Heyrman to Bellesiles, which still seems really problematic to me in the context of a focused criticism of Heyrman’s scholarship. The linkage might make sense in a profile of Bellesiles himself, if one were interested in uncovering the specific and general genesis of his practices as a historian, but in a detailed critique of a specific work of scholarship by Heyrman, it seems beside the point.

There are also points in the conversation where I would freely concede that the detailed issues at stake are well outside my own competency, no matter how promiscuously I might stick my nose into such discussions. Luker’s specific reading of Heyrman’s racial demography is one such issue. This is one of the good and bad sides of specialization in historical writing: it doesn’t take very long before you come to a point where there really are only five or ten or fifteen people with sufficient specific erudition to assess a specific claim.

Which is of course one of the major reasons that the historical profession as a whole finds itself facing the ethical issues that have dogged it of late. There are tens of thousands, perhaps hundreds of thousands, of archives of historical material in the world, each of them containing within their collections the material to sustain one, two, five, ten, a hundred, a thousand monographs. Some of these archives are open to everyone; some are open to qualified researchers; some are open only to one or two people who have gained special permission to look at what lies within. Some were open in the past but are now effectively closed. Some, if you count oral history or informal interviews, are effectively individual archives.

When you come to know an archive, you often begin to see that other researchers who have used it sometimes seem to have read or quoted the documents in ways that seem odd to you. Or you see that people have used ellipses to make a quotation that supports an argument when a fuller reading tends to support some other analysis. This is a good deal of what Luker is concerned about with Heyrman and in historical practice generally.

He is right to be concerned, certainly. I think the kinds of practices that worry him are widespread in historical (and general scholarly) writing. To be honest, I have been nervously thinking a bit about whether I’ve ever ellipsized something to make it more favorable to my analytic slant. I don’t think so, but it’s possible. I suspect that almost any historian has to wonder, and wonder all the more the longer they’ve been writing and the more they’ve written.

On the other hand, I think the reason why this kind of practice is widespread goes deeper than sloppiness or error or even what is sometimes glossed simply as “bias”. Some of it has to do with the unruliness of archives and documents and the truth of the past itself, and with the inadequacy of contemporary historical thought, especially in its most specialized forms, for dealing with that unruliness. Some of it has to do with the reward structure that academic history has constructed, and the expectations that we carry when we go into archives for the first time.

We are taught now to privilege argument and interpretation, to have a position. I teach my students that when I teach expository writing, and the commandment still holds when it comes to writing and reading academic monographs. The purpose of analytic writing, it seems to me, is to play what Gerald Graff calls “the persuasion game”, to answer the question “so what?” The purely descriptive monograph is not especially admired or honored, and with some reason. There are an infinite number of events, institutions, societies and practices in the past to be described. You cannot explain why you find yourself working with any particular subset of that infinity without an answer to the question “so what?”, and that’s going to lead you either to the work of interpretation and argument, or it’s going to lead you to announce the rule of whimsy and romantic self-indulgence, that you are writing about a particular topic because you feel like doing so. I actually don’t think it would be a bad thing if a few people took the latter road, but mostly scholars will choose the former, and rightfully so.

We can safely leave Ranke where many of us have found him, in the bargain-basement bin of turgid 19th Century German thinkers. We need not pretend that we come to the archives a blank slate, prepared to have the past write its truth through us. It’s not merely that this is, as Peter Novick put it, “that noble dream” denied. It’s not a noble dream at all. There are no final truths in human history. There is a reason that American high school students graduate hating history, and it’s because the dead hand of Rankean positivism weighs upon them. They come to college thinking that the goal of historical study is to boil down conflicting accounts about past events and come up with a single unblemished account of what “really happened”.

What a horrible, deadening, unreal way to think about the historical enterprise. How loathsome it would be if we were professionally confined to it.

At the same time, we need to come to the archives humbly dedicated to intellectual transformation, as acolytes prepared to undergo an alchemy, open to discovery and curiosity and persuasion. Because it is equally bad to enter the archives knowing exactly what we deem we must find: this is a self-fulfilling prophecy. There is no single truth, but there are true accounts and false ones, and interpretations that lie in the balance in between.

Here is the first danger of the ellipsis, and Luker and others are right to remain watchful against it. Knowing that we come into the archives charged with the need to find an interpretation, an argument, a slant, a position, we can grow desperate to find it, and without a strong professional inhibition against ways of reading that produce precisely what we need to find, that desperation makes us canny. The pressures of initial publication for junior academics are especially frightening in this regard, not least because some of the most interesting, richly developed interpretations of history come only with a magisterial command over a wide body of fact and historiography. The temptation of small, mean dishonesties opens wide all the time with the need to have a marketable, sexy "take" on a subject. Freedom from positivism is not freedom from reason.

The danger is more subtle than that most of the time. I firmly believe that the “linguistic turn” and postmodern theory have left us technically more proficient as historians even if we utterly reject (as I largely do) the intellectual premises or outlook of most postmodernist thought. The range of evidentiary material that historians have learned to look for and think about has widened a thousandfold in the past two decades. The skill that we bring to reading any single document has been massively enriched by the guild’s professional encounters with literary criticism and anthropology. We now have disciplined, substantial strategies for recognizing that a single sentence in a document can contain within it many meanings and even many authors.

The burden that this proficiency imposes on historians is substantial. With every single text in an archive now opened up to a hugely expanded set of possible readings, and the total range of evidence that can meaningfully inform historical scholarship much larger, we come to a troubling crossroads.

If peer review means checking the factual content of another person’s work, there are very few people competent to check any given example of scholarship. In a few cases, no one is competent save the author himself, unless we’re asking about the cogency or usefulness of their interpretation.

The richness we have discovered in the archive leaves us gasping for ways to represent it all fairly. I found myself for my first book, Lifebuoy Men, recognizing that consumption, commodities, exchange and material culture were discussed frequently across the entire span of the records kept in the National Archives of Zimbabwe, even though none of those topics were prominent subject headings under which records were organized. I began to realize we have let that archive’s organizational headings actually construct our research agendas, that you get a radically different sense of what the archive contains when you read widely across its total span. It wasn't just my understanding of the topics I was most concerned with that changed, but slowly, my entire sense of what colonialism was and how it was shaped began to shift.

In that sense, any single document from it is an ellipsis, an intolerable leaving out of a larger truth. Any single complete passage from any one document is an ellipsis of sorts, too, whether festooned by three dots in a row or not.

Suppose I am interested in what colonial “native commissioners” in Rhodesia had to say about the affairs of African communities. Those officials have left behind a rich documentary record of official correspondence, memoranda and often memoirs or journals as well. The problem is that their official correspondence often is concerned with the trivial, banal business of administration, and their memoirs are often vastly more concerned with hunting big game than with the African subjects the officials governed. If I just extract what I am interested in, and ignore hunting and the daily business of administration, isn’t that an ellipsis of sorts? What would be the alternative? Every work of history would be like Borges’ encyclopedia, doomed to contain the totality of the past for fear of omitting any part of it. Or it would be a history bound never to be anything more than what the literate within history represented it to be at the time: a history of modern Zimbabwe told as a series of lawn bowling scores and white supremacist speeches.

The answer in the end has to come down to trust. Trust in ourselves to do the right thing, and to know the ordinary, heuristic ellipsis from the dishonest one, to bow to the necessary truth without becoming obsessed by the impossible pursuit of a perfected one. Trust in our colleagues to do the right thing until they prove beyond a doubt that they have done otherwise. Trust that some ellipses are simple and others complex, and trust that two people can open the same yellowing pages, see something divergent within and yet neither be in breach of a professional covenant. But trust is not merely given once and forgotten: it is renewed through mutual scrutiny, through the reliable fulfillment of responsibility, through the deepening of respect. Maybe this is where historians are falling down on the job some, where the silences and fractured conversations of academic life exact a heavy price, where the burdens of professionalism and careerism have fallen most heavily. Maybe we all ought to be talking more about our own ellipses, and what is covered over by the dots that stitch together our more tattered collective practices.


September 18, 2003

A Tale of Two Administrators

John Sexton, the new president of New York University, has some big plans. He wants tenured professors to make undergraduate teaching a central part of their duties, and he wants to expand his faculty in new directions by having professors who are primarily dedicated to teaching, professors who specialize in information technology and the Internet, and "art" professors who do not have doctorates but who are highly accomplished in professional domains outside of academia.

Stanley Fish in today's New York Times continues his running debate with the Illinois Legislature about the costs of higher education, effectively defending the academic status quo and arguing that the substantial reduction of the costs of higher education is impossible without fatally compromising universities.

There was an interesting discussion of Sexton's ideas at Invisible Adjunct and several other websites, but I was taken aback a bit by the degree to which many respondents viewed those proposals as superficial window-dressing for the further adjunctification of the academy.

On the other hand, I was a bit surprised at my own feelings reading Fish's article: it was a perfectly reasonable, intelligent rejoinder to populist politicians acting in a typically anti-intellectual manner and yet I couldn't help but feel that something was systematically wrong about it as well. Part of it was simply that Fish, as is his wont, was not defending all the various expensive components of higher education as part of some sacred or important mission, but much more pragmatically as services provisioned to demanding consumers. That's a potentially clever bit of strategic appropriation of market-driven rhetoric (Fish asks whether Illinois Republicans really believe in price controls) but somehow it seems the wrong route to go at this point in time, a two-edged sword likely to rebound on him and academia as a whole.

Then it hit me: the reason I was really a bit skeptical about Fish's article is the same reason some others are skeptical of Sexton's proposals. Fish essentially says that if public universities were given a massive budget cut, they'd react by gutting a whole host of popular and essential services that their students rely upon, all of them: "offering fewer courses, closing departments, sending students elsewhere, skimping on advising, hiring the pedagogical equivalent of migrant workers, eliminating remedial programs, ejecting the students for whom remedial programs are necessary, reducing health and counseling services, admitting fewer students and inventing fees for everything from registration to breathing."

It reminded me a bit of the implicit threat that Jerry Brown made when Proposition 13 made it to the ballot in California, that if it passed, the legislature would have to respond by gutting every service that Californians valued. This both did and did not happen in some truly complicated ways, some of which bear on the current recall campaign. What I think I found vaguely irksome about Brown is what I found vaguely irksome about Fish: these statements have the feel not of prophecy but of a kidnapper's threat to kill a hostage. "Don't make me do it, man!"

You could present the reduction of academic budgets instead as a positive but difficult choice, and say, "If you choose to cut the budgets of public universities and cap their tuition, you must be aware that you will force university administrators and faculty to choose between many desirable programs and projects, and you will then admit that the era of growth in knowledge, growth in research, growth in the mission and extent of education, is over." And you could ask first whether that's what we want to admit, and if so, what kinds of Solomonic guidance politicians or others might provide to academics about how you decide whether to cut anthropology or cognitive science, history or physics, English literature or Arabic language, microbiology or astronomy, and so on. Because Fish is right: go that route and you're forcing hard choices that will hurt people and reduce the scope of education.

What he's wrong about is the implication of his rhetoric, that sweeping threat to all programs and services, that higher education must choose to cut everything indiscriminately, that there cannot be a systematic logic to the reduction of its mission. Or, more precisely, I think what he's ultimately aware of is that if public universities are simply cut off at the knees by legislators, the practical political fact of life in the academic world is that those with institutional power will indiscriminately throw overboard everything and everyone who is less powerful.

This is where Sexton comes back in. He's not talking about the reduction of higher education, but at least on the surface, about its expansive re-invention. If many are skeptical when they read his proposals, it may not be because they have any particular reason to doubt Sexton's sincerity or even the conceptual attractiveness of his ideas, but because they know that established and powerful interests within the NYU faculty will not permit those ideas to be implemented in their most desirable and idealistic form. Sexton might say that a new "teaching faculty" ought to be viewed as the peers of the traditional tenured research-oriented faculty, but even if he desperately wants that, the established faculty will rapidly subordinate and denigrate such faculty as being second-raters.

Such a faculty would have no external source of validation to draw upon: no publications, no peer networks, no reputation capital. Or worse yet, their only source of external validation would derive from academically oriented Departments of Education, which would simply draw a teaching faculty back into the usual hierarchies of academic value, as experimental animals for education researchers. A "great" teacher in John Sexton's new NYU would only be great in my estimation by what they did in the classroom, and there are and can be no external standards commonly agreed upon that would allow us to compare one such great teacher with the next, to create a platform for the accumulation of reputation capital that would put Sexton's teaching faculty on an equal plane with the established ranks of academic scholars. The same would go for his practitioner academics, the "art" faculty, and the Internet faculty would simply be one more department in a specialized world of departments.

So the skeptics are right, I think, to think that you cannot have a revolution in one country, that even if Sexton is 100 percent sincere in his visionary drive, the practicalities of academic politics are such that his vision will curdle and be nothing more than a "mommy track" or compensatory wasteland for faculty scorned by the dominant players within the academic world. Fish's projection of a university indiscriminately laid waste by budget cuts and tuition caps is conditioned by the same thing. It might be that you could imagine a rational, orderly way to shrink an academic institution down to some coherent core mission and shuck off services that do not fit that mission, but the practical fact is that the people with power in existing academic communities will in the vast majority of cases apply no such principle, and seek only to cull vulnerable individuals, programs and services, regardless of how worthy or coherent they might be. Fish knows it: the end result is not a more coherent, smaller core that leans in some philosophically sensible direction but an amputated caricature of bigger, richer private universities.

I find this all pretty sad. On their face, I think Sexton's proposals are terrific. They're immensely appealing to me. They're the kind of thing I tend to advocate myself, a diversification and enrichment of what academia is about and a strong repudiation of the hostility that conventional academics display towards undergraduate education. I'd love to see him succeed in every respect and make NYU a showcase for a reimagined academy. And I do think that at least some universities, public and otherwise, are going to have to make tough choices and reduce the range of what they do, and I think it's possible to make those choices coherently and well. It's depressing, if politically understandable, that Fish doesn't even open up that possibility in his response to the Illinois Legislature. Fish is right, though, that it's not going to happen that way, that the result of simple budget cuts and tuition caps will just produce a grotesque mess that no one is happy with.

What I think this amounts to is that if you're John Sexton, and you're serious about your proposals, you're going to have to be far more breathtakingly ambitious. You can't just bring the new faculties into being: you'll also have to constrain the authority of the old ones. You'll have to work to construct a nationwide infrastructure that connects your new faculties to social and intellectual networks that empower them and put them on an even playing field with the old faculty. You'll have to reorganize massive, subtle hierarchies of power and influence inside and outside your institution. If it's just about adding some optional extra playing pieces on a set chess board, if it isn't about changing the basic rules of the game, it's not worth doing and it really will just be more adjunctification of academia. If you're a Republican legislator in Illinois, and you want to cut the public university system down, you're going to have to go beyond crude anti-intellectual caricatures and lazy rhetoric about the pampered professoriate. You're going to have to be willing to roll up your sleeves and go into the hearts and guts of the academic enterprise. You're going to have to make a coherent, comprehensive statement about what you think higher education is, and what the purpose of public universities is. If you can't do that kind of difficult, subtle, visionary work, then just keep writing the checks, because you're not going to magically produce a leaner, better university just by choking off the money.

[permalink]


September 18, 2003

Welcoming the Walk-On

Time to stick my foot on a Swarthmore-specific (yet also nationally resonant) third rail, and talk a bit about football.

If you don’t already know, Swarthmore had a football team until fairly recently and then we got rid of it. This decision got a bit of national attention in the wake of a widening conversation about the impact of specialized athletic recruitment on college admissions and the role of athletics on college communities. Within Swarthmore, it ignited a pretty intense firestorm whose embers are still pretty hot today.

My feeling was that the local defenders of football made a few pretty legitimate points about the way we went about making this decision, though I think they didn’t understand fully that our belief that we could have a competitive football team and do everything else that we do without compromising anything is broadly typical at Swarthmore and other elite liberal-arts colleges: no one wants to be the first person to stand up and say, “No, we can’t do everything well: we have to choose what we want to do well and throw the rest overboard”. The clumsiness of the decision wasn’t any one person’s fault: it was just how we go about making decisions in general. Some alumni suggested that we had departed from Quaker consensualism in our decision to drop football: I thought the decision was a classic demonstration of the worst part of Quaker consensualism, which is its inability to make a forceful decision rapidly and efficiently at the best possible time for that decision. (We had a good chance to cut football two years before we did so, and basically the whole community blinked, afraid to say no.)

On balance, though, I think it was the right thing to do. William Bowen and Sarah Levin’s new study of college athletics, Reclaiming the Game, seems to emphasize why it was the right thing to do, and also the further nature of the challenge ahead of us and many other institutions. As reported in the New York Times, Bowen and Levin have found there is a widening social and academic gap on many campuses, including highly selective ones, between athletes and non-athletes, which they attribute to many factors but especially to the degree to which colleges are seeking out students who have invested a significant amount of effort early in life in developing and practicing a particular sport in order to bolster the prospects of their teams. These athletes thus enter college pre-segregated from non-athletes, with a strong sense of exclusive ties to athletic endeavors and inward-looking sub-communities of athletes.

The local, very passionate defenders of football here at Swarthmore, many of them alumni, offered a lot of arguments on behalf of the sport. Some of these I thought were, to be perfectly honest, thin or bogus. The proposition, for example, that football helps train young people to fulfill leadership roles may be true enough, but it’s not a unique argument for football, really, just an argument for competitive sports in general. I can’t see how you can possibly argue that football alone or football distinctively trains young people to be effective leaders—and if you do, you are arguing then that only young men can be so trained.

However, following what I’ve called “the ethnographic two-step”, the more fundamental question that occurred to me from that passionate response is simply to wonder at the passion of it all. Dropping football made a certain kind of modest sense to me: it was a financially and administratively prudent thing to do. The vehemence of many alumni’s expressed ties to football made me realize that there was something at stake here that went way beyond football itself. Football was a container or stand-in for some deeper divide between older alumni and the contemporary Swarthmore, and between athletes and non-athletes, some more primal and unspoken source of divisiveness.

I think in part, this reaction comes from what Bowen and Levin have found, which is that the world of the walk-on athlete, the “good enough” first-year student who decides to try playing football or running or what have you just for the hell of it, is mostly gone and forgotten. In part, it comes from the fact that the older alumni of Swarthmore and many other liberal arts colleges attended in an era before college was a nearly universal option, before intensely competitive admissions, before the specialization of the academy, before selective colleges and middle-class professionalism were wedded so tightly together. For them, I think, football was one of their last points of communal connection to the college as it now exists. They don’t really recognize or know the students; the professoriate is very different in its temperament and make-up; the entire purpose and social feeling of the college has changed. The student body appears to many older alumni to be too intense, too intellectual, too hermetically distant from the world as it is. Football, whether or not current students went to games, was a recognizable link to the college as it once was; football was a sign of Swarthmore’s integration into the wider domains of American society rather than its distance from them.

The thing of it is that it is precisely for this reason that I think the alumni who most passionately criticized the decision to cut football should have welcomed that decision, and should push the college (and others like it) toward further changes. Not towards cutting more athletics, but towards meaningfully reviving college athletics as part of the general culture of the institution, as part of a real calling to the liberal arts. What that means is turning away from intercollegiate athletics where the main point is to win and towards intramural athletics where the main point is to play.

The lessons that many defenders of football most passionately attributed to the sport are lessons that can be learned in any kind of competitive athletics, at any level of competition. Leadership, self-betterment, focus, intensity, the joy of winning, the importance of respect for an opponent, the balance of healthy mind, healthy body, the relationship between physicality and internal states of mind, the balance of thinking and being: you name it, it comes from any competitive sport, beginning or advanced. In the day that older Swarthmore alumni played their sports, the walk-on athlete was no fading memory but the heart of the system.

This is consistent with what I think the best values of a liberal-arts community ought to be. We should not seek out or preferentially admit an athlete who has intensely specialized in one sport in order to improve the chances of our teams, or even because we regard such intense specialization as a notable mark of achievement which suits our ethos as an institution.

I was asked by one colleague who was critical of the decision to cut football if I felt that way about physics or history or any academic subject. The answer is yes. I don’t think we should preferentially admit a student who has invested 18 years of their life becoming a Ph.D physicist and nothing more before they ever get here, or an 18-year old prospective student who knows everything about the history of 18th Century England but nothing about anything else. This is not to say that those students are untalented or unworthy, but if what they want from college education is the mere continuance of a deeply trained specialization that they have already committed to, they should go to a large research university.

What we should want are people committed to being generalists, people committed to exploring the connections between themselves and others, people who will break down the barriers Bowen and Levin are describing. We should want walk-on athletes and walk-on physicists and walk-on historians, people who discover their abiding passions in the midst of college and through an exploration of the highways and byways of knowledge and experience.

The people who had the most expertise with football told the college three years before we cut the program that you just can’t play football in that spirit in this day and age, that you’ll lose all the time and you’ll place the students who play in physical jeopardy, that it is too dangerous a sport to play casually against specialized, experienced competitors. To me, that meant it was imperative that we not play it, but I would feel the same about anything, academic or athletic, that was described in similar terms.

If, on the other hand, there was an intercollegiate conference where football was played entirely for fun, with a ruleset and mentality to match, I could easily see us playing it again. But in that sense, I still see no reason to prefer football over other games, and I would argue that we already play rugby, ultimate frisbee and other sports in that spirit.

Bowen and Levin’s work is affirmation, I think, that Swarthmore made the right decision about football, erring only in that we didn’t make that decision soon enough, before overpromising what we could not in the end deliver. The decision ought to be part of a more comprehensive reconsideration of not only athletics but academics as well. The result of that reconsideration, if I had my druthers, would be a college whose ethos as a whole was closer in key ways to the reigning spirit of the day when football was in its amateur, generalist, walk-on glory at a college like Swarthmore. For a Swarthmore alumnus/a who feels alienated from a college without football, I would hope that this first step could actually be the beginning of a road back to deeper and richer connection, a way to embrace a once and future liberal arts education that looks to be more than just narrow scholasticism.

[permalink]


 

September 16, 2003

STFU Harold Bloom

Quoted in September 15th’s New York Times regarding the National Book Awards’ intent to give its annual medal for distinguished lifetime achievement to Stephen King: "'He is a man who writes what used to be called penny dreadfuls,' said Harold Bloom, the Yale professor, critic and self-appointed custodian of the literary canon. 'That they could believe that there is any literary value there or any aesthetic accomplishment or signs of an inventive human intelligence is simply a testimony to their own idiocy.'"

Let me return fire in the same spirit.

Harold Bloom is a pompous ass who can’t even be bothered to live up to the first responsibility of intellectual life, which is to do your homework and respect the difficulties that are native to complex ideas and arguments.

Anybody on the right who has bothered to sally forth against pompous left-wing intellectuals for their isolation from everyday life, for their elitism, ought to be just as drawn to criticism of Bloom at this point. That won’t happen, because he plays a tune on their favored banjo, a tune that a certain species of strangely paradoxical anti-intellectual intellectual-wannabe finds perversely soothing.

You can make an argument (and should make an argument) that there are meaningful qualitative differences between works of culture. You can refuse the easy, lazy way that some work in cultural studies refuses to talk about those differences, or the way that some works of literary criticism subordinate those kinds of aesthetic value judgements to identity or politically-linked strategies for constructing literary canons.

The most important point is that those qualitative judgements are hard to make, not easy. They're the meat-and-potatoes business of literary criticism. They require a lot of laying of philosophical and intellectual foundations to make in general (which Bloom has done, though in ways I profoundly disagree with) but also a lot of labor in each and every specific case, which Bloom has not done.

I suppose you might be safe casually slanging a limited class of cultural works in the New York Times, but Stephen King is most assuredly not a safe target, which anyone who was awake and alert while reading him would know. First, King’s actual craftwork as a writer commands respect, regardless of the subject matter of his writing. However, I also think his ability to capture some of the social and psychological content of middle-class American life in the late 20th Century is favorably comparable to Updike, Irving and a number of other writers that might actually appear on Bloom’s stunted little radar screens.

Those two points alone have to be thought about seriously even from the small-minded, mean pedestal that Bloom perches so troglodytically upon.

Beyond that, however, is King’s subject matter. Obviously, for Bloom, this is the real issue: someone who writes about vampires and animate cars and evil clowns and psychotic romance-novel fans can’t possibly be good. Here we’re into a wider morass of debates about what constitutes good or worthy culture, and Bloom, if he likes, can categorically rule out anything that has a whiff of “penny dreadfuls”. If he were fair-minded about it, he’d have to concede that would leave some of Shakespeare in the garbage bin, and probably other works from his precious canon as well, but I wouldn’t expect him to be fair.

I would say simply that on this subject I categorically disagree with him. The culture which matters most is not merely the culture that aesthetes praise as worthy, but the culture which endures, inspires, circulates, and is meaningful and memorable for many people, to the widest audiences. Sometimes that involves the adroit manipulation of archetypical themes and deep tropes of the popular culture of a particular time and place, and King does both of those things. I don’t know how he’ll be read a century from now, but I do know that in this time and place he not only tells a damn fine story (most of the time: even I would regard some of his work as hackwork) but manages to say some important things about consumerism, family, childhood, apocalyptic dread, obsession and many other resonant, powerful themes of his day and age.

The Dead Zone is in many ways one of the best framings of an important moral question about individual responsibility and knowledge that I can think of: what are you called upon to do when you are certain you can prevent evil from happening through your own actions? Misery is a psychologically taut investigation of obsession, failure and the nature of literary merit. At his best, King’s importance as a specifically American literary figure seems unquestionable to me, and his work as iconically representative of the American spirit in his time as Last of the Mohicans was in its day.

This is not to say that I think this award ought to go to anybody who sells a boatload of books, or be a popularity contest. It's not even to say that King is unambiguously a deserving recipient. But the case for King’s merit is serious even if you believe that there is good literature and bad junk. That Bloom doesn’t bother to make his case seriously is more an indictment of his own intellectual indolence than a meaningful criticism of King or the National Book Awards.

[permalink]


September 15, 2003

Man, I can be rude sometimes. I keep getting really nice comments about stuff here from Ted Wong, one of the more interesting guys in the Swarthmore-Bryn Mawr-Haverford system, but it never occurred to me to ask if he had a blog too.

Yup, he does.


September 15, 2003

I agree with Crooked Timber: Unqualified Offerings sums up the insanity of the "flypaper thesis" very nicely. I still don't understand (abstractly) why a patriotic shitstorm has not landed with a colossal thump on the heads of those who advance it.


September 15, 2003

Software Industry Needs More Greedy Capitalists, Part XVIII

I’ve made the point before that the computer games industry is weirdly slow to capitalize on possible sources of profit. The movie industry is in thrall to the pursuit of the mega-blockbuster, but game designers almost seem afraid of trying for the biggest possible audiences. This has become one of my biggest criticisms of Star Wars: Galaxies, for example. Somewhere in the beta stage of the design process, SWG's developers appear to have consciously decided to make a game that was maximally hostile to average or casual players, not to mention people who had never played a massively-multiplayer persistent world game before. Kind of a weird thing to do when you’ve bought the license for one of the two or three most popular cinematic properties of all time.

One of the other places where this strange aversion to profit emerges is attempts to design games aimed at other target demographics besides 18-34 year old middle-class males. It shows with games for girls, which make a Barbie dressed in a pink ballet costume look like the epitome of a cross-over toy. You could take nine-tenths of the games designed explicitly for girls and put a splash-screen disclaimer at the initial load: “CAUTION: This game has been designed by men who are not entirely certain what a 'girl' is. They were furnished with blueprints that suggested that certain colors and themes are useful, and several pictures of actual ‘girls’. Care should be taken in the playing of this game by actual girls: this game may or may not have anything to do with their ideas about what would be fun to do in a computer game”.

Beyond girls, though, I’ve been even more struck at how absolutely rock-bottom horrible most games and educational software for small children are. My 2 1/2-year-old is already a proficient mouse-user and loves to sit at our old PC playing, but the range of software available for her is pretty depressing. If it’s not just a plain bad, cheap licensed-property piece of crap (Disney is especially prone to license absolutely retch-inducing stuff that seems to have been designed by a 15-year-old who knows a little Fortran), it’s buggy and extremely fussy about memory usage and operating system requirements.

One exception so far has been the Jumpstart series of games, which I gather is more uneven in quality when it gets to mid-elementary school levels, but the preschool and toddler games are really quite well done, and teach a lot of good mouse navigation skills.

The few gems aside, what is surprising to me is that so few game designers think about creating kid-friendly variants of existing software. One of the things my daughter loves to do is create characters for Star Wars: Galaxies using the slider to make them tall or short, fat or thin, green or blue and so on. If you just sold that character creation system and a small 3-d environment with simplified navigation (no combat, no multiplayer), I’d buy it in a minute and install it on my daughter’s computer. The same could be said of Morrowind—my daughter loves to create a character there (I have to get her past the complex skill-selection stuff in the beginning) and walk around the gameworld. She doesn’t like combat or the scarier places in the gameworld, so once again, you could just create a simplified navigation system, a selection of avatars, and a 3-d environment with friendly NPCs in it. Voila! Instant toddler fun. I guarantee lots of kids would enjoy something that simple—and it surely would be simple to produce. Disney’s multiplayer game Toontown (an exception to the normally wretched quality of their licensed work) is a good demonstration of that.

Is there some reason I’m missing why no one has done anything of this kind? Why is so much children’s software so bad? Is it the need to appeal to parents with the proposition that it’s “educational”, which usually results in insincere, uninvolving, hack-design work in children’s culture as a whole? Anybody got any ideas?

[permalink]


September 12, 2003

Armchair Generals R Us

Long post coming. Hold on to your hats. Apologies for repeating some of the things I’ve said on these subjects in the past.

No September 11th anniversary remarks, exactly: I pretty much wrote my anniversary entry back in the middle of the summer. The best anniversary piece I saw (amid a sea of banality) was Robert Wright’s “Two Years Later, A Thousand Years Ago” on the New York Times’ op-ed page. Aspects of Wright's thinking definitely resonate with my own: people who think about the war on terror in terms of Iraq or military force are thinking too small.

I received a couple of interesting responses about my September 8th entry. A few people felt I had broken with my usual style of trying to see all sides of an argument and leaving room for a shared conversation between people with different convictions. A few also commented that while I make it clear what I think is a losing strategy, I don’t say enough about what it takes to win the war on terror, or even why I think we’re losing, really.

Fair enough. I will say that I really do think that some of the “flypaper” arguments, including versions of them emanating from the Administration, are either knowingly dishonest or transparent bunk. In either case, I’m also not real clear on why a patriotic shitstorm doesn’t descend on people who are basically arguing that US soldiers should serve as human targets. As I’ve said here before, it’s hard for me to leave room for legitimate, complex argument with pundits and writers who don’t recognize a responsibility to track the shifting sands of their own claims and logics and acknowledge when they’re rethinking earlier claims and premises.

That to me is one of the differences between an ideologue and a public intellectual: the ideologue is only an opportunistic chameleon, refashioning his claims at will to maximize the fortunes of his political faction. If Andrew Sullivan wants to argue that it was never about WMD, and that Iraq was only chosen as a target because of existing pretexts, that it doesn’t really matter which Islamic authoritarian state we attacked as long as we attacked one, that’s up to him, but it’s definitely moving the goalposts. If he wants to claim it’s still about building a liberal democracy that will then spread inexorably, that’s also up to him, but he might want to say something, anything, about how exactly he thinks liberal democracies actually come into being and how that could be done in this case.

If I charge Sullivan or others with that task, then I need to rise to the challenge myself. Since I do accept that there is a war on terror, why do I think we’re losing it? And what do I think needs to be done instead? What’s my game plan?

Why do I think we’re losing?

There have been some undeniable successes. I’m not as impressed with the follow-up, but the initial operation in Afghanistan was masterful on several levels. Some of the changes to security both domestically and worldwide have been equally impressive and effective. And for all that the Bush Administration gets criticized for being unilateralists, they have actually managed to turn the question of terrorism into an effectively global, urgent matter and to align most states behind a consensus that combatting terrorism is an important goal for the 21st Century.

However, my first major argument that we are losing the war has to do not with the fact of the Administration’s unilateralism but the style of it. The key security figures inside the Administration have been determined even before September 11th not just to carry a big stick but to speak loudly about it, to bray to the heavens their indifference to what everybody else thinks. This is completely unnecessary and ultimately self-defeating. It is one thing to quietly determine that in pursuit of legitimate objectives in the war on terror, the United States will not be checked, slowed or diverted, and to quietly communicate to key allies and important geopolitical players this determination. It is another thing entirely to go out of your way to insult, belittle and demean the rest of the world, even to the point of politically undercutting your closest ally, as Donald Rumsfeld has done to Tony Blair on two or three occasions.

So first, we are losing because, as I wrote at the start of the war in Iraq, we cannot win alone, or even just with Poland, England and a number of other nations in our corner. That has obviously become clear even to the President just purely in terms of the costs of the occupation and reconstruction of Iraq, but it goes deeper than finances. American conservatives continue to rail against anti-Americanism abroad as if they could argue (or even attack) it into submission. It doesn’t matter whether it is wrong, morally or rationally. It exists. It is real. It is powerful. It determines in many cases whether the war on terror goes well or goes badly, whether other societies vigilantly watch for and argue against terrorism and understand themselves to be in the same boat and in the same peril as the United States. The degree and depth of anti-Americanism in the world today would not exist were it not for the unnecessarily antagonistic, contemptuous style of the Bush Administration in pursuing the war on terror. This is not a binary thing: it is not as if there would be no such anti-American response had the Bush Administration been the soul of discreet diplomacy, nor is it the case that existing anti-Americanism makes it impossible to achieve meaningful success against terrorism. It merely makes it considerably more difficult. Unnecessary things that make success in war more difficult are bad.

Second, we are losing specifically because we squandered a considerable amount of ideological and persuasive capital with the clumsiness of our justifications for the war in Iraq. This is where the shifting sands of rationalization really do matter, and matter not just as bad arguments but as bad public relations of the kind that cannot be undone through compensatory slickness at a later date. The choice of Iraq as target, then, handed our opponents in the war on terror a propaganda coup that they could scarcely have dreamed of in 2000. We voluntarily cast ourselves as the imperialist brute that our enemies have long caricatured us as.

Third, we are losing specifically because we have shown little interest in opening meaningful lines of persuasive connection to the Islamic world, and have given a great deal of unintentional credibility to the thesis that the United States is pursuing an apocalyptic crusade against Islam itself. I am not talking about happy-happy, we-are-the-world, Islam-is-a-religion-of-peace stuff here. I am talking about three quite specific things that we could do and are not doing.

a) We have to regard a stable settlement of the Israeli-Palestinian conflict as an urgent requirement for our own national security, not merely as some altruistic gesture on behalf of world peace. Settling that conflict and appearing to adopt a rigorous neutrality about the fundamental claims of Israelis and Palestinians in the process, e.g., operating from the premise that both peoples have an inalienable right to national sovereignty, is as vital a war aim as taking out the Republican Guard positions near Baghdad was. Our apparent (here I could care less whether this is “real” or not) favoritism towards Israel and the Sharon government in specific is an absolutely mortal blow to our chances of isolating terrorist organizations from the broader span of Middle Eastern societies.

b) We must recognize what the wellsprings of al-Qaeda and other Wahhabist organizations really are: Saudi money and a condition of political alienation throughout the Arab world that the reconstruction of Iraq, should it succeed, will not magically eradicate. We have to find better levers to move autocrats than the threat of invasion, because most Middle Eastern autocracies already knew what we are now discovering: invasion and occupation of even a single society is expensive, difficult and perilous even for the mightiest superpower, and the chances that even the most psychotically gung-ho gang of American neoconservatives could do it again and again throughout the region are minimal.

c) Consider our targeting of Iraq in the first place, as compared to our initial attack on Afghanistan and the Taliban. More cosmopolitan interests throughout the Islamic world readily recognized the legitimacy of regarding the Taliban as a source of instability and terrorism. But equally they knew what US planners either did not know, refused to countenance or cynically ignored: that the links of Hussein’s regime to Islamist terrorism were in fact strikingly tenuous and in some cases actively antagonistic—and yet, here we were, devoting an enormous amount of effort and power to making Iraq our primary target. Loathsome as Hussein was (and apparently still is), much of the Arab public knows that his loathsomeness was only distinguished by its extremity, not its general type, and his links to the kind of terrorism that struck the US on September 11th were weak. He wasn't the right guy to hit. This has given dangerous credibility within the Islamic world to the proposition that the real logic of the attack on Iraq is a general, flailing assault against all things Islamic or a greedy quest for oil. When someone like Andrew Sullivan stands up and says, “Well, we had to attack somebody, and Saddam Hussein seemed the most convenient”, it tends to confirm that suspicion.

Fourth, we are losing because US policy-makers within the Bush Administration (and conservative pundits) continue to think about a democratic Iraq roughly the way Field of Dreams thinks about baseball spectators. If you invade and occupy, they figured (and still seem to think), it will come. The opposite of tyranny is not democracy. Democracy is made, and made from the roots and branches of a society. It is not given out the way a G.I. gives out Hershey chocolate. The U.S. military can only be the security guards for the people who will really make Iraq democratic. The people who do the real work have to be the people who already know a lot about Iraq from the bottom-up: Iraqis themselves, Americans, Europeans, Middle Easterners, anyone who has spent time there and brings useful technical and administrative skills to the task. Iraqi democracy will have to be locally intelligible and adapted to its history and culture. Its people will need to have a sense of ownership over and responsibility for their own fate. And it’s going to cost a boatload of money, far more than $87 billion, because we’re going to need to build the infrastructure of economic and technical prosperity on our own dime. That’s what being an enlightened occupier is all about. It’s also going to cost American lives. Far more of them, in fact. Not because in some macho fashion, as “flypaper”, we’re fantasizing about drawing all the terrorists in the world to Iraq and killing them all, but because for the military to be security guards for the reconstruction of Iraq, and civilian planners and experts to advise and instruct, they’re all going to have to be available, public, accessible and vulnerable. You can’t do any of that work inside a bunker.

Wright's observations in his New York Times article are critical. It is not wrong to believe that we are at a moral crossroads between the expansion of human freedom and its diminishment. The problem is that the Bush Administration talks that talk but does not walk that walk. They do not understand that the battlefield lies on that crossroads and the weapons are mostly not guns and bombs. They do not understand that you actually gotta believe in democracy to create democracy, to believe in pluralism to spread pluralism, to hold yourself to a higher standard to spread higher standards.

How do you win the war on terror?

Soft power (economic power, moral power, persuasive power, diplomatic power) is vastly more important than military power. Military power isn’t quite the tool of last resort, but it is a tool whose place in an overall strategic assault on terror is quite particular and whose misuse or misapplication carries enormous peril for the overall plan. The Bush Administration has more or less flushed some important tools of soft power down the toilet for a generation: our moral authority, our persuasive reach and our diplomatic capacity have all been horribly reduced. This is like trying to fight a major conventional war without air or naval power: we have given up real resources of enormous strategic value and gained very little in return.

How to restore those sources of soft power and get back on track against terror (and to be gloomy, their restoration is going to take much longer than it took to piss them away)?

First, refine the so-called “Bush Doctrine” of preemptive attack. Yes, we should still reserve the right to do so, but the circumstances in which we do so and the magnitude of our response should be carefully limited. When we announce our right to do anything by any means necessary, we rightfully terrify even our potential allies. Specifically, we should make it clear that one of the main rationales for pre-emptive attacks and regime change will be aimed at nations which actively encourage and solicit the operations of multiple terrorist groups within their borders. Which, I note, did not include Hussein’s Iraq. Along these lines, we should actually harken to one of the few good ideas that Donald Rumsfeld has had, which is to invest heavily in precision military forces capable of rapid, targeted responses all around the world.

Second, focus on the problem of failed states, and do not wait for them to become havens for terrorism. Failed states threaten everyone with more than terrorism, and inflict intolerable suffering on the people trapped within their borders. Recognize, also, that a coordinated global response to such societies is going to require immense resources and huge multilateral networks (UN-sponsored or otherwise).

Third, recognize that we can hardly build democracies abroad if we do not demonstrate a rigorous, unyielding respect for democracy at home, even if that respect exposes us to the inevitable risks that an open society must be willing to incur. In other words, ditch John Ashcroft and anything resembling John Ashcroft posthaste. Nothing is more corrosive to advocacy for liberal democracy in other societies than an unwillingness to abide by its obligations at home. We cannot possibly succeed in promoting an enlightened, expansive, democratic conception of the rights and obligations of civilized human beings if we keep prisoners in perpetual limbo in Guantanamo Bay or reserve the right to deprive our own citizens of their rights by federal fiat.

Fourth, urgently renounce the kind of protectionist hypocrisy that the Bush Administration displayed with steel tariffs or that the US government has long displayed with agricultural tariffs. Part of giving elites in other parts of the world a greater stake in a globally interdependent society is ensuring that they do not have to endlessly submit to neoliberal policies established by global institutions while wealthy nations flout those same policies. Whatever the political price the Bush Administration—or any Administration—has to pay, those tariffs and any other kind of asymmetrical international policies have to fall. Nothing provokes a revolt against a monarch faster than the sense that the monarch is above the law that he imposes tyrannically on everyone else. If I had to bet on what might lead to a greater flowering of democratic governance in China over the long run, I’d say that sooner or later economic growth and the power of a large, globally engaged bourgeoisie is going to eat away at and possibly actively confront an enfeebled totalitarianism. Give people a stake in global prosperity and they’ll do the work of transforming the world for you—but you can’t give them that stake if you draw up the drawbridge and make the American economy a fortress.

Fifth, take the rhetoric of a nonpartisan approach to the war on terror seriously, rather than as a bit of transparent rhetorical bullshit stuck at the beginning of a State of the Union Address. Meaning that at all costs and in all circumstances, the conduct of the war—which will even in the best case stretch across decades—has to be sealed off by an impermeable firewall from party politics. You cannot expect the Democratic Party to sign on to a nonpartisan covenant when it has so far been utterly clear that the Republican Party intends to exploit the war on terror for political gain at every single opportunity. By all means we should have a partisan debate about the war and all aspects of it, but the Bush Administration (and any successors) needs to go the extra mile to demonstrate in the best possible faith that the objectives of the war on terror are subscribed to by virtually everyone within American politics. That’s something that has to begin with the Bush Administration and its allies on Capitol Hill, for they have more trespassed than been trespassed against in this regard. This is important not just for the integrity of the war within the American political scene, but as a strategy in prosecuting it abroad. The more that there is a perception that the Bush Administration is merely bolstering its own narrow political fortunes, the harder it is to build a long-term, deep-seated interest by other nations in combatting terrorism.

Sixth, pursue flexible and redundant strategies for securing vulnerable targets against terrorist attack rather than the rigid, expensive and often draconian strategies that have so far mostly carried the day.

Seventh, I’ve already laid out how we ought to go about the business of aiding Iraq towards liberal democracy. Now that we’re committed there, it’s important that we try to meaningfully follow through on that objective rather than flail around impotently waiting for the magic democratic fairy to sprinkle Baghdad with pixie dust.

[permalink]


September 8, 2003

One of my favorite thinkers I've met in my travels around the Internet, Gary Jones, has a blog now. He says it's just going to be "agricultural geek stuff" but looking at it, I can see he's already broken that promise. Good.


September 8, 2003

Operation Meatshield

On the situation with Iraq, at this point, lining up behind the President with any kind of enthusiasm is the worst kind of partisan bad faith, an abandonment of reason, ethics, and pragmatism.

I can accept a skeptic who wearily, resignedly argues that because the President represents the United States and because he’s committed us as he has in Iraq, we have no choice but to look for the best possible long-term resolution of that commitment. I can accept someone who reminds me that there were many people whose motives for supporting the war before it began were well-intentioned, reasonable or potentially legitimate. I continue to feel, as many do, that unseating Saddam Hussein is something that anyone ought to recognize as a positive good. I can even accept, as I noted some time ago on this blog, that there are many within the Bush Administration who may have had good intentions or reasonable opinions in promoting an attack on Iraq.

I am not prepared to cut any slack to anyone who thinks that supporting the current policy as it has been shaped by the President and his advisors is sensible, effective or ethical. I’m not interested in the outrageous hair-splitting and relativist, deeply postmodernist nonsense being spewed out by many conservative commentators, the knowing utterance of lies and half-truths, the evasions, the excuses, the total indifference to the hard questions that now confront us and the total inability to concede even minimally that many of the critics of the attack on Iraq predicted much of what has come to pass.

I continue to believe that there is a sound argument for the judicious use of force in pursuit of a legitimate war on terrorism. This is precisely why I feel such white-hot fury at the current Administration. It is not merely that we were lied to, and not only that the one thing I was prepared to concede to the Administration, that the possession of weapons of mass destruction presented a legitimate casus belli, turns out to have been the biggest and most aggressive lie atop a pile of misstatements and deceptions.

What fills me with loathing and anger is precisely my belief that there is a war on terror, and that we are losing it . The President is the Commander-in-Chief, but his battle plan stinks. Forget all the admittedly important talk about imperialism or the morality of war or anything else for the moment. The first issue, before we get to any of those debates, is that the top general has opened his “central front” on a battlefield that favors his enemy, exposes his own troops, and has no strategic value whatsoever, in service to a speculative, half-baked geopolitical vision, a "democratic domino theory", which is crude at best and quasi-psychotically delusional at worst.

The fundamental strategic idea of the war in Iraq, when the dust of the initial campaign settled, turned out to be a kind of 21st Century Maginot Line, plopping a bunch of US troops down in an exposed situation and daring every possible organization and group to take a shot at them, while also leaving endless space for geopolitical end runs around the fortress. Worse, for the mission of reconstructing Iraq to succeed, there is no way for the troops to hide or defend themselves fully from attack. There are even some conservatives who have been brazen enough to say that this is a really good idea, that US troops are “flypaper” for terrorists. Who is the political constituency betraying our soldiers? Who is failing to support U.S. servicemen? Anybody who calls for them to be “flypaper”, to be meatshields, who asks them to serve as impotent human targets, that’s who. I don't think it's possible to be more cynical than that, to be a more callous armchair general.

I’ve seen the press report on military families expressing support for their men and women in harm’s way, and they should keep on doing that. Those families shouldn’t fool themselves, however. Those men and women, courageous and giving as they are, are in almost all cases struggling mightily to make the best of a bad situation. In many cases, considerable good is coming from their efforts. Iraq may yet emerge as a freer, better, more hopeful society, and the Iraqis will be able to thank the United States if that happens. But whatever is happening in Iraq that is good, it is not a victory in the “war on terror”. Yes, Iraq may come out of this better. It is hard to imagine that it could be much worse than it was under Saddam Hussein.

However it comes out, its final state will mean almost nothing in determining whether terrorism becomes an even more potent global force: it will only determine whether one nation and one people live better or worse than they did before 2003. In contrast, the manner and style with which this war was prosecuted in the first place encouraged and empowered terrorists, and the necessary long occupation that now must ensue—for I acknowledge that we can’t just pack up and leave, that milk is spilt—has given terrorists an easy target and enormous ideological capital all around the world.

If you have a loved one serving in Iraq, and you believe that we have to take the fight to terrorism, then tell the President he’s fighting in the wrong place and more importantly in the wrong way. Tell him that his mistakes in pursuing this war have made terrorism stronger. Don’t let him use your loved ones as target practice for terrorists, and don't let him misuse their sacrifices for narrow, selfish, partisan gain. Ask him to use American military power where it needs to be used in that struggle, and to forbear using it in ways that actively strengthen terrorists.

If you believe instead that the war in Iraq is the first strike in a global war on tyranny, then ask some tough questions of the President. Why is Iraq different than Liberia? Or the Congo? Or North Korea? Or Saudi Arabia? And what happens after you unseat the tyrants? Just how do you create liberal democracies using military troops? If you have family in the US military, ask yourself whether your loved one has been trained to be a civilian administrator, a mediator, a political scientist, a lawmaker, a traffic cop, a speechmaker, or an anthropologist. Ask yourself whether our fighting men and women have been given the tools or the practice or even just the money and materiel to succeed in this mission, and whether you support their enlistment in what is surely a war that will last decades and cost many of their lives, the war against all tyranny everywhere. Ask yourself if you hear even a peep from anyone in the Administration who seems to have the slightest glimmer of a clue about how to create democracies abroad through military occupation, or if any of the right-wing blowhards who have promoted the war seem to either.

This is either a war against terror, fought in the wrong place, in the wrong way, by the wrong leadership, or it is a wider war against tyranny and for democracy, fought without even the faintest clue of what to do next by a leadership that barely understands or believes in democracy themselves. It is rapidly becoming an endless misadventure whose only continuing justifications lie in the repeated errors of the people in charge of it. They fail, and then use their failures to argue that those failures are why they must be allowed to fail some more. Don't let them. We cannot withdraw quickly or easily now, but we can ask that a failed leadership shoulder the burden of their failures where it belongs, squarely on their own backs rather than on the backs of U.S. soldiers.

[permalink]


September 7, 2003

Where the Girls Aren't

Catherine Orenstein’s essay in Friday’s New York Times on “Sex and the City” is a textbook case of why a certain style of feminist critique, or the “cultural left” more generally, has so thoroughly lost out in the war for hearts and minds. You could write a parallel to Danny Goldberg’s Dispatches From the Culture Wars that deals more with intellectuals, academics, writers and thinkers who identify with the left but who completely lack the ability to perform a sophisticated reading on popular culture. You’d think that Susan Douglas’ great book Where the Girls Are would have in particular sent this species of feminist cultural criticism back to the drawing board. But here’s Orenstein, walking into the same cul-de-sac with eyes wide shut.

I keep being told by friends and associates that every single instance of this ham-fisted approach to the meanings and content of popular culture, every case of people on the left displaying a near-instinctive loathing for mass culture, every moment where a prominent liberal or left thinker comes off like a caricature or a scold, is just isolated or exceptional or unusual. Maybe so, but enough exceptions amount to a pattern.

Orenstein basically manages to demonstrate that while she may have watched “Sex and the City”, she doesn’t understand it. It’s not one of my favorite shows, and I don’t watch it very often. I think it’s a great show, it’s just that I don’t personally enjoy it that much. I’ve got no pressing political beef with it, for two reasons.

First, because I think it captures, with a goodly amount of authenticity and self-awareness, a real enough and rather interesting social world, and I think to ask a show or film or book to portray an idealized world instead of invoking something recognizable is one of the classically fatal impulses that the cultural left (and cultural right) display. Orenstein invokes “The Mary Tyler Moore Show” as the preferable alternative to “Sex and the City”, presumably in part so that she doesn’t come off as a high culture snob who thinks that only the films of Lizzie Borden would make an acceptable substitute for “Sex and the City”. What she doesn’t seem to get is that “The Mary Tyler Moore Show” worked in its time and place because it had the same invocational authenticity then as “Sex and the City” does today. You can’t rip that program out of its time and say, “Here, do another one of these”, because the moment that show legitimately invoked is long past. When we watch Mary Richards on her odyssey to independence now, we watch her historically, as a window into a past sensibility. It's still funny, and occasionally even still relevant, but it's not set in the world of now, however much it might be set in the world of NOW.

Both the cultural left and the cultural right ultimately have a simple-minded conception of the relationship between practice and representation. In their shared view, to represent a social world is straightforwardly to summon it into being. To show something on television is to celebrate it, or in Orenstein’s typical formulation, “glamorize” it, unless it unambiguously sets out to demonize instead. So if one wants to see changes in the world, then harass television executives and movie producers to make products to match, products that follow a kultural komissar’s checklist of approved and unapproved symbols and signs. It’s a foolish impulse, both because it usually has the opposite of the intended effect (if followed, it most often results in turgid, prissy, self-righteously boring work that no one watches unless they're compelled to do so) and because it has buried within it an authoritarian desire to order and mobilize the work of representation to narrowly instrumental ends (the compulsion to watch tends to follow closely on the heels of the audience rejecting the turgidly idealized work).

Second, I have no real complaint against “Sex” because it is so keenly aware of and open to most of the issues that Orenstein raises. Sometimes the question of materialism, or the emptiness of the quest for male companionship, or the futility of Samantha’s nyphomania, is laid out through irony, and sometimes it’s laid out pretty forthrightly, but these things are in any event always issues. This isn't Horatio Alger or some crudely propagandistic brief for the characters as a human or social ideal.

The show showcases the kinds of debates that Orenstein launches so blithely as a complaint against the program. I don’t know how you could see the show and miss its ironic, self-referential spirit. There may be complaints to be made about the ubiquity of such a spirit, of the postmodern geist, but they ought to be made knowingly and appreciatively. Orenstein comes off as nostalgic for “Mary Tyler Moore” because it was the last time she actually understood a television program: she doesn’t seem able to grasp the possibility that popular culture could contain double meanings, that it is possible for a program to show something that is at once authentic and superficial. Welcome to 2003. Mary Richards, wherever she is, watches and laughs at "Sex and the City", even if Orenstein doesn't, laughing both with and at its characters.

Orenstein's article is not just an intellectual sin. When you approach popular culture the way that Carrie Nation approached a cask of booze, you shrink rather than expand the power and coherence of contemporary feminism, reduce it to a prim and prohibitionary force. If Orenstein is concerned with being worthy of the feminism that her mothers bequeathed her, she might want to be concerned with how to keep feminism a meaningful and growing concern relevant to contemporary life rather than perpetually, mordantly looking backwards to past glories.

[permalink]


August 25, 2003

Vaccines and The Ethnographic Two-Step

Responding to a discussion of vaccines at Crooked Timber, I started thinking about how few public thinkers in the United States seem to have mastered an important habit of thought that I’ve come to call the “ethnographic two-step”, a tool that anthropology could contribute to public thought, but that mostly goes unused, even by anthropologists. (Something that Micaela di Leonardo observed some time ago.) For the most part, commentators and writers tend to resolve themselves towards one half of that two-step dance rather than follow the whole routine.

The two-step goes something like this: you have to understand why people do what they do from the inside out, but you also have to be willing to critically evaluate what they do and if necessary, judge it or condemn it. (Or for that matter, praise and seek to reproduce it if you find it laudable and applicable to others.)

I am not entirely sure it matters which of the steps you take first. You might see or hear of some practice or community that you are certain is illiberal, anti-democratic, repressive, or unfree, and then resolve to try and understand why it exists in the world. Or you might seek to understand the logic and ideas of some group or cultural form and then decide, having understood, that you find it repellent. I suspect that the latter is preferable, to approach an unknown with a sense of curiosity and allow judgement to emerge organically out of that investigation. I am sure, however, that anyone who wants to make a meaningful contribution to the deepening of knowledge and the achievement of lasting justice is obligated to do both things as part of the same overall effort.

The caricature of “cultural relativism”, most cuttingly articulated by Dinesh D’Souza, is usually crude in its intellectual history of the trend and simplistic in its assessment of the contemporary scene, but it scores points because it does identify, however poorly, a real habit of thought in public discourse in both the United States and Western Europe. What D’Souza calls “cultural relativism” comes from many sources: modern libertarianism, old-style Burkean conservatism, certain varieties of anthropological analysis and a host of others. This way of thinking takes a human group, society, or institution and holds it off at a permanent distance, always exotic. It often amounts to a half-voiced, half-gestural view of the world that ends up sounding rather like Star Trek’s Prime Directive, a belief that intervention never changes anything for the better, and that cultures are what they are, and natural to themselves and of themselves, and not us, never us. The more conservative relativists apply this belief largely to domestic issues and groups, and the more liberal ones, usually to foreign or non-Western societies and practices. It is rare to find someone who views WASPs, Yanomami Indians, corporate boardrooms, Islamic fundamentalists, the Oval Office and biker gangs in New Jersey with the same dispassionately relativist eye.

On the other hand, on the contemporary US political scene, we have the neocons, the religious right, and the cultural left, all of whom are fervently consumed by a transformative project which takes little or no interest in understanding the inner worlds and sensibilities of the target of their wrathful vision. What’s wrong is wrong, and what’s right is right, and these constituencies tend to take it for granted that the difference is obvious and that anyone who hesitates or seeks further understanding is willfully blind to that difference and therefore morally culpable themselves. It doesn’t matter to any of these groups why people do what they do—if what they are doing is wrong, their motives are assumed to be venal, self-serving, and volitionally wicked, inasmuch as there is any thought given at all to motives.

The problem is that those who seek transformation have no real hope if they are unwilling to see the world the way that those they wish to transform see it, or more precisely, they leave themselves one avenue and one avenue only for realizing transformation, and that’s force, compulsion, coercion. More troops, more laws, more speech codes, more restrictions. Certainly that works now and again, but not very often and only in highly particular situations where the practices in question were largely legal or governmental in the first place.

Equally, the critique of the complex tendency towards relativism scores points because relativism, however intricate, tends to be both self-deceiving and morally hollow. It often blinds itself quite deliberately to the existing connections and mutualities that link an observer to the culture he or she wants to understand, and creates a false sense of Olympian distance. It often creates, allegedly in service to empathy, a two-tiered view of humanity where the “us” have moral and ethical obligations and standards and the “them” are innocent, prelapsarian animals or children to whom “right” and “wrong” do not apply. Few relativists are true nihilists who reject ethics altogether, but they simultaneously make ethics a parochial, local matter that affects only “us”, while also ballooning “us” into a position of universality.

Take vaccines. On one hand, yes, when people come to a crypto-theological conclusion about something like vaccines, they now commonly reach for a quasi-psychotic confection of dissenting studies and evidence to create a rhetorical case for their decision. Many of those who resist vaccination in the contemporary US are clearly exploiting the willingness of everyone else in their community to vaccinate their children and accept this risk. (E.g., in a community where 99 children are vaccinated against pertussis, 1 child can forgo the vaccine without fear of pertussis and have no risk from the vaccine itself to boot). They place themselves outside the commonweal and make themselves the ethnographic exception, and then instrumentally misuse the rhetoric of science and the right to privacy to justify what is little more than parasitism. The refusal to vaccinate is at its worst when it comes cloaked in the remnants of left-leaning counterculturalism, because in that case it even pretends to a kind of phony communitarianism, a loyalty to some never-never land social world that is yet to come.
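To make the shape of that free-rider arithmetic concrete, here is a minimal sketch in Python. Every number in it is invented purely for illustration; the exposure probabilities, vaccine efficacy, and harm rates are assumptions, not actual pertussis or vaccine data.

# Hypothetical expected-harm comparison for one child, to illustrate the
# free-rider logic described above. All numbers are made up for illustration.

def expected_harm(p_exposure, vaccinated,
                  p_harm_if_infected=0.01,   # assumed chance an infection causes serious harm
                  vaccine_efficacy=0.95,     # assumed protection conferred by the vaccine
                  p_vaccine_harm=1e-6):      # assumed chance of a serious vaccine side effect
    p_infection = p_exposure * ((1 - vaccine_efficacy) if vaccinated else 1.0)
    return p_infection * p_harm_if_infected + (p_vaccine_harm if vaccinated else 0.0)

# When nearly everyone else vaccinates, circulating disease is rare, so the lone
# unvaccinated child faces almost no disease risk and zero vaccine risk.
print(expected_harm(p_exposure=1e-5, vaccinated=False))  # free rider, high community coverage
print(expected_harm(p_exposure=1e-5, vaccinated=True))   # vaccinated child, high coverage
# If many others made the same choice, exposure risk would rise sharply.
print(expected_harm(p_exposure=1e-2, vaccinated=False))  # free rider, low community coverage

Under these made-up numbers, the lone non-vaccinator in a highly vaccinated community bears less expected risk than a vaccinated child does, which is exactly the parasitism described above; the calculation flips as soon as many other parents make the same choice.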

And yet. And yet.

There are two things that many scientists, doctors and policy-makers who express frustration about these beliefs simply don’t understand. The first is that while it may not be rational (or civic) to avoid vaccination (or many other things) based on the evidence available, it is rational to wonder whether the evidentiary materials scientists and doctors are assembling may be suspect in their particulars or even in their broad outlines.

Doctors and public health officials do not instantly bow to the truth of their own data (witness their palpable collective unease about the body of evidence confirming that wine and other alcoholic drinks have beneficial effects when consumed in moderation) and sometimes their data also contains genuinely messy information about what is true and not true. The public grasps this, and it makes a lot of skepticism and willingness to ignore bodies of scientific evidence at least partially rational—especially in cases like autism or asthma, where the condition is poorly understood and where its rising incidence is admittedly mysterious. In the end, seen from the inside out, skepticism about science makes a good deal of practical, everyday sense, not because it is ill-informed, but the exact opposite, because of long experience and knowledge, and because science sometimes situates itself as a force outside of community and authoritative over it, not as a structure of reason that rises organically from the deep roots of our lives and times.

The other thing that many scientists and policymakers do not grasp when they decry public irrationality about gauging risks is that we don’t evaluate risks through probability. We evaluate risks through narrative. Meaning, we tell stories about the consequences of risking and losing and decide based on the qualitative evaluation of the tragedy or pain involved and the relative preventability of the events that would incur that pain. It is the meaning embedded in the stories, the interpretations of the world, the qualitative way we evaluate not just why things happen but how it feels when they happen, that makes all the difference, and ought to make all the difference.

This is what is happening with vaccination and autism. Parents tell stories in their own heads about what autism is and what it would mean, and those are very bad stories—a bright, ‘perfect’ child without any problems suddenly turned into a child who is cut off from emotional and intellectual connections with the world. They tell stories about what it would mean if it turned out one day in the future that something they did (vaccination) was what caused this thing to happen. The badness of that story is reason alone for a few parents to turn aside, and that’s perfectly understandable if incorrect—because it is how we actually (often correctly) evaluate the real meaning of probability in our lives, in terms of what we can choose to do or not do, but also how horrible or painful the consequences of the dice coming up badly would be. Seen from outside, dead is dead, but told from inside, most of us have deaths we'd prefer and deaths that terrify us. It doesn't matter if the risks of the two are the same.

Beyond that, every decision to not vaccinate has a local meaning and genesis that also requires some investigation, ranging from the conditions of a particular individual parent’s previous encounters with biomedicine or serious congenital conditions to entire communities of faith or practice trying to find a way to demarcate their distance from the ills they perceive in the wider society.

That’s a good example of the ethnographic two-step: a choice or practice can be both wrong and perfectly reasonable, immoral and sensible, repellent and legitimate. More importantly, it means that if you object to something and want to see it changed, you begin to have a roadmap towards persuasion that incorporates humility, deep understanding and ethical clarity. Then you’re not just stuck advocating more laws, harsher punishments, stronger use of force, and you’re not stuck portraying the object of your animus in increasingly inhuman and mechanistic terms, people who do what they do for reasons unknown and unknowable.

[permalink]


August 21, 2003

The Comics Journal Hair-Trigger, Or Why Johnny Blogger Ought To Have His Own Comments Section

So I got off a couple of little zingers about science fiction conventions while trying to describe why I don’t think my own fannish interests in science fiction, comic books, computer games or anything similar are at all like two young women hanging around a hotel evidently seeking (and apparently failing) to get laid by some Australian kidvid stars.

Deflector screens went up pretty hastily over at Electrolite once the faintest hint of a critique of science fiction fandom was sensed in my piece. I like a cheap joke at the expense of someone badly dressed as Commander Adama as much as the next guy, but knocking science fiction fandom or mistaking it for narcissistic celebrity worship would be a supreme act of self-hatred on my part. Not only do I have a recent essay on this blog about my irritation that Kang the Conqueror’s invasion of the planet Earth was taken with insufficient seriousness by Marvel Comics, but my manifesto on the state of academic cultural studies is in many ways a call for it to learn from what Patrick Nielsen Hayden defines fandom as, “a bohemian network of affinity groups”, and to adopt a critical voice that is more rhetorically middlebrow and more powerfully influenced by and intertwined with the interpretative frameworks that fans create.

I am a science fiction fan. A comics fan. A fan of computer games.

My real, material connections to the networks Patrick describes are thinner and less social, more solitary, than many. I don’t think it makes me less a fan if my participation in fan networks is mediated through online message boards, email listservs, and so on, and is largely expressed through my own canonical knowledge of science fiction literature and media and through private acts of devotion like festooning my shelves with action figures. You don’t have to write filk songs or go to conventions or write K/S or be a paid member of the E.E. Doc Smith Official Fan Club to count within Patrick’s definition, I hope.

I had thought that was clear in my Wiggles essay. The distinction I’m shooting for there is between something that squicks me—squicking being a visceral, not entirely rational desire to distinguish between your own practices and someone else’s—and my own fan involvement with science fiction, comic books and computer games. What squicks me is the narcissism and perhaps also lack of proportionality that a certain modality of fannishness seems to license for those in its grip, the lack of ordinary empathy for the humanness of the people who write the books and sing the songs and act the parts.

That it wasn’t clear may have something to do with my own writing, but I think Patrick’s reaction also has a lot to do with what I have come to think of as the “Comics Journal approach to cultural devotion”, which is to react with a mixture of erudite fury and defensive praise of the comics form to even the slightest whiff of someone failing to appreciate the genius of comics and more importantly the legitimacy of devotion to comics. What’s interesting is that this reaction can be triggered easily either by a clueless “outsider” slamming comic books as greasy kid’s stuff or by someone who is unabashedly a “fanboy” in his tastes and critical appreciation for comic books, who couldn’t care less about the critical potential of sequential art or the genius of Maus, who knows a lot about Wolverine and doesn’t much worry about whether Wolverine is art or not.

This is a roundabout way of suggesting that many science-fiction fans, of any and all definitions—and fans of any kind, really, from soap operas to Holmesians—have remarkably thin skins and are quick to take offense. Believe me, I know: I’ve spent a lifetime having to defend my tastes and interests to almost everyone around me, to explain, as Patrick does, that my fandom is “orthogonal” to what the non-fans think it is.

A neighbor and colleague of mine recently got into my inner sanctum, my home office, because his little boy was over and had found my treasure trove of action figures and science fiction novels, all watched over benevolently by my Alex Ross-painted Dr. Fate poster. He was being very pleasant about it, but I could see behind his eyes: “Holy cow, I had no idea he was such a freak”. My own little Comics Journal guy was working up a pretty rigorous screed about how science fiction is great literature and all that, but I managed to stifle it.

The thing of it is, though, that maybe my neighbor really wasn’t thinking that. In fact, when I took the time to listen to him, he seemed pretty interested in the theological usefulness of the Manicheanism of Star Wars and Lord of the Rings (he’s in the Religion Department).

And maybe you can say that a fan who lets his fandom turn into creepy devotional obsession or just ordinary narcissistic disregard for an author or a performer’s humanity is a problem without indicting fandom as a phenomenon. At least, that’s what I thought I said.

[permalink]


August 20, 2003

Get Ready to Wiggle

We recently went on a trip to West Virginia, both to see the area of the state around Seneca Rocks, which I had always been curious about (I ended up thinking it was pretty but not extraordinary, John Denver notwithstanding) and to see a concert by The Wiggles, the Australian quartet of kidvid stars.

My toddler and my wife both like the Wiggles a lot. I found them really odd at first—sort of a queasy mix of catchy songs, non-ironic Pee-Wee Hermanisms, local UHF kiddie-show host amateurism, and Dr. Who-level cheeseball set design. I like them a lot better now, though I still find some of their stuff annoying and some of their songs are horrible mindworms that get in and never leave. They’ve also got a resident pirate character named Captain Feathersword who sort of functions like Worf on Star Trek, as the “go-to” character overexposed in a lot of songs and sketches because he’s the only one with a character schtick more compelling than bland niceness, non-fattening gluttony or narcolepsy.

In any event, the most interesting thing on the trip for me was an up-close look at the everyday landscape of celebrity and fandom in America. I’ve been to science fiction conventions a couple of times, and that’s one sort of fan culture. Science fiction fans are sometimes kind of skeptically non-slavish in their devotion. I remember being at one convention in Los Angeles when I was a teenager where William Shatner started singing some kind of song about whales and half the auditorium emptied in 16.5 seconds. Or scifi fans are so freakish and peculiar in their fannish attachment to particular celebrities or characters that they’re actually kind of interesting in their own way, and certainly not banal.

What I saw this time with the Wiggles was more dreary and ordinary and depressing, and I can only think of one similar incident in my life. I went to the bar mitzvah of the relative of a friend of mine when I was in junior high school and my friend knew I was a huge fan of Star Trek (the original series). So he dragged me over to meet Walter Koenig, the actor who played Chekov, who apparently also was related to the family and was there as a guest. I acutely remember the politely suppressed but profoundly pained look in his eyes when my buddy started introducing me as a Trekkie, and I just shook his hand and left him alone as hastily as I could.

Since then, I’ve never wanted to meet a celebrity in a public place, or seek an autograph, or even really take note of a celebrity’s presence in any noticeable way. When I was a cook in Connecticut after graduating from college, Dustin Hoffman once walked into my kitchen (apparently without asking, just decided to see what was cooking) and peered into my soup pot. I didn’t even look at him: my co-workers had to tell me later who that guy was.

We were at the same hotel that the Wiggles were staying in. My wife took a walk after we arrived from a 5-hour drive while I hung out with our daughter in the hotel room. It was getting late, and our daughter was in her pajamas. My wife called from the lobby, telling me that she thought the Wiggles were coming back from their evening performance (we had tickets for the next afternoon) and maybe I ought to bring Emma down to see them. Well, ok, I thought, she’d probably think that was neat.

So we go down to the lobby, her in her p.j.s. And there are about four or five other mothers with toddlers hanging around, a couple of whom, I later gather, live fairly nearby, and others of whom are staying in the hotel as well. The toddlers are tired and cranky. It’s pretty late. I'm already feeling a bit uncomfortable with the whole scene. And then I notice a couple of young women who don’t appear to have any toddlers in tow waiting just as eagerly.

Well, Emma starts to get bored and I start to get uneasy. So I take her back up to the room. The next morning Emma and I are walking through the lobby and we see Greg, one of the Wiggles, walking along. Emma doesn’t really take any notice even though she sees him. I don’t look directly at him, but I can see him flinch visibly as he passes us. He’s waiting for the inevitable ambush. I have no desire to ambush him. I just feel sorry for him. Predictably, he gets ambushed just about ten feet later by other parent-toddler duos, and then again and again, and so do the other Wiggles over the day.

I just keep thinking about how much time out of their day gets eaten up by this constant flow of “meet my child”, “take a picture”, “sign this please” and how confined they must feel by it all. More, I keep thinking about the pathos of the two young women who seem to want to meet the Wiggles for other reasons, the women without children. We see them about four or five times during the day, always trying to get the attention of the group. The Wiggles seem pretty patient with them, but also completely uninterested in what they’re apparently offering.

None of this is exactly a secret in our culture. As always, it is a bit different to actually see it up close, grubbier and sadder and more ordinary and just everyday human when it’s just in some industrial town in southern West Virginia on a hot and hazy summer day. I think fandom and celebrity worship are really fun, enjoyable, culturally productive and often creative activities when they’re confined to message boards, fan societies, fan fiction, encyclopedic mastery of the entire opus of a particular actor or writer or genre. Somehow when they translate into a tangible material connection to the everyday lives of the performers themselves rather than the text of the performance, it’s a different story.

[permalink]


August 5, 2003

Pirates 1, Sinbad 0

I have not seen very many movies this summer. A colleague of mine and I keep saying we want to go see the Matrix sequel, but at this point, I’d say she’s going to have to come over and watch the DVD in October at our house.

I did have a chance to see “Pirates of the Caribbean”, though, and like a lot of people, I thought it was a lot of fun. In fact, it underscored how unfun most summer movies have become, how in the timing and nature of the thrills they allegedly provide (every few minutes, you have to have your spectacular car chase fight-scene explosion-laden laser-shooting scenes) they have become as predictable and repetitious as pornography.

“Pirates” had funny dialogue, a great performance by Johnny Depp, a couple of the greatest sight gags in the history of the cinema, and some good quick-paced action sequences that came at appropriate moments in the story and stayed no longer than necessary. The plot was a little convoluted, but it often avoided easy cliché: even the climax did not do the expected thing, and allowed the stiff-upper-lip English officer a chance to be something other than a cad.

I get calls now and again from journalists working the pop culture beat. I like talking to quite a few of them—there are some smart, interesting folks at the Dallas Morning News, for example. I’m kind of Robert Thompson’s understudy, a sort of apprentice quote slut.

My usual schtick when I’m talking about pop culture trends is to provide a bit of historical perspective and note that what appears to be a new trend has a deeper history. That’s Standard Historian Trick #245, but it’s often true. Other times, I say that what appears to be a trend is not a trend at all, but that’s rarely heeded—by the time a reporter gets to me, there’s a paragraph saved for “expert opinion”, but the basic angle is already set in stone.

I also get a lot of mileage out of casting myself as the contrarian to the Usual Suspects, those bluenose experts who hate children’s TV or think reality TV is the end of civilization, or the kind of left-leaning cultural studies scholar who has a hopelessly dour, functionalist understanding of what’s going on in pop culture.

The most basic thing I have to say is the one that makes the least impact, not just on journalists, but on cultural producers, and that’s because it is banally true and because no one in the culture industry really knows how to systematically act on this banal truth. Pop culture journalists ask why certain movies or TV shows fail and others succeed, and try to find a trend there that predicts future success or failure, that reads the entrails and tries to divine the underlying secret threads that knit the cultural economy together.

The basic thing I say is that the best predictor of success is letting interesting, skilled and independently-minded creative people make something with a minimum of direct interference (though enforcing budgets strikes me as reasonable prudence) and not letting hacks, dullards, mindless derivatives, isolated egomaniacs or cynical burnouts anywhere near a camera or a script.

“Pirates of the Caribbean” is doing well in the summer sweepstakes because it’s a good movie. Simple as that.

Ok, yes, every once in a while a movie or a TV program that is truly and unambiguously shit makes a ton of money. Yes, all the time, movies and TV shows that are really great lose money. Some of the great ones are really only great for those audiences that get their rocks off on films made by Northern Europeans who hate everyone, including themselves, and are guided by a list of arcane and inflexible aesthetic rules.

Moreover, much of what gets slagged off as shit is actually pretty inventive in some fashion. The first season of “Survivor” was really damn interesting. “American Idol” interweaves some really primal narratives with the pleasure of feeling superior to the truly horrible aspirants while rooting for the truly superior ones. Though I personally hated the film “Titanic”, I had to concede that it was visually interesting and actually rather daring in its coupling of a big-budget disaster film to a single, unabashedly sentimental, across-the-railroad-tracks love story.

A lot of the time, the very best television will succeed if executives have the patience to let its audience build, even when it doesn't seem to go anywhere at first. If good stuff fails, there is often an underlying reason that could be dissected intelligently. “My So-Called Life” was a very well-done show, but it was too real in its portrayal of a certain kind of adolescence, too painfully reminiscent of its self-indulgences. The very best films may just have the bad luck to miss the audience they deserve while in the theaters, but a lot of films get a second and third life these days.

Much of the time the cultural marketplace deals out rewards and relative punishments that have some degree of sensible correspondence to quality. So when critics and film producers conclude that the twin failures of “Treasure Planet” and “Sinbad” mean that audiences just don’t want to watch traditionally-animated films any more, and went to see “Finding Nemo” because they prefer computer animation, they’re drawing the wrong conclusion. The simpler conclusion is this: “Treasure Planet” and “Sinbad” both sucked, while “Finding Nemo” was a good film. If “Finding Nemo” had been “traditionally animated”, but with its script and voicing, it still would have been a good and very popular film.

You don’t have to look for a trend and wonder if the audience is really hungry for movies about pirates, or if the audience is tired of movies made about TV shows, or if audiences find big-breasted action heroes in silvery jumpsuits less appealing this year because the Moon is in the Seventh House and Jupiter has aligned with Mars. There’s no trend, and no mystery: audiences can be fooled for a week, but not much more than that. In the cultural marketplace, almost everything that succeeds, succeeds because it offers someone some authentic pleasures; almost everything that fails does so because it breaks a covenant with its audience and gets made only to get made.

There is, of course, a success that goes beyond financial or beyond the duration of something being produced. Some shows and films have a success that can’t be quantified, and that kind of success is the deeper issue that ought to be what both journalists and academics are interested in. “The X-Files” succeeded because it was a well-made television program with some original twists on established narrative forms—but also because it tapped into deep strains of American paranoia about government and authority. It’s just that the latter success, what makes the program grist for both the scholarly and popular mill, doesn’t necessarily have much to do with the show’s economic viability. You could make another ten programs that try to tap the same wells (and the show’s producers tried) and not get anywhere.

What resonates in popular culture is often only clear in hindsight, and nearly impossible to predict, except in a cloudy, oracular manner. What makes money and loses money is easier to understand. Making crap will often pay back your investment, but if you really want to strike it rich in a big way, you’d better find someone with a firm, distinctive, original grasp of what entertains, amuses, delights and inspires, give them some money and stand back and watch what happens.

[permalink]


August 1, 2003

Powers and the Comic Book Human

I’ve been reading comics for a very long time. Superhero comics. The kind with impossibly muscled men and mammarily-gifted women in tight costumes. Yeah, I read other genres of comics too, and sure, I agree with Scott McCloud that “sequential art” has a lot of untapped potential, but basically, it’s the superhero genre that defines comics for me.

During the 1990s, the genre went through a lot of typical late-20th Century aesthetic contortions, passing rapidly through various postmodern, ironic, metafictional revisions of itself. Multiple explorations, both dystopic and utopic, of the superhero as authoritarian dream, investigations of the superhero as modern myth, representations of the superhero as sexual fetish. Satiric self-mockery of the genre as the refuge of maladjusted post-adolescent men and self-hating eviscerations of the genre by creators eager to transcend it or kill it off altogether. Some superhero comics tried to get back to their conceptual roots, and others tried to take their characters to the logical ends of their evolution.

I still kind of enjoy comics, but I feel like all of this exploration of the creative space that superhero comics inhabit has left relatively little satisfying room for either business as usual or further postmodern reconfiguration of the genre’s underpinnings. I don’t really want to read just one more story about how the Avengers kick the crap out of the Yellow Claw and his evil plans for world conquest, and I don’t want to read just one more story about how Batman is really some kind of fascistic S&M leatherboy.

There’s only one way to go forward, and that’s to tell good stories about interesting characters who happen to live in a world where people have superpowers and dress in costumes. To do that, comics writers are going to have to show the creative courage that the best science-fiction writers sometimes display, and that’s to figure out what it would mean to be a real person, a fully imagined human character, (with superpowers or no superpowers) in an unreal world.

There are a few series that have pulled this off either for a short span of time or in a sustained way. The initial issues of Kurt Busiek’s Astro City did a fantastic job of thinking about how to explore the everyday human scale of a superpowered world, before it degenerated into just another comic book about people in tights beating on other people in tights.

The best model out there now is the series Powers, which uses the police procedural as a way to reframe a fully imagined world where superpowered and normal people uneasily coexist. Powers is the way forward, and if standard superhero comics can’t go that way, they’re going to die the final death so many have predicted for so long.

The problem with the standard superhero comics is the problem that all serial melodrama has. The longer your characters go on, particularly if you’re not allowing them to age, the more that the accumulation of contradictory events in their lives and within their worlds creates a kind of toxic layer of underlying sludge that turns the characters and their surrounding mythos into a kind of fever-dream patchwork unreality.

Nothing ever moves forward in the fictional setting of Marvel and DC comics. The individual characters change slightly. They gain a power, or lose a power. They get married or someone they know dies. They are replaced (for a while) and then reassume their roles. Sometimes, as in soap operas, especially dramatic, irreversible developments get undone by spells or dreams or amnesia or by a creative decision to pretend it never happened. The temporal anchors of characters within their worlds change slowly over time: once upon a time, the two older men in the Fantastic Four fought in World War II, but that’s been erased. Presidents come and Presidents go, usually the real-world ones, but sometimes not—Lex Luthor, of all people, is currently President of the United States in DC Comics, in a rather pungent critique of the political order of things in the real world.

But the setting never really changes. Reed Richards may invent things that would completely, utterly change the world that we know, but they just sit in his headquarters, gathering dust. Superheroes may teleport to the moon or travel to the stars, but humanity just keeps taking the subway. Batman and Spiderman may spend hours every night stopping five, ten, fifteen muggings, and yet there’s another fifteen muggings to stop the next night. The Joker may escape the asylum and murder 100 people and threaten to murder another 10,000 but when he’s caught, he just gets thrown back in the asylum—from which he routinely escapes. Demons from Hell and angels from Heaven may routinely appear in public on the comic-book Earth and the existence of God and Satan may be as empirically verifiable as the existence of atoms and DNA, but ordinary people are either not notably religious or if they are, struggle with the usual crises of faith familiar to us in our lives.

Somehow all of this sits very badly with me now in a post-9/11 world, because it just reveals how much the superheroic character in his standard setting exists in a world full of cardboard standups and Potemkin villages. Marvel and DC say they don’t want their worlds to be worlds where everyday life has changed to match the fantastic technologies and superpowered realities of their central characters, so that we can continue to project ourselves into those worlds, so that the setting stays recognizable.

Well, it’s not at all recognizable to me. There isn’t a human being I can identify with or compare myself to save for a few sensitively drawn supporting characters in a few isolated titles.

I can’t project myself into a world where the people put a mass murderer back in an asylum every time he escapes, knowing he’ll soon escape again. Imagine if Charles Manson escaped from jail every summer and killed forty or fifty people. The only way I can understand that is if the writers depict the ordinary people of DC Earth as having enormous, boundless compassion for the mentally ill. I can’t project myself into a world where a lunatic environmentalist terrorist like Ra’s al Ghul routinely tries to exterminate millions of people, is known to have done so by the governments of the planet, and yet escapes and finds sanctuary time and time again. You can just buy that Osama bin Laden has escaped a global manhunt by hiding in remote areas of Pakistan, but if he’d killed hundreds of thousands of people and threatened to kill more, I don’t think there would be any sanctuary at all. There’s only so many secret headquarters out there. Played for camp, as in James Bond, you can just buy these sorts of premises. Played as grimly as some superhero comics do, you can’t.

I can’t identify with a world where in the recent past, several major cities have been destroyed utterly by alien invasion and nuclear terrorism, as on DC Earth, without any long-term political, social and cultural consequences for the people of that planet. Ho-hum, another city blown up. The only person who seemed traumatized on DC Earth by a major West Coast city being destroyed was a superhero. Everybody else just went about their business. On Marvel Earth, a time-travelling conqueror from the future just killed everyone in Washington DC and conquered the planet, putting hundreds of thousands of people in concentration camps and killing millions. It was a great story, taken to the limit, and then the next issue came and it was all forgotten. Ho-hum, planet almost conquered, could be attacked again tomorrow from the future, millions will die. Big deal. Move on to the next story.

We’ve seen in our world what happens when several thousand people die and a building crumbles. In their world, unimaginable trauma is shrugged off like the common cold, all in service to the next storyline. Al-Qaeda just gets written in as another stock organization of faceless goon villains.

The arms race among the people who write the comics has escalated out of control. There was a great Joker story once years ago where he killed three people. It was tense, exciting, gripping, and meaningfully horrible. Now the Joker offs thousands and even Batman just punches him once or twice a bit harder. Naughty mass murderer! Damn you, villain! The psychological and political banality of comics humanity renders most standard superhero comics unreadable, alien, remote. They’re the adventures of a few colorful characters in a cardboard universe of pod people.

If the people who write DC and Marvel comics want to save the genre, and walk the road walked by Powers (the writer of Powers is already walking that road in his Marvel series Alias), they’re going to have to make their unreal worlds more real.

To make them more real, they’re going to have to accept and embrace and evolve the unreality of the setting and all the humanity it contains, not just of the main characters. If superheroes can teleport to the moon, maybe fewer ordinary people would be on the subway. If a villain kills a hundred people, maybe he’ll be executed. If Batman stops twenty crimes a night, maybe the criminals will actually go to another city where there’s no Batman, or even more daringly, maybe people in Gotham City will actually start to behave differently, or maybe Batman will have to try and think about why people commit crimes rather than just punching criminals in the face every night. If there’s an invasion from the future or from space that kills millions of people, maybe the governments of Earth will actually try to organize defenses against such attacks. Maybe if you lived in a world where Hell and Heaven were relatively tangible places that regularly interacted with daily life, where the spirits of people damned to torment could be summoned up to testify to the living by any two-bit sorcerer, you’d behave a bit differently.

Maybe all the people of those worlds live in fear all the time, or maybe they’re just different and better than us in our world, where we live in fear even when thousands or a hundred or the next-door neighbor are murdered.

Life in extraordinary fictions needs to be extraordinary in order for it to be identifiably human.

[permalink]


July 12, 2003

Lisa Ruddick's "The Flight From Knowing"

Like several other academics posting blogs, I found this article deeply affecting in many ways. (The thread at Invisible Adjunct reminded me of the Ruddick article, which I recalled having read excitedly when it was first published in the Chronicle.)

Ruddick absolutely crystallizes my own state of mind.

Her article clarified my own struggles with my second book perfectly, both when I was working on the project some in 2001 and again now, as I wrestle with the hardest two chapters of the book. Ruddick helped me to understand what it is that I’m trying to accomplish with it. What started as a fairly straightforward comparative life history of three Zimbabwean chiefs, with an interest in the theoretical status of the concept of agency, has metamorphosed into a meditation on why academic African history, like most scholarly fields, generally has lost contact with an evocative sense of what it means to be human and a commonsensical engagement with society and culture in those terms.

The book as I am trying to write it is about trying to tell stories that I find compelling, to witness the things I think need witnessing, to tell the truth as I see it, to make sense of a history that is often very remote to my own life and background, to explain the questions and issues that need explaining, to connect with other human individuals and the human spirit as a whole. It's not, I hope, about the next moves on some giant chessboard where the game being played is theoretical one-upmanship of the kinds of futilitarian, despairing, sterile positions that postcolonial theory makes so readily available and inescapable in my own field.

I always go back to a moment in graduate school that my friends from those days remember very well, when one person in our program who was the ne plus ultra postmodern theorist of our cohort shoved me into a room and told me not to talk about any of the key French poststructuralists because I hadn’t read the entire corpus of their work. I asked, well, is what I’ve said about them wrong, in your judgement? No, just not sufficiently knowledgeable, because if one is more knowledgeable, there really is no basis for rejecting any of their insights, as I was struggling to do.

That was the tyranny of theory crystallized into one magic moment, made all the more peculiar coming from a fervent anti-foundationalist who was dedicated to equally fervent anti-foundationalists. How could a dedicated Derridean possibly claim that a comprehensive knowledge of the canon was required even to speak? That was the kind of practice made omnipresent by graduate students and their teachers, and by the general intellectual sociology and institutional practice of academic life. The haunting sense that one did not know enough to speak, that one did not possess all the theory necessary to “ground” the more homely details of one’s research.

That was where the joy and the passion of inquiry started leaking out of me like air from a balloon, where it somehow became shameful to say that I had been drawn to African history simply because it seemed interesting, because its intellectual challenge was substantial, because there were real things worth knowing in it that did not seem to me to be generally known.

That is where what Ruddick calls “questions of conscience” started dissolving into questions about what the proper theoretical position was, something one determined by mapping all the positionalities and reading all the professional tea leaves, not by an understanding of how any particular theory actually helped the struggle to understand and speak. Indeed, for some theorists, the point was to complicate and render speechless all possible positions, to render the academic enterprise impossible, invalid and indispensable all at once.

It has taken me a long time to struggle through all of this in my book, in relatively plain-spoken language, and the way this summer is going, it’s going to be a while longer before I finish. But Ruddick’s article—and the blog discussions that recall it this week—help a lot.

[permalink]


July 12, 2003

Crooked Timber has opened up shop. Way, way too much material here that's not just interesting but essential. I'm going to be spending a lot of time reading there.


July 12, 2003

Who Cares About Africa?

Judging from an article in the July 5th New York Times, Ronald Walters doesn’t.

Ronald Walters, who ought to know better, seems to have accepted that the current struggle in Zimbabwe is about the just redistribution of land from white interlopers back to its original African owners, and that to criticize the Mugabe government is to collude in an insidious white-dominated plot to prevent social justice from occurring. Walters says he’s not on Mugabe’s side when it comes to repression (very generous and principled concession!) but that he is “on the side of the people who claim there’s a justice issue in terms of the land”. Fine. That also means you’re not on Mugabe’s side, but Walters, judging from the New York Times article, doesn’t see it that way.

Walters doesn’t want to be one of those guys who want to “beat up on Mugabe just because he took land from some white people”. Let’s leave aside the whole issue of consistent governmental respect for law, the economic importance of property rights, and constraints on the power of the state, all issues that are of crucial importance in understanding why postcolonial African states are such perfect case studies of arbitrary misrule.

Let’s just say for the sake of argument that if land was taken from white people who took it and returned to its former black owners, justice would be done. Walters appears to be operating on the assumption that this is actually happening in Zimbabwe, and that it is the reason why Mugabe has become a target of international criticism.

Anyone paying attention to the situation, anyone even modestly knowledgeable, knows that is absolutely not what has happened. There was one round of meaningful land reform in the early 1980s under ZANU-PF rule. Meaningful because it actually involved some kind of real effort, budgetary expenditure and planning, not because of its results—even that round was a disastrous and preventable screw-up made all the worse by some erratic policies with cooperative farming and industry. At least it was a fair try, on a very small scale. Ever since then, land reform in Zimbabwe has been a colossal joke, a carnival of escalating corruption. The land taken since the 1990s has been taken on behalf of the rich and powerful as a vanity possession. Acres once productively worked, acres that once employed many and allowed Zimbabwe to export prepared foodstuffs and grain, now lie empty and fallow, devoid of habitation, owned by some party bigwig and unused.

Walters appears to be trying to defend a program of taking land from white people just because it takes land from white people. That’s not the politics of justice: that’s the politics of empty retribution. If Mugabe was interested in programmatic land reform that operated from some consistent principle, had a consistent policy design, and proceeded in a transparent manner, then the whole situation would be different. There is a real issue, and real justice that needs doing, and that shouldn’t be forgotten. But it has nothing to do with Mugabe’s maladministration of Zimbabwe.

It’s not just that Mugabe is not pursuing anything that could be called land reform. It is that the entire issue of land and colonialism is a colossal diversionary tactic that Walters and others have fallen for just as spectacularly naively as the American media. Zimbabwe’s current state has nothing to do with land, except that the land seizures have helped deep-six the economy even further than it already was. Mugabe has pushed the issue precisely because he understands that his one slim hope for hanging on is to hoodwink the network of Western intellectuals and activists who once mobilized on his behalf in the years of the liberation struggle, to make them think that this is the same struggle, 25 years later. It’s not.

The land issue is a figleaf for a corrupt statist oligarchy that has ravaged its own nation and stolen its birthright as fundamentally and thoroughly as the Rhodesians did. To talk sympathetically about Mugabe’s land objectives is the equivalent of buying snake oil from a carnival barker.

Maybe Walters was misquoted, but that’s how he comes off in the article: morally vacuous and uninterested in what might constitute either good land reform policy or good moral justice in Zimbabwe. Certainly if that’s what he thinks, he’s not alone. Robert Mugabe got a surprisingly enthusiastic reception in New York City not long ago from a number of local black politicians.

At least Salih Booker is brave enough to admit that there’s a journey to be travelled from reflexive idolatry for Mugabe and every other nationalist hack turned authoritarian to the real question: where does justice reside in Africa, and with whom? If we desire to show solidarity with the struggles of ordinary Africans, where should our sympathies lie, and where should our condemnation fall? It doesn’t matter who else is playing the condemnation game: wrong is wrong and right is right, and there is no excuse for softball pitches in that game.

I’m like Booker: I used to follow the party line and think that Mugabe and his associates were at least talking about real issues, with a real interest in confronting fundamental problems. I have to say, looking back, that I was wrong not just in terms of their later conduct, but in overlooking the clear, unambiguous signs of Mugabe’s political character right from the moment of his assumption of power and even long before it. He and his coterie have always been authoritarians and brutalists. It is just that we used to excuse that because of the exigencies of the “liberation struggle”, or to attribute stories of their conduct to Rhodesian propaganda. There was certainly plenty of that, to be sure, but just because the wrong people say it for the wrong reasons doesn’t make it untrue.

Which brings me to the other Africa story in the news this week. I have a hard time understanding why the Bush Administration is actually interested in Africa. I don’t really trust much that this President says, for which I think I have ample reason. Sometimes, however, you have to support the words, if not the sayer, and the interest Bush is expressing is, well, interesting.

It may be that the President chose to visit Africa precisely because the stakes are so low there for the US government and because African nations will have to be grateful and pleased by anything the United States offers, regardless of what it might be--contrasted against most of the rest of the world, where the President is now disliked by both general populations and leaders with an intensity that far surpasses garden-variety anti-American carping. It may be that it is also a cost-free way to claim that the United States remains interested in confronting autocracy and the problem of failed governments, as it claimed to be in pursuing war in Iraq. It is at least good to hear our President speaking about Taylor and Mugabe, and confronting Mbeki about his failed HIV and Zimbabwe policies. He doesn’t quite pull off the empathetic “sorry about slavery” thing as well as Clinton did, at any rate.

I don’t know whether we are going to intervene in Liberia, but I feel fairly certain that if we do, we will do so in a very limited, half-assed way that will probably make things no worse (they can’t get any worse, I think) but won’t make them any better. Don’t hold out for a similarly limited intervention in Congo, however, no matter how bad things are there.

To the deeper issues of poverty, inequity and injustice in Africa, Mr. Bush has few answers, but then, that could be said about Ronald Walters, too. Like Salih Booker and Bill Fletcher, I believe the answers to those issues begin with a consistent, principled understanding of what constitutes justice and injustice, a commitment that we pursue regardless of where it takes us and regardless of who stands condemned. The moment we exempt someone because of an antiquated reading of the obligations of nationalism or racial politics, because the way we read right and wrong is by seeing who is on which side before we speak ourselves, is the moment we turn away from any kind of meaningful address to Africa’s sufferings.

That’s as true for George Bush as it is for Ronald Walters: you don’t give a dictator a free pass just because he’s supposedly on your team, and you don’t overlook the consequences of failed policies just because they seem favorable to your own ideological hobbyhorses, whether that’s nationalism or the free market.

[permalink]


July 8, 2003

Of Geese and Men

In Union County, New Jersey, the county administration recently authorized the killing of thousands of geese, producing a predictable outcry. One activist complained, “I just can’t conceive of someone doing that to something as sensitive and intelligent as geese”. Another shrugged, “Even though goose poop is unpleasant, it can be cleaned up”.

Perhaps. What can’t be cleaned up so easily is the philosophical incoherence of this particular form of environmentalism.

The numbers of Canada geese and deer near human communities in the Northeast are not ecological systems that need to be preserved against human intervention: they are the consequence of human intervention (as is virtually everything we call “wilderness” in North America today, and indeed, as were environments in pre-Columbian North America).

The question is not the preservation of something natural or outside of the rhythms of human life. It is simply a question of aesthetics, in the end. What is more beautiful, a lake or stream with many Canada geese on it and feces on its banks, or a feces-free bank with no geese? Which is more beautiful, a mixture of woods and meadows with many deer feeding and living within that environment while also causing damage to automobiles and household gardens (as well as to many of the plants they eat, and to the other animals that rely on those plants), or the same woods and meadows with far fewer deer and far less of that damage?

This question cannot be answered with reference to the management or preservation of ecological systems. Even the argument for the preservation of biodiversity (which weighs heavily against deer and Canada geese) is in some ways an aesthetic argument: it is not clear that an ecology with a larger number of specialist, niche species is in some utilitarian or functional way inevitably superior to an ecology with a smaller number of generalist species, except perhaps that the generalist-biased ecology is more vulnerable to catastrophes because of its smaller reserve of genetic variation. It is more that humans tend to find variety in nature appealing, for very good cultural and intellectual reasons.

There is no way to argue coherently that a deer or goose, for example, is a more sensitive, intelligent, desirable animal than any other, and therefore deserves to live more than the species which its overabundance threatens. In a less managed, less human-inhabited environment, there would be fewer deer and geese because predator species like wolves and coyotes would feed on them. For the person who regards the death of geese as a tragedy because of the intrinsic intelligence and sensitivity of geese, their death by wolf would have to be just as tragic, and just as objectionable, as death by county supervisor. Meaning that the goose-loving activist cited above must also be a predator-hating activist in order to make any kind of sense at all.

The argument that geese or deer must not be culled ultimately comes down to, “What animals do you like best, and what experience of nature do you treasure most?” If that’s seeing geese in the local pond—and you don’t ever walk barefoot through the grass next to the pond—you will inevitably and legitimately object to having geese killed.

In matters of public policy, what you like is a defensible basis for action only inasmuch as you can make a good case that your tastes are intrinsically better than anyone else’s, or that the preservation of what you find pleasurable is part of a good management strategy for the public nurturing of cultural and aesthetic diversity, or that your preferences are shared by the majority. Such arguments only work well if they do not involve actively harming or diminishing the preferences of other individuals. I can argue that funding performance art through the NEA is good public policy because it nurtures artistic achievement that ultimately enriches all of American society and that would not otherwise be enriched because of the lack of a marketplace for such performances. The existence of such work does not reduce the supply of summer action blockbusters, so it expands rather than contracts the total cultural marketplace. Conversely, I could argue that a statue being placed on government property ought to conform to majoritarian preferences in its design. These are good—if arguable—premises.

Can you make a similar argument that forbids the execution of geese under any circumstances, and authorizes only much more expensive and more questionably effective techniques for discouraging their overabundance? I doubt it. Even with the culling of geese and deer, they remain in large (and replenished) numbers in environments around the Northeast, meaning that those who get pleasure from seeing them will still do so following culls. Failure to cull, in contrast, means that the pleasure that goose and deer watchers receive directly and aggressively diminishes the pleasure of those who seek natural environments free of goose feces and the depredations of overly large herds of deer. Is a goose more beautiful than a yard without goose crap? I can’t see how to make that argument work without a hopelessly sweeping recourse to the sanctity of all life, the kind that obligates one to start wearing a Jainist screen over one’s mouth.

Environmentalism ought to be one of the most potent, urgent political forces on the landscape. It can’t be until it grows up a little and gets beyond being a sentimental fashion accessory for the suburbanite who wants everything: a goose, a lawn, and eternally preserved property values to match. A real stewardship of the environment has to embrace culling and some of the costs and difficulties of living inside of an ecology that includes animals and plants that inconvenience and occasionally even endanger human well-being.

[permalink]


July 3, 2003

In Which The Nation-Builders Find Out That Nations Don't Grow on Trees

So I notice that the crusade to liberate humanity from tyranny seems to have slacked off a bit. Iraq is free, right? So, what’s next? There’s a long list of dictators out there, and an even longer compendium of preventable human suffering. Grab up the torches and pitchforks, guys!

Anyone who hesitated about the Iraqi War, if you read Michael Totten or any number of other folks, was a morally confused, faint-hearted, secret sympathizer with Saddam, a compromised lackey of totalitarianism. The least critique of the war was enough for the red-meat proponents of war to cast the critics into the pit of darkness inhabited by authoritarians, enough to make one ethically responsible for every body buried in a mass grave, enough to charge the critic with personal culpability for torture and suffering.

The thing is, I shared some of the anger that the pro-war advocates levelled at the left in the US and Western Europe, and I still do. I agree that many people have betrayed their own political principles by conspicuously looking away from the misdeeds they find inconvenient, and in particular, I feel considerable anger at the long legacy of Western leftist alibis and silences about the moral catastrophe of Third World nationalism in its (typical) authoritarian manifestations.

However, the only unique argument for war in Iraq appears to be in shreds: that the combination of Hussein’s misrule and repression, his possession of WMD and his imminent plans to act against the United States and its vital interests justified not just a war but an urgent war, a war that could brook no delay, no negotiation, no inspections, no dissent. All that is left of that combination is Hussein’s misrule and repression, which was grotesque and horrible and deserved to be crushed. Unfortunately there’s a lot more where that came from, and given the standard the prowar voices have set, any hesitancy about the military conquest of any tyranny in the world is intolerable hypocrisy.

Totten at least brings the same stridency and relative lack of complex introspection to the table when he’s talking about Liberia that he demonstrated on Iraq. That only makes the problem worse, however. Why isn’t Totten calling for a US invasion of the Congo, for example? Because it’s in the French sphere of influence? That’s no excuse! (As well as being arguably untrue, given US support for Mobutu over the decades.) Wait, has he called for the immediate invasion of Zimbabwe? Well, I’m guessing he’s said very bad things about Mugabe in the past, but as we know from reading his essays, that’s not enough. Merely saying that things are bad is just liberal hand-wringing and covert endorsement of repression, isn't it?

There are states in Central Asia that are sliding rapidly towards overt repression of human rights. Pakistan is a military dictatorship that has nuclear weapons and Islamofascists in the government. Libya and Syria are controlled by authoritarian regimes that have supported terrorism in the past. Somalia is an anarchic mess where the population suffers daily from violence and neglect. Sudan is ruled by a regime that regularly perpetrates crimes against humanity in the prosecution of a racialized civil war against one half of its own population. North Korea has one of the worst regimes of the last fifty years, its population starving and repressed, and it is building nuclear weapons. There’s also a little place called China, but that at least raises some complex issues, so maybe we can wait to talk about it.

You get the idea. Totten temporizes a little—Africa is far away, and not strategically vital. Then he says, “Let’s go and invade, because we have to”. At least he’s consistent, and I actually admire that—but that consistency means he literally cannot ignore or push to the back burner a single one of the cases I’ve cited. Every single one of them requires an invasion, right now, and an outside administration designed to build a good nation. Any failure to advocate invasion in any of these cases opens Totten up to the rhetoric of moral outrage he has so liberally vented at so many targets, because his past rhetoric has allowed no exceptions or nuance (except for committed pacifists, to whom he has given a free ride).

As the Bush Administration is now discovering, nation-building is hard and expensive work with an uncertain outcome that exposes the nation-builder to enormous financial and human cost. It can’t be done by people who aren’t terribly clear in the first place about what a liberal democracy is and why and how it works, and I think at least some of the top figures in the Bush Administration lack that clarity. Moreover, it is work that cannot be done unilaterally. It would be funny if it weren’t sad to watch the Administration flail about trying to find its way towards a bigger multilateral fig leaf to cover its exposure in Iraq—just as it’s rather bitterly funny watching President Bush talk about how important multilateralism is to any intervention in Liberia.

There is a lot of sudden nostalgia in the public sphere for the British Empire, which has produced some interesting, challenging writing that I think resurrects some points of value and complexity about the British Empire that had disappeared from public conversation for too long. (I hope to write a bit about Niall Ferguson’s Empire here soon). Typically this nostalgia blithely sails past the most crucial point of all, made most cogently by Basil Davidson in his book The Black Man's Burden. The long-term legacy of colonialism is pretty horrible when it comes to making nations. Yes, not all or even most of what is wrong today in African nations is the direct responsibility of the British, French or Belgians, but as an exercise in nation-building, imperialism in Africa was a spectacular, flaming failure.

Seventy or eighty years of colonial rule and large sums of money and human effort could not manufacture nations that were built around borders that made no organic or historical sense to their inhabitants, around political institutions that were alien and often structurally malformed to begin with, and around a legacy of corruption that was hardwired into imperial administration. What could be more corrupt than the racialization of power that the British and French Empires were centrally premised upon, more a betrayal of the core principles of liberalism than the construction of a two-tiered legal and administrative system that defined Africans as permanent subjects rather than citizens, more autocratic than making a “native commissioner” the lord and master of people he scarcely understood? These are not ills that afflicted African nations suddenly on the first day they hoisted the flag: African rulers slipped easily into the chairs warmed by the posteriors of their former rulers.

Nations are not built like assembly-line products. The hopeless, helpless careening of the American administration of Iraq from one principle or design to the next is entirely familiar and depressing to any student of modern empire. Ferguson, Robert Kaplan and others are probably right that the American government and American people have no stomach for building a long-term administrative apparatus for imperial rule. To avoid casualties to US troops occupying Iraq, those forces will have to treat every Iraqi as a potential enemy. To build an Iraqi nation and deliver the services that a government ought to deliver to people who are citizens rather than second-class subjects ruled as racial or cultural inferiors, US forces and administrators will have to be open to attack and responsible to the Iraqi people in a day-to-day manner, accessible and transparent. Time to choose which it will be, and I rather doubt that the latter choice is sustainable by the White House because of its inevitable costs.

Or by Michael Totten and people like him. Totten won't even admit that what he's envisioning is imperialism. What else can you call the military administration of a society by the citizens of another nation? None of the most over-the-top prowar voices seem to understand that you cannot make a nation with a gun. You can only kill the dictators. (And perhaps not even that.) There are ten thousand Charles Taylors in Liberia waiting for their chance to be President-Until-Killed-or-Exiled. Robert Mugabe is surrounded by little Mugabes, and even opposition leaders in states that are custom-designed for autocratic rule have a bad habit of becoming autocrats when they are victorious in opposition.

You can only make nations slowly, through persuasion and example and investment and the painful unfolding of history. If you want something resembling liberal democracy in Iran, for example, then put your money on Iranians who want it too, not on the US military. The fighting in the Congo will end when the fighters finally decide that they cannot live this way any longer, or their victims successfully fight back, or when a single group of combatants achieves a necessary and structurally solidified monopoly on force sufficient to suppress any opposition. There is no way for outside military powers to impose any of those things on the Congo, not without a force of a million men, decades of work, an intellectual clarity about the nature and origins of liberal democracy and trillions of dollars to match, and maybe, probably, not even then. If China is going to be a free society, it's going to get there the same complex and messy way that Western Europe did, because there are social groups that have meaningful power who want to be free and are willing to pursue their own liberation.

It is true that sometimes outside military force is productive or necessary either for humanitarian reasons or for the protection of global security. Afghanistan was necessary, whether or not a nation that meaningfully serves its people is left in its wake. It might even be a good thing to send the US Marines to Monrovia in some limited fashion. It would warm my heart a little to know that Charles Taylor is dead or imprisoned: a worse example of human evil is hard to find in the world in the last three decades. It is what follows that matters, however. Removing Hussein or Taylor or Mugabe accomplishes little if they are followed by equivalent rulers commanding (or failing to command) similar powers over their peoples.

If you live in a universe where the failure to pursue the defeat of tyranny with military force makes one morally culpable for tyranny, we are all of us either culpable or all of us committed to a new global imperialism that would have to be systematically different in some fashion than the imperialism we have known in the past. None of the people who have anointed themselves the moral paragons have offered even the smallest hint of a specific programmatic vision of how such a mission might lead to a world of free nations governed by liberal democracies and an end to human suffering. It’s time to put up or shut up for the nation builders. Calling for the Marines to invade Liberia is no big deal: it’s what follows that matters.

The new imperialists and nation-makers have gotten all the mileage they’re entitled to from outsize, hysterically overwrought condemnation of their moral inferiors. Since they can’t even draw a road map that gets from A to B, they’ve got no right to sit behind the steering wheel.

[permalink]


June 30, 2003

Sounds of Silence

I am not a quiet man. Even back when I graduated from high school, I was voted BMOC: Big Mouth on Campus. Most people who have spent time around me end up hearing a lot about my opinions on subjects ranging from “The Phantom Menace” to the Iraqi War. Unlike readers of this blog, who choose to come here, they get subjected to that simply by being around me.

I hope that I’m not a crashing bore or a strident loon when I start holding forth. My voice and my tendency to use it are pretty hard-won things for me, however, and I tend to react poorly when someone tries to take them away from me, whether out of malice or in a well-meaning quest for social justice.

The hardest issues in life are the ones where two or more incompatible positions have some undeniable validity to them.

It is absolutely true that women in mixed-gender settings find that their opinions and ideas get disregarded, ignored, reparsed and credited to men, marginalized and belittled. It is absolutely true that a man can speak and be described as courageous and ballsy while a woman saying the same thing gets tossed off as bitchy and tendentious. It is true that men in general and this man in specific talk too much, dominate conversations, interrupt, and wield rhetorical and institutional privilege. It is true that because of masculine privilege, male mediocrities and halfwits often manage to suck up most of the air in a room, blather on endlessly at the expense of everyone else, and commandeer the collective attention of entire groups towards the propping up of their own fragile egos and carefully tended pomposities. This is all especially true in academia.

It is also true that starting from these truths in search of redress and transformation sometimes ends up simply redistributing these injuries. It is also true that acknowledging the truth of these claims opens the door not just to women whose voices have been unfairly and painfully marginalized but to reverse forms of pre-emptive domination and a speech privilege that can be generously abused by female mediocrities and drones whose previous exclusion from conversation has been a function of the fact that they either have little worth saying or want the privilege of making argumentative claims without the responsibility for giving them evidentiary weight.

How can I distinguish between someone calling for an equal place at the table and someone who is simply being an anti-intellectual manipulator, a covert agent of 19th Century romanticism advancing the cause of feeling over thinking, connection over individualism, dialogue over debate? Or worse yet, manipulating that latent vein of the romantic temperament simply in order to gain exclusive control over the terms of conventionally argumentative institutional and intellectual contestation? Gerald Graff in his recent work Clueless in Academe does a good job of pointing out how Deborah Tannen’s oft-cited attack on the “argument culture” as a male-dominated enterprise ultimately functions as a supremely skilled example of that which it claims to despise, how Tannen not only participates in “the argument culture” but in some ways trumps it by melding her evidentiary claims to a rhetorical strategy that places any objection in a patriarchal prison before that objection even begins.

This is also how claims about women being silenced sometimes start: they put any male reader or listener in an impossible position. If, in the particular case being cited by the critic, a male reader or listener judges that in fact silencing hasn’t really happened, or the critic has questionable or malicious motives for making the claim, or the critic is misusing a public conversation as a place to make claims without being willing to defend them, or even that the critic is well-meaning and has a fair point that has limited scope, then the male listener is pretty well screwed in advance. To object, even politely, in any particular case, if the valid general point about masculine privilege and female silencing has already been made, is to cast the male speaker as Exhibit A for the prosecution, a captive specimen of patriarchal insensitivity. If you tell me that you feel silenced by my speech, and hurt by your silencing, then I have to decide whether to do something that you say hurts you. Put in those terms, it is an impossible situation, and one that imposes silence by holding men hostage to empathy. It becomes a self-fulfilling prophecy: the men who continue to speak after a preemptive call to silence are often the ones who cause the problem in the first place.

In the face of this dilemma, men either choose to object, and are then cast as hurtful assholes who just can’t shut the fuck up no matter what, or they choose to be silent—but that silence can’t be interpreted, especially when we’re talking about computer-mediated conversation, where you can’t tell who heard and chose silence and who just never listened in the first place. Even if you know a male listener chose silence, you don’t know why. Is it because he thinks the particular female complaint is right? Or is it because he judges the female critic unworthy of a reply? If he speaks up to say, “You’re right! I am an asshole who talks too much!” he reproduces the sin in question--as well as comes off as a self-righteous wuss cravenly looking for someone else to tell him, please tell him, that he’s a really good person, a parodistic archetype of the "sensitive man" who is also often subversively making the same old bid for domination of the conversation. "Women are right! Let me tell you how they're right! Shut the fuck up while I tell you to shut the fuck up!".

You might well say, “Well, if we invert the problem of silence back onto men, then they’ll know how it feels, and be moved to real transformation”. It reminds me of something that happened to me when I gave a faculty lecture here after my daughter was born and showed a picture of her at the start of my PowerPoint presentation, as a kind of explanation of why my project was in such a preliminary and crudely conceptualized state after a year’s leave. A female colleague confronted me afterwards and said, “That’s totally unfair. You realize no woman could get away with that: it’s cute when a man does it, but banal when a woman does. People expect her to raise her kid, and it’s not an alibi for her failure to complete work.” Quite right. Absolutely. That’s true, and it is unfair that I can do it and a female colleague can’t. But I would rather strive for a world where we all get to show pictures of our kids and talk about the burdens on our labor time rather than a world where none of us do. I would rather distribute privilege to everyone than deny it to all. I’d rather be called to expose and fight a dismissive reaction to a female colleague’s courage than be called to stop doing something myself. I'd rather be honest than have to self-censor.

The problem is that unless you reject outright the idea that some speech is qualitatively better, more useful, more helpful, more germane, more entertaining than other speech in various particular contexts, you also have to leave room for a given individual woman’s speech or a given individual man’s speech being better or worse, more worth hearing or less worth hearing, more useful or less useful, than someone else’s speech.

What makes this especially difficult for me personally is that I have a very hard time separating anti-intellectualism, which I view as largely illegitimate and very personally hurtful, from absolutely valid criticisms of male assertions of privilege in conversation, including my own assertions and abuses of privilege. In my experience, the latter is sometimes used as a Trojan Horse for the former.

For me, my voice really is hard-won, despite the fact that I was also born to it as a man, and I suspect this is true for many other male academics. Being a geek and intellectual from age 5 to 18 is no easy road for men or women, but in some ways, men probably get it worse in that male violence and intimidation—and the over-valuation of male physical prowess—are much more open, apparent and pervasive. In 4th grade (and most grades after that) answering a question in class usually meant I got the shit kicked out of me during recess. I kept answering questions anyway. That’s a hard legacy to surrender lightly, especially when you suspect, sometimes with some validity, that the people calling you to silence now are doing so with some of the same motives or interests as the people who used to kick the shit out of you in 4th grade.

So what do we do to acknowledge or deal with the validity of the complaint against some men in specific and against masculine privilege in general? Two very different things, depending on the nature of the complaint: we have to learn to distinguish between questions of power and questions of etiquette.

We have been very badly served by those forms of feminism, Foucauldianism and other kinds of critical theory that undifferentiatedly locate power everywhere, or reduce all kinds of interaction and social relation to nothing more than power differentials. When we react to every single form of daily practice as if it is as vitally connected to power in the world as every other practice, we lose any ability to set an agenda and react proportionately to the problems we face. To me, this is one of the subterranean ways in which certain flavors of Foucauldian rhetoric end up being reactionary: by placing power everywhere, and refusing to speak of some kinds of power as peculiarly or particularly illiberal, they encourage a kind of simultaneous rhetoric of radical anger fused with a futilitarian inability to actually do anything except complain about relative trivialities, because it is the trivialities which are accessible to critique.

We need to recognize that sometimes, male speech in cross-gendered settings is no more than bad manners. Manners are rules. Rules exist to promote a kind of equal opportunity access to a game, an equal chance to achieve objectives. The consequences of rule-breaking in some cases are minor, and even when they are more significant, they still have limited importance. They are annoying, the kind of thing one can kvetch about but not attack as an urgent social problem requiring urgent solution. Would all conversations be better with mutual respect and a consideration for all participants? Of course they would be. The “better” in this case is largely aesthetic and transactional: a conversation of this better kind would be more pleasing, more interesting, more revelatory. All participants would learn more from it. This is worth striving for, but one is not entitled to do more than get annoyed and frustrated about a conversation where certain forms of male speech have prevented the most interesting possible experience from emerging.

To follow this a bit further, when it comes to the aesthetic side of things, there is no necessary reason to favor either a “feminine” or “masculine” flavor to conversation. This is one of the areas where Tannen’s work is most seriously abused, when the comparison between a conversation that promotes expressions of feeling, personal revelation, empathy, and sharing and a conversation that privileges conflict, debate, argument and opposition is conflated with the difference between social justice and repression. If these two kinds of conversations really are “female” and “male” identified—a representation which I’m wary of from the outset, given how much it touches on stereotype—then they amount to nothing more than a “tomayto, tomahto” kind of difference, a preference and nothing more. I like both kinds of conversation: it depends on my mood, on the nature of the issue on the table, and on the other participants. (Moreover, some women I know speak “male” and quite a few men I know speak “female”.)

Where urgency is justified is when male modes of participation in conversation are connected to male forms of institutional and social power. But if those connections are read too generally and generically, they amount to nothing. It’s like Andrea Dworkin defining everything as oppressive heteronormativity, including any and all expressions of love between men and women. That’s a critique that ultimately amounts either to nothing or to a nearly-nihilistic kind of revolutionary demand that views 99% of everyday life as we know it as contaminated.

Anybody who wants to walk in between has to do the hard work of making specific claims about specific kinds of illiberality residing in specific practices within specific institutions. What kind of power flows, for example, from asynchronous conversation in a voluntary-membership virtual community, particularly one with no moderation governing what can or cannot be posted and where no one can physically interrupt the simultaneous speech of others? I submit: virtually nil. In that kind of space, there are no meaningful claims to be made about illiberal power, only claims about rudeness and aesthetics—which are not trivial concerns, but they come with a different kind of rhetoric and a limited right to make urgent demands on others. On the other hand, what kind of power comes from a department meeting that decides on the tenure of a female academic? Quite a lot, and here claims about male speech and male participation in discourse might justify an urgent rhetoric of demand and reform if they can be made in tangible, specific form.

So what’s the solution? If we’re just talking about the problem of manners, then we can promote a positive etiquette of conversation, a set of understood rules, that calls attention to male misbehavior. That requires men to be part of the discussion, however, and it also requires an understanding that manners are no more than normative suggestions. The more formalistic we get about the rules of conversation, the less productive conversations are. There is a happy medium in committee meetings, academic workshops, classroom discussions and bull sessions that we’ve probably all experienced at least once or twice where conversation flows smoothly between all participants, male and female, where there is a sense of productivity and accomplishment and movement, where egos sit in the back of the rhetorical bus, and where there is no obligation or requirement that everyone speak. When that kind of golden conversation happens, it does not happen because there are highly formal rules that require everyone to be recognized in turn and given their two minutes to speak. Most of the time, highly formalized speech is unproductive speech, a paper egalitarianism that robs all participants of their creative energies and their expressive freedom.

When formalisms matter is when the question of power is legitimately front and center. In tenure meetings or interviews with job candidates, in faculty senates, in processes that have a major structural role in deliberative choices for institutions, then formalism might be a way to address the problem of gender and speech, and only then. That’s when one might appropriately say to men, “Shut the fuck up (until it’s your turn)”. Saying “shut the fuck up” on any other occasion misunderstands the nature of the problem, applies an inappropriate remedy, and actively hamstrings the dialogic process that might change the everyday practice of conversation for the better.

[permalink]


June 25, 2003

Living in Historical Time

My father died unexpectedly of a heart attack two years ago, on June 5th. My daughter was born two and a half years ago in January. My first Father’s Day as a father, in June 2001, was one without my own father.

One and a half years ago the World Trade Center lay in smoldering ruins. American forces occupy Iraq today, sentries at the gate to an unknown future.

I sometimes tell friends that I am just now coming out of the “toddler cave”, newly ready to socialize.

This is true. It is equally true that I am perhaps just emerging from the shadow of a grief that reduced me to a ghost of what I had been, from feelings I hesitate to label "depression" because of all the pop-psychological narratives the term invokes and the casual access to my most indescribable inner spaces it seems to promise.

Until two years ago, no one very close to me had died or even been seriously ill. Three of my grandparents died when I was an adult, one of them this year, and certainly I mourned them, but their loss did not strike close to me. I sometimes feel I am like a child with a defective immune system, kept in a bubble, and then suddenly exposed to a serious illness as an adult. I had no defenses against loss, no expectation of tragedy.

Perhaps in that respect, I was like most Americans were on September 10th, 2001. It is hard for me to resist coupling my own feelings with the larger canvas of history. 2001 welded the two together for me. My strong reactions to 9/11, my sense of wrenching dislocation between the world that was and the world that is coming to be, are conditioned on my experience of personal tragedy. This confluence has inflated my feelings about both moments. I feel like I am living in historical time now, the narrative of my personal life entangled with the unfolding story of my own era. I don’t think I have ever felt that before—perhaps that is quintessentially what Generation X has always seemed to lack. It may have been a heady sensation for the Baby Boomers. It is a debilitating one for me.

In this case, it only redoubles my sense of grief and loss. For me, the scene of the towers falling was directly fused to my mental picture of my father dying alone on the floor of the bathroom in his office building, working too early in the morning to be found while he was dying. My sense of general fear about the threat to my life, your life, all our lives—even and especially including the lives of others caught up in the “war on terror” elsewhere, as victims, perpetrators and in the no-man's land that lies between the two—is intensified by the deepening of my love for my daughter, my sense of engagement with and responsibility for the future that she represents.

A few friends have suggested that therapy might help me with grief, with my feelings of confusion and wonderment at the pains and powers that come with crossing into the gravity and shadow of mature adulthood in such a short span of time. I don’t think so, any more than I think the therapeutic impulse can confront usefully the social transformations that September 11th has wrought in the world as a whole.

Some things cannot be cured, and must be endured. Or if not just endured, instead changed for the better and faced with responsibility and a principled understanding of what the limits and possibilities of action are. We can know, learn and grow in wisdom, and even fight back against the burdens of our time, looking for and making the miracle of progress in a world that has ceased to believe in it. But grief is grief. It is right and just that we feel loss from which there is no restoration.

I can raise my daughter, and try to move back from grief. I can try to find again my sense of joy in the world and reconnect to friends and life. My father is dead and will always be, and everything that depended on the changing possibilities of his life in the world is gone. A ruin is broken stone and scattered metal forever, no matter what gets built on it later.

[permalink]


June 24, 2003

Star Wars Galaxies: A Beta Review

On June 26th, Star Wars Galaxies (SWG), easily one of the most anticipated computer games of all time, will finally be released to the public.

I’ve followed its development closely for almost two years, and have had the opportunity to participate in beta testing SWG for the past month and a half. (So THAT'S where my blog has gone to. Now you know and knowing is half the battle.) The game is the latest example of the genre of computer game that I find the most fascinating, the massively-multiplayer online game (MMOG), where players not only play in a shared environment with thousands of others, but where their characters and the gameworld are persistent, changing and evolving over time.

In an accompanying essay for today, I explain why I think this genre is so interesting, and why its devotees are so fierce (and fiercely intolerant) in the desires they project onto these kinds of games. Here I want to focus more straightforwardly on Star Wars: Galaxies itself.

It is coming out, as MMOGs now habitually do, in a storm of controversy. Part of this is because like all MMOGs, SWG is an unfinished product—necessarily and always—but to a degree that alarms some beta testers. It’s true that as we finish the beta testing, some major bugs remain, and I have to hope that they will mostly be squashed before the game goes live next week. The last week or so of beta has not been terribly encouraging on this score, however, and anyone planning to play the game at launch needs to be aware of the relatively shaky state of the game at the moment. The developers may pull a rabbit from their collective hats before the 26th of June, or they may not. There are going to be a legion of lesser bugs and problems that will bedevil players for the next several months at the very least, but those can be tolerated, and come with the territory.

Much of the early naysaying comes not from testers worried about bugs, but from malcontents who wanted Star Wars Galaxies to be a different kind of game than it is, and who have been consistently negative about the design philosophy behind the game almost from the outset. This I am less worried about, because to be momentarily smug about it, Star Wars Galaxies is much more a game for me and the kind of way I like to play MMOGs than it is for them—and I’m pleased it came out that way. Sucks to be those guys, but that’s the way the silicon crumbles.

SWG, in my opinion, will eventually be the best of the current MMOGs on the market—but it is not a revolutionary design that takes the genre to new places. It is in some ways the BMW of the first generation of MMOGs, an impeccably built version of the core design ideas of the genre, with lots of bells and whistles and added features. But it is not racy or novel, and in certain ways, remains a bit staid. It is very much a MMOG, and nothing more. Some changes late in development also seem to me to be contradictory or problematic, and may need correction later.

The so-called “first generation” MMOGs were Ultima Online, Everquest and Asheron’s Call. In truth, there is no real second-generation MMOG as yet, despite a slew of new products on the market in the past year. Some of these, like Asheron’s Call 2 and Earth and Beyond, are simply failures. Others advance the genre in some singular or modest way, like Shadowbane, but remain largely within the genre’s constraints. SWG is no different in that regard, but it incorporates many of the best features from its predecessors, and in particular, from Ultima Online. One of the early, uninformed knocks on Star Wars Galaxies from its critics was that it was “Everquest in Space”. This is completely wrong, but it might not be unfair to say that it is “Ultima Online in Space” with vastly better graphics, more capacious gameplay, and a fictional backdrop that is more familiar and less derivative than Ultima Online’s faux-medieval fantasy mish-mash. In other ways, it is also The Sims Online, or what that disastrous game should have been: a world to live in rather than merely visit for a battle or two.

So what do I like about Star Wars Galaxies after a month and a half of beta play? First, the graphics are stunning. I normally don’t care much about graphics, if the gameplay is interesting. SWG, if you have a fairly top-end system, does a better job of using graphics to create a living, breathing, immersively visualized world than any other MMOG to date. Each of the ten planets in the game has a distinctive visual style, with flora and fauna to match. Plants sway in the wind, flags ripple in the breeze, spaceships fly overhead, butterflies and small animals wander across the landscape.

The look of players is the richest, best part of the visuals in the game. Character creation is a game unto itself: I suspect some people will simply sit there and create character after character and be almost satisfied with that. There are a large number of races to play, each visually distinctive, and a huge range of ways to individuate your own character—sliders that age the character, make him fat or thin, tall or short, give him a variety of hair styles and skin hues, and so on. When you get to know another player in the game, chances are you’ll remember him or her partly because of how they look—no two characters look exactly alike. (I’m hoping all of this stays intact: very late in beta, there was some ill-advised graphics optimization going on that was producing much less impressive facial appearances on characters and non-player characters alike, even on my high-end machine.)

[Screenshot: Don't mess with the frog.]

Second, I like the game because of the consistent quality of the gameworld itself. Each planet is dotted with a variety of creatures as well as “non-player characters”, other sentients of various races (including representatives of the Empire and the Rebellion). The animals generally seem to belong to an ecosystem (albeit a fantastic Star-Warsy sort of one) with predators and prey appearing near each other. Sometimes you can actually watch from a distance as a predator species stalks one of its prey, or see two factions of sentients square off and battle each other. Animals have remarkably life-like AI routines: some creatures stalk or track your character, others watch you warily and maintain their distances, some flee if attacked unless you attack their young or their lair, and still others approach you curiously and sniff at your feet. Non-player characters may gloat when they dispatch your character or run like cowards when they are overmatched, saying “I’ve got a bad feeling about this…” (Other times they just stand there, however: the deathblow animations don’t work consistently.) All of this makes it much easier to feel a sense of immersion.

Third, I like the game because it bases character development on skills rather than levels, meaning that over time, the difference between new characters and established ones is less about a massive differential in sheer power and more about differentiation of competencies. Yes, there are characters who are masters of their chosen professions, and others who are just novices, but both sets can interact meaningfully. Moreover, whatever you become is not permanent: if your character gets tired of one set of skills, he can surrender them and choose to learn another. This is a smart move from a managerial standpoint (no more complaints that later changes to the rules “gimp” or cripple an established character: if you’re unhappy with changes to your skills later on, just give them up and go into a new line of work). But it also makes for better gameplay.

Fourth, the game has a lot of potential to have the most interesting virtual economy of any MMOG because it promotes player interdependence and makes players themselves the source of almost all equipment and supplies, through crafting—in contrast to MMOGs that center on “loot”, acquiring gear through killing creatures and enemies in the game environment. This has made the kind of players who are used to fighting and being rewarded directly with equipment very frustrated, because they will have no choice but to turn to other players to get their weapons, armor and the like—much as some players of Shadowbane were frustrated when it turned out that to excel in the game, you actually had to have real-world political or social skills of some kind in order to achieve power within a guild. If SWG’s designers can more fully work out the virtual supply-demand chain and get the “drains” and “faucets” of the economy right—as of now, four days before launch, I wouldn’t say it’s working exactly right—this will be one of the most distinctive aspects of the game.

Finally, the game also has a zillion beautiful “small” touches. I’ll just mention one of my favorites: the chat system is tied into the emote system, so that the use of certain keywords in chat makes your character emote properly in synchronization with what you have just said to other players. If you say, “No, I don’t know where that is”, your character will shake his head negatively. (You can turn this system off if you don’t like it, which is another example of the great detail work in the game: the interface is highly customizable).


[Screenshot: Sharing a laugh with a Stormtrooper.]

So who doesn’t like SWG, and why? The famous “Bartle typology” that applies to players of multiplayer persistent-world games has four categories: achievers, killers, socializers and explorers. Achievers want to beat the game through developing the best, strongest, most ultimate character—and therefore want the ability to make their character better than everyone else. In Star Wars terms, they want to be Han Solo—and they want most other players to be stuck being Greedo or an anonymous stormtrooper. Killers want to directly compete with and defeat other players. Socializers treat a MMOG like a graphically enhanced chat room: they are playing to build communities, forge conversations, interact with others. Explorers want to see everything the gameworld has to offer, and try everything the game mechanics permit, just because it’s there. Some observers have suggested that there is a fifth “Bartle-type”, the builder, who wants to be a sort of apprentice to the game developers and leave permanent structures or features on the gameworld for others to use and experience.

SWG is not a very satisfying game for achievers, and killers may find it a bit frustrating in some ways, though player-versus-player combat is a reasonably vigorous part of the gameplay for those who choose to pursue it. For socializers, explorers and builders, it may be the best MMOG to date. In terms of other tribes or types common among gamers, it is easily the best MMOG for so-called role-players, who want to inhabit a gameworld as if it were an interactive fiction, and act out the persona of a character. For the players often referred to as powergamers, those who invest huge amounts of time trying to be the fastest to achieve hierarchical dominance in a MMOG, it also has some real satisfactions, but probably also some frustrations. But curiously, I also think it is a good game for novice MMOGers to try, despite its very difficult learning curve.

That being said, there are some shortcomings, beyond the current technical problems.

First, I worry a lot about some of the game-balancing efforts that were going on late in beta: rather than fine-tuning a carefully calibrated sense of difficulty, the developers were lurching between wildly different settings in both the economic and combat aspects of the game, and at the time of this writing, some parts of the game, particularly the mission system, which is a crucial source of initial monetary capital for combat players, are too hard, while other aspects, such as the fee for listing items for sale on a galactic marketplace, are too forgiving. Balance is never finished, and I’m sure there will be meaningful tweaks and adjustments throughout the entire lifespan of the game. I just hope they’re not wildly away from the happy medium when we start out. I’m equally alarmed by some of the late-in-beta tweaking of advancement rates: I really fear that the game has become hostile to solo players and so-called casual players. Time will tell, and as the developers note, it’s better to start hard and ease up than the other way around.

[Screenshot: Here comes PETA!]

Second, I’m a bit nervous about parts of the experience for crafters, who manufacture the goods that all players will need. There are aspects of gameplay that are highly tedious, in part because some of the items they can make have no imaginable marketplace among players, and as a result, players will find themselves repetitively making items that they will then destroy. It is a hard puzzle to crack: if you let novice crafters make highly desirable items, you undercut the labor value of advancing in rank and the economic differentiation that rank should provide—but if you leave them nothing but useless dross, then you promote a lot of empty gameplay. Early in development, there were plans for “schematic revocation”, which would take away low-level component designs from high-level crafters and force them to buy those components from novice crafters. This was a crude solution to a complex problem, and I’m glad it went by the wayside. The developers may have solved this problem with some cheap consumables called weapon powerups—because the real solution is making sure that everything a player can make has some actual utility to other players, and therefore is worth buying. There are “skill tiers” in a number of professions where there is almost nothing to make that will have measurable markets but where you are condemned nevertheless to having to make it. On this issue, SWG may not be there yet, but it has the potential to get there, at least.

Third, I’m a bit concerned about the experience of combat. For one, this has been one area where the tweaking has been dramatic and rather dizzying, and I have little sense of where the roulette wheel is going to actually stop. There remains a basic problem, which is that the roles of different kinds of combatants are not highly differentiated in terms of the different skill sets they have invested in developing. Different firearms skills pretty much work out to the same thing. There are good reasons for this, but it tends to make combat a sort of lazy affair where everyone just shoots away and the target either dies or doesn’t die. This is especially true when very large groups of 15-20 characters go out hunting together: there isn’t much that can stand in their way. At this very upper end, the game needs more challenging, tactically clever enemies.

Fourth, to echo a common complaint heard on the beta forums, the game may not yet be “Star Warsy” enough. This criticism has sometimes come from disingenuous sources, mainly players who wanted SWG to be highly centered on player-versus-player combat and who vent their disappointment with it through every possible channel. However, it is true at times that the game, though highly involving and enjoyable, feels more like a variant of the MMOG genre and less like an interactive fiction set in the universe of the Star Wars films. Some of that is the nature of the medium, but the developers could do a lot more in the coming months (and I think intend to do a lot more) in order to deepen and enrich the sense of belonging to the Star Wars universe. Ambient sounds and music that made their appearance late in the beta helped a lot, as did the growing number of stormtroopers marching through cities and the like.

My main list of the “Star Warsy” elements that are still missing would be:

a) Some sense that the Empire in SWG is the Empire of the Star Wars films: right now, it’s quite bland and morally neutral, a visual presence but one that has no impact on gameplay or even the gameworld as a whole unless you choose to play a Rebel-aligned character.

b) There are a number of missions in the game that a player can choose to undertake, but some of them don’t feel to me as if they really “fit” the Star Wars universe—they feel more like MMOG conventions and less like mini-narratives that could be going on alongside the major narrative of the films.

c) The dialogue available for non-player characters often seems colloquial and contemporary, and not the slightly stilted mythico-cornball voice of a lot of the Star Wars films.

d) Much of the gameplay that one can engage in is distant from the central narrative action of the Star Wars universe. Much of that is inevitable—that’s the difference between a fictional setting you inhabit and one you experience through passive media like film and books—and some of that will also be addressed by players who enliven the world through roleplaying. But the developers have more work to do in this area as well, provisioning content tools to players that immerse them more fully in this particular universe.

Fifth, some of the skills and professions available to players are either poorly conceptualized at present or have received little to no testing. The Merchant profession, for example, has skill tiers that are extremely dull and generic compared to almost every other profession, and its most interesting skills have been used by almost no players in the beta. It’s hard to say whether it works or not. The Bio-Engineer, to cite another example, is just conceptually a mess, a kind of design afterthought that functions as a sort of cul-de-sac supplier for the more versatile Creature Handler profession. There are gameplay elements tied to some professions that I strongly suspect are going to consistently malfunction after the game goes live—experimentation in crafting, for example, seems to lurch wildly in its functionality from one small patch to another, and it is very difficult to say how it is supposed to work or whether it works at all.

With some nervous skepticism, I still eagerly await the launch of Star Wars: Galaxies. I haven’t really fallen in love with a MMOG since the original Asheron’s Call, but I think this game is the next one that will really command my attention not just as an object of study but as a source of lasting entertainment.

[permalink]


June 24, 2003

MMOG of My Dreams

This semester, I co-taught a course on computer-mediated texts and digital culture. We did a week on computer games, but some of the core readings involving interactivity and hypermedia that we dealt with also frequently used computer games as a point of reference. I felt it was pretty clear that about three-quarters of the students simply didn’t understand why anyone should care about computer games (save for the undeniable fact of their economic importance). I would say that is a reaction I encounter more generally: computer games seem like impenetrable geek weirdness or adolescent silliness to my colleagues and most of my friends.

Sometimes I wonder whether that reaction isn’t fairly accurate, and whether my intellectual enthusiasm for these games is just a rationalization of a bad habit, my own version of Richard Klein’s Cigarettes Are Sublime. Certainly computer and video games are very far from what I imagine they could be, even within the terms of contemporary technical limitations, and it is often difficult for me to envision how they could get from where they are to where I think they could be.

Somewhere within the game form, however, is a kind of creative practice that has the potential to be a radically different kind of cultural experience as revolutionary and transformative in its own way as movies were in the 20th Century.

Maybe even two such kinds of experience, in fact. There is one kind of game which could become the only full realization of what Espen Aarseth has called “ergodic literature”, where the experience of reading is about choosing pathways to follow through a huge branching structure of narratives that overlap and recurve back on each other. This is what most “solo” computer games could be and what a few of them come seductively close to achieving in a primitive way, much as “The Great Train Robbery” first laid out some of the visual possibilities of cinema by putting the camera into motion.

Then there is the massively-multiplayer online game (MMOG) set in a persistent world, where thousands of people create characters who inhabit a changing but regularly recorded world, where developers shape the outer parameters and structures of their experience but players themselves in aggregate and individually also craft the “text” that is read and understood by other participants.

I have previously talked about how both of these kinds of games are presently limited by the industry models that govern their production and by the bounded creative vision of many of the people who make them. But the MMOG is also limited in part simply by the ferocity of desire that its devotees lavish upon it, and the unmanageability of their aspirations for it. Many MMOG players glimpse in the form an impossible possibility and that mere glimpse is enough to drive them almost mad. I include myself in this charge.

What is it that they see? Simply put, they see the enrichment of life itself through its fusion with fiction, a true Dreaming, an almost-sacred possibility of communion with imagination. A novel as capacious as life, a fiction unlimited by the labor time or mastery of its author. Life 2.0, with all of what makes life organic, surprising, revelatory, but always coupled to joy, fun, excitement, adventure. Dramatic conflict without tragedy, narrative motion without the boredom of everyday life, defeat without suffering. A fiction that one does not merely consume but always creates, where you can find out what happened next and where you can see what is happening beyond the frame of the camera or the page of the book.

Unreal, of course, and unrealizable. MMOGs are still limited by the labor time and capital of their creators, and are still mastered by them. They still have a frame, an outer boundary, past which one cannot go, a dead zone where representation and possibility and imagination stop dead. They have rules, like all games, which constrain, sometimes painfully so, what can be done within them. All of that is a structural necessity, and will always be.

Because they are multiplayer, they are also constrained by the aggregate of the humanity they contain. MMOGs make it clear that hell is other people. Richard Bartle did not pull his famous typology out of a hat: MMOG players recognize the achiever, socializer, killer and explorer archetypes because they are so visible within the experience of the games, apparent through conflict. Other players are the only way to make the narrative and imaginative capaciousness of a MMOG real, because we do not have and may never have AIs good enough to meaningfully simulate the sentient inhabitants of an interactive fiction.

Yet, other players are not like you, no matter who you are. They don’t bring the same desires or expectations or visions to the table. In some cases, their visions are commensurable with your own, but in many cases, they are perpendicular or even actively, aggressively contradictory to what you want to do and see and have happen within the fiction you are all experiencing. Ten thousand chimpanzees typing and one might eventually write Shakespeare. Ten thousand chimpanzees typing on the SAME typewriter and the best you can probably hope for is a text that contains Shakespeare, Beavis and Butthead, Stephen King, Thomas Pynchon and Nancy Friday all on a single page.

I can see a great many ways that current MMOGs could be better, richer, more capacious even given their limitations. I can see future tools, like better AIs and emergent systems governing the generation of content, that will make them bigger and richer and more engaging. But they can never contain the desires that they invoke, and that may always make the genre both fascinating and tragic. Fascinating because it is a palimpsest, a Rosetta Stone, to the desires that fiction itself awakens and fails to satisfy, a revelation that books and moving images have only been the weakest gruel to try and feed that hunger. Tragic because to feed a starving man just enough to waken him to the fact of his starvation is to let loose on the world a scouring, devouring appetite that searches desperately for satisfaction without knowing why it cannot find more than a moment’s rest from its cravings.

[permalink]


June 4, 2003

My Little Hobgoblin

Eric Rudolph—assuming he’s guilty of the crimes of which he stands accused, an assumption I think there is at least as good a reason to make as there was to believe Osama bin Laden the mastermind of 9/11 in the months following that attack—is a terrorist. There seems to be broad agreement about that in the public sphere. Not just a terrorist, but one who is morally indistinguishable from the other targets of the war on terror. His fatality count is lower than al-Qaeda’s, but that is not the measure of whether one is morally guilty of terror.

Eric Rudolph appears to have had the aid and sympathy of more than a few people in the area where he conducted his fugitive existence. It also seems there is broad agreement among pundits and bloggers that this is a vexing thing. Am I wrong in thinking, however, that conservative commentators have shown, on average, only a small proportion of the vehemence they would have had if a number of people had been spotted in Santa Cruz, California with “Go Osama!” t-shirts on? Andrew Sullivan is quite clear that Christian fascism and intolerance is as bad as any other form—but where is the equivalent of his “Sontag Award”? Where is the red meat feeding-frenzy over signs in Murphy, North Carolina expressing support for Rudolph? Where is the pulpit-pounding? Where are the bills in Congress proposing to rename Rudolph the Red-Nosed Reindeer "Liberty"?

More than anything else, this is why the disease of partisanship in American public discourse disturbs me so greatly. It is not because I believe being “in the middle” is somehow an intrinsically good thing, or that everyone should seek balance, or that neutrality and objectivity are desirable and achievable. Strong sentiment and distinct philosophical positions are a good thing. Bland, safe, calculatedly moderate arguments carry no necessary virtue.

I am viscerally repelled, however, by the profusion of thinkers and speakers, bloggers and otherwise, who seem unable to recognize that once you make a stand on principle, your flag is planted there for all to see. If you’re going to surrender your principles and lower that standard, then have the guts to say so. If you’re going to continue to hold other people accountable for moral and philosophical inconsistency, then have the courage to hold the line when the fault lies with people you normally count as allies. In fact, that’s when it matters most to speak up and be counted. It is no big deal for a neocon to hammer liberals all the live long day. That takes no effort and it takes no cojones. If a particular blog or opinion column or politician’s speeches are 95% bashing of the usual suspects and a meek 5% of the time involve some modest peep of self-reflection, then you’re in the presence of someone whose public thought degrades rather than elevates the life of the nation.

Consistency is no hobgoblin when it is about matters of fundamental ethical principle. I have no problem with someone who wants to understand the roots of Eric Rudolph’s actions and approach his sympathizers with an honest desire to comprehend their faith—but if so, you must display the same sense of ethnographic curiosity in approaching al-Qaeda. I have no problem with someone who approaches al-Qaeda and any sympathizers with uncompromising moral fury—but they then need to be equally dedicated in the pursuit of Eric Rudolph’s fellow travellers.

There are two absolutely basic things that a public intellectual is obligated to do. The first is to seek out issues, questions and problems which are highly relevant to your basic principles and philosophies, and apply those philosophies with rigor and honesty, making your core views as transparent as possible in the process. The other is to seek out those problems and questions which your own philosophies cannot deal with adequately, to expose and confess your own contradictions and limitations. Most public thinkers fail both tests, often badly, pursuing only the easy chance to score points for their own team.

It is time to play a different game, to take back public life from the stunted, withered, corrupted spirits who now rule the field. That is what the defense of liberty now requires: an incorruptible willingness to go wherever we must, even if we find the trail leads to our allies—or ourselves.

[permalink]


May 21, 2003

Sorry again for the long, long gap between updates. The rush to the end of the semester is always brutal, and I find myself looking to mid-May as a time to produce all of what I have been meaning to do. And then I get a free week and proceed to garden and sleep and see a movie or two and dawdle and stay away from the office. Then comes grading, and now, at Swarthmore at least, Honors exams.

But finally now an update, and I promise, really, more to come soon.


May 21, 2003

Monastery or the Market?

My piggy-backing on other people’s blogs continues: this time, I’m responding to Michele Tepper’s essay “Doctor Outsider”, which I found via Invisible Adjunct.

Tepper nails the snobbery of the academy dead on in one respect. There is no doubt that most academics regard the pursuit of careers outside the academy by ABDs or Ph.Ds to be a sign of failure or mental breakdown.

There is so much bundled up in that reaction. For one, the academic fear of irrelevance, coupled with an equal and opposite pride in the ethereal virtue of being outside the world of everyday life. For another, the confusion about desirable outcomes, the difficulty of figuring out how graduate school culminates in the anomie of the tenured life.

What I think Tepper does not properly credit is that when many academics express distress at a Ph.D in the humanities and social sciences choosing a career besides academia, they’re thinking like utility maximizers. Privately, they’re asking, “Why invest the time in doing a doctorate when most of the post-academic careers that one could choose do not require or benefit from having a doctorate?” Look at Tepper’s own career choice: couldn’t she have done that without a doctorate? Look at Kenneth Mostern’s postacademic career: couldn’t he have just done that in the first place?

Of course this is an appalling indictment of the near-total lack of pedagogy within doctoral programs in the humanities and the social sciences. There are certainly professors who teach their graduate students very well, but what they teach is largely the art and craft of being an academic. Becoming a Ph.D in history or literary studies is not about deepening expertise and knowledge that can be put to general use. Most undergraduate courses that are taught well bequeath knowledge and thinking skills to students that have many possible uses. Most graduate study in academic subjects is the opposite: it has no other use besides the reproduction of academia in its present institutional form.

This has a lot to do with the derision that greeted Elaine Showalter’s recommendation that Ph.Ds in English would make great screenwriters. I’m sure some of that reaction was the unbridled snobbery of academia, but some of it was also practical. A Ph.D who would make a great screenwriter would have made a great screenwriter before they got the Ph.D. The Ph.D surely would not have helped them become better screenwriters, and as a utilitarian credential, it not only does not open the doors for an aspiring screenwriter, it may actively close them. Quite a few newly-minted humanities Ph.Ds have found that their degree is an active impediment to seeking employment outside the academy. Even if the job-seeker is willing to start in an entry-level position, potential employers often feel that is inappropriate for someone with a doctorate—but that person often also lacks any experience that would qualify them for more advanced jobs.

Most academics shudder at the specter of the marketplace, and blame “corporatization” for all the ills that afflict universities and colleges. I think it is not nearly so clear-cut. It’s possible that universities and colleges aren’t corporatized enough, and in any event, most of the academics who decry the intrusions of the market into academic life are totally unwilling to embrace an alternative return to the university as a sacred, artisanal institution whose legitimacy derives from its relationship to the democratic public sphere and ideals of citizenship.

I don’t think this is a false binary. It really is a basic choice, to some extent, at least as a foundational principle about what is worth doing and why the academy exists. Though the partial commercialization and corporatization of the academy certainly has been accelerated by exterior pressures, I think many faculty collude in the process, often precisely those who complain most strenuously about it.

For example, anyone who has ever accepted either a Foucauldian or Gramscian understanding of what the university does—who either sees it as part of a ‘truth regime’ deeply connected to dispersed forms of bio-power or who sees intellectuals as engaged in a ‘war of position’ with the aim of revolutionary transformation of civil society—has more or less opened the door to the corporatization of the university.

That sounds like a perverse claim, but the direct consequence of abandoning a vision of intellectual life as involving a progressive accumulation of knowledge whose purpose is open-ended, non-ideologically fixed critical thought for an informed citizenry in a liberal democratic society is that it leaves academics no basis for articulating a privileged place for higher education in terms of the general logics of 21st Century global society.

If the university is nothing more than another power/knowledge factory or a subversive redoubt for the production of opposition to late capitalism, then there is no intelligible argument for its continuance in a non-market form that can be made within the terms of the larger public sphere. Of necessity, those arguments have to be oddly private, made only within guild circles, between academics, in journals and monographs and conferences and committee meetings. And the only grounds for continuing the conventional practices of academia, like tenure, peer review, or scholarly production itself, are hermetic and inertial ones: they are what constitutes valid power/knowledge claims, so they are what we do, or they are how we move chesspieces on the board of the “war of position”, so we bow to the rules of the game.

The only grounds on which one can legitimately resist the marketization of higher education, in the context of a larger public argument, is that some set of progressive and sacred values resides within it, that as an institution it cannot be and must not be understood in terms of a productivist logic.

There is something to be said for productivism, but only IF the entire operation of scholarship is laid bare to it. Imagine academic departments where continuous employment was guaranteed only by two things: bringing paying customers in the door and producing and disseminating knowledge that mattered, where “mattered” was judged by the size or importance of the larger non-academic audiences consuming that knowledge. I don’t think that is entirely a horrible vision. It would have the virtue of (cruelly) clarifying regimes of labor value: you’d have to be either an effective pedagogue or an effective communicator in your scholarship. In that system, the hundreds of other students I have had who would gladly pay for an extension of the broad liberal arts experience they had as undergraduates might find a graduate pedagogy to satisfy that aspiration. People like Michele Tepper might find that the work they did as graduate students actively assisted a variety of professional aspirations by pedagogical design rather than adaptive necessity.

We would sell what the market demanded, not what we austerely deemed the market required. Such a university would have to abandon requirements entirely, because they are a way of skewing the intellectual marketplace within a curriculum. You couldn’t determine whether the market for pedagogy was operating properly if there were required courses, because ineffective pedagogues who were good bureaucratic infighters could simply claim more than their fair share of the requirements and so claim a captive pool of “customers”. You’d have to abandon peer review or strenuously reduce it to no more than fact-checking. And so on.

It certainly would be a different kind of institution. To reject it out of hand because it bows to the market requires rethinking everything in academic life that invokes some kind of market differentiation between scholars and teachers. If you want to reject that vision completely, then don’t judge scholars by the quantity of their scholarship. Don’t judge them by the number of students in their courses. Don’t judge them by the grants they bring in. Don't judge them by how many citations they get in other academic publications. Don't judge them by their internally determined commodity value, by what other scholars deem valuable or interesting. Don't judge them against labor markets at all, in any way.

The easiest way to do that is not to judge at all, but that too is impossible if there are a limited number of jobs and a large number of job-seekers. What that means is that in a rigorously non-market academy, we have to judge by quality of knowledge and nothing more, and that this judgement cannot be by any instrumental rubric, whether left or right. The moment you say, “This knowledge matters more than that knowledge” and that assessment is based on a general utility, profitability, or significance, rather than ethereal correspondence to truth and beauty, you’re open to an intellectual market of some kind.

As I said, I think that’s something of a virtue, at least potentially. To admit that the ordering of faculty life is legitimately subjugated to some kind of market is also to admit that the bugbear of “corporatization” is with us not because of evil administrators or the sinister forces of late capitalism predatorily inserting themselves into our lives. We do it to ourselves, every day. The grad students at Penn who take up arms against corporatization by unionizing today are clamoring to join a profession where they will, of necessity, practice corporatization tomorrow. Not because they will fall from grace, but because the normative practices of contemporary scholarship accept and even embrace half-formed market logics of value, often quite particularly and intensely within the academic left. Any perspective which strongly instrumentalizes knowledge production opens that door, because it abandons an artisanal and sanctified understanding of academia.

If you want to defend scholarship as monasticism, you had better be willing to accept in generality an otherworldly and non-instrumental understanding of academic virtue, to believe in knowledge for knowledge's sake. You cannot conceive of higher education as such only when it is convenient to do so: the philosophical obligations of such a view must of necessity run far deeper.

If you’re sometimes open to a market understanding of what is good about some knowledge or pedagogy, then you have to be at least notionally open to much of what comes with “corporatization”. For example, grad students trying to unionize ought to be embracing corporatization, because the devaluing of pedagogy that permits an Ivy League institution to fob off its paying undergraduate customers on poorly paid and ill-trained graduate student instructors is made possible not by an exposure to the marketplace but by relative insulation from it. More customer rights demanded by undergraduate students in a market-driven rhetoric might lead universities to take the steps they responsibly ought to take: dramatically reducing the number of Ph.D candidates in the humanities and the social sciences, hiring contract faculty at reasonable salaries to teach courses, reforming sham curricula that pretend that putting 600 undergraduates in front of a video monitor of a lecturer is education worth paying $20,000 a year for, and so on.

Right now, what most academics seem to want, even and especially on the “left”, is a quasi-statist academic market, a market whose terms they exclusively define, where fixed consumption of knowledge outputs is dictated by control over disciplinary canons and library budgets and production targets are met by the dictates of tenure and promotion. This seems to me to be the worst of all worlds, without the generative fecundity of a ‘real’ marketplace of ideas and education and without the sacred, contemplative virtues of a life of the mind that serves the wider civic needs of a liberal democratic society.

[permalink]


April 28, 2003

We never talk anymore

Via Caveat Lector, this wonderful essay by Kenneth Mostern, a “post-academic”, someone who had the sinecure of tenure and cured it by leaving academia.

When all is said and done, I love academia, but still, it hasn’t always been what I sometimes imagined it would be.

I thought I was choosing my dreams and rejecting security, but it turns out I was choosing security at the possible cost of some of my dreams.

What Mostern most accurately identifies is the strange absence of talk between academic professionals about their own work or the larger weave of their intellectual interests. To some extent, this has to do with time, or the lack of it. A professor is also of necessity an administrator and a teacher and a scholar. The work expands to fill any time vacuum: clear a space for some purpose and you quickly find unsought obligations filling it.

As Mostern notes, however, that’s not an adequate explanation of the problem. It’s the alibi that everyone uses to lightly explain away the puzzling vacuum at the heart of academic life.

I had a chance a few years ago to attend a dinner for a guest lecturer. Some of my favorite colleagues from Swarthmore were there. The conversation started with issues that were fairly specific to the speaker’s presentation and work, but very rapidly grew into a fast-paced bull session aimed at the primal question, “What is a good society”? Afterwards, I talked with one of my colleagues who hadn’t been there about how this had been the best discussion I’d had since I was an undergraduate, and my feeling of melancholy about how rare and odd this conversation actually was. My colleague looked puzzled and said, “Sounds awfully simplistic".

A student organized a panel a month ago on the integration of the social sciences. Again, the panel was composed of some of my most valued colleagues, people who are accomplished scholars and teachers, who always have something interesting to say. It was a great panel, but I was also stunned and depressed by one thing that emerged out of it. Some of our brightest and best students, including the organizer, felt that this discussion was the first time they’d heard about some of the issues we covered, about how we worked through the intellectual terrain of our own discipline within the social sciences, how we confronted a new problem or a new idea with our own toolkits. I don’t think the students were exaggerating: we don’t talk that much about these kinds of issues, either to them or to each other.

A significant group of Swarthmore faculty met early this year to talk about a grant designed to help facilitate year-long seminars between mid-career faculty about new areas of mutual interest and inquiry. (I’m pleased to say that my colleague Mark Kuperberg and I submitted a proposal for a seminar on emergent systems and computer simulations that will be the topic for next year’s seminar.) I have to say I was stunned when several bright, interesting colleagues of mine essentially shrugged in response to the idea of the grant and said that if it wouldn’t help them get research work to the state that it could be published in a specialized journal, it wasn’t terribly useful. Conversation between faculty about a subject not directly functional to their research was not a sufficient end in its own right.

You can overstate the hold of this strange silence: this semester I have been having a wonderful time with another interdisciplinary seminar on emergent systems and complexity and have also been part of another faculty group reading postcolonial theory. I do get a chance every once in a while to talk with colleagues about their work, but usually because of accidents or strange interruptions of routine.

It is not because we are too busy. It is because we are afraid. For one, we are afraid because we have tenure, not because we have yet to get it: all of us with tenure fear starting a conversation that will reveal an irresolvable intellectual and political divide between ourselves and a colleague.

Who wants to live for 30 years with someone who hates you and will work to undermine you, especially knowing as most of us do that an academic environment offers innumerable opportunities for a “dour machiavel” to damage colleagues in ways that cannot be confronted or stopped? I was speaking the other day with a colleague from another institution whom I like a lot and I confessed (that's the right word for it) that I really liked Paul Berman's Terror and Liberalism. He looked surprised, "But it's quite a neocon book, isn't it"? I suppose, I said, but on a few things, I think the neocons have a point. That earned me a quick look of concerned surprise, much as if I had said I had cancer or AIDS. For most academics, better to keep silent and tend one’s own gardens in the very public privacy of one’s own specialization.

We are afraid of our own intellectual ambitions, afraid that other academics will think us simple or lacking knowledge and expert command of our subject matter. That is partly an artifact of graduate school training, its internalization of shame and its paranoid wariness.

More potently, it is an artifact of the massive saturation of the intellectual marketplace with published knowledge and academic performances of knowledge at conferences, workshops and events. We fear exposure of ignorance because in truth, most of us are ignorant.

The heuristics that disciplines and old ways of processing the flow of information provide to academics are on the verge of uselessness. The canon has no authority any longer, and there is no compass to point the way towards what we ought to know. No wonder some graduate professors rule their students like cruel eunuchs: they no longer know how to reproduce their own practices and can only train others through authoritarian mystifications and capricious dictates.

We have to embrace certain kinds of beautiful simplicities--one of which is to acknowledge the gloriously irreducible complexity of the human condition and meet it without the security blanket of well-manicured social theory and reflexive turns to our own epistemology, to write histories and sociologies and anthropologies that have the emotional intimacy and ambiguity of the best and richest fiction--and that are as seductive and engaging to read as those fictions. We have to be unashamed about speaking plainly, to feel that our deepest obligation involves being legible to our colleagues.

We should embrace our teaching mission and slow the ceaseless overproduction of derivative, second-order knowledge, of monographs or experiments whose only justification for being is a tenure dossier and the hollow, insincere rhetoric of a whiggish mission to add one more grain of sand to the pile of knowledge, a rhetoric that we spurn at every other moment of our waking days. We should ride the wave of information in its wild state, embrace the strange attractors that lure us from one subject to the next.

We should be more concerned with our quality of mind and less concerned with our production of scholarship, and place greater value by far on one good conversation about the nature of a good society than on the publication of five journal articles. That’s how we get to a new academy humming with passion for ideas and a generosity of spirit, where academics treat each other with the same tender pedagogical regard that professors at a college like Swarthmore now reserve for their brightest undergraduates, where the excitement of discussion and debate replaces the damp silence that nestles over the academic calendar like a fog.

[permalink]


April 14, 2003

Sorry for the relative absence of material here recently. April is the cruellest month in the academic calendar, and I'm feeling its cruelty with special intensity at the moment. Much more coming soon--I have a backlog of things to talk about. (And few of them on the war, perhaps thankfully.)


April 14, 2003

Masipula Sithole

Masipula Sithole died unexpectedly April 4th.

One of the major changes in my life in the past decade has been a growing willingness to simply regard the defective character and moral vision of some African leaders—most notably Robert Mugabe—as a major cause of postcolonial African nations’ problems. To be sure, it’s not quite that simple. Even venality, megalomania and cruelty are not simple things to explain or understand.

As my willingness to attribute at least some suffering to bad leadership—or even just bad moral character—has grown, my admiration for individuals who stood outside of the prevailing structures and failures and courageously reached for something else, something better, has risen accordingly. Masipula Sithole was one of those Zimbabweans, a person who could have simply kept his head down and gone about his business (despite the fact that his brother Ndabaningi was so infamous in the late 20th Century political history of the country).

Mas did not keep his head down. Nor did he settle for mere political opposition, the kind that simply strives to replace one set of postcolonial autocrats with another. He stood for something else: a free civil society, a liberal society, for the values and ideas and honesty that are the precondition of meaningful democracy. As Fareed Zakaria has observed, elections do not make democracy. Opposition parties do not make democracy. Liberalism as a system of values and internal commitments makes democracy.

People like Masipula Sithole make democracy. Mas wrote a popular opinion column which he used to criticize the Mugabe regime, but not as a rigidly ideological critic. He spoke truth to power, and truth to Zimbabweans, about who they are and whom they might want to be. He asked Zimbabweans if it really was true that they were “sons of the soil”. He asked Zimbabweans what kind of country they really wanted. He used wit, persuasion and patience.

He dared to dream. When I first met him here at Swarthmore, where his son Chandiwana Sithole was one of my favorite students, he gave a talk about the possibility of a “United States of Southern Africa”. It seemed hopeless, impossible, impractical, unlikely. It still does. It was a lovely dream, though, at a time when no one dreams about Africa, or when they do, they dream mean little insincere and Machiavellian dreams dressed up in glorious rhetoric about an “African renaissance”.

I hope Zimbabweans of good will can hold on to his dreams, and all the dreams like them, hold on to his intellectual and political legacy, hold on to the desperate thought that the future might be better than the tragic present. Mas left us all too soon, and we now can only clutch desperately onto the many gifts he gave so generously, gifts of time and thought and insight.

[permalink]


April 3, 2003

The Fog of War

Like Matthew Yglesias and a scattered handful of other bloggers, I don’t really have a strong opinion of the current military strategy of the United States in Iraq.

I am not a general, nor do I play one on TV. Playing lots of games of “Civilization 2” does not give me great insights into whether the 3rd Infantry Division ought to be flanking to the north of Karbala.

Even if I did have aspirations to be an armchair general, I sure as hell wouldn’t try to do it with the kind of information available to the American public at present. There is a great deal of accidental and purposeful misinformation flying around the global mediasphere, especially in the US media.

The war could be over tomorrow, or five months from now. My concern has never been about what would happen during the war, but what I see as some of the inevitable long-term consequences of the way in which we went to war and some of the unavoidable disasters that will follow in its wake.

About the only thing I do feel confident in saying is that if American policy-makers overestimated the degree to which Americans would inevitably be met as liberators by the Iraqi people, they’re stupid. I don’t know for certain that this misunderstanding was typical within the Bush Administration, but I think the evidence is fairly good that there's a fair amount of this particular hubris in the air--or there was before the war started.

What I am struck by more than anything else, however, is that hardly anyone in the public sphere seems in a deeply thoughtful mood, either in the mainstream media or in Blogistan. My own feeling in watching or reading some of the news coverage is a complex mix of emotional and intellectual melancholy combined with a sense of open curiosity about what is happening on the human scale of these events. The complex sadness I felt when I saw a photo of a Marine cradling a girl whose mother had just died in a crossfire, or the amusement I felt seeing a young Iraqi boy with a Batman shirt accepting gum from a US soldier: none of this is a talking point in some predetermined, shrill argument for or against the war.

Where are the novelists and poets of the daily grind of the war, the people who call us to some deeper meditations about the meaning of it all, who bring us together in a contemplative pause where the lion lies down with the lamb and the warblogger sighs heavily in sympathetic unison with the critic of the war? Where is the general humility in the face of events vastly larger than ourselves, the reflective pause?

Why must every unwinding of the widening gyre be ripped back immediately to the hurly-burly of crudely diametric rhetorical combat? Why can’t Andrew Sullivan or James Lileks or Glenn Reynolds allow themselves the necessary luxury of moral ambiguity as well as empathy for the whole wide world and all the frightened people in it? Don't they have a single doubt or regret? Isn't anything messy or difficult in their world? Why does Patrick Nielsen Hayden get vaguely harassed for feeling a moment of magic connection with a single American soldier as opposed to the generic abstraction of humanity as a whole? Why must antiwar bloggers drag every utterance and image coming from generals and politicians and soldiers through a brutalizing vivisection? Why is everything part of some media conspiracy? Where is the curiosity, and yes, the excitement, the pulse at the temples, the little heart-skipping trill of empathetic fear for men and women in harm’s way, all of them? Where is the simple fascination with the awesome technological and logistical scale of the war?

Why is everyone in such a rush to line up all the ducks in the world in a row?

[permalink]


April 3, 2003

Some adjustments to my blogroll. I especially recommend cobb, the blog: Michael Bowen is one of the most breathtakingly unclassifiable and interesting writers I've come across.


March 25, 2003

I'm sure most people who know the Internet already know about it, but I strongly recommend The Agonist for coverage of the war. Mirrors are here and here and here. It's vastly clearer and more compelling than the television coverage.

Always adjusting my blogroll--I use it as a transportable bookmark for myself now. I've added the Invisible Adjunct, which is a wonderful blog, and Matthew Yglesias, who is really terrific. (Maybe they do make them smarter at Harvard after all.) I'm also enjoying Tacitus quite a lot. I give up on Andrew Sullivan, regardless of my commitment to be open-minded. After a while, he's excruciatingly predictable: I could write his next column for him.

My main entry for today is a very long essay; I'm going to leave it here for now, but move it into the sidebar eventually.


March 25, 2003

The authentic temptations of interventionism

I am going to take a bit of a detour into some of the substance of my current book-in-progress. The book started as a comparative study of the individual histories of three Zimbabwean chiefs, but it has slowly grown from that foundation into a set of essays on various aspects of modern African and imperial history seen through the lens of these three men’s lives.

One of the themes that crops up in several of the essays is my feeling that we need to revisit the moral and intellectual origins of British imperialism in Africa, to rediscover the extent to which the colonial governments the British established were built through improvisation and negotiation as well as military force and coercion.

When I started drifting in this direction with my work on this book, it was early in the year 2001, and I thought that the book would be relevant largely to audiences with a particular interest in Africa, or perhaps at best a wider audience of readers interested in the role of individuals and the nature of human agency in history.

In the past year, some of my thinking has developed an uncomfortable new relevance, in an unexpected direction. Many intellectuals, from many different perspectives, now assert that the United States has taken a huge step towards ruling a formal empire, one that more than a few commentators have likened to the British Empire as it stood just prior to the 1870s. Empire and imperialism are terms loosely used and abused by many. Virginia Postrel is right to be skeptical about the word, but only up to a point. Most centrally, empires have territorial holdings in places that they do not regard as being part of their own national sovereignty--and we appear to be on the brink of that.

This is a good time to revisit the 19th Century establishment of the modern British Empire, not for the purpose of taking cheap shots at the Iraq War via analogy but so that we can understand the authentic appeal of empire. We forget how fervently many people of goodwill and high moral character saw the spread of British power as something that would benefit all of humanity. The “civilizing mission”, for many, was not narrowly or viciously ethnocentric: it was to bring the common joys of peace to warring nations, to bring the benefits of trade and industrialization, to bring medicine and science, and even for some observers, to bring democratic values, though few Europeans saw those as pertaining to Africa or Asia in the near-term future.

Even among the most open-minded and radical thinkers of 19th Century Europe, this perception of imperialism as a progressive, emancipatory intervention in non-Western societies was checked by a fundamental, deep-seated racism. Whether this racism was a root cause or engine of imperial expansion or a parasitic accompaniment, it nevertheless had the effect of choking stillborn any possibility of imperialism as emancipation.

The historical consensus is that imperialism was not wrong merely because it was racist, but because it violated the sovereignty of innumerable peoples and cultures. This sounds like a simple claim, but it is a very complicated and important one, because sovereignty lost in this case was lost in certain respects forever. This was not a case of the simple occupation of territory and its eventual return. Much of the moral anger at colonialism and its aftermath has to do with the ways in which Europe’s expansion foreclosed the huge variety of divergent futures that non-Western societies had pointed towards before 1492, creating a human monoculture, a single condition of possibility.

This is where I think the genuine temptations of intervention present themselves. Sovereignty has become not just an important principle but, for many on the left, the only moral value which they defend in international affairs, especially in reference to relations between the West and the developing world.

Thomas de Zengotita had a great article in Harper’s Magazine in January 2003 that argued that even those who claim to have turned their backs on the Enlightenment are profoundly dependent upon it any time they make a claim about social justice or politics, anytime they argue about how the world ought to be. Even the injunction to respect non-Western values and sovereignty over those values ultimately derives its ethical force from the Enlightenment.

When you defend sovereignty as the only moral principle in all the world, and say that all intrusions, forcible or otherwise, are wrong by their very nature, you ought at the same moment to deny yourself any and all judgements about the places and peoples you deem sovereign. If East is East and West is West, then the twain really must never meet, and humanity is sundered from itself, the globe inhabited by ten times ten thousand variants of the genus Homo. If you raise sovereignty to the singular sacred principle, then human rights, civil liberties, democracy, and freedom are no more than local and parochial virtues.

And not even that. Because once sovereignty becomes an impermeable barrier to intervention, we have to ask, “Are nations the proper unit of sovereignty?” The answer is clearly no: peoples or “cultures”, in the ethnographic sense of the word, are what assert the most meaningful claims of sovereignty, of an inalienable right to difference. Meaning that from such a perspective imposing Roe vs. Wade as the law of the land on a town of Southern Baptists in Georgia is morally little different than invading Iraq with tanks: the difference is only in scale and method of imposition. The Constitution itself is then an imposition, as is any law which intrudes a larger political power onto the scene of some bounded, well-defined practice of everyday life in the name of enforcing a larger system of rights and obligations which the smaller community refuses.

We lose also the ability even to criticize forcible imperial interventions into other cultures or sovereignties because some cultures are demonstrably imperial by their "nature". If it is the culture of Islamic societies to convert other societies to Islam, by trade or by force, or the culture of early 21st Century America to bomb and invade, then who are we to criticize? That’s just their way, and in an ethical system that vaults respect for sovereignty to the supreme position of virtue, all ways have their own legitimacy, even violations of sovereignty committed in the name of cultural authenticity.

A journey through that hall of mirrors always brings us back to interventionism. We are all interventionists now. We should be able to spare a gentle thought or three for late 19th and early 20th Century British imperialists as a result.

The question of the 21st Century is not whether interventions should happen, but how they should happen. It is a question of method and result, not of yes or no.

The reflexive protection of sovereignty is what has led us to this bad moment, where a weak and evasive leader, George Bush, can pursue an utterly destructive method of intervention and command the loyalty of many people of good will because the alternative seems to be the hypocritical defense of a corrupt network of hollow national leaderships, and the betrayal of human emancipation. The United Nations is a broken institution because it claims to represent the world, but only truly represents the will of heads of state, many of whom do not represent their own people. The Zimbabwe which sits in the General Assembly is not Zimbabwe: it is Mugabe.

The alternative to war is not isolation. It is not to render unto Hussein what is Hussein’s. The alternative to George Bush is not the United Nations: they are both contemptible in the execution of their obligations to humanity.

The British Empire failed not because it violated sovereignties, but because it was hypocritical in its mission to civilize. It killed and imprisoned and punished those who sought no more than to defend their legitimately different ways of life, using military force where dialogic suasion was the only moral strategy. It defined the parochial and local virtues of English society as the central values of civilization. Civilization is not tea and lawn bowling. The British Empire democratized at home and constructed new autocracies abroad. It promised the rule of law and respect for citizens and then made imperial subjects into permanent subjects denied legal recourse and forever condemned to servitude. It held forth the promise of rights and snatched them back the moment that men and women walked forward eagerly to claim them. It ruled without hope or interest in understanding its subjects, and dismissed the many genuine moments of connection that presented themselves as graspable possibilities.

We are already well down the road to similar failures. The United States Constitution wisely has as its first principle that the power of government must necessarily be constrained in order to secure the blessings of liberty. Where are the constraints now on American power abroad? There are none remaining. Is our judgement so unimpeachably correct, our government so godly, that we can be trusted with such a power? The Founding Fathers did not trust their own creation with that kind of untrammeled authority. The Declaration of Independence underscores that freedom comes from below, from the determination of a people, not as a grant or gift from an overlord—and it makes clear that all peoples everywhere have a right to be represented, that decisions should not be taken in their name without their willful assent.

If we are bringing democracy to the world, then let us bring democracy, and follow the best traditions and instincts of the United States. Intervention is a double-edged sword. If we act against sovereignties in the name of human rights, then we must be open to being acted against. If humanity as a whole rejects capital punishment as a fundamental violation of human rights, for example, then the United States has no business pursuing it--not if we want the right to intervene on behalf of human rights ourselves.

That is the crystalline moment where interventionism becomes immoral imperialism: when the pursuit of human emancipation is not a reciprocal obligation that binds the actor as well as the acted upon, when the honest pursuit of freedom everywhere curdles into cynical oppression.

[permalink]


March 18, 2003

Put away your puppets

I want to be very careful about avoiding the blog echo-chamber effect with what I put up here, where I’m just doing an elaborate version of “me too” on something that fifty other web pages have already noted.

But here comes a “me too”: Justin Raimondo’s essay on the right and wrong kind of antiwar protests, which I found via Electrolite, is vitally important. Anyone who is planning to oppose the war needs to read it, print it, staple it to their clothes, memorize it. Put it in a waterproof bag and take it into the shower with you.

This is not the time for the usual self-indulgent let-a-thousand-flowers bloom, let the nutty Spartacist have his turn at the podium, let Sheryl Crow talk about how war is bad for flowers and other living things, approach to political action. This is not a festival or a be-in or a happening. It’s not a space for creative frolics and really cool paper-mache puppets.

The war is coming, unless Saddam Hussein blinks in the next 24 hours. None of us can stop it. Give that up right now: you cannot stop the war. Don’t even try. Don’t even fantasize that you can.

You can only prepare to exact a political price from the people who led us so poorly to this point, and to do that, you need to make the war a bigger issue than the antiwar.

Raimondo nails it perfectly: all the plans for direct action that involve “no business as usual” gimmicks like blocking traffic, chaining oneself to fences and the like are pure, unadulterated narcissism. They’re about anointing yourself a virtuous, righteous person and performing your virtue on the public stage. You want that, come by my office and I'll give you a little "I'm a Good Person Because I'm Against the War" badge to pin on your shirt and I'll applaud you every time I see you walk by.

The "direct action" visions circulating out there now are not about building the largest possible coalition of opposition to the Bush Administration, not about building a political consensus, not about laying the groundwork for 2004. If you really care about opposing the war, you need to put your own selfish needs to proclaim your virtuousness aside and keep your eyes on the prize. Large public gatherings that are respectful, quiet and rhetorically modest would be a good thing, sure, but for the moment, little more than that. Raimondo's "Lincoln-Douglas Debates" are a good idea, too.

It’s not about stopping the war. It’s about what comes afterwards. For the moment, we might as well sit tight. Anybody who leaps in on day one with stuff like spilling red paint on the steps of City Hall or lying down in front of military trucks runs the risk of looking like a tremendous doofus depending on what happens in the first week of the war, and will probably alienate many potential supporters even if the actual unfolding of the war does little to improve Bush's standing or credibility.

Let’s say that Saddam Hussein’s troops use chemical or biological weapons, or there are significant terrorist attacks within the domestic United States. For reasons of public image alone, that would be a bad time to be pursuing silly little direct actions or be caught on tape screaming "Down With Running Dog American Imperialism! Up With the Virtuous Multitude!".

More importantly, if something dire happens involving chemical weapons or terrorism it means that an antiwar movement is going to have to be generous in conceding some of its own faults and errors, because it’s going to mean that Bush had some legitimate reasons to go to war. At that point, we would need to make it clear that the issue is not war itself, but the incompetence of the way the run-up to war was handled, and the lack of vision about how to handle its aftermath. If antiwar activists spend that first week chaining themselves to fences and burning American flags, they will have already lost the antiwar struggle should at least some of Bush’s reasoning be vindicated by the course of events.

Prudence, patience and planning are what’s needed now. That’s what has worked for the Republican grassroots: ever since Barry Goldwater’s defeat, they’ve been organizing steadily, laying down deep connections with actually existing communities, thinking about what kinds of rhetoric carries water in the public sphere, and disciplining or ignoring errant nutcases and fringe elements. If you want to exact a price for this war, led in the way that it has been, you’re going to have to be similarly focused.

[permalink]


March 17, 2003

Riffing off the rich, interesting discussions coming out of the Games Developers Conference at Greg Costikyan's Games*Design*Art*Culture, I offer my own lengthy and tortured vision of the road ahead for the gaming industry.


March 17, 2003

The innards of power

What I hate most from some protesters and activists in the antiwar movement is their defective conception of how power actually works inside the Bush Administration, their stereotypical and caricatured vision of their opponents.

I understand: it’s a political tactic. Ridicule makes good agitprop, or so they think. Just on that score alone, I think they’d better think again. George Bush may actually be strengthened by the constant drumbeat of portrayals of him as stupid, because it helps him do the usual populist judo-throw of dismissing his opponents as effete Eastern eggheads and crackpot college students. Anti-intellectualism runs deep in America.

Caricature also makes many folks uncomfortable: it seems unkind, overwrought, hysterical. It’s ok when it comes from David Letterman or Conan O’Brien: they’re paid to do this stuff. As a weapon in the streets, I wonder about its effectiveness.

What really worries me, however, is that at least some antiwar activists I encounter seem to believe wholeheartedly in a conspiratorial interpretation of how the White House actually works on a day-to-day basis. Bush, Cheney, Rumsfeld and other officials are depicted as always knowing what they want to have happen and always doing exactly what they need to get what they want. Everything is a plan, everything is instrumental, everything goes according to script, and there is perfect unanimity between all parts of the administration. It all locks perfectly in place.

In a more scholarly context, I tend to characterize this view as “power always knows what it needs, and always does exactly what it should to satisfy its needs”, with the usual corollary being that if the powerful do not get what they want, it is because they were opposed. If you want a tremendously sophisticated and often interesting example of this way of thinking, Perry Anderson’s essay in the London Review of Books is a pretty good read. For Anderson, everything that is happening now is part of the same seamless design, even much of the opposition to the war. (I clearly fall under the heading of a 'prudential opponent' who is part of the problem, not the solution, in Anderson's eyes.)

This is simply wrong as an empirical depiction of the ethnographic reality of power. No, I don’t have a bug under the table in the Oval Office. But once upon a time, there was such a bug, placed ever-so-kindly by a gent named Richard Nixon. Now that the question of what Nixon knew and when he knew it is part of the past, the transcripts of his taped conversations and meetings are a treasure trove for historians and anthropologists. Not just for students of the Nixon Administration, but for anyone who wants to know more about how power actually works on the inside.

Reading the transcripts, you sometimes see the conspiratorial, instrumental side of power. Sometimes Nixon and his aides saw the political chessboard clearly and acted forcefully (and illegally or illicitly) to move the pieces according to a grand design. Yes, the transcripts also show that they had more information and more tools to act with than Joe Schmoe on the street: that’s what power is in a nutshell. But far more often, they were fumbling in the dark just as much as anyone else, sharing crackpot theories, going off onto weird tangents, toying with idiosyncratic hypotheses about why events were unfolding as they were, and enunciating contradictory or confusing directions. Sometimes the transcripts are like an echo chamber of Richard Nixon’s labyrinthine mind and sometimes they are like a bull session between a bunch of middle managers gathered around the water cooler.

This is not the only fly-on-the-wall glimpse into the interior of power available to us. Working in the Zimbabwe National Archives and other archives, I have often read documents that record some inner aspect of deliberations between colonial officials, and similar patterns appear. Sometimes they’re chillingly directed and forceful, but most of the time, they reveal confused men trying to sort out situations they barely understand with imperfect tools that they use badly, if at all.

What does this mean for the Iraq War? Simply this: we do not know for certain what is going on inside the inner circle of power, but we dare not assume it is dictated by some relentless service to a rigorously instrumental goal that is clearly perceived by all of the men and women making policy, that they are power acting as power ought, to achieve the things that power wants.

Bush, Cheney, Wolfowitz, Perle, Rumsfeld, Powell, Rice: they’re all human, too. I am not being a carebear, embracing them in a warm and fuzzy hug. Humans can err and prevaricate and commit knowing misdeeds. Humans can be monsters or they can just be bumblers. But any theory about why they are doing this that begins and ends with an assumption that they are not humans just like us is a non-starter. Any representation of their thinking that depends upon them being god-like in the clarity and perfection of their knowledge, or demonic in their ability to completely ignore or commission human suffering, is foolish. They are not homo omnipotens, a different species.

The question of “why” matters enormously. There are reasonable working hypotheses out there. Personally, I favor the proposition that this is a kind of hubris, that Wolfowitz and Perle actually believe in the “democratic domino theory” and are carrying out a geopolitical experiment on its behalf with all the good intentions in the world. There may be many other motives and ideas, some of them contradictory, some venal, some foolish, some pragmatic. Some of the people in the administration may even wish they hadn’t gotten themselves into this mess but see no way out of the corner they painted themselves into.

If we’re going to talk about the “why”, we have to be talking about human beings. We have to have the same open ethnographic curiosity we bring to the question of why people in general do the things they do. Because only with this kind of understanding do we have any hope of seeing at least some avenues of escape, some possibilities for redeeming the blunder we are now embarked upon, some chance to connect with the interior of power.

[permalink]


March 17, 2003

I am begging you

Within 48 hours, possibly 24, we will be at war. A simple message, then, to the people who represent us.

We cannot win alone. Not against what we now set ourselves against. I speak not of the immediate military conflict, but of the struggle it sets in motion. But you have relentlessly worked to make us alone, or nearly so.

We cannot win alone, not against the proliferation of weapons of mass destruction or acts of terrorism, because, as we have been told so repeatedly, they can be made quite easily by almost anyone who has the will to do so. September 11th could have happened 40 years ago: there was nothing technologically novel about it. Planes into buildings. Simple. As simple as mustard gas or smallpox. All it took was someone willing to do it.

We can only win against that if we figure out a way to make a world where few would even think of doing such a thing, a world where if they do dream such a terrible dream, everyone around them, from their mother to their lover to their leaders to their children, would treat them as anathema. We cannot kill them all, not with all the bombs in the world, much as anyone who might think or desire to do such things deserves death. I will weep no tears if Osama bin Laden dies, nor mourn Saddam Hussein.

We cannot kill them all. No law, no matter how just or righteous, holds any power if many people ignore it. There are not enough police nor enough jails nor enough guns to give such a law power.

We cannot win alone. Not against tyranny. Because tyranny is a cage built by the prisoners themselves, brick by brick, and we cannot win if there is a general will in the world to make or tolerate such cages.

We cannot win alone. We need people of good will everywhere to join us, and believe in us, and hold us close in their hearts and desires, to want the dream of freedom that we can embody. Our guns are at best only the key that unlocks the door: they will not help us be at home once we cross the threshold.

There are people of good will waiting to join us. They will not unless we join them. Remember what the American revolutionaries said in rejecting the tyranny of an overseas monarchy: no taxation without representation. No governance without representation. Listen to all the people in whose name you now act, and whose future you now shape, both here and abroad. Make this their struggle, our struggle, not just yours.

Please.

[permalink]


March 12, 2003

Clifford the Big Red Water Treatment Problem

There’s the famous old observation about Pluto and Goofy, about how weird the rules of anthropomorphism in the Disney universe are. Pluto is a dog and Goofy is a kind-of-dog, but one is a dog-dog and the other a master-dog, except that the dog-dog also has a kind of sentience as well, at a sort of sub-Scooby Doo level.

I’ve been adding to my personal file of similar observations a lot lately, watching children’s television and reading children’s stories with my 2-year-old daughter. Here’s the latest bunch.

1. Doesn’t Clifford the Big Red Dog create a serious sanitation problem for that oh-so-perfect little island town he lives in? Weren’t the villagers actually correct to object to his arrival for that very reason? Who is paying for the not-inconsiderable costs of cleaning up after him? And how exactly do they do it, anyway?

2. On the same subject, Clifford the Big Red Dog got big because Emily Elizabeth loved him so much, not because he is a mutant or freak. Doesn’t that mean that all other dogs in the world are not loved nearly so well? Shouldn’t every other dog feel sad at being so relatively unloved that they remain their natural size?

3. Why doesn’t the wolf eat Little Red Riding Hood the first time he meets her in the forest and asks her where she’s going? Why go through all the folderol of dressing up as grandma?

4. What the hell is up with the world Little Bear lives in? It’s animal-anthropomorphic except that there are humans in it, too. The animals don’t behave like animals at all sometimes and other times behave perfectly like their source species. For example, Father Bear fishes with a fishing pole while wearing a suit, but his brother Rusty fishes by swiping at the fish with a paw and throwing them up on the river bank. The animals are a really weird mix: they live in what looks like a European or North American forested area with mountains in the distance, but they run the gamut from domesticated animals to wild animals that belong in that environment to wildly out of place animals like a monkey. At least a show like Oswald or Maggie and the Ferocious Beast is consistently whimsical.

5. Where are Max and Ruby’s parents? They have a grandmother who has come to visit them at least once. There is never even a hint that they have a mother or a father.

6. What are the rules governing monster genetics on Sesame Street? Do monsters have parents that look somewhat like themselves, or completely different? Are grouches monsters, or something else entirely? It doesn’t help matters that the parents of some monsters like Elmo have been visualized differently on different occasions or in different media.

7. I was stunned to find out in one episode that Bing and Bong of Tiny Planets are friends, not father and child. So what’s up with these two? One hates to pull a Bert-and-Ernie-are-gay at the drop of a hat, but…

8. Speaking of Bert and Ernie, the “Journey to Ernie” segment that appears in current runs of Sesame Street is a pretty devastating confirmation of Malcolm Gladwell’s thesis that Blue’s Clues has displaced Sesame Street as the template for educational television. It’s like a really bad version of Blue’s Clues. More importantly, Ernie cheats. Poor Big Bird has no real chance of finding Ernie before the third box, even though he ought to have a chance, judging from the apparent rules of the game. At least with Blue’s Clues, the rules make sense: no guessing before finding the final clue.

9. What's with Caillou being bald? And why are US toymakers reluctant to show him being bald? It’s really hard to find a Caillou toy where he’s allowed to be bald like on the show: they always give him a hat or put soap bubbles on his head or some such.

10. What is the deal with the character Sarah Phillips on the show Liberty’s Kids? She’s a loyal Tory monarchist who also believes passionately in equal rights for all people. Er, right. (The whole show is history-by-committee: it not only manages to indulge in excruciatingly calculated kinds of inoffensiveness but also manages to make the American Revolution even duller than it is in school textbooks.)

[permalink]


March 11, 2003

Crazy Taxi

I had not meant to write as much in this space about the coming war as I have written. It is on my mind more than any event or issue has been in my life, including September 11th. I will try to get back to games and television and science fiction and Africa and the craft of history and all the rest of the things I meant to reflect on in this space.

But the coming war, well, I am having trouble sleeping because of it.

What haunts me is an overwhelming feeling that everything about our lives is about to change, and a strong sense of certainty that whatever the short-term results, the long-term changes are going to be for the worse. Perhaps in subtle ways, perhaps in gross and obvious ones.

What grips me is the sense that an extraordinary compound mistake is about to be made, the kind that shifts the forward motion of history onto a new track. It is like being a passenger in a car driven too quickly and erratically by someone who won’t listen to anyone else in the car. Even when you want to get to the same destination as the driver, you can’t help but feel that there’s a way to go there which doesn’t carry the same risk of flying through the guardrails and off a cliff.

I am not a pacifist. I am not anti-American. I could support a military conflict with Iraq designed to remove Saddam Hussein from power.

I am convinced that George Bush, Dick Cheney, Donald Rumsfeld, Paul Wolfowitz and Richard Perle are exactly the wrong people at the right time to execute that mission. I am convinced that John Ashcroft is exactly the wrong man to be in charge of law, order and the security of American liberty at this time.

This is not because I am a primal, irrational hater of Bush. Last week, I saw Leon Wieseltier debate Mark Danner here on campus about the war, and I have to say that Wieseltier absolutely creamed Danner, in a polite and unfailingly rational manner. His arguments were philosophically and intellectually consistent and rigorous, whereas Danner was all over the map, mixing and matching foundational claims with relatively ephemeral points, and getting hung up on petty carping about details. Moreover, Danner more or less conceded Wieseltier’s basic case by calling for a stronger inspections regime backed by the future threat of military force if Hussein refuses to comply.

Among the things Wieseltier accurately noted is that the repugnant hypocrisy of many current members of the Bush Administration towards Saddam Hussein is an entirely separate issue from whether war right here and right now is a necessity. Danner did not seem to understand this fundamental point. You may have erred in the past, but simply because you did err does not mean that you should be forever condemned to repeating that error for fear of being judged a hypocrite. We should hold Dick Cheney and other administration members responsible for having gotten us into this mess by cynically arming and supporting Iraq in the 1980s, and they ought to have the humility to apologize for having done so when they condemn Hussein now, but none of that answers the question of whether the current war is necessary.

This isn’t about hating Bush. Wieseltier’s right: if our opposition to the war is fed by a partisan sense that a Republican can never be right and a Democrat is always right, then it’s a non-starter. My opposition is about the fact that the run-up to war has been so systematically mishandled, including the arrogant and unnecessary pre-September 11th unilateralism of the administration, that the well is thoroughly poisoned.

The weakness in Wieseltier’s arguments for the war concerns consequences. If the consequences of going to war are vastly more damaging to democracy, freedom and justice than not going to war, then we should not. Yes, Saddam Hussein is a blight on the world, and we must bring him to justice. His misrule cannot be allowed to stand, not if we believe in progress towards a better global society.

Because of the way that the Bush Administration has approached the world since taking office, the particular costs of the particular attack they now advocate are the permanent loss of American moral influence and authority in most of the rest of the world, the reduction of our leadership to nothing more and nothing less than military and economic power. Because our advocacy for democracy has become so resolutely contemptuous of democratically registered sentiments in other nations, and so reliant upon the Middle East authoritarianism we claim to oppose, the liberties we claim to desire are stillborn in their crib. Because our leaders are sanctioning the radical and arbitrary violation of civil liberties at home in a time of crisis, our ability to serve as a shining beacon of hope is threatened. Even Tony Blair seems to sense some of this, judging from the current negotiations at the United Nations.

If the United States had participated in World War II in a radically different fashion than it did, totally ignoring its allies and devoting little or no meaningful thought about the design of a postwar world order intended to ameliorate the ills of the interwar era, then no matter how righteous the struggle against Nazism might have been, the consequences of the war would have been vastly worse and its transcendent moral necessity imperiled.

I am having trouble sleeping.

I have been wrong before in my predictions, and any historian knows the folly of trying to predict in the first place. Maybe the crazy driver will get us home and we’ll laugh at our groundless fears. At night all I can see is the yawning emptiness beyond the guardrail and the shrieking heights below.

[permalink]


March 6, 2003

To have an ending, one must have a beginning

There are good reasons to prefer Foucault’s genealogies to histories, to look at the past as a process and free ourselves from the tyranny of origins and endings.

The problem is, as Homi Bhabha has observed about modernity, that “newness enters the world”. History is not just one damn thing after another. It is not turtles all the way down. Things really do change, and have beginnings and endings.

In the past few weeks, I have read a lot of anti-war writers, both on and off the Net, who see the coming war on Iraq as the beginning of an American Empire, which they take to necessarily and inevitably mean the end of American democracy. I have also seen a tremendous variety of anti-war writers and intellectuals who talk about the erosion or ending of civil liberties and democratic freedom in the United States as a result of the general “war on terror”, and in observing this erosion, try to mobilize a wider population to protect or preserve those freedoms.

I am apprehensive about some of the same things. There is still to my eye something odd about the implied history of such fears, at least coming from some of those who voice them.

If we should now fear the inauguration of an American empire, it means that whatever role the United States has played in the world since 1945, bad, good or a bit of both, it wasn’t an imperial role, or if it was, it wasn’t the same as the imperial role we now say we fear. Otherwise, how could we try to mobilize against the coming of some new rough beast? If it’s just the same old same old American imperialism, then it’s just the same old same old activism, and the rhetoric of unique urgency is misplaced.

If an American Empire cannot coexist with a democratic America, are we saying that America was not a democracy from the last third of the 19th Century to the middle of the 20th Century? Because the United States had an empire: a small one, compared to England and France, but an empire all the same. If it was a democracy in those years, how did its democracy coexist with empire? How, for that matter, did Britain democratize domestically while ruling a steadfastly undemocratic empire, if the mere holding of empire always and inevitably destroys all democratic possibility everywhere?

If we should fear the destruction or erosion of civil liberties, and mobilize to protect democratic freedom in the United States, isn’t that a concession that the US was and still remains a free society whose existing liberties ought to be cherished?

Anyone who now forecasts an ending of good things in the coming war is necessarily admitting that those things had a beginning, a reality, that they once existed, and that they were good to have. You cannot lose what you never had.

This seems like an unexceptional observation until you look at who some of the people talking about the desperate need for the protection of precious liberties are.

Most of the anti-globalization left that came to Seattle and Washington cannot claim now that they wish to protect liberties that they have never previously acknowledged as existing. They cannot say they wish to save us from imperialism when they have habitually claimed that America and its agents like the WTO and the World Bank were always already an imperial actor in the world. The only distinction left for them is between bad and badder, and given the fervor of their mobilization against the bad, how much worse could it get?

The left of identity politics, the postmodernist left, the ‘cultural left’, mostly cannot claim to be trying to preserve hard-won freedoms, because they too have been largely unwilling to concede that those freedoms were ever won or meaningfully exercised by the communities and interests that they speak for. Before September 11th, don’t look to bell hooks or Molefi Asante, Andrea Dworkin or Judith Butler, Gayatri Spivak or Trinh Minh-ha for affirmations and defenses of existing civil liberties and functioning democracies.

The Marxist left, old and new, mostly cannot claim to be trying to preserve precious liberties, because they largely have never accepted, with crucial exceptions, that bourgeois liberties are anything more than a weak prelude to genuine emancipation. What’s to preserve? Can a late capitalist world be worse than it has been in the eyes of Eric Hobsbawm, Thomas Frank or Howard Zinn?

If we speak urgently about the need to preserve what was, aren’t we acknowledging that what was is better than what might come to pass, that the late 20th Century world and late 20th Century America really wasn’t so bad after all?

[permalink]


March 6, 2003

The strange problem of the unsequel

I was looking forward to the latest sequel to the computer game Master of Orion. It is what gamers call a “4X” game (eXplore, eXpand, eXploit, eXterminate). The first Master of Orion (MOO) is one of the all-time classics of computer gaming, fondly beloved by almost everyone. The second MOO met with a more mixed reception, largely due to significant problems with game balance on release and some frustratingly dull game mechanics in the endgame, when the player’s interstellar empire had expanded to gigantic proportions and the final conquest of one’s enemies involved tedious micromanagement of individual planetary systems. On the whole, it was a well-liked game, and certainly a faithful continuance of some of the key ideas and mechanics of the first Master of Orion.

I am pretty well cured of my desire to purchase Master of Orion 3 by following discussions on the official message board and other gaming-oriented sites. The game has some fanatic devotees but many more equally fanatic detractors, and from where I sit, the critics appear to be scoring some devastating points.

Since I haven’t played the game, I won’t delve deeply into the specifics of the arguments made by either side. However, the discussion is pointing out a peculiar problem that has larger cultural relevance. The developers themselves concede that their design for Master of Orion 3 bears little resemblance to the previous two installments in the series.

The key gameplay elements that distinguished MOO1 and MOO2 are by conscious design missing from MOO3. The first two games emphasized the player’s control over the management of individual planetary economies, with the classic “guns or butter” decision-making that strategy games often privilege. The third game takes virtually all of that management away from players: the developers say that MOO3 is not about “micromanagement” but “macromanagement”. The first two games emphasized the progressive discovery of technologies that made your empire more powerful, with each new technology marking an important and dramatically emphasized branch point in your strategy. MOO3 by conscious design automates technology discovery and strongly deemphasizes the individual importance of any given technology. It’s comprehensively a different game than its predecessors, not just in game mechanics, but in its fundamental spirit.

This strikes me as perverse. The game’s defenders argue that designing a game that was similar in any respect to MOO1 or MOO2 would definitionally be derivative and unoriginal. There is a huge excluded middle, however, between following the threadbare, necrophiliac creative logic of the “Rocky” films and breaking away entirely from prior precedent.

MOO3 is an unsequel, a strange thing that happens from time to time in popular culture. A franchise exists and falls, for whatever reasons, into the hands of people determined not to continue it.

Why does this happen?

Sometimes it is because the inheritors of the franchise honestly do not seem to understand why the previous installments were successful. George Lucas doesn’t seem to understand why “Star Wars” and “The Empire Strikes Back” worked, which is especially peculiar, given that he created them. That’s an idiot savant instance of the unsequel, which reveals that the original creator was more lucky than inspired the first time around.

Sometimes it is because the current owners of the franchise legitimately do not want to fall prey to the “Rocky” syndrome and simply remake the previous installment, and in their desperation to break new ground, break instead through thin ice and plunge into the icy depths below. “Alien 3” is a good example of this pattern: a well-meaning attempt to take the original franchise in new directions, but one that ultimately left behind some of its essential elements. This is an especially good example, because the differences between “Alien” and “Aliens” (or “Terminator” and “Terminator 2”) were substantial but also satisfying and faithful to the stylistic and narrative core elements established in the first film of the series.

Which is MOO3? Reading the boards, I’d have to say it looks like a case of creators who had absolutely no understanding of why the franchise they had inherited had been successful in the past. Whether that’s because of ineptitude, a willingness but inability to duplicate its appeal, or creative arrogance--thinking that they knew better--I have no idea.

[permalink]


March 3, 2003

The problem of perception: how do you know what is a typical sentiment?

Not long after 9/11, I found myself locked in a frustrating argument on a social listserv with a long-time participant who I thought was remarkably intelligent and eloquent in general. I had spoken with some frustration about what I perceived to be the weaknesses in the response of many liberals and leftists in the United States to the trauma of that day. My acquaintance didn’t disagree that the responses I described were inadequate, but he contended that they didn’t really exist. There were no liberals or leftists making those arguments, he said.

I knew what I had heard and read. The problem, as I observed then, was that a significant proportion of what I was reacting to was more conversational and informal, what people around me were saying or what I was seeing in a number of academically-oriented listservs and bulletin boards.

I found some of the emails and bulletin board postings that I thought were good examples, and forwarded them with identifiers stripped off. These, commented my acquaintance, were just lunatics and fringe elements. I countered with a number of published pieces by intellectuals on the left, most notably Chomsky. He’s unrepresentative, shrugged my acquaintance.

This made me angry then and still irritates me somewhat as I think back on it. I felt this was an attack on my integrity. However, since that time, it’s become clear to me that there is a much, much deeper problem of perception involved.

My characterization of the reaction of the American left to September 11 was shared by Todd Gitlin, Christopher Hitchens, Jonathan Rauch and many other observers who were not far-right ideologues. But I have also run into many progressives I admire who think that what I perceive to be a common, patterned response simply doesn’t exist or is the work of a highly marginalized fringe element, and that it is irresponsible and unfair to take this response as representative of liberal or left thinking.

That suggests to me that we need a discussion about how to make assessments of what is representative or typical of a particular political movement or worldview, about what the rules of the game are before we start to play. There are a lot of different, and sometimes contradictory, metrics available.

One way to judge what is typical is to take a highly limited range of writers and thinkers that we judge to be icons of particular political sentiments, and use them as a cross-section of representative opinion of a larger group. So if you want to know what libertarian-leaning conservatives think on a particular issue, you go and read Ayn Rand, Barry Goldwater, Virginia Postrel and Robert A. Heinlein. The problem here, however, is that such an approach necessarily requires a prior categorical sense of the movement or ideology that you’re trying to map. For example, you can’t know who might represent or stand in for the “cultural left” unless you already know what you think that term means—and it is a term which was invented as a negative label for the group it purportedly describes.

Another way to make claims about what is representative sentiment is to identify commonly reproduced arguments, broken down into discrete sentences or forms. This requires a comprehensive sense of a particular medium—say, all newsmagazines or all blogs—and is vulnerable to someone arguing that the arguments described are not the most important or central points. It is easy, after all, to caricature a particular perspective by pointing to highly formulaic or rhetorical statements that tend to be repeated and taking them to be the central persuasive point shared between many writers or speakers. It is also hard to figure out which medium is the most properly determinative. Blogs are important but they are also not nearly as widely disseminated and read as television news or op-ed columns in printed newspapers. Which is the standard by which claims about what is typical should be made?

You can make claims about what is typical in a sociological fashion, by using polling or voting data, or other systematic statements about the relationship between a particular piece of writing and some describable public. You can also look at what visible groups and organizations of people do in the world, using demonstrations or legislative policy as a way to measure what is a common thought.

Or you can make it impressionistically, by combining all of these metrics in a loose and unsystematic way. Which is, I think, what most of us are doing when we say, “The left thinks XYZ” or “The far right believes in this and that”. Our impressions are most powerfully determined by the fabric of our daily lives. I perceive academic leftists to be important because I spend a lot of time in academic circles.

I am perfectly willing to concede that there are both individuals and groups who identify themselves as liberals or leftists, or who can fairly be labeled as such, who substantially agree with me about the moral, ethical and political views I have of Islamic fundamentalism, terrorism, military action, American society and other 9/11 issues, where any disagreements I might have with them are about the details rather than the fundamentals.

I am still uncomfortable with arguments that Chomsky or ANSWER or postcolonial theory or any number of other constellations of political and social thought that do not share my root assumptions are marginal, unimportant, and easily exempted from a sense of what “the left”, broadly speaking, is. They may not be your progressives, which is fine. But if so, then it’s your responsibility to describe what you regard as liberal, or left, or progressive and why your sense excludes intellectually and possibly even numerically significant groups of speakers and activists who can reasonably be represented as “left” (and who may self-represent as such). I don’t think you can just wave your hands and consign that all to some unimportant margin, unassociated with your own views.

[permalink]


March 3, 2003

Oral history and introversion

A Swarthmore alumnus planning to go to graduate school for a doctorate in history sent me an email last week asking for my thoughts on choosing African history as his field.

I had some encouraging things to say, but also some words of caution.

My reasoning for that mixed advice is complex. Some of it has to do with my sense that the field as a whole is burdened by unnecessary parochialism and defensiveness, even though it is one of the most methodologically innovative fields of study within the discipline of history. Some of it has to do with my own drift towards the kind of personal feeling of Afro-pessimism voiced by Gavin Kitching in a recent issue of Mots Pluriels.

Some of my counsel also had to do with making sure my former student understood that African history is logistically difficult. I have colleagues who can fly off every summer to their archive of choice (or in the case of the Americanists, spring break and Christmas if they choose) but securing permission to work in an African archive often requires negotiating complex bureaucracies, some of which can take years to grant the necessary permissions, and then costs a bundle in airfare besides. Even under the best circumstances, fieldwork is difficult and the gaps in between research trips can be substantial.

More of it is personal, and I don’t know how fair it is to off-load a middle-aged man’s feelings onto a 22-year old. Some of my current middle-age angst is just the affliction of comfort. I listen to colleagues who are setting off to Vienna or Paris or Umbria to work in the archives, and I confess from a pure creature comfort standpoint that Durban or Harare or Accra don’t seem quite as enchanting as destinations. Some Africanists I know take genuine, enormous delight in living in places like Abeokuta or Maputo. I have always found Harare fascinating, enlightening, stimulating, but not especially fun. That’s part of the learning experience of going there and part of the vital insight that African history brings to an understanding of global disparity, of course, but my bourgy-ness has been growing in tandem with my gut.

I was thinking about other reasons for my feeling, and came across, via Electrolite, this terrific essay by Jonathan Rauch in The Atlantic Monthly. (It’s official: Rauch is my favorite writer on politics and society, hands-down. Nobody else is as consistently interesting or crisply intelligent.)

Rauch talks about introverts, who contrary to the popular image are not misanthropists or shy. It’s just that they need time away from people to “recharge their batteries”, in contrast to extroverts, who are refreshed by the company of others, the more the merrier. Rauch writes,

“Do you know someone who needs hours alone every day? Who loves quiet conversations about feelings or ideas, and can give a dynamite presentation to a big audience, but seems awkward in groups and maladroit at small talk? Who has to be dragged to parties and then needs the rest of the day to recuperate?”

That isn’t quite me but it’s pretty close. I have always thought the label “gregarious loner” fit me pretty well. Happy to be by myself, but very social and friendly if I happen to find myself in a group.

So what does this have to do with Africa? At this point, I think you simply cannot work on the history of modern African societies and not do at least some oral historical research. It is an important part of almost all projects for a variety of reasons, ranging from the simply empirical to the sweep of the field’s intellectual politics. I accept the obligation to do this kind of work myself, and regard it as philosophically intriguing and exciting to design and consider the collection of oral testimonies.

The problem is that I often feel that I really suck at it, largely because I’m the kind of person Rauch is talking about. I find it painful and unnerving even to call a stranger on the phone to make an appointment or conduct business, and that feeling is aggravated dramatically if I have a sense that I’m actually intruding upon someone else’s private space or asking them for a favor or service without any prior connection or obligations between them and myself.

Actually doing oral historical research, particularly with the added aggravations of being a very foreign person in a very far-away place, is for me like voluntarily sticking my foot in a meat grinder. I know I have to do it, and intellectually I even want to do it. But thank god for the archives, so I have somewhere else to go some of the time.

I think if I had known myself well enough to know this about my own personality, I might have been wise enough to choose another field of specialization. But I didn’t, and so there you go. There is a kind of growth that comes through putting yourself in a situation where you have to go against the grain of your own personality and force yourself to do something you would rather not. At some point, though, you get all the mileage out of that compulsion that you’re going to get.

[permalink]


February 19, 2003

Uh-oh, hide the dirty laundry under the bed and put the tablecloth on. Company's coming. Thanks for the nod, Patrick.

I do hope to move to Movable Type and comment fields soon. (It'll help me manage other aspects of this whole enterprise, too, obviously.) I'm not entirely clear on whether I can do that and stay on the academic server that this page lives on at the moment, and as is typical for me in the middle of the semester, I'm entirely too muddled and disorganized to find out.


February 12, 2003

In Which The Enchantments of Blogistan Fade and I Am Left a Pumpkin

Some interesting thinking about blogs, social networks and the “power law” out there this week made me reflect a bit on what I am getting into with this blog. I also found myself looking back on the trail of breadcrumbs through the online forest, back all the way to GEnie and its Science Fiction Roundtable (SFRT), the first online forum I remember really giving me a heady sense of what the medium could accomplish.

I realize in looking back that I’ve always been looking for something online that I have never quite found, a kind of fully realized Habermasian public sphere. There are times I have felt close to that ideal, and other times that I have felt very far away from it while still entranced by some of the other possibilities of a fully wired world. I think I have often been too demanding or constrictive in what I have sought, and probably too sensitive and tightly wound about what I have gotten. Among other things, I find that it is hard to maintain my sense of humor in an online context, as if it is a gross physical trait that boils away in a sea of phosphors.

Clay Shirky’s essay on the “power law” and the way that freedom of choice necessarily constructs hierarchies among bloggers was convincing to me (though I liked Steven Johnson's observations about how to figure a different sense of choice into all of this). The discussion of choice and its discontents also reminded me somewhat of my colleague Barry Schwartz’ interesting thoughts on the dilemmas that “choice” presents to many people.

I have read some blogs off and on for about a year now, but didn’t really get fully into the blogverse until this October, when I was reworking my own pages. I experienced for a while the same sense of heady pleasure and enthusiasm that I usually feel when I dive into a new kind of online discourse or medium. I am still finding new blogs that I really like that I had never read before. I certainly hadn’t really been fully aware until reading Shirky’s essay about how discontented some bloggers are about perceived hierarchies and “A-list” cliques among bloggers.

I should have expected it: whining and moaning about an imagined “A-list” or “postocracy” has been a part of every single online discourse I’ve participated in, from listservs to MOOs all the way back to the SFRT. Now that I’ve seen the carping and pettiness on the seamy underside of the blogverse, I am going through my usual descent from enchantment to normalcy in my regard for my new toy. When it comes to the Internet, I think I'm less a "first adopter" and more a "carrion eater": I seem to show up just as rigor mortis is setting into the dead-horse-beaten corpse of a new online medium.

In a few cases surfing around this week, I have run into stuff that makes my flesh crawl. I like some of what Lileks writes, but having seen some quotes from a recent Bleat about Michael Moore, I had to agree that he’d stepped over—way, way over—some boundaries of human decency.

That being said, finding that there was a blog devoted to hating James Lileks that included a post from a person quite seriously wishing he could kill Lileks and his 2-year-old daughter with a suicide bomb struck me as being even farther over those boundaries. Not only was some of the content of the page as sick as or sicker than the worst Lileks could offer, it summed up in a nutshell the incestuous, derivative, inward-turning nature of most blogs. A blog devoted to hating another person’s blog—in fact, to hating that other person? Isn’t there a soup kitchen or something that needs a couple of extra hands out there? A better working definition of "wasted labor" cannot be found.

Unfortunately, I am also pretty well done with one major alternative, the closed-membership or heavily moderated virtual community or listserv. I’ve done a number of those, one of them for five years until I semi-quit this week, and they lack some things that I desperately crave.

For one, no matter how much people try to keep fresh blood flowing in, eventually any virtual community gets senescent. Eventually everyone knows what everyone else thinks, and the more you know about how some people think, the less you want to talk to them. Even in the case of the people you really like and find interesting, you eventually run out of old things to talk about and find yourself sitting and waiting for some new event or issue to hash out with them. At that point, no matter how determined everyone is to avoid it, metathrash is going to start happening, for the same reason that animals kept in cages that are too small start picking at their own scabs: just because it provides some momentary amusement.

Also, strong injunctions to civility, coupled with the asynchronous kinds of dialogues that most virtual communities rely on, ultimately have had some really constraining effects on my own writing. I find myself thinking too much in polite little dialogic chunks. Strong moderation or self-imposed civility seems for a while to take care of the worst trolls, flamers and ‘energy creatures’, but in the longer haul, it’s almost like a Darwinian form of natural selection that leads to much more skilled forms of passive-aggressive behavior, creating a better but no less frustrating class of troll or time-waster.

So blogs seem to be it for the moment for me. There is an openness and expansiveness, at least in theory, to blogging. I find it easier to imagine updating my own pages with some regularity in this way: my old design, based around many areas of static content, was simply too hard to update regularly. On the whole, I would rather write about what is on my mind and less about the latest meme constipatedly rumbling through ten thousand blogs (though here I am doing just that) and the devil take whether I am on the C-list or Z-list in the meantime.

[permalink]


February 7, 2003

A rolling snowball gathers too much moss

A colleague of mine sent me the URL for this fascinating page that uses "snowball sampling" to graphically represent the connections between the "also bought" lists that Amazon.com generates. (When you buy a book on Amazon, you can see that customers who bought that book also commonly bought several other titles.)
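For anyone who hasn't clicked through and wonders what "snowball sampling" amounts to here: you start from one title, pull in everything on its "also bought" list, then pull in everything on those titles' lists, and so on outward. The sketch below is not the linked page's actual code, just a minimal illustration in Python with invented book titles, to show how the clusters get assembled.

# A minimal sketch, not the linked page's actual code: snowball sampling
# over a hypothetical "customers also bought" graph. Titles are invented.
from collections import deque

also_bought = {
    "Book A": ["Book B", "Book C"],
    "Book B": ["Book A", "Book D"],
    "Book C": ["Book A"],
    "Book D": ["Book B"],
}

def snowball(seed, depth=2):
    """Gather every title reachable from the seed within `depth` hops,
    along with the 'also bought' edges encountered along the way."""
    seen = {seed}
    frontier = deque([(seed, 0)])
    edges = []
    while frontier:
        title, hops = frontier.popleft()
        if hops == depth:
            continue
        for neighbor in also_bought.get(title, []):
            edges.append((title, neighbor))
            if neighbor not in seen:
                seen.add(neighbor)
                frontier.append((neighbor, hops + 1))
    return seen, edges

cluster, links = snowball("Book A")
print(cluster)  # the titles the snowball has swept up
print(links)    # the connections a graph of the cluster would draw

The clusters drawn on that page are essentially edge lists like this one, rendered graphically; the point of what follows is how rarely those edges cross from one cluster to another.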

It's a beautiful, elegant demonstration of a larger pattern that worries me enormously, and that the collective output of American Blogistan is illustrating to an alarming degree.

In a virtual community that I have participated in for years, I can be counted on to wail and whine about the close-mindedness of people on the left--I'd whine about people on the right, too, but there aren't very many of them in the community in question. What aggravates me isn't so much the actual positions that people take, which I happen to agree with in many cases, but the fact that they come to the table with enormous inflexibility, smugness and preternatural hostility to any view that does not closely replicate their own fixed position.

There is never any sense that there is a need to think about how an issue looks to someone else who isn't already a member of the club and doesn't know the secret handshake. There's no sense of an obligation to persuade anyone: an argument is just a license to harangue and browbeat. Idee Fixes R Us.

That's what I think that snowball sample shows via Amazon. I think it's what a lot of online discourse about politics shows, too. Few people seem to feel a need to explore ideas, try on a new theory or premise, and work towards a kind of intellectual transparency, where one's basic axioms and understandings are always visible and always open to the possibility of change.

The problem is that when you give up on the obligation to persuade and an openness to the possibility of being persuaded, when you turn your back on intellectual exploration and an abiding interest in how other people see things and why they do, you're effectively turning your back on democracy as anything but a polite way to manage intractable conflicts. You're accepting at that point that democracy is just another word for "kill or be killed", that in any conflict, you're either going to win or lose, and that winning involves making the other guy do things your way whether he likes it or not.

You don't have to be infinitely open to all comers and infinitely willing to concede all principles. Sometimes you do have to stick to your guns and trust in the rightness of your views. But you really do need to "jump clusters" now and again, to always seek and desire intellectual and political pluralism. Staying on your snowball is a sure recipe for frozen rigidity.

[permalink]


February 7, 2003

A thought-experiment on torture and expediency

The New York Times has an interesting story this morning about the trial of Zacarias Moussaoui, who is accused of being a co-conspirator with the al-Qaeda members responsible for the 9/11 attacks. The trial has already had a number of twists and turns, including a rejected attempt by Moussaoui to plead guilty, but more recently, it has become increasingly clear that federal prosecutors wish they had never brought the case to a US criminal court in the first place. Now the Times reports that the federal government believes it may have to transfer Moussaoui’s trial to a military tribunal in order to avoid granting his lawyers’ motion that they be allowed to question Ramzi bin al-Shibh, the alleged al-Qaeda intermediary who carried messages between the 9/11 conspirators and al-Qaeda’s top leadership.

Bin al-Shibh, who was captured last October, is allegedly being "interrogated overseas". The Times also asserts that federal officials do not want some of his knowledge about al-Qaeda to become public knowledge and quotes a federal law enforcement official who says that they also do not want to disrupt “psychological games” they are playing with al-Shibh.

Call me a cynic, but here’s how I read this story: we’re allowing al-Shibh to be tortured (or directly torturing him ourselves) and we don’t want him to see the light of day so that this becomes public knowledge.

Time was, when these kinds of accusations came up, US federal or local authorities just denied them flatly—mostly because they were in fact untrue, the kind of fevered exaggeration that conspiracy theorists invent casually; in a few cases, especially some notorious instances of police violence, because the accusations were true and actionable criminal behavior in their own right. Now instead what we hear are non-denial denials and sober discussions in the public sphere about whether torture might be justified in some circumstances.

After all, wonder some, what if you could have stopped a terrorist attack that killed tens of thousands of people if only you’d used all means available to get information from a captured suspect?

I want to take that question seriously. It’s an old question given new urgency. This is a version of the Stephen King Dead Zone Johnny Smith thought-experiment, “Would you assassinate Hitler if you found yourself miraculously back in time in 1931 or 1933, even knowing that no one would believe the Holocaust possible and everyone would think you a madman?”

What exactly is it that separates us from al-Qaeda? What are we defending? What are the limits of our defense? What would we not do to stop a terrorist attack that kills tens of thousands?

Because no one would say that everything that might stop a terrorist attack of that magnitude is justified—and I wish there weren’t so many people on the left who seemed to be saying that virtually nothing should be done to forestall such an attack. And yet, there are many commentators who seem to think that all they need to do is wave the magic phrase, “Do you want to be the person who failed to stop the use of a weapon of mass destruction by terrorists when you could have prevented it?” That’s not an argument in favor of a particular action: it’s a platitude that vacates all hope of rational discussion. In some cases, yes, we ALL want to be the person who failed to stop a terrorist attack because the method of stopping one would be worse.

So, I propose a thought-experiment to calibrate our standards, to discover where the lines are that almost none of us would cross, and try to figure out what the basis is for drawing them. Table for the moment the thought that the actions in question would actually not prevent terrorist attacks, or would provoke worse attacks. I’m just asking: where would you draw the line if you knew that a particular action was guaranteed to stop an actual attack?

Scenarios:

1. A campaign of total extermination directed at anyone who professes the Islamic faith.
2. Complete closure of the borders of the US, expulsion of dissidents and non-citizens, and suspension of the Constitution.

I fervently hope that no one reading says, “Sure, I’m ok with either of those”, stipulating for the purpose of this exercise that these measures were guaranteed to prevent a terrorist attack (which of course they wouldn’t be). So none of us think that any and all preventative measures are justified.

3. Through a lucky break, US intelligence discovers that a terrorist cell has a small nuclear weapon hidden in a basement in the middle of a medium-sized city in an Arab nation. We know roughly where it is within a ten-block radius, but not precisely. There are fears that the cell may try to move the weapon and we will lose track of it, and it is known that the cell may have the ability to transport the weapon to the US without being detected. The government of the nation in question, while not aiding the terrorists, will also not permit a house-to-house search or initiate one themselves. If we knew that we could detonate the bomb preemptively and remotely in some fashion, killing as many in the Arab city as the bomb would have killed in a US city, but preventing that attack, should we?

I’m hoping that anyone reading would say, “No”. The standard implicit in saying “yes” is that US lives are always and under all circumstances worth more than the lives of non-Americans, and that there is no scale at which the deaths of innocent non-Americans make action to protect Americans intolerable. If someone told me, “Either those civilians in that city die or you do: the bomb goes off there or it goes off here,” I would say that I would rather die myself. I would rather be murdered than be a murderer.

4. A captured al-Qaeda operative known to possess specific information about a planned attack using weapons of mass destruction resists all interrogation, including psychological torture, beatings and shocks to his genitals. Officials looking into his background find out, however, that he has an unusual and intense aversion to the thought of rats gnawing his face off. So they take him off to Room 101 and put a cage full of hungry rats on his face. He gives in after losing most of a cheek.

Right, I know, sounds familiar. The point is, suppose you think torture is justifiable if it prevents an attack. Is all torture justifiable? How about sawing off someone’s genitals slowly, or stretching them on the rack? If not, why? What’s the dividing line between beatings and amputation, exactly?

I ask this in all seriousness. There isn’t an absolutely clear distinction, for example, between putting someone handcuffed in a dark room with a hood over their head for 24 hours and Pembleton and Bayliss grilling someone in The Box on the TV show Homicide. But if Pembleton and Bayliss could make an al-Qaeda operative spill the beans and save ten thousand lives through psychological trickery and pressure, I would be okay with that. I might feel queasy about the hood-in-the-dark scenario, though, and the full Winston Smith treatment would be intolerable. At that end of the spectrum, I say once again, “I would rather die if the only way to save me is hot pokers and iron maidens”.

At the other end of the spectrum, I suspect even the most lefty among us might say, “Hey, if Encyclopedia Brown manages to trick Bugs Meany into confessing while he’s being kept in the principal’s office, that’s fine.”

In between is where it gets tough. Solitary confinement and a minimal diet for a month? I guess that might be okay if it saved lives. Ten hours of interrogation under a blazing light culminating in the interrogator striking the prisoner three or four times in the face? If that produced the information that saved ten thousand lives, would I really say no? Obviously the problem here is that most of us recognize that we might trust ourselves to make these judgements carefully but we do not necessarily trust others—or governments—to do so, and we also recognize that this is the most slippery of slippery slopes, where a slap in the face one day turns into bamboo shoots under the fingernails the morning after.

5. Al-Qaeda operatives are meeting in a neighborhood in a Somali city, and they have a cache of weaponized anthrax with them. After the meeting, several are going to carry the anthrax to the US. Once they leave, US intelligence isn’t sure it can track them. A surgical strike with cruise missiles kills all the operatives and vaporizes the anthrax safely, but in the attack five civilians who had no association with al-Qaeda—people in the wrong place at the wrong time—also die.

I’m ok with this, because in this case we didn’t attack knowing we were going to kill those civilians in particular and because the lives saved are hugely disproportionate to the lives lost. In this scenario, I would rather that I live and those five innocents die. Those deaths are not really on my conscience: they go on the balance sheet of al-Qaeda, not the US.

The upshot of all these rigged hypotheticals is simply to say that if the US government is torturing bin al-Shibh or countenancing his torture and fears that being known, we can’t just shrug and say, “Whatever saves lives”. Because none of us endorse all possible steps that would pre-empt a terrorist attack, and few of us reject everything that might. Nor can we permit our government to say, “Move along here, none of your business, trust us.”

If we’re going to torture bin al-Shibh, we have the right—the obligation—to make that decision as a whole society, in the light of day. Maybe that’s the most important thing that would distinguish us from our enemies, and the most important value we are trying to defend.

[permalink]


February 6, 2003

Patterns of fire in the skies

A reporter called me on Sunday to ask me what I thought the national mood was in the wake of the Columbia disaster.

Getting called like this is becoming increasingly odd and distant from even a generous accounting of my expertise. I’ve commented in the past for reporters working on popular culture, nostalgia and television—I’ve sort of become Robert J. Thompson’s understudy—but now I’m getting all sorts of general questions about the national mood, reality programming and about anything else you care to name. I’m cool with it: most of the reporters who call are interesting people, often working on interesting stories. I don’t feel especially expert on some of these queries, but if I’m going to blog, I might as well talk as well.

On the “national mood”, at any rate, there is something to be said. Not what that mood is, but what it means to ask that question. I take it for granted that there is no such thing as a single “national mood” about the Columbia disaster (or many other events). Almost all of us felt sadness about it, of course. In speaking of a general mood, however, we affirm first what Benedict Anderson noted about nations, that they seek to assert a simultaneity of experience through mass media, that we are all living in the same moment, and second, we propose to find meaning in the unfolding of events, to impose orderly narratives on the disorderly progress of history.

Seven lives lost in space do mean more to me than seven lives lost in seven different car crashes. No insult to the victims of car accidents, but we place death and tragedy within larger narratives and structures of feeling all the time. My father’s death was more devastating to me by far than the death of my grandfather, not merely because of a closeness between my father and me (and a distance between myself and my grandfather) but because of the suddenness and unexpected nature of my father’s death. The picture in my mind, the powerful story, is about him dying alone on the floor of the men’s room at his law firm office, an hour before anyone else got to work. That picture matters vastly more to me than the simple, banal, predictable and statistically ordinary fact that he died of a heart attack brought on by a lifetime of Type-A intensity and stress. The death of one boy from starvation and abuse in New Jersey has a different meaning to me than the death of a well-loved and well-cared-for child in an accident: both tear at my heart, but they are different stories, different meanings. Similarly, some of the ways that Americans have seen meaning in the Columbia disaster make sense to me even if I don’t think they’re a literal description of the hidden causes of the crash.

Like many Americans of my generation, space exploration plays a very special role in the architecture of my imagination and aspirations. A loss of astronauts—especially when it threatens the space program’s future—has a sharp and special pain to it. (Even though I freely concede that the shuttle program and the International Space Station were and are a mistake within the overall context of the space program.) Of course, there are also very real and sharply pointed discussions to be had about causes and procedures, in which we might hope to understand and so prevent future disasters.

In a more general sense, it was hard not to feel that this event was the sad overture to what is almost certain to be a year overflowing with tragedy, that the geist of our time reached out and ripped the Columbia from the skies as a foreshadowed taste of funerals to come. You can have that sense without judging this taste of bitter loss, just as farmers sometimes sense in their bones a season of coming storms. What can you do but endure? Obviously there is no real connection: coincidence is usually just that. We connect events like an artist connects lines and shades on a canvas. You find such a picture of synchronicity resonant or you do not. It's not wrong or right in some absolute sense.

This is why I find conspiracy theories offensive at moments like this. Partly they’re offensive simply because they’re such badly conceived arguments. Whether it’s the Columbia or Paul Wellstone’s plane crash, to explain the event through conspiracy is either to assert that everything we are seeing about the event through our normal channels of information is wrong—in which case the conspiracy theorist’s own sources of information are equally suspect—or to assert that the conspirators possess technological, organizational and logistical abilities that vastly outstrip anything we ordinary folk witness in everyday life. In either case, there isn’t any point to talking about it if it’s true, because if it’s true, there’s nothing to do about it anyway—it’s like human beings complaining about the power of the Olympian Gods. If we can do something—if opposition is viable—then the conspirators don’t possess the powers attributed to them, in which case the conspiracies attributed to them can’t possibly be true.

The deeper reason I find conspiracy explanations of something like the Columbia accident or Wellstone’s plane crashing offensive is that they misunderstand the search for meaning. Finding meaning and connections in events is about interpreting them imaginatively, about creatively knitting together the separate strands of time that divide our lives, a gift to others groping in the random darkness of time. A conspiracy theorist takes an interpretation and mistakes it for an empirical statement. You can say, “Wellstone’s death makes it feel as if the Democrats are cursed, a dark cloud of misfortune and malevolent disregard hanging over them.” That’s completely different from saying, “The Bush Administration conspired to have his plane crash”: you’re not talking about what something means then, but making a statement about what is true and not true.

The standards are different in that case. It’s human to try and make things make sense. It’s stupid to leap from that to claiming that everything happens for a reason, always already at the willful command of some sinister structure whose visible face appears to us only and accidentally in oblique glimpses of tragedy and suffering. Sometimes dreams just die, and souls are lost. Sometimes the heedless rush of events masters us, rather than the other way around.


January 29, 2003

Camel's Heads on Beds and Other Just-So Stories

I listened to the State of the Union address last night, which is increasingly rare for me, whether I like a President or not. (Have I ever liked a President? Hm. Not really, not in my own lifetime.) The chances of hearing an honest-to-god speech are minimal. Bush's address had most of the ritual and formal hallmarks that these speeches usually have, and it bored me silly for most of the time, just like Clinton's and Bush I's and Reagan's and Carter's did, when I bothered to listen to them. Who knows whether the legislation mentioned will happen, or even resemble in its actual drafted form the rhetorical promises we heard? The AIDS proposal sounds fine to me, but the proof is not in the President's performance of compassion (or in his genuine sentiments: I don't much care how deeply or authentically Bush feels for Africans with AIDS; it's a non-issue to me); it is in how that money will actually get spent, assuming it ever does.

The final part of the speech was, in contrast, fairly interesting and I have to admit, in spite of my own feelings about the war, well-delivered, well-written and potentially convincing in certain respects, depending on the evidence that Powell and the Administration lay out in subsequent speeches. (I am not willing to just trust in assertions that "intelligence" has found certain things: with a decision of this magnitude on the table, a democratic public needs to see the specifics of what "intelligence" has found and how they found it, even if that compromises future intelligence. There is a very long track record that documents that no President, liberal or conservative, should just be trusted when he says that "intelligence" says so.)

Let me put it this way: if 9/11 had never happened, I might actually be prepared to support the attack on Iraq. But precisely because 9/11 did happen, I think it's a tremendous mistake.

Warning: returning to some familiar themes here.

Reading some of the blogs I look at every day, I see a repeated assertion from a lot of the attack-Iraq writers that echoes some of the arguments that Bush and his cabinet have laid out. I want to focus on the person whose blog I enjoy most, who I think is the best writer of the bunch, and the most commonsensical in the way he makes his case: James Lileks. (Andrew Sullivan, in contrast, is so fawning in his adulation of Bush's speech that he seems to have surrendered all capacity for critical thought.) Lileks doesn't slaver over the speech itself, but instead goes to the heart of its argument and reprises it even more convincingly.

The problem is that Lileks, like most of the warbloggers, has a Trojan Horse moment where he tries to slip by a point that is immensely contentious and tentative wrapped up inside the common sense and ethical urgency of the rest of his reasoning.

Lileks grasps something that some of the other warbloggers don't. It's not that Saddam Hussein must be attacked because he represses his own population. Yes, like Lileks and others, I find some of the waffling on the left about Hussein's repressiveness to be deeply nauseating. I do think that the people of Iraq will be grateful in the short and long term to be rid of him. In that respect, the coming war is an act of liberation. But if we were going to war simply to liberate Iraq, then we would have to go to war to liberate another five to ten societies that suffer at least equivalent oppression. I have said it before: you cannot justify the war in these terms alone, or criticize all who oppose it as necessarily favoring or sanctioning tyranny, not without embracing a sustained military campaign to eradicate all tyranny everywhere.

Lileks grasps this point. He says that the coming war is about "the torture and the wars and the oil-field fires and the gassing and the starvation and the palaces and the big grinning fark-you to the terms that ended the last war. Oh, and also the germs, and the gas, and the rockets, and the nukes." The argument for war rests on Hussein winning a very particular trifecta: tyranny + demonstrated intent to acquire weapons of mass destruction + demonstrated intent to use them aggressively against other states. Fair enough. That's the argument I might find convincing too, were it not for 9/11.

The sleight-of-hand comes when the warbloggers imagine the likely consequences of the war. I'm not talking about casualties. I have no idea what is a reasonable forecast in that regard, and neither does anyone else outside the Pentagon. Maybe they don't know, either. It's irrelevant, anyway. If we need to do it, we need to do it. What I'm talking about is how the rest of the world will react to a unilateral Anglo-American attack on Iraq. Warbloggers dismiss anti-Americanism abroad as unfair, incorrect or morally specious. I agree that in many cases it is incorrect and hypocritical. As I've said before, that doesn't make it go away. And much as we might dislike it, anti-American sentiments abroad have a real impact on American power and American society. There is a cost to being alienated from the rest of the world. We will pay that price both in our wallets and in increased insecurity. It might be worth paying, but it shouldn't be ignored.

More important by far is the specific reaction of Arab and Muslim rulers and their societies. Here's where Lileks whistles past the graveyard. He writes, "Defeating Iraq isn’t the camel’s nose in the tent - it’s the camel’s head in the bed of every other Arab leader." That's a wonderfully compressed version of a much more long-winded argument to be found elsewhere (including in the publicly expressed thinking of Wolfowitz, Rumsfeld and Perle), namely, that the defeat of Saddam Hussein will communicate American resolve to the Arab and Muslim world. This resolve in turn will cause the Palestinians to realize that they have to take the best settlement they can while they can. It will cause corrupt Arab authoritarians to democratize pronto out of fear that they'll be the next targets. It will intimidate potential aggressors into being on their best behavior. I do not dismiss this scenario out of hand. Something like it is why I supported our operations in Afghanistan: we made it clear that there will be consequences for an attack on the U.S. or its allies.

But this is only one scenario, and it rests on some highly shaky assumptions. One, that other Arab and Muslim leaders are rational actors who will reliably pursue their own self-interest--something that the warbloggers discount when it comes to Hussein himself. Two, that these rulers can be counted upon to control their own societies, or that the general population of those societies will also be awakened post-Iraq to rational self-interest and prudent calculation. Weirdly, I think the warbloggers are counting on the authoritarians to be authoritarian in the short-term.

There's another scenario that I think is at least as likely in a post-9/11 context: a massive popular upswelling of support for al-Qaeda and similar groups, carrying with it a ready supply of martyrs willing to commit atrocities that they might have found unthinkable a decade ago. 9/11 wasn't made possible by new technology: it is something that could have been done forty years ago. It only took people willing to do it. It is entirely possible to imagine that a unilateral attack on Iraq will make many more such people, willing to do many more such things. It is also possible to imagine that existing states will be swept away through this popular reaction and replaced not by liberal democratic regimes but by Iran-style theocracies. I do not endorse these reactions by describing them, any more than I endorse a tornado by saying that it will happen when hot air masses meet cold air masses.

You can't get past this scenario by sleight-of-hand or talk of camel's heads on beds. You have to meet it head on. If that's the price, a possible price, is it worth it? If that's the cost of taking out Hussein, is it worth it in a world where responding to 9/11 ought to be Job One? Hussein is a serious problem, but in many ways, he is a different problem. If putting him on the top of the "To Do" list actually makes the other problem vastly worse, is that wise? If you had to see a resolution of the two problems as antagonistic rather than simultaneous, which one is more important?

It may be morally right. It may liberate a suffering people from tyranny. It may make the world safer. But so would invading Zimbabwe and taking out Robert Mugabe. Nobody is making a case for doing that right now.


January 28, 2003

Dear Sony: How To Get My Money

In a widely-linked and discussed piece, Wired recently looked at the internal conflict within Sony between the people who design the hardware for playing music--who want to accommodate the needs and desires of consumers--and the people who sell the music itself, who increasingly see consumers as a pack of thieves who must be stopped with stringent protections, including most recently disabling the ability to play CDs on a computer hard drive.

Let me lay out a little something for Sony's benefit. I have never downloaded a piece of music from the Internet or a local network. The only music I have in my own collection is what I have bought myself. The only free music I listen to is on the radio. I'm a law-abiding, credit card-wielding, music-listening, middle-class demographic music consumer. My only knowledge of peer-to-peer networks is theoretical. I wouldn't have the faintest idea how to get songs using KaZaa or any similar service. The first time I actually transferred music to one of my hard drives was about six months ago because I got tired of lugging a favorite CD from work to home and back to work again. I've never put music on a CD-RW despite the fact that I have had CD-burners on two out of the three computers I use for about two years.

But I do like to listen to my CDs on my computer while I work, whether the CD is in the drive or not. And I do very much like the idea of getting only the songs that I want when I lay down some money for music.

The music industry in general blames its long economic decline on Napster-style piracy, on a generation that thinks music is for free. Like a lot of observers, I think that is simply incorrect. I'm Mr. Ideal Consumer, and I have bought fewer and fewer CDs over the past decade. I suspect I'm typical in that regard. Why?

First, because I have lots of CDs. I have as many blues CDs as I'm likely to ever want. I have as many CDs of African musicians as I'm likely to want--I like Thomas Mapfumo but honestly, unless he does something really amazing, one Thomas Mapfumo CD is as good as any other. I have as many CDs by melancholy quasi-Celtic white girl singer-songwriter gets-heavy-play-on-WXPN musicians as I'm likely to want. Well, ok, I can get another one or two of those. The upshot of it is, I have a lot of music now. I don't want much more unless there's a CD out there that really grabs me or I take a sudden intense interest in Zoroastrian fusion jazz chanting.

Second, because I have credit card debt like a lot of Americans and when it comes to a battle between compound interest and the latest Sinead O'Connor CD, the bald one loses.

Third, because I don't want to buy whole albums any more for a single song that I like. Been there, done that, no more. Let's face it: there are almost no pop musicians who deliver an entire album that is conceptually and thematically coherent from start to finish, where you want to listen to most or all of the album many times. A little while back, for example, I bought a CD by Shannon McNally because I heard the song "Down and Dirty" on the radio. Great song. Loved it. But most of the rest of the album is just sort of bland or even occasionally annoying. Nothing awful, but nothing I want to hear again and again either. I have a small list of people whose albums I think are great from start to finish, and it gets smaller every day. Those are the only musicians, increasingly, for whom I want to shell out the not-inconsiderable sum of money to buy their albums.

On the other hand, there are a lot of songs that I'd pay $1 or $2 to acquire in digital format if I were guaranteed that my purchase of the digital format music was transferable from one playing device to the next. I'm not building a digital music collection if it means that I lose everything I bought when the next upgrade cycle hits me, once every four or five years. I'm not building one if I can't transfer music from my desktop to an iPod. If the music I bought was transferable to machines that I own, and I could buy it on a per-song basis, then I guarantee my total purchases of music would go up significantly. It is that simple: that's how the music industry gets back to greater profitability.

I can also tell them how to lose much more money. Here's a promise from me to Sony. The first time I buy a CD, plop it into my computer, and find I can't play it, is the last time I buy a Sony-produced music CD. Simple.

I'm the consumer you want, guys: I walk the straight and narrow. I'm a virulent anti-pirate. I have a credit card, a steady job and a growing income. I like music. Give me what I want, and you get more of my money. That's as simple an arrangement as you could want. Take from me what I already have, and you don't get my money ever again. I could live without ever buying another CD if it came to that. That is also simple.


January 27, 2003

You want a definition of evil? This is evil.


January 17, 2003

Pragmatically Waffling About War

I’ve been reading the blogosphere more intently in recent days than I have for a while, and mostly I see a pretty dramatic divide between people who think that there is no legitimate argument whatsoever for any kind of military action in Iraq and people who think there is no legitimate argument whatsoever against military action in Iraq.

I think neither of these things. I am prepared to accept and even cautiously endorse a UN-approved, semi-multilateral attack on Iraq. I am opposed strongly to a unilateral or near-unilateral attack. Not because I think it’s fundamentally wrong or immoral or a greedy war for oil or revenge for Bush’s Daddy or any of that. And definitely not because I have any great fondness for the UN, which I think is ineffective at best and a human rights disaster in its own right at worst. I oppose a unilateral attack because I think it is imprudently costly in geopolitical terms.

Even hard-core attack-Iraq bloggers like Andrew Sullivan recognize that an immediate military assault on North Korea in contrast would be a bad idea. Why? Because the negative consequences are too great to ignore, not because they’re great fans of North Korea’s rulers. I’m kind of sick of bloggers like Michael Totten suggesting that to harbor doubts about the advisability of a unilateral attack on Iraq is tantamount to endorsing Hussein’s authoritarian misrule. Hello, excluded middle! If a failure to endorse military action at all costs and regardless of circumstances means that you are moral kissin’ cousins with Benedict Arnold, Tokyo Rose and Jane Fonda, then Sullivan, Totten and the rest of the attack-Iraq brigade can join the rest of us down here in the ninth circle of hell—because I don’t see them systematically laying out the next part of the grand military campaign to liberate humanity in Zimbabwe, Liberia, the Congo, North Korea, and China. Not to mention Pakistan and most of Central Asia. And, uh, maybe Saudi Arabia.

The reason I oppose a unilateral attack on Iraq is simple: because the stakes do not justify the risks. That’s all. The risks are simple and potentially catastrophic: a high probability of enormous social and cultural convulsions across the Arab and Muslim world—social convulsions that will strengthen, rather than weaken, terrorist networks like al-Qaeda and possibly bring to power more fundamentalist regimes in a number of states; a strong probability of a serious fracturing of the relationship between most of Western Europe and the United States at a time when coordinated action by Western societies is important; and a general rise of popular anti-Americanism throughout the world, when we desperately need to build bottom-up networks of support for liberal democracy and globalization.

It doesn’t matter if those reactions are wrong or unfair, which I think mostly they are. I’m not endorsing them by noting their existence. But critiquing them in your blog doesn’t make them go away. They are very likely to be a real consequence of an attack on Iraq, especially one that includes (as it almost necessarily does) protracted American involvement in the reconstruction of Iraq. The larger consequence in turn is that the war on terrorist networks and hardcore Islamic fundamentalism, which I do support in large measure, actually suffers serious losses because of a unilateral attack on Iraq without concomitant gains. In fact, I think the exclusive focus on Iraq has already distracted from that vastly more important goal.

So far, I have not seen much credible evidence that Iraq possesses significant weapons of mass destruction. I think it’s a possibility, and I am fully willing to be convinced that he has them if Bush and Blair will only present some substantial evidence that goes beyond, "Trust us, he's got 'em". 12 warheads isn’t enough for me: it’s a pretty picayune finding. (And by the way, how come the attack-Iraq brigade, who had nothing but dripping contempt for Blix's team until yesterday, are now certain that the inspectors are the cat's meow?)

So far, I have not seen much credible evidence that if Hussein has such weapons, he intends to use them imminently against the people of Western Europe and the United States directly, or supply terrorists with them for direct attacks on the West. In fact, I think it’s a reasonable supposition that he might only use such weapons if he was about to be killed or captured, as a scorched earth tactic. Certainly there is evidence that he will use them against his neighbors: he has in the past. But the same could be said about North Korea. If we can contain North Korea with a threat of retaliation, why not take the same approach to Hussein? Al-Qaeda and Islamic fundamentalism in general are more threatening directly to us by far, and they can’t be contained or negotiated with. Can we get back to paying attention to them instead?

Does my reluctance mean that Iraq’s own citizens will have to suffer at the hands of Saddam Hussein? Yes. Sucks to be them, just like it sucks to be a Zimbabwean or a North Korean right now. If we want to start a military, political and economic crusade to free humanity from dictatorship in Iraq, and we mean to continue it everywhere, then I might actually sign on board as a supporter. But we all know that’s not going to happen. So attacking Iraq because you’re against dictatorship only makes sense if you think we can get away with it, if it will do more good than harm, not because you think Saddam Hussein is uniquely intolerable to free people everywhere.

It’s not because I’m soft on Saddam, or a pacifist, or part of the hate-America-first brigade. It’s because I think a unilateral or near-unilateral attack does more harm than good in pursuit of a legitimate objective. A multilateral attack is worth undertaking, precisely because it softens much of the possible reaction to the operation and provides the United States with a certain amount of geopolitical protective cover, spreading the risk.


January 17, 2003

No More Pledge Breaks!

You know that when Alessandra Stanley writes casually in the New York Times that “at long last the time has come to consider privatizing public television or turning it over to the state”, some sort of subtle, magical threshold has been passed in a long-running debate. It’s as if the National Review had a column saying, “You know what, maybe it’s time to support a woman’s right to choose abortion, I dunno” or The Nation ran a piece that said, “Aw, heck, go ahead and bomb Iraq into the stone age, that fucker Hussein has got to go”.

In some antediluvian political era, I suppose it made sense to support PBS when Jesse Helms or some other scion of the religious right went after it for accidentally being courageous and airing Tales of the City. Calling for its eradication in the blistering heat of the 80s culture wars would have looked like capitulation to the wrong people for the wrong reasons. Before that, in the 1970s, it made sense to support it because there was nothing else like it, no alternative provisioner of “Sesame Street” and programs made by British people.

But now? Seriously, what is the point of PBS television? It’s not a source of daring or inventive television: that comes from HBO or FX or any number of other cable channels—pay, extended and basic cable. It’s not the only or even best source for televisual imports from Britain. It’s not the best source of educational children’s television: “Sesame Street”, as Malcolm Gladwell has observed, is yesterday’s news compared to “Blue’s Clues”.

Everything about PBS television looks tired and bland at best. Most of its documentaries make Ken Burns look like Ken Russell. At its worst, in the case of one of the PBS channels we get in Swarthmore, it features re-runs of “The Lawrence Welk Show”, John Tesh concerts, and other geriatric effluvia.

When PBS has a genuine success, like “Antiques Roadshow”, it often seems like a sort of accident—and it’s clear that most, perhaps all, of what has been good on PBS in the last decade could easily find a cable home elsewhere.

There’s a genuine usefulness to local public television. Every cable box ought to be required to have a channel where local school boards, state governments and so on can broadcast their proceedings and communicate with the citizenry. That’s it. Leave the documentaries and the British TV and the kids’ shows to the cable channels who will do better justice to all of that programming anyway. If PBS can’t be daring, provocative and distinctive, if it can’t provide us with something we can’t get anywhere else that is also in the public interest, then we don’t need it.

(P.S. None of this applies to public radio, which is still pretty damn great and largely distinctive within the radio marketplace.)


January 16, 2003

This is very funny. Actually, so are all of them.


January 16, 2003

Why Professor Johnny Can't Write Good Books: Part Ten Thousand in a Continuing Series on the Shortcomings of Academia

Swat alum Sasha Issenberg, with his typically keen eye for really interesting news stories (Sasha needs to have a blog of his own!), called my attention to a fascinating interview with Lindsay Waters, the executive editor of Harvard University Press.

I found myself talking out loud as I read it: "Yes!", "Yes, yes yes!", "OH YES!". I am really hoping that no one was outside my office door.

Here are three really great quotes from Waters:

"Recently, chief academic administrators have begun to demand that candidates for tenure publish two books, not just one, because more is somehow better; they actually don't give a damn which presses churn out all these unreadable, uninspiring volumes. It's my contention that the tyranny of the tenure monograph has contributed to a crisis in the humanities."

"People should not be given tenure because they have published books, they should be given tenure so they will have the leisure to write really great books."

"When I was growing up out in the sticks of Illinois, the university seemed to me to be a shining city on a hill, a place where people actually got paid to read widely, and to have fun with ideas. If it's ceased to be such a place, it's partly because people my age - I was born in 1947 - aren't encouraging younger thinkers to be more daring."

The only things I disagree with in the interview are Waters's suggestions that journals are the right place to publish monograph-style material (journals suck too), and that Negri and Hardt's Empire is the kind of really great book that people ought to have the leisure to write with tenure (or a prison sentence). Well, ok, I take that back, a bit: it's a great book in that it is bold, stimulating, and wide-ranging. I just think it's full of shit on a number of key points.

That's beside the point. Waters nails the fundamental dilemma of academic knowledge production at the moment. Too many people are publishing too many mediocre books because the monograph has become the single most crucial criterion for indexing a scholar's productivity. The result is not just too many bad, disposable books; it's the cheerless, careerist, productivity-mad sensibility that afflicts most academic life.

What is especially bitter, I think, is that many of the leading scholars at the leading research universities are the people who simultaneously decry a productivist, "corporate" transformation of the academy while acting as the worst and most tenacious enforcers of its standards. They're also the ones who get rewarded in many cases for their rapid-fire production of three or four monographs that either are repackaged versions of each other or are suffused with unreadably au courant insta-theories that come off like Mad Libs ripped out of the pages of Representations and Positions.

There are people in the academy who can write lots of good-to-great books in relatively rapid succession. Frederick Cooper, Shula Marks, Jonathan Spence and James Scott come immediately to mind. I envy them. It is ridiculous to make them the gold standard that defines what most academics ought to be trying to do, or worse yet, feel entitled to do. It is even more ridiculous to do what one department I know of apparently did, which is to say that Jonathan Spence's output represented a reasonable minimal standard for tenurable candidates to aspire to. That's like saying that you wouldn't want someone teaching fiction workshops in your English Department unless you thought they had a reasonable shot to win the Nobel Prize in the near future.

Tenure should make the academy a joyful, passionate, uniquely liberated place. It should lead to people taking the time to allow books to simmer and stew until they're truly a pleasure to read both outside and inside the academy, a provocative stimulus to thought. Instead, it is a crucial part of a systemic imperative that makes the academy one of the most dour, joyless, and conformist parts of contemporary American society, a hive of insecurity and anxiety. Waters briefly blames administrators, but they're only doing what faculty want, or profess to want. This is not something imposed on us from outside. We build those walls, every day, sometimes not knowing that we are doing it or why. It will be up to us to take them down, brick by brick, to write books that amuse, delight, instruct, inspire or aggravate, rather than books that get us one more line on our curriculum vitae.


January 15, 2003

Why I Vote For People Who Support Cannibalism

Well, Lawrence Lessig (and the rest of us) lost Eldred, by a 7-2 decision. It wasn't even close.

Much as I agree with the fundamental political argument of Lessig's case, I also think the Court was right to rule as it did. I have been perfectly happy in the past to have the Supremes do the necessary scut work of achieving fundamental, crucial social justice when Congress and the Presidency have been too gutless to do it, but this is one instance where looking to the judiciary to save us from hard political labors just won't wash.

Congress clearly had the authority to extend existing copyrights by 20 years. It was and remains an outrage that they did so, and ordinary Americans ought to be boiling mad about it. The extension strikes at the heart of imagination and creativity. Writers, artists and creative people of all kinds ought to oppose the extension. Academics ought to oppose the extension. Gamers ought to oppose the extension. Ma and Pa Kettle in Peoria ought to oppose it.

Those of us who know it was the wrong thing for Congress to do have the responsibility to organize and act as citizens. We can't ask the Supremes to do it for us. So that's the next step: building a political coalition that brings together libertarians on the right, critics of big business on the left, and sensible people who believe in creativity and imagination from all ends of the spectrum to target vulnerable Congressional districts.

Make a pledge today and communicate it to your own representative, especially if you live in a swing district. If you'll support a more enlightened approach to intellectual property and commit to the restoration of copyright to its former duration, I will vote for you. Period. It doesn't matter if you also support legalized cannibalism: I will vote for you.

I will not vote for you if you fail to support sustained reform of intellectual property laws to favor a 21st Century democratic culture, which is both a source of American strength in the world and a crucial part of the 21st Century American economy. What is one of our main exports? Culture! And that's not because Jack Valenti is imprinting himself onto every microchip of every appliance, or because Disney has a 200-year copyright on "Return to Neverland II". The successful American production of mass culture took place because of shorter, not longer copyright. Disney wouldn't be Disney without the public domain: no Snow White, no Cinderella, none of that, if it wasn't for the simple but vital proposition that in a democratic society, at some point works of culture belong to the commons and serve as a vital resource for the renewal of the human imagination.

Take the pledge. Write a letter. Organize. The Supremes aren't going to save us this time, and we should never have expected them to.


January 14, 2003

Toontown: It's the Content, Stupid

Over the weekend, my 2-year-old daughter and I played with a 3-day trial version of Toontown. In many ways, it is the best massively-multiplayer online game on the market at the moment, and deserves more attention from gamers and game developers.

Like many gamers, I played in the public beta which closed this past fall. I had assumed, as had many others, that the game was being closed down or put on hiatus due to a post-dotcom contraction at Disney Interactive, but it has quietly launched and is scheduled for a more substantial promotion later this year.

Toontown has the same fundamental problems that the MMOG genre as a whole has. After a while, character development becomes a chore, a “treadmill”. However, Toontown’s designers understand a fundamental principle that eludes many of their competitors, namely, that close attention to making the game a thematic whole with an immersive spirit goes a long way towards relieving or deferring boredom with the treadmill.

There is a consistent feel or ambiance which pervades every aspect of the game’s mechanics. Players spend most of their time fighting “Cogs”, which is my favorite part of the game. The Cogs come in four basic types, but the basic idea is the same for each type: they’re the kinds of petty authoritarians, bureaucrats and drones that menace us all in everyday life. For example, there’s a telemarketer Cog and a paper pusher Cog. Their weapons are red tape, paper shredders, finger-wagging and so on. Either Disney’s reputation for having an oppressive corporate culture is ill-deserved or Toontown’s designers have gotten off a great joke at their employer’s expense. I found myself running around looking for new Cogs just to see the variety of things they say and do (to my normally fearless daughter’s dismay, as she found the Cogs rather frightening, especially their “death” animation).

The “combat”, which involves pelting the Cogs with various cartoon gags like pies and anvils from the sky, is also well-done, especially in how it handles cooperation between players in a no-fuss, no-muss manner. If you come across another toon fighting a Cog, you just join in. You sometimes see a line of four or five players abreast fighting two or three Cogs. There is none of the tension that mars other MMOGs about “kill-stealing”: cooperation is natural and intrinsic to the design.
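To make that design contrast concrete, here is a toy sketch of the two reward rules at stake. The function names, player names and numbers are my own invention, not anything from Toontown's or any other game's actual code: under a "first tagger owns the kill" rule, a latecomer has no reason to help (and every reason to be resented as a "kill-stealer"), while under an open-join rule, pitching in always pays.

```python
# Toy comparison of two reward rules for a shared monster fight.
# Hypothetical illustration only; not the actual logic of Toontown or any MMOG.

def first_tagger_reward(participants, reward):
    """'Kill-stealing' model: whoever tagged the monster first takes everything."""
    owner = participants[0]
    return {owner: reward}

def open_join_reward(participants, reward):
    """Open-join model: everyone who pitched in splits the reward evenly."""
    share = reward / len(participants)
    return {player: share for player in participants}

fight = ["Flippy", "Daffodil", "Latecomer"]  # invented names; the third toon joined mid-fight

print(first_tagger_reward(fight, 90))  # {'Flippy': 90} -- helping gains the latecomer nothing
print(open_join_reward(fight, 90))     # everyone gets 30.0 -- joining in is always rewarded
```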

To gain gags, you play a variety of games against computer or real players, and these are fairly enjoyable, but limited in number and after a while, somewhat boring. This is Toontown’s other limitation: like most of its competitors, there is much less to the gameworld than you might think. My daughter wanted to go see Disney characters like Mickey Mouse and Donald Duck, and sure enough, they’re in the game—well, a few of them, at any rate. Aside from Mickey, they don’t do or say much. Poor Donald just cruises around silently in a boat in a small pond. There are tons of toon storefronts in Cog-infested districts of Toontown but they’re all the same inside, and you can’t really buy anything for your character in them. There are even toon movie theaters but they’re not showing anything—too bad, because it would be great fun to go inside one and see a Quicktime version of “Steamboat Willie”.

Boredom is the MMOG wolf, and it can only be kept from the door for so long. Toontown keeps it out better than most because its ambiance is so consistently immersive, so much a part of every aspect of the game. Everything reinforces the sense that you’re in a cartoon world, doing cartoon things. When you compare that to the generic fantasy mish-mash environments of Everquest or Ultima Online, it becomes clear why players of those games rarely become immersed in the narrative (as opposed to becoming immersed, powergamer-style, in the mechanics). Even the designers of Star Wars: Galaxies, whom I have a lot of respect for, don't entirely seem to understand the importance of aligning game mechanics with the fictional setting. When some players asked that they use "bacta tanks" (that's the thing Luke Skywalker gets healed in after his encounter with a snow monster in "The Empire Strikes Back") rather than cloning as a way to explain a player's revival after being killed, the developers shrugged and said, "Maybe, if that matters to you". Yes, it matters. Toontown shows why it does.


January 13, 2003

Why I Am Not a Libertarian

The New York Times’ excellent three-part series on the McWane Company, a manufacturer of metal pipes, has reminded me of why I am not a libertarian.

There is a generalized, popular “libertarian impulse” in the United States that I have come to appreciate more and more over time, a suspicion of official power and authority. The Republican Party has often irresponsibly and sometimes ridiculously played to that impulse with its rhetoric against “big government”. That’s a rhetoric the Republicans drop like a hot potato when it applies to their own home districts, where they cavort and frolic in pounds of pork with a facility that Tip O’Neill would have envied. At the same time, the Democrats, as well as other progressives and liberals, continue to walk heedlessly into the trap set by the Republicans with their credulous, trusting willingness to invoke the power of the state to deal with any and all kinds of social questions and issues.

That is where popular American antipathy towards government is reasonably justified, because in many circumstances, the intervention of the state is either ineffective or a worse problem than the initial crisis. One iconic example is the long-term perception of global population growth among American and West European liberals, particularly environmentalists. It is hard to find a problem more demonized and feared, and in the 1970s and 1980s, many liberal thinkers did not hesitate to call for solutions which casually made use of the power of nation-states, sometimes along lines that were implicitly and unnervingly authoritarian.

The famous Paul Ehrlich-Julian Simon “bet” about the future is a telling example of the hubris that many progressives brought to the table on this (and many other) issues. And now global population growth is slowing dramatically without drastic actions by most governments. The answer wasn’t pure laissez-faire, either, of course. The three most important factors appear to be the legal and social empowerment of women (something that requires the authority of a state to guarantee), the growth of middle-class individualism and consumerism, and urbanization. An adequate supply of reliable birth control appears to be crucial as well. None of these changes have involved overt, quasi-authoritarian interventions by the state into human reproduction, the kind of thing that Ehrlich and the “population bomb” mafia were chomping at the bit to implement.

Many times, the modern state cannot do much to achieve social transformations that are desired by most of its citizens. Many other times, inviting it to try, and conferring new powers upon it, is clearly a transcendent social danger in its own right. But the Times series on McWane hits the libertarian impulse right in its blind spot. If we are concerned with the achievement of individual and collective freedom, then all institutions which exert illiberal or illegitimate constraints on such liberties are suspect. Capitalism has power, too, and that power can be abused as readily as any other.

Never mind the fundamental problem that Enron and Worldcom underscored, that large corporations with contemporary forms of governance allow a few men to make bad or greedy decisions that ruin the lives of tens of thousands without any real possibility that the victims might have prudently avoided such harm. We might simply characterize this as a kind of “crony capitalism” and look for legal reforms that would provide more transparency and accountability in corporate governance. McWane poses a more fundamental problem. In the impoverished communities where it operates, it requires innocent men to needlessly risk their lives and health.

An ideological libertarian or hardcore free-marketer simply shrugs at this, and says, “If you don’t like it, you can leave Tyler, Texas and seek employment elsewhere. No one makes you work for McWane”. This answer should be intolerable to anyone who strives for the realization of human liberty. Leaving aside the not-inconsiderable real costs of moving to follow changing markets (and noting that one cannot easily follow them across international borders), we should not have to choose between family and community on one hand and safety or life itself on the other. Anyone born in Tyler who wants to stay there should be able to without having to pay a price in blood.

Nothing about what McWane does is necessary in competitive terms. As the Times series observes, in Birmingham, Alabama, McWane’s main competitor, Acipco, is a “blue-collar heaven” with stringent safety requirements and numerous perks for its workers. We don’t need McWane: no one does. If we can send UN inspectors to Iraq, we can send OSHA inspectors to Tyler. If we hope to make the world safe for liberty, to make a just and prosperous world for our children, then we need an end to dictators—and we need an end to 19th Century capitalism.


January 8, 2003

Khan's Non-Prosthetic Chest

I hate the Web sometimes. Why? Because now it's possible to fairly definitively settle the kinds of inane arguments about trivia that used to go on for years and years between friends or spouses without having to write a letter to Cecil Adams.

I've just had one of my most treasured illusions destroyed. For years, I have been absolutely 100% certain that Ricardo Montalban had a prosthetic chest in the film "Star Trek II: The Wrath of Khan". My brother and my wife have insisted for years that it was the real deal. So over Christmas, we decided to settle the matter.

Apparently, on the new DVD of the film, in an interview, director Nicholas Meyer sets the record straight. It was Montalban's real chest.

Say it ain't so, Ricardo! This was especially hard news for me to hear, because I basically live for the chance to say "I told you so" and for once I had to hear it said to me. Man is not meant to know some things.


January 7, 2003

John Barnes Disappoints

John Barnes doesn’t get the literary accolades that get showered on Gene Wolfe, China Mieville, or Octavia Butler. Rightfully so: his style is pretty plain and to the point. Many compare him to Heinlein, and stylistically that’s a fair comparison. He doesn’t occasion the intense loyalty that rigorously “hard science” writers like Gregory Benford get from some readers.

He’s written a few books that I found very weak: Kaleidoscope Century and Mother of Storms, for example. One For the Morning Glory, while sporadically amusing, basically stakes out the same postmodern terrain as The Princess Bride, and far less satisfyingly so (as my friend Glen Engel-Cox noted in his Amazon review of the book). Barnes also has a tendency towards gratuitously nasty scenes of violence that seem to drop into his books as if from somewhere else.

But I still think he’s often a terrific and underrated writer. He’s a very good storyteller, with a strong grip on characterization, and without the affectations and self-referentiality that afflict many SF authors. His work is mostly without the reliance on a single (often bad) idea about the future or a particular technology that makes much science fiction diagrammatic and less than the sum of its parts.

My favorite of his works is A Million Open Doors, which I mentally have classed with Octavia Butler’s Xenogenesis series as an important fundamental rewriting of science fiction’s hidden reliance on 19th and early 20th Century colonial narratives of exploration, conquest and ethnographic encounter with “exotic” peoples. Butler accomplishes this by very thoroughly making the central perspectival position of her SF be that of the colonized and not the colonizer. The Xenogenesis series isn’t just a “we get conquered by aliens and fight back heroically” narrative, which simplistically inverts the old archetypical narrative of Western expansion. It’s a “we get colonized in all the ways that people got colonized and absorbed by the expansion of the West” story, which is very, very different.

A Million Open Doors performs something of the same trick, but in a less obvious way. It is a terrific novel about ethnography and cultural encounter framed within a more contemporary, relativist, morally complex ethos that does not derive its governing sensibilities from the early 20th Century but from the here and now, from the messiness of cosmopolitanism and localism in the early 21st Century. It’s also, in typical Barnes fashion, a really good story with compelling characters. The sequel, Earth Made of Glass, isn’t quite as strong, but it’s still fairly good.

All of this is a prelude to my sorrow that the third book in the series, The Merchants of Souls, sucks so badly. Barnes has sometimes written books that seem to fulfill a contract, nothing more, but I had somehow gotten the impression that this series mattered to him much more than that.

You wouldn’t know it by this book, though. To my great dismay, its central plot, such as it is—for this book is uncommonly and often drearily “talky”, with very little happening—straightforwardly rehashes a horribly tired Frankenstein trope of intelligent machines rebelling against their creators.

Every time I thought the book was about to take off, it stalled and died. There's an obvious (if interesting) plot twist that Barnes seems to be heading for that would at least introduce a meaningful element of political intrigue into the story, but Barnes just passes the possibility right by. (Without giving too much away, there's a character who gets assassinated who I figured had probably arranged his own assassination to further his political ends, but nothing doing: the truth is vastly less compelling.) It has almost no story, almost no characters of interest (one character struggles with suicide for most of the book and I kept wishing she’d just go ahead and off herself since she was so tedious), and even its speculative elements manage to make the otherwise interesting setting seem banal.

I hope Barnes has got something better in store for the next book in this series. Otherwise, he would have been wiser to leave well enough alone with A Million Open Doors.


December 24, 2002

Patty Murray Makes Sense (Sort Of)

I have not been shy since 9/11 about criticizing thinkers on the left and in academia in their articulation of antiwar arguments. My own support for the “war on terror” in general, and for US operations in Afghanistan in particular, is undiminished. I do not think that the successful prosecution of this war requires some of the attacks on domestic civil liberties that the Bush Administration has pursued, and I think the exclusive focus on Iraq in the past ten months has seriously damaged the war effort rather than enhanced it.

That’s neither here nor there: I simply want to stress that I support US military action against terrorist networks abroad, and I do believe that this is a “war”, or a protracted struggle—pick the term you prefer—which we must fight and fight well. Like Todd Gitlin, I think a lot of what passes for the U.S. left has been either tactically stupid or morally gutless in its take on the post-9/11 world. I think Andrew Sullivan and others have been perfectly right to skewer various commentators with “Sontag Awards” for some of their more blatantly incoherent thinking.

However, if Sullivan wants to think about the downside of something like a “Sontag Award”, it does seem to me that there is now a roving posse of pundits searching for their latest antiwar victim, and that occasionally thoughtful, important contributions to the national debate are being taken out of context and turned into fodder for some pretty hateful, quasi-McCarthyite feeding frenzies.

One example of this tendency is the way that remarks by Senator Patty Murray of Washington were recently circulated across the Internet and in the national press. When you read Murray’s actual comments, delivered to a group of students, you find that, while they may be debatable in some respects, they actually make some important points that do not detract from the prosecution of the war on terror. On the contrary, the issues that Murray is talking about are vital to the success of the war effort.

Murray has been pilloried for saying that bin Laden’s popularity in the Arab and Muslim world has something to do with his charitable work in those nations and that our lack of popularity might have something to do with our failure to do the same. Every yahoo with an email address has been bombarding the bandwidth ever since about Murray’s disloyal praise for Osama bin Laden and frothing at the mouth about how she ought to leave the country and join al-Qaeda.

That’s not how I read her comments, not in the least. It takes a determined ignoramus to twist her statement to that end.

This is not to say that what Murray says is completely on target. Bin Laden’s popularity is a complex phenomenon, and I don’t think his charitable contributions are a major factor in it, except inasmuch as his charity helps to burnish his image, in a fashion that is fairly typical of Wahhabi fundamentalism, of concern for ordinary Muslims in contrast to their oligarchic and isolated leaders. It is also true that the US spends a good deal on development assistance, so Murray’s statement that we “haven’t done that” isn’t really accurate in the strict sense of the word—but it is true that the US is largely not perceived in the Muslim world (or the developing world generally) as a charitable donor whose largess is distributed out of genuine concern for world poverty. It doesn’t matter whether that is an accurate perception or not, merely that it exists.

Nor do I endorse the implicit message of Murray’s comments, that butter rather than guns is preferable in responding to al-Qaeda. You don’t have to agree with that part of her comments—you might even criticize that sentiment—but it is absolutely vital that we try to understand why al-Qaeda, Hezbollah and other groups have the authentic appeal that they undeniably do to many in the Arab and Muslim world.

This is not a war that can be won solely with bombs and guns, though military action has had and will continue to have a major and legitimate role. Nor can it be won only with fabulous prizes and soup kitchens for the poor of the Muslim world.

This is primarily a struggle against an ideology, a way of seeing the world. How do you win such a war? In part, by understanding what makes it powerful and by persuasively countering its appeal with an appeal of your own. The United States has several vulnerable flanks in this war which our enemies will continually attack. One of them is our failure to push for a settlement of the Israeli-Palestinian conflict. Another is our relatively uncritical support for autocratic Arab and Muslim regimes that are alienated from their own populations—and it is this issue that Murray’s comments call attention to.

A successful war against al-Qaeda and similar organizations is going to require that we protect those flanks more intelligently, by all means available. In some cases, that will mean countering the efforts of bin Laden and others like him to project themselves as the protectors of the poor and downtrodden of the Muslim world. We need to give those populations a reason to favor a liberal democratic and globalizing world, and we cannot do that only--or even primarily--with guns.

If you support the war on terror, then you need to learn to stop hitting people on your own side with “friendly fire”.


December 19, 2002

Having seen "The Two Towers", I have some thoughts about the movie and Tolkien fandom. If you're geeky enough to want to see them, proceed to this safely sequestered (and lengthy) analysis.

The Shame of Brooklyn

In the meantime, some thoughts on a recent story about Brooklyn College's denial of tenure to Robert D. Johnson.

Edwin Burrows, a senior historian at Brooklyn College, complains in the December 18 New York Times that it is “outrageous” that scholars from other institutions would complain about the tenure case of Robert D. Johnson when they’ve only heard Johnson’s side of the story—many of us from reading the History News Network.

Fair enough. The Times article actually lays out the case against Johnson more than any of the materials that have appeared on HNN to date, more even than Burrows’ own letter, signed by some of his senior colleagues.

Apparently no one at Brooklyn questions that Johnson’s scholarly achievements are exemplary. No one questions that his teaching at Brooklyn and elsewhere was as good as it ever gets, that Johnson inspires and connects with his students to a remarkable degree.

So what do they question? Does he shirk service to his institution? Not at all. Does he drop his pants and moon the faculty senate? No. Is he drunk and disorderly in the classroom? Nope. Does he froth at the mouth and adjust his crotch at the lectern? Doesn’t seem that’s the case.

What is his offense against collegiality? Well, he strongly, perhaps even stridently, disagreed with his colleagues during a search for a professor of European history. How perfectly horrible. That never happens among tenured professors in perfectly proper departments. He appears to have believed that he had more insight into the dossiers of the candidates. That terrible fellow! Throw him out! What a bad colleague! He allowed some students to take his classes without the proper prerequisites (something that many of his colleagues at Brooklyn also do, and something that any intelligent teacher allows from time to time, based on their individual assessment of a student’s capabilities). He even worked with some graduate students who had been assigned to someone else. My god, a proper lord knows better than to meddle with another man’s vassals. Feudalism these days just isn’t what it used to be.

The Times reports that his colleagues began to suspect that he had “an independent, contrary streak.” Screw his scholarship and his teaching and his intellect: he has an independent, contrary streak.

Certainly that’s not what tenure was meant to protect. Certainly that’s an offense which cancels out the value of teaching and scholarship to an academic institution. How could Brooklyn College run if its professors exhibited a tendency to be independent and contrary?

The Times article doesn’t even raise another issue that the HNN coverage has dealt with, namely, that Johnson made enemies when he pointed out that an event scheduled on campus about the contemporary Middle East seemed woefully unbalanced—an act that seems a service to his community. Some of my antiwar colleagues are quick to cite cases where professors have been illegitimately punished or suffered for antiwar views--and there are some--but are less quick to note that there have been some similar instances of punitive action against academics who support the "war on terror" in whole or in part or even those who are perceived as doing so.

Unless there’s a smoking gun that the Department of History at Brooklyn College has yet to reveal or even hint at, the only real outrage in this case is the denial of tenure to Johnson. The whole case is one more arrow in the quiver of academia’s critics, one more revelation of the corruption of the profession as a whole, one more reason to question whether tenure ever serves the purpose for which it is allegedly designed. No one who voted against Johnson’s tenure ought to claim to be a progressive or leftist, certainly: the logic of Johnson’s denial is the kind of logic that any grey-suited “organization man” would cherish. It is the logic of the bureaucrat, of the worst and meanest impulses of professionalization.

If the people who support Johnson’s denial of tenure have a smoking gun, they’d better find a way to get it out into the public debate over this case, confidentiality be damned. This isn’t just a case of individual injustice as it stands: it is another example of academia’s seemingly boundless capacity for self-diminishment. At a historical juncture where the wider American society is surely going to begin interrogating the value of higher education in a steadily more pointed and assertive manner, cases where a professor is thrown overboard despite exemplary scholarship and excellent teaching because he is independent, contrary and maybe even occasionally non-cooperative in his dealings with colleagues confirm all the worst stereotypes of academic life. It is hard to go forth into the public sphere to defend the integrity and importance of a liberal arts education with those kinds of stereotypes in circulation, and harder still when they appear to have some considerable basis in reality.


December 4, 2002

I Roxxor U, Psychologists

There it was, in big black letters on the cover of Entertainment Weekly last Friday: DO VIDEO GAMES CAUSE VIOLENCE?

No, but headlines about video games causing violence cause me to want to pull an Oedipus and plunge needles into my eyes. Here is a discussion (to characterize it charitably) that never ends, never goes forward, never changes.

Sisyphus was a sissy: he had no idea how easy he got off. He might have been given the task of trying to get psychologists, sociologists and other experts to think in a more sophisticated way about what “violence” is, about what it means to represent “violence”, and how representations of violence affect the actual behavior of real people in a real-world context.

I admit I had a brief, wild, unrealistic hope after 9/11 that no one would ever again be stupid enough to argue that the representation of violence is a central or important cause of violent behavior in the world. I don’t think Mohammed Atta or Osama bin Laden were influenced much by Doom or The A-Team, except perhaps in their desire to obliterate the culture that produced such works.

More importantly, I thought it was utterly, completely, unambiguously clear that the prevailing thesis that watching violence in games or on television desensitizes one to violence in the real world was disproven once and for all by the reaction to the televised destruction of the World Trade Center. Many people at the time commented that it was “like a movie” (and probably specifically thought of the film Independence Day), but in making this statement they were observing that the events they were watching were also radically different from a movie in their meaning and emotional impact. Almost no American was “desensitized” to those tragic images: we all instantly recognized that reality is different. If anyone was desensitized, it was the usual suspects who worry incessantly about desensitization: they were the ones who instantly moved on to the usual bitching and moaning about the mass media and American society.

Entertainment Weekly’s article is little more than an admiring profile of the research of Craig Anderson, a psychologist at Iowa State. One of the things that startled me most when I was researching Saturday Morning Fever was the methodological flimsiness and intellectual weakness of the vast majority of academic work “proving” that television causes violence. Given how confident the promoters of such work are about the scientific validity of their findings, I expected more than what I found. But the “proven” quality of the hypothesis that television causes violent behavior often rests on older work whose research design would never pass muster now within psychology, let alone in a wider context.

Anderson, in contrast, like some of the more recent television-violence researchers, is more careful to use appropriate controls and a variety of experimental tests. Even so, his work has some of the same feel as Albert Bandura’s 1960s experiments on aggression and television. Bandura had kids watch a “violent” cartoon and then took them into a room with a Bobo doll, an inflatable clown punching bag, and asked them what they felt like doing. It is not clear to me what it means if the group exposed to the “violent” cartoon hits the doll a bit more often. Nor is it clear to me what it means if Anderson’s subjects, exposed to “violent” video games, exhibit a statistically significant tendency (which, I might note, might still be a very small effect) towards aggressive or “antisocial” behaviors.

For one, these are college students or other knowing subjects being tested in a cultural environment where they are perfectly capable of guessing what they’re being tested on and what hypothesis might be on the table. But quite aside from that, this kind of research never, ever gets to the point of telling us anything about the relationship between laboratory experiments and actual behavior in the real world. Real-world-oriented research tends to show something other than a simple cause-effect relationship between the representation of violence and its practice in everyday life.

L. Rowell Huesmann’s work, for example, showed that if viewing violence was accompanied by parental or other teaching about interpreting the meaning of the work being viewed, the fractional effects observed by other researchers vanished. That's a narrow (and testable) sense of the real world's relationship to laboratory-produced behaviors. It's not even getting at the larger question of the actual observable behavior of people in the real world, but what drives me wild is that many people like Anderson think their work actually tells us something about that behavior. It doesn't.

Equally frustratingly, a range of really good works about the importance of violence in children’s play and in popular culture, among them Gerard Jones’ Killing Monsters, Jane Katch’s Under Deadman’s Skin, and Jib Fowles’ The Case For Television Violence, not to mention Jon Katz’ entire oeuvre, just go completely ignored by the “media causes violence” mafia. If these books were allowed to enter into the public debate--if we actually had a debate as opposed to people imperiously trumpeting narrowly useful work in psych labs as if it scientifically demonstrates an inalterable truth about the vast complexities of human behavior and cultural meaning in contemporary American society--then maybe we could get past rolling that boulder back up the hill for the 10,000th time.


December 3, 2002

Free Speech Means Never Having To Say, "I Sue"

Reading the History News Network last night (quickly becoming one of my favorite daily reads), I came across the story of yet another free-speech dust-up on a campus, this time at Yale University. On the day of David Horowitz’ visit here, it seems an appropriate subject to consider.

Skewered by Andrew Sullivan for her critique of Bush’s policy on terrorism, Yale historian Glenda Gilmore threatened a civil suit against the Yale Daily News for comments published in its discussion forum. Looking at those forums, I would say that they have some of the same defects that Swarthmore’s local Daily Jolt forum has, or any online forum where anonymity of some kind or another is allowed. Online oldsters like myself burned out a long time ago on this kind of free-for-all discussion in places like Usenet: regardless of the issue at hand or the ideologically dominant tone, the mood in most such forums usually hovers between mindless and mean-spirited.

So I am not especially surprised that Gilmore may have been distressed when she read some of the comments in this forum. I suppose I am not especially surprised at her reaction to those comments, either, but I am depressed by it. Leaving aside for another day the basic charge that Horowitz, Sullivan and others direct at academia, that it is now dominated by anti-intellectualism and anti-Americanism, a charge that I think has some merit to it, what truly frustrates me is the way that many progressive academics continue to feed Horowitz and others like him plenty of ammunition by conforming to those critics’ representation of academics as intolerant, thin-skinned enemies of free speech.

Why, after all, would a professor write about Bush’s anti-terrorism campaign, or any other subject? Why engage in public discussion? Presumably because the writer hopes to persuade others of the rightness of the writer’s views—others who do not already agree with the author. When you embark on the business of persuasion, you take it as a given that there are going to be parts of the wider public sphere where ugly, stupid or cruel things are said. When those things are said by anonymous posters in an online forum, all you can and should ask is that moderators enforce agreed-upon standards of conduct. You don’t try to shut down the whole forum in response.

What passes for the American left these days seems bound and determined to commit both intellectual and political suicide in every way possible, and there is no quicker path to that end than appearing to fear an open discussion or suppressing dissent. Yet time and again, that’s exactly where many American progressives, especially academics, seem bound and determined to go. As Jonathan Rauch observed sagely in the Atlantic Monthly following 9/11, U.S. progressives now appear to value achieving outward semblances of equality more than they do the achievement of freedom.

I will probably have more to say on this subject after Horowitz’ speech here tonight, but I think it would be vastly more effective to simply try to engage Horowitz or Sullivan or anyone else in a civil discussion, to raise the bar of the debate, and to steadfastly refuse to do what they expect. It is possible to dissent from the Bush Administration’s plans for Iraq—I know I do in many if not all respects—and to do so (I hope) intelligently and perhaps even persuasively in a wider public context.

Instead, Gilmore—and many others like her—seem addicted to using the meager institutional tools available to them to suppress rather than encourage discussion, effectively turning their backs on the responsibility to persuade. At the least, this behavior demonstrates a kind of intellectual sloth, an unwillingness to roll up the sleeves and formulate arguments in terms that might carry the day on hostile ground. At the worst, it completely undercuts any principled argument against the Bush Administration’s intrusions into civil liberties by endorsing such an intrusion as long as it is the “right” people who are doing it.


November 25, 2002

Prickly Paradigm Press: Good Idea, But Work On the Execution

One of the things that has attracted me to online media for a decade is the possibility that they could allow academics to break free of some of the limitations of scholarly publication, or more properly, to frame the value of carefully written and precise scholarly prose in specific terms. All academics should sometimes write as scholars. Few academics should write exclusively in those terms, or, if they do, they should not try to constrain others to do so. I am interested in any publication venue that widens the range of things that academics write and read, online or otherwise.

As a result, I was excited and pleased when I found Prickly Paradigm Press, which is publishing a series of short pamphlets that privilege clear, polemical and expressive writing by academics. I was very pleased, at least, with the concept of the press. So far, though, the promise that the pamphlets will give rein to things which have “not been said before” or even display “intellectuals unbound” remains substantially unfulfilled.

Marshall Sahlins’ initial pamphlet for the series (Sahlins is also the executive publisher for the press) is pretty good, though vintage Sahlins: no surprises here. I’m a fan of Bruno Latour’s work in general, but his whiny post-9/11 diatribe War of the Worlds is a paint-by-numbers piece. If you’ve read Chomsky on 9/11, you’ve already read Latour, more or less, except that the latter adds some slightly different theoretical flourishes. The third pamphlet in the series is an interview with Richard Rorty, and I haven’t read it. I like Rorty well enough, so I probably will like this, but he’s not exactly underexposed.

The fourth pamphlet, by Deirdre McCloskey, is for me the real keeper in the series. Entitled The Secret Sins of Economics, it’s a smart and generally appreciative dissection of the discipline. Maybe this is familiar stuff for some, but I found it fresh and a compelling read.

The final pamphlet currently available, by Thomas Frank, is in contrast pretty stale, almost banal, as well as being a reprint of a chapter from his book One Market Under God. In the pamphlet (and the book), Frank portrays himself as a lonely voice in the wilderness, striving against the onrushing tide of “cult studs” in his dedicated attentiveness to the real architecture of the culture industry. Wow, I haven’t heard that one before! A he-man real leftist takes on cultural studies as shallow and reactionary! Very novel. Not.

I’m biased, I admit: as I argue in one of the pieces kept on this site, I also would just as soon that cultural studies got over its own pretensions to being politically committed. Where I depart from Frank is that I don’t think that admission invalidates cultural studies: quite the contrary. Moreover, as in One Market, Frank largely makes his argument through a kind of collective ad hominem, namely that “cult studs” who contend that mass audiences are knowing, active agents are simply followers of intellectual fashion. Um, no: it’s because a lot of us have been convinced of the validity of such a conceptualization by an actual course of research and investigation. Generally, Frank can’t be bothered to actually argue against something: it’s enough for him to caricature and dismiss. Anyway, this is not a pamphlet that has anything particularly new to add to this long-running Punch-and-Judy show: it is déjà vu all over again, even if you haven’t read One Market Under God.

Let’s hope that as the series gathers steam, it begins to put out some genuinely novel, unexpected or daring pamphlets. The format is the right one: the series only needs to capitalize more effectively on its own promotional language.