May 12, 2005

Update your addresses: Easily Distracted is now at http://weblogs.swarthmore.edu/burke.

 

April 6, 2005

Slow blogging for now, both because I'm horribly overwhelmed with work and because I'm slogging away at a transfer of this blog to WordPress, which involves some drudgery AND a learning curve.


April 6, 2005

God Doesn't Do Feeding Tubes

The day before Terri Schiavo died, I happened to hear Randall Terry on NPR talking about the case. He said (I’m paraphrasing here) that it was important to keep Schiavo alive because there might be medical technologies coming any day which would restore her consciousness or improve her condition.

I’ve been rolling that around in my head a lot, because it seems to reflect something really odd in the attitude of many of the most strident activists who demanded that Schiavo retain her feeding tube. “Err on the side of life,” they said. But most of them also spoke of the mystery of God’s will. This is the keystone of the official Catholic theology on these subjects, that human beings should not contravene God’s will by deciding for ourselves who lives and who dies, by making the hour of our deaths a matter of human contrivance. It’s why the Catholic Church’s attention to these matters is philosophically coherent, and why the American evangelicals who came along for the ride in the Schiavo case appear so manipulative or self-serving in contrast.

Even the Catholic argument is problematic when it comes up against the fact that the preservation of life in any of these contexts always involves the active agency of human beings. The only really consistent implementation of the “culture of life” argument as it has appeared in recent months is found in those forms of Christianity whose adherents refuse all medical interventions whatsoever. That is true submission to the will of God, as it appears in such a characterization. As soon as you open the door a crack to allow that God wishes us to contrive our own ways and means of protecting life, of healing the sick, of staving off death—as soon as you stand behind feeding tubes, breathing machines, and so on, not to mention surgical intervention, artificial limbs, organ transplants, antibiotics, as soon as you stand like Randall Terry and say, “New medical technologies are just around the corner,” you’ve long since accepted that human beings contrive on matters of life and death, that the mystery of God’s plan for each of us runs straight through the will of human society.

God does not put feeding tubes in people. He does not hook them to breathing machines. He does not do CAT scans. He does not diagnose. He does not invent new medical technologies. If these are God’s will, then God Himself does not err on the side of life, God Himself chooses death, for not all the things which may yet exist to save our lives exist now, and they did not exist yesterday. A century ago there was no feeding tube: Terri Schiavo would have died a long time ago.

A feeding tube, a machine, a new medical technology: these are human things, human decisions, as surely as the human decision to unplug the machine or withdraw the tube. You cannot say that it is human intervention to pull the tube and forget it was human intervention to insert it. You cannot see God in one decision and absent him in the next. You cannot say it is God’s will that Terri Schiavo live another ten years so that human beings might invent a technology which will restore her to a fuller life. That technology is ours to invent, and it is ours to wrestle with the questions of life and death that this technology raises.

There is more. How can those who tell us to choose life at all costs—who demand that we intervene with all the medical technology, all the knowledge, at our command without any thought to the cost of such intervention—then show no perceptible interest in supporting the maintenance of life and health on a day-to-day basis? Why aren’t they out there with equal fervor for health care reform? For preventive medicine? For supporting medical and scientific research that would lead to the “new technologies” that Randall Terry expects?

God does not put in the feeding tube, and he provides none of that. Human beings do. If they demand that human beings put a tube in someone, how can they not demand that human beings do all the other things that they can do to sustain and nurture life?

If you start by conceding the moral and practical difficulty that life and death present to human beings, you can’t be called to account when you don’t have all the answers, when you’re not found with equal fervor in every possible moment and site that should demand your attention given your expressed views. If you start by categorizing everyone who disagrees with you as evil, by demanding that any judge or official or person who fails to bow to your will be removed or censured, then any break in the consistency of your alleged convictions glares like a neon light.

I don’t doubt the genuine depth of feeling among many people about Schiavo, or for that matter, about the illness and death of Pope John Paul II, who also had his life extended through medical intervention. I do doubt the authenticity of feeling among the political leaders, the organized activists, the shrill and mean-spirited who took every opportunity to arrogantly flog their supreme religiosity, to boast over the branches. I don’t think they gave a damn about Schiavo and I don’t think they care much about “the culture of life” either. They were just flexing their political muscles, just testing their weapons, just seeing what they could get away with.

[permalink]


 

March 22, 2005

Shame

How can we know when it’s too late for public reason, too late for the kind of thing I do? When has the clock reached midnight?

Let me drag in an argument I’ve made in a forthcoming article about Thabo Mbeki, the African Union, and the possibilities for change in postcolonial African states. Among other things, I argue that a key consideration in fighting an oppressive system or regime is whether that system can be shamed in any way, whether there is a ghostly, residual presence of some sense of obligation or inhibition, some hidden commitment inside the regime’s architecture that makes it vulnerable. The problem I’m grappling with is, “Under which historical circumstances do the rulers of a particular system, or the elites who support the rulers, concede to the inevitability of change and reform?” Because it does happen.

Two of the examples I give in the article are late colonial British officials in Africa and white rulers and citizens in the waning years of apartheid in South Africa. In both cases, I argue, it was possible to shame them, to force them to leave an opening for reform when the gap between the conceptual underpinnings of their rule and the reality of it was overwhelmingly hammered home. I don’t mean to undercut the brutality of either set of rulers, their inhumanity, but both groups had made certain kinds of rhetorical and conceptual commitments at the base of their authority that opened up a kind of hemophilia in their rule, a slow bleeding wound. Both systems left artifacts lying around within their architecture of authority that could be used against them. Gandhi’s challenge to the British in India is another such instance, and much of the civil rights movement in the United States another. Such tactics work only against a system which is still capable of feeling shame, which can be called out in terms of the gap between what it says it is and what it actually is.

The contrast I observe in the article is with certain postcolonial regimes in Africa. There’s no reservoir of shame left in certain kinds of autocracies (Idi Amin, Sani Abacha, Jean-Bedel Bokassa, Omar Bongo): it doesn’t do any good to protest non-violently in the streets, or write polemics, or embarrass them at diplomatic functions. There is no restraint left, no sense of nagging chagrin or worry. Attempts to shame those regimes by their own citizens usually end in their gulags or in flight into exile, though sometimes the pot boils over into uprising, the kind of uprising where there are only two conclusions: the autocrat dead or the crowds dead, because there is no restraint in between. Attempts by outsiders to shame these rulers end in raucous laughter, or perhaps in ghastly pantomimes of official concern if sufficient pressure is brought to bear by other governments.

So where are we right now in the United States on the shame-o-meter? Let’s just say that I think the reservoirs of shame are draining awfully fast towards zero, and the case of Terri Schiavo is a pretty good dipstick for measuring that drainage. Like a lot of commentators, I don’t especially have a fixed opinion about the case itself—not the kind of opinion that expresses itself as policy of any kind. There are reasons to have sympathy for any of the people caught at the center of the case, or little sympathy for any of them. Reasons to feel a connection to Terri Schiavo, reasons to feel that she’s got nothing to do with my own life. I could see why her parents might want to keep her alive, and wonder why her husband just doesn’t let them. I could see why her husband might try to honor what he understood as her wishes, and wonder why her parents are putting everyone in their family through hell about that honest desire. I could see darker motives for all of them, but in any event, it’s just another human story in a world full of them, as interesting or uninteresting as any. As a pure kibitzer, my instinct would be to keep her alive: what does it hurt, if she's in a persistent vegetative state and her wishes on that subject were expressed in at least potentially ambiguous ways?

I don’t see any reason for a policymaker to take a position either way on what should happen, because the law that state policymakers created before any of this happened was that spouses get to make the decision. You might change that law, and cite this case as a reason for doing so. Maybe you should have to have a living will for your wishes to be legally binding. Fine. I’m not opposed to that. Keep someone alive unless they’ve written a legally meaningful statement about their wishes. Don’t leave it up to spouses or parents: you could make a case for that.

But that’s not what’s happening here either. The United States Congress is concerning itself with micromanaging the resolution of a single individual case. It’s the opposite of the problem I wrote about in “The Idiot God”. There I was complaining about the state’s lack of knowledge of the fine-grained texture of lived experience. Here I’m complaining that the state is intervening from the top in an intensely fine-grained and individual case. Why? Because the party in power is trying to suck up to a minority constituency of Americans who voted for the party, without any shame at all about it. They're not even pretending there's a general policy question here.

If they had shame, they’d be embarrassed, chagrined, mortified that the highest legislative body in the country and the President of the United States can find the time to have a special Sunday session and work out high-level compromises to save a single life, any single life. How about all the other people who died last week who could have been saved? What about the people who don’t have quality health care who died or were hurt? Why not have a Sunday session to help them pay their bills? Why not have a Sunday session to help a man who’s losing his house, help a woman who can’t buy her medications, help a child who can’t get enough food to eat? What makes Terri Schiavo Citizen Number 1, the sleeping princess whom the King has decreed shall receive every benevolence in his power to grant? It isn’t even serendipity that the King’s eyes happened to alight on her as he passed by. Serendipity I could deal with: if the President happens to read a letter from some poor schmuck and it touches his heartstrings and he wants to quietly do something, he tells an aide to look into it, he puts a twenty in a White House envelope and sends it on, ok, it happens. Serendipity wouldn’t be shameful.

This is, and it’s being done so brazenly that I think it suggests that the point of ultimate shamelessness is fast approaching. When it does, if it already has, then there really will be very little for anyone to do besides mockery and silence, besides accept our second-class citizenship in a country owned and operated by plutocrats for the religious right.

The one hope here is that a significant majority of Americans, based on a new poll, seem to recognize just how shameful Congressional involvement in this case is, just how much it is motivated by an indecent politics rather than a decent humanitarianism.

Of course, keep in mind that Orin Kerr at the Volokh Conspiracy complains that the poll's questions make "biased" statements which, uh, happen to be true, such as "doctors say she has no consciousness and her condition is irreversible". Give me a break. Should the poll question read, "There's a few doctors here and there who have a theory that she has no consciousness and her condition is irreversible. On the other hand, her parents think she smiles at them, and some religious people think that the Rapture could be tomorrow and Terri will awake and rise to Heaven. A truck-driver from Virginia who looked at her photo in the newspaper says that he thinks she looks aware. Some evolutionary psychologists say consciousness is an illusion anyway. A Buddhist in southern Asia who has never heard of the case observes in response to it that all life is suffering. A couple of comedians last night made jokes about her being a vegetable. So what do you think of this case?" I think, I hope, that a strong majority does see what the story here really is: the shamelessness of the Congress and the President. Because they're only going to find their sense of shame again if we force them to.

[permalink]


March 18, 2005

Volokh's Bloodlust

I am going to gingerly attempt to be one of about three people to not simply condemn Eugene Volokh’s now-infamous “bloodlust” post.

You could defend Volokh a bit simply by confessing to an emotional connection to his post, by saying that it’s perfectly understandable that one should feel a brief moment of sympathy with the brutality of the Iranian victims towards their victimizer. But Volokh is very clear that he’s not just giving vent to an embarrassing impulse: he appears to be utterly serious about admiring the model of jurisprudence in this case. So it would be condescending to just pat him on the head and say, “There, there, we’ve all felt that way from time to time.”

You could instead suggest that maybe Volokh is giving voice to a form of popular sentiment, to an everyday vision of justice, criminality and punishment that burbles underneath our other ideas about jurisprudence. A decent proportion of American popular culture, for example, contains or endorses similarly retributive ideas about ultimate justice. I remember watching the first RoboCop in a movie theater in Baltimore, and much of the audience simply ignored Paul Verhoeven’s ironic tone to cheer on the ultraviolent punishment dealt to criminals in the climax.

Retributive justice doesn’t just crop up in popular culture as a straightforwardly right-wing idea, either. There are plenty of both light and serious entertainments that feature protagonists who struggle with or are conflicted by the appeal of a retributive solution to criminality and evil. There are plenty of thoughtful writers and artists who pause to dwell on the desire for retributive justice without self-righteously condemning that desire as barbaric or shameful.

Certainly I have known many people on the left who have at times ambivalently accepted, justified, or at least looked away from the exercise of retributive justice in popular struggles: necklacing in apartheid South Africa, for example. Like some of Volokh’s critics, I wonder if Volokh understands fully that his endorsement of this particular case would make it difficult for him to condemn similar exercises of popular justice in other contexts, but that could be said also about anyone who has at any time accepted the possibility that justice can be achieved outside of strict forms of constitutional due process and the forbidding of cruel and unusual punishment.

I’m guessing that we all in some fashion situationally or circumstantially accept that “poetic justice” might legitimately exist in the real world as well as in our fictions and entertainments. I’m also guessing that in some fashion most of us recognize that there is a retributive strain woven into liberal democratic systems of crime and punishment. We also sometimes try to put a convicted criminal in direct contact with his victims: we may not let them beat or stab the person who did them harm, but we do allow the families of victims an opportunity to emotionally assault a criminal, to try and make the criminal feel the anguish and suffering of the people his actions have harmed. Yes, there’s a huge difference between that and the Iranian case, but there’s still a kind of conceptual kinship, a moment where we try to peel the intervening layer of state mechanisms away and create a direct human confrontation between victim and victimizer.

I think the most intricate reaction I have to Volokh’s piece involves the complex genesis of modern liberal-democratic jurisprudence, and some troubling doubts I have about its relationship to the long history of human consciousness and personhood. I’m not entirely certain that some of those who are attacking Volokh recognize the implicit commitments embedded within their attack. If they do, fine, but I want to be sure that what is implicit becomes explicit at some point.

A historian who studies European societies from the late medieval era to the present has to be uncomfortably aware of just how different the fundamental conceptions of justice, crime and punishment were in the not-too-distant past. Michel Foucault is far from the only one to have noticed this fact. Hangings in England and elsewhere appear to have been unselfconscious forms of popular spectacle and entertainment. You can look and look for the haunted conscience of modern subjectivity in those crowds and never find it; instead what you see are many people gathered to watch hangings the way we might gather to watch a 4th of July parade. Many rural communities throughout Europe into the early modern period tried and punished domestic and wild animals for committing crimes against property or crimes of violence. Many people, both elites and commoners, appeared by our standards to be indifferent to certain kinds of pain and suffering on the part of others.

The cultural and social specifics on such issues were often very different in non-Western societies before 1750, but the relative alienness of those pasts to liberal-democratic sensibilities in the present day is often equally pronounced. We’re accustomed to shuddering in horror at the prevalence of human sacrifice in some of the large-scale pre-Columbian societies, but it’s fairly clear that our moral understanding of such practices didn’t exist within those historical worlds. It’s fairly clear that it took the violence and destructiveness of the Atlantic slave trade to turn the practice of kinship slavery in West and Equatorial African societies into a moral issue instead of an ordinary part of social practice.

I’m not saying here that because modern ethical frameworks did not exist in the past we cannot judge those past societies as immoral. But I am saying that to judge commits one to a narrative of progress, to an acceptance of the present as superior to the past. The crowds who gathered at hangings in England before 1750 were not barbaric or savage within their own context. They can only become so from within our own contemporary frame of mind, our own understanding of human progress. To successfully curl our lips in disgust at the past in this respect means not just that we accept that we are different from them, but that we are better.

That has some tricky implications when it’s brought into the framework of the case that Volokh cited, because here we are dealing not with the past, but with two different framings of the present. I hasten to say that the Iranian case cited is not “backward”: in its own way, it’s as modern as we are. The world lives in simultaneous modernity now. But it is different, and it’s a form of difference that I think at least some of those condemning Volokh might otherwise show extraordinary wariness about judging or attacking. Nobody among those attacking Volokh is quite saying, “Those Muslim barbarians!”: they’re very carefully keeping their eyes on Volokh himself. But you almost can’t attack Volokh in this case without committing to a vision of human progress that suggests the Iranian judicial system and even the ordinary Iranians who participated are in some way barbaric.

The whole discussion has a strange ironic cast to it. Volokh almost sounds like a parody of the classic cultural relativist—make no judgments about the Other, in fact, romantically admire the Other for having a better, older, more elementally human way of living socially. Volokh's strongest critics sound like the classic arch-defenders of the Western tradition: the Other in this case is a barbarian, backward, savage; the sooner that the forces of progress and reason can bring this savagery to heel, the better.

Of course, as the really good discussion at Unfogged suggests, the idea of Volokh or anyone else choosing to just embrace a completely different paradigm about violence and justice, to change fundamental social and political norms like a fashion statement, is wrong-headed long before you get to the matter of whether it’s desirable. Volokh in this sense is just as silly as counterculturalist environmentalists who think everyone should live like hunter-gatherers: whether or not that's a good idea, the proposition fails because it's not possible to scribble over the fundamentals of the social order in order to live some other idealized way of life, not at that scale.

So ultimately I don’t disagree with the strong criticism of Volokh, but I do think that both the Iranian case that drew his attention and his uncharacteristically volatile admiration for that case are more complicated than they look at first sight.

[permalink]


March 17, 2005

Fighting For the Equality of Banality?

Dahlia Lithwick has a good column that takes on the recent debate about women writers and newspaper op-ed pages. Bloggers have had their own version of the same debate recently, and it presaged some of the charges and counter-charges being made now about op-ed pages.

Lithwick makes the point that female op-ed writers have felt obligated to say something about the issue, but that men evidently don’t, and that the issue can’t be resolved until the men feel that same obligation. You could say the same about the debate over blogs and gender: almost every blog I know written by a woman had something to say about the issue when it came up, while many male-authored blogs didn’t say anything, including my own.

But Lithwick ends up answering her own question, “Why don’t the men respond?” in ways that I think she scarcely notices. By noticing male absence and female presence, she preemptively identifies male absence as a problem, a symptom, as having an assigned meaning. She begins to make guesses about why men don’t respond, all of which in some fashion cast that non-response as a failure or inadequacy, even when she's sympathetic to what she sees as the reason for that failure (for example, fear of being the target of politically correct ire).

I think that’s a very deep problem with this recurrent debate. I’ve written before about why I find Deborah Tannen and Tannen-ite arguments so frustrating and this is a large part of what is frustrating about them. They preemptively circumscribe the possible answers that a male listener can make to some accusations about exclusion or suppression of female voices, they reductively compress answers that might try to assert the complexity or range of the problem into simple statements of evasion, complicity or shame.

Once Lithwick has put it the way that she has, the choices of response narrow enormously. You can either say, “Look, there’s no pattern at all” and expect a scornful reply, and rightfully so. The pattern, whether we’re talking blogs or op-eds, is real. It exists. Or you can say, “It’s because women writers can’t or don’t want to write op-eds the way they have to be written”, and expect equal scorn, and again rightfully so. Kevin Drum got his drubbings pretty justifiably because he asserted that women don’t write certain kinds of blogs, when demonstrably they do write them. Or you can say, "You're right, I was afraid of speaking up"--but notice that Lithwick hasn't tried to imagine or create space for what legitimate thing a male op-ed writer might want to say that would make them afraid of the response they'll get.

What you can’t say is, “I think you’re asking the wrong questions.” Or “Sometimes what you’re talking about is a problem, and sometimes it isn’t. Sometimes women op-ed writers are being excluded, other times what they’ve written just sucks: there isn’t a generalization that covers all cases.”

Most importantly, what gets shoved aside is a prior conversation about the nature of an op-ed page (or a blog) that scrupulously doesn’t yet bring gender into the picture. I think it’s fair to say that at least some of those who want more female voices or more diversity also want op-ed pages to be different from what they are, to not only change the representation but the content. It’s equally apparent that some of the female critics don’t want those op-ed pages to change one iota in their tone or composition, just to have a better balance between men and women.

Tannen’s paradigm, when it creeps into these discussions, tends to pin everyone to Tannen’s ultimately stereotypical, high-toned version of Mars and Venus. Tannen observes those conversational or argumentative differences as a matter of sociological or ethnographic fact, and to some extent they are, but she tends to suggest—and her followers even more so—that those differences ought to be that way, and the “female” style ought to displace the “male” one because it’s morally or socially preferable.

If the question, “What should a mainstream newspaper’s op-ed page look like?” or for that matter, “What’s the kind of blogging I like best or want to link to?” precedes the complaint about gender and representation, if it doesn’t presume a certain answer to the problem of gender, then suddenly almost everyone is freed from the script. Kevin Drum doesn’t have to make the incorrect assertion, “There aren’t any women writing the kind of blogs that I link to”; he can say, “These are the kind of blogs I link to” and then be forced to struggle to define what those kinds of blogs are. One of the things he can say, if he wants, is that he only links to blogs that have an already-existing pre-eminence. Yes, I know that answer’s going to cause a problem in gender terms, but the point is to defer that problem to the next conversation, to not presume that problem in advance, to give everyone a chance to talk about why they have a particular aesthetic without having to already defend that preference as one which causes a diversity problem, or to presume that men will have one preference and women another.

In the context of blogging, for example, I want to be able to say, “I don’t enjoy blogs that are more like livejournals or diaries for the most part” before I have to deal with the question of what that means in gendered terms. I want to say, “I don’t enjoy simple news aggregators either,” and “I don’t enjoy single-note ideological blogs” before anyone guesses about what that means for the gender composition of my preferences.

In the context of op-eds, male editors and writers should be able to say, “But I like op-ed pages just the way they are in terms of the kinds of content they feature” before someone says, “So you just like to read what men have to say, eh?”

When we get these discussions in the right order, there’s a much better chance that we’ll find out that some women op-ed writers also want op-ed pages to read just as they do now, and some don’t. We might find that men also divide along those lines: maybe there are men who want to write op-eds or men who want to read columns that are fundamentally unlike what is typically found there now.

Then there’s the inevitably messy question of what makes a columnist (or blogger) “good”, a question that can only be answered in interesting ways if the people answering it are allowed to be brutally honest about what they think makes a columnist or blogger “bad” as well. Let me ask it this way: if I find the kind of stuff that your average pundit tied to the Democratic or Republican Party has to offer a load of banal crap—I pretty much feel the way Jon Stewart does on this issue—then it’s hard to know why I should fight for Susan Estrich to publish the same kind of banal party-line punditry on op-ed pages as the men already publish. I’m not sure what exactly that accomplishes beyond a sort of so-what equity. I think that equity is a good thing, sure, but it’s not where I want to spend my energies, fighting so that some women can publish the same amount and kind of crap as some men. If subverting the dominant link hierarchy means linking to a female Instapundit, well, pardon me if I think that’s not exactly a triumph worth investing lots of labor in achieving.

Sure, the point that many made in reply to Kevin Drum is apt: there are already many female Instapundits, and if you like Instapundit, maybe you ought to be linking to them. Why don’t you?

But we already know the story about power laws and blogging, and there’s a version of that with op-eds, too. What gets up there first reproduces itself over time, a fact which is not without implications in gendered terms, given historical patterns of male dominance. However. If it’s worth spending time fighting the power (law), I’d rather spend that time finding what interests me, whomever the author might be, than laboriously sifting and sorting to get a 50-50 balance of men and women saying the same old stuff. In fact, to satisfy my interest in different content, I almost think I have to be indifferent to the question of gender equity: I just need to go where I like, to what I like. It may turn out that’s by women, it may turn out that’s by men, but if I assume in advance that it has to be by women, I’m going to pre-purchase a Tannenite bill of stereotypical goods, I’m going to force myself to ignore some of the things that catch my eye because they’re written by the wrong kind of person, because the first and last goal is numerical parity.

[permalink]


March 16, 2005

Calling Patrick Nielsen Hayden

There have been quite a few revivified “space operas” in recent years that I’ve really enjoyed, many of them playing around in some of the same kinds of conceptual spaces, in futures where the definition and meaning of “humanity” has become quite plastic. Some of them read a bit like “Book of the New Sun meets E.E. Doc Smith”, such as John Wright’s great series that begins with The Golden Age.

One of these series I quite liked was by Tony Daniel: the two books published are Metaplanetary and Superluminal. It wasn’t the greatest work of its kind that I’ve ever read, but it was extremely enjoyable, with an infectiously page-turning narrative, some interesting ideas, some strong characters. I’ve been looking at Amazon for a while wondering when the next volume was coming out. Today I decided to google “Tony Daniel” and I found out that the next volume is never coming out, because his publisher decided to cancel it.

I don’t quite understand why an SF publisher would stop short of finishing a three-part series, even if the sales weren’t stellar on the first two parts. Why not go the distance? Some SF series which start in a rather unheralded way gain steam over time and eventually pay off pretty handsomely.

I think this is the most disappointing series cancellation I’ve come across. I was also pretty frustrated when Patrick Adkins’ excellent retelling of Greek mythology, focusing on the untold stories of the Titans that preceded Zeus, was cancelled, but I could understand that a bit better: it was a pretty boutique series, and by the third volume, Adkins was starting to get into territory where he was going to be retelling stories we already have heard rather than filling in a story only known in its outlines, a harder act to pull off.

I turned up a bit of evidence that another series I sort of liked, though this one was a bit cheesier, The Journeys of McGill Feighan, by Kevin O’Donnell, might actually be about to start up again after a really long hiatus. I just found out that a long-interrupted series by Pamela Sargent on the terraforming of Venus that I thought was decent enough was actually completed in 2002, which I didn’t realize—now the third book is already out of print!

There’s also old series where I occasionally mournfully snoop around to see if somehow there was another volume that I never saw—Sterling Lanier’s two Hiero books, for example.

I really wish somebody would make sure that Daniel’s series isn’t one of those series that people have to snuffle around looking for a conclusion to for the next two decades. Somebody give that man a contract, please. I promise to buy two copies. I want to see how it all turns out. Isn’t there some karmic principle that says that if Robert Jordan can continue his quest to destroy the remaining trees on planet Earth as long as there are still people dumb enough to buy his crimes against language and narrative, then a tightly plotted, entertaining SF trilogy should at least get to go the distance?

[permalink]


March 16, 2005

On the Other Side of the Screen

One of the things I was afraid of when I had a child was that everything I’d previously said about childhood and children was going to be contradicted. I’d had a couple of people predict, in fact, that I’d change the way I thought about television, commercialism, and so on.

So far, not happening, and I don’t feel that’s due to any closed-mindedness on my part. Yes, ok, I do see in a new light how difficult the advice to monitor what your child sees can be. For example, I really find the Sci-Fi Channel’s promotional bumpers very irritating, because you can be watching something largely innocent with your four-year-old and then the promotional bumper comes on that plugs a horror movie with pretty explicit footage. Walk away from the TV at the wrong moment and you might have a problem. Yes, ok, I am impressed at how seductive some commercial tie-ins can be. Emma was walking through the grocery store with me and every time she saw some character she knew on a cereal box or other product, she wanted it.

But come on, these are not such difficult challenges. I just pointed out to my four-year-old that we already tried Frosted Flakes once before and she didn’t like them, and she wouldn't like them any better just because the characters from Robots were on the back. Or I just tell her point blank that she’s not getting X, Y or Z. And she’s pretty good at censoring her own input: if something frightens her, she looks away and asks me to change the channel or skip the scene.

What I’m more pleased about is that Emma makes me feel even more confident about my more general claims on the innate “interpretative intelligence” of many children, more certain that most advocacy groups concerned about children’s media consumption flatly misunderstand children and underestimate their abilities.

We were watching Sesame Street a few days ago and a segment I remember from long ago came on where Ernie frets to Bert that he doesn’t feel special in any way. Bert enthusiastically reassures Ernie about his special individuality. It’s actually a kind of role-reversal: Ernie is uncharacteristically morose and depressed, Bert uncharacteristically enthusiastic and expressive. Ernie then turns to the camera and reassures the audience that yes, they too are also special. He instructs them to run their fingers through their own hair to feel how special their hair is.

So ok, I run my fingers through my hair. Emma looks at me curiously. “What are you doing, Dad?” I reply, “Ernie said to run your fingers through your hair.”

She looks at me incredulously. “Dad, he’s not talking to us. He’s talking to someone we can’t see on his side of the screen.” Considering how often children’s TV shows try to showily interact with the audience, I shouldn’t have been surprised that she had a worked-out interpretation of what was happening when they did so. But I was surprised, and delighted, even if I felt like the village idiot.

Then the segment ended and Emma turned to me. “Dad, I think Ernie was just pretending he didn’t feel special. I don’t think he really felt that way at all.” That’s not the first time that she’s smelled out the hidden agenda in a kid’s show without any prompting from me, but I was still impressed.

I honestly don’t think that this is because she’s smart (though she is and yeah I’m proud of her). I think it’s because so many well-intentioned children’s shows are much more obviously manipulative in their intent than the producers of those shows or their parental devotees tend to think. I think many kids have a nearly instinctive nose for manipulation, and most of them have an equally innate suspicion of it.

Good for them.

[permalink]


March 16, 2005

Tinpot

The New York Times story in last Sunday’s newspaper about the Bush Administration’s production of canned television “news” reports is bad news, and sensible people of all ideological stripes should be worried about it.

One of the first things you notice when you travel to most of postcolonial Africa is the creepily amateurish and cheesy way that official propaganda operates in the officially dominated public sphere. I don’t think it looks that way just to outsiders: just about every local I got to know, from the guys at the local bar to colleagues at the University of Zimbabwe, found most of this propaganda laughable and obvious. If the government was able to manipulate people successfully, it was usually through mechanisms and channels way off the public stage, away from the masks of power. The more tinpot the autocrat, the more tinpotted his attempts to control his image. Occasionally there have been autocrats with a certain style, with the grand vulgarity that Achille Mbembe has written about: Mobutu had a craft about his obscene grandiosity. Occasionally you get dictators with a gift for signifying the insanity of unconstrained power: Sani Abacha’s sunglasses, Idi Amin’s deliberate absurdities, Jean-Bedel Bokassa’s performative cannibalism. But it’s all tinpot, even the relatively bland stuff like putting the head of state’s portrait in every nook and cranny of public space or using the state-run media for banally crude repetitions of everyday official falsehoods.

That’s the way I feel about the Bush Administration’s media efforts. Before I get to any grand concerns about the nature of “objectivity” or “bias” in the media or the expansion of federal authority, I just feel a more visceral, emotional disgust at this shift. Sure, the US government has always made propaganda of some sort or another, and sure, most Presidential Administrations since 1960 have had complicatedly collusive relations with the press at times, but this just feels different. Whatever larger issues it raises, it first and foremost seems to lack class. To be tinpot. To make the United States feel less like a place that is unlike everywhere else in the world.

And that is a practical problem that goes well beyond my own discomfort. The Bush Administration continues to flail around trying to improve the image of the United States abroad, particularly in the Arab world, now turning to Karen Hughes as the latest in a spectacularly inept selection of personnel and strategies for coming to grips with that problem.

You want to know how to improve America’s image abroad? To sell our policies? You want to know how the Bush Administration should promote its policy objectives, whether domestic or foreign? Here’s a different solution: don’t produce canned news reports. Prepare a digest of articles about a particular issue from ten major media outlets across the ideological spectrum, from National Review to The Nation. For extra credit, throw in some blog entries. Release it to the world. Do it every week on every major initiative. Put together a package of major American voices on political questions and send them on a tour. Send Juan Cole, Paul Berman, Leon Wieseltier, James Fallows, Katha Pollitt, Fareed Zakaria and Paul Wolfowitz abroad as a group to talk about the war in Iraq, doing panel discussions and individual lectures.

Do the same for any major initiative, even domestic ones. Collate what we already have to say as a nation and people and make that digest available for any who seek it.

That’s the opposite of tinpot. It’s what officially dominated public spheres in autocratic societies don’t do. It’s what a government unafraid of the freedom and diversity of its own citizens would do, should do.

Failing that, stay out of the business of manufacturing news. It’s not as if there is any shortage of media support for the Bush Administration: why manufacture news when there’s Fox? Once upon a time, when the United States government blasted the United Nations’ truly horrible plans for a New World Information Order, it was pretty easy to do so in a principled way that carried credibility and authority. Now I wonder. Stay out of the business of making puff-piece “news” about controversial policies not just because it’s a dangerous intrusion of the federal government into alarming terrain but also because it lacks class, at a time when the success of American policy abroad right now depends on being a class act.

[permalink]


March 15, 2005

Écoutez et Répétez Beep

If anyone asks me if I can speak French, I tell them I can still do dialogues from my high school ALM textbook, the ones where we used to listen to the dialogues on a tape loop and repeat them at the sound of the tone. My rather dotty high school French teacher would generally react roughly the same whether we collectively mumbled or precisely reproduced the French on the tape, unless you were dumb enough to say “Pair-ehs” instead of “Paree”.

“Michel, Anne, vous travaillez? Euh, non, nous regardons la télévision. Pourquoi?”

“Il est laid, ce bébé! Eh, doucement, c’est moi!”

Very useful stuff, that.

Scott McLemee writes about recent claims that the lecture is vanishing from the armamentarium of academics, and I can’t help but think in part that the essay by Stanley Solomon that McLemee is reacting to assumes that what people in education departments say about pedagogy is in fact what is actually happening pedagogically in university classrooms. This seems a bad assumption to me: I sometimes feel that the scholarship written by specialists in education on pedagogy exists in a separate, parallel dimension, as if it were about a university in a Borges story. The assumptions of such scholarship, and its prevailing statements about what is or is not a best practice, at least have an oblique relationship to everyday teaching practice in most departments and by most professors.

In practical terms, I suspect almost every academic lectures, even at the smallest liberal arts college where discussions are considered the pedagogical norm. I prefer discussion formats but there are plenty of times where I lecture, and even classes that I design substantially around lectures.

If lecturing has gotten a bad name, it’s because there are some consistent flaws in the way that some professors do it. First, and obviously, some professors are just hopelessly boring in their lectures, whether or not the content of the lecture is well-constructed. This is not a personal judgment: I think almost anyone can learn not to be horribly boring. Faculty are boring when they stick too tightly to a pre-written text: you can’t be interesting if you’re just reading something aloud, and in any event, I think reading a pre-written lecture sometimes suggests that you don’t really have a confident command of the material. Faculty are boring when they ramble disjointedly about nothing in particular: that’s the opposite problem from just reading a fully written lecture. Faculty are boring when they don’t communicate or connect with their audience, when the lecture is constructed not to and for the students in the room, but for someone else entirely.

A lecture like that is very much like my old high school French class, and it deserves to be slagged on. Yes, there were and probably still are academics who commit those sins. No, there's no reason to mourn if that kind of lecture, which really serves no purpose, disappears.

I think an even more common problem is the failure to make a strategic decision about why and how to lecture within the overall plan of a course. Some professors schedule lectures simply to schedule lectures, and then look to fill the preordained calendar with things to say. My rule of thumb is, if there’s a good, compact, and usefully engaging reading that covers background material well, I’m glad to assign it and to use that as my method for delivering that content. In modern African history, I don’t feel for the most part that there is any textbook that serves that purpose in a way that I am comfortable with. So I take on that job myself. It would be a mistake to get up and just repeat what an assigned reading had said, but that’s what a lot of academics do.

It’s also a mistake not to call upon what you’ve said in lecture later in the course, either to expect to see it used on exams or papers, or to carry it through into discussion. That too is an issue: lectures that just seem to hang out there in isolation, never being put into any kind of practice, an ordeal for their own sake.

Probably you can overcorrect for these problems. There’s a boundary where being engaging starts to slip into a contentless song-and-dance routine, where entertainment erodes education. There’s some usefulness to repeating and emphasizing what was said in the readings. There are topics which you feel obligated to cover in a lecture that are hard, for intrinsic reasons, to connect to the rest of the content.

And yes, some people really are just extraordinarily gifted at lecturing, not to be imitated by the rest of us. One of my colleagues here in Political Science is pretty famous with generations of Swarthmore undergrads for his lecturing skills. I’ve never forgotten some of the material I learned from lectures by Bruce Masters at Wesleyan University, where I was an undergraduate: he just had a way of compressing immense amounts of detail into highly memorable, well-organized, entertaining packages.

I actually suspect that the real problem out there is not that the lecture is disappearing, but that most faculty have no idea how to manage discussions so that they don’t just turn into meandering bull sessions or self-confirming smugness. I think that’s a much harder pedagogical nut to crack than the placement and delivery of lectures.

[permalink]


March 9, 2005

At the Checkpoint

This week the bad David Brooks ambled out and degraded the intelligence of his readers. I’ve come to have an appreciation for Brooks when he’s on his game: he can be entertaining and interesting as well as provocative. But his love poem to Paul Wolfowitz is the other David Brooks, the one given to intellectual sloth and bogus generalizations.

Wolfowitz’s most ardent critics are not and have never been the “infantile left”, if by that Brooks means the antiwar faction that tends to be most strongly drawn to the views of Michael Moore et al. Wolfowitz actually plays a small role in Fahrenheit, albeit a memorable one visually. For the antiwar movement that Brooks gestures towards, the neoconservative argument about Iraq is a mere smoke screen for something deeper, something prior: the expansion of American economic and geopolitical hegemony, US interests in Middle Eastern oil, the Bush family fortune, and so on. Even anti-Zionist critics tend to see neocon arguments about democracy in the Middle East as a mere distraction from a “real” agenda, namely, the support of Israel.

So Brooks wants to play gallant knight and rescue his fair damsel from these dragons, but in so doing just re-enacts the same sick, sad, dull tableaux that the neoconservatives in power and their ardent groupies like Christopher Hitchens and Michael Totten have been replaying over and over again since the drums of war started pounding. It’s a shadowplay that lets them avoid the elephant in the room, namely, a substantial, thoughtful, deeply intellectual disagreement about the historical genesis of liberal democracy in the world.

Wolfowitz is interesting. I agree with that. He’s also a genuine intellectual. He has a theory about liberal democracy and its relationship to 21st Century humanity, a theory that other thinkers have elaborated in even more interesting and passionate ways. He’s in the middle of a test of his theory. These are all true: I've long said that I think it’s a mistake to just cast all this aside and go looking for the “real” motive, like oil or graft or Zionism. It’s a curious paradox: one of the most anti-intellectual Presidents in the last century has subcontracted out his foreign policy, the centerpiece of his Presidency, to intellectuals.

Now it does occur to me, somewhat bitterly, that I thought we’d learned a lesson about this with Vietnam, that allowing intellectuals to test grand geopolitical theories without some common sense checks and balances, not to mention some healthy pluralism and skepticism within the circles of power, is a really bad idea. More importantly, though, if Brooks wants to write a hagiography of Wolfowitz, he’s got to ask an intellectual’s question about his favored intellectual. Namely, how well has his grand geopolitical experiment gone so far?

I think there are a few interesting things out there in the last month or so that a Wolfowitz defender can legitimately say are intriguing, promising: the Iraqi elections, the political news out of Egypt and Lebanon. These are developments that even his critics have to pause and be thoughtful about, particularly the Iraqi elections. I don't share the extreme hostility of Juan Cole and others towards those elections: I think something genuine and meaningful happened. I think there are long-term developments that are also promising for Wolfowitz and his defenders. For one, it’s clear that the Iraqi insurgency (I think insurgencies, plural), whatever it is, is not an authentically popular revolutionary movement with genuine aspirations to gaining national power or controlling the state, that instead it is a kind of gangster nihilism. That can only be slightly encouraging, however, since the United States and its Iraqi clients have for the most part failed to establish themselves as the overwhelmingly preferable alternative. Iraqis drawn to neither, or trying to pursue the renovation of their society in serious ways while refusing either to bow to intimidation or to become compliant clients, have very slim reeds to grasp at.

The criticism of Wolfowitz has always come from much more powerfully serious thinkers and activists who question the generality of his theories and models and the specificity of his understanding of the region he’s experimenting with. The defenders of the war in Iraq, and of Wolfowitz specifically, usually refuse to engage with this criticism at all. If they do, they’ll gloss it, carelessly, as amoral “realism” (as David Adesnik did to Matthew Yglesias this week).

What’s at stake here is both an abstract theory and a quite empirical argument about how and when liberal democracy has taken hold in the world, and what actually defines “liberal democracy”. What's at stake here is also a principled argument about the conditionalities and realities of interventionism, one that asks in all seriousness that the pro-interventionists explain how they know which injustices require the immense cost and suffering of an intervention and which do not. Here it’s not just that Wolfowitz’s theory is up against a very strongly detailed, intellectually meticulous, and wide-ranging opposition, but also that Wolfowitz and his defenders are prone to a kind of horribly sloppy, contemptibly instrumental tendency to grab at any shred of evidence supporting their theories and completely ignore anything else. I have complained about this tendency before, and I’ll do so again, I’m sure. It offends me, mortally, deeply, profoundly. Nothing offends me more about the war, in fact, than the blunt instrumentalism and rationalizations, the evasions, the diversions. I'm a skeptic even about the best-practices argument for the war, but I'd be a much happier man now if I saw more examples of people making that best-practices argument.

Take Lebanon, for example. The neocon argument has always allegedly been premised on the claim that the achievement of freedom is a more important litmus test of the Bush approach than the narrow or exclusive establishment of democratic mechanisms for the selection of national and local leaders. Freedom of speech, of assembly, of conviction, the rule of law were to be the benchmarks. I think that’s actually a very sound insight. Outside pressure on postcolonial African states in the 1980s and 1990s was obsessively focused on getting multiparty democratic elections scheduled, without considering the far more important problem of political liberalization. The consequence of that, in part, was fair multiparty elections in states like Zambia that merely replaced the old corrupt autocrat with a new corrupt autocrat, the old ruling party with a new one.

Now in Lebanon suddenly the whole project of the neocons has taken an abrupt turn into much more conventional kinds of formulations about sovereignty and self-determination, that what is important for Lebanon is to get Syrian troops out, that the absence of Syria equals the achievement of democracy. Sorry, how so? If you’re really interested in the spread of liberalism, e.g., freedom, then you ought to be just as excited by a half-million people in the streets peacefully demonstrating in favor of the Syrian presence, even if you don’t like what they have to say. But evidently it’s more important to poke Syria in the eye and play certain kinds of power-politics, to move the yardstick of what constitutes “democracy” to “whatever George Bush wants”. When people say what you like, they’re heroically free. When they disagree with you, they’re lackeys and stooges. (Christopher Hitchens is especially fond of this formula.) Remind me: who are the amoral realists here? Remind me also: how can we, of all the actors on the global stage, afford to make an argument that sovereignty alone is the simple key to political liberalization, given what we’re trying to do in Iraq?

The criticism of Wolfowitz has come most strongly from scholars and intellectuals who protest that liberal practices and democratic norms grow from the bottom up, in organic ways, within the complex histories and cultures of a given society. If you believe that, then the screaming ignorance of Wolfowitz and most of the Iraq war planners about Iraq itself and its surrounding region becomes an issue. It’s only irrelevant if Wolfowitz’s faith in the simple universality of all modern human subjectivity is warranted, if all people everywhere not only yearn equally for liberal democracy, but have exactly the same highly specific working model of liberal democracy in mind when they so yearn. So Brooks is lazy, even contemptible, in absenting the real challenge to Wolfowitz from his panegyric. I think there are plenty of interesting developments on the ground in the last month that a more honest defender of Wolfowitz and the neocon vision of the Iraq War can point to and suggest that perhaps the neocons had some things right after all—but not if that requires an echo chamber or strawmen in order to be said with confidence.

There’s another direction where I wish that Wolfowitz’s defenders took him more seriously, where I wish that Wolfowitz and the other neocons inside the Administration took themselves more seriously. One problem is that the defenders of the war who share the neocon vision continue to lazily evade the real debate. Another is that the actual implementation of the vision comes pre-equipped with a crippling set of double standards that amount to a thorough form of self-sabotage. Even if you grant that the war’s aims are based on a serious and credible intellectual premise, you’d have to be worried about how badly that premise is operationalized.

This week’s news about the shooting of an Italian intelligence agent at a US-manned checkpoint is a good example. For months now, both Iraqis and outside observers have been talking about a pattern of reckless military aggression at checkpoints. They have often been met with overwrought, hysterical condemnation from pro-war pundits and bloggers, with accusations that showing concern over such incidents is just a tactic in a conspiratorial attempt to weaken the war effort. Hitchens hit the low note perfectly when he declared that the US can only lose in Iraq if it defeats itself, with the clear suggestion that any and all criticism of the war effort is a form of treason. Sorry, but that’s got it exactly backwards. If the war really is following the most generously constructed version of the neocon argument, it is absolutely crucial to treat every Iraqi citizen with the same presumptive respect that the US Constitution instructs the US government to show its own citizens.

The whole point of the occupation is to demonstrate the virtues of the rule of law, to move Iraqis from subjugation under autocracy to a society in which their rights-bearing humanity is fully recognized by the state. I’m absolutely in sympathy with the soldiers at those checkpoints, with their legitimate anxieties and fearfulness, facing the very real possibility of death from suicide bombing. They’re not monsters when they shoot quickly at any possible threat. But at the same time, if you hand the men and women on those dangerous, deadly firing lines a ready-made alibi, if you don’t have meaningful oversight or a demand for restraint, even saints are in time going to pre-emptively open fire on anything that even vaguely concerns them, and more orphans and even allies are going to tumble out of the back of cars coated in the blood of their loved ones and associates. And afterwards, they’re going to say that the car was speeding, or failed to respond to commands, when very possibly the car and its inhabitants were guilty only of being in the wrong place at the wrong time. If the uncritical, unthinking defenders of the war habitually froth at the mouth every time this happens, cry “Rally round our troops, boys!”, and presumptively believe that whatever the Pentagon is serving up today must be true, they’re turning their backs on their own declared war objectives. The Iraqis are owed the same oversight, diligence and skepticism about authority that we would demand for ourselves.

If there is anyone who ought to be deeply, gravely concerned about unwarranted shootings at checkpoints, accidental deaths of civilians, torture in US prisons, killings of surrendered prisoners, it’s the advocates of the war, at least the ones who believe in the Wolfowitz vision as it is represented by Brooks, Hitchens and others. They ought to be concerned for very functional reasons, because failures of these kinds are effectively losses on the battlefield as grave and serious as Bull Run or Gazala. They ought to be concerned also for philosophical reasons, the same way I would be concerned if the police started busting down the doors in my own neighborhood for what seemed flimsy reasons and then hauling away some of my neighbors without any real due process.

If Wolfowitz and his defenders want to convince us that humanity is united by its universal thirst for liberal democratic freedoms, then how can they possibly react to injustice or error in Iraq with anything less than the grave and persistent concern they would exhibit in a domestic US context? Where’s the genuine regret, the mourning, the persistent and authentic sympathy? I don’t mean some bullshit one-liner you toss off before moving on to slam Michael Moore again for three or four paragraphs, I mean the kind of consistent attention and depth of compassion that signals that you take the humanity and, more signally, the rights of Iraqis as seriously as you take the humanity of your neighbors. Only when you’ve got that concern credibly in place, as a fundamental part of your political and moral vision, do you get to mournfully accept that some innocents must die in the struggle to achieve freedom.

The Wolfowitzian defenders of the war want to skip Go and collect $200 on this one, go straight to the day two centuries hence when the innocent dead recede safely into the bloody haze of anonymous tragedy. Sorry, but this is not on offer, least of all for them. If they can’t find the time, emotion and intellectual rigor to be as consumed by the case of a blameless mother and father turned into gore and sprayed on their children as they are by what Sean Penn had to say about the war last week, then their entire argument about the war is nothing more than the high-minded veneer of a more bestial and reasonless fury. If Brooks or anyone else wants to rise to toast Paul Wolfowitz, then they’ll have to live up to the vision they attribute to him, and meet the real problems and failures of that vision honestly and seriously.

[permalink]


March 3, 2005

Down In the Dumps

Wealth Bondage has a wonderfully expressed post, "How to Write Like a Liberal Sack of Garbage", that I found through The Weblog. It’s the kind of critique that leaves me at a loss, though. Vastly more interesting and intelligent and ambivalent than the usual dog-bites-man drama of radical anger at perceived liberal wussiness, but it leaves me in the same place: half Jimmy Cagney dancing on top of the burning gas tanks screaming “Top of the world, Ma!”, defiant at the accuser; half Jimmy Stewart with a startled, sleepy innocence saying “Gosh, gee-willikers, what’s all the fuss about?” Maybe the right thing is Travis Bickle: you talking to me? You talking to me? Or Groucho Marx, knowing that to be kicked out of the club is far better than to be admitted to it.

In any event, here’s what I posted in the comments thread at Wealth Bondage:

I'm sorry, I missed the part where the virile alternative to liberal eunuchry and mounting the Cross with hammer in hand was laid out. If it just comes down to what gives you pleasure--letting id take pen in hand, and glorious glossolalia shouted freely to no one save the others huddled in the miserable pews, venom shared among poison congregants against the new ruling scum and liberals too cowardly to just give up and join the defeated in their justifiable rages--then I'll stick with what gives me pleasure, which is the pursuit of public reason and the hope that decency is widely distributed if slow to rise to the surface when it is under assault. Because then it just comes down to which circle jerk you want to join: yours or the liberals', and I suppose I rather prefer to wank where I'm used to the other wankers.

If on the other hand you think you've got a better hand to play against the Horowitz types, I think I missed that part of the entry. I got to the dump part but you sort of just left me there in the garbage with everybody else who followed the piper's tune. I suppose it's better to huddle in loving com-misery amidst the garbage than to actually be garbage like Horowitz, but it doesn't seem to actually rise to anything resembling a "plan" with the usual feature of a plan, the promise of superior accomplishment of some shared objective. Since most of the bill of particulars laid against the oh-so-reasonable liberals is that they're foredoomed to failure, the implication is that you’ve got a better mousetrap in mind. If not, then we're back to deciding which circle to jerk in, and that's just a matter of taste. De gustibus non est disputandum.

To continue on this theme, I know it seems a hopelessly prosaic, uncool, crude response to the poetics of the original essay to ask, “So you got a better idea?” But that’s a big part of it for me. For eighteen billion reasons, at least two or three of them pretty good ones, I simply don’t buy that Coulter, Limbaugh, Horowitz and others succeed in the public sphere merely because of power, merely because of domination, merely because of false consciousness, or whatever analytic alibi is being peddled this week. I don’t deny that power is present, that domination plays a role, that the culture industry exists, that some people have a clearly false understanding of the world and their place within it. But the bad propagandists and conspirators of the populist right also succeed because they sometimes hit a persuasive note, understand the architecture of popular consciousness, reach people where they live. Stuart Hall tried to get the British left to understand this under the assault of Thatcherism, that it doesn’t do any good to go whining off about the media or the corporations or evil old Maggie while you wait around for your own favored Godot, the hidden king-figure of the true and righteous masses that you think lies slumbering six fathoms deep in the social landscape and needs only hear the right notes on the ideological horn to awake and arise. It does even less good to descend into the self-confirming doominess of the postwar Frankfurt School, to take consolation in the purity and righteousness and fury of our own thought while agreeing in dismay that the masses are asses and that we hate television.

So if you take it as a given that Coulter, Horowitz and so on are indecent, destructive, and malevolently instrumental, that they have no interest in genuine communication or democratic discourse, that they intend to use the public sphere as a tool to destroy democratic practice and all their enemies in the process, that nevertheless doesn’t mean that you have to accept an account of their effectiveness that removes them from history and turns them into superhuman demons gifted with inexplicable powers. If they connect, however imperfectly, with some audiences, that is not merely a consequence of having web pages or access to Fox News. It is also because some of what they say is heard by some mass audiences as having some truth value.

I write as a liberal sack of garbage not because I imagine that I am writing to Gentlemen and Ladies on the other side who will accord me the same respect. I write it because the only way to win a rigged game is to play fair and hope that the onlookers will eventually notice who cheats and who does not. I write as a sack of garbage because I believe first that you cannot take arms against a vast sea of your fellow humans and either hope or wish to win. Because I think you have to listen for what your enemies say to find out what among their statements makes some fractured sense to the larger audiences drawn to them, and figure out how to rescue those accidental honesties and make them respectable, real ones. I think you can only do that with your cards on the table: the game is not being played against Coulter or Horowitz, but with an eye to the spectators. Setting out to win that game of sympathies with a conscious will to lie, to hide the cards, to match cheat for cheat, is a bad idea both because it obscures the ultimate purpose of the struggle and because it actually hands another weapon to the cheaters. The spectators are watching: if we start to match them lie for lie, cheat for cheat, cheap shot for cheap shot, we walk right into the caricature that’s been drawn of us. I write as I do because I’m hoping to connect with popular veins of consciousness and knowledge that are very different from mine on terms of mutual toleration, possibly even respect, to persuade others with a certain humility of ambition and affect without losing sight of my faith in the rightness and soundness of my views. The Happy Tutor suggests in a response to my criticism that yes indeedy, the thing to do is to lie, to match every lie with a lie. I just can’t do that. I don’t think I ought to, either.

In the comments at Wealth Bondage, Turbulent Velvet suggests that I’m being kind of uncool by taking it all so seriously, or by seeing myself as one of the targets of the original essay. To some extent, this is a demonstration of the basic difference here between the preferred kinds of public language that the Happy Tutor’s essay is concerned with. On one hand, me, the sack of garbage who tries to communicate plainly, ploddingly, earnestly, in good faith, naïve and pompous at the same time; on the other hand, the clever, ingenious, subversive voice full of double-meanings and ciphers that can score wounds and then plausibly deny that any wounding was meant. The square and the hip, the establishment and the subversive. Summing it up that way just makes you want to open a big can of Kumbaya, to think of the conversation as just one of those accidental misunderstandings that happen when two superheroes meet up, or just one more episode in the long marital quarrel between liberals and the left. Kiss and make up.

But as Turbulent Velvet observes, I seem perfectly happy with the thought that I’m not on the left any longer, if still a "liberal" of the kind the Happy Tutor describes. Significantly for me, that’s because I have absolutely zero desire to be reminded to join the team, to wait until it’s the right time to talk about those questions which people on the left have deemed unwise to talk about. Shut up about Ward Churchill, etc.: it’s not yet time, you’re helping the bad guys. Smells like team spirit. To be honest, I don’t care about any of that, and I think the only way to keep from caring about any of that is to not worry about whether you’re “left”. It’s not why I write; it’s not what I write. I write what I like, what I feel compelled to write, what I think is true but also what happens to draw my interest.

But it’s also that I doubt very much the strategic instincts of some of the left, both for deep structural and for highly contingent reasons, and as much in the case of the Happy Tutor’s essay as any other. I recognize the left, broadly speaking, as sharing my sense of the dangers of the present moment, so I care very much what people on the left think, both for personal and strategic reasons. I mean, no matter what, it's the old neighborhood, you know? In its just-kidding-you-liberal-pussies way, the Happy Tutor’s essay suggests that the earnest liberal tribunes of public reason suck first and foremost because they’re going to lose the battle against the populist right, and in the most humiliating way. That’s what makes me grab the essay by its lapels and say, “So what’s your great idea, motherfucker?”, at which point it dissolves into totally great, seductive, beautiful prose that is roughly as prescriptively useful as Negri and Hardt calling the Multitude to some nebulously teleological barricades. As I observe later in the comments thread:

It frustrates me that this is seen as a dialectical honey pot and thus a triumph because I, a bear, have earnestly wandered into the trap, where the children can stick Piggy's head on a stick and howl in delight and yet also say, "What, me worry? Do you think we really meant to criticize?" It's an old kind of pomo kung-fu and I confess to grievous weariness with it even while appreciating its wit, its inventiveness, its rhetorical and tactical brilliance. Thus goes my entire professional life, I suppose: unable to indulge the stupidities of crude anti-postmodernism, unable to tolerate the cul-de-sac hipness of the painfully pomo. Forgive me my passing annoyance; I'll go back to playing the part of Gomer Pyle, and the sophisticates can get back to lounging about in their bathrobes and smoking their pipes.

Still, what the Tutor offers is certainly better than listening to umpteen million less interesting leftists tell me about the fabulous efficacy of Michael Moore at mobilizing the masses and how all we need is one, two, a thousand Fahrenheits. Even so, it doesn’t convince me that my own sense of the road ahead--both privately felt and publicly counseled--is lacking, and that I should just kick back and let someone else drive for a while.

[permalink]


March 1, 2005

Impersonation

Scott McLemee writes about the recent revelation that Emma Dunham Kelley-Hawkins, thought for a while to be a 19th Century African-American author, turns out to have been white. Over at Crooked Timber, the discussion has been about what this says about historicist literary criticism, that a mediocre author is interesting when she’s black and boring when she’s white. I don’t think it reveals anything especially horrible about historicism, except perhaps that it tends to have a bit of a threadbare functionalism in the way it reads and understands culture, that it reduces literature to the status of document. (Isn’t that what the discipline of history is for?) It’s also a bit frustrating that some historicism amounts to little more than a multicultural salvage project, to get “one more” person of a particular identity onto the list of “authors”. This seems to me to subordinate the more interesting problem of the historically changing nature of authorship and culture to a quest for “more voices”, as if we have to correct past injustices and silences by reading the cultural order of our own day into the past.

McLemee has some more interesting insights in his piece. For one, how much scholarship rests on the transmission of facts which, when you trace them back, tend to decompose into mere hints, suggestions or dubious interpretations at their point of origin. When I taught a class on a single primary text a few years back, some of the students were a bit unnerved at how weak some of the received wisdom or conventional scholarly understanding of the text actually seemed when we got into the specific close reading of it. This is one reason why academics make a big deal out of questions of precision and craftwork, because a tremendous amount of knowledge production actually relies on trust.

I think there’s another issue floating around in here that strikes me as important, if somewhat tangential. Some observers ask cynically how it is possible to read a novel in a new way simply because you think its author is black, to find nuance that you then say is not there once you’re back to thinking the author is white again. I think that says something very interesting not about the gullibility of literary critics but about how easy it is for many of us to convincingly simulate or represent the voices of other identities.

Periodically there have been enormous controversies over works commonly taken to be authored by people of color that turn out probably to have been written by whites. The Education of Little Tree is one of the best known examples, but there are many others. In fact, the entire modern global history of “identity” is absolutely loaded with people who capably perform or simulate an identity other than their "natural" one for their entire lives. Impersonation is as basic a fact of ethnic, racial, gender, religious and other identities as is “authenticity”.

I think there are a lot of things you can take away from this fact. One of them is that anyone who tries to enforce and regulate claims of authenticity in the domain of culture and representation ought to be regarded with suspicion. That’s the obvious lesson. The more subtle one might be that it is far more possible for people to empathetically and intellectually understand the experiences of others than our received wisdom about race, gender and other identities assumes, that a white American through intellect, will and emotional insight can credibly imagine what it is like to be a black American, that a woman can credibly imagine what it is like to be male, and so on. The governing metaphor I like to apply to this capacity is one of translation: that we can translate the experiences of others, sometimes through impersonation, sometimes just through intellectual inquiry.

Accepting—even embracing—impersonation as a possibility would be unsettling to the kind of historicist literary criticism that looks to textual content for final or fixed clues about the social identity of an author, that assumes a white man couldn’t convincingly act the part of a Native American or that Shakespeare couldn’t sound highly educated if he weren’t himself so. There’s a pretty good argument to be made that the central adaptive purpose of human intelligence in the course of its early evolution was the ability to imagine what’s going on inside the consciousness of another human or animal: we shouldn’t be surprised when we find black authors who sound white or white authors able to simulate blackness. It’s an ability that we prize in authors and artists some of the time: I suppose I think we should prize it all the time.

Perhaps that’s the problem with historicist literary criticism: by borrowing the often dour obsession of historians with the factual reliability of the archive, some historicist critics miss the point that their central job is to understand fiction. And even here, the insight you can take away from those fictions, from the capacity to make persuasive fictions about inner voices that are not our own, is potentially a powerful historical insight, a fact about identity in the past. It’s a bit of a cliché to stress the instability and mutability of identity, but far less so to grant the possibility that people often really, truly, deeply understand the inner terrain of other people’s consciousness across race, class and gender lines.

[permalink]


February 24, 2005

The Loonatics Have Taken Over the Asylum

Please show your children this image of “Buzz Bunny”, one of the characters from Warner Brothers’ upcoming “updating” of their classic animated cast. He’s the futuristic version of Bugs Bunny.

Let me know if the image “tests well” with your kids. Mine said, succinctly, “He’s really scary”. No prompting on my part, I promise.

Bugs Bunny, in any version, should never be “scary”. Nor do I particularly think he ought to have superpowers and fly around protecting a futuristic city. Warner Brothers has screwed the pooch before with attempts to cash in on Bugs and Co., most notably in the film “Space Jam” and in their babyification of the characters in “Baby Looney Tunes”. This appears to me to be the biggest bomb of all, though.

This is the flip side of what is so bad about the state of contemporary intellectual property rights. Lots of observers have noted that the current IP regime denies to current creators the ability to do what Disney did throughout the 20th Century, which is to revitalize old characters and stories that were in the public domain. What the current IP environment does that is equally bad is to condemn the holders of valuable intellectual property to wallowing in squalid pigpens filled with their own droppings.

The WB’s kidvid offerings are struggling, consisting mostly as they do of commercial tie-ins, third-rate leftovers from Japan, and misguided but not entirely awful attempts to revisit old franchises (the current “Batman” series on the WB). The two standout hits are “Teen Titans”, which I personally like a lot, and “Mucha Lucha!”, which I like even more. The answer to the problem of a struggling line-up is (as it always is) originality. “Teen Titans” works because it borrows (rather than dull-wittedly imitates) a lot of the visual tropes of Japanese animation, but also because it reworks some of the themes and narratives from the comics that inspired the series in entertaining ways. “Mucha Lucha!” works because there’s never been anything like it before, and because it’s wildly inventive and entertaining on its own terms.

In contrast, I would say that the entire concept, from art to themes, of “Loonatics” betrays a near-total lack of understanding of the intellectual property it proposes to take advantage of. At that juncture, you either have to think that some uniquely untalented person has been given the assignment and has sold his bosses on his bad ideas, or that the bosses issued a commandment to do something, anything, with the intellectual property they had locked up in their vaults. Squeeze one more drop of cash out of it, go back into the played-out mine one more time.

That degrades whatever value the old properties have, and it rarely pays off in terms of the new series. The latest layer of content is what sometimes sells an interest in the deeper layers. A good Looney Tunes series or film might prompt a young person to want to view the DVD anthologies. A bad one might strangle that impulse at birth. It may also be that sometimes you can’t go home again. Maybe Bugs and Co. can never be redone; maybe their moment is over. Maybe we don’t need more cartoons about them, but instead new cartoons made with the same originality and passion that the best Looney Tunes cartoons were made with.

I’m agnostic about whether there ever can be new Looney Tunes, but I’m not about whether “Loonatics” is it. This series feels like a disastrous act of vandalism.

[permalink]


February 22, 2005

The Trouble With Larry

Most of what I have to say about Larry Summers has been said already by others. He is not a martyr to political correctness. Many of his critics reacted in exaggerated or extreme ways, but the speech he gave was really quite weak.

It’s perfectly ok to get up and say something like, “We have to remain open to a variety of explanations for the relative lack of women in the sciences, including genetic or innate differences between men and women”. But Summers didn’t say that: he went on to speculate that this was the correct hypothesis. The current state of knowledge on this subject suggests fairly strongly that this is not a good hypothesis. If you’re the president of Harvard, you ought to know that if you’re going to shoot your mouth off on the topic.

One thing that I think many observers have overlooked, however, is the most inexcusable thing about Summers’ provocation: even if he’s completely right in his hypothesis, it has nothing to do with the representation of women on the Harvard faculty.

Let’s ignore the large body of research that casts doubt on or hugely complicates the working hypothesis that men are somehow adaptively better at science and mathematics. Let’s assume that Summers’ hypothesis is valid. Even in the best-case scenario for this kind of conjecture, we’re only talking about tendencies, not gender-based absolutes. That means that even if Summers’ hypothesis actually is the best explanation for the imbalance in the sciences, this imbalance should pose no difficulty for Harvard, should it judge it desirable to have more women on its science faculty.

Harvard is wealthy and powerful enough that should its president deem it a priority to staff its faculty with the most brilliant left-handed sociologists who cook a mean risotto and have surnames that start with the letter “M”, it could do so. Even if you wanted to be generous to the argument that affirmative action goals result in declining standards, that argument only applies to the average institution, to institutions which are presumed to lack the clout or financial power to compete for scarce goods and which therefore are presumed to have to lower their standards in order to achieve diversity.

None of this applies to Harvard. Even if genetic or innate differences mean that no more than 15% of the top scientists and mathematicians are women, Harvard could pay whatever was necessary to recruit from that 15% and achieve a faculty with a 50-50 balance of men and women.

So not only is Summers’ hypothesis a poor one in light of available research, it isn’t an alibi or explanation for gender imbalances at Harvard. The only way Summers could account for that imbalance would be to say that in his opinion achieving gender balance is an unimportant objective, or at least not worth the trouble involved. Now that, if he said it, would be a much bolder and more provocative statement, and curiously enough, a more defensible one than what he actually had to say. What Summers actually said was factually and intellectually dubious, and it was a lousy explanation for the specific institutional problem he was attempting to grapple with. If Summers wanted to get up and say, “Look, I don’t actually care what the genesis of the imbalance between men and women in the sciences is: it is in my judgment too expensive and labor-intensive for one institution like Harvard to heroically compensate for it”, then at least he would have started a conversation that could shed more light than heat. If you want to make a critical reply to that statement, you actually have to demonstrate either that it’s not that expensive or difficult to do, or that for some reason achieving gender balance is such a pressingly important objective that it outweighs many other priorities that might exist in the process of hiring and tenuring. These are both useful claims to constantly revisit, revise and challenge, even when you agree with them.

[permalink]


February 17, 2005

Misrecognitions and Mythologies

Like Amardeep Singh, I found recent reports about the mass visitation of Rastafarians to Ethiopia very interesting. It’s not the first time that the imagination of some in the African diaspora has come into collision with the historical reality of African societies.

There’s a deep history here, persistently characterized by what the literary scholar Ken Warren has called “misrecognitions”. The famous narrative of Olaudah Equiano, in which he claims to have been born an Igbo in what is now Nigeria (Vincent Carretta suggested in 1999 that Equiano may in fact have been born in North America), has a sequence that describes the genesis of these misrecognitions. Equiano goes from the specificity of his own community to a wider awareness of the multiplicity of the African societies around him to being a slave aboard a ship, where all that diversity and complexity were violently compressed into a new social identity. Africans did not cease remembering, knowing and practicing their cultures of origin in the middle passage, but in the iterations of their memory and the circulations of people and goods across the Atlantic, the historical evolution of specific African societies and that of the African diaspora were also disconnected from one another. The incorporation of “Yoruba” or “Igbo” or “Kongo” identities and practices into the African diaspora proceeded without immediate or direct reference to what “Yoruba” or “Igbo” or “Kongo” peoples were becoming within Africa itself.

By the 19th Century, and even more so the 20th, the African diaspora was completing the circuit back to Africa more often and with more and more autonomy. As travelers, pilgrims, investors, expatriates, missionaries, migrants and even colonizers, Africans in the diaspora came to African societies. Sometimes for a short while, sometimes for the whole of their lives. But Africa was rarely what they imagined it would be, and often, Africa disappointed. It disappointed because it was never home, and because Africans were largely uninterested in, bemused by or puzzled by the diasporic imagination of Africa.

This is still the case today. The Rastafarian gathering in Ethiopia was just one example out of many, where the vision of Africa held by some in the diaspora came into contact with the reality of a particular African society and its history, two ships passing awkwardly in the night. The veneration of a mythologized Haile Selassie by the Rastafarians bears very little resemblance to how Selassie is known and remembered by Ethiopians today.

This is a dynamic hardly unique to the African diaspora. Irish-Americans who traveled to Ireland, even before the recent economic boom, did not find the Ireland that is known to them within American popular culture. Nor is this just a diasporic problem. Civil War re-enactors, for all the meticulousness of their attention to material history, can sometimes be remarkably disconnected from the cultural, social or intellectual realities of antebellum America. Virtually every popular understanding of history that you can think of tends to run aground on the reality of the past it imagines.

Historians have a generic and often rather dour professional antagonism to these kinds of disconnects, a primal urge to dispel such illusions. More signally, there is a fairly large body of scholarship arguing, with varying degrees of scholarly care and balance, that there are particular bundles of images, representations and constructions of societies and their histories that are essential to the maintenance of domination, oppression, racism and the like, and that are the wellspring of structured forms of identity which constrain or oppress the individuals saddled with them. To a very large extent, that argument derives from Edward Said’s Orientalism, which remains the basic blueprint for such claims.

I’ve been thinking a lot about these issues because next year I’m once again going to teach a course called “The Image of Africa”, about the intellectual and cultural history of how Africa has been represented by other societies, including by Africans in the diaspora. As I think on it, I realize how far I’ve moved in my own assumptions from Said’s blueprint. The first time I taught the class, it was largely an attempt to document an African version of “orientalism”, of the use of images of Africa in colonialism and racism. That first version of the course had a middle section on the diaspora, and it was there that I vested all the ambiguity of the course, all the sense of an openness or debatability about the questions the material raised.

Over time, some of the questions I raised in that section of the course started to escape into the wider context, both for my students and for myself. First, I began to really doubt Said’s understanding of the genesis of “orientalist” images: it was entirely a one-sided, top-down, highly instrumental process in his reading. Power knew what power needed, power commissioned the optimal set of representations that maximized and authorized its domination. Said himself began to back away from this later in Culture and Imperialism, and some of the most skilled and intelligent scholars who sought to expand Orientalism, like Timothy Mitchell, also complicated the model. But the core remained the same: orientalism bore no resemblance to the historical reality of the places it represented, and its content came entirely from “the West”.

Later, I began to wonder far more about the ways in which orientalist images and representations were understood to be straightforward contributors to racist or imperialist ideology. The functionalism of Said’s original analysis became more and more urgent, demanding, and simplistic in the hands of those who followed him. To some extent, critiques that followed the model of Orientalism began to presume, with less and less explicit theorizing, that such images were not only incorporated into racism or colonialism but were explanatory of or causal to it. Eventually, the scholarship decomposed into a narrative of activism and an off-the-shelf theory of cultural interpretation. At that point, the mirroring of the cultural right and cultural left matured. Though drawn to very different texts or images, both groups in the United States shared an understanding of culture as cipher or code, a belief that almost-subliminal references to past tropes or images or stereotypes somehow transmitted the entire history of associated ideologies and systems to later generations, that even a subtle incorporation of historical misrecognition or misrepresentation contaminated the whole of culture.

One of the things that dramatized how troubled these assumptions were was the controversy over the aliens of “The Phantom Menace”. The last time I taught my class, I had the course culminate in a series of debates about contemporary controversies in popular culture and the media, and we tackled the discussion about racial stereotype in “Phantom” as part of that series. The obvious case of stereotype was the Trade Federation, whose members really did seem to me to be Charlie Chan reincarnations. But even in this case, where the referent was obvious, it was much harder to work towards an argument about the meaning or effect of that referent. How could younger audiences who had no idea who Charlie Chan was, and who lived in a world where the associated stereotypes of Asians have very little immediate or dramatic social force, be negatively affected by the invocation of the stereotype in such a disguised form? You could make the argument about the embeddedness of historical memory that allows that message to be heard and incorporated into consciousness, but that’s a very hard argument to make, not an easy one. When you get to the character of Boss Nass, which some critics saw as a disguised “African chief” stereotype, the problem only got much harder. I could see a bit of that allusion there, but for my students, getting to the point where they could “see” the same thing required watching a bunch of old Tarzan episodes and similar works which they would otherwise never encounter. And once you can “see” it with that trained—not at all naïve or everyday—eye, you’re still not home free. The question remains: what exactly does “seeing” Boss Nass as a chief do to those who see him? What’s so bad about it, really?

This is a question which, once asked, blazes across a very wide political and intellectual landscape. What’s wrong with Rastafarians believing in an Ethiopia and a Selassie that bears little resemblance to the reality of those places? What’s wrong with believing that Cape Coast Castle in Ghana was a major site from which slaves were shipped across the Atlantic, even though it was not? What’s wrong with any or all the myths we carry around about the past?

It seems to me that the answer that Said provides in Orientalism is no real answer at all, because it’s ultimately so incurious about the messy historical genesis of such images and so crude in its reading of their instrumental necessity. The answer of the positivist historian is no answer, either: that in all cases and circumstances, there is an absolutely equal and undifferentiated requirement to recall people to the historical truth that is misrecognized in their imaginings. That’s a dreary and rigid response to the richness of memory and imagination. But neither can we afford a beneficent embrace of all myths, all illusions, all fantasies of other places and other times. Some of them really are pernicious or malevolent; some of them really do contribute to the origins of disorder or injustice in the world.

I’ve come to think, however, that the test you have to apply to make that determination has to be several magnitudes more demanding and precise than it often presently is.

More modestly, sometimes such images or constructed pasts simply contribute to a communicative disconnect, to a failure of potential. You can’t help but wonder if maybe the Rastafarians would learn something important if they looked for the real Selassie—perhaps in fact they would renew their faith in the Selassie they imagine by recognizing just how imaginary he is, by unshackling him from a real man who walked the Earth. Maybe you can’t find Ethiopia until you know you’re looking for something that has yet to exist, something that you have to make from nothing, something you have yet to gain rather than something you have already had and lost. Perhaps this is also not so modest a point, but instead the real source of dangerous consequences. Perhaps we should worry less about garden-variety stereotypes sprinkled through popular culture like gaudy ornaments of some barely-recalled past, and worry more about the fervid dreamers, who see a whole and coherent picture in their imagination and set out to compel the world to align itself with their vision.

[permalink]


February 9, 2005

The Idiot God

It's hard to imagine libertarianism flourishing as a political movement anywhere besides the United States. The fusion of tropes of rugged individualism, strong skepticism about the power of the modern state, celebration of civil liberties and constitutionalism, and a sometimes naïve valorization of the market defines not only organized, committed libertarian thought in the United States but also a pervasive temperament that winds its way through a lot of American culture.

At the same time, I’m also struck by something that is harder to grasp and identify, something far less tangible: that the modern state (not the nation! different project!) is marked almost everywhere by a growing disaffection from the populations it governs. That disaffection manifests in a dizzying array of forms: the retreat into religious community or cultural chauvinism; the cynical anomie of many cosmopolitans and elites; the back-handed embrace of forms of corruption as being more human and reliable ways to obtain services. I’m far from being the only one to notice this larger decomposition of the liberal-bureaucratic state, and there are an enormous variety of interpretations of what it means and what ought to be done about it (if anything). Some continue to take this as a sign of the state’s subordination to globalizing capital, or to some less well-defined oppressive “modernity”. Others think it has to do with the modern state’s failure to reach achievable bureaucratic-rational forms and structures. Neoliberals and enthusiasts of globalization see it as confirmation of the need to reduce the state’s intrusive authority in many domains, especially those of the market. Secularists worry about the sources and character of religious resurgence.

I tend to think that what is going on is a little of all these things and more besides. I do sometimes find it hard to convince some of my closest liberal friends that there is any real reason for most Americans to feel antipathy towards “big government” as an idea or practice, unless we’re talking about the Bush Administration in particular. They point to all the positive things that government does in America, and all the positive things that it might do. “Are Americans really against roads, against regulating the stock market, against product labels, against Medicare, against police, against libraries?” they say plaintively. (I’ve joined in the chorus on many occasions, and I still would do so, depending on the provocation.)

What is it that makes many Americans, as well as other societies in the world, receptive to the idea that government is more enemy than friend? And why do I think that unease or antipathy is in complicated ways justified or understandable, that it comes from someplace authentic that I think liberals might learn to tap into and address sympathetically?

At least one part of the problem falls back onto some of the venerable insights of Max Weber and others writing in his tradition: the modern state is a sort of idiot god or drooling child that blunders well-meaningly into the intimate terrain of everyday life and makes a mess without ever really understanding what it’s done or where it's been.

I came across two examples recently that brought this home for me in different ways, one relatively innocent, the other profoundly disturbing.

The first was the city of Boston’s renewed determination to prevent its citizens from informally reserving parking spaces that they personally shoveled clear of snow after a storm. This is a pretty common practice throughout the urban Northeast. You shovel a space, you mark it with a chair or a cone, and you expect it to be yours when you come home. The marking lasts for a few days—usually until the city has managed to clear many streets and the parking situation in general improves.

Municipal governments tend to disapprove of the practice for two major, somewhat divergent reasons. The first is on grounds of public order. When someone breaks the unwritten code, removes a chair or cone and takes the space someone else cleared, the possibility of retaliation, with all its potential for explosiveness, is very real. The second is that municipal governments properly view parking as a public resource, and therefore cannot abide private seizures of parking even for short periods of time.

You can see the sense in these views. But at the same time, the very evenhandedness and bureaucratic rationality that we often look to government to provide as a way to adjudicate social conflicts and problems also seem horribly disconnected from the intimate lived experience of parking, snow removal and rights enforcement. On any given street in any given city, the moral landscape around those issues is much more finely tuned and deeply known to all the people living on that street. On one street, everyone may pitch in to help each other clear snow, and so reserving spaces is a non-issue: the local social contract is cooperative. On another street, the person who yanks a cone or chair out of a space may be a very deliberate scumbag, and widely regarded as such—the kind of person who waits for a neighbor to leave, scrambles out, guns his skidding 4-wheel drive out of an uncleared space, and seizes his neighbor’s labor. On yet another street, the person clearing a space may attempt to selfishly defend it far longer than anyone thinks him entitled to, or his claim on the space may be connected to a larger pattern of private seizure of common resources. In another case, maybe people feel conflicted, but feel it’s ok for someone coming home to move a cone if the space was cleared that morning. The tragedy of the commons is always one block away from Mister Rogers’ Neighborhood.

The state knows none of this: it extrudes a crude rationality into a moral world that is extremely textured. People grumble, or in some cases cheer when that intrusion resolves conflicts that had grown beyond the local ability to adjudicate. They wait for the state to go away again, as it always does, for the tide to recede, and they go back to working things out as best they can, in light of the individual character and behavior of the actual people in their locality. It would be wildly wrong to characterize this as oppressive behavior by government; it is more just disorienting: a force that exerts moral or cultural authority without seeming to understand the common-sense underpinnings of, or everyday knowledge about, the issues it addresses.

I felt the same way, only more intensely so, reading about the death penalty case of Daryl Atkins. Atkins was at the heart of the US Supreme Court’s decision last year that mentally retarded criminals cannot be executed, but in a recent development, the state of Virginia planned to return to court to argue that Atkins can be executed because his IQ now tests at 76, above the cut-off of 70 used by Virginia to determine retardation. To explain the difference between Atkins’ lower score at the time of his initial sentencing and the more recent score, one court-appointed expert has suggested that Atkins’ extensive work with his lawyers in the Supreme Court case had a positive effect on his reading and comprehension skills.

I have very agnostic feelings about the death penalty overall. I tend to believe that in the abstract, there are crimes for which it is warranted for a variety of reasons, including an absolutely cold-blooded kind of civic or collective vengeance. But in practice, particularly in the last ten years, it’s become fairly obvious to me that the state in general and American jurisprudence in particular are simply incapable of making this very final and absolute sort of judgment with anything even vaguely approaching the rigor and moral coherence required.

The latest developments in Atkins’ case are an especially searing reminder of that. I have to hope that whether you’re a supporter of the death penalty or an opponent, or merely confused as I am, you find the proposition that the difference between life and death is 6 points on a fairly arbitrary scale measuring intelligence to be obscene.

I think most of us have some feelings about the degree to which mental retardation interferes with or limits the capacity for moral judgment. Sometimes we may have highly articulate, expert opinions on that; sometimes we just have a kind of groping, semi-spiritual intuition. But whatever our feelings, whatever their source, I think we all walk into that judgment with a sense of trepidation, in a state of moral anxiety. We know it’s complicated, we know it’s messy. And here we have a government—and for that matter a defense team—trying to make those decisions with points and graphs and charts and long legal codes, with experts.

I’m not blaming the government: what choice does it have? In the context of the law and the death penalty, it can’t operate with intuition, it can’t tolerate ambiguity. The Supreme Court spoke, and now the law must make that judgment into something concrete, consistent, fair. But here it is making a very final decision with a kind of obscene game that bears no meaningful resemblance to the lived world, to the breathing heart of moral judgment.

The strongly libertarian answer to these dilemmas is to remove the state entirely from those fragile and precious social worlds that it little understands and often damages. I can’t quite bend my head around that, because I also see the state as irreplaceably necessary in those very same domains. It’s not as if everything works out when common sense and the wisdom of crowds are allowed their day in the sun: a different flavor of grotesquery may rear its head, that’s all. But I do understand why many Americans—and others around the world—stir uneasily at the thought of “government” and its operations in their everyday lives, and forget quickly the paving of streets and the delivery of the mail.

[permalink]


February 8, 2005

It's a Fair Cop

In the widening spiral of discussions about Ward Churchill, I’m accused of being one of the “soft left”, of being a liberal modernist, of being a scholar whose own work is horribly obscure (and sells poorly on Amazon), and of being careless in proofreading this blog. I plead guilty, your Honor. As far as being of the “soft left”, I not only plead guilty but suggest that the court add new charges to the docket. “Soft” is too generous for someone who flirts so outrageously with not even being “left” at all.

In other news on the Ward Churchill front, many folks have taken umbrage at my quick dig at Glenn Reynolds. Let me clarify: that comment is not about Reynolds’ work as a legal scholar, but about the standards he employs as a blogger. One could argue that Instapundit is his hobby, and not something that should reflect on his status as a scholar. Churchill, I think, could say the same thing about some of his own writing that is now being used to attack him. You could (and some have) defend Churchill as a scholar by pointing to some strong and markedly scholarly works in his oeuvre, most of which come from early in his career, particularly the essay “A Little Matter of Genocide” and his work on COINTELPRO.

This observation opens onto some of the deepest questions about the nature of academic freedom. Tenure is a system that recognizes people for what they have done and offers them blanket protection on that basis for what they will do. The best career trajectories in academia, in my view, are those where a scholar does something after tenure, with its protections, that is fundamentally unlike his or her pre-tenure work. Better, richer, more daring or provocative, less constricted or constrained.

So in the best-case scenario, what are you really looking for in the tenure process? If you aren’t looking for a guarantee that someone will write again and again what they have already written, you are looking for quality of intellect and for some sort of evidence of a lifelong commitment to the academic ideal. That ideal is not a mirror image of what the larger public sphere should or does look like. Churchill’s defenders have observed that his books have sold many more copies than all of the books and writings of his various critics. Indeed so. Ward Churchill is an important and legitimate figure in the wider democratic public sphere. He speaks to and for his audience. There ought to be Ward Churchills as long as there are audiences who seek out what he has to say, or even audiences which might learn something from the intensity of his polemical response to American history and society.

But this is not what we claim to be doing with academic standards. If the point of academia was to mirror the wider American public sphere precisely, then the conservative critique of the leftward tilt of academic life becomes devastatingly on-target. The academic humanities and social sciences, whatever they are and should be, bear little resemblance to the distribution of opinion and argument in American public culture.

So if you tenure an academic (or invite one to speak), that’s partially an expression of trust in that academic, that whatever intellectual evolutions they undergo, they’ll still be bound by a latent professionalism, a belief in a very particular bounded set of higher standards. I’m the last person to draw those standards tightly: I blog too, and often sloppily in many senses of that word. I consistently argue here and elsewhere for a loosening of many academic corsets: I want academics to write more passionately, with more diversity, with a higher regard for clarity and less regard for theoretical obscurity, and so on. I want academics to write inside and outside of the particular constraints of their scholarly fields of specialty. The bedrock values that I think should always define academic professionalism, however, are commitments to fairness, a near-religious faith in the messiness of truth, an abiding appreciation of complexity, a commitment to reason. By that standard, I think you can suggest that Reynolds is degrading his work as a legal scholar through some of his blogging, and that Churchill long ago left his scholarly professionalism in the dust.

[permalink]


February 2, 2005

Off the Hook

I’ve been reading in a few places about the controversy over Hamilton College’s now-rescinded speaking invitation to University of Colorado professor Ward Churchill, and Churchill’s resignation as the head of ethnic studies at the University of Colorado.

In a way, it’s a pity that the whole affair has become so consumed by Churchill’s remarks on 9/11, because that’s allowed it to fall into the familiar, scripted form of public controversies over remarks that are deemed to hurt or offend. The remarks get repeated, mantra-like and disconnected from the general work or thought from which they came. Critics cite the personal pain and distress the remarks create. Defenders of the speaker first mobilize behind the figure of free speech, that we may disagree with the remarks but must defend the right of the person to make them. Finally, the speaker issues a non-apology apology, usually in the formula of “I am sorry if anyone has taken offense at my words,” which manages to make it sound as if the real offense lies with those who felt offended. Sometimes the original speaker may also clarify intent by saying that he or she merely meant to “start a conversation” or “make a useful provocation”.

It’s a tired dance on so many fronts. If there’s anyone who should know all the steps in it, it’s Ward Churchill, who is a prolific practitioner of the kind of identity politics that has helped to choreograph many such waltzes and minuets. Now everyone knows how to play that game, particularly American conservatives. Rinse, wash, repeat.

We lose so much in this pantomime. On one hand, it allows the less thoughtful critics of academia to go away with one more caricature in their bag, to imagine Churchill as an absolutely typical, representative academic. On the other hand, it allows many academics to walk away without having to think about the ways in which Churchill, and the invitation to him from Hamilton, are also not aberrant. If not representative, neither is he idiosyncratic.

Churchill should frankly be happy if this whole affair is confined to his isolated remarks on 9/11, to be handled with the usual pro forma apology, because his larger intellectual career is the thing that really raises some questions. Not the kinds of straightforwardly bombastic one-liners now descending on Churchill from right-wing pundits, perhaps, but pointed observations nevertheless.

Churchill is prolific in the manner of many careerist academics, meaning he’s written the same thing in a great many formats again and again. He’s got a very long c.v., but the length misleads: almost everything he’s written is part of one long metapublication. And what he’s written is a highly formulaic kind of identity-based scholarship that expounds unreflectively on some of the characteristic themes and ideas of one very particular segment of the left, with particular application to Native American issues and questions.

I stress very strongly: not the left at large or overall. It’s a very small tradition of anticolonial, pseudo-nationalist radicalism that eclectically and often incoherently grabs what it needs from Marxism, poststructuralism, postcolonial theory, and even conservative thought now and again (though often in unacknowledged ways).

It is also a tradition that is completely unable to face its own contradictions. Churchill’s much-cited remarks on 9/11 are an indication, for example, of the underlying moral incoherence of his writing (and writing like his). The principles that are used to value some lives (Iraqi babies dying under sanctions) and not others (people in the World Trade Center) have no underlying ethical or moral foundation: they’re purely historicist and instrumental. The original sin of modernity is seen as the expansion of the West; it is perceived as a kind of singularity that utterly destroyed or erased historical experience to that point. The only moral vector, the only capacity to act immorally or to commit evil, descends from that original sin. If you’re associated by social structure with that expansion, you are bad. If you are a victim of it, you are good.

This perspective on history and contemporary global politics is incapable of explaining its own existence. How is it possible to value life in a world produced by the expansion of the West, even the lives of the victims of colonialism? What are the sources, in a purely historicist account of ethics, of a belief in the sanctity of human cultures, or a belief that it is wrong to colonize or practice what Churchill would call genocide? Churchill, like others who write within his intellectual tradition, has no way to explain the genesis of his own political and ethical position. He can in fragmented ways claim an authenticity rooted in Native American traditions—but if it is possible today in the here and now to construct and disseminate a whole ethical practice founded in those traditions, then his claim of genocidal eradication by the West is clearly false. If, on the other hand, the West contains within it the seeds of its own critique, then the expansion of the West is itself a much more complicated phenomenon than it would appear to be in Churchill’s writing.

Churchill, like others, constructs the hegemony of global capitalism and Western domination as being near-total. The unmitigated and simplistic totalizing that suffuses Churchill’s writing makes it impossible to explain his own existence and professional success, or that of anyone like him. He is the incarnated impossibility of his own analysis. The only contradiction Western domination faces is produced, according to his oeuvre, by the dedicated and militant resistance of its subjects. But how is it possible that a totalizing system of domination permits such an uncompromising practitioner of resistance to publish more than 11 books and occupy a tenured position at a university? (I know, I know: doubtless from a Churchillian perspective, the recent controversy is the system finally getting around to slapping him down. Quite a delayed reaction if so.)

Churchill’s scholarly oeuvre is practically a guided tour of every trope of identity politics: polemical extensions of the concept of genocide into every possible institutional or social interaction between the colonized and colonizer, erasures of any historical or programmatic distinctions between colonizers in different eras or systems, reduction of all history and contemporary society into a sociologically and morally simple binary schema of colonizer and colonized (hence the remark that the people in the Twin Towers were “little Eichmanns” while Iraqis are literally infantilized into starving babies and nothing more), pervasive indictments of systems of representation, and aggressive assertions of exclusive cultural, moral, political and economic ownership of anything and everything connected with a particular identity group (Native Americans in this case).

Anything and everything can be fed, often with appalling casualness, into the polemic machine he builds: other scholars become, if not heroic comrades, mere “crypto-fascists” (there is no other possible position or posture). Mickey Spillane’s novels are part of a cohesive infrastructure for global hegemony. All power is endlessly and floridly conspiratorial. And so on.

The thing of it is, there are very thoughtful people who take some or all of these positions. Churchill isn’t one of them: he’s prolific, but he’s also something of a hack. Herein lies the deeper problem, one that Hamilton College, Ward Churchill and many academics might be perfectly happy to let escape notice, and that shouldn’t be reduced to one more example of right-wing polemicists beating on lefty academics.

Hamilton College’s first instinct, the first instinct of all institutions (including conservative ones) that get caught up in this well-rehearsed minuet, is to cite free speech as a defense. I think that’s perfectly proper in a highly limited way. Once an invitation has gone out, I think you generally have to stick by your guns. Everyone does have a right to speak and say what they want, whatever it might be.

But academic institutions also insist in many ways and at many moments that they are highly selective, that all their peculiar rituals—the peer review, the tenure dossier, the hiring committee, the faculty seminar—are designed to produce the best, most thoughtful community of minds possible. In response to criticism from conservatives who complain about the lack of conservatives in the academic humanities and social sciences, a few scholars even had the cheek publicly (and more privately) to suggest that conservatism is one of those things that academic quality control quite legitimately selects against, that if the academy is liberal, that’s because it’s selective. Anybody has the right to speak, but nobody has the obligation to provide all possible speakers a platform, an honorarium, an invitation.

In that context, it becomes awfully hard to defend the comfortably ensconced position of someone like Churchill within academic discourse, and equally hard to explain an invitation to him to speak anywhere. There’s nothing in his work to suggest a thoughtful regard for evidence, an appreciation of complexity, a taste for dialogue with unlike minds, a sense of proportion, a meaningful working out of his own contradictions, a civil ability to engage his colleagues and peers in his own fields of specialization. He stands for the reduction of scholarship to nothing more than mouth-frothing polemic.

We cannot hold ourselves up as places which have thoroughly and systematically created institutional structures that differentiate careful or thoughtful scholarship from polemical hackery and then, at the same time, have those same structures turn around and continually confirm the legitimacy of someone like Churchill. We can’t deploy entirely fair and accurate arguments about the thoughtless cruelty and stupidity of a polemicist like Ann Coulter only to fill our bibliographies with citations to Ward Churchill, not to mention filling our journals with highly appreciative reviews.

Certainly if you study contemporary Native American politics, you’d have to cite Churchill, but as a phenomenon who is part of that which you study, not as a scholarly creator of useful knowledge who guides and instructs you in your own arguments or findings. There is a distinction.

That’s the deeper problem here: not Churchill’s particular remarks, but the deeper wellsprings of his legitimacy. Conservatives should not necessarily welcome a turn to those deeper issues: it seems to me that Glenn Reynolds, for example, would have to be held a hack by any standard that held Churchill to be one. Nor would I want to raise the banner of higher standards only to have that quash interesting, provocative, exploratory writing and thinking on behalf of dour, cautious and bland scholarship. But there is more here than just some callous remarks on 9/11 to worry about. Churchill has said before that his main critics are on the left, not the right, but far too many academics remain timid in the face of the retaliatory capacities of identity-based activism within the academy and therefore too silent in the face of thoughtless choices by their colleagues about whom to value, whom to canonize, whom to invite to speak. It might be a good thing to make Churchill’s characterization of his critics the uncontested truth.

[permalink]


January 26, 2005

Burke’s Home For Imaginary Friends

At a lunch meeting, I made a presentation to the faculty and administration today about blogging, along with my colleague Eric Behrens. I wish there had been more time for discussion and questions: some interesting things came up after the meeting had broken up. (Eric has set up a comments thread at his blog for any Swarthmore attendees who have questions or comments on the presentation.)

I said I write essays for several online publications for five reasons:

1) Because I want to introduce some unexpected influences and ideas into my intellectual and academic work. I want to unsettle the overly domesticated, often hermetic thinking that comes with academic specialization. I want to introduce a “mutational vector” into my scholarly and intellectual work.
2) Because I want a place to publish small writings, odd writings, leftover writings, lazy speculations, half-formed hypotheses. I want a place to publish all the things that I think have some value but not enough to constitute legitimate scholarship. I want a chance to branch into new areas of specialization at a reduced level of intensity and seriousness.
3) Because I want to find out how much of my scholarly work is usefully translatable into a wider public conversation. A lot of my writings on Iraq, for example, are really a public working-out of more scholarly writing I’m doing in my current monograph, a translation of my academic engagement with the historiography of imperialism.
4) Because I want to model for myself and others how we should all behave within an idealized democratic public sphere. I want to figure out how to behave responsibly but also generatively, how to rise to the better angels of my communicative nature.
5) Because I’m a compulsive loudmouth.

After listening, one of my colleagues asked a question that’s fairly typical and yet really made me think once again about some perennial issues. She wondered if any of this blogging stuff leads to real, human connections.

Well, sure it does, I replied. I observed that I had just recently had a chance to meet John Holbo in real life, to our mutual delight. I’d met up with some of my Cliopatria colleagues at the American Historical Association. (The online reporting of both encounters largely seemed to lead to the dissing of my beard, though. Geez.) And I feel I’ve made real, powerful, emotionally resonant connections with other online correspondents over the years, even if we haven’t met face-to-face.

I thought about it some later. It’s also true that a lot of my online work is about a more abstract sense of human connection through an impersonal public sphere. That’s no different than scholarship. We know some of our colleagues in our disciplinary speciality very well—often people of our same generational cohort. Others exist as nothing more than fellow professionals, whom you know through their writing and maybe a bit of gossip here and there. So blogging is not exactly radically different than any writing in that way, including scholarly writing.

On the other hand, my blog writing does feel surreal to me sometimes. It involves me in a discursive world that sometimes feels like a small town where everyone knows one another. Kieran knows Harry who knows Russell who knows Laura who knows Rana who knows Elizabeth who knows John. They all know me, or at least the highly public, constrained, particular reduction of me who manifests in my online voice.

There’s a circle of people reading and writing about each other’s blogs, and so much of what they have to say influences my waking thoughts. I crave their approval and respect. But at the same time, so little of that conversation comes in explicit ways into my day-to-day professional life or my personal life. I can come home and say, “You’ll never believe what that Belle Waring had to say today!”, but it takes so much set-up to stop the flow of an always-moving discussion and explain it to my wife that it’s not really worth the effort. I can say to a colleague, “I think you’ll really like what Russell Arben Fox had to say recently”, but I always feel vaguely embarrassed when I do, because I don’t know what they’ll think if they do go and look—will it take too much prior experience of ongoing discussions to appreciate it? will they wish I hadn't wasted their time on something that they couldn't immediately cite and make 'normal' scholarly use out of?—and because I feel a little like the guy who goes to lectures by engineers and tries to tell them about his perpetual motion machine. Sometimes it’s like being under the spell of some alien intelligence, on the other side of an ethnographic divide, a native mumbling to the patient, civilized researcher about the inexpressible interior feeling of his own culture.

One of my Cliopatria colleagues observed in Seattle that he was glad to see I can say funny things now and again, and I thought, not for the first time, about just how truncated and selective that public voice of mine is. I might occasionally drop into pure humor, but mostly I’m trying so hard to be respectable and fair and ethical that I don’t feel I can be humorous. There’s no way to convey the tone of warmth and gentle self-deprecation that makes a joke in my real life funny: online it feels just snarky and unfair to its target, unthoughtful. Of course, I also don’t talk about my strong feelings on many subjects, because I don’t feel they have a proper place when I’m trying to be judicious and show fairness and have some intellectual heft.

This bleeds over into other things. I don’t generally like to talk about my everyday life or feelings in the blog. I think to myself, why would anyone care? To be completely honest, I don’t always care, when reading blogs, about other people’s personal troubles and tribulations. It’s a bit like when I’m playing a massively-multiplayer computer game and some other character in my party stops to say that he needs to take a break because he’s got diarrhea. Too much information! Too much information! Keep the fleshworld out of my pure cyberworld, man. Still, other times, I really want to know and help and feel: it’s part of the pluralism of the online environment, that there’s a space for diaries and essayists and everything in between. Sometimes I’m looking for that. Other times, someone has given me so much of intellectual value that I want to repay them by reading along sympathetically while they talk about their divorce or their engagement or their depression or their sex lives.

Even that’s one of the issues here: online discourse, whether you come to it for an exploration of personal lives and feelings or a pure Habermasian public sphere, is experienced in staccato, in fragments. It doesn’t have the experiential cohesiveness of reading a novel or a letter. It doesn’t have the temporal situatedness (and inescapable tangibility) of everyday real-world life. I can come and go in both the thoughts and lives of others as I please with no one the wiser.

As I started on this essay after lunch, I decided to go look at Justin Hall’s website, links.net, which I catch up with every once in a while. Justin had a significant impact on me when I was just starting here as a faculty member: I was both excited and repelled by the online presence he’d crafted. It was so suggestive of the possibilities of online work, both technological and communicative, but he was doing exactly what I wouldn’t ever want to do personally, and that’s use the web for a kind of performance art, a lengthy form of written self-exploration and self-revelation. I had not even the faintest interest in writing about what he wrote about: his sex life, his personal relationships, his spirituality. In his hands, it was fascinating, important, useful—and helped me define the opposite online aspirations I have, to be the respectable, restrained, fair-minded intellectual trying to work within a highly idealized public sphere.

I catch up with Justin’s site every few months, occasionally pop into his comments threads. Sometimes I don’t read it very carefully, sometimes he’s writing about something I find really interesting. Sometimes it seems like schtick, other times affectingly genuine. I like him as a person: it’s a way of keeping tabs on him. Lately we’ve run into each other playing City of Heroes and World of Warcraft, which is both cool and vaguely unnerving. I mean, here I am a level 50+ character on World of Warcraft and all; he can do the math and know that means two hours a night or more have been given up to the game in the last month. A little intimacy surrendered at that moment, and yet, there’s so much to talk about with him. My gameplaying is always half academicized anyway, always grist for some genuine (I think) intellectual mill. So a few nights back, he pops up, we talk about the issues, I bounce some ideas for an upcoming Terra Nova essay off of him while my character creeps around some dripping cave inhabited by alien insects.

I have no idea that about a week before we’re meeting up in World of Warcraft, he’s gone and posted this video, and then suspended his work at links.net. So today I view the video in various stages of dismay and concern. I see that it's actually been discussed in all sorts of places, including Grand Text Auto, a site I really like. I’m thinking, if my online connections were real ones, I’d know this already. Hell, if I were half the online reader I'm supposed to be, I'd know it. I wouldn’t have been chatting with him about virtual worlds, but asking him how he’s doing and if everything’s ok. But you watch the video and you realize that it’s both genuine and performative, self-indulgent and heartfelt, a plea for connection and a manipulation of the idea of connection, authentic and pretentious as all hell. Like links.net itself has always been. Which is exactly what Justin is grappling with: he is now, like all artists and public figures, a prisoner of his art. Justin Hall now has to live with his doppelganger: “Justin Hall”, the stick figure he made out of hundreds of thousands of words published online over eleven years.

Who is it that I know and care about? I think it’s Justin Hall, but mostly I hear about him through “Justin Hall”. Because it’s “Justin Hall”, I’m fine with forgetting about him for months on end. But maybe it’s a mistake to attribute that to blogging or online discourse. I’m not really a very attentive friend in general. If people are out of sight, they’re out of mind. Not because I don’t care. The title of this blog was chosen very deliberately and expressively. My intellectual persona here is indeed easily distracted, but so am I in everyday emotional life. I forget stuff and people and obligations all the time, with (I hope, I feel, I pray) no malice, but just because something has caught my eye and I’m deep in the coils of my own mind for a while, behind a wall of mist.

Again, it’s not really different than any kind of writing or art or public life. It’s all about the formation of a double consciousness, the productive disconnect of an interior, inexpressive self from a speaking self. I like it that way. I believe in a kind of decorum and formality in the public sphere; I believe in the public sphere as a democratic and thus somewhat impersonal ideal, the meaningful incarnation and structural guarantee of freedom all at once.

Real human as well as valuable professional connections do come to you from what you write, whether it’s a peer-reviewed scholarly article in a well-respected journal or a blog entry. The connections that come to you through blogging are more unanticipated, less domesticated, but as I said to my colleagues today, that’s the point. I do worry sometimes, as Justin worries, that what makes it all valuable and generative also increasingly afflicts me in a real, lived, everyday context with intellectual and emotional aphasia, that I am constantly transformed and affected by relationships which are entirely in my own head as far as everyone around me is concerned. It’s kind of like being the tree falling in the forest with no one around. Damn right I make a sound! I think.

[permalink]


January 25, 2005

You Could Play With Your Magic Nose Goblins

If there’s one thing I hate trying to do in a classroom, or in a scholarly work, it’s defining the word “culture”. (Defining “postmodernism” is a close second.) It’s an essential word, but it means so many things in so many contexts. There’s culture as in expressive or artistic practice, there’s culture as in the sum total of the everyday practices and rituals of a particular society, there’s culture in the sense of high art (“cultured”), there’s culture in the old Tylorian sense of a single animating ur-concept or idea that defines the particularity of a given “people” and can be found disseminated throughout all of their expressive or everyday life practices. And much more. Not to mention being something you do with bacteria and Petri dishes.

So “cultural history” is hard to define as well because of collateral damage from its source term. Loosely speaking, I think there are really two very different kinds of cultural history. The first is a variant of or commentary on social history as it was practiced in the 1970s and 1980s: basically taking the foundational methodologies, interests and content of social history and adding in narrative, whimsy, individual or idiosyncratic experience, expressive art, and so on. Sometimes that shift is undertaken by cultural historians in a very methodologically pointed manner: Carlo Ginzburg’s microhistories. Sometimes it’s a very modest freshening up of the room: social history with a few novels and entertaining stories tossed in. Sometimes there are very satisfying, thoroughly worked-out hybrid approaches, as in Alan Taylor’s William Cooper’s Town, which I don’t think you can really characterize cleanly as social history or cultural history.

The second kind of cultural history is done by historians, cultural anthropologists, literary critics and scholars in cultural studies: it’s about tracing the genealogy and influence of a particular text, artist, performance style, musical form, genre. I personally find this kind of work unsatisfying much of the time, partly because it often draws its boundaries too narrowly to make the kinds of claims it would like to make.

I want to illustrate this a bit by talking about the 1991 cartoon series Ren and Stimpy, which I’ve been watching on DVD. Doing cultural history of a single work is much harder than it looks. You could just start by trying to understand the series in its own terms, as it changed over time. Just from viewing, if you didn’t know anything else, you’d probably pick up some shifts in tone and content near the end of the three-disc set. Doing a bit of research, you’d find that the later episodes were made after a bit of a production gap, and after the series achieved its first wave of fame as a cult hit. Then you might discover some material about internal conflicts between Nickelodeon’s management and John Kricfalusi, the central creative figure behind the series.

Before going back to view these episodes, I knew quite a lot about those conflicts, partially based on some conversations my brother and co-author Kevin Burke had with Kricfalusi while we were working on Saturday Morning Fever. Viewing the episodes again myself, though, I personally thought that had I been a Nickelodeon executive, I too might have tried to pull the plug—the later Kricfalusi episodes appear increasingly self-indulgent and miss the delicate balance of elements that makes an early episode like “Space Madness” so perfect. (Though the late episode “Stimpy’s First Fart”, which was sort of the last straw for Nickelodeon, is one of the best.) So actually viewing the material tends to make one think twice about the authority of Kricfalusi's bitter account of Nickelodeon's neutering of the program, because the later episodes that Kricfalusi and his most ardent fans (including my brother) tend to champion actually kind of suck. They don't suck anywhere near as much as the episodes that were produced after the show was taken away from Kricfalusi, of course, and that's also worth remembering. When Nickelodeon had control over the series, the shows got stupidly offensive and rigorously unfunny.

So there’s an interior history where you can relate the actual texts to each other, and then relate those to an immediate history of their production and comment on them as you see fit. Then you can go back from there. There are immediate precursors: Kricfalusi’s work on the 1987 version of Mighty Mouse was a direct warm-up for Ren and Stimpy. That in turn has to be understood as a reaction to the programming of the early 1980s, both directly (in that Kricfalusi has said that he was trying to cleanse himself for having worked on low-quality kidvid) and indirectly, in that the bottoming out of the kidvid market in the mid-1980s and the growthy of cable created an opportunity for new kinds of cartooning on television.

But you can’t stop there. Ren and Stimpy in its best episodes is funny both because it riffs on specific cartoon sources and because it offers a brilliantly ironic, visually hyperkinetic, compact reconfiguration of a whole range of cartoon tropes, particularly the Warner Brothers cartoons and the series Tom and Jerry. Cat/dog pairings, buddy pairings, chase-and-mayhem antics, and so on. The episode “Space Madness” isn’t half as funny if you haven’t seen “Duck Dodgers in the 24½th Century”.

Carry it forward. Ren and Stimpy looks like a revolution in the history of cartoons on American television. Afterwards, everything looked different. Virtually every original program produced by the Cartoon Network, with the exception of the material that derives from the equally influential Warner Brothers’ Batman series, has Ren and Stimpy’s fingerprints all over it. It’s impossible to imagine SpongeBob SquarePants without Ren and Stimpy.

It’s not just the visuals or the narrative style. It’s the subject matter: gross-out humor, for example, had a completely different place in television cartoons afterwards. And yet, gross-out humor is a good example of where the job of this kind of cultural history tends to get extremely difficult, and why it can’t just be limited to a sort of pure lineage of cultural texts (this show leads to that show leads to that show). Once you start to think about the wider context, you notice that Ren and Stimpy, original as it was, nevertheless was also a product of its times. Its ironic, postmodern hipster position on its cultural roots was a pose broadly adopted throughout early 1990s popular culture. Scatological humor and humorous pairings of “dumb and dumber” characters were burgeoning at that time.

Trying to figure out where works of culture represent interventions into the wider world around them, where they reshape culture in their own image, and where they are expressions of something moving in the unseen depths below, is an always-unresolved but always-essential analytic problem for any historical treatment of culture. Things get unmanageably complicated very quickly at this stage, but they almost have to be allowed to do so. One of the problems I have with older and more conventional styles of literary history or intellectual history is that they make “intertextuality”, the relationship between discrete works, into such a neat and tidy affair. This book influences that book, this author influences that author. It really is not and can never be so.

At its most abstract and ambitious, the history of cultural works begins not just to probe the impossibly complicated weave of creativity and consumption, but also something still more intricate and sometimes even more compelling. I’m often struck, like my graduate advisor David William Cohen, by the expansive generality of things that almost everyone knows about the past. We rarely can identify how we know things, because our knowledge is dispersed in small fragments across the wide span of our cultural world. It’s almost like public memory is a massively long cipher encoded into television, movies, toys, games, and so on. Every text has one small key to the total message.

Think about the ways that agrarian modes of life in the 19th Century and early 20th Century United States reproduce themselves today. Half of my daughter’s puzzles have barns, chickens, cows and so on, things which she’s only seen directly when we’ve gone out to a local heritage site that re-enacts a colonial-era farm and when she’s gone to the zoo. We sing “Old McDonald”. We watch Porky Pig cartoons where he’s an iconic farmer. And so on.

Every single image or trope in a cartoon like Ren and Stimpy has a narrow cultural history—you can trace it back through the direct linear ancestors of the program. But it also has a wider history of horizontal linkage to the expressive culture and social mores of its moment and to the pervasive cultural unconscious it inherits. When Stimpy wins a contest for Gritty Kitty Kitty Litter, there’s the specific history of Bugs and Daffy sparring over a somewhat similar contest to think of, but also the entire history of television game shows, the history of the Hollywood celebrity system and its mythography of “discovered” stars, and so on. If you’re a ten-year-old watching the show in 1991, you probably don’t know any of that, any more than I knew as a six-year-old that Underdog was based on Superman. But it’s all there nevertheless, and forms the leaping synapses of our collective cultural mind.

My friend Carolyn Hamilton wrote a book that I really admire called Terrific Majesty. Its particular subject matter is the cultural history of the Zulu ruler Shaka. One of the things Hamilton does that I find immensely useful, that takes her one step beyond the usual “invention of tradition” idea that sometimes hobbles treatments of historical memory and cultural representation, is that she insists that the “real” history of Shaka is encoded into all subsequent representations of him, and limits the things that can be said and known about him later. She doesn’t mean to suggest that we can ever peel away all the accumulating layers of later portrayals of Shaka to get at the unvarnished reality of the past. You not only can’t do that, but in her reading, shouldn’t want to. But the point is that the mighty tree grew from an acorn. The complexity of public memory grows from the simplicity of singular events and interventions. The culture we know and appreciate and consume today is not an arbitrary or instrumental reinvention of the world as it once was. It is constrained by the past, and in turn constrains the future.

This is pretty high-faluting language for a show that traffics in booger jokes, but it seems to me that the problems and possibilities of cultural history are the same whether you’re talking about Picasso or Picard, Renoir or Ren. Evolutionary biologists are now keenly aware of the folly of trying to represent evolution as a simple chart of linear descent. Properly speaking, it is a densely bushy affair of interweaving relationships, and that’s just if you confine yourself to relations over time between the exterior morphology of particular organisms. If you really want to understand it, you have to understand the interior genetic relationships between organisms, the environmental histories surrounding biological change over time, the ecological relationships between organisms over time, other factors that influence genetic change, and so on. Cultural history, ambitiously conceived, presents the same daunting challenge. You can never be satisfied with just saying that a show like Ren and Stimpy came from Mighty Mouse and gave birth to SpongeBob, true as that might be.

[permalink]


January 21, 2005

Liberal Life Stories

I’ve been thinking a lot about Errol Morris’ op-ed piece in the New York Times. In it, he attributes Kerry’s loss in November to his inability to communicate his own life story as a convincing, authentic, and complete narrative. In particular, argues Morris, Kerry’s near-total silence about his opposition to the Vietnam War, his failure to claim that as a life-defining and shining moment of conscience and courage, was his downfall.

I find this a fairly convincing argument, particularly because Morris also clearly understands something that many on the left do not about George Bush. Namely, that calling attention to Bush as a youthful ne’er-do-well, whether it is his lack of military service, his frat-boy wealth, or his alcoholism, actually strengthens the coherence and authenticity of the story that Bush tells of his own life, and deepens his appeal for many Americans. It makes his evangelism vastly more powerful than Jimmy Carter’s, for example: Bush can self-present as a sinner who was redeemed, whereas Carter could only present himself as someone for whom evangelical religion was his cultural habitus. As Morris notes, Bush’s narrative of redemption covers all the sins that his critics might try to attribute to him and makes powerful use of them. You don’t even have to be a born-again Christian to find the story of a middle-aged person undergoing a profound positive transformation appealing. I suspect most of us know one person who credibly tells his or her life story that way. Heck, it’s one reason I ended up loving the character of Theoden in the Lord of the Rings films, having found him kind of uninteresting when I was a teenager reading the books. Middle-aged depressive with feelings of worthlessness and mediocrity shakes off his slump and rides forth to glory. Cool.

Where I’m not so sure Morris is right is that Kerry could have won if only he’d told his own story with the same confidence and clarity. Maybe. There were other problems and other issues.

At a deeper level still, I don’t think that the liberal wing of the Democratic Party has much of a story to tell any longer, or more precisely, it is complicatedly ashamed of or confused by the story that it could tell about itself.

Told honestly, without leaving anything out or trying to slickly paper over complexities, the Democratic Party today is fundamentally sustained by a coalition between educated professionals, urban interests, the shrunken core of the union movement, the equally reduced remnants of the Southern Democratic Party, people of color, and some other local interests and traditional factions in various places around the country. If you were going to tell the story of that coalition as if it were an individual’s biography, it would have to explain how these groups ended up with a shared sense of political interests.

One of the biggest problems in that story would involve the relationship between educated professionals (and associated groups like people in the entertainment industry) and the working-class or impoverished parts of the coalition. When the Eastern Establishment’s position in national political power was somewhat taken for granted, prior to the 1960s, that relationship needed little explanation. After that, for a long time, the civil rights movement and other new social movements coming out of the 1960s provided a strong narrative to explain a relationship newly noticed and commented upon: liberal educated whites were united to the other interests in the Democratic Party because they shared a common morality, a common sense of values, joined in opposition to manifest social injustices.

That’s the problem. That story doesn’t carry much water any longer, but that’s still the only one on offer for many. If the Democrats were a person telling us their biography, often they’d be one of those annoying old bastards who tells the same war story over and over again, perpetually living in the past.

The complication of that relationship runs deep. The modern Western left, speaking very generally and loosely, has always struggled with the problem of why elites would or should cast themselves into political struggle against their own ostensible interests. Highly “scientific” Marxism could explain it well enough: an intelligent person sides with the truth of history. Highly humanistic Marxism of certain kinds could also explain it: you do it because you recognize what is morally true, because you can achieve a sentimental empathy that overcomes your own class subjectivity. But any position on the left that relied too strongly on reading off the moral or political character of individuals from their class status or social position, or was insufficiently interested in a contingent view of individual agency, had to cough awkwardly and look away when it came time to explain why a significant portion of the postwar educated elite cast themselves as broadly sympathetic to left or liberal politics. There are some very good historical explanations, but those are explanations rooted in the non-repeatability of events, not soaring narratives whose force can continue to construct legitimacy in the political present.

A certain variety of Marxist argument—echoed in other ways by some American conservatives—could explain that story well enough by noting that the business elite in Western Europe and the United States mostly did not drift leftward, but that the professionalized elite benefits in a great many concrete economic ways from an expanded social-democratic state. That’s obviously not a terribly good story to tell from the perspective of people on the left who want to mobilize people: join the left for reasons of self-interest.

In any event, the shared moral universe story also doesn’t really work any more. The profound structural and legal injustices that mobilized the alliance between the educated elite and other social groups have been largely dismantled, leaving far more complicated and heavily embedded forms of inequality and injustice behind. Educated left-leaning American professionals today don’t live in the same moral universe as inner-city African-Americans, speaking in highly generalized terms. The everyday concerns and animating issues in those two social worlds overlap largely at points that are somewhat abstract.

In a way, telling the story of the Democratic Party honestly, as I’ve suggested before, may mean breaking up the Democratic Party. I don’t know that there is any point in pretending that educated professional elites share a meaningfully foundational set of interests or views with some of the other historic constituencies in the Democratic Party.

I think in the end one of the only things left to educated elites who identify as liberals, in their collective life story, is this: We Know Best, And That’s Why You Should Elect Us or Give Us Power.

That doesn’t sound like a terribly promising story to tell for political purposes. It could carry a lot of water in the 1950s, at the most soaring and hubristic moment in the history of American technocracy. Right now it sounds like confirmation of every anti-intellectual stereotype and sneer.

Perhaps that’s because the entrepreneurial nature of modern expertise means that most professionals are constantly on the make for new domains of everyday life into which they can insert themselves and say, “We Know Best”. Professionals invent social problems and then send an invoice offering to fix those problems. No wonder many Americans feel anti-intellectual skepticism and allow themselves to be seduced by various opposite kinds of hucksterism that promise to allow jest folks to be jest folks. I'm largely sympathetic to observers like Gary Jones who have a presumptively skeptical view of expert assertions on a wide variety of issues.

But even given that a tremendous amount of knowledge production and professionalized interventions into civil society are excessive, unwanted and unhelpful, We Know Best still has some teeth to it as a story. If it’s told with humility and generosity, with an eye to confessing error and correcting overreaching, it still can be a tremendously whiggish tale about the triumph of postwar America and the promise of America in the 21st Century. Why are we so wealthy, so happy, so much better than we were? Because we know a lot of things that we didn’t know before, and because we’re so much better at educating many of our people to know those things. The “we” of We Know Best is potentially a capacious We, not just a few tweedy intellectuals or behind-the-Beltway policy wonks.

And We Know Best has a powerful reply to offer to crude or manipulative anti-intellectualism. Just as Morris argues that Kerry could have done much better if only he’d told his own story with confidence and passion, We Know Best might score a point against mean-spirited or instrumental anti-intellectualism if it added this reply to its life story:

OK, Then Let’s See You Do It.

Let’s see you do open-heart surgery. Let’s see you program software. Let’s see you design new drugs. Let’s see you design a successful counter-terrorism strategy that goes beyond telling people to take off their shoes in airports. Let’s see you figure out how to deliver effective aid to tsunami victims after we get past the immediate provision of food, water and medical supplies. If the We don’t necessarily Know Best, they do Know A Lot.

I don’t mean to say that most or even many of the people who do Know these various things vote for the Democrats, or would figure in the life story of that party. Paul Wolfowitz is just as much a part of the elite that says We Know Best as Noam Chomsky is.

I do mean to suggest that liberals who fit in this category, whose political ideology derives from their sense that they know more and better about the world and many of the things within it, could maybe benefit from Morris’ advice. Rather than telling the story of their political values as a kind of moral fantasy of their own compassion and boundless emotional commitment to selflessly aiding the less fortunate, perhaps they could say more, and say it more authentically, about the roots of their social vision. At the very least, this might prove a more potent and honest—if not particularly democratic—reply to the kind of anti-intellectual populism that is embodied in something like the resurgence of creationism in many parts of the United States. It might also reconnect educated liberal Americans with a hopeful, progressive story of American life as opposed to a bitter story of alienation from America. (Without having to cover over or ignore that feeling of alienation where it is honestly present.) Errol Morris suggests that John Kerry might have been able to remind people that he opposed American policy out of deep love for American society. Perhaps some liberals can remind Americans of something similar by exploring the roots of their own political journeys and their own social identity.

[permalink]


January 13, 2005

Production and Overproduction

John Holbo writes, as part of his ongoing commentary on Gerald Graff’s Clueless in Academe, that part of the challenge facing academics today is how to “overproduce with dignity”.

New information from the Bureau of Labor Statistics shows that the growth rate of the income gap between those with an undergraduate degree and those with only a high school degree has come to a stop. It had been slowing for a while after dramatic growth in the 1980s and 1990s.

I sometimes think the nightmare scenario for American higher education would be if both parents and employers simultaneously came to the conclusion that the expense of a college education does not justify the return. If that gap not only stopped widening but started to close, the colleges and universities that passively have come to rely on the inevitability of young people seeking a bachelor’s degree would find themselves hard-pressed.

Having just been at the American Historical Association’s meetings, I couldn’t help but recall once again my worst interview experience, over ten years ago. I went to an interview with a tertiary public institution in a Midwestern state. Most of their students, by their own description, were local people and most of them were looking for a narrowly vocational degree of some kind. The historians interviewing me had joined in a sort of pact with several other departments in the humanities at their university to force a core curriculum requirement of several humanities courses on all the students. In the case of the historians, it was a Western Civ course. Had I been hired there, I would have taught a 4/4 load of Western Civ with class sizes around 200. No T.A.s. I must have looked pale as the chair leaned over and said, “Oh, don’t worry, all our tests are Scan-Tron”. You want a profscam? That’s a profscam. 200-person lecture courses on Western Civ with multiple choice questions foisted on people looking for some very particular professional or career training. Otherwise known as, “How to make people hate the liberal arts and see them as an obstacle”. You could do a better job wheeling in a television tuned to the History Channel.

Imagine if potential students not only recognized how pointless that kind of education is in terms of aiding with their life objectives but found that the society at large also recognized the same, and found other ways to train people and differentially search for good employees. It already happens here and there, in the software industry, for example.

Even at highly selective institutions, I don’t know that many faculty and administrators think very well or very systematically about whether the implicit guarantees about skills embedded in the degrees they confer are very well realized in the students that they graduate.

A big part of the problem is the way that academic institutions think about and measure productivity. It’s become increasingly common for state legislatures to hammer public institutions for more and more quantitative evidence of productivity, but even at institutions which do not have to answer to legislators, various metrics of productivity have become more and more common. (British universities are also under siege from some of the same demands.)

What ends up being measured, however? First, the productivity of scholarship: numbers of things published and disseminated, grant monies secured, quantities of fellowships and memberships. In suggesting that the status quo of such productivity lacks (and needs) dignity, John Holbo is pointing to the core of some of academia’s worst ills. The drive to scholarly overproduction, which now reaches even the least selective institutions and touches every corner and niche of academia, is a key underlying source of the degradation of the entire scholarly enterprise. It produces repetition. It encourages obscurantism. It generates knowledge that has no declared purpose or passion behind it, not even the purpose of anti-purpose, of knowledge undertaken for knowledge’s sake. It fills the academic day with a tremendous excess of peer review and distractions. It makes it increasingly hard to know anything, because to increase one’s knowledge requires ever more demanding heuristics for ignoring the tremendous outflow of material from the academy. It forces overspecialization as a strategy for controlling the domains to which one is responsible as a scholar and teacher.

You can’t blame anyone in particular for this. Everyone is doing the simple thing, the required thing, when they publish the same chapter from an upcoming manuscript in six different journals, when they go out on the conference circuit, when they churn out iterations of the same project in five different manuscripts over ten years. None of that takes conscious effort: it’s just being swept along by an irresistible tide. It’s the result of a rigged market: it’s as if some gigantic institutional machinery has placed an order for scholarship by the truckload regardless of whether it’s wanted or needed. It’s like the world’s worst Five-Year Plan ever: a mountain of gaskets without any machines to place them in.

You could try to contest this if you wanted to measure academic productivity by looking to the importance or significance of particular scholarly work. But even that inevitably will lead to some ghastly results, whether you use a citation index or Google Scholar.

So my simple suggestion is this: stop. Administrations and faculties need to stop caring how much someone writes or publishes or says, or even how important what they’ve published is according to some measurable or quantifiable metric. Not only because trying to measure productivity in terms of scholarship destroys scholarship, but because it detracts from the truly important kind of productivity in an academic institution.

What really matters is this: how different are your students when they graduate from what they would have been had they not attended your institution, and how clearly can you attribute that difference to the things that you actively do in your classrooms and your institution as a whole? What, in short, did you teach them that they would not have otherwise known? How did you change them as people in a way that has some positive connection to their later lives?

That can be about income. It can be about happiness or satisfaction. It can be about civic or political contribution to their communities. It can be about competence. It can be about imagination. Not all these things can be quantified, but all of them can or ought to be made as concrete as possible.

Many colleges and universities, public and private, have gotten lazy about this essential task. They’ve relied on evidence of the income gap, and on hazy assumptions about the interior impact of a college education on character, personality, and ability. We fall back on profiles of our accomplished alumni and so implicitly claim credit for their being what they now are—but our collective ability to account clearly for such particular results in terms of particular things we do is often far weaker than we let on. Truthfully, alumni for most colleges and universities do that job for their alma mater better than the alma mater can do for itself.

I can tell you what difference I think going to Wesleyan made for me, but if I were going to be skeptical about my own recollections, I might wonder if I would be attributing to a coherent institutional design the accident of my encounter with particular individual professors and a certain amount of auto-didactic effort which was made easier by the ambiance of the general environment and associated resources. Hanging around with a bunch of smart peers and smart teachers in a materially bountiful environment might help most people to form and sharpen their intellects and skills, but I’m not entirely sure that most colleges and universities are entitled to strongly claim that the good results of that process systematically derive from the careful design of their four-year programs. Reading Walter Kirn’s “Lost in the Meritocracy” in the current Atlantic Monthly, describing how in his years at Princeton Kirn and his friends shammed their way through classes and began to have the terrible suspicion that the professors and administrators were shamming right along with them, my doubts redoubled.

It’s the only productivity that matters, however we try to measure or account for it. What do we do by design that we can reasonably say produces a positive, identifiable difference in the lives of our students and our wider community? Scholarship enters that question somewhere, but hardly at all in the ghastly spew of excess publication that contemporary academia demands.

[permalink]

