John Muir, the Hetch Hetchy Valley, and a bird: or, how I’ve spent the last forty years, and will spend the next as-long-as-I’ve-got

Great Blue Heron

One spring, when I was a child, my family went to Yosemite Valley in Yosemite National Park. My family mostly tried (and failed) to teach one another bridge, and I wandered around the emerald valley. Having grown up in semi-arid southern California, I found the forested walks magical, and I was enchanted. One evening, my mother took me to a campfire, hosted by a ranger, who told the story of John Muir, a California environmentalist crucial in the preservation of Yosemite National Park. The last part of the ranger’s talk was about Muir’s last political endeavor, his unsuccessful attempt to prevent the damming and flooding of the Hetch Hetchy Valley, a valley the ranger said was as beautiful as the one by which I had been entranced. The ranger presented the story as a dramatic tragedy of Good (John Muir) versus Evil (the people who wanted to dam and flood the valley), with Evil winning and Muir dying of a broken heart. I was deeply moved.

I’d like to say this story so moved me that I became active in environmentalism, but that wouldn’t really be true—I could distinguish a pigeon from a seagull, and that was about it. Muir’s story did, however, stick with me as an odd story about rhetoric. How could someone who, according to the ranger, had been so persuasive and moving on so many points—preserving Yosemite Valley, creating the national park system, valuing the High Sierras, starting the Sierra Club—have failed to persuade people on the one point that the ranger presented as so starkly simple? Why do people with the better cases so often lose arguments? And later it came back to me.

I went to Berkeley for my undergraduate degree, and became entranced again; this time by rhetoric.

The Berkeley rhetoric department emphasized the teaching of persuasive argumentation, something which must be distinguished from what many people experience as argument. I don’t want to get into the ways that was both right and wrong, so much as point out that it taught that rhetoric is always relational, and the kind of rhetoric we teach and practice signifies, models, and reinforces the kind of relationship we have with our interlocutors. Thus, a definition of rhetoric—whether we define rhetoric as getting others to do what we want or the ability to understand disagreements—is not just a theory of discourse, how we communicate to someone else, but a theory of community. A limited conception of rhetoric leads to a limited way of interacting with others, and the limited success we get from that interaction confirms our sense that rhetoric is limited.

Until I went to college, whenever I had been taught argumentation I had been required to have a confrontational thesis which was stated in the beginning (usually after a funnel introduction), and which was supported by three reasons (which were themselves stated at the beginning of each paragraph and before any evidence). Each “proof” paragraph had one piece of evidence to support its point. In the penultimate paragraph, I was expected to summarize and then contradict (or concede, but declare as trivial) some opposition argument. The conclusion would restate my thesis, and typically end with some rousing generalizations.

It is difficult to describe how frustrating I found this form. I certainly found it unpersuasive. That isn’t to say I’d never changed my mind; even in high school I was well aware that people did change their minds, but the texts that I’d found persuasive never followed this narrow structure. For one thing, the texts that changed my mind on things were often narratives—whether a fictional narrative like Nathaniel Hawthorne’s The Scarlet Letter that made me think differently about the role of gossip and identity, or a non-fiction narrative like Hannah Arendt’s Eichmann in Jerusalem that made me think differently about loyalty and duty.

It especially bothered me that the writers whom we were taught to admire and told to emulate—such as Martin Luther King, Jr., George Orwell, Virginia Woolf—did not write the way that we were required to write (thus making the recommendations to admire and emulate them more than a little confusing). On the contrary, their conclusions tended to come after their evidence, they tended to summarize their opposition early (often as early as the introduction), and their theses were generally at the end of their texts, assuming they even had a thesis stated explicitly in the text at all. The experience of reading them was completely different from reading something written in the “tell ’em what you’re gonna tell ’em, tell ’em, then tell ’em what you told ’em” form, which always felt to me as though I was sitting in a small chair being yelled at, while reading people like King made me feel that I was walking along with the author, who was pointing things out along the way.

Although we were presented with King, Orwell, and Woolf as rhetors to be admired, and told to emulate, we were graded down if we did. In other words, the explicit rules for good rhetoric—what my teachers said I had to do—were wildly at odds with the implicit rules for good rhetoric—what the ideal writing actually did. Thus, the teachers’ explicit instructions—write this way, and write like these authors—were actually in conflict.

This conflict within our explicit instructions for students—that we give them rules that are actually contradictory—is not particular to my teachers, and is a problem within the history of teaching writing. The contradiction comes about, I’ll suggest, from universalizing about rhetorical strategies and relations, and the number of concepts muddled in the term “effective.” This is another one of the themes to which I will return: what do we praise in rhetoric, what is effective in rhetoric, what do we say people should do, and how are those three at odds with each other.

At Berkeley, in the rhetoric classes, there was not as much conflict between the explicit and implicit rules for writing. The papers we wrote were supposed to be written for an intelligent and informed opposition (not at or about them) and were supposed to be structured in a movement from what we had identified as common ground with that opposition through our evidence to the conclusion.
But, this kind of writing is hard, and, one day, tired and frustrated with a paper assignment, I found myself walking by a coastal lagoon in an area far to the north of Berkeley. I had driven along the Northern California coast for days, and parked near the marshy water in order to give myself a chance to wander. But as I moved through the high grass, I startled something that took off with a surprising splash and whoof of sweeping wings. It was an elegant blue grey bird the color of the sky on an overcast day with a neck that seemed to me as long and majestic as a flamingo’s. What impressed me most was the grace, beauty, and power of the sweeping stroke of its wings as it flew over me and out of sight. I discovered I had been holding my breath.

I had never before been much impressed by birds.

I tried to find some places closer to Berkeley where I might watch birds like these. With high hopes, I went to a place called “Shorebird Park” only to discover that it had a neatly mown lawn, picnic tables, and dogs. While it was a friendly and inviting place for people, even able to accommodate large groups at picnic areas, it was useless for most shorebirds. The carefully tended lawns and rampaging dogs precluded any nesting habitat for birds; the ubiquitous garbage attracted seagulls who chased away any other species. I didn’t go back there. After some exploring, I discovered a marsh near a freeway, and another at the end of an access road near the airport, each of which provided habitat for egrets, red-winged blackbirds, avocets, and stilts. There was something charming in watching the different birds–the way the avocets skittered, the red-winged blackbirds flashed a ruby spot when they flew, the egrets endlessly looked gracefully ungainly. I was disturbed to discover that both of these marshy habitats were proposed for development.

I decided to try to use my experience seeing the grey bird (called a Great Blue Heron) as the common ground in order to move my audience toward the conclusion that the marshes I had visited and others like them should not all be turned into hotels or industrial parks, nor made into parks as sanitized and bereft of wildlife as the unintentionally ironically named “Shorebird Park.” Instead, at least some should remain wilderness areas in the middle of an urban environment so that everyone could have the breath-taking experience I did of seeing a Great Blue Heron.

I began with a description of seeing the heron, and then moved to bemoaning the tragedy of people in the city not having access to wildlife areas close to home. My instructor characterized the resulting paper as “an impressive effort, but unsuccessful” because it would not persuade an intelligent and informed opposition audience. That is, my common ground was not shared with my opposition (who were unlikely to see the flight of the heron as terribly important), and I had not really effectively incorporated or answered the kinds of concerns they were likely to have (such as the potential economic benefits of developing wetlands). Most important for the instructor was that the logic of my argument (that preserving wilderness in urban areas would benefit people because it would provide them with opportunities to see a wider variety of wildlife) was subtly circular.

There is a lot of disagreement in rhetoric as to what we should call that kind of discourse, and it is often called “epideictic,” from Aristotle’s tri-partite division of rhetoric. I have not found Aristotle’s taxonomy very useful, for various reasons. Here I simply want to mention that this kind of rhetoric—that looks as though it is persuading an opposition, but is actually confirming those who already agree—can happen anywhere, in political assemblies, schools, public areas, books, movies.

There are advantages to this kind of rhetoric, but one problem with it is that we don’t always recognize it when we see it. That is, we often use the word “persuasive” to mean “I like it” and describe a text as “persuasive” or “effective” when we mean that it confirmed beliefs we already have, rather than that it changed our views. This will be, perhaps, the most consistent theme in these lectures—the tremendous difficulty we have in describing the impact of rhetoric, whether individual texts, sets of texts, or even a whole realm of texts.
For instance, in regard to the paper about the birds, I had shared the paper with fellow tutors at the Writing Center, with friends, with classmates, and with just about anyone I could persuade to read it, and all had praised it highly. It had seemed persuasive to them.

The teacher was right, of course—I hope that is clear—but none of us could see what she did because we, granting the premise that experiencing non-urban wildlife is valuable, could not imagine anyone not granting it. We could not shift our perspective to someone who disagreed.

Rhetoric, then, is a cognitive process, a way of thinking.

Or, at least, persuasive rhetoric is.

So, at this point, the question for me became whether I could find an enthymeme that would work with people who did not value the environment, and that led me back to John Muir and the Hetch Hetchy debate. Was there something he could or should have done that would have produced a different outcome? Could Muir, a man whose writing many still find persuasive, have found a rhetorical strategy that would have worked with his audience? Was Muir’s failure to prevent the damming and flooding of the Hetch Hetchy Valley a rhetorical failure? Is there something he should have done?

I decided I would write my senior thesis on this topic, thinking I could figure out what he should have done. I didn’t. So I decided I would get an MA, and figure it out. Then I thought I needed a PhD to solve this problem, so I decided to get one. (But I wasn’t going to be a professor.) And what I found was that, when people disagree about the environment, it’s because we disagree about God. So, how do you disagree productively about policies that affect a lot of us when you don’t share premises? I spent 40 years working on that problem.

I intend to spend my retirement working on it. I’ll get it this time.

And it all goes back to John Muir, a ranger who knew how to tell a story, and the way my soul still sings when I see a Great Blue Heron.


Grammar Nazis and deflected/projected racism

marked up draft

My mother, who was very racist but sincerely believed herself to be not racist, said that she was not personally opposed to intermarriage, but she was opposed to it, on the grounds that it was so hard on the children. In other words, she supported a racist practice (social shaming of “intermarriage”) while still feeling herself not racist because she could tell herself that her racist practice was necessitated by the racism of other people.

Teachers—all teachers, at every level—are far too often my mother. We teach in a racist way, all the while claiming that we, personally, aren’t racist, but our racist practice is necessary because of the racism of others. We do it when it comes to teaching “standard Edited American English” (a particular dialect) as though it is better than other dialects.

English has a lot of different dialects, and many of those dialects are grammatically different. Standard Edited American English (SEAE), for instance (a dialect no one speaks), prohibits the comma splice (The cats ran, the dogs barked), but Standard Edited British English doesn’t. In spoken English, sentence fragments are fine, and are also fine in much published writing (depending on formality), but generally prohibited in very formal writing (except resumes or CVs, where they are required). It would be inappropriate to use full sentences in a resume, and it would be equally inappropriate to mark a resume as “wrong” for using sentence fragments. Sentence fragments aren’t therefore “worse” than complete sentences–they’re appropriate or not; that’s how language works.

However, in any language there are dialects that are stigmatized for racist, classist, historical, or various other bigoted reasons. They’re stigmatized as “bad” English (or French, or German, or whatever). In American English, one use of the double negative is stigmatized and the other accepted because one is associated with Black English. “She don’t know nuthin’ about nuthin’” is a perfectly clear sentence, but “The argument is not unclear” takes math to understand. Yet, it’s the first that gets called “bad English.” (Which is funny, if you think about it–calling something “bad English” is itself an instance of using the wrong term, so it’s “worse” English than a double negative.)

So, it’s important to separate out two kinds of grammatical errors: violations of a dialect from within that dialect (such as someone trying to write SEAE who violates rules of that dialect, or the muddled Black English of The Help), and usages that are correct within the writer’s dialect but not accepted in the dialect a reader is expecting. (A third category would be uses of language that aren’t grammatically incorrect at all, but people think they are–ending a sentence with a preposition, for instance.)

Here’s what I mean by the second kind of error. It would be bizarre for someone to chastise someone speaking German for ending a sentence with a preposition—that’s how German works. (It’s also how English works, but that’s a different post.) It would also be sheer bigotry to say that French is better than German because French doesn’t allow ending sentences with prepositions. Dialects and languages are all equally good at communicating; none is better than another.

I’ll mention something about the first toward the end of this post, but, for the most part, I want to focus on what we do about stigmatized dialects. The problem is this: since, for instance, Black English is stigmatized, and Standard Edited American English is rewarded, should teachers require that their students learn Standard Edited American English?

The advice for years (ever since the National Council of Teachers of English and the Conference on College Composition and Communication issued the statement “Students’ Right to Their Own Language”) has been to advocate code-switching. To say that a student should know SEAE because it’s useful, not because it’s better, is like saying that it’s useful to know French if you intend to live and work in France. From within this model, German is no better than French (nor is French better than German), and a student might be speaking perfect German in a French class. A person shouldn’t give up German, but add on the knowledge of French. Students should learn SEAE as an additional dialect that is useful under some circumstances.[1]

Unfortunately, too many teachers and professors and employers and people in power use the language of code-switching in order to enforce the message that Black English is inferior.

A few years ago I found myself in an argument on the internet with a white teacher in a predominantly African American school who banned Black English in her classes. She was proud that she told her students that Black English would hold them back. She wasn’t racist, she insisted; she was helping them. There’s what might seem like a subtle difference between what she was doing and what “Students’ Right to Their Own Language” advocates, but it’s an important one. She was clear that SEAE was better than Black English, that Black English was something they should be shamed for using. I then noticed the same problem when training people in the teaching of writing—they made a bigger deal about perfectly clear uses of stigmatized language than they did about grammar problems that interfered with communication. They did so, they said, because other people would be racist.

It’s my mother opposing “intermarriage” because other people would be racist. That’s racist.

Granted, we’re in a racist world, and using a stigmatized dialect will hurt a person in terms of job or housing applications, getting good scores on standardized tests, or dealing with racist teachers who deflect their racism onto others who might be racist. So, I understand, and still support, the idea that we should teach code-switching, but only if we give students the ability to choose whether they want to learn to code-switch, we make it absolutely clear that no dialect is better than another, and we make a bigger deal about violations of grammar and usage within (rather than across) dialects. I don’t know that we can do the second, and if that’s the case, then teaching code-switching is racist.

I mentioned that violations within a dialect are worth looking at carefully, largely because they can signal issues with thinking. For instance, mixing metaphors can indicate that we haven’t decided on the underlying model, or that we’re appealing to troubling models, or that we just aren’t thinking. I once heard a facilitator say, “We’re on a fast train flying out of the box.” She was describing a train wreck, as far as I could tell, but I think she meant it as a good thing. I don’t know. Had she said, “We ain’t done nothin’ about nothin’” I could have understood her perfectly.

Unclear pronoun reference can mean we haven’t really decided how causality works. For instance, if I say, “There are bunnies eating kale in the backyard, which is weird,” it isn’t clear whether the weird part is that there are bunnies, that they’re eating kale, or that they’re doing it in the backyard. In other words, it isn’t clear what “which” is referring to. What’s interesting to me about these sorts of errors (predication error or mixed construction is another one along these lines) is that “correcting” the error means first figuring out what I’m trying to say. These are interesting and significant errors.

Whenever I get into this topic (or when it comes up even on scholarly mailing lists), people advocating my position (the position of most if not all linguists, btw) get accused of thinking that anything goes, and that we shouldn’t care about clarity or correctness of any kind. That isn’t what I’m saying. I’m making four points. First, no dialect is better than any other (it might be more useful, appropriate, or effective under certain circumstances). Second, the things grammar Nazis worry about are often not “grammar” issues at all (but style preferences, hypercorrectness, misunderstandings of rules, misapplications of rules), and are almost always not issues of clarity, but are class or race markers (e.g., comma splices, double negatives, subject-verb agreement, ending with a preposition). Third, we should worry about certain issues of usage, but it should be the ones that are violations within a dialect, especially ones that signal muddled thinking. Fourth, the conventional wisdom among experts for years has been that we should teach code-switching (that is, the ability to switch between dialects), but that’s still racist unless we do so in a way that makes it clear that we aren’t privileging one dialect over another, and we offer it as a choice to students.


[1] Another way to put this is to say that prescriptivism is perfectly fine, as long as it’s taught qua prescriptivism.

Holding out for a Hero: The Far-Right Canonization of Kyle Rittenhouse

Guest post by Jim Roberts-Miller

Painting of St. Michael

On Tuesday, August 25, Kyle Rittenhouse drove from his home in Illinois to Kenosha, Wisconsin. Kenosha was roiled by protests over the police shooting of Jacob Blake. That night, Rittenhouse shot three people, killing two.

He is becoming a folk hero on the racist right. And not just on Twitter (which, as is often correctly pointed out, isn’t real life). As of this writing, at least two right wing pundits, Tucker Carlson and Ann Coulter, have come out decidedly in favor of the shooter, claiming he was acting in self-defense and casting his defense of private property as superior to the “lawlessness” of the protests.
The racist right is in desperate need of a hero after a summer of protest in which their usual tricks of attacking the victim, sympathizing with the tough job of police, and exaggerating the usually mild property damage that often comes with angry protests were, for various reasons, simply not working.

Despite their standard calls for “civility” and so-called support for “peaceful protests, not violence,” support for Black Lives Matter not only held steady, but actually went up. Corporations felt the need to make explicit their support of the BLM movement. Confederate monuments which survived recent protests were removed, in some cases overnight. City governments began looking to lower police budgets, shifting that money elsewhere. In at least a few cases, city governments have actually done this. The right’s normal paladin, Donald Trump, seemed not only unable to move the rest of America with his typically harsh rhetoric, but watched as his popularity went down and the electoral lead of his opponent Joe Biden climbed into the double digits, at least partially thanks to Trump’s ham-fisted efforts to violently put down what were seen as legitimate protests, walling off the White House and using tear gas to disperse protestors so he could hold a Bible upside down outside of a church.

A vast, if incomplete and imperfect, reckoning with the structures of white supremacy began to percolate through American society. The hysteria with which this was met on the right was extreme.

On the internet, you are never more than two or three clicks away from a racist right wing alternate universe of (black and brown) wild-eyed leftists bent on burning down the suburbs and replacing the (white) social structures of peaceful law-abiding (white) Americans with their (black and brown) socialist agenda for robbing the productive so they can live off welfare. And in this universe, the fear, confusion, and anger over the failure of the rest of decent (white) society to get angry over the lawlessness and disrespect being shown to the normal (white) power structures was palpable.

But the racist right has only one play. And that is to keep pushing the narrative that the protestors are not only misguided and wrong, but that they are (black and brown) violent and greedy and actively coming for you (decent white person). They pushed that narrative with the Portland protests, but it wasn’t working out. Kenosha was another chance.

And then Rittenhouse, who broke several laws just by being present in Kenosha with an AR-15 (thus proving the racist right’s problem isn’t really lawlessness, but who is breaking the law), shot three people. Unconsciously or not, the racist right realized that they could not allow Rittenhouse’s crimes to hijack the news the way the murder of Heather Heyer did in Charlottesville. To do so would once again wreck their narrative of leftist (black and brown) violence endangering good hard-working (white) folks. And so he couldn’t be written off as an aberration, or someone who made a mistake.

No. Rittenhouse had to be a hero. A young man who idolized the police, law and order, and who selflessly came to Kenosha to protect the property and society of ordinary (white) people from a ravening (black and brown) mob. That is their story. That is their desperate need. For them Rittenhouse is a hero, a martyr, a (white) man literally pursued by a mob who, in his extremity, was forced to kill to defend himself. And that is the story they will be pushing at all costs, because it is all they have and they have to get enough (white) people to condone the violence needed to put the mob (black and brown people) back in their place and re-elect Donald Trump.

You must not let them do this. Rittenhouse deliberately chose to break several laws to go to Kenosha Wisconsin, gun in hand, expecting to shoot people. He got his wish. This is not heroism.

So someone said, “Check your privilege”

people arguing
From the cover of Wayne Booth’s Modern Dogma

It seems to me that white males get more upset about being told to “check your privilege” than do women or POC. (And, yes, POC do sometimes get told to check their privilege because privilege is complicated—Ijeoma Oluo has a nice chapter on checking her own privilege.) “Check your privilege” is upsetting, I’ve been told, because the people being told it hear it as saying that their opinion is irrelevant purely because of who they are.

And I think women and POC have had that–being told our opinion is worthless because of who we are–happen so often that it’s nothing new. If anything, being told that my opinion is invalid because I’m speaking from such a place of privilege that my view is distorted is a much more valid reason than many others I’ve been given over the years. (My favorite remains the time that a man shouted at me that, because I’m a woman, I couldn’t possibly understand logic.) After all, there are ways in which my coming from a place of privilege does make my opinion worth less (and sometimes worthless).

For instance, when I went to graduate school, it wasn’t possible—let alone necessary—to buy a personal computer, tuition was low, and housing close to campus was available and affordable. Therefore, although the stipend was low, it was possible to make it through the program with very little debt. Since I came from the kind of family that paid for my undergraduate education, I started graduate school with no debt at all. That I was so privileged means that any advice I might now give to students considering graduate school is worth less than the advice of someone closer to them in experience.

I give a lot of advice about writing, and, although I try to incorporate advice that others with different experiences have given, ultimately, what I say is going to be from my perspective. And my perspective is shaped by the advantages I have and have had (such as low or nonexistent debt), and therefore it won’t be good advice for some people. They should ignore my advice.

If you tell me to check my privilege, you’re telling me that you think I’ve forgotten my epistemic limitations. You think my privilege means that my advice or judgment isn’t valid, or, at least, much more limited than I seem to realize.

What people who get defensive when told to check our privilege don’t understand is that your saying “Check your privilege” to me isn’t changing our relationship. You’re just naming it. It’s just a verbalized eyeroll. If you hadn’t said it, you would still have thought it.

So, the best response is to ask for clarification. In the days before people said, “Check your privilege,” there were other ways of making the same point: “You’re just saying that because you’re….” “I think you’re forgetting about…” “From my perspective…” “Someone from [this background] would look at it really differently…” and so on. And I think we’ve all had someone point out that our advice or judgment really was seriously limited by not having thought about it from another perspective. And it was useful.

It’s particularly hard to see how our perspective is limited by privilege because power comes into play. When I had people from prestigious and well-funded institutions give me career advice that was seriously limited by their privilege, it was hard for me to say, “Yeah, that won’t work for me” because they were powerful, and I needed their support. I didn’t say anything. But neither did I try to follow their advice because it didn’t make any sense—I didn’t have a TA to do my grading, a research assistant to help with clerical work, an administrative assistant to help with program administration. They hadn’t thought through how their advice was coming from a place of privilege, and was useless for someone like me.

This isn’t to say that someone who says, “Check your privilege” is always right. Sometimes people have a lot less privilege than it might appear, sometimes we’ve misunderstood how power works in a particular setting, sometimes people misunderstand what privilege means. Sometimes when people say, “Check your privilege” they want to talk about it, and they’re willing to explain in more detail. But sometimes they don’t want to, and that’s fine too. Almost always, it will take some time to think about whether and how privilege may have affected our judgment and what we should do about it.

Socially acceptable racism; Or, how “new” racism isn’t new

books about demagoguery

A lot of people make the point that there was a kind of racism—called “old” racism—that was openly biological/genetic, and openly hostile. Then, at a certain point, racist discourse shifted to become more genteel. That distinction between old and new racism isn’t entirely accurate, and the way it’s inaccurate is important. There have always been “genteel” racisms—what might be called “racism with a smile” or “some of my closest friends are…” racism. And those “nice” (that is, socially acceptable) racisms enable the kinds that openly advocate violence, expulsion, and extermination.

In this post, I want to talk about one of them—one that was tremendously popular in the twentieth century. This view accepted that there were “races,” that they were essentially (even genetically) different, that these differences manifest themselves in external characteristics (looks, behavior, cultural practices), but that all of these differences add to the richness of human life. This kind of racism celebrated the essential differences of human races. (Sort of. I’ll get to that.) People advocating this kind of racism often explicitly set themselves apart from a more openly hostile (but similarly biological) racism, insisting they weren’t racist on the grounds that they weren’t that bad.

Take, for instance, Dorothy Sayers, the mystery novelist. In Whose Body (1923), the villain kills a perfectly nice Jew out of spite mixed with a non-trivial amount of antisemitism. The hero expresses no antisemitism, not even when his friend indicates a desire to marry into a Jewish family, and the narrator has nothing negative to say about the victim or his family. In fact, everything we hear about the victim and his family appears positive. He is very good at playing the stock market and therefore wealthy, but not showy in this wealth (for instance, because he doesn’t have a chauffeur, he travels alone to the meeting the murderer has set up). He dotes on his wife and daughter, and is a good family man. He is kind to people.

This all appears positive—he’s smart, successful, modest, and a family man. This characterization is, however, simply the “positive” side of the same coin used in rabidly antisemitic rhetoric. For rabidly antisemitic groups, Jews are parasitic capitalists, money-grubbing, cheap, and tribal (“clannish” is the word sometimes used), and kindness becomes “pacifism” or “cowardice.”

Antisemitic rhetoric in groups like the Nazis stuck close to the producer/parasite dichotomy that runs back through readings of Paul’s prohibition about usury. Chip Berlet and Matthew Lyons have a useful description of how that dichotomy plays into toxic populism. The short version is that toxic populism presents some group as producers, and the other as parasites, or, in Paul Ryan’s more recent rhetoric, “makers” and “takers.” The in-group is always makers. For many populists, people who make money off of money—financiers, people who play the stock market—haven’t really created wealth (such as through owning land). They’re parasites.

Nazis were populists (authoritarians almost always are, even though their policies actually screw over most of the populace, and especially the middle and lower classes). The notion that Jews were always financiers and stock market geniuses (and bankers) was one of the most important aspects of Nazi antisemitic propaganda. It’s a theme in Mein Kampf, fercryinoutloud. Real money, so this argument goes, comes from agriculture, or perhaps small manufacturing. Being good at the stock market, for Nazis, is a smear.

Similarly, the negative stereotype of Jews was that they can never really be patriots, because they always favor their family rather than their country (for Hitler, an “Aryan” putting his family first is putting the country first). And the stereotype of Jews as cheap was another piece of antisemitic rhetoric. In other words, Sayers, even if her portrayal of a Jew appeared sympathetic (i.e., she was trying to be “nice”), reinforced exactly the stereotypes that resulted in the Holocaust: Jews are good at finance (capitalist parasites), modest (miserly), family lovers (clannish), non-violent (pacifists and cowards). It was racism with a smile.

She was far from alone. After Wyndham Lewis’ enthusiastic paean to Hitler (1931) didn’t go over as well as he’d expected, and his insistence that Hitler was “a man of peace” showed him to have been very wrong, he tried to get back in the good graces of the public with his Jews: Are They Human? (1939). His answer is that they have their own virtues—they’re very loyal to one another and family-loving (clannish), careful with money (greedy and miserly), and so on. Like Sayers, he put it in positive terms, but it was still endorsing the notion that Jews have an essential set of characteristics.

Lewis took Hitler’s claims of wanting world peace at face value, but it’s interesting that he didn’t take Nazi antisemitism at face value. I think it’s because he didn’t really object to it all that much. Lewis and the Nazis didn’t disagree as to the basic character of Jews; they just disagreed as to what should be done about it. So, for Lewis, Hitler’s antisemitism wasn’t especially notable—it was something he could dismiss as a little bit of an overreaction.

What has been a little surprising to me in working on demagoguery, especially when it leads to extreme policies about the cultural out-group, is the number of people who consider themselves “moderate” who endorse the basic narrative behind the demagoguery about the out-group. They just don’t think it should be taken too far.

Germans who agreed that there should be a quota for Jewish doctors, Americans who agreed that integrated schools were just a little too much, Brits who wouldn’t want their daughter to marry one—they could all see themselves as “not racist” (or, at least, not unreasonable in their attitudes toward Those People) because there was some other group less nuanced, less reasonable in their hostility. And, when push came to shove, they might raise an eyebrow at the people who did go “too far,” or perhaps mutter some criticism, but that’s about it. They were often allies, and rarely enemies, of the people who went “too far.”

Thus, that we now have people who say “I’m not racist, but…” isn’t a sign that there is a new kind of racism. It’s an old form, and a very damaging one.

The weird place of expertise in our culture of demagoguery

image of batboy


While I was working on demagoguery, I was continually puzzled by the problem of anti-intellectualism. The problem matters because, too often, we characterize demagoguery in ways that guarantee we won’t recognize it when we’re the ones getting suckered by it. We tell ourselves that demagogues are frauds, dishonest, and manipulative, but our leaders and pundits are sincere, truthful, and authentic. Sure, they have to lie sometimes, but they aren’t lying out of a place of dishonesty–it’s out of sincere concern, it’s necessary, and they’re basically truthful. Supporters of even the most notorious demagogues believed that they weren’t supporting demagoguery because they believed that Hitler, Theodore Bilbo, Fidel Castro, Joseph McCarthy, and Cleon were sincere, truthful, and authentic.

In general, I think it makes more sense to emphasize the culture of demagoguery, since the people we identify as demagogues were only able to come to power because the culture rewards demagoguery.

Demagoguery says that we don’t really face complicated issues of policy deliberation in a community of divergent and conflicting values, goals, and needs about issues that don’t have perfect answers. It says that things just look complicated—they’re actually very simple. We just have to commit to the obvious solution; that is, the solution that is obvious to our side.

That insistence on the solution being obvious, on disagreement and deliberation as unmanly dithering, can look like anti-intellectualism since it means the rejection of the kind of nuance and uncertainty generally considered central to science or research. But I’m not sure it’s useful to call it anti-intellectualism, since people rarely think of themselves as anti-intellectual. Like emphasizing the honesty/dishonesty of demagogues, talking about the anti-intellectualism of demagoguery means we won’t identify our own demagoguery.

It’s true that demagoguery often relies on rejecting experts as “eggheads” or, in Limbaugh’s phrase, “the liberal elite.” That quality of anti-elitism often means that scholars characterize demagoguery as a kind of populism (e.g., Reinhard Luthin). But lots of populism isn’t demagogic, and rhetoric in a democracy is of course going to attack some elite group–the super-rich, the military-industrial complex, Fat Cat Bankers. After all, major changes will be to the disadvantage of someone.

In addition, we don’t like to see ourselves as crushing some weak group; we like the David and Goliath narrative. The narrative of the spunky underdog fighting a massive power is so mobilizing that it’s often used under ridiculous circumstances. To condemn populism, therefore, just condemns rhetoric.[1]

As Aristotle pointed out, the elite can engage in demagoguery. Earl Warren’s demagoguery regarding “the Japanese” was directed toward Congressional representatives, and he was presenting himself as an expert summarizing the expert judgment of others. Harry Laughlin’s demagogic testimony before Congress regarding the supposed criminality and mental incapacity of various “races” was expert testimony–experts can be full of shit, as he was.[2] I think there is a different way of estimating expertise, but I’ll get to that in a bit.

At one point, I started to think that demagoguery simplifies complicated situations, and I still think that’s more or less true, but in a deceptively complicated way. Demagoguery can have very complicated narratives behind it, narratives so complicated that they’re impossible to follow (because they don’t actually make sense). QAnon, 9/11 conspiracies, Protocols of the Elders of Zion, conspiracy theories about Sandy Hook–they’re the narrative equivalent of an Escher drawing (conclusions are used as evidence for conclusions that are used as evidence for the first conclusions).

They’re often complicated narratives, in that they might have a lot of details and data, but they’re in service of a simple point about which one is supposed to feel certain: the out-group is bad, we are threatened with extermination[3], and any action we take against them is justified because they’re already doing worse or they intend to. So, the overall narrative is simple: we are good; they are evil.

Or, perhaps more accurately, the overall narrative is clear and provides us with certainty. Demagoguery equates certainty with expertise. Experts are certain; demagoguery doesn’t reject expertise, then, let alone precision, but it does reject any “expert” opinion that talks in terms of likelihood. Demagoguery relies on the binary of certain/clueless.

Thus, in a demagogic culture, certainty (sometimes framed as “decisiveness”) is seen as real expertise, the kind of expertise that matters.

Demagoguery tends to favor the notion of “universal genius”–the idea that judgment is a skill that applies across disciplines. So, someone with “good judgment” can see the truth in a situation even if they aren’t very knowledgeable. “Good judgment” is (in this model) not discipline specific (so someone with a PhD in mechanical engineering might be cited as an expert about evolution because he’s a “scientist”).

What I’m saying is that there are five qualities that contribute to demagoguery that we’re tempted to call “anti-intellectualism:” 1) the rejection of uncertainty; 2) the related rejection of deliberation; 3) the emphasis on narratives that are, in their end result, simple (we’re good and they’re bad); 4) faith in “universal genius;” 5) the equation of expertise with decisiveness.

Our impulse when arguing with someone who is promoting a debunked set of claims is to say “It’s been debunked by experts.” But that doesn’t work because it hasn’t been debunked by the people they consider experts. Similarly, it doesn’t help to say that they “reject facts.” They think they don’t–they think we do. (And we do, in a way–we reject data, some of which might be true.) I’m not sure how to persuade someone promoting false information that it’s false, but I’m increasingly coming to think that we’ll be running in place as long as we’re in a culture of demagoguery.

We need a conversation about certainty.



[1] I think there is a kind of populism that is toxic, and it’s the kind that Muller and Weyland each call “populism.” I think it’s more useful to call that kind of populism “populist demagoguery” or, as do Berlet and Lyons, “toxic populism.”

[2] I talk about these cases a lot more here.

[3] When I say this, many people focus on the “extermination” part, as though I’m casting doubt on whether groups sometimes face extermination. I’m not. As a side note, I’ll say that I’ve long noticed that people who live and breathe demagoguery have trouble noticing restrictive modifiers, especially if they’re left-branching or the modifier isn’t immediately obviously meaningful to them. That’s a different post, but the short version is that a person who thinks demagogically will read “Zionist Christianity is not necessarily a friend to Israel” as a claim about Christians, not a very specific kind of Christian.

Yes, unhappily, many groups face(d) extermination, but the situation isn’t zero-sum between only two groups. Something that hurt the Nazis didn’t necessarily help the Jews; Jews had potential allies among groups that were neither Jewish nor Nazi; there were, and had long been, disagreements within the Jewish communities in Europe as to how to respond to anti-semitism. Even now, it’s hard to say what would have been “the” right response because there probably wasn’t only one right response.

[4] People not engaged in demagoguery aren’t obligated to argue with every person who disagrees with them, but if we reject every opposition argument on the grounds that simply disagreeing means someone is bad, then it’s demagoguery.

The unnecessary incompetence of Tom Cotton’s (mis)understanding of slavery as a “necessary evil”

revisionist history books

Tom Cotton has proposed a bill that would prohibit Federal funds from being used to support the teaching of the New York Times’s The 1619 Project. He said, “As the Founding Fathers said, [slavery] was the necessary evil upon which the union was built, but the union was built in a way, as Lincoln said, to put slavery on the course to its ultimate extinction.”

This will shock absolutely no one, but I don’t think Tom Cotton has any idea what he’s talking about. The irony is that he is trying to take the scholarly high ground, as though his objection to The 1619 Project is that it is factually and historically flawed, when, in fact, his argument is factually and historically flawed. I’m not sure I’d call his history revisionist, as much as puzzling and uninformed.

“The Founding Fathers” is a vague term, but Cotton seems to be using it to include the authors of the Constitution. (I’m not sure whom else he’s including, though he seems to be including others, so I’ll put “the Founders” and “Founding Fathers” in quotation marks.) The Constitution was a document of compromise agreed to by people with widely divergent views on various topics, especially slavery. Therefore, it doesn’t really make sense to attribute one view about slavery to “the Founders”—there wasn’t one.

Even the same Founder could have different views. Jefferson, at the time of the founding, was what is generally called a “restrictionist”—slavery should be restricted to the existing slave states, as such a restriction would cause it to die out (as might well have been true). By 1819, however, he was describing slavery as a “wolf by the ear situation.” In 1820, he wrote in a letter, “We have the wolf by the ear, and we can neither hold him nor safely let him go. Justice is in one scale and self-preservation in the other.” Justice demands abolition, but Jefferson (like many people) worried that freed slaves would wreak vengeance on their oppressors. By 1820, Jefferson was in favor of expansion of slavery (and he was still a Founder).

It’s fair to characterize Jefferson’s view (in 1820) as an instance of the “necessary evil” topos, a defense of slavery common in the early 19th century. But that wasn’t his view at the time of the founding. And it certainly isn’t accurate to say that the Founders “put the evil institution on a path to extinction”—that certainly wasn’t what most of them were trying to do. Many of them (most?) hoped to preserve it eternally. In fact, as late as 1860, there remained the view that the Constitution guaranteed slavery, and that abolishing slavery would require a new Constitution. In 1854, the abolitionist William Lloyd Garrison burned a copy of the Constitution for exactly that reason, calling it “a covenant with death, and an agreement with hell.”

Cotton ignores several other important points. First, the “necessary evil” argument wasn’t very popular at the time of the Revolution or writing of the Constitution. In that era, it was more common for defenders of slavery to use the argument that slavery brought Christianity and civilization to slaves, and was therefore a benefit. By the early part of the 19th century, that argument became increasingly implausible, as manumission was increasingly prohibited (for more than you probably wanted to know about the 19th century history of arguments for slavery, see here). It’s at that point that one gets the “necessary evil” argument (which was never the only way that slavers talked about slavery). And, at that point, it was never an argument for abolishing slavery, let alone for it being “a necessary evil upon which the union was built.” I have no idea whom he thinks said that. I can’t think of anything a Founder said that could be interpreted as saying slavery was some kind of necessary phase through which the US had to pass.

To be blunt, I think Cotton has no clue what the “necessary evil” argument actually was. Robert Walsh’s 1819 An Appeal from the Judgments of Great Britain has a passage perfectly exemplifying the “necessary evil” argument:
We do not deny, in America, that great abuses and evils accompany our negro slavery. The plurality of the leading men of the southern states, are so well aware of its pestilent genius, that they would be glad to see it abolished, if this were feasible with benefit to the slaves, and without inflicting on the country, injury of such magnitude as no community has ever voluntarily incurred. While a really practicable plan of abolition remains undiscovered, or undetermined; and while the general conduct of the Americans is such only as necessarily results from their situation, they are not to be arraigned for this institution. (421)
This is essentially Jefferson’s argument—it’s evil, but there’s nothing we can do about it. Zephaniah Kingsley called slavery an “iniquity [that] has its origin in a great, inherent, universal and immutable law of nature” (14, A Treatise on the Patriarchal, or Co-operative System of Society As It Exists in Some Governments, and Colonies in America, and in the United States, Under the Name of Slavery, with Its Necessity and Advantages, 1829). Alexander Sims, in A View of Slavery (1834) said, “No one will deny that Slavery is a moral evil” (“Preface”). James Trecothick Austin, the Massachusetts Attorney General, wrote a response to Channing’s anti-slavery book called, appropriately enough, Remarks on Dr. Channing’s “Slavery.” Austin argued that slavery could never be abolished, and then said, “I utter the declaration with grief; but the pain of the writer does not diminish the truth of the fact” (25). The necessary evil line of defense is a self-serving fatalism about slavery–while pronouncing it evil (and thereby showing that one has the right feelings), this position precludes any action to end slavery: “Public sentiment in the slave-holding States cannot be altered” (Austin 24). What Cotton doesn’t understand is that the necessary evil argument was an argument for a fatalistic submission to the eternal presence of slavery.

Cotton doesn’t seem to be the sharpest pencil in the drawer, insofar as he seems not to understand his own argument. The necessary evil argument says slavery is evil. Cotton’s argument is that The 1619 Project is inaccurate because it presents the US as “an irredeemably corrupt, rotten and racist country,” which isn’t my read of the project at all. Oddly enough, were Cotton right that the “Founding Fathers” said slavery “was the necessary evil upon which the union was built” (as he claims), then they would have been endorsing the major point of The 1619 Project, that slavery is woven into US history from the beginning. The Founders didn’t say that, but Cotton seems to think it’s true, so I’m not even sure what his gripe with the project is. He seems to me to be endorsing its argument while thinking he’s disagreeing? (Has he actually looked at it?)

Cotton also seems not to understand Lincoln’s argument(s) on slavery. In 1858, during the Lincoln-Douglas debates, Lincoln argued that “it was the policy of the founders to prohibit the spread of slavery into the new territories of the United States,” but Lincoln wasn’t claiming that “the founders” were opposed to the spread of slavery into all territories. As mentioned above, the “founders” had a lot of different views; Lincoln means specifically the Northwest Ordinance of 1787 which, among other things, prohibited the expansion of slavery above a certain point. (Sometimes people cite Lincoln’s 1854 speech as though it’s about the Founders, but it isn’t—it’s about the policies regarding the expansion of slavery from 1776 to 1849. Lincoln never uses the term “Founders” in that speech because he isn’t talking about them.) In both those speeches, Lincoln was talking about the expansion of slavery into states above the line established by the Northwest Ordinance, in that territory. He wasn’t a fool. He knew that Kentucky (1792), Tennessee (1796), Louisiana (1812), Mississippi (1817) and various other states had been admitted as slave states by the generation that Cotton seems to want to call “Founders.”

So, here are some of the things that Cotton gets wrong. There wasn’t a view that “the Founders” had about slavery; the Constitution didn’t put slavery on a path to extinction (and the “Founding Fathers” certainly didn’t see it that way); the “necessary evil” argument was an argument for fatalistic submission to the possibly eternal presence of slavery, not an argument for its abolition, let alone benefit for the country; even the necessary evil line of defense admitted that slavery was evil; I don’t know of any Founders who argued that slavery was a necessary phase for the country to go through; that certainly wasn’t Lincoln’s argument; The 1619 Project doesn’t present the US as irredeemable.

But he’s right that slavery was evil.

The salesman’s stance, being nice to opponents, and teaching rhetoric

books about demagoguery

I mentioned elsewhere that people have a lot of different ideas about what we’re trying to do when we’re disagreeing with someone—trying to learn from them, trying to come to a mutually satisfying agreement, find out the truth through disagreement, have a fun time arguing, and various other options. There are circumstances in which all of these (and many others) are great choices—I think it’s an impoverishment of our understanding of discourse to say that only one of those approaches is the right one under all circumstances.

We also inhibit our ability to use rhetoric to deliberate when we assume that only one approach is right.

I’ll explain this point with two extremes.

At one extreme is the model of discourse that has been called “the salesman’s stance,” the “compliance-gaining” model, rhetorical Machiavellianism, and various other terms. This model says that you are right, and your only goal in discourse is to get others to adopt your position, and any means is justified. So, if I’m trying to convert you to a position I believe is right, then all methods of tricking or even forcing you to agree with me are morally good or morally neutral.

From within this model, we assess the effectiveness of a rhetoric purely on the basis of whether it gains compliance. For instance, in an article about lying, Matthew Hutson ends with advice from someone who has studied how lying to yourself makes you a more persuasive liar.

Von Hippel offers two pieces of wisdom regarding self-deception: “My Machiavellian advice is this is a tool that works,” he says. “If you need to convince somebody of something, if your career or social success depends on persuasion, then the first person who needs to be [convinced] is yourself.”

The problem with this model is clear in that example: if you’re wrong, then you aren’t going to hear about it. Alison Green, on her blog askamanager.org, talks about the assumption that a lot of people make about resumes, cover letters, and interviews—that you are selling yourself. People often approach a job search with exactly the approach that Von Hippel (and by implication, Hutson) recommends: going into the process willing to say or do whatever is necessary for you to get the job, being confident that you’ll get the job, lying about whether you have the required skills or experience (and persuading yourself you do).

Green says,

“The stress of job searching – and the financial anxieties that often accompany it – can lead a lot of people to get so focused on impressing their interviewers that they forget to use the time to find out if the job is right for them. If you get so focused on wanting a job offer at the end of the process, you’ll neglect to focus on determining if this is even a job you want and would be good at, which is how people end up in jobs that they’re miserable in or even get fired from.
And counterintuitively, you’ll actually be less impressive if it’s clear that you’re trying to sell yourself for the job. Most interviewers will find you a much more appealing candidate if you show that you’re gathering your own information about the job and thinking rigorously about whether it’s the right match or not.”

Von Hippel’s advice comes from a position of assuming that the liar is trying to get something from the other (compliance), and so only needs to listen enough to achieve that goal. The goal (get the person to give you a job, buy your product, go on a date) is determined prior to the conversation. Green’s advice comes from the position of assuming that a job interview is mutually informative, a situation in which all parties are trying to determine the best course of action.

If we’re trying to make a decision, then I need to hear what other people have to say, I need to be aware of the problems with my own argument, I need to be honest with myself at least and ideally with others. (If I’m trying to deliberate with people who aren’t arguing in good faith, and the stakes are high, then I can imagine using some somewhat Machiavellian approaches, but I need to be honest with myself in case they’re right in important ways.)

At the other extreme, there are people who argue that every conversation should come from a place of kindness, compassion, and gentleness. We shouldn’t directly contradict the other person, but try to empathize, even if we disagree completely. We should use no harsh words (including “but”). We might, kindly and gently, present our experience as a counterpoint. Learning how to have that kind of conversation is life-changing, and it is a great way to work through conflicts under some circumstances.

It (like many other models of disagreement) works on the conviviality model of democratic engagement: if we like each other, everything will be okay. As long as we care for one another, our policies cannot go so far wrong. And there’s something to that. I often praise projects like Hands Across the Hills or Divided We Fall that work on that model—our political discourse would be better if we understood that not all people who disagree with us are spit from the bowels of Satan. The problem is that some of them are.

That sort of project does important work in undermining the notion that our current political situation is a war of extermination between two groups because it reduces the dehumanization of the opposition. I think those sorts of projects should be encouraged and nurtured because they show how much the creation of community can dial down the fear-mongering about the other.

They are models for how genuinely patriotic leaders and media should treat politics—by continually emphasizing that disagreement is legitimate, that we are all Americans, that we should care for one another. But that approach to politics isn’t profitable for media to promote, and therefore isn’t a savvy choice for people who want to get a lot of attention from the media.

It also isn’t a great model for when a group is actually existentially threatened (as opposed to being worked into a panic by media). This model says, if we apply it to all situations, that, if I think genocide is wrong, and you think it’s right, I should try to empathize with you, find common ground, show my compassion for you. And somehow that will make you not support a genocidal set of policies? I do think that a lot of persuasion happens person to person, when it’s also face to face. I’ve seen people change their minds about whether LGBTQ people merit equal treatment by learning that someone they loved would be hurt by the policies they were advocating. I’ve also seen people not change their minds on those grounds. Derek Black described a long period of individuals being kind to him as part of his getting away from his father’s white supremacist belief system, but the guy went to New College; he was open to persuasion.

And I think it’s a mistake to think that kind of person-to-person, face-to-face kindness makes much difference when we are confronting evil. Survivors of the Bosnian genocides describe watching long-time friends rape their sister or kill their family. It isn’t as though Jews being nicer to and about Nazis would have prevented genocide. It wasn’t being nice to segregationists that ended the worst kind of de jure segregation. We have far too many videos that show being nice to police doesn’t guarantee a good outcome. People in abusive relationships can be as compassionate as an angel, and that compassion gets used against them. We will not end Nazism by being nice to Nazis.

That kindness, compassion, and non-conflictual rhetoric is sometimes the best choice doesn’t mean it’s always the only right choice. It can be (and often has been) a choice that enables and confirms extraordinary injustice. It’s often only a choice available to people not really hurt by the injustice. Machiavellian rhetoric is sometimes the best choice; it’s often not.




















Racism, Biden, Trump, and the bad math of whaddaboutism

boxes

John Stoehr has a nice piece about what he calls the “malicious nihilism” of Trump-supporting media and pundits. They’ve stopped trying to argue that Trump is not racist, since he explicitly stokes racism; instead, they’re saying that, since Biden is a Democrat, and Democrats used to be the party of racists, Biden is racist too: “Fine, the GOP partisans now say, Trump is a racist. The Democrats are just as bad, though. May as well vote for the Republican.”

That’s just plain bad math.

It’s easy to point to so many things Trump and his Administration have said and done that are racist. Critics of Biden point to one thing he said, and to what the Democratic Party was like prior to 1970. Those are not comparable. That way of thinking about Biden v. Trump ignores the important questions of degree, impact, and persistence.

It’s a weirdly common way of arguing about politics, though, and even interpersonal issues. For a long time there was a narrative about the Civil War that “both sides were just as bad,” and that it was the mutual extremism about the issue of slavery that led to war.[1] The “mutual extremism” claim rests on this same bad math. Most of the Presidents between John Adams and Abraham Lincoln owned slaves (JQ Adams was a notable exception); Congress was so proslavery that the House and Senate both banned criticism of slavery for years (the gag rules); the Supreme Court ruled that African Americans could never be citizens. Criticism of slavery in slaver states could be punished by hanging; the Fugitive Slave Laws enabled slavers to kidnap African Americans in “free” states. Pro-slavery rhetoric regularly called for race war should abolition happen, and began calling for secession to protect slavery in the 1820s. Commitment to slavery was so dominant in slaver states that they went to war against the US.

There were pro-slavery Presidents; there was no abolitionist President (JQ Adams would, after his presidency, become anti-slavery, but not clearly abolitionist). No state had a death penalty for advocating slavery; there was no gag rule for advocating slavery; abolitionists didn’t advocate civil war or race war; no one could go into a slaver state and declare an African American to be free and face the same low bar that kidnappers in the “free” states faced.

They weren’t both “just as bad” because they didn’t equally advocate violence, they weren’t equally powerful, advocating civil war was commonplace on only one side, and the laws and practices they advocated weren’t equally extreme.

I wrote a book about proslavery rhetoric, and when I would make this point—“both sides” weren’t “just as bad”—neo-Confederates would say, “What about John Brown?” That’s the bad math. If, on one side, advocating and engaging in violence is commonplace, then one example on the other side doesn’t mean they’re both just as bad. Even if you bring in Bloody Kansas, the violence (and advocacy of violence) on the part of critics of slavery doesn’t come anywhere close to the violence that was commonplace in support of slavery.

Here is my crank theory about why people reason that way. A lot of people really don’t (perhaps can’t) think in terms of degrees. They think in terms of categories (this is not the crank theory part—it’s a fairly common observation). Thus, you’re racist or not, certain or clueless, proud or ashamed; something is good or bad, right or wrong, correct or incorrect; you’re in-group or out-group, loyal or disloyal. They don’t think about degrees of racism, certainty, pride, goodness, loyalty, and so on.

There’s a funny paradox. Because they don’t think in terms of degrees (or mixtures—something might be loyal in some ways and disloyal in others), they believe that you either have a rigid, black/white ethical system, or you’re what they call a “moral relativist.” They actually mean “nihilist.” So, they hear “right v. wrong might be a question of degrees rather than absolutes” as saying there is no difference between right and wrong—one of their crucial binaries is “rigid ethical system of categories or nihilism.” That binary imbues those other binaries with ethical value—being rigid about loyalty v. disloyalty seems to be part of being a “good” person.

Because people like this think in terms of putting things in a box (something goes in the box of good or bad, racist or not racist, loyal or disloyal), if they can find a single racist thing related to Biden, then he and Trump are in the same box. And, therefore, that box can be ignored when it comes to comparing them, since they’re both in it.

And this brings us back to Stoehr’s point. The attachment to rigidity, the tendency to think in terms of absolutes and not degrees, makes these people actually incapable of ethical decision-making. Since wildly different actions are thrown into the box of “bad” or “racist,” people who reason this way can’t tell right from wrong. They can end up allowing, tolerating, encouraging, or even actively supporting wildly unethical actions because of their inability to think in nuanced ways about ethics. It’s moral nihilism.




[1] There weren’t only two sides, so the claim that “both sides” were anything is nonsensical. There were, at least, six sides: pro-slavery/pro-secession, pro-slavery/anti-secession, anti-slavery/pro-colonization, anti-slavery/pro-full citizenship, anti-anti-slavery, and anti-pro-slavery.

When every political issue is a war, shooting first seems like self-defense

train wreck
image from https://middleburgeccentric.com/2016/10/editorial-the-train-wreck-red/

For some time, we’ve been in a world in which far too much media (and far too many political figures) defenestrate public deliberation in favor of treating every policy decision as a war of extermination between two identities.[1] When a culture moves there, it’s inevitable that some group engages in what might be called “pre-emptive self-defense.” We’re there. It’s a weird argument, and profoundly damaging, but hard to explain.

The first time I ran across the proslavery argument, “We must keep African Americans enslaved and oppressed, because, if they had power, they would treat us as badly as we are treating them,” I thought it was really weird. I’ve since come to understand that it isn’t weird in the sense of being unusual. But it’s weird in the sense of being uncanny—it’s in the uncanny valley of argumentation in two ways. First, it turns the Christian value of doing unto others as you would have them do unto you into a justification of vengeance: do unto them as they have done unto you (which is a pretty clear perversion of what Jesus meant). Second, just to make it weirder, it isn’t even what they have done unto you, but what they might do in an alternate reality. And that alternate reality requires that they are as violent and vindictive as you.

The argument is something like, “Yes, I am treating other people as I would not want to be treated, and as they have not treated me, but it’s justified because it’s how I imagine they would treat me in a narrative that also is purely imagined.”

This weird line of argument turns up a lot in arguments for starting wars. Obviously, wars start because some group attacks another; someone is the aggressor. So, when you think about pro-war rhetoric, you’d imagine that the side that is the aggressor would justify that aggression. They don’t. Instead, they present themselves as engaging in self-defense. They claim that their aggression isn’t really aggression, but self-defense because the other nation(s) will inevitably attack them. It’s self-defense against something that hasn’t happened (and might never). Pre-emptive self-defense.

For instance, Hitler invaded Poland because he intended to exterminate it as a political entity, exterminate most of its population, use it as a launching spot for a war of extermination against the USSR, and then make it (and other areas) a kind of Rhodesia of Europe, with “Aryans” comfortably watching “non-Aryans” act as serfs. But that isn’t how he justified it in his public rhetoric. In his September 1, 1939 speech announcing an invasion that had already started, he said the invasion was an act forced on him, that he had engaged in superhuman efforts to maintain peace, but Poland was preparing for war. Invading Poland was self-defense because Poland was intending to invade Germany, and had already fired shots (they hadn’t). [2] The various wars against the indigenous peoples of what is now the United States, even when they openly involved massacres, were rhetorically justified as self-defense because the indigenous peoples were, so the argument went, essentially hostile to “American” expansion, and therefore an existential threat.

In other words, pre-emptive self-defense says, we are going to invade this other nation while claiming that it isn’t an invasion but self-defense (although we’re the invaders) because they were going to be invaders or would be invaders if they could. That’s nonsense. That’s saying I’m justified in hitting you because I think that, were I in your situation, I would hit me.

It’s such an unintelligible defense that it isn’t even possible to put it into writing without ending up in some kind of grammatical Möbius strip. Yet it’s obviously persuasive, so the interesting question is: how does that rhetoric work?

As I’ve often said, I teach and write about train wrecks in public deliberation, what are sometimes called “pathologies of public deliberation.” While there is a lot of interesting and important disagreement about specifics regarding the processes, on the whole, there’s a surprising amount of agreement among scholars of cognitive psychology, political science, communication, history of rhetoric, military history, social psychology, history, and several other fields about some generalizations we can make about what ways of reasoning lead people to unjust, unwise, and untimely decisions. And, basically, that agreement is that if the issues are high-stakes and the policy decisions will have long-term consequences, then relying on cognitive biases will fuck you up good. And not just you, but everyone around you, for a long time.

As it happens, deciding about whether to go to war, how to conduct a war, and whether to negotiate an end to a war are decisions that activate all the anti-deliberative cognitive biases. (Daniel Kahneman has a nice article explaining how some cognitive biases are pro-war.) So, there’s an interesting paradox: cognitive biases interfere with effective decision-making, arguments about whether to go to war (and how to conduct it) have the highest stakes, and those decisions are the most likely to trigger the cognitive biases. We reason the worst when we need to reason the best.

And what I’m saying is that we bring in that bad reasoning to every policy decision when we make everything a war. When people declare that a political disagreement is a state of war (the war on terror, war on Christmas, war on drugs, culture war, war on poverty), they are (often deliberately) triggering the cognitive biases associated with war. The most important of those is that our sense of identification with the in-group strengthens, and our tolerance for in-group dissent decreases. Declaring something a war is a deliberate strategy to reduce policy deliberation. It is deliberately anti-deliberative.

And one of the anti-deliberative strategies we bring in is pre-emptive self-defense. In war, that strategy consists of months of accusing the intended victim (the country that will be invaded) of intending to invade. Then, once the public is convinced that the country presents an existential threat, invasion can look like self-defense. In politics, that strategy consists of spending months or years telling a political base that “the other side” intends an act of war, a complete violation of the rule of law, extraordinary breaches of normal political practices (or claiming they already have); then “our” engaging in those practices, even if we are actually the aggressor, looks like self-defense. Pre-emptively. Thus, pro-slavery rhetors insisted that the abolitionists intended to use Federal troops to force abolition on slaver states; pro-internment rhetors argued that Japanese Americans intended to engage in sabotage (Earl Warren said that the fact that there had been no sabotage was the strongest proof that sabotage was intended).

I think we’re there with the pro-Trump demagoguery about “voter fraud” (including absentee ballots, the same kind that Trump used; there is no difference between “absentee” and “mail-in” ballots): it’s setting up a situation in which pro-Trump aggression regarding voting will feel like pre-emptive self-defense.

I asked earlier why it works, and there are a lot of reasons. Some of them have to do with what Kahneman and his co-author said about cognitive biases that favor hawkish foreign policy:

“Several well-known laboratory demonstrations have examined the way people assess their adversary’s intelligence, willingness to negotiate, and hostility, as well as the way they view their own position. The results are sobering. Even when people are aware of the context and possible constraints on another party’s behavior, they often do not factor it in when assessing the other side’s motives. Yet, people still assume that outside observers grasp the constraints on their own behavior.”

In the article, Kahneman and Renshon call these biases “vision problems,” but they’re more commonly known as “the fundamental attribution error” or “asymmetric insight” with a lot of projection mixed in.

The “fundamental attribution error” is that we attribute the behavior of others to internal motivation, but for ourselves we use a mix of internal (for good behavior) and external (for bad behavior) explanations. So, if an out-group member kicks a puppy, we attribute the action to their villainy and aggression; if they pet a puppy, we attribute the action to their wanting to appear good. In both cases, we’re saying that they are essentially bad, and all of their behavior has to be understood through that filter. If we kick a puppy, the act was the consequence of external factors (we didn’t see it, it got in our way); but petting the puppy was something that shows our internal state. In a state of war, even a rhetorical war, we interpret the current and future behavior of the enemy through the lens of their being essentially nefarious.

And we don’t doubt our interpretation of their intentions because of the bias of “asymmetric insight.” We believe that we are complicated and nuanced, but that we have perfect insight into the motives and internal processes of others, especially people we believe are beneath us. Since we tend to look down on “the enemy,” we will not only attribute motives to them, but believe that we are infallible in our projection of motives.

And it is projection. I’m not sure whether the metaphor behind “projection” makes sense to a lot of people now, since they might never have seen a projector. A projector took a slide or movie, and projected the image onto a screen. We tend to project onto the Other (an enemy) aspects of ourselves about which we are uncomfortable. If there is someone we want to harm, then projecting onto them our feelings of aggression helps us resolve any guilt we might feel about our aggression.

These three cognitive processes combine to mean that, quite sincerely, if I intend to exterminate you (or your political group, or your political power), I can feel justified in that extermination because I can persuade myself that you intend to exterminate me, since that’s what I intend to do to you.

Pre-emptive self-defense rationalizes my violence on the weird grounds that I intend to exterminate you and so you must desire to exterminate me. Therefore, all norms of law, constitutionality, and Christian ethics are off the table, and I am justified in anything I do. It’s a dangerous argument. It’s an argument that justifies an invasion.



[1] And, no, “both sides” are not equally guilty of it. For one thing, there aren’t two sides. On which “side” is a voter who believes that Black Lives Matter, homosexuality is a sin, gay marriage should be illegal, we need a strong social safety net and should increase taxes to pay for it, abortion should be outlawed, and the police should be demilitarized and completely changed? What about someone who believes there shouldn’t be any laws prohibiting any sexual practices or drug use, there shouldn’t be a social safety net, taxes should be greatly reduced, abortion should be legal, and we shouldn’t intervene in any foreign wars? Those are positions held by important constituencies (in the first case many Black churches, and in the second Libertarians). Some environmentalists are liberals, some are social democrats, some are Republicans, some are racists, some are Libertarians, some are Third Way neoliberals. The false mapping of our political world into two sides makes reporting easier and more profitable, and it enables demagoguery.

In addition, not all media engage in demagoguery to the same degree. Bloomberg, The Economist, Foreign Affairs, Foreign Policy, Nation, New York Times, Reason, Wall Street Journal, and Washington Post are all media that sometimes dip a toe into demagoguery, but rarely. Meanwhile, The Blaze, DailyKos, Fox, Jacobin, Limbaugh, Maddow, Savage, WND, and pretty much every group named by the SPLC are all demagoguery all the time.

[2] Hitler was claiming that “Germans” who lived in Poland were oppressed. But, he said, “I must here state something definitely; […] the minorities who live in Germany are not persecuted.” In 1939.