What Putin’s rhetoric should tell us about ours

Trump and Putin

This post is only partly about Hitler; it’s really about Putin, and it’s mostly about us.[1]

I write about train wrecks in public deliberation, so it was only a matter of time till I got around to the question of appeasing Hitler. That UK politicians chose to appease Hitler (and the US decided to do nothing) is not just a famously bad decision, but a consequential one. Jeffrey Record says it nicely:

No historical event has exerted more influence on post-World War II U.S. presidential use-of-force decisions than the Anglo-French appeasement of Nazi Germany that led to the outbreak of World War II. The great lesson drawn from appeasement—namely, that capitulating to the demands of territorially aggressive dictatorships simply makes inevitable a later, larger war on less favorable terms—has informed most major U.S. uses of force since the surrender of Germany and Imperial Japan in 1945. From the Truman Administration’s 1950 decision to fight in Korea to the George W. Bush administration’s 2003 decision to invade Iraq, presidents repeatedly have relied on the Munich analogy to determine what to do in a perceived security crisis. They have also employed that analogy as a tool for mobilizing public opinion for military action. (1)

When I started researching the issue, I approached it with the popular story about what happened. That is, Hitler was obviously a genocidal aggressor who couldn’t possibly be prevented from trying to be hegemon of all Europe—he had laid all that out in Mein Kampf, after all. Leaders who chose to appease him were wishful thinkers who deluded themselves; other countries should have responded aggressively much earlier, at the remilitarization of the Rhineland, ideally, or, at least, when he was threatening war with Czechoslovakia over what he called “the Sudetenland.”

Turns out it’s way more complicated than that. Way more complicated. To be clear, I still think various countries made terrible decisions regarding Hitler and Germany, but the leaders were constrained by voters. It was voters who got it wrong. I’ll get to that at the end.

Hitler took over from the Weimar democracy, which had its problems. It also had its critics. It liberalized laws about sexuality and gender identity, reduced the presence of religious proselytizing in public schools, opened up opportunities for women, included members of a demonized group (Jews) in positions of power, relied on democratic processes that included Marxists and democratic socialists, had a reduced military, and encouraged avant-garde art.

Here’s what is generally left out of popular narratives about WWII. Conservatives in all the countries that went to war against the Nazis hated everything the Weimar Republic had done, including its tolerance of Jews, and so many didn’t think the Nazis were all that wrong–better than the USSR, and better than Weimar. Popular between-the-wars UK literature is filled with anti-Semitic and anti-Slav rhetoric. Even during the war, a US anti-Nazi pamphlet that condemned Nazi racial ideology was severely criticized because it was attacking the “science” used to defend US segregation. As late as 1967 (in the lower court rulings on Loving v. Virginia) theories of race integral to Nazism were cited as authorities.

Hitler had a lot of apologists among conservatives, including the owner of the very popular Daily Mail in the UK. And, as George Orwell describes in the book that conservatives who quote him never read (haha, they never read anything he wrote–they just quote him), many UK media were knee-jerk anti-communist in their coverage of events—so knee-jerk anti-communist that they failed to distinguish between various kinds of leftist movements. So, a lot of UK media liked what Mussolini, Franco, and Hitler were doing.

Hitler’s first move after being granted dictatorial powers in 1933 (a grant that provoked no particular outrage on the part of major media in other countries, including the US) was to criminalize membership in unions, the democratic socialist party, the communist party, or any other party that advocated democratic deliberation. His second act was to kill all the socialists within the Nazi Party, which, weirdly enough, was used by his defenders as proof that he was more moderate than they. And from that point on it’s hard to get things in chronological order. The important point, though, is that by 1939, when there were still major media and figures defending him, he had criminalized not just dissent but any criticism of him, begun engaging in mass killing, criminalized various identities, begun a process of fleecing emigrants, openly reduced Jews to constant humiliation and abuse, and put into law the racialization of Germany. He had also remilitarized the Rhineland, incorporated the Saar, violently appropriated Austria, and then appropriated the “German” part of Czechoslovakia. He then took over the rest of Czechoslovakia, and he still had defenders.

Then, when he invaded Poland, some (not all) said, oh, wait, he’s a bad guy. So, why didn’t they do anything earlier? Because his rhetoric was pretty clever.

He had two kinds of rhetoric. For his internal audience, it was exactly what the rhetoric scholar Kenneth Burke described in 1939. Unification through a common enemy, scapegoating/projection, rebirth, bastardization of religious forms of thought, toxic masculinity (not Burke’s term, of course—he talks about the feminization of the masses). All of this was about the rebirth of Germany into a “strong” nation set on domination of weak groups. But he also always made a point of the injustice of the Versailles Treaty, especially the guilt clause.

His external rhetoric had a lot of overlap with that. For instance, a lot of UK media—specifically “conservative”—endorsed and openly admired Hitler’s ‘strong man’ crushing of liberal democratic practices and leftist policies, since they hated those policies and practices. They were also anti-Semitic, anti-Slav, and believed in the Aryan bullshit behind Nazi policies, as were many people in the US. In both the UK and US, many major political figures were sympathetic to thinking of Jews as “a problem” who should be denied immigration.

To go back to the UK, these “conservative” media were thereby writing approvingly of very new practices, ones that traditional conservative voices (such as Edmund Burke) would have found horrifying. “Conservatives” were now writing approvingly of what had until recently been seen as the enemy of the UK. In other words, people often claim to be “conservative” when all they’re conserving is their loyalty to their party, and it has nothing to do with conserving principles.

Here’s the part I didn’t know about appeasement. Many people, all over the political spectrum, were willing to say that the Versailles Treaty was unjust. Hitler’s foreign policy was defended through the rhetoric of the Versailles Treaty, which emphasized self-determination. He didn’t believe in self-determination, of course, but he could use that rhetoric. And he did.

And, as scholars have argued, his use of that rhetoric made it hard for advocates of the treaty to say he was wrong in what he was doing. They certainly couldn’t go to war over it, since the Great War, as it was called, was almost unanimously understood everywhere other than Germany as a colossal mistake. To go to war over the remilitarization of the Rhineland would have seemed to most UK voters a bizarre compulsion to repeat the errors of 1914, when a minor political issue could have been resolved without war.

Hitler adopted the rhetoric that his enemies had recently used—the rhetoric of self-determination—to scoop up territories. He claimed that “the people” of a region wanted Germany to invade because they were being oppressed by [Jews/liberals/Slavs], and so his appropriation was actually liberation. When it came to Poland, he couldn’t plausibly argue that, so he shifted his rhetoric to self-defense—Poland, France, and the UK were intent on attacking Germany (he claimed they already had), and so all Germany was doing was justifiable self-defense.

And that’s what Putin did. He adopted the rhetoric his enemies had used, which made it hard for them to call him out.

The rhetoric for a preventive war against Iraq—an unprecedented kind of war for the US—was that it was preventive self-defense. In fact, it was motivated by the desire to make Iraq a reliable ally in US foreign policy.

The rhetoric was that Iraq was supporting a global war against the US in the form of Al Qaida (Bush later admitted they knew it wasn’t), that it was a site of anti-American terrorism, and various other lies. The Bush Administration, and its fanatically supportive media, told a lot of lies that they knew were lies, because they wanted to put in place a government that would be supportive of US policy, or because they loyally and irrationally supported whatever a GOP President did. I happen to think Bush meant well. I think he believed a very simplistic version of the extremely controversial (and circular) “democratic peace” model, one he didn’t think most Americans would find compelling enough for war, and so he lied to get what he thought was a good outcome.

The problem is that rhetoric has its own consequences, regardless of intention. By arguing that the US was justified in invading Iraq and putting in a new leader because 1) that state was fostering terrorism, 2) it was part of an anti-US conspiracy, and 3) it presented an existential threat to the US, Bush legitimated a certain set of arguments (what rhetoricians call “topoi”). Just as the Versailles Treaty was grounded in topoi of self-determination, the Iraq invasion was grounded in topoi about terrorism and existential threat. There was a long history of that kind of rhetoric in the Cold War, especially about crushing any political movement that might threaten US control in the areas that the US considered its sphere of influence, such as Nicaragua. Throughout the Cold War, the US persistently crushed local popular movements of self-determination on the grounds of “sphere of influence”–we would not let any government exist in those areas if it wasn’t loyal to the US.

Putin used US Cold War rhetoric to justify his scooping up of areas, such as Chechnya. It would have been rhetorically and politically impossible for the US and NATO to go to war over that region, given how factionalized US politics is. Look at how the GOP—which had far less power in those days—was critical of US intervention in Serbia. Had Clinton advocated going to war, or even threatening war, over Chechnya, the GOP would have gone to town, and very few Dems would have supported it.

When it came to Ukraine, Putin adopted a rhetoric that cleverly blended Hitler’s rhetoric about Poland, US Cold War rhetoric, and Bush’s rhetoric about Iraq. It was a gamble, but not an unreasonable one (a different post) given the rhetorical conditions of US politics. You could take Hitler’s speech about invading Poland, do a few “find and replace” operations, blend it with a speech of Bush’s advocating invading Iraq, and get Putin’s speech.[2]

My point is that adopting a rhetoric to get what you want—Cold War rhetoric to justify propping up corrupt and vicious regimes in Central and South America, lying about terrorism to get a war desired for other reasons—has consequences. Rhetoric has consequences in terms of legitimating certain kinds of arguments.

And here is the point about appeasing Hitler. I’m writing a book with a chapter about the rhetoric of appeasement. My argument is that it was a bad choice in terms of what was in the long-term interest of the UK (and the world). However, and this is what most people don’t know, or won’t acknowledge, politicians made the choices they did because appeasing Hitler was the obvious choice to make for any political figure (or party) who wanted to get (or remain) elected. Had they advocated responding aggressively to Hitler, they would have been excoriated by the most powerful media. Had Clinton advocated responding aggressively to Putin’s treatment of Chechnya, it would have gone nowhere. Had a GOP President advocated responding aggressively to Putin’s expansionism, the Dems would have thrown fits.

I’m not saying that we should have responded aggressively when Putin took over Austria, I mean Chechnya, but that we should have deliberated about what Putin was doing. And we couldn’t. Because we are in a culture that demonizes deliberation. We are in a culture in which engaging in politics means standing in a stadium chanting, having no political opinion more complicated than what can be put on a bumper sticker, loyally repeating, retweeting, or sharing whatever is the latest in-group talking point, and treating hatred of the other side as proof of objectivity.

And here I’ll go back to appeasing Hitler. I don’t really blame the politicians for appeasing Hitler, but that’s largely for the same reason I don’t blame my dog Delbert for eating cat shit. Delbert will do whatever he can to get to cat shit, and politicians will do whatever they can to get elected.

Politicians appeased Hitler because the voters wanted Hitler appeased. We need to stop asking why politicians did what they did in regard to Hitler and instead ask why voters voted the way that they did. FDR and Chamberlain don’t bear the blame for why the US and UK responded as we did to Hitler; voters do. The lesson of appeasement, and the lesson of Putin, is not that leaders make bad decisions, but that voters make bad decisions, and then blame leaders.

After the tremendously popular Sicilian Expedition ended in disaster, the very people who had voted for it claimed that they had been misled, and that the politicians were at fault.

They voted for it.

George Lakoff pointed out that “liberals” and “conservatives” both adopt the metaphor of the family for government, in which the government is a parent and the citizens are children. What if, instead of imagining voters as tools in the hands of political leaders, we acknowledged what Socrates says: even tyrants are tools in the hands of citizens?

So, how do we counter Putin’s kind of rhetoric?

We accept the responsibility of voters, citizens, commenters, sharers, likers. We are all rhetors, and we try to behave responsibly, whether it’s about how awful cyclists are or whether Putin is right.

We stop remaining within our informational enclave. And we feel no shame about pointing out how unfair and irresponsible people are being.

We read the best arguments against our positions; we hold others to the same rhetorical standards as ourselves; we stop engaging in rhetorical Machiavellianism; we argue, well and fairly and vehemently. And we shame others who argue badly. We might do so vehemently, kindly, gently, or harshly, but we do so because we want others to do that to us.

[1] Normally, I link to citations, but that would have delayed this post by a week, since there are a lot of links. If folks want links and cites, let me know.
[2] For the people who have trouble with logic, and reason associatively, I’m not saying Bush was Hitler. I’m saying we shouldn’t judge rhetoric by whether we like its outcome or its advocates—it has its own consequences. Bad rhetoric in favor of a cause we like is, I’m saying, still bad rhetoric in that it legitimates what others might do with it.

“Populism” is not restricted to the plebeians; Or, don’t bathe in bagels

A doodle of someone bathing in bagels, and a maid offering more.

I talk a lot about models of democracy. In this post, I want to talk about a kind often called populism, largely because I’m worried about the implications of that term. I think it hinders our ability to think usefully about policy deliberation because it implies that a flawed model of deliberation is restricted to one group. Thus, once again, it makes inclusive democratic deliberation an issue of identity rather than approach.

Several models of democracy presume that we really disagree, and that there is no one viewpoint from which the best policy is obvious. We really disagree because we have different values, priorities, perceptions, interests, needs, experiences, and so on. There is no one right policy, but a large number of policies that are good enough in terms of appropriately sharing the burdens and benefits.

If we operate from within this sort of model, then, if people come to a decision that seems wrong to us, we try to figure out the perspective from which it makes sense, or the negotiations and compromises that might make this a “good enough” decision. Sometimes there is none, btw, and it really was a bad decision. Or it’s only good from such a narrow perspective that it’s really not good enough, if the goal is inclusion. There are lots of decisions that people later regretted that don’t look any better close up–refusing to change the “Jewish” immigration quota in the late 30s, eugenics, Jim Crow (I’ve picked examples that were bipartisan in their support, btw).

There are other models that presume that there is one perspective from which it is obvious what is the right thing to do, and I want to talk about one kind of that model–it’s the one to which we’re appealing when we decide that an entity has come to an obviously bad decision, and it’s obviously bad because it hurts or doesn’t help us. It assumes that there is no point of view with any validity other than our own. It assumes that the right course of action is obvious to a sensible person. In this model, a disengaged elite has made a decision that ordinary people know is wrong.

This model is often called populism, but I’m not happy about that term, since it implies that the “populace” engages in this approach to politics and elites do not.[1] The problem is that very few people think they’re in the elite, and yet, if you think about “elite” in terms of education or class, elites engage in that rhetoric just as much as any other group.

There is, for instance, the “makers v. takers” rhetoric, which is used to justify massive tax breaks for the very wealthiest on the grounds that they’re ordinary people, in a way, opposed to “the liberal elite” or “the Washington elite” who want intrusive government. Wealthy people complain about professors as an intellectual elite, as though wealthy people are oppressed by Ernesto Laclau.

I’ve talked about it before as “obvious politics,” which might be the right term—the right course of action is what looks obvious to MEEEEEE. It’s also called “stealth democracy” by some political scientists. In my grumpier moments, I think the right term might be something like narcissistic politics. Because of the rise of discussion about narcissism, we’ve lost the term “self-centered,” and that might be the best term of all.

In any case, to make the point that it isn’t about the unwashed, uneducated, and gullible masses being seduced into thinking badly about things, I want to talk about some academic conflicts in which I’ve seen super-smart people reason exactly this way—whatever we call it. It’s a way of approaching politics that assumes that there is one viewpoint (MINE) from which it’s obvious what should be done.

One example was when there was discussion at one of my universities of shifting the academic calendar in a particular way, and many faculty wanted the change enacted immediately. This came up at a Faculty Council meeting, of which I was a member since obviously I am paying for sins of a past life that must have been pretty fun. Most faculty talked purely in terms of how it would help them and their students. Several people from the College of Engineering said that enacting this change immediately would cause the University to lose its accreditation with important engineering entities. They agreed that the problem (classes on the day before Thanksgiving) was real, but disagreed about the plan. The majority of faculty voted for the change happening immediately.

This was at a University at which the College of Engineering losing accreditation would severely damage the university as a whole. But, the faculty who voted for changing the calendar immediately didn’t listen or didn’t care. They just looked at it from their perspective.

So, anytime that people who pride themselves on their education are outraged that Those Idiots are voting for something or supporting a candidate or party who will hurt them in the long run, I think about that meeting. It isn’t just Them. That’s what’s the matter with, for instance, What’s the Matter with Kansas.

The second example is actually a lot of examples, and it has to do with the cost of academic conferences. They are expensive, and travel is expensive, and departments often don’t support faculty adequately for attendance, or graduate students at all. Faculty at less prestigious colleges and universities sometimes have neither the salary nor university support to attend. Yet attending conferences is tremendously useful for teaching, research, job-hunting, and networking. Thus, the cost of conferences reinforces all sorts of nasty hierarchies in academia. It is a really important problem on which a field that claims to be inclusive really needs to work. We’re agreed on the need.

The plan, however, is up for argument, and one recurrent plaint is that the conference hotel is expensive, and the organization is clearly out of touch, greedy, or in cahoots with the hotels, and so conferences should be hosted at less-expensive hotels. There are complaints that rooms at the conference hotel are expensive, for instance, or that hosting an event in the hotel is pricey, or that the conference registration is far above what so many people can easily afford. Sometimes the accusation is that the organization is clueless about the financial situation of most academics.

My favorite moment, by the way, is when someone complained that the bagels at the conference hotel were expensive, in a somewhat incoherent post that seemed to suggest they thought the organizers were bathing in champagne on the basis of the profits of bagel sales.

And, just to be clear, I made all those complaints, and more, until I organized a conference. I looked at this issue through the model of narcissistic politics. I’d love to say that I reasoned my way out of it, but I didn’t. I experienced my way out of it.

I made those complaints (except the bagel one) because, from my perspective, it looked like an obviously stupid set of decisions.

In fact, the whole situation is much more complicated and boring than these fantasies of obviously stupid or nefarious conference organizers imply. (Although I’ll admit I kind of love the image of some conference organizer trying to bathe in as much champagne as they could buy with the profits from the sale of bagels in the hotel lobby, or perhaps even in bagels, hence the doodle above.)

Before I was involved in hosting a conference I didn’t consider so many things, such as the cost of the rooms in which panels were held. Nor was I even remotely aware of the normal cost of the hotel rooms that attendees might get, and thus how huge the discount often is, or how that discount is achieved. I’m not sure any academic organization profits from its annual conference (I’m certainly not aware of one in my field that does); the registration fees barely cover the costs, some conferences lose money, and sometimes the host covers the losses.

In my (limited) experience, the registration fee pays for the rooms in which the panels are held. The organization has to guarantee a certain number of room rentals in order to get the substantial reduction on room rates (and it is a substantial reduction), and that guaranteed room rental is what gets it a lower price on the conference space. In other words, an organization can’t host the conference at cheaper hotels, because those hotels don’t have the space for the panels, and it can only get that panel space by guaranteeing a certain number of room rentals. The more room rentals it can guarantee, the greater the discount.
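Since the arithmetic can be hard to picture in the abstract, here is a minimal sketch with entirely invented numbers (nothing here comes from a real hotel contract or a real organization’s books); it only illustrates the trade-off just described, in which the guaranteed room block buys down the cost of the panel space that registration fees then have to cover.

```python
# Toy model of the trade-off described above. Every number is hypothetical,
# invented purely for illustration; real hotel contracts are more complicated.

def conference_budget(attendees, guaranteed_room_nights, registration_fee):
    """Return the surplus (or shortfall) for a hypothetical annual conference."""
    # Hypothetical contract terms: the more room nights the organization
    # guarantees, the more the hotel discounts the meeting/panel space.
    base_meeting_space_cost = 60_000           # panel rooms, A/V, setup
    credit_per_room_night = 25                 # credit toward meeting space
    meeting_space_cost = max(
        base_meeting_space_cost - credit_per_room_night * guaranteed_room_nights,
        10_000,                                # the hotel won't go below some floor
    )

    # Catering, badges, program printing, and so on (the bagels live here).
    per_attendee_costs = 40 * attendees

    income = registration_fee * attendees
    expenses = meeting_space_cost + per_attendee_costs
    return income - expenses

# With 600 attendees, an 800-room-night guarantee, and a $110 registration fee,
# this toy model barely breaks even, and if the guaranteed block doesn't fill,
# the organization typically owes the hotel for the shortfall.
print(conference_budget(attendees=600, guaranteed_room_nights=800, registration_fee=110))
```

The made-up numbers aren’t the point; the structure is. The registration fee is mostly buying space, and the “move it to a cheaper hotel” option usually means giving up the room-block arrangement that makes the space affordable in the first place.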

So, I was wrong to imagine that conference organizers were bathing in bagels, or in the profits from bagels.

I’ve come to think that the problem is big, and the solutions aren’t obvious, and that organizations are working on them–they involve things like funds for certain kinds of attendees, tiered registration rates, and perhaps more virtual attendance options (which don’t help with networking), as well as organizational support for regional conferences. What I do know is that leaders of academic organizations worry about this a lot.

There are, of course, people in power who are greedy, narrow-minded, malevolent, corrupt, stupid, and so on, and we need to condemn them. My point is simply that no one died and gave us omniscience. We see as through a glass darkly, and a glass that only shows part of the possible world. That tendency to assume that only people like us matter, and people like us see the world in an obvious and unbiased way, isn’t about education, in-group membership, or some universal genius. It’s about information. We can’t know whether a decision is bad without trying to hear why people have made the decision they have. That it looks bad to us doesn’t necessarily mean it’s bad.

Unless they’re bathing in bagels. That’s a bad decision.

[1] Paul Johnson talks about “conservative populism,” meaning a specific rhetoric mobilized by groups that claim to be “conservative” (spoiler alert: they aren’t), and he uses the term precisely and usefully, but I think one still might infer that populism is unique to people who self-identify as “conservative” (which is very clearly not what he means). Chip Berlet and Matthew Lyons have a book I still like, in which they talk about “Right-Wing Populism,” which has as examples more than one Democrat, or supporter of the Democratic Party. As with Johnson, though, the term “right-wing” is restrictive. An awful lot of really good and smart work talks about populism more generally, as something that appears all over the political spectrum. But, again, it seems to me that, while no one is claiming that only people at that point on the political spectrum appeal to populism, there does seem to be the implication that it’s a vice of “the populace.”

“Sign, sign, everywhere a sign:” The Trappings of Truthful Information

I mentioned to someone that I thought people often mistake signs for proof, when signs aren’t even evidence. And that person asked for clarification, so here is the clarification.

What I was trying to say is that some people support their point via signs rather than evidence. I’ve often made the mistake of thinking that the people who appeal to signs rather than evidence misunderstand how evidence is supposed to work, but I eventually figured out that they don’t care about evidence. They care about signs. Explaining that point means going back over some ground.

A lot of people are concerned about our polarized society, identifying as the problem the animosity that “both sides” feel toward each other, and so the solution seems to be some version of civility—norms of decorum that emphasize tone and feeling. I have to point out that falling for the fallacy of “both sides” is itself part of the problem, so this way of thinking about our situation makes it worse. The tendency to reduce the complicated range of policy affiliations, ideologies, ways of thinking, ways of arguing, depth of commitment, openness to new ideas, and so many other important aspects of our involvement with our polis to a binary or continuum fuels demagoguery. It shifts the stasis from arguing about our policy options to the question of which group is the good one. That is the wrong question, and one that can only be answered by authoritarianism.

I think we also disagree about ontology. I’ve come to think that a lot of people believe that the world is basically a stable place, made up of stable categories of people and things (Right Answers v. Wrong Answers, Us v. Them). It isn’t just that the Right Answer is out there that we might be able to find; it’s that there is one Right Answer about everything, and it is right here–the Right People have it or can get it easily. We just need to listen to what the Right People tell us to do. I want to emphasize that these stable categories apply to everything—physics, ethics, religion, politics, aesthetics, how you put the toilet paper on the roll or make chili, time management, child-raising….

There are many consequences of imagining the world as a place of fake disagreement in which there is one Right Answer that we are kept from enacting, and I want to emphasize two of them. First, in this world, there is no such thing as legitimate disagreement about anything. If two people disagree, one of them is wrong, and needs to STFU. Second, the goal of thinking is to get one’s brain aligned with the categories that are in the structure of the world (to see the Right Answer), and people who think about the world this way generally believe there is some way to do that. In my experience, people who believe the world presents us with problems that have obvious solutions are some kind of naïve realist, but it’s important to note that there are various kinds of naïve realist (with much overlap).

There are naïve realists all over the political spectrum. That doesn’t mean I’m saying all groups are equally bad–that’s an answer to the question we shouldn’t waste our time asking [which group is the good one]. Instead of arguing about which group is good, we should be arguing about which way of arguing is better. I don’t think that there is some necessary connection between political ideology and epistemology—there are very few relativists (it’s hard to say that it’s wrong to judge other beliefs without making all the nearby cats laugh), but realists of various stripes I’ve read or argued with have self-identified as anarchist, apolitical, conservative, fascist, leftist, Leninist, liberal, Libertarian, Maoist, Nazi or neo-Nazi (aka, Nazi), neoconservative, neoliberal, objectivist, progressive, reactionary, and socialist, and I’ve lost interest in continuing this list.[1] I’ve also argued with people from those various positions who are not realists (which is a weird moment when I’m arguing with objectivists), and it’s often the people who insist on the binary of realist v. relativist who actually appeal to various forms of social constructivism (Matthew McManus makes this point quite neatly).[2]

I’ve talked a lot about naïve realism in various writings, but I’ve relatively recently come to realize that there are a lot of kinds of naïve realism, and there are important differences among them. They aren’t discrete categories, in that there is some overlap as mentioned above, but you can point to differences (there are shades of purple that become arguably red, but also ones that are very much not red–naïve realism is like that). For instance, some people believe that the Truth is obvious, and everyone really knows what’s true, but some people are being deliberately lazy or dumb. These people believe you can simply see the Truth by asking yourself if what you’re seeing is true. I’ve tended to focus on that kind of naïve realism, and that was a mistake on my part because not all naïve realists think that way.

Many kinds of naïve realists believe that the Truth isn’t always immediately obvious to everyone, because it is sometimes mediated by a malevolent force: political correctness, ideology, Satan, chemtrails, corrupt self-interest, unclean engrams, or the various other things to which people attribute the inability of Others to see the obvious Truth.[3] These people still believe it’s straightforward to get to the Truth. It might be through sheer will (just willing yourself to see what’s true), some method (prayer, econometrics, reading entrails, obeying some authority), being part of the elect, identifying a person who has unmediated access to the Truth and giving them all your support, or paying attention to signs. That last group is the one I want to talk about in this post.

Belief in signs is still naïve realism—the Truth (who/what is Right and who/what is Wrong) can be perceived in an unmediated way, but not always; the Truth is often obscured, but also often directly accessible. These people believe that there are malevolent forces that have put a veil over the Truth, but that the Truth is strong enough that it sometimes breaks through. The Truth leaves signs.

It is extremely confusing to argue with these people because they’ll claim that one study is “proof” of their position (they generally use the word “proof” rather than evidence, and that’s interesting), openly admitting that the one study they’re citing is a debunked outlier. They’ll use a kind of data or argument that they would never accept as valid in other circumstances—that some authors say there is systemic racism is a sign that those authors are Marxists, since that’s also what Marxists say. But, that the GOP says that capitalism tends toward monopoly doesn’t mean the GOP is Marxist, although that’s also what Marxists say. That one Black man, scientist, “liberal,” or expert says something is proof that it’s true, but that another Black man, scientist, and so on say it isn’t true doesn’t matter. That a hundred Black men, scientists, and so on say it’s wrong doesn’t matter. Or, what I eventually realized is that it does sort of matter—it’s further proof that the outlier claim is True. That knowledge is stigmatized is proof that it is not part of the cloud that malevolent forces place over the Truth—it’s one of the moments of Truth shining through. If you’ve argued with people like this, then you know that pointing out that they’re relying on a photo, quote, or study that appears nowhere outside their in-group doesn’t suggest to them that there are problems with that datum; on the contrary, they take it as a sign that it’s proof.

Because they believe that the Truth shines through a cloud of darkness, or leaves clues scattered in the midst of obscurity, they prefer autodidacts to experts, an unsourced but heavily shared photo to a nuanced explanation, a polymath or someone whose expertise is irrelevant to the question at hand to a relevant specialist, and people who speak with conviction and broad assertion to people who talk in terms of probabilities.

Fields that use evidence, such as law, spend quite a bit of time thinking about the relative validity of kinds of evidence. Standards of good evidence are supposed to be content-free, so that there are standards of expertise that are applied across disciplines. We can argue about the relative strength of evidence, and about whether it’s a kind of evidence we would think valid if it proved us wrong, but neither of those conversations has any point for someone who believes in signs rather than evidence. They’ll just keep repeating that there are signs that prove their point.

People who believe in degrees and kinds of evidence are likely to value cross-cutting research methods, disagreement, and diversity. People who believe that the Truth is generally hidden but shines out in signs at moments are prone, it seems to me, to see cross-cutting research methods and diversity as a waste of time, if not actively dangerous. They don’t see a problem with getting all their information from sources that confirm their beliefs; they think that’s what they should do. Yes, it’s one-sided, they’ll say—the side of Truth.

It’s because of that deep divide about perception that I often say that we have a polarized public not because we need more civility, as though we need to be nicer in our disagreements, but because we disagree about the nature of disagreement.




[1] Yes, I’ll argue with a parking brake, if it seems like an interesting one.
[2] I really object to the term “populist,” since it implies that the “elite” never engage in this way of thinking. That’s a different post.
[3] As an aside, I’ll mention that these people often believe that either you believe that there is a Truth, and good people perceive it with little or no difficulty, or you believe that all beliefs are equally valid (a belief that pro-GOP media attribute, bizarrely enough, to “Marxism”—Marxists hate relativism). Acknowledging uncertainty doesn’t make one a relativist, let alone a Marxist. If it does, then Paul was both a relativist and a Marxist. He did, after all, say that “we see as through a glass darkly.” If you’d like to argue that Paul was a relativist and Marxist, I’m happy to listen.






Your twenties kinda suck

black and white photo of young people


I’ve spent forty years working with people in their late teens or early twenties, and then watching them navigate their twenties. And, for an awful lot of people, your twenties suck. Not as much as high school, but more than college. They suck for a bunch of reasons, and one of them is how much nostalgia people have about their twenties. Because, also, your twenties have some great things about them, and so nostalgia is easy. The problem is that far too many people in their forties and fifties (or older) only remember the good things, and so, in movies, memories, fiction, TV, they’re a carefree time. When you’re in them, they are not carefree.

As a culture, we memorialize people in their twenties as being free, with no responsibilities, able to do all sorts of impulsive things, in a world that has no commitments, including no romantic ones. We think of people in their twenties as people who can move anywhere, have folks over to a messy place and serve them cheap booze and bad weed, have a “dinner” party that is your best ramen recipe, drift in and out of hookups, spend a day in bed reading earnest literature or listening to angsty music or just playing a game, get a tat, wear nothing but a t-shirt and ripped jeans for months on end.

And that’s sort of true, for some people with a particular background. And even for those people, the whole reason you could serve guests shitty alcohol and cheap food was that’s all any of you could afford. You could move anywhere and have drifty hookups because you had no responsibilities. But you had no responsibilities because you didn’t have a career, a stable enough living situation to get a dog, any particular connection to any one place, or, often, a stable relationship. Another way to describe the twenties is not free of responsibilities, but unmoored.

I have seen people whom I knew in their twenties reminisce as though that time was wild and carefree. I remember what they were like. They were not carefree. They were at moments, but at other moments they were incredibly anxious about whether they’d find a career, make enough money to get a stable living situation, figure out where to live, be able to get a dog, find someone… How we look back on our lives, and how we lived them, don’t always match up. We think about those times differently because we know how that story ended, and so we forget how anxious we were about where the story would go. And it’s fine and great if we look back on our twenties with affection and nostalgia, but I think it’s harmful if, as people giving advice, or as a culture, we deny the angst and difficulties inherent to that age.

The twenties are really hard for a bunch of reasons. People’s brain chemistry changes, and so people suddenly start having issues they’ve never had before. Many people didn’t go to college, and they’ve spent the years since high school trying to figure out how to navigate a world that doesn’t have a path of upward mobility or decent benefits for skilled artisans. Many people go to college, and end up with a degree that has no clear career path. They don’t know what they want to do, they find that they are having trouble getting a foothold in a career, they’re expected to have a career plan without any information. They also find that jobs won’t hire them without experience, and they have no way to get experience. It’s rough. Many people finish college in order to go to grad school, and grad school sucks. Some people take a path or have a personality that means they never have to manage the changes presented by the twenties, and good for them, but they aren’t the norm.

In addition, for many people (including some who go on to grad school), all the signs that gave us confidence are gone—good grades, praise from teachers, getting to be Eagle Scout, winning a sports championship—and so we don’t know how to assess our performance. Are we failing to get jobs (dates, second dates, that apartment, raises, publications, the same level of success in grad school that we got in undergrad, and so on) because we’re bad, we’re good but not a good fit, good but with bad materials, it’s a rough market, looking in the wrong places, looking in the wrong ways, we aren’t capable of achieving our goals, we aren’t making the changes we need to make in order to achieve those goals, those are goals no one achieves?

Because we have lost the ground for our confidence, some of us resort to arrogance, deciding that we are entitled to all the good things, exactly as we are, and doing exactly what we’re doing, and that we should be enraged if we don’t get what we believe we’re entitled to get. We have been the best, and so we are entitled to be the best. I don’t think that’s a great choice, but I get that it’s attractive.

I think there are other ways that things change for people in their twenties that we don’t always remember when we look back on that era. For an awful lot of people, the behaviors and mindsets that got them to their twenties (or didn’t stop them from getting there) stop working as well and sometimes at all. All-nighters, relying on panic and shame to get things done, letting friendship come to you, random self-care, no self-care—for many people those behaviors start having costs they didn’t have before.

Some people just pay those costs, some people get lost, some people wander and are not lost, some people do the hard work of trying to figure out how to manage a new world, some people postpone the difficulty, and, well, so many other options. There are lots of ways to get through the twenties that will turn out fine, and lots of ways that aren’t so great.

I rather like Erik Erikson, and I find persuasive the notion that there are moments in our lives when we’ve got a lot of crap from the past and mixed messages from the present. And we have to figure out what we’re going to do about the fact that living life as we’ve lived it has put us into a crisis about who we are and what we want.

There are moments when we look in our closet, and it’s stuffed full. It’s full of things we thought we’d use, things we used to use, things we’d like to use, things that we use, things we will use if life plays out a certain way, things we’ve been told we should value, things we value we’ve been told we shouldn’t, things we’ll never use but like to think of ourselves as the sort of person who would use them, and so on.

I think the twenties are one of those moments of looking at that closet. (There are others.)

We might decide to keep shoving stuff in there and just not look. That’s always a choice.

If we pull it all out and try to figure out what to do with it, there will be a moment (or more) that is absolutely awful. We will look at all that shit, all over the fucking place, and just wish we’d never started. We have to make so many decisions when we don’t have the information to know what we’ll need and what we won’t. We can, of course, shove it back in. We can give it all away. We can do the hard work of thinking about it all, and deciding what to keep, what to store, what to give away, and what to burn ritually.

Later, when we’ve shoved everything back in, burned it all, gone through it thoughtfully, or whatever we did, it doesn’t seem so bad, and that’s when the nostalgia kicks in. I don’t think we need to remember the pain of our twenties on a regular basis, but I do think it’s helpful, if we’re talking to people in their twenties, not to present our nostalgic version as though that was all that happened. It happened, but it isn’t all that happened.

Everyone wants to ban books

various books that are often challenged

I used to teach a class on the rhetoric of free speech, since what you would think would be very different issues (would the ideal city-state allow citizens to watch dramas, should Milton be allowed to advocate divorce, should people be allowed to criticize a war, should we ban video games) end up argued using the same rhetoric. Everyone is in favor of banning something, and everyone is prone to moral outrage that others want to ban something. The Right Wing Outrage Media went into a frenzy about people trying to pull To Kill a Mockingbird from K-12 curricula, and about “cancel culture,” as though they were, on principle, opposed to censorship. Those same pundits are now engaged in a disinformation campaign about CRT, which they are trying to ban (or, in other words, “cancel”), as well as books that teach students their rights, mention LGBTQ people, or talk about systemic racism. And the biggest call for pulling books from curricula, school libraries, and public libraries is on the part of the GOP, which continues to fling itself around about cancel culture. Of course, those examples could be flipped: people who defended removing Adventures of Huckleberry Finn or To Kill a Mockingbird are now outraged at Maus being removed.

They aren’t the first or only group to claim to be outraged, on principle, about “censorship” at the same moment that they’re advancing exactly the policy they claim, on principle, to be outraged that others advocate. Everyone wants some book removed from K-12 curricula, school libraries, or public libraries. We are all in favor of banning books.

I’m not saying that everyone is a hypocrite, that there’s not really a controversy, we’re all equally bad, or it’s all about who has the power. I’m saying that this disagreement too often falls into the rhetorical trap that so much public discourse does. We talk as though our actions are grounded in a principle to which we are completely and purely committed when, in fact, we violate it on a regular and strategic basis. It would be useful if we stopped doing that. We should argue about whether these books should be banned, and not about banning books in the abstract.

There are several problems with how we argue about “censorship.” One is that we often conflate boycotting and banning, and they’re different. If you choose not to listen to music that offends you, not to give money to businesses or individuals who promote values or advocate actions that you believe endanger others, or to refuse to spend Thanksgiving dinner with a relative who is abusive, that isn’t “cancel culture.” It’s making choices about what you hear, read, or give your money to. Let’s call that boycotting. This post is not about boycotting, but about banning, about restricting what others can hear, read, watch, or learn. For the sake of ease, I’ll call that “banning books.”

We’re shouting slogans at one another because we aren’t arguing on the stasis (that is, place) of disagreement. It’s as though we were room-mates and you wanted me to do my dishes immediately, and I wanted to do them once a day, and we tried to settle that disagreement by arguing about whether Kant or Burke had a better understanding of the sublime. We’ll never settle the disagreement if we stay on that stasis. We’ll never settle the issue about whether Ta-Nehisi Coates’ books should be banned from high school libraries if we’re pretending that this is an issue about whether book banning is right or wrong on principle.

The issue of banning books that we’re talking about right now actually has a lot of places of agreement. Everyone agrees that it is appropriate to limit what is taught in K-12, and what public and school libraries make available (especially to children). Everyone agrees that the public should have input on those limits and that availability. Everyone also agrees that it’s appropriate to limit access to material that is likely to mislead children, especially if it is in such a way that they might harm themselves or others. We also agree that mandatory schooling is necessary for a well-functioning democracy.

We disagree about when, how, and why to ban books because we really disagree about deeper issues regarding how democracy functions, what reading does, what constitutes truth, and how people perceive truth. We are not having a political crisis so much as a rhetorical one that is the consequence of an epistemic one.

It makes sense to start my argument with our disagreements about democracy, although the disagreements about democracy aren’t really separable from the disagreements about truth. Briefly, there are many different views as to how democracy is supposed to function. I’ll mention only five of the many views: “stealth democracy” (see especially page two; this model is extremely close to what is called “populism” in political science), technocracy, neo-Hobbesianism, relativism, and pluralism. And here is my most important point: none of these is peculiar to any place on the political spectrum. Our world is demagogically described as left v. right, just because that sells papers, gets clicks, and mobilizes voters. Our political world is, in fact, much more complicated, and the competing models of democracy exemplify how we aren’t in some false binary of left v. right. Every one of these models has its advocates everywhere on the political spectrum–not evenly distributed, I’ll grant, but they’re there. As long as we try to think about our political issues in terms of whether “the left” or “the right” has it right, we’ll never have useful disagreements on issues like book banning. So, back to the models.

“Stealth democracy” presumes that “the people” really consists of a group with homogeneous views, values, needs, and policy preferences. There isn’t really any disagreement among them as to what should be done; common sense is all one needs to recognize what the right decisions are in any situation, whether judicial, domestic or foreign policy, economic, military, and so on. Expert advice is reliable to the extent that it confirms or supports the perceptions of these “real” people, who rely on “common sense.” This kind of common sense privileges “direct” experience, claiming that “you can just see” what’s true, and what should be done. Experts, in this view, have a tendency to complicate issues unnecessarily and introduce ambiguity and uncertainty into a clear and certain situation.

So, how do advocates of stealth democracy explain disagreement, compromise, bargaining, and the slow processes of policy change? They believe that politicians delay and dither and avoid the obviously correct courses of action in order to protect their jobs, because they’re getting paid by “special interests,” and/or because they’ve spent too much time away from “real” people. They deflect the fact that other citizens disagree with them by characterizing those others as not “real” people, as dupes of the politicians, or as part of the “special interests.”

In short, there are people who are truly people (us) who have unmediated perception of Truth and whose policies are truly right. We rely on facts, not opinions. In this world, there is no point in listening to other points of view, since those are just opinions, if not outright lies. Just repeat the FACTS (using all caps if necessary) spoken by the pundits who are speaking the truth (and you know it’s the truth without checking their sources, not because you’re gullible, but because true statements fit with other things you believe). Bargaining or negotiating means weakening, corrupting, or damaging the truly right course of action. What we should do is put real people in office who will simply get things done without all the bullshit created by dithering and corrupt others. Dissent from the in-group is not just disloyalty, but dangerous. Stealth democracy valorizes leaders who are “decisive,” confident, anti-intellectual, successful, not particularly well-spoken, impulsive, and passionately (even fanatically) loyal to real people.

People who believe in stealth democracy believe that educating citizens to be good citizens means teaching them to believe that the in-group (the real people) is entirely good and that its judgment is to be trusted.

Technocracy is exactly the same, but with a different sense of who the people with access to the Truth are—in this case, it’s “experts” who have unmediated perception and know the “facts,” whereas everyone else is relying on muddled and biased “opinion.” Believers in technocracy valorize leaders who can speak the specialized language (which might be eugenics, bizspeak, Aristotelian physics, econometrics, neo-realism, Marxism, or so many other discourses), are decisive, and are certain of themselves. And technocracy has, oddly enough, exactly the same consequences for thinking about disagreement, public discourse, dissent, and school that stealth democracy does.

In both cases, there is some group that has the truth, and truth can simply be poured into the brains of others—if they haven’t been muddled or corrupted by “special interests.” They agree that taking into consideration various points of view weakens deliberation and taints policies—the right policy is the one that the right group advocates, and it should be enacted in its purest form. They just disagree about what group is right. (In one survey, about the same number of people thought that decisions should be left up to experts as thought decisions should be left up to business leaders, and I think that’s interesting.)

Both models agree that school can make people good citizens by instilling in students the Truths that group knows, while also teaching them either to become members of that group, or to defer to it. Because students should learn to admire, trust, and aspire to be a member of that group, there is no reason to teach students multiple points of view (since all but one would be “opinion” rather than “fact”), skills of argumentation (although teaching students how to shout down wrong-headed people is useful), or any information that makes the right group look bad (such as history about times that group had been wrong, mistaken, unjust, unsuccessful). Education is indoctrination, in an almost literal sense—putting correct doctrine into the students.

I have to repeat that there are advocates of these models all over the political spectrum (although there are very few technocrats these days, they seem to me evenly distributed, and there are many followers of stealth democracy everywhere). In addition, it’s interesting that both of these approaches are, ultimately, authoritarian, although advocates of them don’t see them that way—they think authoritarianism is a system that forces people to do what is not the obviously correct course of action. They both think authoritarianism is when they don’t get their way.

Hobbesianism comes and goes in various forms (Social Darwinism, might makes right, objectivism, “neo-realism,” some forms of Calvinism, what’s often called Machiavellianism). It posits that the world is an amoral place of struggle, and winning is all that matters. If you can break the law and get away with it, good for you. Everyone is trying to screw everyone else over, so the best approach is to get them first—it is a world of struggle, conflict, warfare, and domination. Democracy is just another form of war, in which we can and should use any strategies to enable our faction to win, and, when we win, we should grab all the spoils possible, and use our power to exterminate all other factions. Schooling is, therefore, training for this kind of dog-eat-dog world, either by training students to be fighters for one faction, or by allowing and encouraging bullying and domination among students. The curriculum and so on are designed to promote the power and prestige of whatever faction has the political control to force its views on others. There is no Truth other than what power enables a group to insist is true. As with the other models, taking other points of view seriously just muddies the water, weakens the will, and, with various other metaphors, worsens the outcome. People who subscribe to this model like to quote Goering: “History is written by the victors.”

I’m including relativism simply because it’s a hobgoblin. I’ve known about five actual relativists in my life, or maybe zero, depending on how you define it. “Relativist” is the term people commonly use for others (only one of the people I knew called themselves relativists) who say that there is no truth, all positions are equally valid, and we should never judge others. In fact, relativists are very judgmental about people who are not relativist (I have more than once heard some version of, “Being judgmental is WRONG!”), and they generally stop being relativist very fast when confronted with someone who believes that people like them should be exterminated or harmed.

Stealth democrats and Hobbesians are often effectively sloppy moral relativists, in that they believe that the morality of an action depends on whether it’s done by an in-group member (stealth democracy) or is successful (Hobbesians). But, in my experience, both also condemn relativism, because they don’t see themselves as relativists so much as people who are so good in one way that they have moral license to behave in ways that would have them flinging themselves around like a bad ballet dancer if an out-group engaged in them. On Moral Grounds.

Pluralism assumes that any nation is constituted by people with genuinely different needs, values, priorities, policy preferences, experiences. Therefore, there is no one obviously correct policy, about which all sensible people agree. Since sensible and informed people disagree, we should look for an optimal policy, a goal that will involve deliberation and negotiation. The optimal policy isn’t one that everyone likes—in fact, it’s probably no one’s preferred policy—but neither is it an amalgamation of what every individual wanted. It’s a good enough policy. Considering various points of view improves policy deliberation, but not because all points of view are equally valid, or there is no truth, or we are hopelessly lost in a world of opinion. Some advocates of pluralism believe that there is a truth, but that compromise is part of being an adult; some believe in a long arc of justice, and that compromises are necessary; some believe that truth is not something any one human or group has a monopoly on; some believe that the truth is that we disagree; some people believe that, for now, we see as through a glass darkly, but we can still strive to see as much and as clearly as possible, and that requires including others who, because they’re different, are part of a larger us. The foot is not a hand, the eye is not an ear, but they are all equally important parts of the body. We thrive as a body because the parts are different.

So, how does pluralism keep from slipping into relativism? It doesn’t say that all beliefs are equally valid, but that all people, actions, and policies are held to the same standards of validity—the ones to which we hold ourselves. We treat others as we want to be treated. We don’t give ourselves moral license.

And, now, finally, back to the question of book banning.

We all want to restrict books from schools and libraries. We disagree about which books because we disagree about which democracy we want to have. Do we believe that giving students accurate information about slavery, segregation, the GI Bill, housing practices and laws will make them better citizens, or do we believe that patriotism requires lying to them about those facts? Or, at least, pretending they didn’t happen? Do we imagine that a book transmits its message to readers, so that a het student reading a book that describes a gay relationship in a positive way might be turned gay?[1] Do we believe that citizens should be trained to believe that only one point of view is correct, to manage disagreement productively, to listen to others, to refuse to judge, to value triumph over everything, or any of the many other options? When we say books will harm students, what harm are we imagining? Are we worried about normalizing racism because that violates the pluralist model, normalizing queer sexualities because that violates the stealth democracy model, having students hear about events like the Ludlow Massacre since that troubles the Hobbesian model?

We don’t have a disagreement about books. We have a disagreement about democracy.



[1] One of the contributing factors to my being denied tenure was that I taught a book that enraged someone on the tenure and promotion committee. I didn’t actually like the book, and was using it to show how a bad argument works. He assumed you only taught books that had arguments you wanted your students to adopt. In other words, he and I were operating from different models of reading. One topic I haven’t been able to cover in this already-too-long post is lay theories of reading in book banning. My colleague Paul Corrigan is working on this issue, and I hope he publishes something soon.












“A little less talk, a little more action….”

Prime Minister Chamberlain announcing "peace for our time" (source: https://www.youtube.com/watch?v=SetNFqcayeA)


I know that I spend so much time talking about paired terms that people are probably tired of it. But, once you learn to recognize when someone is arguing from binary paired terms, then suddenly so many otherwise inexplicable jumps in disagreements make sense.

Just to recap, binary paired terms are sets of binaries (Christian/atheist, capitalist/communist) that are assumed to be logically equivalent—the preferred term in each pair is equivalent to (and necessarily chained to) all the other good terms, and all of them are opposed to the other terms, which are equivalent to (and chained to) all the other bad terms. Christian is to atheist as capitalist is to communist—all communists are atheists, all Christians are capitalists.

Paired terms showing that people assumed that integration was communist because they believed segregation was Christian

When someone (or a culture) is looking at the world through binary paired terms, then it seems reasonable to make an inference about an opposition’s affirmative case or identity simply because they’ve made a negative case. It’s fallacious. It’s assuming that, if you say A is bad, you must be saying B is good, as though the world of policy options is reduced to A and B.

For instance, segregationists who believed that segregation was mandated by Scripture (an affirmative case: A [segregation] is good) thought they were being reasonable when they assumed that critics of segregation (negative case: A is bad) were making an affirmative case for communism (B is good)—segregation is Christian; communists are the opposite of Christian; therefore, critics of segregation are communists. The important point is that people who believed that particular set of binary paired terms believed that it wasn’t possible to be Christian and critical of segregation.

Thinking in binary paired terms isn’t limited to one spot on the political spectrum, nor to any spot on the spectrum of educational achievement/experience. Nor are the binary paired terms the same for everyone, and they can change over time. For instance, now many conservative Christians (exactly the point on the religious spectrum that advocated slavery and then segregation) claim that Christians were opposed to segregation because MLK was Christian, thereby ignoring that the major advocates of segregation were white Christian churches and leaders, and even universities, like Bob Jones. They are ignoring that there were Christians on all sides of that argument.

Consider these sets of paired terms. For some people, being proud is the opposite of being critical; for some, it’s the opposite of being ashamed. Thus, for the first set of people, if you’re proud of the US, or proud of being an American, then you must think everything the US did is good; therefore, you think slavery was okay, and you must be racist. So, they assume that, if you say you’re proud of the US, or you fly a flag, then you’re a defender of slavery. Their set of terms is something like this:

Paired terms following from the proud/critical false binary


For the other group, the terms are something like this:

Paired terms following from the proud/ashamed false binary

So, while we might put those two arguments in opposition to each other (anti- v. pro-CRT, for instance), it’s interesting that they are both positions from within a world that assumes similarly binary paired terms. The whole controversy ends if we imagine that being proud and critical are possible at the same time—that is, if we dismantle the binary paired terms.

When I criticize, for instance, some practice of GOP politicians as authoritarian (or a GOP pundit for advocating authoritarianism), a supporter of the GOP will surprisingly often answer, “It’s the Dems who are authoritarian,” as though that’s a refutation. (The same happens when I criticize Dems, Libertarians, Evangelicals, or just about any other group.) That response doesn’t make any sense, unless you are working from within binary paired terms.

If Dems are the opposite of the GOP, and Dems are authoritarian at all, then they occupy the slot for authoritarian, and the GOP must be anti-authoritarian.

Of course, that’s entirely false. Both parties might be authoritarian; they might be authoritarian to different degrees; neither party might be authoritarian per se, but either might, at this moment, be advocating an authoritarian policy. Instead of arguing which party is authoritarian (as though that gives a “get out of authoritarianism free” card to “the” other), we should argue about whether specific policies or rhetoric are authoritarian, but you can’t do that if you approach all issues through binary paired terms.

Another important and damaging set of paired terms begins with the false binary of talk v. action. It’s both profoundly anti-deliberative and anti-democratic. And it’s so pervasive that we don’t even realize when we’re assuming it.

I got a really smart and thoughtful email about Rhetoric and Demagoguery, and the person raised the question of whether the desire for deliberation can be destructive, citing the instance of appeasing Hitler. And a common understanding of the appeasement issue is that people tried to deliberate with and about Hitler rather than take action, when action was what was necessary.

For reasons I’ll mention toward the end of this post, I am writing a chapter about the rhetoric of appeasement for the current book project, so I can answer that question. The answer is actually pretty complicated, but the short answer is that the British leaders never deliberated with Hitler, and the British public had severely constrained public discourse about Nazism and Hitler—so constrained that I’m not sure it counts as deliberation.

When we think in binary paired terms, one of the pair is narrowly defined (often implicitly rather than explicitly), and the other is everything else. When it comes to the issue of appeasing Hitler, “action” is implicitly narrowly defined as military action, and everything else is seen as “talk.” But talk is not necessarily deliberation. British leaders didn’t deliberate with Hitler; they bargained with him. Hitler didn’t bargain with British leaders; he deflected and delayed. I don’t think more talking with Hitler would have prevented war, and he wasn’t capable of deliberation (his discussions with his generals show that to be the case). But that doesn’t mean that military action would have prevented war. I used to think that going to war over Czechoslovakia would have been the right choice, but it turns out that course of action had serious weaknesses, as would sending troops in to prevent the remilitarization of the Rhineland (for more on the various alternatives to appeasement, see especially this book). The short version is that many of the military actions are advocated on the grounds that they would have deterred Hitler, a problematic assumption.

There were other actions that I’ve come to think probably had a higher likelihood of preventing war, such as Britain and the US refusing to agree to such a punitive treaty in 1919, insisting that the Kaiser explicitly agree to a treaty (i.e., not letting him and Ludendorff throw it onto the democracy), enacting something like the Dawes plan long before they did, either explicitly renegotiating the Versailles Treaty or enforcing it. In other words, preventing the rise of Nazism would have been the better course of action.

There are other counterfactuals people advocate: a mutual protection pact with the USSR, preventing France and Belgium from occupying the Ruhr, a different outcome for the Evian Conference, the US joining the League of Nations, a more vigorous response to the aggressions of Japan and Italy, the UK rearming long before it did, intervention in the Spanish Civil War. But, for various reasons, almost all of those options were rhetorical third rails–it was career-ending for a political leader to advocate any of them. The problem wasn’t that the UK engaged in talk rather than action, but that it didn’t talk about all the possible actions it might take, while the US didn’t deliberate about the issue at all.

The British public discourse about Hitler and the Nazis was severely constrained by the isolationism of the US, political complications in France, an unwillingness to deliberate about basic assumptions regarding what caused the Great War or what Hitler wanted, demonizing of the USSR, shared narratives about Aryanism, racism about Jews, Slavs, and immigrants generally.

But, many people ignore all those complexities, and imagine the situation this way:

Paired terms about appeasement resulting from false binary of talk/action

All the various actions that weren’t appeasement, but that weren’t military response, disappear from this way of thinking. And, to be blunt, that’s how the popular discourse about appeasement works.

So, why did I decide to write a chapter about appeasement?

Because I believed that the UK had ignored the obvious evidence that Hitler was obviously not appeasable and it was obvious that they should have responded more aggressively. In other words, I accepted the reductive binary paired terms about the situation. I was wrong.

Binary paired terms are pervasive and seductive, and we all fall for them. Obviously.

On planning (especially for dissertation writers)

calendar showing highlights for different kinds of work

A while ago (probably several months), someone said they hated planning, and I’ve been meaning since then to write a blog post about it. It’s even been on my to-do list since then. To some people, that might look ironic–here I am giving advice about planning when I have been planning to do something for months and not getting to it.

That only seems ironic if we imagine planning to do something as making an iron-clad commitment we are ethically obligated to fulfill immediately. Thinking about planning that way works for some people, but for most people, it seems to me, it’s terrifying and shaming.

Planning isn’t necessarily a process that guarantees you’ll achieve everything you ever imagine yourself doing, let alone as soon as you first imagine it. Nor does planning require that you make a commitment to yourself that you must fulfill or you’re a failure. It’s about thinking about what must v. what should v. what would be nice to get done, somehow imagined within the parameters of time, cognitive style, resources, energy, support, and various other constraints. Sometimes things you’d like to get done remain in your planning for a long time.

There are people who are really good at setting specific objectives and knocking them off the list, who believe that you shouldn’t set an objective you won’t achieve, and who are very rigid about planning. They often get a lot done, and that’s great. I’m glad it works for them. Unfortunately, some of them are self-righteous and shaming because they assume that this system–because it works for them–can work for everyone. That it clearly doesn’t is, to them, not a sign that the method isn’t universally valid, but a sign of weakness on the part of the people for whom it doesn’t work. They insist that this (sometimes very elaborate) system will work if you apply yourself, not acknowledging different constraints, and so they end up shaming others. They seem to write a lot of the books on planning, as well as blog posts.

And that’s the main point of this post. There is a lot of great advice out there about planning, but an awful lot of it is clickbait self-help rhetoric. There’s a lot of shit out there. There are some ponies. But there is so much shaming.

There are a lot of good reasons that some people are averse to planning—reasons about which they shouldn’t be ashamed. People who’ve spent too much time around compulsive critics or committed shamesters have trouble planning because they know that they will not perfectly enact their plan, and so even beginning to plan means imagining how they will fail. And then failure to be perfect will seem to prove the compulsive critic or committed shamester right. Thus, for people like that, making a plan is an existential terrordome. Personally, I think compulsive critics and committed shamesters are all just engaged in projection and deflection about how much they hate themselves, but that’s just one of many crank theories I have. Of course we will fail to enact our plan—nothing works out as planned—because we cannot actually perfectly and completely control our world. In my experience, compulsive critics and committed shamesters are people mostly concerned about protecting their fantasy that the world is under (their) control.

People who have trouble letting go of details find big-picture planning overwhelming; people who loathe drudgery find it boring; people trying to plan something they’ve never before done (a dissertation, wedding, trip to Europe, long-term budget) just get a kind of blank cloud of unknowing when they think about making a plan for it. People who are inductive thinkers (they begin with details and work up) have trouble planning big projects because it requires an opposite way of thinking. People who are deductive thinkers can have trouble imagining first steps. People who use planning to manage anxiety can get paralyzed when a situation requires making multiple plans.

I think planning of some kind is useful. I think it’s really helpful, in fact, and I think—if people can find the right approach to planning—it can reduce anxiety. But it is never going to erase anxiety about a high-stakes project. And a method of planning shouldn’t increase anxiety.

Because there are different reasons that people are averse to planning, and people get anxious in different ways and moments, there is no process that will work for everyone. If a process doesn’t work for you, that doesn’t mean you’re a bad person, or you’ll never be able to plan; it just means you need to find a process that works for you. And, to be blunt, that process might involve therapy (to be even more blunt, it almost always does).

Here are some books that people trying to write dissertations have found helpful. Anyone who wants to recommend something in the comments is welcome to do so, and it’s especially helpful if people say why it worked for them. Some of these are getting out of date, and yet people still like them.

Choosing Your Power, Wayne Pernell (self-help generally)
Destination Dissertation, Sonja Foss and William Waters
Getting Things Done, David Allen (the basic principle is good, but it’s getting very aged in terms of technology)
The 7 Habits of Highly Effective People, Stephen Covey (another one that is getting long in the tooth)
I haven’t done much with this website, but the research is strong: https://woopmylife.org/

There are some things that can help. If you don’t like planning because it’s drudgery, then make it fun. Buy a new kind of planner every year. Use colors to code your goals. If planning paralyzes you because of fear of failure, then set low “must” goals that you can definitely achieve, and have a continuum of what should get done. Get into some kind of group that will encourage you. If you feel that you’re facing a white wall of uncertainty, work with someone who has done what you’re trying to do (e.g., your diss director) to create a reasonable plan. This strategy works best if they see part of their job as reducing anxiety, and if they have a way of planning that works with yours.

One of the toxically seductive things about being a student is that you don’t have to have a plan through most of undergraduate and even graduate school. You have to pick a major, but it’s possible to pick one not because of any specific plan–it’s the one in which we succeed (a completely reasonable way to pick a major, I think), and then we might go to graduate school in that thing at which we’re succeeding (it makes sense), and in graduate school we’re given a set of courses we have to take. The “plan,” so to speak, might be nothing more than “complete the assignments with deadlines set by faculty.” Those deadlines are all within a fifteen-week period, and it’s relatively straightforward to meet them through sheer panic and caffeine. Then, suddenly (for many people), we are supposed to have a plan for finishing a dissertation, with deadlines that are years apart, for things we’ve never done—a prospectus, a dissertation. We have to know how to plan something long-term, with contingencies.

In my experience, planning in academia means being able to engage in a multiple-timeline plan. Having one plan that requires that you get a paper accepted by this time, a job by that time, a course release by then increases anxiety. It seems to me that people tend to do better with an approach that enables a distinction between hard deadlines (if this doesn’t happen by that date, funding will run out) and various degrees of aspirational achievements.

I think this challenge is present in lots of fields: you can’t simply decide to hit a certain milestone so much as hope to do so, and try to figure out what you can do between now and then to make that outcome likely. Thus, there are approaches out there that are helpful for that kind of contingent planning. But, just to be clear, there are a lot that really aren’t.

I also think it’s helpful to find a way of planning that is productive given our particular habits, anxieties, ways of thinking. People who are drawn to closure seem to thrive with a method that is panic-inducing for people who are averse to it, for instance. So, it might take some time to find a method (it took me till well into my first job, but that was before the internet).

Writing a dissertation is hard; there is nothing that will make it easy. There are things that will make it harder, and doing it without a way of planning that fits your personality, situation, and so on is one. But there is no method of planning that will work for everyone, and there is no shame if some particular method isn’t working.




On finding my notes and files from my dissertation

heavily edited writing

I recently found my notes and files from when I was writing my dissertation. I’ll start by saying that I’ve had a respectable publishing career, but hooyah, that dissertation was a hot steaming mess. So was my process of writing it. So, if you’re trying to write a dissertation, and you’re in the midst of a chaotic writing process and you think that what you’re writing is awful, it can’t be worse than either my process or product. You’ll be fine.

There’s a longer version of this, but here I’ll list a few ways that things went wrong. First, I was trying to use a technology that lots of people used (a notecard system), but it really didn’t work for me. I didn’t know that, and I couldn’t have known it till I tried it. It got me too caught up in details, and I’m an inductive thinker (and writer), and it worsened all the flaws of inductive writing (assuming that if you give enough details people will infer your argument).

There are lots of technologies that people now use—Zotero, commenting on PDFs—and they work for some people and not for others. If one that other people are using doesn’t work for you, then committing to it with more will won’t make it work. It doesn’t mean you’re a failure; it means there’s a bad match between that technology and you, and the technology needs to go.

Second, I was working with faculty who were not in the conversation I was trying to enter. That was simply a function of my topic and department. My committee was really good, but they couldn’t tell me what to read or what conferences to attend. Make sure someone on your committee knows the conversation, or change the conversation.

Third, I was modelling my argument on books I admired that were written by advanced scholars. Your dissertation, in terms of scope and structure, should be modelled on books written by junior scholars or other dissertations in your department.

Fourth, people writing their dissertations should be prohibited from making any major decisions regarding things like marriage.

Fifth, I was in a highly competitive department in which something like a writing group would probably not have been helpful, but I wish I had found one. It’s hard, though, since people outside your field will often give advice that isn’t appropriate for yours.

How things went right.

First through fifth: my dissertation director was a smart, insightful, and kind person. He was a student of Thomas Kuhn’s, and so stepped back and saw processes. When you’re writing a dissertation, there are moments when you are completely paralyzed. That’s often because we’ve gotten through undergrad and coursework without thinking about structure very much. So, you go from thinking about how to structure a 20-page paper (or not, you just make it a list) to how to structure something that is 200 pages. You have to decide what’s background, where to explain that background, how to position yourself in regard to other scholarship, how much of that scholarship to discuss, where to start…so many things that just don’t come up in a seminar paper.

My director, Arthur Quinn, taught a course on 18th-century rhetoric whose readings were entirely histories of the 18th century that happened to emphasize rhetoric, and we spent the semester talking about their methods, structures, assumptions, and rhetoric. It was one of four classes I had in that program (maybe five) that were historiography, but I didn’t know that at the time. What I did know is that he was asking us to step up a ladder, from just thinking about our data, or our argument, to the various ways we might make that argument. That course influenced every single grad course I taught.

At one point, completely paralyzed in my writing, I was in a grad student office, rearranging the Gumby-like figures my office-mate had into a baseball game. My director’s office was next door, and he stopped, looked in, and then went to his office. A while later, he came into my office and said, “Here’s what you’re arguing.” He gave me an outline for my dissertation. I started writing again.

My dissertation did not end up following that outline. But his giving me that direction got me writing. He was generally a hands-off director, but he knew the moment he had to step in.

Now that I’ve seen a lot of grad students, and a lot of directors, I appreciate him so much. A lot of scholars rely on panic to motivate themselves, and so they sincerely believe they are helping their students when they deliberately work their students into a panic. Many rely on shaming themselves in order to write, and so they think that shaming students is helpful. Some forget how hard it is to write a dissertation, and so they dismiss or minimize the concerns of their students. Some believe that they benefitted from how isolating writing a dissertation is, and so they believe that refusing to give directive advice is helping their students. Some have writing processes in which you have to have the entire argument determined before you start, and so they insist their students do. Some drift around in data and so encourage their students to do so. All of these processes work for someone—that’s why people adopt them—but none of them work for all students, and none of them work for any one student all the time.

And that is what Art Quinn taught me.