How not to make a Hitler analogy

Americans love the Hitler analogy, the claim that their political leader is just like Hitler. And it's almost always very badly done—their leader (let's call him Chester) is just like Hitler because… and then you get trivial characteristics, ones that don't distinguish either Hitler or Chester from most political leaders (they were both charismatic, they used Executive Orders), or that flatten the characteristics that made Hitler extraordinary (Hitler was conservative). That process all starts with deciding that Chester is evil, and Hitler is evil, and then looking for any ways that Chester is like Hitler. So, for instance, in the Obama is Hitler analogy, the argument was that Obama was charismatic, he had followers who loved him, he was clearly evil (to the person making the comparison–I'll come back to that), and he maneuvered to get his way.

Bush was Hitler because he was charismatic, he had followers who loved him, he was clearly evil (to the people making the comparison), and he used his political powers to get his way. And, in fact, every effective political figure fits those criteria in that someone thought they were clearly evil: Lincoln, Washington, Jefferson, FDR, Reagan, Bush, and Trump, for instance.

He was clearly evil. In the case of Hitler, it means he killed six million Jews; in the case of Obama, it means he tried to reduce abortions in a way that some people didn't like (he didn't support simply outlawing them); in the case of Bush, it was that he invaded Iraq; for Lincoln, it was that he tried to end slavery; and so on. In other words, in the case of Hitler, every reasonable person agrees that the policies he adopted six or seven years into his time as Chancellor were evil. But not everyone who wants to reduce abortions to the medically necessary agrees that Obama's policies were evil, and not everyone who wants peace in the Middle East agrees that Bush was evil.

So, what does it mean to decide a political leader is evil?

For instance, people who condemned Obama as evil often did so on grounds that would make Eisenhower and Nixon evil (support for the EPA, heavy funding for infrastructure, high corporate taxes, a social safety net that included some version of Medicare, secular public education), many of which would also make Eisenhower, Nixon, Reagan, and the first Bush evil (faith in social mobility, protection of public lands, promoting accurate science education, support for the arts, an independent judiciary, funding for infrastructure, good relations with other countries, the virtues of compromise). So, were the people condemning Obama as evil doing so on grounds that would cause them to condemn GOP figures as evil? No—their standards didn't apply to figures they liked. It was just a way of saying he wasn't GOP.

Every political figure has some group of people who sincerely believe that leader is obviously evil. And every political figure who gets to be President has mastered the arts of being charismatic (not every one gets power from charismatic leadership, but that’s a different post), compromising, manipulating, engaging followers. So, is every political leader just like Hitler?

Unhappily, we’re in a situation in which people make the Hitler analogy to everyone else in their informational cave, and the people in that cave think it’s obviously a great analogy. Since we’re in a culture of demagoguery in which every disagreement is a question of good (our political party) or evil (their political party), any effective political figure of theirs is Hitler.

We’re in a culture in which a lot of media says, relentlessly, that all political choices are between a policy agenda that is obviously good and a policy agenda that is obviously evil, and, therefore, nothing other than the complete triumph of our political agenda is good. That’s demagoguery.

The claim that "he was clearly evil" is important because it raises the question of how we decide whether something is true or not. And that is the question in a democracy. The basic principle of a democracy is that there is a kind of common sense, that most people make decisions about politics in a reasonable manner, and that we all benefit because we get policies that are the result of the input of different points of view. Democracy is a politics of disagreement. But, if some people are supporting a profoundly anti-democratic leader, who will use the power of government to silence and oppress, then we need to be very worried. So the question of whether we are democratically electing someone who will, in fact, make our government an authoritarian one-party state is important. But, how do you know that your perception that this leader is just like Hitler is reasonable? What is your "truth test" for that claim?

1. Truth tests, certainty, and knowledge as a binary

Talking about better and worse Hitler analogies requires a long digression into truth tests and certainty for two reasons. First, the tendency to perceive their effective political leaders as evil because their policies are completely evil is based on and reinforces the tendency to think of political questions as between obvious good and obvious evil, and that perception is reinforced by and reinforces what I'll explain as the two-part simple truth test (does this fit with what I already believe, and do reliable authorities say this claim is true). Second, believing that all beliefs and claims can be divided into obvious binaries (you are certain or clueless, something is right or wrong, a claim is true or false, there is order or chaos) correlates strongly with authoritarianism, and one of the most important qualities of Hitler was that he was authoritarian (and that's where a lot of these analogies fail—neither Obama nor Bush was an authoritarian).

And so, ultimately, as the ancient Greeks realized, any discussion about democracy quickly gets to the question of how common people make decisions as to whether various claims are true or false. Democracies fail or thrive on the single point of how people assess truth. If people believe that only their political faction has the truth and every other political faction is evil, then democracies collapse and we have an authoritarian leader. Hitlers arise when people abandon democratic deliberation.

That’s the most important point about Hitler: leaders like Hitler come about because we decide that diversity of opinion weakens our country and is unnecessary.

The notion that authoritarian governments arise from assumptions about how people argue might seem counterintuitive, since that seems like some kind of pedantic question only interesting to eggheads (not what you believe but how you believe beliefs work) and therefore off the point. But, actually, it is the point—democracies turn into authoritarian systems under some circumstances and thrive under others, and it all depends on what is seen as the most sensible way to assess whether a claim is true or not. The difference between democracy and authoritarianism lies in that practice of testing claims—in truth tests.

For instance, some sources say that Chester is just like Hitler, and other sources say that Hubert is just like Hitler. How do you decide which claim is true?

One truth test is simple, and it has two parts: does perceiving Chester as just like Hitler fit with what you already believe? Do sources you think are authorities tell you that Chester is just like Hitler? Let's call this the simple two-part truth test, and the people who use it are simple truth-testers.

Sometimes it looks as though there is a third (but it's really just the first reworded): can I find evidence to show that Chester is just like Hitler?

For many people, if they can confirm a claim through those three tests (does it fit what I believe, do authorities I trust say that, can I find confirming evidence), then they believe the claim is rational.

(Spoiler alert: it isn’t.)

That third question is really just the same as the first two. If you believe something—anything, in fact—then you can always find evidence to support it. If you are really interested in knowing whether your beliefs are valid, then you shouldn’t look to see whether there is evidence to support what you believe; you should look to see whether there is evidence that you’re wrong. If you believe that someone is mad at you, you can find a lot of evidence to support that belief—if they’re being nice, they’re being too nice; if they’re quiet, they’re thinking about how angry they are with you. You need to think about what evidence you would believe to persuade you they aren’t mad. (If there is none, then it isn’t a rational belief.) So, those three questions are two: does a claim (or political figure) confirm what I believe; do the authorities I trust confirm this claim (or political figure)?

Behind those two questions is a background issue of what decisions look like. Imagine that you’re getting your hair cut, and the stylist says you have to choose between shaving your head or not cutting your hair at all—how do you decide whether that person is giving you good advice?

And behind that is the question of whether it's a binary decision—how many choices do you have? Is the stylist open to other options? Do you have other options? Once the stylist has persuaded you that you either do nothing to your hair or shave it, then all he has to do is explain what's wrong with doing nothing. And you're trapped by a logical fallacy, because leaving your hair alone might be a mistake, but that doesn't actually mean that shaving your head is a good choice. People who can't argue for their policy like the fallacy of the false division (the either/or fallacy) because it hides the fact that they can't persuade you of the virtues of their policy.

The more that you believe every choice is between two absolutely different extremes, the more likely it is that you’ll be drawn to political leaders, parties, and media outlets that divide everything into absolutely good and absolutely bad.

It’s no coincidence that people who believe that the simple truth test is all you need also insist (sometimes in all caps) that anyone who says otherwise is a hippy dippy postmodernist. For many people, there is an absolute binary in everything, including how to look at the world—you can look and make a judgment easily and clearly or else you’re saying that any kind of knowledge at all is impossible. And what you see is true, obviously, so anyone who says that judgment is vexed, flawed, and complicated is a dithering weeny. They say that, for a person of clear judgment, the right course of action in all cases is obvious and clear. It’s always black (bad) or white (good, and what they see). Truth tests are simple, they say.

In fact, even the people who insist that the truth is always obvious and it’s all black or white go through their day in shades of grey. Imagine that you’re a simple truth tester. You’re sitting at your computer and you want an ‘e’ to appear on your screen, so you hit the ‘e’ key. And the ‘e’ doesn’t appear. Since you believe in certainty, and you did not get the certain answer you predicted, are you now a hippy-dippy relativist postmodernist (had I worlds enough and time I’d explain why that term is incredibly sloppy and just plain wrong) who is clueless? Are you paralyzed by indecision? Do you now believe that all keys can do whatever they want and there is no right or wrong when it comes to keys?

No, you decide you didn’t really hit the ‘e’ or your key is gummed up or autocorrect did something weird. When you hit the ‘e’ key, you can’t be absolutely and perfectly certain that the ‘e’ will appear, but that’s probably what will happen, and if it doesn’t you aren’t in some swamp of postmodern relativism and lack of judgment.

Your experience typing shows that the binary promoted by a lot of media between absolute certainty and hippy dippy relativism is a sloppy social construct. They want you to believe it, but your experience of typing, or making any other decision, shows it's a false binary. You hit the 'e' key, and you're pretty near certain that an 'e' will appear. But you also know it might not, and you won't collapse into some pile of cold sweat of clueless relativism if it doesn't. You'll clean your keyboard.

It’s the same situation with voting for someone, marrying someone, buying a new car, making dinner, painting a room. You can feel certain in the moment that you’re making the right decision, but any honest person has to admit that there are lots of times we felt totally and absolutely certain and turned out to have been mistaken. Feeling certain and being right aren’t the same thing.

That isn’t to say that the hippy-dippy relativists are right and all views are equally valid and there is no right or wrong—it’s to say that the binary between “the right answer is always obviously clear” and hippy-dippy relativism is wrong. For instance, in terms of the assertion that many people make that the distinction between right and wrong is absolutely obvious: is killing someone else right or wrong? Everyone answers that it depends. So, does that mean we’re all people with no moral compass? No, it means the moral compass is complicated, and takes thought, but it isn’t hopeless.

Our world is not divided into being absolutely certain and being lost in clueless hippy dippy relativism. But, and this is important, that is the black and white world described by a lot of media—if you don't accept their truth, then you're advocating clueless postmodern relativism. What those media say is that what you already believe is absolutely true, and, they say, if it turns out to be false, you never believed it, and they never said it. (The number of pundits who advocated the Iraq invasion and then claimed they were opposed to it all along is stunning. Trump's claiming he never supported the invasion fits perfectly with what Philip Tetlock says about people who believe in their own expertise.)

And that you have been and always will be right is a lovely, comforting, pleasurable message to consume. It is the delicate whipped cream of citizenship—that you, and people like you, are always right and never wrong, and you can just rely on your gut judgment. Of course, the same media that says it's all clear has insisted that something is absolutely true that turned out not to be (Saddam Hussein has weapons of mass destruction, voting for Reagan will lead to the people's revolution, Trump will jail Clinton, Brad Pitt is getting back together with Angelina Jolie, studies show that vaccines cause autism, the world will end in 1987). The paradox is that people continue to consume and believe media that have been wrong over and over, and yet are accepted as trusted authorities because they have sometimes been right, or, more often, because, even if wrong, what they say is comforting and assuring.

But, what happens when media say that Trump has a plan to end ISIS and then it turns out his plan is to tell the Pentagon to come up with a plan? What happens when the study that people cite to say autism is caused by vaccines turns out to be fake? Or, as Leon Festinger famously studied, what happens when a religion says the world will end, and it doesn’t? What happens when something you believe that fits with everything else you believe and is endorsed by authorities you believe turns out to be false? You could decide that maybe things aren’t simple choices between obviously true and obviously false, but that isn’t generally what people do. Instead, we recommit to the media because now we don’t want to look stupid.

Maybe it would be better if we all just decided that complicated issues are complicated, and that’s okay.

There are famous examples that show the simple truth test—you can just trust your perception—is wrong.


If you're looking at paint swatches, and you want a darker color, you can look at two colors and decide which is darker. You might be wrong: there are famous optical illusions showing our tendency to interpret color by context.

Those examples look like special cases, and they (sort of) are: if you know that you have a dark grey car, and there is a grey and dark grey car in the parking lot, you don’t stand in the parking lot paralyzed by not knowing which car is yours because you saw something on the internet that showed your perception of darkness might be wrong. That experiment shows you might be entirely wrong, but you will not go on in your life worrying about it.

But you have been wrong about colors. And we've all tried to get into the wrong car, but in those cases we get instant feedback that we were wrong. With politics it's more complicated, since media that promoted what turns out to have been a disastrous decision can insist they never promoted it (when Y2K turned out not to be a thing, various radio stations that had been fear mongering about it just never mentioned it again), claim it was the right decision, or blame it on someone else. They can continue to insist that their "truth" is always the absolutely obvious decision and that there is a binary between being certain and being clueless. But, in fact, our operative truth test in the normal daily decisions we make is one that involves skepticism and probability. Sensible people don't go through life with a yes/no binary. We operate on the basis of a yes/various degrees of maybe/no continuum.

What’s important about optical illusions is that they show that the notion central to a lot of argutainment—that our truth tests for politics should involve being absolutely certain that our group is right or else you’re in the muck of relativistic postmodernism—isn’t how we get through our days. And that’s important. Any medium, any pundit, any program, that says that decisions are always between us and them is lying to us. We know, from decisions about where to park, what stylist to use, what to make for dinner, how to get home, that it isn’t about us vs. them: it’s about making the best guesses we can. And we’re always wrong eventually, and that’s okay.

We tend to rely on what social psychologists call heuristics—meaning mental short cuts—because you can't thoroughly and completely think through every decision. For instance, if you need a haircut, you can't possibly thoroughly investigate every single option you have. You're likely to have a method for reducing the uncertainty of the decision—you rely on reviews, you go where a friend goes, you just pick the closest place. If a stylist says you have to shave your head or do nothing, you'll walk away.

You might tend to have the same thing for breakfast, or generally take the same route to work, campus, the gym. Your route will not be the best choice some percentage of the time because traffic, accidents, or some random event will make your normal route slower than others from time to time (if you live in Austin, it will be wrong a lot). Even though you know that you can’t be certain you’re taking the best route to your destination, you don’t stand in your apartment doorway paralyzed by indecision. You aren’t clueless about your choices—you have a lot of information about what tends to work, and what conditions (weather, a football game, time of day, local music festivals, roadwork) are likely to introduce variables in your understanding of what is the best route. You are neither certain nor clueless.

And there are dozens of other decisions we make every day that are in that realm of neither clueless nor certain: whether you’ll like this movie, if the next episode of a TV program/date/game version/book in a series/cd by an artist/meal at a restaurant will be as good as the last, whether your boss/teacher will like this paper/presentation as much as the previous, if you’ll enjoy this trip, if this shirt will work out, if this chainsaw will really be that much better, if this mechanic will do a good job on your car, if this landlord will not be a jerk, if this class/job will be a good one.

We all spend all of our time in a world in which we must manage uncertainty and ambiguity, but some people get anxious when presented with ambiguity and uncertainty, and so they talk (and think) as though there is an absolute binary between certain and clueless, and every single decision falls into one or the other.

And here things get complicated. The people who don’t like uncertainty and ambiguity (they are, as social psychologists say, “drawn to closure”) will insist that everything is this or that, black or white even though, in fact, they continually manage shades of grey. They get in the car or walk to the bus feeling certain that they have made the right choice, when their choice is just habit, or the best guess, or somewhere on that range of more or less ambiguous.

So, there is a confusion between certainty as a feeling (you feel certain that you are right) and certainty as a reasonable assessment of the evidence (all of the relevant evidence has been assessed and alternative explanations disproven)—as a statement about the process of decision-making. Most people use it in the former way, but think they’re using it in the latter, as though the feeling of certainty is correlated to the quality of evidence. In fact, how certain people feel is largely a consequence of their personality type (On Being Certain has a great explanation of that, but Tetlock’s Expert Political Judgment is also useful). There’s also good evidence that the people who know the most about a subject tend to express themselves with less certainty than people who are un- or misinformed (the “Dunning-Kruger effect”).

What all that means is that people who get anxious in the face of ambiguity and uncertainty resolve that anxiety by feeling certain, and using a rigid truth test. So, the world isn’t rigidly black or white, but their truth test is. For instance, it might have been ambiguous whether they actually took the best route to work, but they will insist that they did, and that they obviously did. They managed uncertainty and ambiguity by denying it exists. This sort of person will get actively angry if you try to show them the situation is complicated.

They manage the actual uncertainty of situations by, retroactively, saying that the right answer was absolutely clear.[1] That sort of person will say that “truth test” is just simply asking yourself if something is true or not. Let’s call that the simple truth test, and the people who use it simple truth testers.

The simple truth test has two parts: first, does this claim fit with what I already believe? and, second, do authorities I consider reliable promote this claim?

People who rely on this simple truth test say it works because, they believe, the true course of action is always absolutely clear, and, therefore, it should be obvious to them, and it should be obvious to people they consider good. (It shouldn't be surprising that they deny having made mistakes in the past, simply refashioning their own history of decisions—try to find someone who will admit to having supported the Iraq invasion or panicked about Y2K.)

The simple truth test is comfortable. Each new claim is assessed in terms of whether it makes us feel good about things we already believe. Every time we reject or accept a claim on the basis of whether it confirms our previous beliefs, we confirm our sense of ourselves as people who easily and immediately perceive the truth. Thus, this truth test isn't just about whether the new claim is true, but about whether we and people like us are certainly right.

The more certain we feel about a claim, the less likely we are to doublecheck whether we were right, and the more likely we are to find ways to make ourselves have been right. Once we get to work, or the gym, or campus, we don't generally try to figure out whether we really did take the fastest route unless we have reason to believe we might have been mistaken and we're the sort of person willing to consider that we might have been mistaken.

There’s a circle here, in other words: the sort of person who believes that there is a binary between being certain and being clueless, and who is certain about all of her beliefs, is less likely to do the kind of work that would cause her to reconsider her sense of self and her truth tests. Her sense of herself as always right appears to be confirmed because she can’t think of any time she has been wrong. Because she never looked for such a time.

Here I need to make an important clarification: I’m not claiming there is a binary between people who believe you’re either certain or clueless and people who believe that mistakes in perception happen frequently. It’s more of a continuum, but a pretty messy one. We’re all drawn to black or white thinking when we’re stressed, frightened, threatened, or trying to make decisions with inadequate information. Most people have some realms or sets of claims they think are certain (this world is not a dream, evolution is a fact, gravity happens). Some people need to feel certain about everything, and some people don’t need to feel certain much at all, and a lot of people feel certain about many things but not everything.

Someone who believes that her truth tests enable certainty on all or most things will be at one end of the continuum, and someone who managed to live in a constant state of uncertainty would be at the other. Let's call the person at the "it's easy to be certain about almost everything important" end an authoritarian (I'll explain the connection better later).

Authoritarians have trouble with the concept of probabilities. For instance, if the weather report says there will be rain, that’s a yes/no. And it’s proven wrong if the weather report says yes and there is no rain. But if the weather report says there is a 90% chance of rain and it doesn’t rain, the report has not been proven wrong.

Authoritarians believe that saying there is a 90% chance is just a skeezy way to avoid making a decision—that the world really is divided into yes or no, and some people just don’t want to commit. And they consume media that says exactly that.

This is another really important point: many people spend their time consuming media that say that every decision is divided into two categories: the obviously right decision, and the obviously wrong one. And those media say that anyone who says that the right decision might be ambiguous, unclear, or a compromise is promoting relativism or postmodernism. So, as those media say, you're either absolutely clear or you're deep in the muck of clueless relativism. Authoritarians who consume that media are like the example above of the woman who believes that her certainty is always justified because she never checks to see whether she was wrong. They live in a world in which their "us" is always right, has always been right, and will always be right, and the people who disagree are wrong-headed ditherers who pretend that it's complicated because they aren't man enough to just take a damn stand.

(And, before I go on, I should say that, yes, authoritarianism isn’t limited to one political position—there are authoritarians all over the map. But, that isn’t to say that “both sides are just as bad” or authoritarianism is equally distributed. The distribution of authoritarianism is neither a binary nor a constant; it isn’t all on one side, but it isn’t evenly distributed.)

I want to emphasize that the authoritarian view—that you’re certain or clueless—is often connected to a claim that people are either authoritarians or relativists (or postmodernists or hippies) because there are two odd things about that insistence. First, a point I can’t pursue here, authoritarians rarely stick to principles across situations and end up fitting their own definition of relativist/postmodern. (Briefly, what I mean is that authoritarians put their group first, and say their group is always right, so they condemn behavior in them that they praise or justify in us. In other words, whether an act is good or bad is relative to whether it’s done by us or them—that’s moral relativism. So, oddly enough, you end up with moral relativism attacked by people who engage in it.) Second, even authoritarians actually make decisions in a world of uncertainty and ambiguity, and don’t use the same truth test for all situations. When their us turns out to be wrong, then they will claim the situation was ambiguous, there was bad information, everyone makes mistakes, and go on to insist that all decisions are unambiguous.

So, authoritarians say that all decisions are clear, except when they aren’t, and that we are always right, except when we aren’t. But those unclear situations and mistakes should never be taken as reasons to be more skeptical in the future.

2. Back to Hitler

Okay, so how do most people decide whether their leader is like Hitler? (And notice that it is never about whether our leader is like Hitler.) If you believe in the simple two-part truth test, then you ask yourself whether their leader seems to you to be like Hitler, and whether authorities you trust say he is. And you’re done.

But what does it mean to be like Hitler? What was Hitler like?

There is the historical Hitler who was, I think, evil, but didn’t appear so to many people, and who had tremendous support from a lot of authoritarians, and there is the cartoon Hitler. Hitler was evil because he tried to exterminate entire peoples (and he started an unnecessary war, but that’s often left out). The cartoon version assumes that his ultimate goals were obvious to everyone from the beginning—that he came on the scene saying “Let’s try to conquer the entire world and exterminate icky people” and always stuck to that message, so that everyone who supported him knew they were supporting someone who would start a world war and engage in genocide.

But that isn't how Hitler looked to people at the time. Hitler didn't come across as evil, even to his opponents (except to the international socialists), until the Holocaust was well under way. Had he come across as evil he would never have gotten into power. While Mein Kampf and his "beerhall" speeches were clearly eliminationist and warmongering, once he took power his recorded and broadcast speeches never mentioned extermination and were about peace. (According to Letters to Hitler, his supporters were unhappy when he started the war.) Hitler had a lot of support, of various kinds, and his actions between 1933 and 1939 actually won over a lot of people, especially conservatives and various kinds of nationalists, who had been skeptical or even hostile to him before 1933. His supporters ranged from the fans (the true believers), through conservative nationalists who wanted to stop Bolshevism and reinstate what they saw as "traditional" values, and conservative Christians who objected to some of his policies but also liked a lot of them (such as his promotion of traditional roles for women, his opposition to abortion and birth control, his demonizing of homosexuality), to people of various political ideologies who liked that (they thought) he was making Germany respected again, had improved the economy, had ended the bickering and instability they associated with democratic deliberation, and was undoing a lot of the shame associated with the Versailles Treaty.

Until 1939, to his fans, Hitler came across as a truth-teller, willing to say politically incorrect things (that “everyone” knew were true), cut through all the bullshit, and be decisive. He would bring honor back to Germany and make it the military powerhouse it had been in recent memory; he would sideline the feckless and dithering liberals, crush the communists, and deal with the internal terrorism of the large number of immigrants in Germany who were stealing jobs, living off the state, and trying to destroy Germany from within; he would clean out the government of corrupt industrialists and financiers who were benefitting from the too-long deliberations and innumerable regulations. He would be a strong leader who would take action and not just argue and compromise like everyone else. He didn’t begin by imprisoning Jews; he began by making Germany a one-party state, and that involved jailing his political opponents.

Even to many people willing to work with him, Hitler came across as crude, as someone pandering to popular racism and xenophobia, a rabble-rouser who made absurd claims, and who didn't always make sense, whose understanding of the complexities of politics appeared minimal. But conservatives thought he would enable them to put together a coalition that would dominate the Reichstag (the German Congress, essentially) and they could thereby get through their policy agenda. They thought they could handle him. While they granted that he had said some pretty racist and extreme things (especially his hostility to immigrants and non-Christians, although his own record on Christian behavior wasn't exactly great), they thought that was rabble-rousing he didn't mean, a rhetoric he could continue to use to mobilize his base for their purposes, or that he could be their pitbull whom they could keep on a short chain. He instantly imposed a politically conservative social agenda that made a lot of conservative Christians very happy—he was relentless in his support for the notion that men earn money and women work in the home, that homosexuality and abortion are evil [2], and that sexual immorality weakens the state, and his rhetoric was always framed in "Christian terms" (as Kenneth Burke famously argued—his rhetoric was a bastardization of Christian rhetoric, but it still relied on Christian tropes).

Conservative Christians (Christians in general, to be blunt) had a complicated reaction to him. Most Christian churches of the era were anti-Semitic, and that took various forms. There were the extreme forms—the passion plays that showed Jews as Christ-killers, who killed Christians for their blood at Passover, even religious festivals about how Jews stabbed consecrated hosts (some of which only ended in the 1960s).

There were also the "I'm not racist but" versions of Christian anti-Semitism promoted by Catholic and Protestant organizations (all of this is elegantly described in Antisemitism, Christian Ambivalence, and the Holocaust). Mainstream Catholic and Lutheran thought promoted the notion that Jews were, at best, failed Christians, and that the only reason not to exterminate them was so that they could be converted. There was, in that world, no explicit repudiation of the sometimes pornographic fantasies of greedy Jews involved in worldwide conspiracies, stabbing the host, drinking the blood of Christian boys at Passover, and plotting the downfall of Germany. And there was certainly no sense that Christians should tolerate Jews in the sense of treating them as we would want to be treated; it simply meant that they shouldn't be killed. As Ian Kershaw has shown, a lot of German Christians didn't bother themselves about oppression (even killing) of Jews, as long as it happened out of their ken; they weren't in favor of killing Jews, but, as long as they could ignore that it was happening, they weren't going to do much to protest (Hitler, The Germans, and the Final Solution).

Many of his skeptics (even international ones) were won over by his rhetoric. His broadcast speeches emphasized his desire for peace and prosperity; they liked that he talked tough about Germany's relations to other countries (but didn't think he'd lead them into war), they loved that he spent so much of his own money doing good things for the country (in fact, he got far more money out of Germany than he put into it, and he didn't pay taxes—for more on this, see Hitler at Home), and they loved that he had the common touch, and didn't seem to be some inaccessible snob or aristocrat, but a person who really understood them (Letters to Hitler is fascinating for showing the kinds of support he had). They believed that he would take a strong stance, be decisive, look out for regular people, clear the government of corrupt relationships with financiers, silence the kind of people who were trying to drag the nation down, and cleanse the nation of that religious/racial group that was essentially ideologically committed to destroying Germany.

There were a lot of people who thought Hitler could be controlled and used by conservative forces (von Papen) or was a joke. In middle school, I had a teacher who had been in the Berlin intelligentsia before and during the war, and when asked why people like her didn't do more about Hitler, she said, "We thought he was a fool." Many of his opponents thought he would never get elected, never be given a position of power.

But still, some students say, you can see in his early rhetoric that there was a logic of extermination. And, yes, I think that's true, but, and this is important, what makes you think you would see it? Smart people at the time didn't see it, especially since, once he got a certain level of attention, he only engaged in dog whistle racism. Look, for instance, at Triumph of the Will—the brilliant film of the 1934 Nazi rally in Nuremberg—in which anti-Semitism appears absent. The award-winning movie convinced many that Hitler wasn't really as anti-Semitic as Mein Kampf might have suggested. But, by 1934, true believers had learned their whistles—everything about bathing, cleansing, purity, and health was a long blow on the dog whistle of "Jews are a disease on the body politic." Hitler's first speech on the dissolution of the Reichstag (March 1933) never uses the word Jew, and looked reasonable (he couldn't control himself, however, and went back to his non-dog whistle demagoguery in what amounted to the question and answer period—Kershaw's Hubris describes the whole event).

We focus on Hitler’s policy of extermination, but we don’t always focus enough on his foreign policy, especially between 1933 and 1939. Just as we think of Hitler as a raging antisemite (because of his actions), so we think of him as a warmonger, and he was both at heart and eventually, but he managed not to look that way for years. That’s really, really important to remember. He took power in 1933, and didn’t show his warmongering card till 1939. He didn’t show his exterminationist card till even later.

Hitler’s foreign policy was initially tremendously popular because he insisted that Germany was being ill-treated by other nations, was carrying a disproportionate burden, and was entitled to things it was being denied. Hitler said that Germany needed to be strong, more nationalist, more dominating, more manly in its relations with other nations. Germany didn’t want war, but it would, he said, insist upon respect.

Prior to being handed power, Hitler talked like an irresponsible war-monger and raging antisemite (especially in Mein Kampf), but his speeches right up until the invasion of Poland were about peace, stability, and domestic issues about helping the common working man. Even in 1933-4, the Nazi Party could release a pamphlet with his speeches and the title Germany Desires Work and Peace.

What that means is that from 1933 to 1939 Hitler managed a neat rhetorical trick, and he did it by dog whistles: he persuaded his extremist supporters that he was still the warmongering raging antisemite they had loved in the beerhalls and for whom Streicher was a reliable spokesman, and he persuaded the people frightened by his extremism that he wasn’t that guy, he would enable them to get through their policy agenda. (His March 1933 speech is a perfect example of this nasty strategy, and some day I intend to write a long close analysis of it.)

And even many of the conservatives who were initially deeply opposed to him came around because he really did seem to be effective at getting real results. He got those results by mortgaging the German economy, and setting up both a foreign policy and economic policy that couldn’t possibly be maintained without massive conquest; it had short-term benefits, but was not sustainable.

Hitler benefitted from the culture of demagoguery of Weimar Germany. After Germany lost WWI, the monarchy was ended, and a democracy was imposed. Imposing democracy is always vexed, and it doesn't always work because democracy depends on certain cultural values (a different post). One of those values is seeing pluralism—that is, diversity of perspective, experience, and identity—as a good thing. If you value pluralism, then you'll tend to value compromise. If you believe that a strong community has people with different legitimate interests, points of view, and beliefs, then you will see compromise as a success. If, however, you're an authoritarian, and you believe that you and only you have the obvious truth and everyone else is either a knave or a fool, then you will see refusing to compromise as a virtue.

And then democracy stalls. It doesn’t stall because it’s a flawed system; it stalls when people reject the basic premises of democracy, when, despite how they make decisions about how to get to work in the morning, or whether to take an umbrella, they insist that all decisions are binaries between what is obviously right (us) and what is obviously wrong (them).

And, in the era after WWI, Germany was a country with a democratic constitution but a rabidly factionalized set of informational caves. People could (and did) spend all their time getting information from media that said that all political questions are questions of good (us) and evil (them). Those media promoted conspiracy theories—the Protocols of the Elders of Zion, for instance—insisted on the factuality of non-events, framed all issues as apocalyptic, and demonized compromise and deliberating. They said it’s a binary. The International Socialists said the same thing, that anything other than a workers’ revolution now was fascism, that the collapse of democracy was great because it would enable the revolution. Monarchists wanted the collapse of the democracy because they hoped to get a monarchy back, and a non-trivial number of industrialists wanted democracy to collapse because they were afraid people would vote for a social safety net that would raise their taxes.

It was a culture of demagoguery.

But, in the moment, large numbers of people didn't see it that way because, if you were in a factional cave, and you used the two-part test, everything you heard in your cave would seem to be true. Everything you heard about Hitler would fit with what you already believed, and it was being repeated by people you trusted.

Maybe what you heard confirmed that he would save Germany, that he was a no-bullshit decisive leader who really cared about people like you and was going to get shit done, or maybe what you heard was that he was a tool of the capitalists and liberals and that you should refuse to compromise with them to keep him out of power. Whether what you heard was that Hitler was awesome or that he was completely wrong, what you heard was that he was obviously one or the other, and that anyone who disagreed with you was evil. What you heard was that the disagreement itself was proof that evil was present. And you heard that democracy was a failure.

And that helped Hitler, even the attacks on him. As long as everyone agreed that the truth is obvious, that disagreement is a sign of weakness, that compromise is evil, then an authoritarian like Hitler would come along and win.

There were a lot of people who more or less supported the aims he said he had—getting Germany to have a more prosperous economy, fighting Bolshevism, supporting the German church, avoiding war, renegotiating the Versailles Treaty, purifying Germany of anti-German elements, making German politics more efficient and stable—but who thought Hitler was a loose cannon and a demagogue. Many of those were conservatives and centrists.

And, once Hitler was in power they watched him carefully. And, really, all his public speeches, especially any ones that might get international coverage, weren’t that bad. They weren’t as bad as his earlier rhetoric. There wasn’t as much explicit anti-Semitism, for instance, and, unlike in Mein Kampf, he didn’t advocate aggressive war. He said, over and over, he wanted peace. He immediately took over the press, but, still and all, every reader of his propaganda could believe that Hitler was a tremendously effective leader, and, really, by any standard he was: he effected change.

There wasn't, however, much deliberation as to whether the changes he effected were good. He took a more aggressive stance toward other countries (a welcome change from the loser stance adopted from the end of WWI, which, technically, Germany did lose), he openly violated the deliberately shaming aspects of the Versailles Treaty, he appeared to reject the new terms of the capitalism of the era (he met with major industrial leaders and claimed to have reached agreements that would help workers), he reduced disagreement, he imprisoned people who seemed to many people to be dangerous, he enacted laws that promoted the cultural "us" and disenfranchised "them." And he said all the right things. At the end of his first year, Germany published a pamphlet of his speeches, with the title "The New Germany Desires Work and Peace." So, by the simple two-part truth test (do the claims support what you already believe? do authorities you trust confirm these claims?) Hitler's rhetoric would look good to a normal person in the 30s. Granted, his rhetoric was always authoritarian—disagreement is bad, pluralism is bad, the right course of action is always obvious to a person of good judgment, you should just trust Hitler—but it would have looked pretty good through the 30s. A person using that third test—can I find evidence to support these claims—would have felt that Hitler was pretty good.

3. So, would you recognize Hitler if you liked what he was saying?

What I’m trying to say is that asking the question of “Is their political leader just like Hitler” is just about as wrong as it can get as long as you’re relying on simple truth tests.

If you get all your information from sources you trust, and you trust them because what they say fits in with your other beliefs, then you’re living in a world of propaganda.

If you think that you could tell if you were following a Hitler because you’d know he was evil, and you are in an informational cave that says all the issues are simple, good and evil are binaries and easy to tell one from another, there is either certainty or dithering, disagreement and deliberation are what weak people do, compromise is weakening the good, and the truth in any situation is obvious, then, congratulations, you’d support Hitler! Would you support the guy who turned out to start a disastrous war, bankrupt his nation, commit genocide? Maybe—it would just be random chance. Maybe you would have supported Stalin instead. But you would definitely have supported one or the other.

Democracy isn’t about what you believe; it’s about how you believe. Democracy thrives when people believe that they might be wrong, that the world is complicated, that the best policies are compromises, that disagreement can be passionate, nasty, vehement, and compassionate–that the best deliberation comes when people learn to perspective shift. Democracy requires that we lose gracefully, and it requires, above all else, that we don’t assess policies purely on whether they benefit people like us, but that we think about fairness across groups. It requires that we do unto others as we would have them do unto us, that we pass no policy that we would consider unfair if we were in all the possible subject positions of the policy. Democracy requires imagining that we are wrong.

[1] That sort of person often subscribes to the "just world model" or "just world hypothesis," which is the assumption that we are all rewarded in this world for our efforts. If something bad happens to you, you deserved it. People who claim that is Scriptural will cherry-pick quotes from Proverbs, ignoring what Jesus said about rewards in this world, as well as various other important parts of Scripture (Ecclesiastes, Job, Paul).

[2] There is a meme circulating that Hitler was pro-abortion. His public stance was opposition to abortion at least through the thirties. Once the genocides were in full swing, Nazism supported abortion for “lesser races.”

Terrorist Peanuts and Immigration

When I teach about the Holocaust, one of the first questions students ask is: why didn’t the Jews leave? The answer is complicated, but one part isn’t: where would they go? Countries like the US had such restrictive immigration quotas for the parts of Europe from which the Jews were likely to come that we infamously turned back ships. And, so, students ask, why did we do that?

We did it because of that era’s version of the peanut argument.

The peanut argument (more recently presented with a candy brand name attached to it, but among neo-Nazis the analogy used is a bowl of peanuts) has been shared by many, including by members of our administration, as a mic-drop strong defense of a travel ban on people from regions and of religions considered dangerous because, as the analogy goes, would you eat from a bowl of peanuts if you knew that one was poisoned?

People who make that argument insist that they are not being racist, because their objection is, they say, not based in an irrational stereotype about this group. They say it is a rational reaction to what members of this group have really done. And, they say, for the same reason, that they are not being hypocritical: as descendants of immigrants, they are open to safe immigrant groups. These immigrants, unlike their forebears, have dangerous elements.

What they don’t know is that every ethnicity and religion that has come to America has had members that struck large numbers of existing citizens as dangerous—the peanut argument has always been around. And it’s exactly the argument that was used for sending Jews back to death. The tragedies of the US immigration policy during Nazi extermination were the consequence of the 1924 Immigration Act, a bill that set race-based immigration quotas grounded in arguments that this set of immigrants (at that point, Italians and eastern and central Europeans) was too fundamentally and dangerously antagonistic to American traditions and institutions to admit. Architects of that act (and defenders of maintaining the quotas, in the face of people escaping genocide) insisted that they weren’t opposed to immigration, just this set of immigrants.

At least since Letters from an American Farmer (first published in 1782), Americans have taken pride in being a nation of immigrants. And, since around the same time, large numbers of Americans who took pride in being descended from immigrants have stoked fear about this set of immigrants.

Arguments about whether Catholics were a threat to democracy raged throughout the nineteenth century, for instance. Samuel Morse (of the Morse code) wrote a tremendously popular book arguing that German and Irish Catholics were conspiring to overthrow American democracy, which appealed to popular notions about Catholics' religion being essentially incompatible with democracy. Hostility towards the Japanese and Chinese (grounded in stereotypes that their political and religious beliefs necessarily made them dangerous citizens) resulted in laws prohibiting their naturalization, their owning property, their repatriation, and, ultimately, their immigration (and, in the case of the Japanese, it led to race-based imprisonment). After the revolutions of 1848, and especially with the rise of violent political movements in the late nineteenth century (anarchism, Sinn Fein, various anti-colonial and independence movements), large numbers of politicians began to focus on the possibility that allowing this group would mean that we were allowing violent terrorists bent on overthrowing our government.

And that’s exactly what it did mean. Every one of those groups did have individuals who advocated violent change.

A large number of the defendants in the Haymarket Trial (concerning a fatal bomb-throwing incident at a rally of anarchists) were immigrants or children of immigrants; by the early 20th century, people arguing that this group had dangerous individuals could (and did) cite examples like Emma Goldman (a Jewish anarchist imprisoned for inciting to riot), Nicola Sacco and Bartolomeo Vanzetti (Italian anarchists executed for a murder committed during a robbery), Jacob Abrams and Charles Schenck (Jews convicted of sedition), and Leon Czolgosz (the son of Polish immigrants, who shot McKinley). Even an expert like Harry Laughlin, of the Eugenics Record Office, would testify that the more recent set of immigrants were genetically dangerous (they weren't—his math was bad).

History has shown that the fearmongers were wrong. While those groups did all have advocates of violence, and individuals who advocated or committed terrorism, the peanut analogy was fallacious, unjust, and unwise. Those groups also contributed to America, and they were not inherently or essentially un-American.

Looking back, we should have let the people on those ships disembark. Looking forward, we should do the same.

[image: By Internet Archive Book Images – https://www.flickr.com/photos/internetarchivebookimages/14782377875/Source book page: https://archive.org/stream/christianheralds09unse/christianheralds09unse#page/n328/mode/1up, No restrictions, https://commons.wikimedia.org/w/index.php?curid=42730228]

Demagoguery and Democracy

John Muir and environmental demagoguery

One of the most controversial claims I make about demagoguery is that it isn't necessarily harmful. When I make that argument, it's common for someone to disagree with me by pointing out that some specific instance of demagoguery is harmful. But that isn't refuting my argument, because I'm not arguing for a binary of demagoguery being always or never harmful. I'm saying that not every instance of demagoguery is necessarily harmful. Whether demagoguery is harmful depends, I think, on where it lies on multiple axes: how demagogic the text is; how powerful the media promoting the demagoguery are; how widespread that kind of demagoguery is.

(Yeah, yeah, I know, that means a 3d map, but I honestly think you need all three axes.)

And the best way to talk about the harmless demagoguery is to talk more about one of the first examples of a failed deliberative process that haunted me. One spring, when I was a child, my family went to Yosemite Valley in Yosemite National Park. My family mostly tried (and failed) to teach one another bridge, and I wandered around the emerald valley. Having grown up in semi-arid southern California, the forested walks seemed to me magical, and I was enchanted. One evening, my mother took me to a campfire, hosted by a ranger, who told the story of John Muir, a California environmentalist crucial in the preservation of Yosemite National Park. The last part of the ranger’s talk was about Muir’s final political endeavor, his unsuccessful attempt to prevent the damming and flooding of the Hetch Hetchy Valley, a valley the ranger said was as beautiful as the one by which I had been entranced. The ranger presented the story as a dramatic tragedy of Good (John Muir) versus Evil (the people who wanted to dam and flood the valley), with Evil winning and Muir dying of a broken heart. I was deeply moved, and fascinated. And years later, I would come back to the story when trying to think about whether and how people can argue together on issues with profound disagreement.

The ranger had told the story of Good versus Evil, but that isn't quite right, in several ways. For one thing, it wasn't a debate with only two sides (something I have since discovered to be true of most political issues). In this case, it is more accurate to say that there were three sides: the corrupt water company currently supplying San Francisco, which wanted to prevent San Francisco from getting any publicly-owned water supply; the progressive preservationists like John Muir, who wanted San Francisco to get an outside publicly-owned water supply, but not the Hetch Hetchy; and the progressive conservationists like Gifford Pinchot or Marsden Manson, who wanted an outside publicly-owned water supply that included the Hetch Hetchy.

And a little background on each of the major figures in this issue. Gifford Pinchot was head of the Forest Service, with close political ties to Theodore Roosevelt. Born in 1865, he was a strong advocate of conservation—that is, keeping large parts of land in public ownership, sustainable foresting practices, and what is called “multiple use.” The principle of conservation (as opposed to preservation) is that public lands should be available to as many different uses as possible, such as foresting, hunting, camping, and fishing. The consensus among scholars is that Pinchot’s support for the Hetch Hetchy dam was crucial to its success.

Marsden Manson was far less famous than Pinchot. Born in 1850, he was an engineer (trained at Berkeley) and a member of the Sierra Club who had camped in Yosemite. From 1897 until 1912, he worked as an engineer for the City of San Francisco, first serving on the San Francisco Drainage Committee, then in the Public Works Department, and finally as City Engineer. It was in that capacity that he wrote the pamphlet I’ll talk about in a bit. He was an avid conservationist.

John Muir is probably the most famous of the people heavily involved in the controversy, and he remains a hero among environmentalists. Born in Scotland in 1838, he emigrated with his family to the United States, settling in Wisconsin, when he was around ten. He arrived in California in 1868, and promptly went to Yosemite Valley (which was not yet a national park). He stayed there for several years, writing about the Sierras in what would become articles in popular magazines. His elegant descriptions of the beauties of the Sierra Nevada mountains were influential in persuading people to preserve the area, creating Yosemite National Park. He was the first President of the Sierra Club (formed in the early 1890s), which is still a powerful force in environmentalism. Muir was a preservationist, believing that some public lands should be preserved in as close to a wilderness state as possible.

Perhaps the most important character in the controversy is the Hetch Hetchy Valley. Part of the Yosemite National Park, it was less accessible than Yosemite Valley, and hence far less famous. Like many other valleys in the Sierra Nevada mountains, it was formed by glaciers. Two of its waterfalls are among the tallest waterfalls in North America.

The story the ranger told was one of right and wrong, good and evil, and, even though I disagree with the stance Pinchot and Manson took, and believe that the Hetch Hetchy Valley should not have been dammed (and I believe they used some pretty sleazy rhetorical and political tactics to make it happen), I don’t think they were bad people. I don’t think they were selfish or greedy, or even that they didn’t appreciate nature. I think they believed that what they were doing was right, and they had some good arguments and good reasons, and they felt justified in some troubling rhetorical means because they believed their ends were good. I don’t think they were Evil.

After all, San Francisco had long been victimized by a corrupt water company, the Spring Valley Company, with a demonstrated record of exploiting users (particularly during the aftermath of the 1906 earthquake). San Francisco had a legitimate need for a new water supply, and the argument that such public goods should not be subject to the profit motive is a sensible argument. The proponents of the dam argued that turning the valley into a reservoir would increase the public’s access to it, and the ability of the public to benefit. The dam, it was promised, would provide electric power that would be a public utility (that is, not privately owned), thereby benefiting the public directly. Thus, both the preservationists and conservationists were concerned about public good, but they proposed different ways of benefitting the public.

Although John Muir was President and one of the founders of the Sierra Club, not everyone in the organization was certain the dam was a mistake, and so the issue was put to a vote—the Sierra Club at that point had both conservationists and preservationists. Muir wrote the case against, a pamphlet called “The Hetch Hetchy Valley,” which, along with Manson’s argument, “Statements of San Francisco’s Side of the Hetch Hetchy Reservoir Matter,” was distributed to members of the Sierra Club, and they were asked to vote.

For his pamphlet, Muir reused much of an 1873 article about Hetch Hetchy, originally written to persuade people to visit the Sierras. He kept much (but not all) of his highly poetical description of the Hetch Hetchy Valley, especially its two falls. His argument throughout the pamphlet is that the valley is beautiful, unique and sacred; it wasn’t until the end of the pamphlet that he added a section specifically written for the dam controversy, and in that part he resorted to demagoguery, painting his opponents as motivated by greed and an active desire to destroy beauty, in the same category as the Merchants in the Temple of Jerusalem and Satan in the Garden of Eden: “despoiling gainseekers, — mischief-makers of every degree from Satan to supervisors, lumbermen, cattlemen, farmers, etc., eagerly trying to make everything dollarable […] Thus long ago a lot of enterprising merchants made part of the Jerusalem temple into a place of business instead of a place of prayer, changing money, buying and selling cattle and sheep and doves. And earlier still, the Lord’s garden in Eden, and the first forest reservation, including only one tree, was spoiled.” Muir presented the conflict as “part of the universal battle between right and wrong,” and characterized his opponents’ arguments as “curiously like those of the devil devised for the destruction of the first garden — so much of the very best Eden fruit going to waste; so much of the best Tuolumne water.” Muir called his opponents “Temple destroyers, devotees of ravaging commercialism,” saying they “seem to have a perfect contempt for Nature, and, instead of lifting their eyes to the mountains, lift them to dams and town skyscrapers.” And he ended the pamphlet with the rousing peroration:

Dam Hetch-Hetchy! As well dam for water-tanks the people’s cathedrals and churches, for no holier temple has ever been consecrated by the heart of man. (John Muir, Sierra Club Bulletin, Vol. VI, No. 4, January 1908)

Muir’s argument is demagoguery—he takes a complicated situation (with at least three different positions) and divides it into a binary of good versus evil people. The bad people don’t have arguments; they have bad motives.

But this, too, is a controversial claim on my part, and it actually makes some people really angry with me when I “criticize” Muir. The common response is that I shouldn’t criticize him because he was a good man and he was fighting for a good cause. In other words, the world is divided into good and bad people, and we shouldn’t criticize good people on our side. And I reject every part of that argument. I think we should criticize people on our side, especially if we agree with their ends (and especially if we’re looking critically at an argument in the past), because that’s how we learn to make better arguments. And I’m not even criticizing Muir in the sense those people mean—they mean I’m saying negative things about him, and that I believe he should have done things differently. The assumption is that demagoguery is bad, so by saying he engaged in demagoguery I’m saying he was a bad person.

Like Muir’s argument, that presumes a binary (or even continuum) between good and bad people. Whether there really is such a binary I don’t know, but I’m certain that it isn’t relevant. The debate wasn’t split into good and bad people, and we don’t have to make our heroes untouchable.

And, besides, I’m not criticizing Muir in the sense of saying he did the wrong thing. I’m not sure he did. His demagoguery did no particular harm. While his text (especially the last part) is demagoguery, and he was a powerful rhetor at the time, the kind of demagoguery in which he was engaged (against conservationists) wasn’t very widespread, so he wasn’t contributing to a broad cultural demonizing of some group. And I’m not even sure that his demagoguery did any harm (or benefit) to the effectiveness of his argument.

Muir was trying to get the majority of people in the Sierra Club—perhaps even all of them—to condemn the Hetch Hetchy scheme on preservationist grounds, so he already had the votes of preservationists like himself. What he had to do rhetorically was move conservationists (or, at least, people drawn to that position) over to the preservationist side, at least in regard to the Hetch Hetchy Valley.

A useful step in an argument is identifying what, exactly, the issue is (or the issues are): why are we disagreeing? Called the “stasis” in classical rhetorical theory, the “hinge” of an argument points to the paradox that a productive disagreement requires agreement on several points—including on the geography of the argument: what is at the center, how broad an area can/should the argument cover, what areas are out of bounds? The stasis is the main issue in the argument, and arguments often go wrong because people disagree about what it is. In the case of the Hetch Hetchy, an ideal argument about the topic would be about whether damming and flooding that valley was the best long-term option for everyone who uses the valley—such a debate would require that people talk honestly and accurately about the actual costs, the various options, and as usefully as possible about the benefits (of all sorts) to be had from preserving the valley for camping (this is a big issue in California, where camping is very popular).

It’s conventional in rhetoric to say that you have to argue from your opposition’s premises to persuade your opposition, and that would have necessitated Muir arguing on the premises that informed conservation.

Muir’s rhetorical options included:

    1. condemning conservationism in the abstract, and trying to persuade his conservationist audience to abandon an important value;
    2. arguing that conservationism is not a useful value in this particular case, and that this is a time when preservationism is a better route;
    3. arguing that damming and flooding the valley does not really enact conservationist values (e.g., it’s actually expensive).

But, to do any of those strategies effectively, he’d have to make the case on the conservationist premise that it’s appropriate to think about natural resources in terms of costs and benefits. And Muir’s stance about nature—his whole career—was grounded in the perception that such a way of looking at nature is unethical.

Muir paraphrases (in quotes) the conservationist mantra: “Utilization of beneficent natural resources, that man and beast may be fed and the dear Nation grow great.” While I’ve never found any conservationist text that has that precise wording, it’s a fair representation of the basic principle of conservation; i.e., “greatest good for the greatest number.” And, certainly, conservationists did (and do) believe that there is no point in preserving any wilderness areas—all forests should be harvested, all lakes should be used, all areas should be open to hunting. But they didn’t hold that view out of a desire for financial gain so much as from a different (and I would say wrong-headed) perception of how to define “the public.”

The conservationist argument in this case was made pretty much in bad faith, in that its proponents claimed that they would improve the beauty of the valley by making it a lake. Muir argued they would destroy it. I agree with Muir, as it happens, and so my argument is not that Muir is factually wrong; the valley was destroyed by the damming. I also think some of the dam proponents, specifically Manson, knew that it would be destroyed, and that Manson was lying when he described a road, increased camping, and other features that, as an engineer, he must have known were impossible. But many of the people drawn to the conservationist plan didn’t know that Manson was describing technologically impossible conditions, and they believed the proponents’ argument that the resulting reservoir would not only benefit San Franciscans (by providing safe cheap water and electric power) but would also have no impact on camping; it would, the conservationists claimed, increase the accessibility of the area without interfering with the beauty of the valley at all. Again, that isn’t true, but it’s what people believed. And part of Aristotle’s point about rhetoric, and its reliance on the enthymeme, is that rhetoric begins with what people believe.

Manson’s response was fairly straightforward, and grounded, he insisted repeatedly, on facts. He argued:

    • San Francisco owned the valley floor.
    • Construction would not begin on the Hetch Hetchy dam until and unless San Francisco first developed Lake Eleanor (a water source not disputed by the preservationists) and then found that water source inadequate.
    • A photo he presented showed what the valley would look like when dammed and flooded—very little of the valley flooded, with no obstruction of the falls that Muir praised so heavily, a road around the edge enabling visitors to see more of the valley—so, he said, the valley would be more beautiful, reflecting the magnificent granite walls.
    • Keeping the reservoir water pure will not inhibit camping in any way.
    • The Hetch Hetchy plan is the least expensive option, and it will provide energy, thereby breaking the current energy monopoly.

Muir’s arguments, he says, “are not in reality based upon true and correct facts” (435).

Marsden Manson was City Engineer for San Francisco, and had done thorough reports on the issue. And so he had to know that almost all of what he was saying was “not in reality based upon true and correct facts.” San Francisco had bought the land, but, since it was within a national park, the seller had no right to sell it. Construction would begin immediately on the dam, flooding the entire valley, making the entire valley inaccessible, including the famous falls. It was not possible to build the roads that Manson drew on the photo and, being an engineer, he must have known that. The reservoir inhibited camping, and, most important, the Hetch Hetchy plan was the most expensive option available to San Francisco. Manson had muddled the numbers to make it appear less expensive.

In other words, either Manson lied, or he was muddled, uninformed, bad at arithmetic, and not a very good engineer.

Manson’s motives in all this are complicated, and ultimately irrelevant. He may have expected to benefit personally by the approval of the dam project, as he may have thought he would build it. But it would have been a benefit of glory but not money; I’ve never read anything to suggest that he was motivated by anything other than a sense that dominating nature is glorious, and that public projects providing water and power are better than preserving valleys. (He is reputed to have suggested damming and flooding Yosemite Valley.)

In other words, what presented itself as the pragmatic option was just as ideologically driven as what was rejected as the emotional one (I think the same thing happens now with arguments about the death penalty, welfare “reform,” the war on drugs, foreign policy, the deficit—there is a side that manages to be taken as more practical, but it might actually be the most ideologically driven).

Muir’s rhetorical options were limited by his opponent, an engineer, making claims about engineering issues that neither Muir nor his supporters had the expertise to refute. It took years for someone to look at the San Francisco reports and determine that the numbers were bad; preservationists didn’t know (and, presumably, many supporters of the dam didn’t know) that the numbers were misleading, and it was the most expensive option.

But would Muir have argued on such grounds anyway? To argue on the grounds of cost would have confirmed the Major Premise that public projects should be determined by cost—to say that the Hetch Hetchy should not be built because it is the most expensive would seem to confirm the perception that you can make natural cathedrals “dollarable” in Muir’s words. In other words, Muir rejected the very terms by which the conservationist argument was made—he rejected the premises. To argue on premises (except in rare circumstances) seems to confirm them, and so he would, in order to win the Hetch Hetchy argument, have argued against what he had spent a lifetime arguing for: that we should not look at nature in terms of money. Wilderness areas are, he insisted, sacred. And so he railed against his opposition.

As I mentioned above, I’m often attacked by people who think I’m attacking Muir. And I think that misunderstanding arises because of a particular perception of what the discipline of rhetoric is for: rhetorical analysis is often seen as implicitly normative; we do an analysis to say what a person should do or should have done. So, to say that Muir’s rhetorical strategies didn’t work is to say his rhetoric was bad, and it should have been different. Coupled with the notion that good people promote good things, if I say that Muir’s rhetoric was “demagoguery,” then I am saying he cannot have been a good person. There is, here, a theory of identity: that people are either good or bad; that good people say good things, and that bad people say bad things; that demagoguery is something only bad people do. That whole model of discourse and identity is wrong in too many ways to count, and I am not endorsing it.

I think Muir was a good man–he is a personal hero of mine—but that doesn’t mean he was perfect, and it certainly doesn’t mean we can’t learn from him. Muir did well within the Sierra Club (the Sierra Club vote was about 80% on Muir’s side and 20% in favor of the dam), but ultimately lost the argument. And I think what we learn from his failure to persuade all conservationists to vote against the Hetch Hetchy project is not about Muir’s personal qualities or failings, but about rhetorical constraints and models of persuasion.

I’m arguing that, for Muir to have persuaded his opposition, he would have had to rely on premises that he rejected. This is sometimes called the “sincerity problem” in rhetoric: to what extent, and under what circumstances, should we make arguments we don’t believe in order to achieve an end in which we do believe? Muir didn’t argue from insincere premises; that may have weakened his effectiveness in the moment, but it definitely strengthened his effectiveness in the long run. His Hetch Hetchy pamphlet continues to be powerfully motivating for people, perhaps more motivating than it would have been had he compromised his rhetoric in order to be effective in the short term. Muir’s demagoguery did no harm, and it may have even done some good. Demagoguery isn’t necessarily harmful.


[image source: https://en.wikipedia.org/wiki/Hetch_Hetchy#/media/File:Hetch_Hetchy_Valley.jpg]

On career choices as mingling in Burke’s parlor

On Wednesday, I sent off the scholarly version of the demagoguery argument. It isn’t the book I once planned (that would involve a deeply theoretical argument about identity and the digital world), but it’s the one I really wanted to write, that would (I think) reach more people than that other one.

And it’s the last scholarly book I’ll write. I intend to spend the rest of my career trying to solve the interesting intellectual problem of making scholarly concepts and debates more accessible to non-academics. But that isn’t because I reject highly specialized academic writing as, in any way, a bad thing.

I have no problem with highly theoretical and very specialized books. My books have barely grazed the 1000 sales point, and that’s pretty good for a scholarly book. People have told me that something I’ve written has had an impact on their scholarship, pedagogy, program administration, so I’m really happy with my record as a scholar.

And I’m happy with the record of people who have sold both more and less because measuring impact is so very difficult. Publishing a book with an academic press is an extraordinary achievement, and measuring the impact of such books accurately is nigh impossible—a really powerful book is shared in pirated pdfs, checked out of libraries, passed from one person to another. Sales and impact are orthogonal in academia.

If you study the history of ideas even a little, you have to know that what seemed major in the moment was sometimes just a trend (like mullets) and sometimes a sea change (like the synthesizer). No one reads Northrop Frye anymore, but he was a big deal at one moment, and yet Hannah Arendt, who was also a big deal around the same time, is still in the conversation. And there are all those people who weren’t big deals in their era, but later came to have tremendous impact, such as Mikhail Bakhtin.

Some trade books on scholarly issues have had extraordinary sales (such as Mortimer Adler’s writings), but it’s hard to know what impact they had. Madison Grant’s racist book Passing of the Great Race had poor sales, but appears to have had a lot of impact. And there are lots of trade books that have come and gone without leaving any impact, so there’s no good reason to conclude that trade books necessarily have more impact than scholarly ones. I don’t think there are a lot of (any?) necessary conclusions that one can draw about whether trade or scholarly books have more impact, are more or less important, more or less valuable intellectual activity.

I have always loved Kenneth Burke’s analogy of the parlor for what it means to be interested in major questions. You show up at a party, he says, and it’s been going on for a while, and you find some conversation that seems interesting. You listen for a while, and then you take a side or point out something new. You get attacked and defended, and some people leave the conversation, and others join, and eventually you too leave. And it goes on, with other people taking sides that may or may not have to do with what you were arguing.

What Burke fails to mention is that, if it’s a good party, there are a lot of conversations going on. You might choose to leave that particular conversation, but not leave the party.

I have loved writing scholarly pieces (although I didn’t initially think I would), and my work has placed me in some particular conversations. I’ve moved from one conversation to another, but all on the side of the parlor engaged in very scholarly arguments. I’d like to leave that side of the parlor, not because it’s a bad one—it’s a wonderful one—but because it’s a party with a lot of conversations going on. I’d like to mingle.

I think a lot of discussions of the public intellectual rest on odd binary assumptions—that either specialized or public scholarship is good, for instance. Scholarship that speaks with authority to a small group is neither better nor worse than scholarship that reaches a broad audience—it’s just a different conversation in Burke’s parlor. And I’m going to wander over there for a bit.

 

The easy demagoguery of explaining their violence

When James Hodgkinson engaged in both eliminationist and terroristic violence against Republicans, factionalized media outlets blamed his radicalization on their outgroup (“liberals”). In 2008, when James Adkisson committed eliminationist and terroristic violence against liberals, actually citing in his manifesto things said by “conservative” talk show hosts (namechecking some of the ones who blamed liberals for Hodgkinson), those media outlets and pundits neither acknowledged responsibility nor altered their rhetoric.[1]

That’s fairly typical of rabidly factional media: if the violence is on the part of someone who can be characterized as them (the outgroup), then outgroup rhetoric obviously and necessarily led to that violence. That individual can be taken as typical of them. If, however, the assailant was ingroup, then factionalized media either simply claimed that the person was outgroup (as when various media tried to claim that a neo-Nazi was a socialist and therefore lefty), or they insisted this person be treated as an exception.

That’s how ingroup/outgroup thinking works. The example I always use with my classes is what happens if you get cut off by a car with bumper stickers on a particularly nasty highway in Austin (you can’t drive it without getting cut off by someone). If the bumper stickers show ingroup membership, you might think to yourself that the driver didn’t see you, or was in a rush, or is new to driving. If the bumper stickers show outgroup membership, you’ll think, “Typical.” Bad behavior is proof of the essentially bad nature of the outgroup, and bad behavior on the part of ingroup members is not. That’s how factionalized media works.

So, it’s the same thing with ingroup/outgroup violence and factionalized media (and not all media is factionalized). For highly factionalized right-wing media, Hodgkinson’s actions were caused by and the responsibility of “liberal” rhetoric, but Adkisson’s were not the responsibility of “conservative” rhetoric. For highly factionalized lefty media, it was reversed.

That factionalizing of responsibility is an unhappy characteristic of our public discourse; it’s part of our culture of demagoguery in which the same actions are praised or condemned not on the basis of the actions, but on whether it’s the ingroup or outgroup that does them. If a white male conservative Christian commits an act of terrorism, the conservative media won’t call it terrorism, won’t mention his religion or politics, and will generally talk about mental illness; if someone even nominally Muslim does the same act, they call it terrorism and blame Islam. In some media enclaves, the narrative is flipped, and only conservatives are acting on political beliefs. All factional media outlets will condemn the other side for “politicizing” the incident.

While I agree that violent rhetoric makes violence more likely, the cause and effect is complicated, and the current calls for a more civil tone in our public discourse are precisely the wrong solution. We are in a situation in which public discourse is entirely oriented toward strengthening our ingroup loyalty and our loathing of the outgroup. And that is why there is so much violence now. It isn’t because of tone. It isn’t because of how people are arguing; it’s because of what people are arguing.

To make our world less violent, we need to make different kinds of arguments, not make those arguments in different ways.

Our world is so factionalized that I can’t even make this argument with a real-world example, so I’ll make it with a hypothetical one. Imagine that we are in a world in which some media insist that all of our problems are caused by squirrels. Let’s call them the Anti-Squirrel Propaganda Machine (ASPM). They persistently connect the threat of squirrels to end-times prophecies in religious texts, and they relentlessly connect squirrels to every bad thing that happens. Any time a squirrel (or anything that kind of looks like a squirrel to some people, like chipmunks) does something harmful, it’s reported in these media; any good action is met with silence. These media never report any time that an anti-squirrel person does anything bad. They declare that the squirrels are engaged in a war on every aspect of their group’s identity. They regularly talk about the squirrels’ war on THIS! and THAT! Trivial incidents (some of which never happened) are piled up so that consumers of that media have the vague impression of being relentlessly victimized by a mass conspiracy of squirrels.

Any anti-squirrel political figure is praised; every political or cultural figure who criticizes the attack on squirrels is characterized as pro-squirrel. After a while, even simply refusing to say that squirrels are the most evil thing in the world and that we must engage in the most extreme policies to cleanse ourselves of them is showing that you are really a pro-squirrel person. So, in these media, there is anti-squirrel (which means the group that endorses the most extreme policies) and pro-squirrel. This situation isn’t just ingroup versus outgroup, because the ingroup must be fanatically ingroup, so the ingroup rhetoric demands constant performance of fanatical commitment to ingroup policy agendas and political candidates.

If you firmly believe that squirrels are evil (and chipmunks are probably part of it too), but you doubt whether the policy being promoted by the ASPM is really the most effective policy, you will get demonized as someone trying to slow things down, not sufficiently loyal, and basically pro-squirrel. Even trying to question whether the most extreme measures are reasonable gets you marked as pro-squirrel. Trying to engage in policy deliberation makes you pro-squirrel.

We cannot have a reasonable argument about what policy we should adopt in regard to squirrels because even asking for an argument about policy means that you are pro-squirrel. That is profoundly anti-democratic. It is un-American insofar as the very principles of how the constitution is supposed to work show a valuing of disagreement and difference of opinion.

(It’s also easy to show that it’s a disaster, but that’s a different post.)

ASPM media will, in addition, insist on the victimization narrative, and also the “massive conspiracy against us” argument, but that isn’t really all that motivating. As George Orwell noted in 1984, hatred is more motivating when it’s against an individual, and so these narratives end up fixating on a scapegoat. (Right now, for the right it’s George Soros, and for the left it’s Trump.) There can be institutional scapegoats—Adkisson tried to kill everyone in a Unitarian Church because he’d believed demagoguery that said Unitarianism is evil.

Inevitably, the more that someone lives in an informational world in which they are presented as waging a war of extermination against us, the more that person will feel justified in using violence against them. If it’s someone who typically uses violence to settle disagreement, and there is easy access to weapons, it will end in violence against whatever institution, group, or individual that person has been persuaded is the evil incubus behind all of our problems.

At this point, I’m sure most readers are thinking that my squirrel example was unnecessarily coy, and that it’s painfully clear that I’m not talking about some hypothetical example about squirrels but the very real examples of the antebellum argument for slavery and the Stalinist defenses of mass killings of kulaks, most of the military officer class, and people who got on the wrong side of someone slightly more powerful.

And, yes, I am.

The extraordinary level of violence used to protect slavery as an institution (or that Stalin used, or Pol Pot, or various other authoritarians) was made to seem ordinary through rhetoric. People were persuaded that violence was not only justified, but necessary, and so this is a question of rhetoric—how people were persuaded. But, notice that none of these defenses of violence have to do with tone. James Henry Hammond, who managed to enact the “gag rule” (that prohibited criticism of slavery in Congress) didn’t have a different “tone” from John Quincy Adams, who resisted slavery. They had different arguments.

Demagoguery—rhetoric that says that all questions should be reduced to us (good) versus them (evil)—if given time, necessarily ends up in purifying this community of them. How else could it end? And it doesn’t end there because of the tone of dominant rhetoric. It ends there because of the logic of the argument. If they are at war with us, and trying to exterminate us, then we shouldn’t reason with them.

It isn’t a tone problem. It’s an argument problem. It doesn’t matter if the argument for exterminating the outgroup is done with compliments toward them (L. Frank Baum’s arguments for exterminating Native Americans), bad numbers and the stance of a scientist (Harry Laughlin’s arguments for racist immigration quotas), or religious bigotry masked as rational argument (Samuel Huntington’s appalling argument that Mexicans don’t get democracy).

In fact, the most effective calls for violence allow the caller plausible deniability—will no one rid me of this turbulent priest?

Lots of rhetors call for violence in a way that enables them to claim they weren’t literally calling for violence, and I think the question of whether they really mean to call for violence isn’t interesting. People who rise to power are often really good at compartmentalizing their own intentions, or saying things when they have no particular intention other than garnering attention, deflecting criticism, or saying something clever. Sociopaths are very skilled at perfectly authentically saying something they cannot remember having said the next day. Major public figures get a limited number of “that wasn’t my intention” cards for the same kind of rhetoric—after that, it’s the consequences and not the intentions that matter.

What matters is that whether it’s individual or group violence, the people engaged in it feel justified, not because of tone, but because they have been living in a world in which every argument says that they are responsible for all our problems, that we are on the edge of extermination, that they are completely evil, and therefore any compromise with them is evil, that disagreement weakens a community, and that we would be a better and stronger group were we to purify ourselves of them.

It’s about the argument, not the tone.

[A note about the image at the beginning: this is one of the stained glass windows in a major church in Brussels celebrating the massacre of Jews. The entire incident was enabled by deliberately inflammatory us/them rhetoric, but was celebrated until the 1960s as a wonderful event.]

[1] For more on Adkisson’s rhetoric, and its sources, see Neiwert’s Eliminationists (https://www.amazon.com/Eliminationists-Hate-Radicalized-American-Right/dp/0981576982)

For more about demagoguery: https://theexperimentpublishing.com/catalogs/fall-2017/demagoguery-and-democracy/

Making sure the poor don’t get any food they don’t deserve

“But when thou makest a feast, call the poor, the maimed, the lame, the blind”

In a recent interview, Kellyanne Conway said that “able-bodied” people who will lose Medicaid under the GOP health plan should “go find employment” and then get “employee-sponsored benefits.” Critics of Conway presented evidence that large numbers of adults on Medicaid do have jobs, as though that would prove her wrong. But that argument won’t work with the people who like the GOP plan, because their answer is that those people should get better jobs. The current GOP plan regarding health care is based on the assumption that benefits like health care should be restricted to working people.

For many, this looks like hardheartedness toward the poor and disadvantaged—exactly the kind of people embraced and protected by Jesus, so many people on the left have been throwing out the accusation of hypocrisy. That the same people who are, in effect, denying healthcare to so many people have protected it for themselves seems, to many, to be the merciless icing on the hateful cake.

And so progressives are attacking this bill (and the many in the state legislatures that have the same intent and impact) as heartless, badly-intentioned, cynical, and cruel. And that is exactly the wrong way to go about this argument. The category often called “white evangelical” tends to be drawn to the just world hypothesis and prosperity gospel, and those two (closely intertwined) beliefs provide the basis for the belief that public goods should not be equally accessible (let alone evenly distributed) because, they believe, those goods should be distributed on the basis of who deserves (not needs) them more. And they believe that Scripture endorses that view, so they are not hypocrites—they are not pretending to have beliefs they don’t really have. This isn’t an argument about intention; this is an argument about Scriptural exegesis.

Progressives will keep losing the argument about public policy until we engage that Scriptural argument. People who argue that the jobless, underemployed, and government-dependent should lose health care will never be persuaded by being called hypocrites because they believe they are enacting Scripture better than those who argue that healthcare is a right.

1. The Just World Hypothesis and Prosperity Gospel

There are various versions of the prosperity gospel (and Kate Bowler’s Blessed elegantly lays them out), but they are all versions of what social psychologists call “the just world hypothesis.” That hypothesis is the premise that we live in a world in which people get what they deserve within their lifetimes—people who work hard and have faith in Jesus are rewarded. In some versions, it’s well within what Jesus says, that God will give us what we need. In others, however, it’s the ghost of Puritanism (as Max Weber called it) that haunts America: that wealth and success are perfect signs of membership in the elect. And it’s that second one that matters for understanding current GOP policies.

In that version, in this life, people get what they deserve, so that good people get and deserve good things, and bad people don’t deserve them—it is an abrogation of God’s intended order to allow bad people to get good things, especially if they get those good things for free. For people who believe that God perfectly and visibly rewards the truly faithful, there is a perfect match between faith and the goods such as health and wealth. People with sufficient faith are healthy and wealthy, and, because they have achieved those things by being closer to God, they deserve more of the other goods, such as access to political power. Rich people are just better, and their being rich is proof of their goodness. So, it’s a circular argument–good people get the good things, and that must mean that people with good things are good.

I would say that’s an odd reading of Scripture, but no odder than the defenses of slavery grounded in Scripture, nor of segregation, nor of homophobia. All of those defenders had their proof-texts, after all. And, in each case, the people who cited those texts and defended those practices had a conservative (sometimes reactionary) ideology. They positioned themselves as conserving a social order and set of practices they sincerely believed was intended by God, as against liberal, progressive, or “new” ways of reading Scripture.

[And here a brief note—they often didn’t know that their own readings were very new, but that’s a different post.]

Because they were reacting against the arguments they identified as liberal (or atheist), I’ll call them reactionary Christians for most of this post, and then in another post explain what’s wrong with that term.

In some cultures, political ideology and identity are identical, so that a person with a particular political belief automatically identifies everyone with that belief as in the category of “good person,” and anyone who doesn’t share that belief is a “bad person.” We’re in that kind of culture.

That easy equation of “believes what I do” and “good person” is enhanced by living within an informational enclave. In informational enclaves, a person only hears information that confirms their beliefs—antebellum Southern newspapers were filled with (false) reports of abolitionist plots, for instance—so it would sincerely seem to their readers as though “everyone” agrees that abolitionists are trying to sow insurrection. In an informational enclave, “everyone” agrees that the Jews stab the host for no particular reason (the subject of the stained glass above–a consensus that resulted in massacre).

Informational enclaves are self-regulating in that anyone who tries to disrupt the consensus is shamed, expelled, perhaps even killed. By the 1830s, it was common for slave states to require the death penalty for anyone advocating abolition, and “advocating abolition” might be understood as “criticizing slavery.” American Protestant churches split so that Southern churches could guarantee they would not have a pastor that might condemn slavery (the founding of the SBC, for instance), and proslavery pastors could rain down on their congregations proof-texts to defend the actually fairly bizarre set of practices that constituted American slavery.

As Stephen Haynes has shown, the reliance of those pastors on an odd reading of Genesis IX became a Scriptural touchstone for defending segregation.

Southern newspapers were rabidly factional in the antebellum era, and (with a few exceptions) pro-segregation (or silent on segregation) in the Civil Rights era. (This was not, by the way, “true of both sides,” in that the major abolitionist newspaper, The Liberator, often published the full text of proslavery arguments.) Because those proof-texts were piled up as defenses, and reactionary Christianity was hegemonic in various areas, many people simply knew that there were three kings who visited the baby Jesus, that those three kings related to the three races, with the “black” race condemned to slavery due to Noah’s curse.

If you’d like to see how hegemonic that (problematic) reading of Scripture was, look at older nativity scenes, and you will see that there is always a white person, someone vaguely Semitic, and an African. Ask yourself, how many wise men visited Jesus? Try to prove that number through Scripture.

That whole history of reactionary Christianity is ignored, and even the SBC has tried to rewrite its own history, not acknowledging the role of slavery in their founding. My point is simply that, when a method of interpreting Scripture becomes ubiquitous in a community, then people don’t realize that they’re interpreting Scripture through a particular lens—they think they’re just reading what is there.

For years, the story of Sodom was taken as a condemnation of homosexuality, but there is really nothing about homosexuality in it—the Sodomites were more commonly condemned for oppressing the poor. There are rapes in it, and one of them would have been homosexual, but there is no indication that homosexuality was accepted as a natural practice in the community. Yet, for years, the story of Sodom was flipped on the podium as though it obviously condemned all same-sex relationships.

For readers of The New York Times, The Nation, or other progressive outlets, the Scriptural argument over homosexuality was under the radar, but it was crucial to how far we’ve come on the civil rights of people with sexualities stigmatized by reactionary Christians. The Scriptural argument about queer sexuality was always muddled—Sodom wasn’t really about gay sex, the word “homosexuality” is nowhere in Scripture, people who cite Leviticus about men lying with each other get that sentiment tattooed on themselves while wearing mixed fibers, Paul was opposed to sex in general.

Reactionary Christians managed to promote their muddled view as long as no one raised questions about exegesis, and the Christian Left raised those questions over and over. And now even mainstream reactionary churches who argue that Scripture condemns homosexuality have abandoned the story of Sodom as a proof text. That success can be laid at the feet of progressive Christians.

One thing that turned large numbers of people, I think, was the number of bloggers, popular Christian authors, and pastors making the more sensible Scriptural argument: there isn’t a coherent method of reading Scripture that demonizes queer sexuality and allows the practices reactionary Christians want to allow (such as non-procreative sex, divorce, wildflower mixes, corduroy, oppressing the poor).

Similarly, an important arena in the Civil Rights movement was the one in which progressive Christians debated the Scriptural argument. One of the more appalling “down the memory hole” moments in American history is the role of reactionary Christians in opposing civil rights. Segregation was a religious issue, supported by Genesis IX and various other texts (about God putting peoples where they belong, and all the texts about mixing). Even “moderate” Christians, like those who opposed King, and to whom he responded in his letter, opposed integration.

That’s important. The major white churches in the South supported segregation, and all of the reactionary ones did. The opponents of segregation (like the opponents of slavery) were progressive Christians, sometimes part of organizations (like the black churches) and sometimes on the edge of getting disavowed by their organizations. And that is obscured, sometimes deliberately, as when reactionary Christians try to claim that “Christianity” was on the side of King—no, in fact, reactionary Christianity was on the side of segregation.

Right now, there is a complicated fallacy of genus-species among many reactionary Christians, in that they are trying to claim the accomplishments of people like Jesse Jackson and Martin Luther King, Jr., and Stokely Carmichael on the grounds that King was Christian, while ignoring that their churches and leaders disavowed and demonized those people (and, in the case of Jackson and Carmichael, still do).

Reactionary Christianity has two major problems: one is a historical record problem, and the second, related, is an exegesis problem. They continually deny or rewrite their own participation in oppression, and they have thereby enabled the occlusion of the problems their method of exegesis presents. If their method of reading got them to support slavery and segregation, practices they now condemn, then their method is flawed. Denying the problems with their history enables them to deny the problems with their method.

Reactionary Christianity’s method of reading Scripture begins by assuming that the current cultural hierarchy is intended by God, that this world is just, and that everything they believe is right, and then goes in search of texts that will support those premises. There is also a hidden premise that the world is easily interpretable, that uncertainty and ambiguity are unnecessary because they are signs of a weak faith, and that the world is divided into the good and the bad.

2. The Scriptural argument

The proof-text for the notion that poor people don’t deserve health care or other benefits is 2 Thessalonians 3:10, “For even when we were with you, this we commanded you: that if any would not work, neither should he eat.”

Thessalonians may or may not have been written by Paul (probably not), but it certainly contradicts what both Paul and Jesus said about how to treat the poor. There are far more texts that insist on giving without question, on caring for the poor, on tending to people without judging, and on humans not presuming to be God (that is, we are not perfect judges of good and evil, and our fall came precisely from thinking we should be).

That we have a large amount of public policy resting on that single wobbly text of 2 Thessalonians 3:10 is concerning, but it isn’t new—the Scriptural arguments for slavery, segregation, and homophobia were and are similarly wobbly. Prosperity gospel has a very shaky Scriptural foundation, and the whole notion that Scripture supports an easy division into makers and takers isn’t any easier to argue than the readings that supported antebellum US practices regarding slavery.

Their reading of Scripture says that they should feel good about health insurance being restricted to people who have jobs (which is why Congress is cheerfully giving themselves benefits they’re denying to others—they see themselves as having earned those benefits by having the job of being in Congress). They can feel justified (in the religious sense) in cutting off people on Medicaid, those who are un- or underemployed, and those with pre-existing conditions because they believe that Scripture tells them that those people could simply stop being un- or underemployed, or have made different choices that wouldn’t have landed them on Medicaid, or could have prayed enough not to have those pre-existing conditions. They believe that they are, in this life, sitting by Jesus’ side and handing out judgments.

I think they’re wrong. But calling them hypocrites won’t work.

This is an argument about Scripture, and progressives need to understand that, as with other policy debates, progressive Christians will do some of the heavy lifting. And progressive Christians need to understand that it is our calling: to point, over and over, to Jesus’ passion for the poor and outcast, and to his insistence that the rewards of this world should never be taken as proof of much of anything.

http://theexperimentpublishing.com/tag/patricia-roberts-miller/

King Lear and charismatic leadership

Recently, various highly factionalized media worked their audience into a froth by reporting that New York’s “Shakespeare in the Park” had Julius Caesar represented as Trump. That these media were successful shows people are willing to get outraged on the basis of no information or misinformation. Shakespeare’s Caesar is neither a villain nor a tyrant.

And it’s the wrong Shakespeare anyway for a Trump comparison. Shakespeare was deeply ambivalent about what we would now consider democratic discourse (look at how quickly Marc Antony turns the crowd, or Coriolanus’ inability to maintain popularity). But he wasn’t ambivalent about leaders who insist on hyperbolic displays of personal loyalty. They are the source of tragedy.

The truly Shakespearean moment recently was Trump’s cabinet meeting, which he seemed to think would gain him popularity with his base, since it was his entire cabinet expressing perfect loyalty to him. And anyone even a little familiar with Shakespeare immediately thought of the scene in King Lear when Lear demands professions of loyalty. Trump isn’t Caesar; he’s Lear.

Lear’s insistence on loyalty meant that he rejected the person who was speaking the truth to him, and the consequence was tragedy. It isn’t exactly news, at least among people familiar with the history of persuasion and leadership, that leaders who surround themselves with people who make the leader feel great (or who worship the leader) make bad decisions. Ian Kershaw’s elegant Fateful Choices makes the point vividly, showing how leaders like Mussolini, Hitler, or Hirohito skidded into increasingly bad decisions because they treated dissent as disloyalty.

In business schools, this kind of leadership is called “charismatic,” and it is often presented as an unequivocal good—something that is surely making Max Weber (who initially described it in 1916) turn in his grave. Weber identified three sources of power for leaders: traditional, legal, and charismatic, and Hannah Arendt (the scholar of totalitarianism) added a fourth: someone whose authority comes from having demonstrated context-specific knowledge. Weber argued that charismatic leadership is the most volatile.

In business schools, charismatic leadership is praised because it motivates followers to go above and beyond; followers who believe in the leader are less likely to resist. And, while that might seem like an unequivocal good, it’s only good if the leader is leading the institution in a good direction. If the direction is bad, then disaster just happens faster.

Charismatic leadership is a relationship that requires complete acquiescence and submission on the part of the followers. It assumes that there is a limited amount of power available (thus, the more power that others have, the less there is for the leader to have). And so the charismatic leader is threatened by others taking leadership roles, pointing out her errors, or having expertise to which she should submit. It is a relationship of pure hierarchy, simultaneously robust and fragile, because it can withstand an extraordinary amount of disconfirming evidence (that the leader is not actually all that good, does not have the requisite traits, is out of her depth, is making bad decisions) by simply rejecting them; it is fragile, however, insofar as the admission of a serious flaw on the part of the leader destroys the relationship entirely. A leader who relies on legitimacy isn’t weakened by disagreement (and might even be strengthened by it), but a charismatic leader is.

Hence, leaders who rely on legitimacy encourage disagreement and dissent because that leader’s authority is strengthened by the expertise, contributions, and criticism of others, but charismatic leaders insist on loyalty.

Charismatic leadership is praised in many areas because it leads to blind loyalty, and blind loyalty certainly does make an organization in which people work feverishly toward the leader’s ends. But what if those ends aren’t good?

Whether charismatic leadership is the best model for business is more disputed than best sellers on leadership might lead one to believe. There is no dispute, however, that it’s a model of leadership profoundly at odds with a democratic society. It is deeply authoritarian, since the authority of the leader is the basis of decision-making, and dissent is disloyalty.

Lear demanded oaths of blind loyalty, and, as often happens under those circumstances, the person who was committed to the truth wouldn’t take such an oath. And that person was the hero.

“Just Write!” and the Rhetoric of Self-Help

There is a paradox regarding the large number of scholars who get stalled in writing—and a large number do get stalled at some point (50% of graduate students drop out)—they got far enough to get stalled because, for some long period of time, they were able to write. People who can’t write a second book, or a first one, or a dissertation, are people who wrote well enough and often enough to get to the point that they needed to write a dissertation, first book, second book, grant, and so on. So, what happened?

The advice they’re likely to be given is, “Just write.” And the reason we give that advice (advice I gave for years) is that we have the sense that they’re overthinking things, that, when they sit down to write, they’re thinking about failure, and success, and shame, and all the things that might go wrong, and all the ways what they’re writing might be inadequate, and all the negative reactions they might get for what they’ve written. So, we say, “Just write,” meaning, “Don’t think about those things right now.”

The project of writing may seem overwhelming because it is existentially risky, and the fear created by all the anxiety and uncertainty is paralyzing. It can seem impossibly complicated, and so we give simple advice because we believe that persuading them to adopt a simpler view of the task ahead will enable them to write something. Once they’ve written something, once they’re unstuck, then they can write something more, and then revise, and then write more. Seeing that they have written will give them the confidence they need to keep writing.

And I think that advice often works, hence the (deserved) success of books like Writing Your Dissertation in Fifteen Minutes a Day or Destination Dissertation. They simplify the task initially, and present the tasks involved in ways that are more precise than accurate, but with the admirable goal of keeping people moving. Many people find those books useful, and that’s great. But many people don’t, and I think the unhappy consequence of the “you just have to do this” rhetoric is that there is an odd shaming that happens to people for whom that advice doesn’t work. And, while it’s great that it works for a lot of people, there are a lot for whom it doesn’t, and I’m not happy that they feel shamed.

These books have, as Barbara Kamler and Pat Thomson have argued, characteristics typical of the self-help genre (“The Failure of Dissertation Advice Books”), especially in that they present dissertation writing as “a series of linear steps” with “hidden rules” that the author reveals. While I am not as critical of those books, or of the genre of self-help, as Kamler and Thomson are, I think their basic point is worth taking seriously: this advice misleads students because it presents dissertation writing as a set of practices and habits rather than cognitive challenges and developments.

Academic writing is hard because it’s hard. Learning to master the postures, steps, and dances of developing a plausible research question, identifying and mastering appropriate sources, determining necessary kinds of support, managing a potentially sprawling project, and positioning a new or even controversial claim in an existing scholarly conversation—all of that is hard and requires cognitive changes, not just writing practices.

Telling people academic writing “just” requires anything (“just write,” “just write every day,” “just ignore your fears”) is a polite and sometimes useful fiction. And self-help books’ reliance on simple steps and hidden rules is, I’d suggest, not necessarily manipulative, but based in the sense that telling people something hard is actually hard can discourage them. If you lie, and thereby motivate them to try doing it, then they might realize that, while hard, it isn’t impossible.

I think the implicit analogy is to something like telling a person who needs to exercise that they should “just get up off the couch.” Telling people that improving their health will be a long and slow process with many setbacks is unlikely to motivate someone to start the process; it makes the goal seem impossible, and unrewarding. Telling someone that getting healthier is simple, and they “just” need to increase their exercise slightly, or reduce portion size slightly, or do one thing differently will at least get them started. Having gotten a little healthier might inspire them to do more, but, even if it doesn’t, they are getting a little better.

But that’s the wrong analogy.

A scholar who is having difficulty writing is not analogous to someone who needs to get up off the couch: this is a person with a long record of successes as a writer. That is what we (and people who are stuck) so often lose track of when we give the “just write” advice. They are not a person sitting on a couch; they are someone with an exercise practice that has always worked for them in the past but isn’t working now.

The better analogy, I would suggest, is a sprinter who is now trying to run a marathon. Sprinting has worked for them in the past, and many academics have a writing process that is akin to sprinting—chunks of time in which we do nothing but write, and try to get as much done as quickly as we can. Writing a dissertation or book, on the other hand, is more like running a marathon.

It would be unethical to tell a sprinter who is unable to run a marathon that she should “just run.” She has been running; she’s quite good at it. But the way that she has been running is not working for this new distance. And if she does try to run a marathon the way she has always run short races, she will hurt herself.

My intuition is that people who have trouble writing are people who have always used the sprinting method, and have simply managed to develop the motivational strategies to sprint for longer, or to collapse from time to time during the race and pick themselves up. Often, it seems to me, that motivation relies on panic and negative self-talk—they manage to binge write because otherwise, they tell themselves, they are a failure.

So I’m not saying that “Just write” is always bad advice. I am saying that it sometimes is; it is sometimes something that can send people into shame spirals. It only works for some people, for people who do find that polite fiction motivating. For others, though, telling them “just write” is exactly like telling a person in a panic attack “just calm down” or someone depressed “just cheer up.”

The “just write” comes from a concern that lack of confidence will paralyze a student. But I think we might be solving the wrong problem.

Part of the problem is the myth of positive thinking, which has taken on an almost magical quality for some people. There is a notion that you should only think positive thoughts, as though thinking negative things brings on bad events. Since thinking clearly about how hard it is to write a book, dissertation, or grant (and, specifically, thinking clearly about how we might have habits or processes that inhibit our success) is thinking about “bad” things, about how things might go wrong or what troubles we might have, the myth of positive thinking says you shouldn’t do it. You should, instead, just imagine success.

This is a myth. It isn’t just a myth, but pernicious, destructive nonsense. A (sometimes secular) descendant of the prosperity gospel elegantly described by Kate Bowler in Blessed, this is magical thinking pure and simple, and perfectly contrary to what research shows about how positive thinking actually affects motivation.

But here I should be clear. Some people who advocate wishful thinking do so because they believe that the only other possibility is wallowing in self-loathing and a sense that the task is impossible, and they believe that telling students that academic writing is hard will necessarily lead to their believing it is impossible. In other words, there is an assumption that there is a binary between thinking only and entirely about positive outcomes or thinking only and entirely about tragic outcomes. The former is empowering and the latter is paralyzing. That narrative is wrong on all three counts—positive thinking is not necessarily enabling, moments of despair are not necessarily disabling, and our attitude toward our own challenges is not usefully described as a binary between pure optimism and pure despair. Left out of that binary is being hopefully strategic: aware of possible failures, mindful of hurdles, with confidence in our resilience as much as in our talents.

As to the first, studies clearly show that refusing to think negative thoughts about possible outcomes is actively harmful, and frequently impairs achievement. That’s important to remember: telling students they shouldn’t think about their own flaws, the challenges ahead of them, and how things might go wrong is not helping them, and it is making it less likely they will do what they need to do.

Gabriele Oettingen’s considerable research (summarized in the very helpful book Rethinking Positive Thinking) shows that, while wishful thinking can be useful for maintaining hope in a bad situation or identifying long-term goals, it inhibits action. Fantasizing about how wonderful a dissertation or book will be doesn’t inspire us to write either; for many people, it makes the actual, sometimes gritty work so much more unattractive in comparison that it’s impossible to write. The fantasy is far more fun than writing a crummy first draft. Similarly, Carol Dweck’s research on mindsets shows that success depends on acknowledging what has gone wrong and identifying how one might grow and change to get a different outcome in the future.

A sense that the task is so hard as to be impossible is not inevitably and necessarily disabling. It is, however, inevitable. It is dishonest to tell students that we never feel that what we’re trying to do can’t be done or isn’t worth doing, because so many of us do. And most of us got (and get) through it. Sometimes it took time, therapy, medication, changing things in our personal lives, changing jobs, changing projects, all of the above. But I don’t know any productive scholar free from times of slogging through the slough of despond.

In my experience, academic writing gets easier, but it’s never easy. The hardest writing is probably finishing a dissertation while writing job materials—nothing after that is so hard. But it’s always hard. If we tell students that it’s easy, or that it gets easy, even if we do so with the intention of keeping them moving, we do them a disservice. If they believe us, if they believe that we find it easy, then, when it gets hard, as it necessarily will, they have to conclude that there is something wrong with them. They are unhappily likely to conclude that they have been exposed for the imposter they always worried they were.

The “just write” advice almost certainly works for some people in some situations, as does the “just write every day” or “just freewrite” or “just start with your thesis” or any of the other practices and rules that begin with “just.” They work for someone somewhere and maybe they work for everyone some of the time, and they always strike me as sensible enough to suggest that people experiment with them. But we shouldn’t pretend that they’re magical and can’t possibly fail, or that someone “just” needs to do them. The perhaps well-intentioned fiction that academic writing “just” requires certain practices is magical thinking, and we need to stop saying it.

In my experience, people who find the “just write” advice useless find it too abstract. So, I think we need to be clear that scholarly productivity is, for most people, hard, and it’s fine that a person finds it hard. And it takes practice, so there are some things a person might “just write”:

    • the methods section;
    • descriptions of an incident, moment in a text, interaction, or some other very, very specific epitome of their problem (Pirsig’s brick in the wall of the opera house);
    • summaries of their secondary materials with a discussion of how each text is and is not sufficient for their research;
    • a collection of data;
    • the threads from one datum to another;
    • a letter to their favorite undergrad teacher about their current research;
    • a description of their anxieties about their project;
    • an imitation of an introduction, abstract, conclusion, or transition paragraph they like, written by a junior scholar.

I’m not presenting that list as a magical solution. It would be odd for me to say that simplistic advice is not helpful and then give a list of the five (or seven, or ten) things we “just” have to do to become (or teach others to become) skilled and productive academic writers. What we have to do is acknowledge that the project requires significant and complicated cognitive changes: that, for most of us, scholarly writing is hard because it’s hard. Let’s be honest about that.

Arguments from identity and the easy demagoguery of everyday commenting

I recently had a piece published on Salon, and it was thrilling. http://www.salon.com/2017/06/10/demagoguery-vs-democracy-how-us-vs-them-can-lead-to-state-led-violence/ And the comments quickly veered off in the direction of whether “liberals” or “republicans” are better people. That was frustrating.

My argument about demagoguery has several parts:

    1. demagoguery shifts the stasis (as rhetoricians say) from policy arguments to identity arguments, relying on the assumption that all that matters is whether advocates/critics of a policy are ingroup or outgroup.
    2. therefore, in a culture of demagoguery all arguments about policy end up relying on two points: which group is better, and what group an advocate is in—in other words, it’s all identity politics.
    3. so, all arguments end up being deductive arguments from identity.
    4. this part is barely mentioned in either book I’ve done on the issue, but reasoning from identity is done by homogenizing the outgroup, so that if a person seems to be a member of a group, you can attribute to them everything any other member of that group has said or done.

There are other characteristics, but these are the ones that seemed especially important in the comment section on the article.

And here I have to go back to some really old work, and say that I think we remain muddled on how public discourse operates—we flop around among models of expression, deliberation, and purchasing.

Lay theories of public deliberation aren’t expected to be entirely consistent—as social psychologists have noted, we all toggle between naïve realism and skepticism in our everyday lives. But I think there are important consequences of our failing to realize that we flop around among various models of arguing and various models of knowing.

There is a basic premise: major policy decisions shouldn’t be made on the basis of some kind of model of us versus them when we’re talking about a culture that includes us and them. The idea that only one group is entitled to determine policy isn’t democratic, sensible, or Christian.

If we want a thriving community (or nation state or world or even club) then we want enough disagreement that we can prevent the problems associated with what is often called groupthink—when a bunch of like-minded and ingroup people agree that what they think and who they are is, obviously, the best.

It’s clearly demonstrated that people have trouble admitting error, and therefore, if we want to make good decisions, we need people who will tell us we’re wrong. Good decisions rely on people contributing from various perspectives—not just people like us.

That’s the deliberative model of public argument: the point of Congress and state legislatures is that they consider various points of view, the impacts on all communities, and then come to a decision. If we look at public decision-making from that perspective (what’s often called the deliberative model), then we would ensure that there is diverse representation in deliberative assemblies, such as the state legislatures or Congress. (The notion that the best decisions involve various perspectives is a given in successful business decision-making models.)

There is another model: the expressive model. For many people, there is no such thing as persuasion, and public discourse is all about people expressing their opinions (usually their statements of commitment to their group). Public discourse isn’t about deliberation or communal reasoning—it’s a bunch of people shouting in a stadium, and the group with the people who shout the loudest wins. You don’t go into that stadium intending to listen carefully to what other people are shouting in order to come to a new understanding of your own views: you come to shout down the others.

I can’t think of a time when this model of public discourse led to a community coming to a good decision.

The third model is that ideas/policies are products sold just like shampoo. The hope is that the market is rational, and so if a particular shampoo sells the best, it is the best product. This is a problematic model in many ways, not the least of which is that it’s circular. The market is assumed to be rational because it represents what people value, and it’s assumed that people’s values are rational. This is an almost religious belief in that it can’t be supported empirically, and it has often been falsified (bubbles). The problem with the market model is three-fold: first, people buy products on the basis of short-term benefits and inadequate information, whereas policy decisions should be made in light of long-term consequences; second, it makes voters passive, free to whinge about a candidate not being adequately sold to them (instead of seeing it as our responsibility to inform ourselves about candidates); finally, if I buy the wrong shampoo, my hair falls out, but if I buy the wrong candidate, my community is harmed.

The activity of the market always represents short-term choices, and assessments of “marketability” tend to be about short-term gains. Unless you have a circular argument (the market choice is rational because the market choice is defined as rational—which a surprising number of people on this issue assume), the market does not represent the long-term best interest of the people (think bubbles). In addition, the market, by definition, cannot represent the values of those without the resources to participate (future generations, for instance). The market is always the tragedy of the commons.

(You never get a defense of the inherent rationality of the market that isn’t logically circular, doesn’t assume the just world hypothesis, and doesn’t appeal to the prosperity gospel.)

While I believe that the deliberative model is best for community decision-making, I think a healthy public sphere has places where each of these models is practiced. It’s fine if someone’s Facebook page (or Twitter feed) is entirely expressive. But, on the whole, there should be a place where people try to deliberate with one another, or, at least, acknowledge in the abstract that the inclusion of people with whom they disagree is valuable. The problem is that people are spending all of their time in expressive public spheres, and making decisions on the basis of group identity.

I was definitely one of the people who thought that the digitally-connected world would be the Habermasian public sphere, and that isn’t how it played out. I think there were moments (in the 80s) when it seemed to be something like what Habermas described—a realm in which argument and not identity mattered. But, what became clear is that identity does matter.

And so here is what I came to believe: in good arguments there are a lot of data. And identity is a datum. But that’s all it is. It isn’t a premise: it’s a datum.

[As an aside, I have to say that sometimes I think that public deliberation could be wonderful were we to understand five points: 1) a premise and a datum are not the same thing; 2) don’t put always or never or necessarily into someone else’s argument; 3) treat others as you want to be treated; 4) there isn’t a binary between certainty and sloppy relativism; 5) a claim can be false and/or illogical even if the evidence for the claim is true.]

But, what happens in a lot of public discourse is that people assume that you can deduce the goodness of an argument from the goodness of the person making the argument, and that you can make that determination on the basis of cues. That is, if a person says something that, for you, cues that they are a member of a particular group, you can assume that they believe all the things you think members of that group believe. If that group is one you share, then you’ll attribute all sorts of wonderful qualities and beliefs to them; if it’s an outgroup for you, then you’ll attribute all sorts of stupid beliefs, bad motives, and bad behavior to them.

That last point is simultaneously simple and complicated. We tend to homogenize the outgroup, and so if one outgroup member says that squirrels are awesome, and another outgroup member says that little dogs are the best, we’ll assume that the second person also thinks squirrels are awesome. People who are particularly drawn to thinking in terms of us versus them will take mere criticism of the ingroup as sufficient proof that the critic is a member of the outgroup, and will then attribute to that person all the things that are supposed to be true of outgroup members.

This is deductive reasoning—inferring the beliefs of an individual from our assumptions about what members of their group believe. It’s pervasive in toxic publics.

And, no, it isn’t particular to any one “side” of the political spectrum. But, the fact that that question even comes up—who does this more?—is a sign of how uselessly committed to group loyalty our political world has become.

Democracy presumes that there is no single person, or single group, that knows all that is necessary to make good policy decisions. And that means that, while it isn’t necessary that people in a democracy believe that all views are equally valid (or even that all views are valid), it is necessary that we believe that we have something to learn from people with whom we disagree—we cannot delegitimate everyone who disagrees with us and continue to claim that we believe in democracy. (For me, this tendency to dismiss every other point of view as corrupt, servile, or in other ways illegitimate is especially troubling in people who self-identify as democratic socialists—c’mon, folks, it isn’t democratic if it’s a one-party system.) The tendency to insist that only one point of view is legitimate is profoundly anti-democratic—it assumes that the ideal situation is a one-party system. And that’s authoritarianism. And it has never ended well.

Comey’s testimony and identity politics

Comey, being a careful person, documented his deeply problematic meetings with Trump in the moment, and he’s released a statement with all anyone needs to know—Trump used his power to fire Comey in order to try to coerce him into closing down an investigation.

But that isn’t how it will play out in the hearing tomorrow.

For many years now (at least since the rise of Fox News), the GOP Political Correctness Machine has so consistently engaged in projection that you can tell the weakest point of a GOP candidate by noticing what accusations the Fox media (and other water carriers, as Limbaugh called himself) make about their opponents (think about their attack on Kerry for his war record).

For years, they’ve been flinging the accusation of political correctness at their opposition, and it’s a great example of projection.

Originally, the term came from the way that the Stalinist propaganda machine would decide what was the correct line to take on some event: Nazis are evil, Nazis are okay, Nazis are evil. To be politically correct meant that you were in line with what the higher-ups said was the right line to take on a political issue. And it was even better if you could pivot quickly.

To be politically correct means that you don’t have principles that operate across groups (adultery is bad whether the adulterer is GOP, Libertarian, Dem, or Green), but that you know what your beliefs are supposed to be. And the GOP is all about political correctness in that sense—that’s why they accuse others of it so often. Michelle Obama dishonored the office of First Lady by wearing a sleeveless dress—that was presented as a principle. But that they hadn’t objected to Nancy Reagan’s sleeveless dresses, nor to the current First Lady’s problematic sartorial choices of long ago, shows it was never about the principle. They pivoted to condemn Obama and then pivoted again not to condemn Trump.

So, what will be the politically correct thing to say about Comey?

While large numbers of people across the political spectrum make policy judgments on the basis of their perceptions of identity (if “good” people support a policy, it must be a “good” policy), loyalty to the group is more a value among people who self-identify as conservative (see Jonathan Haidt’s The Righteous Mind). Authoritarians also tend to reason from ingroup membership, and authoritarians are more likely to self-identify as conservative (Hetherington and Weiler’s Authoritarianism and Polarization in American Politics has a good summary of the research on this; so does John Jost’s work in political psychology).

In other words, the GOP Political Correctness Machine has also been engaged in projection in making it one of the politically correct things to say that lefties engage in identity politics. They’re all about identity politics.

So, what we can expect is that the politically correct Congresscritters will attack Comey’s identity. They’ll dodge any of his claims of what happened in favor of questions that enable them to present him as a bad person, especially as one disloyal to GOP values.

Of course the head of the FBI should not be a loyal Republican. The very same people who will condemn him for that disloyalty would fling themselves around in outrage were a Congress with a Dem majority and/or President to insist that he be loyal to Dems.

So, let’s be clear: this isn’t about a principle that operates across groups. This is purely and simply about factional politics. This is about loyalty only being a value when it’s a loyalty to their group.

It will be identity politics.