Demagoguery and Disruption

various books that are or are about demagoguery

I’ve been asked why demagoguery rises and falls, more than once by people who like the “disruption” theory—that demagoguery is the consequence of major social disruption. The short version of that theory is that events create crises of a kind and severity that “normal” politics and “normal” political discourse seem completely incapable of ameliorating, let alone solving. People feel themselves to be in a “state of exception,” in which things they would normally condemn come to seem attractive—anti-democratic practices, purifying of a community through ethnic/political cleansing, authoritarianism, open violation of constitutional protections.

When I first started working on demagoguery, I assumed that was the case. It makes sense, after all. And, certainly, in the instances that most easily come to mind when we think about demagoguery, there was major social disruption. Hitler rose to power in the midst of major social disruption: humiliation (the Great War and subsequent Versailles Treaty), economic instability (including intermittent hyperinflation), mass immigration from Central and Eastern Europe, an unstable government system.

And you can look at other famous instances of demagoguery and see social disruption: McCarthyism and the Cold War (specifically the loss of China), Charles Coughlin and the Great Depression, Athenian demagogues like Cleon and the Peloponnesian War, Jacobins and failed harvests. But, the more I looked at various cases, the weirder it got.

Take, for instance, McCarthyism and China. There are two questions never answered by people who blame(d) “the Democrats” for “losing” China: what plan would have worked to prevent Mao’s victory? And did Republicans advocate that plan (that is, would they have enacted it had they been in power)? McCarthy’s incoherent narrative was that spies in the State Department were [waves hands vaguely] somehow responsible for the loss of China. Were losing China a major social disruption, then China would have, until that moment, been seen as an important power by the people who framed its “loss” as a major international threat—American Republicans. But, prior to Mao’s 1949 victory, American Republicans were not particularly interested in China. In fact, FDR had to maneuver around various neutrality laws passed by Republicans in order to provide support for Chiang Kai-shek at all. After WWII, Republicans were still not very interested in intervening in China—they weren’t interested in China till Mao’s victory. So, why did it suddenly become a major disruption?

One possibility is that Mao’s victory afforded the rhetorical opportunity of having a stick with which to beat the tremendously successful Democrats (that’s Halberstam’s argument). If Halberstam is right, then demagoguery about China, communism, and communist spies was the cause, not the consequence, of social disruption. Another equally plausible possibility is that China becoming communist, and an ally of the USSR, took on much more significance in light of Soviet acquisition of nuclear weapons. So, the disruption led to demagoguery.

In other words, McCarthyism turned out not to be quite as clean a case as I initially assumed, although far from a counter-example.

Another problematic case was post-WWI demagoguery about segregation. In many ways, that demagoguery was simply a continuation of antebellum pro-slavery demagoguery, with added bits from whatever new “scientific” or philosophical movement might seem useful (e.g., eugenics, anti-communism). It wasn’t always at the same level, but tended to wax and wane. I couldn’t seem to correlate the waxing and waning with any economic, political, or social event, or even kind of event. Instead, it seemed to correlate more with specific political figures deciding to amp up the demagoguery for short-term gain (see especially Chapter Three).

Similarly, antebellum pro-slavery demagoguery didn’t consistently correlate to major disruptions; if anything, it often seemed to create them, or to create a reframing of conflicts (such as with indigenous peoples). But the main problem with the disruption narrative of causality is that I couldn’t control the variables—it’s extremely difficult to find a period of time when there wasn’t something going on that can be accurately described as a major disruption. Even if we look only at financial crises considered major (that is, downturns in the economy that lasted for years), there were eight in the US in the 19th century: 1819, 1837, 1839, 1857, 1873, 1884, 1893, and 1896. Since several of these crises lasted for years, as much as half of the 19th century was spent in a major financial crisis.

And then there are other major disruptions. There were riots or uprisings related to slavery and race in almost every year of the 19th century. The Great Hunger in Ireland (1845-1852) and its later recurrence (1879), the 1848 revolutions in Western Europe, and various other events led to mass migrations of people whose ethnicity or religion was unwelcome enough to create major conflicts. And this is just the 19th century, and just the US.

Were demagoguery caused by crises, then it would always be full-throated, since there are always major crises of some kind. But it waxes and wanes, often to varying degrees in various regions, or among various groups, sometimes without the material conditions changing. Pro-slavery demagoguery varied in themes, severity, and popularity, but not in any way that I could determine correlated with the economic viability or political security of the system.

Anti-Japanese demagoguery was constant on the West Coast of the US from the late 19th century through at least the mass imprisonment in the 40s, but not as consequential or extreme elsewhere. One might be tempted to explain that discrepancy by population density, but there was not mass imprisonment in Hawaii, which had a large population of Japanese Americans. Anti-Judaism has never particularly correlated to the size (or even existence) of a local Jewish population; it’s not uncommonly the most extreme in situations almost entirely absent of Jews. And sometimes it’s impossible to separate the crisis from the demagoguery—as in the cases of demagoguery about fabricated threats, such as Satanic panics, stranger danger demagoguery, wild and entirely fabricated reports of massive abolitionist conspiracies, intermittent panics about Halloween candy.

I’ve come to think it has to do with two other factors: strategic threat inflation on the part of rhetors with a sufficiently large megaphone, and informational enclaves (and these two factors are mutually reinforcing). I’ve argued elsewhere that the sudden uptick in anti-abolitionist demagoguery was fueled by Presidential aspirations; Truman strategically engaged in threat inflation regarding Soviet intentions in his “Truman Doctrine” speech; the FBI has repeatedly exaggerated various threats in order to get resources; General DeWitt fabricated evidence to support race-based imprisonment of Japanese Americans. These rhetors weren’t entirely cynical; I think they felt sincerely justified in their threat inflation, but they knew that they were exaggerating.

And threat inflation only turns into demagoguery when it’s picked up by important rhetors. Japanese Americans were not imprisoned in Hawaii, perhaps because DeWitt didn’t have as much power there, and there wasn’t a rhetor as important as California Attorney General Earl Warren supporting it.

In 1835, there was a panic about the American Anti-Slavery Society (AAS) “flooding” the South with anti-slavery pamphlets that advocated sedition. They didn’t flood the South; they sent the pamphlets, which didn’t advocate sedition, to Charleston, where they were burned. But the myth of a flooded South was promoted by people so powerful that it was referred to in Congress as though it had happened, and it is still referred to by historians who didn’t check the veracity of the story.

And that brings up the second quality: informational enclaves. Demagoguery depends on people either not being aware of or not believing disconfirming information. The myth of Procter and Gamble being owned by a Satan worshipper (who was supposed to have gone on either Phil Donahue or Oprah Winfrey and announced that commitment) was spread for almost 20 years despite it being quite easy to check and see if any recording of such a show existed. The people I knew who believed it didn’t bother even trying to check. Advocates of the AAS mass-mailing demagoguery (or other fabricated conspiracy stories) only credited information and sources that promoted the demagoguery.

Once the Nazis or Stalinists had control of the media in their countries, the culture of demagoguery escalated. But, even prior to the Nazi silencing of dissent, Germany was in a culture of demagoguery—because people could choose to get all their information from reinforcing media, and many made that choice. Antebellum media was diverse—it was far from univocal—but people could choose to get all their information from one source. They could choose to live in an informational enclave. Many made that choice.

It didn’t end well.

Seeds Over a Wall: The Pyramid of Harm

flowers in front of a wall

My grandmother had a “joke” (really more of a parable) about a guy who sees a pie cooling in the window, and steals it. Unfortunately, he leaves a perfect handprint on the sill, so he sneaks into the house in order to wash off his handprint. But then it’s obvious that the sill has been washed, since it’s so much cleaner than the wall. So he washes that wall. It’s still obvious that something has happened because that one wall is so much cleaner than the others. When the police come, he’s repainting the attic.

You can tell this as a shaggy dog joke, with more of the steps between the sill and the attic. And, in a way, that’s how covering up a bad choice often plays out. Rather than admit the initial mistake, we get deeper and deeper into a situation; the more energy we expend to deflect the consequences of that first mistake, the more committed we are to making that expenditure worthwhile. So, we’re now in the realm of the “sunk cost” fallacy/cognitive bias. Making decisions on the basis of trying to retrieve sunk costs—also known as “throwing good money after bad”—enables us to deny that we made a bad decision.

In the wonderful book Mistakes Were Made, Carol Tavris and Elliot Aronson call this process “the pyramid of choice.” It’s usefully summarized here:

“The Analogy of the Pyramid (Tavris and Aronson, 2015). An initial choice—which is often triggered by the first ‘taking the temperature’ vote—amounts to a step off on one side of the pyramid. This first decision sets in motion a cycle of self-justification which leads to further action (e.g., taking a public stance during the group discussion) and further self-justification. The deeper down participants go, the more they can become convinced and the more the need arises to convince others of the correctness of their position.”

The example used by Tavris and Aronson is of two students who are faced with the choice of cheating or getting a bad grade on an exam. They are, initially, both in the same situation. One decides to cheat, and one decides to get the bad grade. But, after some time, each will find ways of not only justifying their decision, but they will be “convinced that they have always felt that way” (33).

In the equally wonderful book Denial, Jared Del Rosso describes a similar process for habituating a person to behaviors they would previously have condemned (such as engaging in torture). A prison guard or police officer is first invited to do something a little bit wrong; that small bad act is normalized, and then, once they’ve done that, it becomes easier to get them to do a little worse (Chapter 4). Christopher Browning describes a similar situation for Nazi Wehrmacht soldiers who participated in genocide; Hannah Arendt makes that argument about Adolf Eichmann; Robert Gellately makes it about Germans’ support for Hitler.

It’s like an upside-down pyramid—the one little bad act enables and requires more and worse ones, since refusing to continue doing harm would require admitting to one’s self and others that the first act was bad. It means saying, “I did this bad (or stupid) thing,” and that’s hard. It’s particularly hard for people who equate identity and action, and who believe that only bad people do bad things, and only stupid people do stupid things; that is, people who believe in a stark binary of identity.

This way of thinking also causes people to “double down” on mistakes. In late 1942, about 250,000 Nazi soldiers in and around Stalingrad were in danger of being encircled by Soviet troops. Hitler refused to allow a retreat, opting instead for Goering’s plan of airlifting supplies. David Glantz and Jonathan House argue that Hitler was “trapped” by his previous decisions—to acknowledge the implausibility of Goering’s proposal (and it was extremely implausible) would amount to Hitler admitting that various decisions he had made were wrong, and that his generals had been right. Glantz and House don’t mean he was actually trapped—other decisions could have been made, but not by Hitler. He was trapped by his own inability to admit that he had been wrong. Rather than admit that he was wrong in his previous bad decisions, he proceeded to make worse ones. That’s the pyramid of harm.

The more walls the thief washes, the harder it is to say that the theft of the pie was a one-time mistake.

Don’t be the thief.


Seeds Over a Wall: Credibility

blooming cilantro

tl;dr Believing isn’t a good substitute for thinking.

As mentioned in the previous post, Secretary of Defense Robert McNamara, LBJ, Dean Rusk, McGeorge Bundy, and various other decision-makers in the LBJ Administration were committed to the military strategy of “graduated pressure” with, as H.R. McMaster says, “an almost religious zeal” (74). Graduated pressure was (is) the strategy of increasing military force in small increments in order to pressure the opponent into giving up. It’s supposed to “signal” to the opponent that we are absolutely committed, but open to negotiation.

It’s a military strategy, and the people in favor of it were not people with much (or sometimes any) military training or experience. There were various methods for people with military experience to advise the top policy-makers. Giving such advice is the stated purpose of the Joint Chiefs of Staff, for instance. There were also war games, assessments, memos, and telegrams, and the military experts’ assessments of “graduated pressure” ranged from dubious to completely opposed. The civilian advisors were aware of that hostility, but dismissed the judgments of military experts on the issue of military strategy.

It did not end well.

In the previous post, I wrote about binary thinking, with emphasis on the never/always binary. When it comes to train wrecks in public deliberation, another important (and false) binary is trustworthy/untrustworthy. That binary is partially created by other false binaries, especially the fantasy that complicated issues really have two and only two sides.

Despite what people think, there aren’t just two sides to every major policy issue—you can describe an issue that way, and sincerely believe that it has only two sides, but doing so requires misdescribing the situation, and forcing it into a binary. “The Slavery Debate,” for instance, wasn’t between two sides; there were at least six different positions on the issue of what should happen with slavery, and even that number requires some lumping together of people who were actually in conflict.

(When I say this to people, I’m often told, “There are only two sides: the right one and the wrong one.” That pretty much proves my point. And, no, I am not arguing for all sides being equally valid, “relativism,” endless indecision, compulsive compromise, or what the Other term is in that false binary.)

I’ll come back to the two sides point in other posts, but here I want to talk about the binary of trustworthy/untrustworthy (aka, the question of “credibility”). What the “two sides” fallacy fosters is the tendency to imagine credibility as a binary of Us and Them: civilian v. military advisors; people who advocate “graduated pressure” and people who want us to give up.

In point of fact, the credibility of sources is a very complicated issue. There are few (probably no) sources that are completely trustworthy on every issue (everyone makes mistakes), and some that are trustworthy on pretty much nothing (we have all known people whom we should never trust). Expertise isn’t an identity; it’s a quality that some people have about some things, and it doesn’t mean they’re always right even about those things. So, there is always some work necessary to try to figure out how credible a source is on this issue or with this claim.

There was a trendy self-help movement at one point that was not great in a lot of ways, but there was one part of it that was really helpful: the insistence that “there is no Santa Claus.” The point of this saying was that it would be lovely were there someone who would sweep in and solve all of our problems (and thereby save us from doing the work of solving them ourselves), but there isn’t. We have to do the work.[1] I think a lot of people treat a source (a media outlet, pundit, or political figure) as a Santa Claus who has saved them from the hard work of continually assessing credibility. They believe everything that a particular person or outlet says. If they “do their own research,” it’s often within the constraints of “motivated reasoning” and “confirmation bias” (more on that later).[2]

I mentioned in the first post in this series that I’m not sure that there’s anything that shows up in every single train wreck, except the wreck. Something that does show up is a particular way of assessing credibility, but I don’t think that causes the train wreck. I think it is the train wreck.

This way of assessing credibility is another situation that has a kind of Möbius strip quality (what elsewhere I’ve called “if MC Escher drew an argument”): a source is credible if and only if it confirms what we already believe to be true; we know that what we believe is true because all credible sources confirm it.

This way of thinking about credibility is comforting; it makes us feel comfortable with what we already believe. It silences uncertainty.

The problem is that it’s wrong.

McNamara and others didn’t think they were making a mistake in ignoring what military advisors told them; they dismissed that advice on the grounds of motivism, and that’s pretty typical. They said that military advisors were opposed to graduated pressure because they were limited in their thinking, too oriented toward seeking military solutions, too enamored of bombing. The military advisors weren’t univocal in their assessment of Vietnam and the policy options—there weren’t only two sides on what should be done—but they had useful and prescient criticism of the path LBJ was on. And that criticism was dismissed.

It’s interesting that even McNamara would later admit he was completely wrong in his assessment of the situation, yet wouldn’t admit that he was told so at the time. His version of events, in retrospect, was that the fog of war made it impossible for him to get the information he needed to have advocated better policies. But that simply isn’t true. McNamara’s problem wasn’t a lack of information—he and the other advisors had so very, very much information. In fact, they had all the information they needed. His problem was that he didn’t listen to anyone who disagreed with him, on the grounds that they disagreed with him and were therefore wrong.

McNamara read and wrote reports that listed alternatives for LBJ’s Vietnam policies, but they “poisoned the well”: the alternatives other than graduated pressure were not the strongest alternative policies; they were described in nearly straw-man terms, and dismissed in a few sentences.

We don’t have to listen to every person who disagrees with us, and we can’t possibly read every disconfirming source, let alone assess them. But we should be aware of the strongest criticisms of our preferred policy, and the strongest arguments for the most plausible of alternative policy options. And, most important, we should know how to identify if we’re wrong. That doesn’t mean wallowing in a morass of self-doubt (again, that’s binary thinking).

But it does mean that we should not equate credibility with in-group fanaticism. Unless we like train wrecks.









[1] Sometimes people who’ve had important conversion experiences take issue with saying there is no Santa Claus, but I think there’s a misunderstanding—many people believe that they’ve accomplished things post-conversion that they couldn’t have done without God, and I believe them. But conversion didn’t save them from doing any work; it usually obligates a person to do quite a bit of work. The desire for a “Santa Claus” is a desire for someone who doesn’t require work from us.

[2] Erich Fromm talked about this as part of the attraction of authoritarianism—stepping into that kind of system can feel like an escape from the responsibilities of freedom. Many scholars of cults point to the ways that cults promise that escape from cognitive work.

Seeds Over a Wall: Binary Thinking

primroses

Imagine that we’re disagreeing about whether I should drive the wrong way down a one-way street, and you say, “Don’t go that way—you could get in an accident!” And I say, “Oh, so no one has ever driven down a one-way street without getting into an accident?” You didn’t say anything about always or never. You’re talking in terms of likelihood and risk, about probability. I’m engaging in binary thinking.

What’s hard about talking to people about binary thinking is that, if someone is prone to it, they’re likely to respond with, “Oh, so you’re saying that there’s never a binary?” Or, they’ll understand you as arguing for what they think of as relativism—they imagine a binary of binary thinking or relativism.

(In other words, they assume that there’s a binary in how people think: a person either believes there’s always an obvious and clear absolutely good choice/thing and an obvious and always clear absolutely bad choice/thing OR a person believes there’s no such thing as good v. bad ever. That latter attitude is often called “relativism,” and binary thinkers assume it’s the only possibility other than their own approach. So, they’re binary thinkers about thinking, and that makes talking to them about it difficult.)

“Binary thinking” (also sometimes called “splitting” or “dichotomous thinking”) is a cognitive bias that encourages us to sort people, events, ideas, and so on into two mutually exclusive categories. It’s thinking in terms of extremes like always or never—so if something doesn’t always happen, then it must never happen. Or if someone says you shouldn’t do something, you understand them to be saying you should never do it. Things are either entirely and always good, or entirely and always bad.

We’re particularly prone to binary thinking when we’re stressed, tired, or faced with an urgent problem. What it does is reduce our options, and thereby seems to make decision-making easier; it does make decision-making easier, but easy isn’t always good. There’s some old research suggesting that people faced with too many options get paralyzed in decision-making, and so find it easier to make a decision if there are only two options. There was a funny study long ago in which people had an option to taste salsas—if there were several options, more people walked by than if there were only two. (This is why someone trying to sell you something—a car, a fridge, a house—will try to get you to reduce the choice to two.)

Often, it’s a false dichotomy. For instance, the small circle of people making decisions about Vietnam during the LBJ Administration kept assuming that they should either stick with the policy of “graduated pressure” (which wasn’t working) or pull out immediately. It was binary thinking. While there continues to be considerable disagreement about whether the US could have “won” the Vietnam conflict, I don’t know of anyone who argues that graduated pressure could have done it. Nor does anyone argue there was actually a binary–there were plenty of options other than either graduated pressure or an immediate pull-out, and they were continually advocated at the time.

Instead of taking seriously the options advocated by others (including the Joint Chiefs of Staff), what LBJ policy-makers assumed was that they would either continue to do exactly what they were already doing or give up entirely. And that’s a common false binary in the train wrecks I’ve studied–stick with what we’re doing or give up, and it’s important to keep in mind that this is a rhetorical move, not an accurate assessment of options.

I think we’ve all known people who, if you say, “This isn’t working,” respond with, “So, you think we should just give up?” That isn’t what you said.

“Stick with this or give up” is far from the only binary that traps rhetors into failure. When Alcibiades argued that the Athenians either had to invade Sicily or betray Egesta, he was invoking the common fallacy of brave v. coward (and ignoring Athens’ own history). A Spartan rhetor used the same binary (go to war with Athens or you’re a coward) even while disagreeing with a brave general who clearly wasn’t a coward, and who had good reasons for arguing against war with Athens at that moment.

One way of defining binary thinking is: “Dualistic thinking, also known as black-and-white, binary, or polarized thinking, is a general tendency to see things as good or bad, right or wrong, and us or them, without room for compromise and seeing shades of gray” (20). I’m not wild about that way of defining it, because it doesn’t quite describe how binary thinking contributes to train wrecks.

It isn’t that there was a grey area between graduated pressure and an immediate pull-out that McNamara and others should have considered (if anything, graduated pressure was a gray area between what the JCS wanted and pulling out entirely). The Spartan rhetor’s argument wouldn’t have been a better one had he argued that the general was sort of a coward. You can’t reasonably solve the problem of which car you should buy by buying half of one and half of the other.

The mistake is assuming that initial binary—of imagining there are only two options, and you have to choose between them. That’s binary thinking—of course there are other options.

When I point out the problems of binary thinking to people, I’m often told, “So, you’re saying we should just sit around forever and keeping talking about what to do?”

That’s binary thinking.



Seeds Over a Wall: Thoughts on Train Wrecks in Public Deliberation

a path through bluebonnet flowers

I’ve spent my career looking at bad, unforced decisions. I describe them as times that people took a lot of time and talk to come to a decision they later regretted. These aren’t times when people didn’t know any better—all the information necessary to make a better decision was available, and they ignored it.

Train wrecks aren’t particular to one group, one kind of person, one era. These incidents I’ve studied are diverse in terms of participants, era, consequences, political ideologies, topics, and various other important qualities. One thing that’s shared is that the interlocutors were skilled in rhetoric, and relied heavily on rhetoric to determine and advocate policies that wrecked the train.

That’s how I got interested in them—a lot of scholars of rhetoric have emphasized times that rhetors and rhetoric saved the day, or at least pointed the way to a better one. But these are times that people talked themselves into bad choices. They include incidents like: pretty much every decision Athens made regarding the Sicilian Expedition; Hitler’s refusal to order a fighting retreat from Stalingrad; the decision to dam and flood the Hetch Hetchy Valley (other options were less expensive); eugenics; the LBJ Administration’s commitment to “graduated pressure” in Vietnam; Earl Warren’s advocacy of race-based mass imprisonment; US commitment to slavery; Puritans’ decision to criminalize Baptists and Quakers.

I’ve deliberately chosen bad decisions on the part of people who can’t be dismissed as too stupid to make good decisions. Hitler’s military decisions in regard to invading France showed considerable strategic skill—while he wasn’t as good a strategist as he claimed, he wasn’t as bad as his generals later claimed. Advocates of eugenics included experts with degrees from prestigious universities—until at least WWII, biology textbooks had a chapter on the topic, and universities had courses, if not departments, of eugenics. It was mainstream science. Athenians made a lot of good decisions at their Assembly, and a major advocate of the disastrous Sicilian Expedition was a student/lover of Socrates’. LBJ’s Secretary of Defense Robert McNamara was a lot of things, but even his harshest critics say he was smart.

The examples also come from a range of sorts of people. One temptation we have in looking back on bad decisions is to attribute them to out-group members. We try to dismiss in-group decisions that turned out badly on the grounds that they weren’t really bad decisions, that there was no choice, or that an out-group is somehow really responsible for what happened.[1] (It’s interesting that that way of thinking about mistakes actively contributes to train wrecks.) The people who advocated the damming and flooding of the Hetch Hetchy Valley were conservationists and progressives (their terms for themselves, and I consider myself both[2]). LBJ’s social agenda got us the Voting Rights Act, the Civil Rights Act, and Medicare, all of which I’m grateful for. Earl Warren went on to write the Brown v. Board decision, for which I admire him.

In short, I don’t want these posts to be in-group petting that makes Us feel good about not being Those People. This isn’t about how They make mistakes, but how We do.

A lot of different factors contributed to each of these train wrecks; I haven’t determined some linear set of events or decisions that happened in every case, let alone the one single quality that every incident shares (I don’t think there is, except the train wrecking). It’s interesting that apparently contradictory beliefs can be present in the same case, and sometimes held by the same people.

So, what I’m going to do is write a little bit about each of the factors that showed up at least a few times, and give a brief and broad explanation. These aren’t scholarly arguments, but notes and thoughts about what I’ve seen. In many cases (all?) I have written scholarly arguments about them in which I’ve cited chapter and verse, as have many others. If people are interested in my chapter and verse version, then this is where to start. (In those scholarly versions, I also cite the many other scholars who have made similar arguments. Nothing that I’m saying is particularly controversial or unique.)

These pieces aren’t in any particular order—since the causality is cumulative rather than linear, there isn’t a way to begin at the beginning. It’s also hard to write about this without at least some circularity, or at least backtracking. So, if someone is especially interested in one of these, and would like me to get to it, let me know.

Here are some of the assumptions/beliefs/arguments that contribute to train wrecks and that I intend to write about, not necessarily in this order:

Bad people make bad decisions; good people make good ones
Policy disagreements are really tug-of-war contests between two sides
Data=proof; the more data, the stronger the proof
The Good Samaritan was the villain of the story
There is a single (but not necessarily simple) right answer to every problem
That correct course of action is always obvious to smart people
What looks true (to me) is true—if you don’t believe that, then you’re a relativist
Might makes right, except when it doesn’t (Just World Model, except when not)
The ideal world is a stable hierarchy of kiss up/kick down
All ethical stances/critiques are irrational and therefore equally valid
Bad things can only be done by people who consciously intend to do them
Doing something is always better than doing nothing
Acting is better than thinking (“decisiveness” is always an ideal quality)
They cherry-pick foundational texts, but Our interpretations distinguish the transient from the permanent
In-group members and actions shouldn’t be held accountable (especially not to the same degree as out-group members and actions)

There are a few other qualities that often show up:
Binary thinking
Media enclaves
Mean girl rhetoric
Short-term thinking (Gus Johnson and the tuna)
Non-falsifiable conspiracy theories that exempt the in-group from accountability
Sloppy Machiavellianism
Tragic loyalty loops


[1] I’m using “in-“ and “out-“ groups as sociologists do, meaning groups we’re in, and groups against whom we define ourselves, not groups in or out of power. We’re each in a lot of groups, and have a lot of out-groups. Here’s more information about in- and out-groups. You and your friend Terry might be in-group when it comes to what soccer teams you support but out-group when it comes to how you vote. Given the work I do, I’m struck by how important a third category is: non in-group (but not out-group). For instance, you might love dogs, and for you, dog lovers are in-group. Dog-haters would be out-group. But people who neither love nor hate dogs are not in-group, yet not out-group. One of the things that happens in train wrecks is that the non in-group category disappears.

[2] For me, “conservatives” are not necessarily out-group. Again, given the work I do, I’ve come to believe that public deliberations are best when there is a variety of views considered, and “conservatism” is a term used in popular media, and even some scholarship, to identify a variety of political ideologies which are profoundly at odds with each other. Libertarianism and segregation–both called “conservative” ideologies by popular media–are not compatible. Our political world is neither a binary nor a continuum of ideologies.

“Defeats will be defeats.”

copy of book--Foreign Relations of the US, Vietnam, 1964

“Defeats will be defeats and lassitude will be lassitude. But we can improve our propaganda.” (Carl Rowan, Director of the US Information Agency, June 1964, FRUS #189 I: 429).

In early June of 1964, major LBJ policy-makers met in Honolulu to discuss the bad and deteriorating situation in South Vietnam. SVN was on its third government in ten months (there had been a coup in November of 1963 and another in January of 1964), and advisors had spent the spring talking about how bad the situation was. In a March 1964 memo to LBJ, Secretary of Defense Robert McNamara reported that “the situation has unquestionably been growing worse” (FRUS #84). “Large groups of the population are now showing signs of apathy and indifference [….] Draft dodging is high while the Viet Cong are recruiting energetically and effectively [….] The political control structure extending from Saigon down into the hamlets disappeared following the November coup.” A CIA memo from May has this as the summary:

“The over-all situation in South Vietnam remains extremely fragile. Although there has been some improvement in GVN/ARVN performance, sustained Viet Cong pressure continues to erode GVN authority throughout the country, undercut US/GVN programs and depress South Vietnamese morale. We do not see any signs that these trends are yet ‘bottoming out.’ During the next several months there will be increasing danger that an assassination of Khanh, a successful coup, a grave military reverse, or a succession of military setbacks could have a critical psychological impact in South Vietnam. In any case, if the tide of deterioration has not been arrested by the end of the year, the anti-Communist position is likely to become untenable.” (FRUS #159)

At that June meeting, Carl Rowan presented a report as to what should be done, and he summarized it as: “Defeats will be defeats and lassitude will be lassitude. But we can improve our propaganda.” (FRUS #189). This is a recurrent theme in documents from that era, including military ones—the claim that effective messaging could solve what were structural problems. It didn’t. It couldn’t.

I was briefly involved in MLA, and I spent far too much time at meetings listening to people say that declining enrollments in the humanities could be solved by better messaging about the values of a humanistic education; I heard the same thing in far too many English Department meetings.

Just to be clear (and to try to head off people telling me that a humanistic education is valuable), I do not disagree with the message. I disagree that the problem can be solved by getting the message right, or getting the message out there. I’m saying that the rhetoric isn’t enough.

I am certain that there are tremendous benefits, both to an individual and to a culture, in a humanistic education, especially studying literature and language(s). That’s why I spent a career as a scholar and teacher in the humanities. But, enrollments weren’t (and aren’t) declining just because people haven’t gotten the message. There were, and are, declining enrollments for a variety of structural reasons, most of which are related to issues of funding for university educations. The fact is that the more that college costs, and the more that those costs are borne by students taking on crippling debt, the more that students want a degree that lands them a job right out of college.

Once again, I am not arguing that’s a good way for people to think about college; I am saying that the reason for declining enrollments isn’t something we can solve by better messaging about the values of a liberal arts education. For the rhetorical approach to be effective (and ethical) it has to be in conjunction with solving the structural problems. Any solution has to involve a more equitable system of funding higher education.

I am tired of people blaming the Dems’ “messaging” for the GOP’s success. I thought that Dem messaging was savvy and impressive. They couldn’t get it to enough people because people live in media enclaves. If you know any pro-GOP voters, then you know that they get all their information from media that won’t let one word of that message reach them, and that those voters choose to remain in enclaves. How, exactly, were the Dems supposed to reach your high school friend who rejects as “librul bullshit” anything that contradicts or complicates what their favorite pundit, youtuber, or podcaster tells them? What messaging would have worked?

The GOP is successful because enough people vote for the GOP and not enough vote against them. Voter suppression helps, but what most helps is anti-Dem rhetoric.

Several times I had the opportunity to hear Colin Allred speak, and his rhetoric was genius. It was perfect. Cruz didn’t try to refute Allred’s rhetoric; all Ted Cruz had to do was say, over and over (and he did), that Allred supported transgender rights.

From the Texas Observer: “Cruz and his allied political groups blitzed the airwaves with ads highlighting that vote and Allred’s other stances in favor of transgender rights. The ads, often featuring imagery of boys competing against girls in sports, reflected what Cruz’s team had found from focus groups and polling: Among the few million voters they’d identified who were truly on the fence, the transgender sports topic was most effective in driving support to Cruz, said Sam Cooper, a strategist for Cruz’s campaign.”

Transphobia is not a rhetorical problem that can be ended by the Dems getting the message right. Bigotry is systemic. Any solution will involve rhetoric, and rhetoric is important. But it isn’t enough.

Writing is hard; publishing is harder.

marked up draft of a book ms


In movies, struggling writers are portrayed as trying to come up with ideas. In my experience as a writer and teacher of writing, that isn’t the hard part. Ideas are easy, and are much better in our head than on the page, so a very, very hard part of writing is getting the smart and elegant ideas in our head to be comprehensible to someone else, let alone either persuasive or admired. But the even harder part is submitting something we’ve written—sending it off to be judged. It feels like the first day of sending a child to middle school—will they be bullied? Will they make friends? Will they change beyond recognition?

And I think there’s another reason that submitting a piece of writing is so hard. Our fantasies about what is going to happen when we submit a piece of writing are always more pleasurable than any plausible reality.

Somerset Maugham has a story called “Mirage,” about a man named Grosely whom he knew when he was a medical student. Grosely was spending his time and money on wine and women, and eventually came up with a scam to get more money. He was caught, arrested, and kicked out of school. He became a kind of customs official in China, and, desperate to get back to partying in London, was as corrupt as possible: “He was consumed by one ambition, to save enough to be able to go back to England and live the life from which he had been snatched as a boy.”

After 25 years, he did go back to England, and he did try to live the life he’d lived at nineteen. But he couldn’t, of course. London was different, and so was he, and it was all a massive disappointment. He started to think about China, and what a great place it had been, and what a great time he could have there with all the money he’d made. So he headed back. He got almost to China, but stopped just shy of it (in Haiphong). Maugham explains:

“England had been such a terrible disappointment that now he was afraid to put China to the test too. If that failed him he had nothing. For years England had been like a mirage in the desert. But when he had yielded to the attraction, those shining pools and the palm trees and the green grass were nothing but the rolling sandy dunes. He had China, and so long as he never saw it again he kept it.”

I read that story as a graduate student trying to write a dissertation, and it resonated. As long as I didn’t finish the dissertation, I could entertain outrageous fantasies about its reception, quality, and impact. Once submitted, it was what it was. It was passable. (And unpublishable.) It was not anything like what I’d imagined it could be.

I have felt that way about every single piece of writing since (including this blog post)—I’m hesitant to finish it because of not wanting to give up the dream of what it could be.

Every writer of any genre has a lot of partially-written things. I knew a poet who actually had a drawer in his desk into which he dropped pieces of paper onto which he’d written lines that came to him that seemed good, but he didn’t have the rest of the poem. I don’t know if he ever pulled any of those pieces of paper out and wrote the rest of the poem (he did publish quite a bit of poetry). There’s nothing wrong with having a lot of incomplete projects, and lots of good reasons to leave them incomplete.

I once pulled out a ten-year old unsubmitted and unfinished piece of writing, revised it, and submitted it—it was published, and won an award. It took ten years for me to understand what that argument was really about, so leaving it unfinished for that long wasn’t a bad choice at all. There are others that will remain forever unfinished—also not a bad choice.

But there are times when one should just hit submit. The dreams may not come true, but there will be other pieces of writing about which we can dream.

I’m saying all this because I hope people who might be stuck in their writing will find it hopeful. Just hit submit.

The Writer’s Progress

“The most unreliable indication of whether your writing is good or bad is how you feel about it.” —Susan Wells

marked up ms. draft

I’m in the process of significantly rewriting an article (based on readers’ comments), and last night I dreamed that I wanted to get to the top of a building, and I couldn’t find the right way to get there. I kept taking stairs that led me elsewhere. Some of those other places were interesting, and some were dreary, but they weren’t where I was trying to go. I often have dreams like that when I’m in the Slough of Despond stage of a writing process.

A friend mentioned that when she is writing, she has dreams about trying to drive a vehicle that is too big, unfamiliar, or unwieldy.

When I was writing my dissertation, I had moments of thinking that I couldn’t possibly write a dissertation, but I did. At several points in every writing project I have found myself convinced I can’t do it, there’s no point in my doing it, and all my previous successes were meaningless flukes. It’s much like the Slough of Despond in Bunyan’s Pilgrim’s Progress:

This miry Slough is such a place as cannot be mended; it is the descent whither the scum and filth that attends conviction for sin doth continually run, and therefore is it called the Slough of Despond: for still as the sinner is awakened about his lost condition, there ariseth in his soul many fears, and doubts, and discouraging apprehensions, which all of them get together, and settle in this place; and this is the reason of the badness of this ground.

Pilgrim’s Progress is a (very boring) book about the soul’s progress toward salvation, told in the form of “Christian” and his travels/travails. Initially, Christian is travelling with someone called Pliable, and they both fall into a bog. Pliable at that point gives up the journey entirely,

and angrily said to his fellow, Is this the happiness you have told me all this while of? If we have such ill speed at our first setting out, what may we expect between this and our journey’s end? May I get out again with my life, you shall possess the brave country alone for me. And with that he gave a desperate struggle or two, and got out of the mire on that side of the slough which was next to his own house: so away he went, and Christian saw him no more.

Christian gets help, appropriately enough from someone named Help, who explains that there are ways out of the bog, but it is despair itself that traps travellers.

I’m writing this because I wish it were something that I had understood as a junior scholar. There may be people out there for whom writing a lot is a necessary part of their profession and who find it easy, or who go from start to finish on a project without falling into a bog, getting lost in a building, or driving a vehicle that just isn’t working the way it should. But I don’t know them.

The only way out of the bog is through. The moment of loss of confidence isn’t a moment of truth (although some of the insights I have in those moments are true—the introduction of this article really does suck pretty hard); it’s just a moment.

Is Satire a Useful/Effective Strategy with Trump Supporters?

2009 Irish tug of war team
https://en.wikipedia.org/wiki/Tug_of_war#/media/File:Irish_600kg_euro_chap_2009_(cropped).JPG

I’m often asked this question, and the answer is: it depends on the nature of their support, what we mean by “useful/effective,” and what we mean by “satire.”

1) Why do people support Trump?

There are, obviously, many reasons, and sometimes it’s a combination. But, for purposes of talking about satire’s effect, I’ll mention five:

A) Political Sociopathy (some people call it “political narcissism”). These are people who support Trump because they believe he will pass policies that will benefit them in the short run—lower taxes, eliminate environmental and employment protections, protect the wealthy from accountability, privatize public goods, and so on. They don’t care what the consequences will be for others, or what the long-term consequences might be—they only care that it will benefit them (they believe). Hence sociopathy.

B) Eschatological Understanding of Politics. This way of thinking isn’t necessarily explicitly religious—a certain kind of American Exceptionalism as well as Hegelian readings of history are also in this category, even if apparently secular. It reads history as inevitably headed toward [something]. That “something” might be: American hegemony, world capitalism, the return of Jesus, Armageddon, racial/ethnic domination, fascism, a people’s revolution, in-group dominance. People with this understanding don’t care about politics in terms of reasonable disagreements about policy, or even about specifics, but in terms of the apocalyptic battle or necessary triumph.

C) Resentment of Libs. (Stigginit to the libs.) These are people who support anything—regardless of its consequences even for them—that they believe pisses off “liberals.” That “liberals” are a hobgoblin, and that this orientation leads to “Vladimir’s Choice,” doesn’t much matter.

D) Charismatic Leadership. This is a relationship that people have with a leader (sometimes several leaders). They believe that the leader is a kind of savior (the sacralized language is often striking), an embodiment of the in-group, who should be given unlimited power and held unaccountable so that they can “fight” on behalf of real people like them. (Charismatic leadership is often some kind of authoritarian populism—maybe always.)

E) Purity Politics. This group includes people who are radically committed to banning abortion—although there are policies that demonstrably reduce abortion, these people refuse to support them because they believe those policies (e.g., accurate sex education, access to effective birth control) are also sinful. Supporting birth control isn’t radically pure enough for them—that their policy will result in deaths in the short term doesn’t matter to them. They refuse to be pragmatic about short-term improvements or short-term devastation. People who refuse to vote for opposition candidates because that party or candidate isn’t radical enough are also in this category.

One characteristic shared among all of these kinds of supporters—in my experience—is a tragic informational cycle: they refuse to look at anything that contradicts or complicates what they believe about Trump. They only pay attention to pro-Trump demagoguery because they believe that the entire complicated world of policy options and disagreements is really a tug-of-war between two groups. (A lot of people who aren’t Trump supporters think about politics that way—it isn’t helpful.)

2) Useful/Effective at what?


A) Persuading the interlocutor. People often assume that the point in engaging someone with whom we disagree is to get them to adopt our point of view.

B) Persuading bystanders. Sometimes, however, we aren’t trying to persuade the person with whom we’re disagreeing, but others who might be watching the disagreement.

C) Getting the topic off of Trump. Often, we’re just trying to get them to drop the subject, to let us enjoy dinner, a holiday, a coffee break, or whatever without talking about Trump, or taking swipes at the hobgoblin of “libs.” In other words, just get them to STFU about that topic.

D) Undermine the in- and out-group binary. We might want them to recognize the harm of their support for Trump and/or his policies—that is, to persuade them to empathize with an out-group.

3) What do we mean by “satire”?
The loose category of satire can mean: stable irony (a statement with a clear meaning that is not the literal statement—saying, “Great weather” in the midst of a nasty storm); unstable irony (the rhetor clearly doesn’t mean the literal statement, but it isn’t clear what they do mean); parody (which might be loving, as in the case of Best in Show, or critical, as in the case of many Saturday Night Live sketches about politics); or Juvenalian satire (scathing and often scatological). (Wikipedia has a useful entry on satire.)

So, the short answer to the question about effectiveness of satire on Trump supporters is: it depends on the kind of supporter, what effect we’re trying to have, and what kind of satire we use.

Many people object to satire because they assume that it’s insulting, and believe we should always rely on kindness and reason. But, as Jonathan Swift famously said: “Reasoning will never make a Man correct an ill Opinion, which by Reasoning he never acquired.” (For more on this quote and various versions, see this.) This is not always true, but it is true that a person has to be open to changing their mind for a reasoned argument to work. And not everyone is. So, if we’re talking to a Trump supporter who is open to changing their mind—that is, whose beliefs about Trump and Trump’s policies are falsifiable–then satire might not be the best strategy. But, to be blunt, I haven’t run across a Trump supporter whose beliefs can be falsified through reasoned argument in a long time.

The first category—the person who is in it for their own short-term gain, and who might actually hate Trump—can seem to be “rational” insofar as they’re engaged in an apparently amoral calculation of costs and benefits, but that calculation (in my experience) is generally grounded in some version of the “just world model.” (That people who are wealthy/powerful/dominant deserve to be wealthy/powerful/dominant.) So, their reason for supporting policies that benefit them in the short term isn’t falsifiable. They can be persuaded, sometimes, on very specific points about specific policies. Sometimes. Satire certainly doesn’t alienate them (although it might piss them off); it neither strengthens nor weakens their support.

Similarly, people whose support comes from an eschatological view of history/politics have a non-falsifiable narrative (they’re very prone to conspiracy theories), and, in my experience, have long since dug in. So, similarly, satire neither strengthens nor weakens their support.

The “stigginit to the libs” type of person sometimes changes their mind about Trump when they or someone they love gets harmed by Trump’s policies, so—sometimes—their support can be falsified by personal experience. But reasonable argumentation is right off the table—they like that Trump critics get frustrated with how unreasonable they are. (And, often, they have an unreasonable understanding of reasonable argumentation.)

Satire can increase their resentment of “libs,” especially if it hits close to home, but it isn’t as though some other rhetorical strategy would work. In theory, what should work would be some strategy of rejiggering their sense of in- and out-groups, or that creates empathy for an out-group, but I’m not sure I’ve seen that happen short of direct personal experience.

Satire, including the Juvenalian, can shame some people into shutting up, or allowing a change of subject, but I think other strategies are more effective (like refusing to engage). It can also have an impact on observers, but whether satire will cause them to feel sorry for the Trump supporters, get mad at libs, or distance themselves from Trump support/ers varies from person to person.

Satire can be very effective for people in a charismatic leadership relationship because it emphasizes that they look foolish. Their fanatical commitment is not, as they want to see it, a deeply personal and reciprocated loyalty, but gullibility. They’ll deflect by projecting their own fanatical commitment onto “libs,” and so it can be useful to insist that the conversation stay on the stasis of their commitment. After all, it doesn’t matter if there are people equally fanatically committed to Biden or whoever—two wrongs don’t make a right. Biden supporters might howl at the moon and eat broken glass; that doesn’t mean that fanatical support of Trump is reasonable.

That Biden supporters might be wrong about something doesn’t mean Trump supporters are right. (And vice versa. Our political world is not, actually, a tug-of-war between two groups.)

And that points to one problem with satire of a group—what’s wrong with our political discourse is the fundamental premise that politics is a zero-sum battle between two groups. So, any satire that confirms that premise is, I think, problematic, and much of it does.

The final group is the purity politics folks. For those people, politics is a performance of purity; there’s a kind of narcissism to it. I don’t know whether satire would do much either to shame or persuade them—personally, I’ve never found that kind of person open to persuasion (regardless of where they are on the political spectrum).

So, to answer the original question: it depends.


Why Was Hitler Elected?

nazi propaganda poster saying "death to Marxism"


Despite the fact that invoking Hitler in arguments is so kneejerk that there’s even a meme about it, a surprising number of people misunderstand the situation. They don’t realize, for instance, that he was elected; he was even voted into dictatorship. So, why was he elected?

I want to focus on four factors that are commonly noted in scholarship but often absent from or misrepresented in popular invocations of Hitler: widespread resentment effectively mobilized by pro-Nazi rhetoric, an enclave-based media environment, authoritarian populism, and agency by proxy/charismatic leadership.

I. Resentment

Resentment is often defined as a sense of grievance against a person, but grievances come in various kinds, including ones that motivate positive personal change or political action. Resentment is grievance drunk on jealousy. It’s common to distinguish jealousy from envy on the grounds that, while both involve being unhappy that someone has something we don’t, jealousy means wanting it taken from the other. If I envy someone’s nice shirt, I can solve that problem by buying one for myself; jealousy can only be satisfied if they lose that shirt, they are harmed for having the shirt, or the shirt is damaged. My jealousy can even be satisfied without my getting a shirt—as long as they lose theirs. Jealousy is zero-sum, but envy is not.

Resentment is a zero-sum hostility toward others whom I think look down on me. Although I feel victimized that they have something I don’t, I don’t necessarily want what they have. I do, however, want them to lose it; I want them crushed and humiliated for even having it. Resentment relies on a sense that others have things to which I am entitled, and they are not. In addition, resentment always has a little bit of unacknowledged shame.

Many Germans (most? all?) resented the Versailles Treaty, and they resented losing the Great War. They resented the accusation that they were responsible for the war, they resented that they lost a war they believed they were entitled to win, and they resented a treaty as punitive as the kind they were accustomed to impose on others.

Certainly, the Versailles Treaty was excessively punitive, but it was, oddly enough, fair—at least in the sense that it was an eye for an eye. Germans didn’t condemn equally punitive treaties they had imposed on others (e.g., the Treaty of Frankfurt or the 1918 Treaty of Brest-Litovsk). They resented being treated as they felt entitled to treat others.

The war guilt clause was a particular point of resentment, and yet it was partially true. The notion that one nation and one nation only can cause a world war is implausible—few wars are monocausal—but certainly Germany held a large part of the responsibility. Yet Germans liked to see themselves as forced into a war they didn’t want—they were the real victims and completely blameless. Were the Germans actually truly blameless, I don’t think the Treaty of Versailles would have been as useful a tool for mobilizing resentment.

Sloppy pan-Germanism mixed with even sloppier Social Darwinism promoted a narrative that victory always goes to the strongest, and that therefore whoever won deserved to win. The German defeat in the Great War was therefore a massive blow to German ideology—if the best always win, and the winners are always the best, that loss stuck in the craw. People who enjoyed Nazi rhetoric resented that they lost a war they felt entitled to win.

Important to Nazi mobilization of that resentment was continually reminding audiences of it—there are few (any?) Hitler speeches in which he didn’t remind his audience of the humiliation of the Versailles Treaty, and of the way that other nations looked down on and victimized Germany.

Did everyone else really look down on Germany? The French probably did, but that’s just because they looked down on everyone. Some British and Americans did; some didn’t. But the Germans certainly looked down on everyone. So, like the resentment about punitive treaties, they weren’t on principle opposed to people looking down on others; they just resented when they thought they were being looked down on.

The Versailles Treaty didn’t actually end the fighting. Pogroms, forced emigration, violent clashes, and genocides raged through Eastern and Central Europe well after the war, causing a massive immigration crisis. And, as often happens, people resented the immigrants. They also resented liberals, intellectuals, Jews, and various other groups that they imagined looked down on them.

Resentment is an act of projection and imagination.

hitler smiling at a child


II. Enclave-based media environment

Weimar Germany had a lot of political parties (around forty, depending on how you count them), which can loosely be categorized as: Catholic, communist, conservative, fascist, liberal (in the European sense), and socialist. They all had their own media (mostly newspapers), and many of them were rabidly partisan in their coverage without admitting to being partisan. [The antebellum era in the US was much the same.]

The important consequence of this factionalized media landscape was that it was possible for a person to remain fully within an informational enclave: foundational narratives, myths, premises, and outright lies were continually repeated. Repetition is persuasive. Since factional media wouldn’t present criticism or even critics fairly (or at all), it was possible for someone to feel certain about various events and yet be completely wrong. Germany was not about to win the war when it capitulated.

Sometimes the narratives were specific (e.g., the claim that The Protocols of the Elders of Zion documented the plot of international Jewry), and sometimes they were about broader historical events, or history itself. One of the most important narratives was the shape-shifting “stab in the back” myth about the Great War. This myth said that Germany was just about to win the war, and would have, but the nation was stabbed in the back, and therefore had to accept a humiliating treaty. As Richard Evans has shown, just who stabbed the back, or when, or why, or even what back, varied considerably. Like a lot of myths, it was simultaneously detailed and inconsistent.

Another important narrative, similarly both specific and vague, was about the course of history as a survival-of-the-fittest conflict undermined by liberal democracy. This narrative typically cast Jews as intractably incapable of patriotism, assimilation, or German identity. German exceptionalism denied the actual heterogeneity of the German people.

Of those six kinds of political parties, three were explicitly and actively hostile to democracy, either advocating a return to the monarchy (Catholic) or a new system entirely (fascist, communist). Some “conservative” parties advocated a return to monarchy, some advocated some other kind of authoritarian government, and some at least seemed willing to accommodate democratic decision-making practices. Only the liberals and socialists actively supported democracy (communists wanted a Marxist-Leninist revolution and dictatorship of the proletariat, whereas socialists agreed with Marxist critiques of unconstrained capitalism but wanted reform via democratic processes; “liberals” believed in a free market and democratic processes of decision-making).

What’s important about this kind of media environment is that it undermines democratic practices because it enables the demonization or dismissal of anyone who significantly disagrees. Repetition is persuasive. If you are repeatedly told that socialists want to kick bunnies, and never hear from socialists what they actually advocate, then you’ll believe that socialists want to kick bunnies. That makes them people who shouldn’t be included in the decision-making process at all; it personalizes policy disagreements. Policy disagreements, rather than being opportunities for arguing (even vehemently arguing) about the ads/disads, costs, feasibility, and so on of our various policy options, become a contest of groups.

Tl;dr If you only get your information from in-group sources, then chances are that you never hear the most reasonable arguments for out-group policies; therefore everyone who is not in-group will seem unreasonable. Not hearing the arguments leads to refusing to listen to the people.

Repetition coupled with isolation from reasonable counterarguments radicalizes.

Hitler looking at a map with generals


III. Authoritarian populism

One way to misunderstand how persuasion works is to imagine out-groups and their leaders as completely and obviously evil—by refusing to understand what some people find/found attractive about such leaders, we make ourselves feel more secure (“I would never have supported Hitler”), and thereby ignore that we might get talked into supporting someone like that.

Nazism is a kind of “authoritarian populism.” Populism is a political ideology that posits that politics is a conflict between two kinds of people: the real people, whose concerns and beliefs are legitimate, moral, and true, and a corrupt, out-of-touch, illegitimate elite who are parasitic on the real people. Populism is always anti-pluralist: there is only one real people, and they are in perfect agreement about everything. As Muller says, populism is “a moralized form of antipluralism” (20).

Populism becomes authoritarian when the narrative is that the real people have been so oppressed by the “elite” that they are in danger of extermination. At that point, there are no constraints on the behavior of populists or their leaders. This rejection of what are called “liberal norms” (not in the American sense of “liberal” but the political theory one), such as fairness, change from within, deliberation, and transparent and consistent legal processes, is the moment that a populist movement becomes authoritarian (and Machiavellian).

As Muller says, “Populists claim that they, and they alone, represent the people” (Muller 3). Therefore, any election that populists lose is not legitimate, and any election they win is, regardless of what strategies they’ve used to win. Violence on the part of the in-group is admirable and always justified, purely on the grounds that it is in-group violence. The in-group is held to lower moral standards while claiming the moral high ground.

Authoritarian populism always has an intriguing mix of victimhood, heroism, strength, and whining. Somehow whining about how oppressed “we are” and what meany-meany-bo-beanies They are is seen as strength. And that is what much of Hitler’s rhetoric was—so very, very much whining.

And that is something else authoritarian populism offers: a promise of never being held morally accountable, as long as you are a loyal (even fanatical) member of the in-group (the real people).

In authoritarian populism, the morality comes from group membership, and the values the group claims to have—values which might have literally nothing to do with whatever policies they enact or ways they behave.


IV. Charismatic leadership/agency by proxy


Authoritarian populism needs an authority to embody the real people. It’s fine if they’re actually elite (many people were impressed by Hitler’s wealth). Kenneth Burke talked about the relationship in terms of “identification”—they saw him as their kind of guy. They imagined a seamless connection with him. In charismatic leadership relationships, the followers attribute all sorts of characteristics to their leader (which the leader may or may not actually have): extraordinary health, almost superhuman endurance, universal genius, a Midas touch, infallible and instantaneous judgment, and a perfect understanding of what “normal” people like and want.

In general, people give intention/motive-based explanations for good behavior on the part of the in-group and for bad behavior on the part of the out-group, and situational explanations for good behavior on the part of the out-group and for bad behavior on the part of the in-group.

So, if Hubert (in-group) and Chester (out-group) give a cookie to a child (good behavior), then it shows that Hubert is good and generous (motive/intention), but Chester only did so because he was forced by circumstances (situational).

If Hubert (in-group) and Chester (out-group) both steal a cookie from a child (bad behavior), then it was because Hubert didn’t see the child, the child shouldn’t have been eating the cookie, he had no choice (situational explanations), but Chester stealing the cookie was deliberate and because Chester is evil.

One sign, then, of a charismatic leadership relationship is whether the follower holds the leader to the same standards of behavior as out-group leaders, or whether they flip the intention/situation explanations in order to hold on to the narrative that the in-group is essentially good.

What we get from a charismatic leadership relationship is a fairly simple way of understanding good and bad—it reduces moral complexity and uncertainty. Since our group is essentially good, we are guaranteed moral certainty simply by being a loyal member. And that is what Hitler promised.

Because Hitler is like us, and really gets us, we are powerful—we take pride in everything he does; we have agency by proxy.

But, because we identify with him, our attachment to him means we will not listen to criticism of him—criticism of him is an attack on our goodness. Our support becomes non-falsifiable, and therefore outside the realm of reasonable disagreement about him, his actions, or his policies.

Charismatic leadership is authoritarian. But oh so very, very pleasurable.




Sources:

There are still lots of arguments among scholars about Hitler, the Germans, and the Nazis, but nothing I’m saying here is either particularly controversial or something I’ve come up with on my own.

While it is a mistake to attribute magical qualities to Hitler’s rhetoric, and to attribute the various genocides and disasters to him personally (as though his personal magnetism destroyed agency on the part of Germans), it is also a mistake to think the rhetoric was powerless. Germans elected him because they liked what he had to say.

There was a time when scholars insisted that Hitler’s rhetoric wasn’t that great (an argument that Ryan Skinnell’s forthcoming book will show was an accusation made at the time, and one that completely misses the rhetorical force of Hitler’s strategies). That insistence was partially a reaction to the immediate post-war deflection of German responsibility for the war, the Holocaust, and the various genocides (the argument was that Germans were overwhelmed by Hitler’s rhetoric, or secretly hated him—neither was true).

There are many excellent biographies of Hitler; the ones written after the opening of the records captured by the Soviets are the most useful. Kershaw’s writings are especially readable, but Volker Ullrich’s and Peter Longerich’s biographies have been able to take advantage of more recent research (if I were asked to recommend just one biography, it would be Longerich’s). Richard Evans’ three-volume study of Nazism (coming to power, in power, at war) is thorough and makes clear the enthusiastic participation of various other leaders.

Adam Tooze’s Wages of Destruction is a compelling and detailed analysis of the economy under the Nazis, and Nicholas Stargardt’s The German War shows the considerable support Nazis had throughout the war. There are a lot of books about the media and Hitler, but I think the best place to start is Despina Stratigakos’ Hitler at Home. Robert Gerwarth’s The Vanquished is a powerful discussion of the aftermath of the Great War.