The Politics of Purity

From the cover of Wayne Booth’s _Modern Dogma and the Rhetoric of Assent_

My area of expertise is how communities make bad decisions—train wrecks in public deliberation. These are the times when communities, big and small, made a decision that resulted in an unforced disaster.

And the way this happens is oddly consistent. From the Athenians deciding to invade Sicily to Robert McNamara refusing to listen to good advice about Vietnam, individuals and communities that make disastrous decisions have a similar approach to disagreement:

• The most persuasive/powerful rhetors persuade large numbers of people that this actually complicated issue is really just a question of dominance between Us and Them (and Them is always a hobgoblin).

• The more that oppositional rhetors accept that false framing of policy questions—Us v. Them—the more they help (unintentionally or intentionally) those who hold the most power in the community. They’re helping to prevent thorough deliberation about the complicated situation.

• Once things are framed this way, then legitimate questions of policy can’t get argued in reasonable ways. If you disagree about in-group policy, then you’re really consciously or unconsciously out-group. Public disagreements aren’t about whether a proposed policy is feasible, likely to solve the problem, worth what it’s likely to cost, might have unintended consequences—they’re really about who you are and where your loyalties are.

• Instead of trying to give voters useful information about the policy agendas of various groups, media accepts the frame of policy disagreements as really a conflict between two groups, and proceeds to treat policy disagreements through a motivistic and horse-race frame because it seems “objective.” It isn’t. It’s toxic af, and it depoliticizes politics.

• Even worse is the rhetoric that reframes policy disagreements as an issue of dominance. As though, instead of people who can work together reasonably to find good solutions, politics is some kind of thunderdome fight.

What I’m saying, and have tried to say in so many books, is that the first error that makes a train wreck likely is to deflect the responsibilities of reasonable policy argumentation by saying that there is no such thing as reasonable disagreement about this issue. In those circumstances, to ask for reasonable policy deliberation on this issue is taken as proof that you’re not really in-group, and therefore you can be ignored. Under those circumstances, we too often end up with a politics of purity.

There’s an unfortunately expensive book, Extremism and the Psychology of Uncertainty, that is a collection of essays from a symposium of political psychologists. And what turns up again and again in that book is that, when faced with an uncertain and complicated situation, people have a tendency to become more “extreme” in their commitment to the in-group. I would say that the scholars are describing a desire for more in-group purity—that the in-group should expel or convert dissenters, members of the in-group should be more purely committed, the in-group should refuse to work with other groups, and the policies should be more pure. While I understand why the scholars in the book describe this process as more “extreme,” I think it’s more useful to think about it in terms of purity. After all, it’s very possible for people to believe that we must purify ourselves of everyone who isn’t a centrist.

By “politics of purity” I mean a rhetoric (and policy agenda) that says that our problems are caused by the presence in the in-group of people who are not fully committed to an individual (the leader), a specific policy agenda, or the group. In any of its three forms (and they’re not fully distinct, as I’ll explain below), the attraction of this approach to politics comes, I think, from its mingling of ways of thinking: the power of belieeeeeving, what I think of as the P-Funk fallacy, the just world fallacy, what Erich Fromm calls “escape from freedom,” social dominance orientation, and the process(es) described by the political psychologists in the collection mentioned above. (Probably a few others.)

If you take all that and create a politics of purity oriented toward an individual (people must have a pure faith in the leader), then it’s charismatic leadership. The advantage to a leader of creating a politics of purity about an individual is that, as Hitler observed, policies can be completely reversed without losing followers. It’s worth remembering that, even as Allied troops were pushing in from the west and Soviet troops from the east, and the horrors of the Holocaust were indisputable, about 25% of Germans still supported Hitler. They believed he’d been badly served by his underlings. For complicated reasons, this is pretty common–admitting that one’s commitment to a leader is irrational, let alone a mistake, is incredibly difficult for people. Often, in-group members don’t even know what the leader’s policies are, and are therefore completely wrong about what the leader has done, is doing, or intends to do.

It’s also important to note that charismatic leadership never operates on its own. People enter a charismatic leadership relationship because there is an effective media apparatus promoting a particular narrative about the leader. In fact, refusing to pay attention to criticism of the leader is one of the ways that people keep their commitment pure.

Insisting on a pure commitment to a policy agenda has a pretty clear history of factionalism, splitting, heresy-hunting, and even politicide, generally to the detriment of the group and, paradoxically enough, of its ability to get its agenda through. There’s so much purifying of the group (i.e., expelling heretics) that there isn’t time for making strategic alliances with partially compatible individuals or groups. And, often, such alliances are demonized (often literally, as in the history of Christianity–just think about the wars of extermination waged against other Christians).

The second (purity of commitment to a specific policy agenda), I think, tends to morph either into the first (charismatic leadership, as happened with Stalin) or the third (a pure commitment to the group). It seems to me that, in the latter case, it’s a charismatic leadership relationship, but oriented toward the group, and it has all the dangers of charismatic leadership. “Believe, obey, fight,” as Mussolini said–he didn’t say, “Reason. Listen. Deliberate.”

There’s inevitably a move to retell history in terms of what will enhance obedience and fanatical loyalty rather than accuracy. Instead of hagiographies about an individual leader, there are histories of the group that are entirely positive, triumphalist, and dismissive of criticism. Orwell talked a lot about this in various writings, especially Homage to Catalonia and his journalism.

What all three politics of purity do is depoliticize politics, by expelling, criminalizing, demonizing, or dismissing reasonable disagreements about policies. They characterize disagreement as a failure on the part of some people to see what is obviously the correct course of action.

We disagree about policies not because there are people who have gone into Plato’s cave and emerged knowing the true policies we all need to have, and others who are looking at shadows on the wall, but because any policy affects different people in different ways. While not all positions are equally valid, I don’t think there is a policy on any major issue that is the only reasonable one. We disagree about policies because, as Hannah Arendt says, political action is always a leap into the uncertain and unknown.




Recurrent terms in my posts




Authoritarianism. There’s a lot of scholarly debate about how to define authoritarianism, and it has to do with some scholars wanting a definition that includes ideology, epistemology, government, psychology, even parenting sometimes. And so there are different definitions because people are trying to do different things with those definitions—nothing wrong with that. For purposes of thinking about rhetoric and train wrecks, I have found the most productive way to think about authoritarianism is as in-group favoritism on steroids, coupled with a sense that stability is the ideal and that only rigid hierarchies of dominance/submission provide stability.

Briefly, I use the term “authoritarians” for people who believe that societies should be controlled by people at the top of a pyramidal hierarchy (with, obviously, the person or the group at the top the purest in-group), with power and accountability flowing down. That is, people are only accountable to people above them in the hierarchy, and not to anyone below. An authoritarian system doesn’t imagine “justice” as something that should be applied to everyone the same way, nor that “fairness” is treating everyone equally. “Justice” is a system in which everyone “gets what they deserve,” meaning in-group members get more, and out-groups get less (if anything).

Therefore, people at different places in the hierarchy are treated differently. It’s kiss-up and kick-down. Subordinates are responsible for managing the feelings of superiors. Thus, “self-control” is equated with dominating those below; so, paradoxically, people at the top of the hierarchy are allowed to throw temper tantrums (that is, lose control) as long as the tantrums are directed downwards. Authoritarian systems put a lot of emphasis on control through fear.

Authoritarianism constrains public deliberation in several ways. Only in-group members are allowed to participate in deliberation, and even then only those toward the top. They might deliberate with each other in order to make decisions that are announced to those below them, who can only deliberate with others of a similar level about how to enact the dicta; they then tell those below what to do. In addition, authoritarianism tends to presume that there is an obviously correct answer to every problem; dissent and diversity of perspective/opinion are seen as destabilizing, as creating fractures in the stable hierarchy. Authoritarians therefore almost always treat instilling obedience as the objective of education, and that means they believe that education should never involve any criticism of the in-group (including facts about past in-group failures or unethical behavior). Authoritarians tend to think in binaries, and an important binary is shame v. honor. Criticism is always shame, and shame undermines obedience, so the “higher Truth” is always a version of events favorable to the in-group.

Authoritarianism isn’t particular to politics (cults are authoritarian), or necessarily connected to one specific policy agenda.

And here we have a moment of Trish Crank Theory time. I’ve read all sorts of authoritarians–from Alkibiades to the Weathermen (that’s alphabetical, rather than historical)–and what’s consistent is that they reason deductively from major premises about groups. That’s interesting.

Demagoguery. In Demagoguery and Democracy and Rhetoric and Demagoguery I define demagoguery as “discourse that promises stability, certainty, and escape from the responsibilities of rhetoric by framing public policy in terms of the degree to which and the means by which (not whether) the out-group should be scapegoated for the current problems of the in-group. Public disagreement largely concerns three stases: group identity (who is in the in-group, what signifies out-group membership, and how loyal rhetors are to the in-group); need (the terrible things the out-group is doing to us, and/or their very presence); and what level of punishment to enact against the out-group (ranging from the restriction of the out-group’s rights to the extermination of the out-group).”

Escape from freedom. Erich Fromm argued that freedom requires choice and responsibility, and inherently means making mistakes. For many people, that level of freedom (the freedom to) is terrifying, and so they escape from the responsibilities of freedom by becoming part of a kiss-up/kick-down hierarchy. They want a system in which they’re told what to do, so that they’re never responsible for bad outcomes. Being part of that hierarchy means they get the pleasure of ordering others around, while escaping the anxiety that comes from making decisions, and the accountability for any outcome.

In-group favoritism. We have a tendency to favor an in-group in various ways, most of which mean holding the in-group (and especially in-group leaders) to lower standards than out-groups (especially the Out-group) while claiming the moral high ground. Because we believe that the in-group is essentially good, we find ways to justify/rationalize anything in-group members do. For instance, we attribute good motives to in-group members and bad motives to out-group members for exactly the same behaviors. We explain the same behaviors differently:

people explain away good behavior on the part of the out-group and bad behavior on the part of the in-group

In-group favoritism always involves various kinds of bad math. An in-group political figure (Chester) might be caught having kicked twenty puppies, and an out-group political figure (Hubert) might be caught having kicked one puppy. Pro-Chester media and Chester’s supporters will treat Hubert’s one puppy-kicking incident as worse than Chester’s (despite the numerical difference) or use it to deflect discussing Chester’s puppy kicking. The one incident erases the twenty.

Similarly, one example of bad behavior on the part of an out-group member is proof about the essence of the out-group, who they really are, but the same is not true of in-group members. The bad behavior, or the bad in-group member, is an exception (or not really in-group).

That’s bad math. One is not the same as twenty.

In-group/out-group. The “in-group” is a group we’re in (not necessarily the group in power). We have a lot of in-groups, some of which are tremendously important to our sense of self (e.g., Christian, American) and some that only intermittently become salient (e.g., rhetoric scholars, Austin resident). There are groups that are not in-group but not particularly important to our identity (I tend to refer to them as non in-groups), and there are groups against whom we identify ourselves. That opposition is crucial to our sense of what it means to be “American” or “Christian.” It’s almost as though we couldn’t have a sense of what it means to be “American” unless we had the concept of “foreigner” (out-group). We take pride in who we are because we are not Them. Sometimes there is an Out-group (an Other) who is, more or less, the evil twin of our in-group. For many evangelical Christians, Muslims are the Other; for much of Christianity, it was Jews. That Other often has little or nothing to do with how members of that group actually are. Often, the Other is a hobgoblin—an imagined and non-falsifiable stereotype.

Just World Fallacy (aka “just world model”). The just world fallacy/model assumes and asserts that people get what they deserve, and people deserve what they get. If bad things happen to a person, they did something that caused it to happen. This cognitive bias is tremendously comforting and non-falsifiable. It’s also always ableist and victim blaming.

Motivism/motivistic (aka “appeal to motive fallacy”). We’re engaged in motivism when we refuse to engage a reasonable argument on the grounds that the person making the argument has bad motives. People only do this with opposition arguments (I don’t think I’ve ever run across a person dismissing an in-group argument on the grounds that the person making it has bad motives). It’s important to note that this is a fallacy when the interlocutor whom we’re dismissing has made a reasonable argument. I often give the advice that you don’t have to engage with someone whose position on the issue is non-falsifiable, who is not engaged in good faith argumentation. You can if you like rattling chains or poking fire ants’ nests, but it’s generally a waste of time. This fallacy is sometimes categorized as a kind of ad hominem (a fallacy of relevance).

So, for instance, if you’ve rejected everything I’ve said in this post on the basis that I’m an out-group member, then your position is fallacious. If I’m wrong, show I’m wrong through reasonable argument instead of flicking this away like something that scares you too much to engage.

P-Funk fallacy. This is sort of unfair to P-Funk, but I like the quote: “If you free your mind, your ass will follow.” People often seem to assume that things have gone wrong because we didn’t approach them with the right theory. If we get our theory (or beliefs) right, then good actions will necessarily follow, and so they spend a lot of time trying to get everyone to agree on the principles. (It’s like a bad Platonic dialogue.) There’s nothing wrong with trying to make sure a group is oriented toward the same goals, at least in the abstract—to be able to answer the question, “What the hell are we trying to do here?” And it’s useful to try to figure out what caused a problem that we’re trying to solve. The problematic hidden assumption is that there is such a thing as getting the theory right (that there is One Right Theory), and that there is one real cause for any problem (what’s usefully called “a monocausal narrative”). Such a claim is often in service of denying legitimate disagreement by saying that we can derive from the One Right Theory (or the One Right Narrative) the One Right Policy.

There was a time when people seemed to describe every bad incident as “a perfect storm,” and I realize that got tedious, perhaps because it’s almost always true that the big failures and disasters are multicausal. Were I Queen of the Universe, you couldn’t graduate from high school without understanding the concept of “necessary but not sufficient.” Widespread and deep hostility to Jews was necessary for the Holocaust but not sufficient. As Ian Kershaw said, “No Hitler, no Holocaust.” But, were it not for that deep and wide hostility, Hitler wouldn’t have risen to power.

I’m making two points. First, the solution to our problems is not to get everyone to agree on The One Right Theory—univocality can itself be a problem, and it’s unlikely that there is One Right Theory that gets it all exactly right. Second, what is probably more useful to talk about is what are the several necessary but not sufficient conditions or factors that led to this problem. Such a way of approaching problems implies that there is also a variety of possible policy responses to any situation—not that all are equally good, but that deductively determining The Right Policy from The Right Theory is both fallacious and harmful.

Politicide. The sociologist Michael Mann has an extraordinary, albeit depressing, book about mass killing based on group identity. One part of his argument is that, unhappily, people who are trying to create a new nation-state with an ethos often choose to equate the national ethos with an ethnos. And that necessarily means purifying the new state of the people not in that ethnos. The non in-groups.

So, as both Kenneth Burke (in “The Rhetoric of Hitler’s ‘Battle’”) and the Wizard of Oz (in Wicked) point out, one very straightforward way of unifying a disparate group is to find a common enemy.[1] Mann notes that it isn’t always an ethnic group. Mass killing might happen to a religious minority (religicide, as in the Spanish Inquisition), an economic or social class (classicide, Khmer Rouge in Cambodia), or political group, politicide (mass killing of people whose politics present a threat, as in Argentina and Chile).

Power of belieeeeeving. This is the one that makes people way mad at me when I mention it. It’s a kind of magical thinking, and maybe a subset of the just world model. It’s also complicated because there’s a bit of truth to it (the more that a person thinks in binaries, the more truth there seems to be). It’s promoted in a lot of dodgy self-help rhetoric (not all self-help rhetoric is dodgy–I’ve found a lot of it tremendously helpful), scams, and heartless policies. It says that you can succeed at anything if you just belieeeeeeve enough.

The sensible version is that you should adopt policies you believe can work–whether it’s about personal change, military action, or public policy–but having faith doesn’t exempt you from taking practical action to achieve your ends: “Trust in God but keep your powder dry.”

There’s a kind of narcissism in thinking that God will rearrange the world because of your faith, as though the people opposing you don’t also have faith. I’m not against praying (I do it every day), but history shows that radical and fanatical faith is not a guarantee of success. Hitler was wrong when he said, “Where there’s a will there’s a ferry.” He was wrong to think that sheer will could enable the soldiers to withstand Russian winters.

Social Dominance Orientation. This is a way of describing the preference that some people have for hierarchical systems. People with a social dominance orientation tend to be Social Darwinists (which is neither Darwinian nor social).

[1] I’d like to believe that this is not the first time that Kenneth Burke and a musical have been cited together.

Consumerism and Democracy

Marjorie Taylor Greene saying she voted for a bill she neither read nor understood



At various moments in my career, I was the Director of the first-year composition program, and so dealt with grade complaints. My sense was that about 1/3 of the complaints were misunderstandings; in about 1/3 the instructor had really screwed up. For both of those, I was grateful that a student (or several) had complained, since it was an issue that needed to be resolved at the institutional level.

But the other third was … something. There were all sorts of odd things. But MTG’s defense of her failure to do her job competently reminds me of something I ran across when responding to some students in that other third. More than once, I found myself talking to a student who had not read the assignment sheet (never mind the syllabus) or paid attention in class, and who therefore failed to meet the assignment criteria. They were complaining to me because they sincerely believed that their not having met the assignment criteria was the fault of the teacher. They didn’t dispute that the information was in writing that they had been given and told to read, nor that other students understood the assignment just fine. In other words, there was never any claim that the information (about due dates, grading criteria, and so on) hadn’t been communicated. They admitted that they hadn’t read/listened. But their argument was that the instructor was at fault for the student having ignored the information they’d been given, because the instructor’s rhetoric wasn’t good enough.

That narrative of causality–the student failed to meet the criteria of the assignment because the teacher/rhetor wasn’t persuasive enough–is an instance of the transmission model of communication (a good rhetor transmits the message effectively to a passive recipient). It’s also a consumerist model of education: students are consumers, passively waiting to be sold a product. The teacher’s job is to sell the product effectively.

But students aren’t consumers, and teachers aren’t selling a product. You can talk and think about education this way, but that doesn’t make it good.

I used to use this analogy with my students. Imagine that you’re a server in a restaurant, and your boss says, “Those people over there need water.” And you aren’t clear what table your boss means. If, later, your boss asked whether you’d given that table water, and you said that you weren’t sure what table, so you didn’t do anything, what do you think would happen? And they’d say, “I’d get fired.”

Learning something–anything–doesn’t mean being a passive container into which information is poured; it’s an action that requires agency.

This way of thinking is often applied to politics, both the consumerist model of relationships and the transmission model of communication.[1] It’s a train wreck way of thinking about communication of any sort, but especially politics.

The consumerist model of going to a restaurant does apply to the actual customers—the consumers of what the restaurant offers. They can choose to come back to that restaurant or not on the basis of whether that restaurant gives them what they want. If you choose not to buy a car because the advertising doesn’t really speak to you, or you choose to buy a car because you love the rhetoric about that car, well, you do you. You can buy (consume) whatever car you choose on whatever bases you choose.

Currently, the dominant model for thinking about politics is that voters are consumers. This is a recent model, from the mid-twentieth century, as far as I can tell.[2] The argument is that if voters like the “message” a party projects, then they should vote for that party. It’s up to the party and political figures to provide an appealing product (policies, rhetoric). Voters are passively sitting at the table waiting to be sold a product. Like the student who doesn’t feel obligated to read the syllabus or assignment sheet, voters aren’t seen as responsible for educating themselves about the issues.

In a democracy, voters aren’t sitting at a table wanting to be given the most pleasing product in the most timely manner.

Voters are the servers.

[1] One of my many crank theories is that the transmission model of communication is necessarily tied to consumerist models of interactions, but I’m not sure why that is the case.
[2] Prior to that, voters were described as needing to be able to deliberate—you can see that in The Federalist Papers, for instance. The reasoning behind the electoral college, and the indirect election of Senators, was that voters didn’t have enough information to deliberate about the competence of people they didn’t know.


Make politics about policies, not high stakes tug-of-war

2009 Irish tug of war team
https://en.wikipedia.org/wiki/Tug_of_war#/media/File:Irish_600kg_euro_chap_2009_(cropped).JPG

Pro-GOP media and supporters have long committed themselves to a view of politics as a zero-sum battle between the fantasy of an “Us” and a hobgoblin of “Them.” This rhetorical strategy goes at least as far back as McCarthyism, but Limbaugh was relentlessly attached to it, as is Fox News. They aren’t alone in this (I first became familiar with this way of thinking about politics when arguing with Stalinists, Libertarians, and pro-PETA folks many, many years ago). For various reasons, it’s working better for the GOP than it is for critics of the GOP, or Dems, or various other groups.

1) Demagoguery posits an Us (Good Persons) and a Them (Bad People With Bad Motives), and says that the correct course of action is obvious to every and any Good Person. While there are rhetors all over the political spectrum (it’s a spectrum, not a binary or continuum) who appeal to the false Us v. Them, the most anti-democratic and dangerous demagoguery relies on there being a third group—one that is unhuman (associated with terms and metaphors of animals or diseases)—and one of the things that characterizes Them is that They don’t recognize the danger of the animalistic group.

For Nazis, Roma and Jews were the dehumanized group, and liberals and socialists were the Them that didn’t recognize the danger. For proslavery rhetors, enslaved people and freed African Americans were the dehumanized group, and abolitionists and critics of slavery were the Them that didn’t recognize the danger. PETA used to dehumanize farmers and ranchers, and the Them was people who continued to buy animal products.[1]

Regardless of who does it–whether in- or out-group–we need to object when rhetors dehumanize humans.

2) The media has long promoted a (false, incoherent, but easy and profitable) framing of policy questions as a horse race or tug-of-war between two groups. The “continuum” model is just as inaccurate, and just as incoherent. When I point out that it’s false, I’m told, “But everyone uses it.” That’s a great example of the “bandwagon” fallacy. “Everyone” used the substance v. essence distinction for hundreds of years. “Everyone” bled people to cure diseases for over a thousand years.

Our world is not actually two groups; it is a world of people with different values, needs, and policy agendas. Media treating policy disagreements as a fight between two groups is a self-fulfilling description insofar as it teaches people to treat policy options as signals of in-group commitment rather than, well, policy options.

A person might be genuinely committed to reducing crime in an area. That commitment doesn’t necessarily mean they should be opposed to or in favor of more reliance on “Own Recognizance” release rather than bail, decriminalizing various activities, increasing infrastructure expenditure in that area, increasing punishment, privatizing prisons, or applying the death penalty more often. The relationship between and among those policies is complicated in all sorts of ways, and the data as to which policy strategy is most likely to reduce crime is also complicated. Each of those topics is a policy issue that is complicated, nuanced, and uncertain, and something that should be argued as a complicated, nuanced, and uncertain issue and not as a tug-of-war between good and evil.

Not everyone who believes that abortion should be criminalized also believes that our death penalty system is just, for instance. Despite how many media portray issues, neither of the major parties has a consistent policy agenda from one year to the next—keep in mind that, as recently as the overturning of Roe v. Wade, major figures in the GOP said there would not be a federal ban on abortion. They were not speaking for every member of their party, as was immediately made clear. Republicans disagree with each other about whether bipartisanship is a virtue, about gay rights, and about tariffs. Dems disagree with each other about universal health care, the death penalty, and how to respond to climate change. As they should.

Talking about politics in terms of a contest between two groups means we don’t argue policies. Policies matter.

Most important, a person persuaded that the death penalty should be applied more often, but who believes that people who disagree have a legitimate point of view—a pluralist (which is different from a relativist)—enhances democracy, whereas a person who believes that every and anyone who disagrees with them is spit from the bowels of Satan is an authoritarian, regardless of whether they’re pro- or anti-death penalty.

Democracy depends upon values like pluralism, fairness, equality before the law. Media needs to talk about extremism in regard to those values, not one’s stance on a policy. The continuum model falsely conflates the two–a person who believes in universal health care is not more “extreme” in terms of their commitment to democracy than someone who believes that anyone who wants a change to our system is a dangerous radical who should be silenced, if not deported. The media would call that latter person a centrist. They aren’t.

Treating politics as a conflict between identities mobilizes an audience, and is therefore more profitable, but it is, at least, proto-demagogic, and it inhibits (and often prohibits) reasonable deliberations about our complicated policy options.

(And, just to be clear, so does a “let’s all just get along” way of approaching politics—if we think that “civility” is being nice to each other, and refraining from saying anything that hurts the feelings of anyone else, then we’re still avoiding the hard work of reasonably, and passionately, arguing about policy.)

So, if we want less demagoguery, we need to abandon a demagogic way of talking about politics. Stop talking about two sides. Talk about policies.

3) Mean girl rhetoric. A junior high mean girl (Regina) who wants to be friends with Jane is likely to do it in three steps. First, she tells Jane that Sally says terrible things about Jane. She’ll pick things about which Jane is at least a little insecure. “Sally keeps making fun of your acne.” “Sally says you’re fat.” Then she’ll badmouth Sally, thereby creating a bond between herself and Jane—they are unified against the common enemy (Sally). Sally may or may not have said those things—Regina might have entirely lied, taken something out of context, or even been the one to say the crappy things to Sally. Regina will continue to strengthen the bond with Jane by continually telling her about crap Sally is supposed to have said. Regina thereby creates resentment against Sally—“who is she to say I’m fat?”

The insecurity is necessary for the bonding, so, oddly enough, it’s Mean Girl who has to keep making Jane insecure by repeating what Sally may or may not have said. She has to keep fuelling that resentment.

If you pay attention to demagogic media, they spend a lot of time talking about the terrible things They say about Us. Sometimes someone in the out-group did say it, but often it’s a misrepresentation. Most often it’s cherry-picking. We tend to see the in-group as heterogeneous, but out-groups as homogeneous. So, while We are all individuals, any member of the out-group can stand for all of Them. That means demagogic media can find some minor out-group figure and use it to foment resentment against the out-group in general.

Find the best opposition arguments on policy issues before dismissing the Other as blazing idiots. Don’t rely on entirely in-group sources.

4) Demagogic media holds the in- and out-group to different standards. In fact, it holds the in-group to no standards at all other than fanatical commitment to the in-group.

Here’s what I mean. Imagine that we’re in a world that is polarized between Chesterians and Hubertians, and we’re Hubertians. Hubertian media finds some Chesterian Assistant to the Assistant Dog Catcher in North Northwest Small Town who has said something terrible about Hubertians, perhaps called for violence against us. If our media is going to use that as proof that Chesterians are out to exterminate us, then, if we want to have a reasonable argument, we have to admit that any Hubertian who has ever called for exterminating Chesterians is proof that we are out to exterminate Chesterians.

If what one member of the out-group says can be used to characterize what everyone other than the in-group says—if that’s a reasonable way to think about political discourse—then it’s reasonable for Them to characterize Us on the basis of what any in-group member says, no matter how marginalized.

If we don’t hold the in- and out-group to the same standards, then our position is unreasonable. We’re also rejecting Jesus, but that doesn’t generally matter to followers of demagogic media.

Hold in- and out-group media, rhetors, and political figures to the same standards of argument, ethics, legality, and accountability. If you won’t, then you’re an authoritarian.

Pro-GOP media isn’t the only media doing these things. (I’ve seen exactly this rhetoric in regard to raw food for dogs.) But if someone replies to this post by telling me that “Both Sides Are Bad,” I will point out that they have completely misread my argument. They are applying the false model of two sides that enables and fuels demagoguery. Saying “both sides are bad” is almost always in service of deflecting criticism of in-group demagoguery and is thereby participating in demagoguery.

If you don’t like demagoguery, stop engaging in it. That means stop talking about our political situation as a tug-of-war between two sides. Argue policies, acknowledge diversity and complexity, and seek out the smartest opposition arguments.

[1] There are various anti-GOP rhetors whom I cannot watch now that I’ve retired (studying demagoguery is my job, not something I do for fun), and I used them in classes as examples of demagoguery, but even I will admit that they don’t openly dehumanize some group the way that many pro-GOP rhetors dehumanize immigrants. They irrationalize “conservatives” and engage in a lot of motivism, but don’t equate “conservatives” with animals, viruses, and so on to the same extent. I’ve been told that dehumanizing metaphors don’t play as well with people who self-identify as “conservative,” and that’s why such rhetors avoid them, but I don’t know.






Progressives are children of the Enlightenment

bee on a flower

I loathe putting my thesis first (the thesis-first tradition is directly descended from people who didn’t actually believe that persuasion is possible), but here I will. The way that a lot of liberals, progressives, and pro-democracy people are talking about GOP support for authoritarianism is neither helpful nor accurate. Both the narrative about how we got here and the policy agenda for what we should do now are grounded in assumptions about rhetoric that are wrong. And they’re narratives and assumptions that come from the Enlightenment.

I rather like the Enlightenment—an unpopular position, even among people who, I think, are direct descendants of it. But, I’ll admit that it has several bad seeds. One is a weirdly Aristotelian approach of valuing deductive reasoning.

In an early version of this post, I wrote a long explanation about how weird it is that Enlightenment philosophers all rejected Aristotle but they actually ended up reasoning like he did—collecting data in service of finding universally valid premises. I deleted it. It wouldn’t have made my argument any clearer or more effective. I too am a child of the Enlightenment. I want to go back to sources.

Here’s what matters: syllogistic reasoning starts with a universally valid premise and then makes a claim about a specific case. “All men are mortal, and Socrates is a man, so Socrates must be mortal.” Inductive reasoning starts with the specific cases (“Socrates died; so did Aristotle; so did Plato”) in order to make a more general claim (“therefore, all Greek philosophers died”). For reasons too complicated to explain, Aristotle was associated with the first, although he was actually very interested in the second.

Enlightenment philosophers, despite claiming to reject Aristotle, had a tendency to declare something to be true (“All men are created equal”) and then reason, very selectively, from that premise. (It only applied to some men.) That tendency to want to reason from universally valid principles turned out to be something that was both liberating and authoritarian. Another bad seed was the premise that all problems, no matter how complicated, have a policy solution. There are two parts to this premise: first, that all problems can be solved, and second, that there is one solution. The Enlightenment valued free speech and reasonable deliberation (something I like about it), but in service of finding that one solution, and that’s a problem.[1]

The assumption was that enlightened people would throw off the blinders created by “superstition” and see the truth. So, like the authorities against whom they were arguing, they assumed that there was a truth. For many Enlightenment philosophers, the premise was that free and reasonable speech among reasonable people would enable them to find that one solution. The unhappy consequence was to try to gatekeep who participated in that speech, and to condemn everyone who disagreed—this move still happens in public discourse. People who agree with Us see the Truth, but people who don’t are “biased.”

The Enlightenment assumed a universality of human experience—that all people are basically the same—an assumption that directly led to the abolition of slavery, the extension of voting rights, public education. It also led to a vexed understanding of what deliberative bodies were supposed to do: 1) find the right answer, or 2) find a good enough policy. It’s interesting that the Federalist Papers vary between those two ways of thinking about deliberation.

The first is inherently authoritarian, since it assumes that people who have the wrong answer are stupid, have bad motives, are dupes, and should therefore be dismissed, shouted down, expelled. This way of thinking about politics leads to a cycle of purification (both Danton and Robespierre ended up guillotined).[2] I’m open to persuasion on this issue, but, as far as I know, any community that begins with the premise that there is a correct answer, and it’s obvious to good people, ends up in a cycle of purification. I’d love to hear about any counter-examples.

The second is one that makes some children of the Enlightenment stabby. It seems to them to mean that we are watering down an obviously good policy (the one that looks good to them) in order to appease people who are wrong. What’s weird about a lot of self-identified leftists is that we claim to value difference while actually denying that it should be valued when it comes to policy disagreements.

We’re still children of Enlightenment philosophers who assumed that there is a right policy, and that anyone who disagrees with us is a benighted fool.

Another weird aspect of Enlightenment philosophers was that they accepted a very old model of communication—the notion that if you tell people the truth they will comprehend it (unless they’re bad people). This is the transmission model of communication. Enlightenment philosophers, bless their pointed little heads, often seemed to assume that enlightening others simply involved getting the message right. (I think JQA’s rhetoric lectures are a great example of that model.)

I think that what people who support democracy, fairness, compassion, and accountability are now facing is a situation that has been brewing since the 1990s—a media committed to demonizing democracy, fairness, compassion, and in-group accountability. It’s a media that has inoculated its audience against any criticism of the GOP.

And far too many people are responding in an Enlightenment fashion—that the problem is that the Democratic Party didn’t get its rhetoric right. As though, had the Democratic Party transmitted the right message, people who reject on sight anything even remotely critical of the GOP would have chosen to vote Dem. Ted Cruz won reelection because he had ads about transgender kids playing girls’ sports. That wasn’t about rhetoric, but about policy.

We aren’t here because Harris didn’t get her rhetoric right. Republicans have a majority of state legislatures and governorships. This isn’t about Harris or the Dem party; this is about Republican voters. To imagine that Harris’ or the Dems’ rhetoric is to blame is to scapegoat. Blame Republican voters.

We are in a complicated time without a simple solution. Here is the complicated solution: Republicans have to reject what Trump is doing.

I think that people who oppose Trump and what he’s doing need to brainstorm ways to get Republican voters to reject their pro-Trump media and their kowtowing representatives.

I think that is a strategy necessary for our getting this train to stop wrecking, and I think it’s complicated and probably involves a lot of different strategies. And I think we shouldn’t define that strategy by deductive reasoning—I think this is a time when inductive reasoning is our friend. If there is a strategy that will work now, it’s worked in the past. So, what’s likely to work?





[1] The British Enlightenment didn’t make the rational/irrational split in the same way that the Cartesian tradition did. For the British philosophers, there wasn’t a binary between logic and feelings; for them, sentiments enhance reasonable deliberation, but the passions inhibit it.

[2] There’s some research out there that suggests that failure causes people to want to purify the in-group. My crank theory is that it depends on the extent to which people are pluralist.

Demagoguery and Disruption

various books that are or are about demagoguery

I’ve been asked why demagoguery rises and falls, more than once by people who like the “disruption” theory—that demagoguery is the consequence of major social disruption. The short version is that events create a set and severity of crises that “normal” politics and “normal” political discourse seem completely incapable of ameliorating, let alone solving. People feel themselves to be in a “state of exception,” in which things they would normally condemn seem attractive—anti-democratic practices, purifying of a community through ethnic/political cleansing, authoritarianism, open violation of constitutional protections.

When I first started working on demagoguery, I assumed that was the case. It makes sense, after all. And, certainly, in the instances that most easily come to mind when we think about demagoguery, there was major social disruption. Hitler rose to power in the midst of major social disruption: humiliation (the Great War and subsequent Versailles Treaty), economic instability (including intermittent hyperinflation), mass immigration from Central and Eastern Europe, an unstable government system.

And you can look at other famous instances of demagoguery and see social disruption: McCarthyism and the Cold War (specifically the loss of China), Charles Coughlin and the Great Depression, Athenian demagogues like Cleon and the Peloponnesian War, Jacobins and failed harvests. But, the more I looked at various cases, the weirder it got.

Take, for instance, McCarthyism and China. There are two questions never answered by people who blame(d) “the Democrats” for “losing” China: what plan would have worked to prevent Mao’s victory? And did Republicans advocate that plan (that is, would they have enacted it if in power)? McCarthy’s incoherent narrative was that spies in the State Department were [waves hands vaguely] somehow responsible for the loss of China. Were losing China a major social disruption, then China would have, until that moment, been seen as an important power by the people who framed its “loss” as a major international threat—American Republicans. But, prior to Mao’s 1949 victory, American Republicans were not particularly interested in China. In fact, FDR had to maneuver around various neutrality laws passed by Republicans in order to provide support for Chiang Kai-shek at all. After WWII, Republicans were still not very interested in intervening in China—they weren’t interested in China till Mao’s victory. So, why did it suddenly become a major disruption?

One possibility is that Mao’s victory afforded the rhetorical opportunity of having a stick with which to beat the tremendously successful Democrats (that’s Halberstam’s argument). If Halberstam is right, then demagoguery about China, communism, and communist spies was the cause, not the consequence, of social disruption. Another equally plausible possibility is that China becoming communist, and an ally of the USSR, took on much more significance in light of Soviet acquisition of nuclear weapons. So, the disruption led to demagoguery.

In other words, McCarthyism turned out not to be quite as clean a case as I initially assumed, although far from a counter-example.

Another problematic case was post-WWI demagoguery about segregation. In many ways, that demagoguery was simply a continuation of antebellum pro-slavery demagoguery, with added bits from whatever new “scientific” or philosophical movement might seem useful (e.g., eugenics, anti-communism). It wasn’t always at the same level, but tended to wax and wane. I couldn’t seem to correlate the waxing and waning to any economic, political, or social event, or even kind of event. What it seemed like was that it correlated more to specific political figures deciding to amp up the demagoguery for short-term gain (see especially Chapter Three).

Similarly, antebellum pro-slavery demagoguery didn’t consistently correlate to major disruptions; if anything, it often seemed to create them, or create a reframing of conflicts (such as with indigenous peoples). But, the main problem with the disruption narrative of causality is that I couldn’t control the variables—it’s extremely difficult to find a period of time when there wasn’t something going on that can be accurately described as a major disruption. Even if we look only at financial crises considered major (that is, a downturn in the economy that lasted for years), there were eight in the US in the 19th century: 1819, 1837, 1839, 1857, 1873, 1884, 1893, and 1896. Since several of these crises lasted for years, as much as half of the 19th century was spent in a major financial crisis.

And then there are other major disruptions. There were riots or uprisings related to slavery and race in almost every year of the 19th century. The Great Hunger in Ireland (1845-1852) and its later recurrence (1879), the 1848 revolutions in Western Europe, and various other events led to mass migrations of people whose ethnicity or religion was unwelcome enough to create major conflicts. And all of this is just the 19th century, and just the US.

Were demagoguery caused by crises, then it would always be full-throated, since there are always major crises of some kind. But it waxes and wanes, often to varying degrees in various regions, or among various groups, sometimes without the material conditions changing. Pro-slavery demagoguery varied in terms of themes, severity, popularity, but not in any way that I could determine correlated to the economic viability or political security of the system.

Anti-Japanese demagoguery was constant on the West Coast of the US from the late 19th century through at least the mass imprisonment in the 40s, but not as consequential or extreme elsewhere. One might be tempted to explain that discrepancy by population density, but there was not mass imprisonment in Hawaii, which had a large population of Japanese Americans. Anti-Judaism has never particularly correlated to the size (or even existence) of a local Jewish population; it’s not uncommonly the most extreme in situations almost entirely absent of Jews. And sometimes it’s impossible to separate the crisis from the demagoguery—as in the cases of demagoguery about fabricated threats, such as Satanic panics, stranger danger demagoguery, wild and entirely fabricated reports of massive abolitionist conspiracies, intermittent panics about Halloween candy.

I’ve come to think it has to do with two other factors: strategic threat inflation on the part of rhetors with a sufficiently large megaphone, and informational enclaves (and these two factors are mutually reinforcing). I’ve argued elsewhere that the sudden uptick in anti-abolitionist demagoguery was fueled by Presidential aspirations; Truman strategically engaged in threat inflation regarding Soviet intentions in his speech “The Truman Doctrine;” the FBI has repeatedly exaggerated various threats in order to get resources; General DeWitt fabricated evidence to support race-based imprisonment of Japanese Americans. These rhetors weren’t entirely cynical; I think they felt sincerely justified in their threat inflation, but they knew that they were exaggerating.

And threat inflation only turns into demagoguery when it’s picked up by important rhetors. Japanese Americans were not imprisoned in Hawaii, perhaps because DeWitt didn’t have as much power there, and there wasn’t a rhetor as important as California Attorney General Earl Warren supporting it.

In 1835, there was a panic about the AAS “flooding” the South with anti-slavery pamphlets that advocated sedition. They didn’t flood the South; they sent the pamphlets, which didn’t advocate sedition, to Charleston, where they were burned. But, the myth of a flooded South was promoted by people so powerful that it was referred to in Congress as though it had happened, and is still referred to by historians who didn’t check the veracity of the story.

And that brings up the second quality: informational enclaves. Demagoguery depends on people either not being aware of or not believing disconfirming information. The myth of Procter and Gamble being owned by a Satan worshipper (who was supposed to have gone on either Phil Donahue or Oprah Winfrey and announced that commitment) was spread for almost 20 years despite it being quite easy to check and see if any recording of such a show existed. The people I knew who believed it didn’t bother even trying to check. Advocates of the AAS mass-mailing demagoguery (or other fabricated conspiracy stories) only credited information and sources that promoted the demagoguery.

Once the Nazis or Stalinists had control of the media in their countries, the culture of demagoguery escalated. But, even prior to the Nazi silencing of dissent, Germany was in a culture of demagoguery—because people could choose to get all their information from reinforcing media, and many made that choice. Antebellum media was diverse—it was far from univocal—but people could choose to get all their information from one source. They could choose to live in an informational enclave. Many made that choice.

It didn’t end well.

Seeds Over a Wall: The Pyramid of Harm

flowers in front of a wall

My grandmother had a “joke” (really more of a parable) about a guy who sees a pie cooling in the window, and steals it. Unfortunately, he leaves a perfect handprint on the sill, so he sneaks into the house in order to wash off his handprint. But then it’s obvious that the sill has been washed, since it’s so much cleaner than the wall. So he washes that wall. It’s still obvious that something has happened because that one wall is so much cleaner than the others. By the time the police come, he’s repainting the attic.

You can tell this as a shaggy dog joke, with more of the steps between the sill and the attic. And, in a way, that’s how this situation often plays out, at least in regard to bad choices. Rather than admit the initial mistake, we get deeper and deeper into a situation; the more energy we expend to deflect the consequences of that first mistake, the more committed we are to making that expenditure worthwhile. So, we’re now in the realm of the “sunk cost” fallacy/cognitive bias. Making decisions on the basis of trying to retrieve sunk costs—also known as “throwing good money after bad”—enables us to deny that we made a bad decision.

In the wonderful book Mistakes Were Made, Carol Tavris and Elliot Aronson call this process “the pyramid of choice.” It’s usefully summarized here:

“The Analogy of the Pyramid (Tavris and Aronson, 2015). An initial choice—which is often triggered by the first ‘taking the temperature’ vote—amounts to a step off on one side of the pyramid. This first decision sets in motion a cycle of self-justification which leads to further action (e.g., taking a public stance during the group discussion) and further self-justification. The deeper down participants go, the more they can become convinced and the more the need arises to convince others of the correctness of their position.”

The example used by Tavris and Aronson is of two students who are faced with the choice of cheating or getting a bad grade on an exam. They are, initially, both in the same situation. One decides to cheat, and one decides to get the bad grade. But, after some time, each will find ways of not only justifying their decision, but they will be “convinced that they have always felt that way” (33).

In the equally wonderful book Denial, Jared Del Rosso describes a similar process for habituating a person to behaviors they would previously have condemned (such as engaging in torture). A prison guard or police officer is first invited to do something a little bit wrong; that small bad act is normalized, and then, once they’ve done that, it becomes easier to get them to do a little worse (Chapter 4). Christopher Browning describes a similar situation for Nazi Wehrmacht soldiers who participated in genocide; Hannah Arendt makes that argument about Adolf Eichmann; Robert Gellately makes it about Germans’ support for Hitler.

It’s like an upside-down pyramid—the one little bad act enables and requires more and worse ones, since refusing to continue doing harm would require admitting to one’s self and others that the first act was bad. It means saying, “I did this bad (or stupid) thing,” and that’s hard. It’s particularly hard for people who equate identity and action, and who believe that only bad people do bad things, and only stupid people do stupid things; that is, people who believe in a stark binary of identity.

This way of thinking also causes people to “double down” on mistakes. In late 1942, about 250,000 Nazi soldiers approaching and in Stalingrad were in danger of getting encircled by Soviet troops. Hitler refused to allow a retreat, instead opting for Goering’s plan of airlifting supplies. David Glantz and Jonathan House argue that Hitler was “trapped” by his previous decisions—to acknowledge the implausibility of Goering’s proposal (and it was extremely implausible) would amount to Hitler admitting that various decisions that he had made were wrong, and that his generals had been right. Glantz and House don’t mean he was actually trapped—other decisions could have been made, but not by Hitler. He was trapped by his own inability to admit that he had been wrong. Rather than admit that he was wrong in his previous bad decisions, he proceeded to make worse ones. That’s the pyramid of harm.

The more walls the thief washes, the harder it is to say that the theft of the pie was a one-time mistake.

Don’t be the thief.


Seeds Over a Wall: Credibility

blooming cilantro

tl;dr Believing isn’t a good substitute for thinking.

As mentioned in the previous post, Secretary of Defense Robert McNamara, LBJ, Dean Rusk, McGeorge Bundy, and various other decision-makers in the LBJ Administration were committed to the military strategy of “graduated pressure” with, as H.R. McMaster says, “an almost religious zeal” (74). Graduated pressure was (is) the strategy of slightly increasing the amount of military force by steps in order to pressure the opponent into giving up. It’s supposed to “signal” to the opponent that we are absolutely committed, but open to negotiation.

It’s a military strategy, and the people in favor of it were not people with much (or sometimes any) military training or experience. There were various methods for people with military experience to advise the top policy-makers. Giving such advice is the stated purpose of the Joint Chiefs of Staff, for instance. There were also war games, assessments, memos, and telegrams, and the military experts’ responses to “graduated pressure” ranged from dubious to completely opposed. The civilian advisors were aware of that hostility, but dismissed the judgments of military experts on the issue of military strategy.

It did not end well.

In the previous post, I wrote about binary thinking, with emphasis on the never/always binary. When it comes to train wrecks in public deliberation, another important (and false) binary is trustworthy/untrustworthy. That binary is partially created by other false binaries, especially the fantasy that complicated issues really have two and only two sides.

Despite what people think, there aren’t just two sides to every major policy issue—you can describe an issue that way, and sincerely believe it is, but doing so requires misdescribing the situation, and forcing it into a binary. “The Slavery Debate,” for instance, wasn’t between two sides; there were at least six different positions on the issue of what should happen with slavery, and even that number requires some lumping of people together who were actually in conflict.

(When I say this to people, I’m often told, “There are only two sides: the right one and the wrong one.” That pretty much proves my point. And, no, I am not arguing for all sides being equally valid, “relativism,” endless indecision, compulsive compromise, or whatever the other term in that false binary is.)

I’ll come back to the two sides point in other posts, but here I want to talk about the binary of trustworthy/untrustworthy (aka, the question of “credibility”). What the “two sides” fallacy fosters is the tendency to imagine credibility as a binary of Us and Them: civilian v. military advisors; people who advocate “graduated pressure” and people who want us to give up.

In point of fact, the credibility of sources is a very complicated issue. There are few (probably no) sources that are completely trustworthy on every issue (everyone makes mistakes), and some that are trustworthy on pretty much nothing (we all have known people whom we should never trust). Expertise isn’t an identity; it’s a quality that some people have about some things, and it doesn’t mean they’re always right even about those some things. So, there is always some work necessary to try to figure out how credible a source is on this issue or with this claim.

There was a trendy self-help movement at one point that was not great in a lot of ways, but there was one part of it that was really helpful: the insistence that “there is no Santa Claus.” The point of this saying was that it would be lovely were there someone who would sweep in and solve all of our problems (and thereby save us from doing the work of solving them ourselves), but there isn’t. We have to do the work.[1] I think a lot of people talk about sources (media, pundit, political figure) as a Santa Claus who has saved them from the hard work of continually assessing credibility. They believe everything that a particular person or media says. If they “do their own research,” it’s often within the constraints of “motivated reasoning” and “confirmation bias” (more on that later).[2]

I mentioned in the first post in this series that I’m not sure that there’s anything that shows up in every single train wreck, except the wreck. Something that does show up is a particular way of assessing credibility, but I don’t think that causes the train wreck. I think it is the train wreck.

This way of assessing credibility is another situation that has a kind of Möbius strip quality (what elsewhere I’ve called “if MC Escher drew an argument”): a source is credible if and only if it confirms what we already believe to be true; we know that what we believe is true because all credible sources confirm it.

This way of thinking about credibility is comforting; it makes us feel comfortable with what we already believe. It silences uncertainty.

The problem is that it’s wrong.

McNamara and others didn’t think they were making a mistake in ignoring what military advisors told them; they dismissed that advice on the grounds of motivism, and that’s pretty typical. They said that military advisors were opposed to graduated pressure because they were limited in their thinking, too oriented toward seeking military solutions, too enamored of bombing. The military advisors weren’t univocal in their assessment of Vietnam and the policy options—there weren’t only two sides on what should be done—but they had useful and prescient criticism of the path LBJ was on. And that criticism was dismissed.

It’s interesting that even McNamara would later admit he was completely wrong in his assessment of the situation, yet wouldn’t admit that he was told so at the time. His version of events, in retrospect, was that the fog of war made it impossible for him to get the information he needed to have advocated better policies. But that simply isn’t true. McNamara’s problem wasn’t a lack of information—he and the other advisors had so very, very much information. In fact, they had all the information they needed. His problem was that he didn’t listen to anyone who disagreed with him, on the grounds that they disagreed with him and were therefore wrong.

McNamara read and wrote reports that listed alternatives for LBJ’s Vietnam policies, but they were “poisoning the well.” The alternatives other than graduated pressure were not the strongest alternative policies; they were described in nearly straw-man terms, and dismissed in a few sentences.

We don’t have to listen to every person who disagrees with us, and we can’t possibly read every disconfirming source, let alone assess them. But we should be aware of the strongest criticisms of our preferred policy, and the strongest arguments for the most plausible of alternative policy options. And, most important, we should know how to identify if we’re wrong. That doesn’t mean wallowing in a morass of self-doubt (again, that’s binary thinking).

But it does mean that we should not equate credibility with in-group fanaticism. Unless we like train wrecks.









[1] Sometimes people who’ve had important conversion experiences take issue with saying there is no Santa Claus, but I think there’s a misunderstanding—many people believe that they’ve accomplished things post-conversion that they couldn’t have done without God, and I believe them. But conversion didn’t save them from doing any work; it usually obligates a person to do quite a bit of work. The desire for a “Santa Claus” is a desire for someone who doesn’t require work from us.

[2] Erich Fromm talked about this as part of the attraction of authoritarianism—stepping into that kind of system can feel like an escape from the responsibilities of freedom. Many scholars of cults point to the ways that cults promise that escape from cognitive work.

Seeds Over a Wall: Binary Thinking

primroses

Imagine that we’re disagreeing about whether I should drive the wrong way down a one-way street, and you say, “Don’t go that way—you could get in an accident!” And I say, “Oh, so no one has ever driven the wrong way down a one-way street without getting into an accident?” You didn’t say anything about always or never. You’re talking in terms of likelihood and risk, about probability. I’m engaging in binary thinking.

What’s hard about talking to people about binary thinking is that, if someone is prone to it, they’re likely to respond with, “Oh, so you’re saying that there’s never a binary?” Or, they’ll understand you as arguing for what they think of as relativism—they imagine a binary of binary thinking or relativism.

(In other words, they assume that there’s a binary in how people think: a person either believes there’s always an obvious and clear absolutely good choice/thing and an obvious and always clear absolutely bad choice/thing OR a person believes there’s no such thing as good v. bad ever. That latter attitude is often called “relativism,” and binary thinkers assume it’s the only possibility other than their own approach. So, they’re binary thinkers about thinking, and that makes talking to them about it difficult.)

“Binary thinking” (also sometimes called “splitting” or “dichotomous thinking”) is a cognitive bias that encourages us to sort people, events, ideas, and so on into two mutually exclusive categories. It’s thinking in terms of extremes like always or never—so if something doesn’t always happen, then it must never happen. Or if someone says you shouldn’t do something, you understand them to be saying you should never do it. Things are either entirely and always good, or entirely and always bad.

We’re particularly prone to binary thinking when stressed, tired, or faced with an urgent problem. What it does is reduce our options, and thereby seems to make decision-making easier; it does make decision-making easier, but easy isn’t always good. There’s some old research suggesting that people faced with too many options get paralyzed in decision-making, and so find it easier to make a decision if there are only two options. There was a funny study long ago in which people had an option to taste salsas—if there were several options, more people walked by than if there were only two. (This is why someone trying to sell you something—a car, a fridge, a house—will try to get you to reduce the choice to two.)

Often, it’s a false dichotomy. For instance, the small circle of people making decisions about Vietnam during the LBJ Administration kept assuming that they should either stick with the policy of “graduated pressure” (which wasn’t working) or pull out immediately. It was binary thinking. While there continues to be considerable disagreement about whether the US could have “won” the Vietnam conflict, I don’t know of anyone who argues that graduated pressure could have done it. Nor does anyone argue there was actually a binary–there were plenty of options other than either graduated pressure or an immediate pull-out, and they were continually advocated at the time.

Instead of taking seriously the options advocated by others (including the Joint Chiefs of Staff), what LBJ policy-makers assumed was that they would either continue to do exactly what they were already doing or give up entirely. And that’s a common false binary in the train wrecks I’ve studied–stick with what we’re doing or give up, and it’s important to keep in mind that this is a rhetorical move, not an accurate assessment of options.

I think we’ve all known people who, if you say, “This isn’t working,” respond with, “So, you think we should just give up?” That isn’t what you said.

“Stick with this or give up” is far from the only binary that traps rhetors into failure. When Alcibiades argued that the Athenians either had to invade Sicily or betray Egesta, he was invoking the common fallacy of brave v. coward (and ignoring Athens’ own history). A Spartan rhetor used the same binary (go to war with Athens or you’re a coward) even while disagreeing with a brave general who clearly wasn’t a coward, and who had good reasons for arguing against war with Athens at that moment.

One way of defining binary thinking is: “Dualistic thinking, also known as black-and-white, binary, or polarized thinking, is a general tendency to see things as good or bad, right or wrong, and us or them, without room for compromise and seeing shades of gray” (20). I’m not wild about that way of defining it, because it doesn’t quite describe how binary thinking contributes to train wrecks.

It isn’t that there was a gray area between graduated pressure and an immediate pull-out that McNamara and others should have considered (if anything, graduated pressure was a gray area between what the JCS wanted and pulling out entirely). The Spartan rhetor’s argument wouldn’t have been a better one had he argued that the general was sort of a coward. You can’t reasonably solve the problem of which car you should buy by buying half of one and half of the other.

The mistake is assuming that initial binary—of imagining there are only two options, and you have to choose between them. That’s binary thinking—of course there are other options.

When I point out the problems of binary thinking to people, I’m often told, “So, you’re saying we should just sit around forever and keep talking about what to do?”

That’s binary thinking.



Seeds Over a Wall: Thoughts on Train Wrecks in Public Deliberation

a path through bluebonnet flowers

I’ve spent my career looking at bad, unforced decisions. I describe them as times that people took a lot of time and talk to come to a decision they later regretted. These aren’t times when people didn’t know any better—all the information necessary to make a better decision was available, and they ignored it.

Train wrecks aren’t particular to one group, one kind of person, one era. These incidents I’ve studied are diverse in terms of participants, era, consequences, political ideologies, topics, and various other important qualities. One thing that’s shared is that the interlocutors were skilled in rhetoric, and relied heavily on rhetoric to determine and advocate policies that wrecked the train.

That’s how I got interested in them—a lot of scholars of rhetoric have emphasized times that rhetors and rhetoric saved the day, or at least pointed the way to a better one. But these are times that people talked themselves into bad choices. They include incidents like: pretty much every decision Athens made regarding the Sicilian Expedition; Hitler’s refusal to order a fighting retreat from Stalingrad; the decision to dam and flood the Hetch Hetchy Valley (other options were less expensive); eugenics; the LBJ Administration’s commitment to “graduated pressure” in Vietnam; Earl Warren’s advocacy of race-based mass imprisonment; US commitment to slavery; the Puritans’ decision to criminalize Baptists and Quakers.

I’ve deliberately chosen bad decisions on the part of people who can’t be dismissed as too stupid to make good decisions. Hitler’s military decisions in regard to invading France showed considerable strategic skill—while he wasn’t as good a strategist as he claimed, he wasn’t as bad as his generals later claimed. Advocates of eugenics included experts with degrees from prestigious universities—until at least WWII, biology textbooks had a chapter on the topic, and universities had courses if not departments of eugenics. It was mainstream science. Athenians made a lot of good decisions at their Assembly, and a major advocate of the disastrous Sicilian Expedition was a student/lover of Socrates. LBJ’s Secretary of Defense Robert McNamara was a lot of things, but even his harshest critics say he was smart.

The examples also come from a range of sorts of people. One temptation we have in looking back on bad decisions is to attribute them to out-group members. In-group decisions that turned out badly we try to dismiss on the grounds that they weren’t really bad decisions, that the decision-makers had no choice, or that an out-group is somehow really responsible for what happened.[1] (It’s interesting that that way of thinking about mistakes actively contributes to train wrecks.) The people who advocated the damming and flooding of the Hetch Hetchy Valley were conservationists and progressives (their terms for themselves, and I consider myself both[2]). LBJ’s social agenda got us the Voting Rights Act, the Civil Rights Act, and Medicare, all of which I’m grateful for. Earl Warren went on to write the unanimous opinion in Brown v. Board, for which I admire him.

In short, I don’t want these posts to be in-group petting that makes Us feel good about not being Those People. This isn’t about how They make mistakes, but how We do.

A lot of different factors contributed to each of these train wrecks; I haven’t determined some linear set of events or decisions that happened in every case, let alone the one single quality that every incident shares (I don’t think there is, except the train wrecking). It’s interesting that apparently contradictory beliefs can be present in the same case, and sometimes held by the same people.

So, what I’m going to do is write a little bit about each of the factors that showed up at least a few times, and give a brief and broad explanation. These aren’t scholarly arguments, but notes and thoughts about what I’ve seen. In many cases (all?) I have written scholarly arguments about them in which I’ve cited chapter and verse, as have many others. If people are interested in my chapter and verse version, then this is where to start. (In those scholarly versions, I also cite the many other scholars who have made similar arguments. Nothing that I’m saying is particularly controversial or unique.)

These pieces aren’t in any particular order—since the causality is cumulative rather than linear, there isn’t a way to begin at the beginning. It’s also hard not to write about this without at least some circularity, or at least backtracking. So, if someone is especially interested in one of these, and would like me to get to it, let me know.

Here are some of the assumptions/beliefs/arguments that contribute to train wrecks and that I intend to write about, not necessarily in this order:

Bad people make bad decisions; good people make good ones
Policy disagreements are really tug-of-war contests between two sides
Data=proof; the more data, the stronger the proof
The Good Samaritan was the villain of the story
There is a single (but not necessarily simple) right answer to every problem
That correct course of action is always obvious to smart people
What looks true (to me) is true—if you don’t believe that, then you’re a relativist
Might makes right, except when it doesn’t (Just World Model, except when not)
The ideal world is a stable hierarchy of kiss up/kick down
All ethical stances/critiques are irrational and therefore equally valid
Bad things can only be done by people who consciously intend to do them
Doing something is always better than doing nothing
Acting is better than thinking (“decisiveness” is always an ideal quality)
They cherry-pick foundational texts, but Our interpretations distinguish the transient from the permanent
In-group members and actions shouldn’t be held accountable (especially not to the same degree as out-group members and actions)

There are a few other qualities that often show up:
Binary thinking
Media enclaves
Mean girl rhetoric
Short-term thinking (Gus Johnson and the tuna)
Non-falsifiable conspiracy theories that exempt the in-group from accountability
Sloppy Machiavellianism
Tragic loyalty loops


[1] I’m using “in-” and “out-” groups as sociologists do, meaning groups we’re in, and groups against whom we define ourselves, not groups in or out of power. We’re each in a lot of groups, and have a lot of out-groups. You and your friend Terry might be in-group when it comes to what soccer teams you support but out-group when it comes to how you vote. Given the work I do, I’m struck by how important a third category is: non in-group (but not out-group). For instance, you might love dogs, and for you, dog lovers are in-group. Dog-haters would be out-group. But people who neither love nor hate dogs are not in-group, yet not out-group. One of the things that happens in train wrecks is that the non in-group category disappears.

[2] For me, “conservatives” are not necessarily out-group. Again, given the work I do, I’ve come to believe that public deliberations are best when there is a variety of views considered, and “conservatism” is a term used in popular media, and even some scholarship, to identify a variety of political ideologies which are profoundly at odds with each other. Libertarianism and segregation–both called “conservative” ideologies by popular media–are not compatible. Our political world is neither a binary nor a continuum of ideologies.