Progressives are children of the Enlightenment

[image: bee on a flower]

I loathe putting my thesis first (the thesis-first tradition is directly descended from people who didn’t actually believe that persuasion is possible), but here I will. The way that a lot of liberals, progressives, and pro-democracy people are talking about GOP support for authoritarianism is neither helpful nor accurate. Both the narrative about how we got here and the policy agenda for what we should do now are grounded in assumptions about rhetoric that are wrong. And they’re narratives and assumptions that come from the Enlightenment.

I rather like the Enlightenment—an unpopular position, even among people who, I think, are direct descendants of it. But, I’ll admit that it has several bad seeds. One is a weirdly Aristotelian approach of valuing deductive reasoning.

In an early version of this post, I wrote a long explanation about how weird it is that Enlightenment philosophers all rejected Aristotle but they actually ended up reasoning like he did—collecting data in service of finding universally valid premises. I deleted it. It wouldn’t have made my argument any clearer or more effective. I too am a child of the Enlightenment. I want to go back to sources.

Here’s what matters: syllogistic reasoning starts with a universally valid premise and then makes a claim about a specific case. “All men are mortal, and Socrates is a man, so Socrates must be mortal.” Inductive reasoning starts with the specific cases (“Socrates died; so did Aristotle; so did Plato”) in order to make a more general claim (“therefore, all Greek philosophers died”). For reasons too complicated to explain, Aristotle was associated with the first, although he was actually very interested in the second.

Enlightenment philosophers, despite claiming to reject Aristotle, had a tendency to declare something to be true (“All men are created equal”) and then reason, very selectively, from that premise. (It only applied to some men.) That tendency to want to reason from universally valid principles turned out to be something that was both liberating and authoritarian.

Another bad seed was the premise that all problems, no matter how complicated, have a policy solution. There are two parts to this premise: first, that all problems can be solved, and second, that there is one solution. The Enlightenment valued free speech and reasonable deliberation (something I like about it), but in service of finding that one solution, and that’s a problem.[1]

The assumption was that enlightened people would throw off the blinders created by “superstition” and see the truth. So, like the authorities against whom they were arguing, they assumed that there was a truth. For many Enlightenment philosophers, the premise was that free and reasonable speech among reasonable people would enable them to find that one solution. The unhappy consequence was to try to gatekeep who participated in that speech, and to condemn everyone who disagreed—this move still happens in public discourse. People who agree with Us see the Truth, but people who don’t are “biased.”

The Enlightenment assumed a universality of human experience—that all people are basically the same—an assumption that directly led to the abolition of slavery, the extension of voting rights, public education. It also led to a vexed understanding of what deliberative bodies were supposed to do: 1) find the right answer, or 2) find a good enough policy. It’s interesting that the Federalist Papers vary between those two ways of thinking about deliberation.

The first is inherently authoritarian, since it assumes that people who have the wrong answer are stupid, have bad motives, are dupes, and should therefore be dismissed, shouted down, expelled. This way of thinking about politics leads to a cycle of purification (both Danton and Robespierre ended up guillotined).[2] I’m open to persuasion on this issue, but, as far as I know, any community that begins with the premise that there is a correct answer, and it’s obvious to good people, ends up in a cycle of purification. I’d love to hear about any counter-examples.

The second is one that makes some children of the Enlightenment stabby. It seems to them to mean that we are watering down an obviously good policy (the one that looks good to them) in order to appease people who are wrong. What’s weird about a lot of self-identified leftists is that we claim to value difference while actually denying that it should be valued when it comes to policy disagreements.

We’re still children of Enlightenment philosophers who assumed that there is a right policy, and that anyone who disagrees with us is a benighted fool.

Another weird aspect of Enlightenment philosophers was that they accepted a very old model of communication—the notion that if you tell people the truth they will comprehend it (unless they’re bad people). This is the transmission model of communication. Enlightenment philosophers, bless their pointed little heads, often seemed to assume that enlightening others simply involved getting the message right. (I think JQA’s rhetoric lectures are a great example of that model.)

I think that what people who support democracy, fairness, compassion, and accountability are now facing is a situation that has been brewing since the 1990s—a media committed to demonizing democracy, fairness, compassion, and in-group accountability. It’s a media that has inoculated its audience against any criticism of the GOP.

And far too many people are responding in an Enlightenment fashion—that the problem is that the Democratic Party didn’t get its rhetoric right. As though, had the Democratic Party transmitted the right message, people who reject on sight anything even remotely critical of the GOP would have chosen to vote Dem. Ted Cruz won reelection because he had ads about transgender kids playing girls’ sports. That wasn’t about rhetoric, but about policy.

We aren’t here because Harris didn’t get her rhetoric right. Republicans have a majority of state legislatures and governorships. This isn’t about Harris or the Dem party; this is about Republican voters. To imagine that Harris’s or the Dems’ rhetoric is to blame is to scapegoat. Blame Republican voters.

We are in a complicated time without a simple solution. Here is the complicated solution: Republicans have to reject what Trump is doing.

I think that people who oppose Trump and what he’s doing need to brainstorm ways to get Republican voters to reject their pro-Trump media and their kowtowing representatives.

I think that strategy is necessary for getting this train to stop wrecking, and I think it’s complicated and probably involves a lot of different strategies. And I think we shouldn’t define that strategy by deductive reasoning—I think this is a time when inductive reasoning is our friend. If there is a strategy that will work now, it’s worked in the past. So, what’s likely to work?





[1] The British Enlightenment didn’t make the rational/irrational split in the same way that the Cartesian tradition did. For the British philosophers, there wasn’t a binary between logic and feelings; for them, sentiments enhance reasonable deliberation, but the passions inhibit it.

[2] There’s some research out there that suggests that failure causes people to want to purify the in-group. My crank theory is that it depends on the extent to which people are pluralist.

Demagoguery and Disruption

[image: various books that are or are about demagoguery]

I’ve been asked why demagoguery rises and falls, more than once by people who like the “disruption” theory—that demagoguery is the consequence of major social disruption. The short version is that events create a set and severity of crises that “normal” politics and “normal” political discourse seem completely incapable of ameliorating, let alone solving. People feel themselves to be in a “state of exception,” in which things they would normally condemn come to seem attractive—anti-democratic practices, purifying of a community through ethnic/political cleansing, authoritarianism, open violation of constitutional protections.

When I first started working on demagoguery, I assumed that was the case. It makes sense, after all. And, certainly, in the instances that most easily come to mind when we think about demagoguery, there was major social disruption. Hitler rose to power in the midst of major social disruption: humiliation (the Great War and subsequent Versailles Treaty), economic instability (including intermittent hyperinflation), mass immigration from Central and Eastern Europe, an unstable government system.

And you can look at other famous instances of demagoguery and see social disruption: McCarthyism and the Cold War (specifically the loss of China), Charles Coughlin and the Great Depression, Athenian demagogues like Cleon and the Peloponnesian War, Jacobins and failed harvests. But, the more I looked at various cases, the weirder it got.

Take, for instance, McCarthyism and China. There are two questions never answered by people who blame(d) “the Democrats” for “losing” China: what plan would have worked to prevent Mao’s victory? And did Republicans advocate that plan (so, would they have enacted it if in power)? McCarthy’s incoherent narrative was that spies in the State Department were [waves hands vaguely] somehow responsible for the loss of China. Had losing China been a major social disruption, then China would, until that moment, have been seen as an important power by the people who framed its “loss” as a major international threat—American Republicans. But, prior to Mao’s 1949 victory, American Republicans were not particularly interested in China. In fact, FDR had to maneuver around various neutrality laws passed by Republicans in order to provide support for Chiang Kai-shek at all. After WWII, Republicans were still not very interested in intervening in China—they weren’t interested in China till Mao’s victory. So, why did it suddenly become a major disruption?

One possibility is that Mao’s victory afforded the rhetorical opportunity of having a stick with which to beat the tremendously successful Democrats (that’s Halberstam’s argument). If Halberstam is right, then demagoguery about China, communism, and communist spies was the cause, not the consequence, of social disruption. Another equally plausible possibility is that China becoming communist, and an ally of the USSR, took on much more significance in light of Soviet acquisition of nuclear weapons. So, the disruption led to demagoguery.

In other words, McCarthyism turned out not to be quite as clean a case as I initially assumed, although far from a counter-example.

Another problematic case was post-WWI demagoguery about segregation. In many ways, that demagoguery was simply a continuation of antebellum pro-slavery demagoguery, with added bits from whatever new “scientific” or philosophical movement might seem useful (e.g., eugenics, anti-communism). It wasn’t always at the same level, but tended to wax and wane. I couldn’t seem to correlate the waxing and waning to any economic, political, or social event, or even kind of event. What it seemed like was that it correlated more to specific political figures deciding to amp up the demagoguery for short-term gain (see especially Chapter Three).

Similarly, ante-bellum pro-slavery demagoguery didn’t consistently correlate to major disruptions; if anything, it often seemed to create them, or create a reframing of conflicts (such as with indigenous peoples). But, the main problem with the disruption narrative of causality is that I couldn’t control the variables—it’s extremely difficult to find a period of time when there wasn’t something going on that can be accurately described as a major disruption. Even if we look only at financial crises considered major (that is, ones in which a downturn in the economy lasted for years), there were eight in the US in the 19th century: 1819, 1837, 1839, 1857, 1873, 1884, 1893, and 1896. Since several of these crises lasted for years, as much as half of the 19th century was spent in a major financial crisis.

And then there are other major disruptions. There were riots or uprisings related to slavery and race in almost every year of the 19th century. The Great Hunger in Ireland (1845-1852) and its later recurrence (1879), the 1848 revolutions in Western Europe, and various other events led to mass migrations of people whose ethnicity or religion was unwelcome enough to create major conflicts. And that’s just the 19th century, and mostly just the US.

Were demagoguery caused by crises, it would always be full-throated, since there are always major crises of some kind. But it waxes and wanes, often to varying degrees in various regions, or among various groups, sometimes without the material conditions changing. Pro-slavery demagoguery varied in terms of themes, severity, and popularity, but not in any way that I could determine correlated to the economic viability or political security of the system.

Anti-Japanese demagoguery was constant on the West Coast of the US from the late 19th century through at least the mass imprisonment in the 40s, but not as consequential or extreme elsewhere. One might be tempted to explain that discrepancy by population density, but there was not mass imprisonment in Hawaii, which had a large population of Japanese Americans. Anti-Judaism has never particularly correlated to the size (or even existence) of a local Jewish population; it’s not uncommonly the most extreme in situations almost entirely absent of Jews. And sometimes it’s impossible to separate the crisis from the demagoguery—as in the cases of demagoguery about fabricated threats, such as Satanic panics, stranger danger demagoguery, wild and entirely fabricated reports of massive abolitionist conspiracies, intermittent panics about Halloween candy.

I’ve come to think it has to do with two other factors: strategic threat inflation on the part of rhetors with a sufficiently large megaphone, and informational enclaves (and these two factors are mutually reinforcing). I’ve argued elsewhere that the sudden uptick in anti-abolitionist demagoguery was fueled by Presidential aspirations; Truman strategically engaged in threat inflation regarding Soviet intentions in his speech “The Truman Doctrine”; the FBI has repeatedly exaggerated various threats in order to get resources; General DeWitt fabricated evidence to support race-based imprisonment of Japanese Americans. These rhetors weren’t entirely cynical; I think they felt sincerely justified in their threat inflation, but they knew that they were exaggerating.

And threat inflation only turns into demagoguery when it’s picked up by important rhetors. Japanese Americans were not imprisoned in Hawaii, perhaps because DeWitt didn’t have as much power there, and there wasn’t a rhetor as important as California Attorney General Earl Warren supporting it.

In 1835, there was a panic about the AAS “flooding” the South with anti-slavery pamphlets that advocated sedition. They didn’t flood the South; they sent the pamphlets, which didn’t advocate sedition, to Charleston, where they were burned. But, the myth of a flooded South was promoted by people so powerful that it was referred to in Congress as though it had happened, and is still referred to by historians who didn’t check the veracity of the story.

And that brings up the second quality: informational enclaves. Demagoguery depends on people either not being aware of or not believing disconfirming information. The myth of Procter and Gamble being owned by a Satan worshipper (who was supposed to have gone on either Phil Donahue or Oprah Winfrey and announced that commitment) was spread for almost 20 years despite it being quite easy to check and see if any recording of such a show existed. The people I knew who believed it didn’t bother even trying to check. Advocates of the AAS mass-mailing demagoguery (or other fabricated conspiracy stories) only credited information and sources that promoted the demagoguery.

Once the Nazis or Stalinists had control of the media in their countries, the culture of demagoguery escalated. But, even prior to the Nazi silencing of dissent, Germany was in a culture of demagoguery—because people could choose to get all their information from reinforcing media, and many made that choice. Antebellum media was diverse—it was far from univocal—but people could choose to get all their information from one source. They could choose to live in an informational enclave. Many made that choice.

It didn’t end well.

Seeds Over a Wall: The Pyramid of Harm

[image: flowers in front of a wall]

My grandmother had a “joke” (really more of a parable) about a guy who sees a pie cooling in the window, and steals it. Unfortunately, he leaves a perfect handprint on the sill, so he sneaks into the house in order to wash off his handprint. But then it’s obvious that the sill has been washed, since it’s so much cleaner than the wall. So he washes that wall. It’s still obvious that something has happened because that one wall is so much cleaner than the others. When the police come, he’s repainting the attic.

You can tell this as a shaggy dog joke, with more of the steps between the sill and the attic. And, in a way, that’s how this situation often plays out, at least in regard to bad choices. Rather than admit the initial mistake, we get deeper and deeper into a situation; the more energy we expend to deflect the consequences of that first mistake, the more committed we are to making that expenditure worthwhile. So, we’re now in the realm of the “sunk cost” fallacy/cognitive bias. Making decisions on the basis of trying to retrieve sunk costs—also known as “throwing good money after bad”—enables us to deny that we made a bad decision.

In the wonderful book Mistakes Were Made, Carol Tavris and Elliot Aronson call this process “the pyramid of choice.” It’s usefully summarized here:

“The Analogy of the Pyramid (Tavris and Aronson, 2015). An initial choice—which is often triggered by the first ‘taking the temperature’ vote—amounts to a step off on one side of the pyramid. This first decision sets in motion a cycle of self-justification which leads to further action (e.g., taking a public stance during the group discussion) and further self-justification. The deeper down participants go, the more they can become convinced and the more the need arises to convince others of the correctness of their position.”

The example used by Tavris and Aronson is of two students who are faced with the choice of cheating or getting a bad grade on an exam. They are, initially, both in the same situation. One decides to cheat, and one decides to get the bad grade. But, after some time, each will find ways of not only justifying their decision, but they will be “convinced that they have always felt that way” (33).

In the equally wonderful book Denial, Jared Del Rosso describes a similar process for habituating a person to behaviors they would previously have condemned (such as engaging in torture). A prison guard or police officer is first invited to do something a little bit wrong; that small bad act is normalized, and then, once they’ve done that, it becomes easier to get them to do a little worse (Chapter 4). Christopher Browning describes a similar situation for the ordinary German policemen who participated in genocide; Hannah Arendt makes that argument about Adolf Eichmann; Robert Gellately makes it about Germans’ support for Hitler.

It’s like an upside-down pyramid—the one little bad act enables and requires more and worse ones, since refusing to continue doing harm would require admitting to one’s self and others that the first act was bad. It means saying, “I did this bad (or stupid) thing,” and that’s hard. It’s particularly hard for people who equate identity and action, and who believe that only bad people do bad things, and only stupid people do stupid things; that is, people who believe in a stark binary of identity.

This way of thinking also causes people to “double down” on mistakes. In late 1942, about 250,000 Nazi soldiers approaching and in Stalingrad were in danger of being encircled by Soviet troops. Hitler refused to allow a retreat, opting instead for Goering’s plan of airlifting supplies. David Glantz and Jonathan House argue that Hitler was “trapped” by his previous decisions—to acknowledge the implausibility of Goering’s proposal (and it was extremely implausible) would amount to Hitler admitting that various decisions he had made were wrong, and that his generals had been right. Glantz and House don’t mean he was actually trapped—other decisions could have been made, but not by Hitler. He was trapped by his own inability to admit that he had been wrong. Rather than admit that he was wrong in his previous bad decisions, he proceeded to make worse ones. That’s the pyramid of harm.

The more walls the thief washes, the harder it is to say that the theft of the pie was a one-time mistake.

Don’t be the thief.


Seeds Over a Wall: Credibility

[image: blooming cilantro]

tl;dr Believing isn’t a good substitute for thinking.

As mentioned in the previous post, Secretary of Defense Robert McNamara, LBJ, Dean Rusk, McGeorge Bundy, and various other decision-makers in the LBJ Administration were committed to the military strategy of “graduated pressure” with, as H.R. McMaster says, “an almost religious zeal” (74). Graduated pressure was (is) the strategy of incrementally increasing the amount of military force in order to pressure the opponent into giving up. It’s supposed to “signal” to the opponent that we are absolutely committed, but open to negotiation.

It’s a military strategy, yet the people in favor of it had little (sometimes no) military training or experience. There were various channels through which people with military experience could advise the top policy-makers; giving such advice is the stated purpose of the Joint Chiefs of Staff, for instance. There were also war games, assessments, memos, and telegrams, and in all of them the military experts’ assessments of “graduated pressure” ranged from dubious to completely opposed. The civilian advisors were aware of that hostility, but dismissed the judgments of military experts on the issue of military strategy.

It did not end well.

In the previous post, I wrote about binary thinking, with emphasis on the never/always binary. When it comes to train wrecks in public deliberation, another important (and false) binary is trustworthy/untrustworthy. That binary is partially created by others, especially the fantasy that complicated issues really have two and only two sides.

Despite what people think, there aren’t just two sides to every major policy issue—you can describe an issue that way, and sincerely believe that description, but doing so requires misdescribing the situation and forcing it into a binary. “The Slavery Debate,” for instance, wasn’t between two sides; there were at least six different positions on the issue of what should happen with slavery, and even that number requires some lumping together of people who were actually in conflict.

(When I say this to people, I’m often told, “There are only two sides: the right one and the wrong one.” That pretty much proves my point. And, no, I am not arguing for all sides being equally valid, “relativism,” endless indecision, compulsive compromise, or what the Other term is in that false binary.)

I’ll come back to the two sides point in other posts, but here I want to talk about the binary of trustworthy/untrustworthy (aka, the question of “credibility”). What the “two sides” fallacy fosters is the tendency to imagine credibility as a binary of Us and Them: civilian v. military advisors; people who advocate “graduated pressure” and people who want us to give up.

In point of fact, the credibility of sources is a very complicated issue. There are few (probably no) sources that are completely trustworthy on every issue (everyone makes mistakes), and some that are trustworthy on pretty much nothing (we all have known people whom we should never trust). Expertise isn’t an identity; it’s a quality that some people have about some things, and it doesn’t mean they’re always right even about those things. So, there is always some work necessary to try to figure out how credible a source is on this issue or with this claim.

There was a trendy self-help movement at one point that was not great in a lot of ways, but there was one part of it that was really helpful: the insistence that “there is no Santa Claus.” The point of this saying was that it would be lovely were there someone who would sweep in and solve all of our problems (and thereby save us from doing the work of solving them ourselves), but there isn’t. We have to do the work.[1] I think a lot of people treat a source (a media outlet, a pundit, a political figure) as a Santa Claus who has saved them from the hard work of continually assessing credibility. They believe everything that a particular person or outlet says. If they “do their own research,” it’s often within the constraints of “motivated reasoning” and “confirmation bias” (more on that later).[2]

I mentioned in the first post in this series that I’m not sure that there’s anything that shows up in every single train wreck, except the wreck. Something that does show up is a particular way of assessing credibility, but I don’t think that causes the train wreck. I think it is the train wreck.

This way of assessing credibility is another situation that has a kind of Möbius-strip quality (what elsewhere I’ve called “if MC Escher drew an argument”): a source is credible if and only if it confirms what we already believe to be true; we know that what we believe is true because all credible sources confirm it.

This way of thinking about credibility is comforting; it makes us feel comfortable with what we already believe. It silences uncertainty.

The problem is that it’s wrong.

McNamara and others didn’t think they were making a mistake in ignoring what military advisors told them; they dismissed that advice on the grounds of motivism, and that’s pretty typical. They said that military advisors were opposed to graduated pressure because they were limited in their thinking, too oriented toward seeking military solutions, too enamored of bombing. The military advisors weren’t univocal in their assessment of Vietnam and the policy options—there weren’t only two sides on what should be done—but they had useful and prescient criticism of the path LBJ was on. And that criticism was dismissed.

It’s interesting that even McNamara would later admit he was completely wrong in his assessment of the situation, yet wouldn’t admit that he was told so at the time. His version of events, in retrospect, was that the fog of war made it impossible for him to get the information he needed to have advocated better policies. But that simply isn’t true. McNamara’s problem wasn’t a lack of information—he and the other advisors had so very, very much information. In fact, they had all the information they needed. His problem was that he didn’t listen to anyone who disagreed with him, on the grounds that they disagreed with him and were therefore wrong.

McNamara read and wrote reports that listed alternatives to LBJ’s Vietnam policies, but those reports poisoned the well: the alternatives to graduated pressure were not the strongest available, they were described in nearly straw-man terms, and they were dismissed in a few sentences.

We don’t have to listen to every person who disagrees with us, and we can’t possibly read every disconfirming source, let alone assess them. But we should be aware of the strongest criticisms of our preferred policy, and the strongest arguments for the most plausible of alternative policy options. And, most important, we should know how to identify if we’re wrong. That doesn’t mean wallowing in a morass of self-doubt (again, that’s binary thinking).

But it does mean that we should not equate credibility with in-group fanaticism. Unless we like train wrecks.









[1] Sometimes people who’ve had important conversion experiences take issue with saying there is no Santa Claus, but I think there’s a misunderstanding—many people believe that they’ve accomplished things post-conversion that they couldn’t have done without God, and I believe them. But conversion didn’t save them from doing any work; it usually obligates a person to do quite a bit of work. The desire for a “Santa Claus” is a desire for someone who doesn’t require work from us.

[2] Erich Fromm talked about this as part of the attraction of authoritarianism—stepping into that kind of system can feel like an escape from the responsibilities of freedom. Many scholars of cults point to the ways that cults promise that escape from cognitive work.

Seeds Over a Wall: Binary Thinking

[image: primroses]

Imagine that we’re disagreeing about whether I should drive the wrong way down a one-way street, and you say, “Don’t go that way—you could get in an accident!” And I say, “Oh, so no one has ever driven the wrong way down a one-way street without getting into an accident?” You didn’t say anything about always or never. You’re talking in terms of likelihood and risk, about probability. I’m engaging in binary thinking.

What’s hard about talking to people about binary thinking is that, if someone is prone to it, they’re likely to respond with, “Oh, so you’re saying that there’s never a binary?” Or, they’ll understand you as arguing for what they think of as relativism—they imagine a binary of binary thinking or relativism.

(In other words, they assume that there’s a binary in how people think: a person either believes there’s always an obvious and clear absolutely good choice/thing and an obvious and always clear absolutely bad choice/thing OR a person believes there’s no such thing as good v. bad ever. That latter attitude is often called “relativism” and, for binary thinkers, they assume it’s the only possibility other than their approach. So, they’re binary thinkers about thinking, and that makes talking to them about it difficult.)

“Binary thinking” (also sometimes called “splitting” or “dichotomous thinking”) is a cognitive bias that encourages us to sort people, events, ideas, and so on into two mutually exclusive categories. It’s thinking in terms of extremes like always or never—so if something doesn’t always happen, then it must never happen. Or if someone says you shouldn’t do something, you understand them to be saying you should never do it. Things are either entirely and always good, or entirely and always bad.

We’re particularly prone to binary thinking when stressed, tired, faced with an urgent problem. What it does is reduce our options, and thereby seems to make decision-making easier; it does make decision-making easier, but easy isn’t always good. There’s some old research suggesting that people faced with too many options get paralyzed in decision-making, and so find it easier to make a decision if there are only two options. There was a funny study long ago in which people had an option to taste salsas—if there were several options, more people walked by than if there were only two. (This is why someone trying to sell you something—a car, a fridge, a house—will try to get you to reduce the choice to two.)

Often, it’s a false dichotomy. For instance, the small circle of people making decisions about Vietnam during the LBJ Administration kept assuming that they should either stick with the policy of “graduated pressure” (which wasn’t working) or pull out immediately. It was binary thinking. While there continues to be considerable disagreement about whether the US could have “won” the Vietnam conflict, I don’t know of anyone who argues that graduated pressure could have done it. Nor does anyone argue there was actually a binary—there were plenty of options other than either graduated pressure or an immediate pull-out, and they were continually advocated at the time.

Instead of taking seriously the options advocated by others (including the Joint Chiefs of Staff), what LBJ policy-makers assumed was that they would either continue to do exactly what they were already doing or give up entirely. And that’s a common false binary in the train wrecks I’ve studied–stick with what we’re doing or give up, and it’s important to keep in mind that this is a rhetorical move, not an accurate assessment of options.

I think we’ve all known people who, if you say, “This isn’t working,” respond with, “So, you think we should just give up?” That isn’t what you said.

“Stick with this or give up” is far from the only binary that traps rhetors into failure. When Alcibiades argued that the Athenians either had to invade Sicily or betray Egesta, he was invoking the common fallacy of brave v. coward (and ignoring Athens’ own history). A Spartan rhetor used the same binary (go to war with Athens or you’re a coward) even while disagreeing with a brave general who clearly wasn’t a coward, and who had good reasons for arguing against war with Athens at that moment.

One way of defining binary thinking is: “Dualistic thinking, also known as black-and-white, binary, or polarized thinking, is a general tendency to see things as good or bad, right or wrong, and us or them, without room for compromise and seeing shades of gray” (20). I’m not wild about that way of defining it, because it doesn’t quite describe how binary thinking contributes to train wrecks.

It isn’t that there was a gray area between graduated pressure and an immediate pull-out that McNamara and others should have considered (if anything, graduated pressure was a gray area between what the JCS wanted and pulling out entirely). The Spartan rhetor’s argument wouldn’t have been a better one had he argued that the general was sort of a coward. You can’t reasonably solve the problem of which car you should buy by buying half of one and half of the other.

The mistake is assuming that initial binary—of imagining there are only two options, and you have to choose between them. That’s binary thinking—of course there are other options.

When I point out the problems of binary thinking to people, I’m often told, “So, you’re saying we should just sit around forever and keep talking about what to do?”

That’s binary thinking.



Seeds Over a Wall: Thoughts on Train Wrecks in Public Deliberation

a path through bluebonnet flowers

I’ve spent my career looking at bad, unforced decisions. I describe them as times that people took a lot of time and talk to come to a decision they later regretted. These aren’t times when people didn’t know any better—all the information necessary to make a better decision was available, and they ignored it.

Train wrecks aren’t particular to one group, one kind of person, one era. These incidents I’ve studied are diverse in terms of participants, era, consequences, political ideologies, topics, and various other important qualities. One thing that’s shared is that the interlocutors were skilled in rhetoric, and relied heavily on rhetoric to determine and advocate policies that wrecked the train.

That’s how I got interested in them—a lot of scholars of rhetoric have emphasized times that rhetors and rhetoric saved the day, or at least pointed the way to a better one. But these are times that people talked themselves into bad choices. They include incidents like: pretty much every decision Athens made regarding the Sicilian Expedition; Hitler’s refusal to order a fighting retreat from Stalingrad; the decision to dam and flood the Hetch Hetchy Valley (other options were less expensive); eugenics; the LBJ Administration’s commitment to “graduated pressure” in Vietnam; Earl Warren’s advocacy of race-based mass imprisonment; US commitment to slavery; the Puritans’ decision to criminalize Baptists and Quakers.

I’ve deliberately chosen bad decisions on the part of people who can’t be dismissed as too stupid to make good decisions. Hitler’s military decisions in regard to invading France showed considerable strategic skill–while he wasn’t as good a strategist as he claimed, he wasn’t as bad as his generals later claimed. Advocates of eugenics included experts with degrees from prestigious universities—until at least WWII, biology textbooks had a chapter on the topic, and universities had courses if not departments of eugenics. It was mainstream science. Athenians made a lot of good decisions at their Assembly, and a major advocate of the disastrous Sicilian Expedition was a student/lover of Socrates’. LBJ’s Secretary of Defense Robert McNamara was a lot of things, but even his harshest critics say he was smart.

The examples also come from a range of sorts of people. One temptation we have in looking back on bad decisions is to attribute them to out-group members. In-group decisions that turned out badly we try to dismiss on the grounds that they weren’t really bad decisions, that the people involved had no choice, or that an out-group is somehow really responsible for what happened.[1] (It’s interesting that that way of thinking about mistakes actively contributes to train wrecks.) The people who advocated the damming and flooding of the Hetch Hetchy Valley were conservationists and progressives (their terms for themselves, and I consider myself both[2]). LBJ’s social agenda got us the Voting Rights Act, the Civil Rights Act, and Medicare, all of which I’m grateful for. Earl Warren went on to write the Brown v. Board decision, for which I admire him.

In short, I don’t want these posts to be in-group petting that makes Us feel good about not being Those People. This isn’t about how They make mistakes, but how We do.

A lot of different factors contributed to each of these train wrecks; I haven’t determined some linear set of events or decisions that happened in every case, let alone the one single quality that every incident shares (I don’t think there is, except the train wrecking). It’s interesting that apparently contradictory beliefs can be present in the same case, and sometimes held by the same people.

So, what I’m going to do is write a little bit about each of the factors that showed up at least a few times, and give a brief and broad explanation. These aren’t scholarly arguments, but notes and thoughts about what I’ve seen. In many cases (all?) I have written scholarly arguments about them in which I’ve cited chapter and verse, as have many others. If people are interested in my chapter and verse version, then this is where to start. (In those scholarly versions, I also cite the many other scholars who have made similar arguments. Nothing that I’m saying is particularly controversial or unique.)

These pieces aren’t in any particular order—since the causality is cumulative rather than linear, there isn’t a way to begin at the beginning. It’s also hard to write about this without at least some circularity, or at least some backtracking. So, if someone is especially interested in one of these, and would like me to get to it, let me know.

Here are some of the assumptions/beliefs/arguments that contribute to train wrecks and that I intend to write about, not necessarily in this order:

Bad people make bad decisions; good people make good ones
Policy disagreements are really tug-of-war contests between two sides
Data=proof; the more data, the stronger the proof
The Good Samaritan was the villain of the story
There is a single (but not necessarily simple) right answer to every problem
That correct course of action is always obvious to smart people
What looks true (to me) is true—if you don’t believe that, then you’re a relativist
Might makes right, except when it doesn’t (Just World Model, except when not)
The ideal world is a stable hierarchy of kiss up/kick down
All ethical stances/critiques are irrational and therefore equally valid
Bad things can only be done by people who consciously intend to do them
Doing something is always better than doing nothing
Acting is better than thinking (“decisiveness” is always an ideal quality)
They cherry-pick foundational texts, but Our interpretations distinguish the transient from the permanent
In-group members and actions shouldn’t be held accountable (especially not to the same degree as out-group members and actions)

There are a few other qualities that often show up:
Binary thinking
Media enclaves
Mean girl rhetoric
Short-term thinking (Gus Johnson and the tuna)
Non-falsifiable conspiracy theories that exempt the in-group from accountability
Sloppy Machiavellianism
Tragic loyalty loops


[1] I’m using “in-“ and “out-“ groups as sociologists do, meaning groups we’re in, and groups against whom we define ourselves, not groups in or out of power. We’re each in a lot of groups, and have a lot of out-groups. Here’s more information about in- and out-groups. You and your friend Terry might be in-group when it comes to what soccer teams you support but out-group when it comes to how you vote. Given the work I do, I’m struck by how important a third category is: non in-group (but not out-group). For instance, you might love dogs, and for you, dog lovers are in-group. Dog-haters would be out-group. But people who neither love nor hate dogs are not in-group, yet not out-group. One of the things that happens in train wrecks is that the non in-group category disappears.

[2] For me, “conservatives” are not necessarily out-group. Again, given the work I do, I’ve come to believe that public deliberations are best when there is a variety of views considered, and “conservatism” is a term used in popular media, and even some scholarship, to identify a variety of political ideologies which are profoundly at odds with each other. Libertarianism and segregation–both called “conservative” ideologies by popular media–are not compatible. Our political world is neither a binary nor a continuum of ideologies.

“Defeats will be defeats.”

copy of book--Foreign Relations of the US, Vietnam, 1964

“Defeats will be defeats and lassitude will be lassitude. But we can improve our propaganda.” (Carl Rowan, Director of the US Information Agency, June 1964, FRUS #189 I: 429).

In early June of 1964, major LBJ policy-makers met in Honolulu to discuss the bad and deteriorating situation in South Vietnam. SVN was on its third government in ten months (there had been a coup in November of 1963 and another in January of 1964), and advisors had spent the spring talking about how bad the situation was. In a March 1964 memo to LBJ, Secretary of Defense Robert McNamara reported that “the situation has unquestionably been growing worse” (FRUS #84). “Large groups of the population are now showing signs of apathy and indifference [….] Draft dodging is high while the Viet Cong are recruiting energetically and effectively [….] The political control structure extending from Saigon down into the hamlets disappeared following the November coup.” A CIA memo from May has this as the summary:

“The over-all situation in South Vietnam remains extremely fragile. Although there has been some improvement in GVN/ARVN performance, sustained Viet Cong pressure continues to erode GVN authority throughout the country, undercut US/GVN programs and depress South Vietnamese morale. We do not see any signs that these trends are yet ‘bottoming out.’ During the next several months there will be increasing danger that an assassination of Khanh, a successful coup, a grave military reverse, or a succession of military setbacks could have a critical psychological impact in South Vietnam. In any case, if the tide of deterioration has not been arrested by the end of the year, the anti-Communist position is likely to become untenable.” (FRUS #159)

At that June meeting, Carl Rowan presented a report as to what should be done, and he summarized it as: “Defeats will be defeats and lassitude will be lassitude. But we can improve our propaganda.” (FRUS #189). This is a recurrent theme in documents from that era, including military ones—the claim that effective messaging could solve what were structural problems. They didn’t. They couldn’t.

I was briefly involved in MLA, and I spent far too much time at meetings listening to people say that declining enrollments in the humanities could be solved by better messaging about the values of a humanistic education; I heard the same thing in far too many English Department meetings.

Just to be clear (and to try to head off people telling me that a humanistic education is valuable), I do not disagree with the message. I disagree that the problem can be solved by getting the message right, or getting the message out there. I’m saying that the rhetoric isn’t enough.

I am certain that there are tremendous benefits, both to an individual and to a culture, in a humanistic education, especially studying literature and language(s). That’s why I spent a career as a scholar and teacher in the humanities. But, enrollments weren’t (and aren’t) declining just because people haven’t gotten the message. There were, and are, declining enrollments for a variety of structural reasons, most of which are related to issues of funding for university educations. The fact is that the more that college costs, and the more that those costs are borne by students taking on crippling debt, the more that students want a degree that lands them a job right out of college.

Once again, I am not arguing that’s a good way for people to think about college; I am saying that the reason for declining enrollments isn’t something we can solve by better messaging about the values of a liberal arts education. For the rhetorical approach to be effective (and ethical) it has to be in conjunction with solving the structural problems. Any solution has to involve a more equitable system of funding higher education.

I am tired of people blaming the Dems’ “messaging” for the GOP’s success. I thought that Dem messaging was savvy and impressive. They couldn’t get it to enough people because people live in media enclaves. If you know any pro-GOP voters, then you know that they get all their information from media that won’t let one word of that message reach them, and that those voters choose to remain in enclaves. How, exactly, were the Dems supposed to reach your high school friend who rejects as “librul bullshit” anything that contradicts or complicates what their favorite pundit, youtuber, or podcaster tells them? What messaging would have worked?

The GOP is successful because enough people vote for the GOP and not enough vote against them. Voter suppression helps, but what most helps is anti-Dem rhetoric.

Several times I had the opportunity to hear Colin Allred speak, and his rhetoric was genius. It was perfect. Cruz didn’t try to refute Allred’s rhetoric; all Ted Cruz had to do was say, over and over (and he did), that Allred supported transgender rights.

From the Texas Observer: “Cruz and his allied political groups blitzed the airwaves with ads highlighting that vote and Allred’s other stances in favor of transgender rights. The ads, often featuring imagery of boys competing against girls in sports, reflected what Cruz’s team had found from focus groups and polling: Among the few million voters they’d identified who were truly on the fence, the transgender sports topic was most effective in driving support to Cruz, said Sam Cooper, a strategist for Cruz’s campaign.”

Transphobia is not a rhetorical problem that can be ended by the Dems getting the message right. Bigotry is systemic. Any solution will involve rhetoric, and rhetoric is important. But it isn’t enough.

Writing is hard; publishing is harder.

marked up draft of a book ms


In movies, struggling writers are portrayed as trying to come up with ideas. In my experience as a writer and teacher of writing, that isn’t the hard part. Ideas are easy, and are much better in our head than on the paper, so a very, very hard part of writing is getting the smart and elegant ideas in our head to be comprehensible to someone else, let alone either persuasive or admired. But the even harder part is submitting something we’ve written—sending it off to be judged. It feels like the first day of sending a child to middle school—will they be bullied? Will they make friends? Will they change beyond recognition?

And I think there’s another reason that submitting a piece of writing is so hard. Our fantasies about what is going to happen when we submit a piece of writing are always more pleasurable than any plausible reality.

Somerset Maugham has a story called “Mirage,” about someone he knew when he was a medical student. Grosely was spending his time and money on wine and women, and eventually came up with a scam to get more money. He was caught, arrested, and kicked out of school. He became a kind of customs official in China, and, desperate to get back to partying in London, was as corrupt as possible: “He was consumed by one ambition, to save enough to be able to go back to England and live the life from which he had been snatched as a boy.”

After 25 years, he did go back to England, and he did try to live the life he’d lived at nineteen. But he couldn’t, of course. London was different, and so was he, and it was all a massive disappointment. He started to think about China, and what a great place it had been, and what a great time he could have there with all the money he’d made. So he headed back. He got almost to China, but stopped just shy of it (in Haiphong). Maugham explains:

“England had been such a terrible disappointment that now he was afraid to put China to the test too. If that failed him he had nothing. For years England had been like a mirage in the desert. But when he had yielded to the attraction, those shining pools and the palm trees and the green grass were nothing but the rolling sandy dunes. He had China, and so long as he never saw it again he kept it.”

I read that story as a graduate student trying to write a dissertation, and it resonated. As long as I didn’t finish the dissertation, I could entertain outrageous fantasies about its reception, quality, and impact. Once submitted, it was what it was. It was passable. (And unpublishable.) It was not anything like what I’d imagined it could be.

I have felt that way about every single piece of writing since (including this blog post)—I’m hesitant to finish it because of not wanting to give up the dream of what it could be.

Every writer of any genre has a lot of partially-written things. I knew a poet who actually had a drawer in his desk into which he dropped pieces of paper onto which he’d written lines that came to him that seemed good, but he didn’t have the rest of the poem. I don’t know if he ever pulled any of those pieces of paper out and wrote the rest of the poem (he did publish quite a bit of poetry). There’s nothing wrong with having a lot of incomplete projects, and lots of good reasons to leave them incomplete.

I once pulled out a ten-year-old unsubmitted and unfinished piece of writing, revised it, and submitted it—it was published, and won an award. It took ten years for me to understand what that argument was really about, so leaving it unfinished for that long wasn’t a bad choice at all. There are others that will remain forever unfinished—also not a bad choice.

But there are times when one should just hit submit. The dreams may not come true, but there will be other pieces of writing about which we can dream.

I’m saying all this because I hope people who might be stuck in their writing will find it encouraging. Just hit submit.

The Writer’s Progress

“The most unreliable indication of whether your writing is good or bad is how you feel about it.” Susan Wells

marked up ms. draft

I’m in the process of significantly rewriting an article (based on readers’ comments), and last night I dreamed that I wanted to get to the top of a building, and I couldn’t find the right way to get there. I kept taking stairs that led me elsewhere. Some of those other places were interesting, and some were dreary, but they weren’t where I was trying to go. I often have dreams like that when I’m in the Slough of Despond stage of a writing process.

A friend mentioned that when she is writing, she has dreams about trying to drive a vehicle that is too big, unfamiliar, or unwieldy.

When I was writing my dissertation, I had moments of thinking that I couldn’t possibly write a dissertation, but I did. At several points in every writing project I have found myself convinced I can’t do it, there’s no point in my doing it, and all my previous successes were meaningless flukes. It’s much like the Slough of Despond in Bunyan’s Pilgrim’s Progress:

This miry Slough is such a place as cannot be mended; it is the descent whither the scum and filth that attends conviction for sin doth continually run, and therefore is it called the Slough of Despond: for still as the sinner is awakened about his lost condition, there ariseth in his soul many fears, and doubts, and discouraging apprehensions, which all of them get together, and settle in this place; and this is the reason of the badness of this ground.

Pilgrim’s Progress is a (very boring) book about the soul’s progress toward salvation, told in the form of “Christian” and his travels/travails. Initially, Christian is travelling with someone called Pliable, and they both fall into a bog. Pliable at that point gives up the journey entirely,

and angrily said to his fellow, Is this the happiness you have told me all this while of? If we have such ill speed at our first setting out, what may we expect between this and our journey’s end? May I get out again with my life, you shall possess the brave country alone for me. And with that he gave a desperate struggle or two, and got out of the mire on that side of the slough which was next to his own house: so away he went, and Christian saw him no more.

Christian gets help, appropriately enough from someone named Help, who explains that there are ways out of the bog, but it is despair itself that traps travellers.

I’m writing this because it’s something I wish I had understood as a junior scholar. There may be people out there for whom writing a lot is a necessary part of their profession and who find it easy, or who go from start to finish on a project without falling into a bog, getting lost in a building, or driving a vehicle that just isn’t working the way it should. But I don’t know them.

The only way out of the bog is through. The moment of loss of confidence isn’t a moment of truth (although some of the insights I have in those moments are true—the introduction of this article really does suck pretty hard); it’s just a moment.

Is Satire a Useful/Effective Strategy with Trump Supporters?

2009 Irish tug of war team
https://en.wikipedia.org/wiki/Tug_of_war#/media/File:Irish_600kg_euro_chap_2009_(cropped).JPG

I’m often asked this question, and the answer is: it depends on the nature of their support, what we mean by “useful/effective,” and what we mean by “satire.”

1) Why do people support Trump?

There are, obviously, many reasons, and sometimes it’s a combination. But, for purposes of talking about satire’s effect, I’ll mention five:

A) Political Sociopathy (some people call it “political narcissism”). These are people who support Trump because they believe he will pass policies that will benefit them in the short run—lower taxes, eliminate environmental and employment protections, protect the wealthy from accountability, privatize public goods, and so on. They don’t care what the consequences will be for others, or what the long-term consequences might be—they only care that it will benefit them (they believe). Hence sociopathy.

B) Eschatological Understanding of Politics. This way of thinking isn’t necessarily explicitly religious—a certain kind of American Exceptionalism as well as Hegelian readings of history are also in this category, even if apparently secular. It reads history as inevitably headed toward [something]. That “something” might be: American hegemony, world capitalism, the return of Jesus, Armageddon, racial/ethnic domination, fascism, a people’s revolution, in-group dominance. People with this understanding don’t care about politics in terms of reasonable disagreements about policy, or even about specifics, but in terms of the apocalyptic battle or necessary triumph.

C) Resentment of Libs. (Stigginit to the libs.) These are people who support anything—regardless of its consequences, even for them—that they believe pisses off “liberals.” It doesn’t much matter to them that “liberals” are a hobgoblin, or that this orientation leads to “Vladimir’s Choice.”

D) Charismatic Leadership. This is a relationship that people have with a leader (sometimes several leaders). They believe that the leader is a kind of savior (the sacralized language is often striking), an embodiment of the in-group, who should be given unlimited power and held unaccountable so that they can “fight” on behalf of real people like them. (Charismatic leadership is often some kind of authoritarian populism—maybe always.)

E) Purity politics. This group includes people who are radically committed to banning abortion—although there are policies that demonstrably reduce abortion, these people refuse to support them because they believe those policies (e.g., accurate sex education, access to effective birth control) are also sinful. Supporting birth control isn’t radically pure enough for them—that their policy will result in deaths in the short term doesn’t matter to them. They refuse to be pragmatic about short-term improvements or short-term devastation. People who refuse to vote for opposition candidates because that party or candidate isn’t radical enough are also in this category.

One characteristic shared among all of these kinds of supporters—in my experience—is a tragic informational cycle: they refuse to look at anything that contradicts or complicates what they believe about Trump. They only pay attention to pro-Trump demagoguery because they believe that the entire complicated world of policy options and disagreements is really a tug-of-war between two groups. (A lot of people who aren’t Trump supporters think about politics that way–it isn’t helpful.)

2) Useful/Effective at what?


A) Persuading the interlocutor. People often assume that the point in engaging someone with whom we disagree is to get them to adopt our point of view.

B) Persuading bystanders. Sometimes, however, we aren’t trying to persuade the person with whom we’re disagreeing, but others who might be watching the disagreement.

C) Getting the topic off of Trump. Often, we’re just trying to get them to drop the subject, to let us enjoy dinner, a holiday, a coffee break, or whatever without talking about Trump, or taking swipes at the hobgoblin of “libs.” In other words, just get them to STFU about that topic.

D) Undermining the in- and out-group binary. We might want them to recognize the harm of their support for Trump and/or his policies—that is, to persuade them to empathize with an out-group.

3) What do we mean by “satire”?
The loose category of satire can mean: stable irony (a statement with a clear meaning that is not the literal statement—saying, “Great weather” in the midst of a nasty storm); unstable irony (the rhetor clearly doesn’t mean the literal statement, but it isn’t clear what they do mean); parody (which might be loving, as in the case of Best in Show, or critical, as in the case of many Saturday Night Live sketches about politics); or Juvenalian satire (scathing and often scatological). (Wikipedia has a useful entry on satire.)

So, the short answer to the question about effectiveness of satire on Trump supporters is: it depends on the kind of supporter, what effect we’re trying to have, and what kind of satire we use.

Many people object to satire because they assume that it’s insulting, and believe we should always rely on kindness and reason. But, as Jonathan Swift famously said: “Reasoning will never make a Man correct an ill Opinion, which by Reasoning he never acquired.” (For more on this quote and various versions, see this.) This is not always true, but it is true that a person has to be open to changing their mind for a reasoned argument to work. And not everyone is. So, if we’re talking to a Trump supporter who is open to changing their mind—that is, whose beliefs about Trump and Trump’s policies are falsifiable–then satire might not be the best strategy. But, to be blunt, I haven’t run across a Trump supporter whose beliefs can be falsified through reasoned argument in a long time.

The first category—the person who is in it for their own short-term gain, and who might actually hate Trump—can seem “rational” insofar as they’re engaged in an apparently amoral calculation of costs and benefits, but that calculation (in my experience) is generally grounded in some version of the “just world model” (that people who are wealthy/powerful/dominant deserve to be wealthy/powerful/dominant). So, their reason for supporting policies that benefit them in the short term isn’t falsifiable. They can be persuaded, sometimes, on very specific points about specific policies. Sometimes. Satire certainly doesn’t alienate them (although it might piss them off); it neither strengthens nor weakens their support.

Similarly, people whose support comes from an eschatological view of history/politics have a non-falsifiable narrative (they’re very prone to conspiracy theories), and, in my experience, have long since dug in. So, similarly, satire neither strengthens nor weakens their support.

The “stigginit to the libs” type sometimes changes their mind about Trump when they or someone they love gets harmed by Trump’s policies, so—sometimes—their support can be falsified by personal experience. But reasonable argumentation is right off the table—they like that Trump critics get frustrated with how unreasonable they are. (And, often, they have an unreasonable understanding of reasonable argumentation.)

Satire can increase their resentment of “libs,” especially if it hits close to home, but it isn’t as though some other rhetorical strategy would work. In theory, what should work would be some strategy that rejiggers their sense of in- and out-groups, or that creates empathy for an out-group, but I’m not sure I’ve seen that happen short of direct personal experience.

Satire, including the Juvenalian, can shame some people into shutting up, or allowing a change of subject, but I think other strategies are more effective (like refusing to engage). It can also have an impact on observers, but whether satire will cause them to feel sorry for the Trump supporters, get mad at libs, or distance themselves from Trump support/ers varies from person to person.

Satire can be very effective for people in a charismatic leadership relationship because it emphasizes that they look foolish. Their fanatical commitment is not, as they want to see it, a deeply personal and reciprocated loyalty, but gullibility. They’ll deflect by projecting their own fanatical commitment onto “libs,” and so it can be useful to insist that it stay on the stasis of their commitment. After all, it doesn’t matter if there are people equally fanatically committed to Biden or whoever—two wrongs don’t make a right. Biden supporters might howl at the moon and eat broken glass; that doesn’t mean that fanatical support of Trump is reasonable.

That Biden supporters might be wrong about something doesn’t mean Trump supporters are right. (And vice versa. Our political world is not, actually, a tug-of-war between two groups.)

And that points to one problem with satire of a group—what’s wrong with our political discourse is the fundamental premise that politics is a zero-sum battle between two groups. So, any satire that confirms that premise is, I think, problematic, and much of it does.

The final group is the purity politics folks. For those people, politics is a performance of purity; there’s a kind of narcissism to it. I don’t know whether satire would do much either to shame or persuade them—personally, I’ve never found that kind of person open to persuasion (regardless of where they are on the political spectrum).

So, to answer the original question: it depends.