Seeds Over a Wall: Credibility

[image: blooming cilantro]

tl;dr Believing isn’t a good substitute for thinking.

As mentioned in the previous post, Secretary of Defense Robert McNamara, LBJ, Dean Rusk, McGeorge Bundy, and various other decision-makers in the LBJ Administration were committed to the military strategy of “graduated pressure” with, as H.R. McMaster says, “an almost religious zeal” (74). Graduated pressure was (is) the strategy of increasing military force in small increments in order to pressure the opponent into giving up. It’s supposed to “signal” to the opponent that we are absolutely committed, but open to negotiation.

It’s a military strategy, and the people in favor of it had little (sometimes no) military training or experience. There were various channels through which people with military experience could advise the top policy-makers; giving such advice is the stated purpose of the Joint Chiefs of Staff, for instance. There were also war games, assessments, memos, and telegrams, and the military judgment of “graduated pressure” that came through them ranged from dubious to completely opposed. The civilian advisors were aware of that hostility, but dismissed the judgments of military experts on the issue of military strategy.

It did not end well.

In the previous post, I wrote about binary thinking, with emphasis on the never/always binary. When it comes to train wrecks in public deliberation, another important (and false) binary is trustworthy/untrustworthy. That binary is partially created by other false binaries, especially the fantasy that complicated issues really have two and only two sides.

Despite what people think, there aren’t just two sides to every major policy issue—you can describe an issue that way, and sincerely believe that description, but doing so requires misdescribing the situation and forcing it into a binary. “The Slavery Debate,” for instance, wasn’t between two sides; there were at least six different positions on the issue of what should happen with slavery, and even that number requires some lumping together of people who were actually in conflict.

(When I say this to people, I’m often told, “There are only two sides: the right one and the wrong one.” That pretty much proves my point. And, no, I am not arguing for all sides being equally valid, “relativism,” endless indecision, compulsive compromise, or what the Other term is in that false binary.)

I’ll come back to the two sides point in other posts, but here I want to talk about the binary of trustworthy/untrustworthy (aka, the question of “credibility”). What the “two sides” fallacy fosters is the tendency to imagine credibility as a binary of Us and Them: civilian v. military advisors; people who advocate “graduated pressure” and people who want us to give up.

In point of fact, the credibility of sources is a very complicated issue. There are few (probably no) sources that are completely trustworthy on every issue (everyone makes mistakes), and some that are trustworthy on pretty much nothing (we all have known people whom we should never trust). Expertise isn’t an identity; it’s a quality that some people have about some things, and it doesn’t mean they’re always right even about those things. So, there is always some work necessary to try to figure out how credible a source is on this issue or with this claim.

There was a trendy self-help movement at one point that was not great in a lot of ways, but there was one part of it that was really helpful: the insistence that “there is no Santa Claus.” The point of this saying was that it would be lovely were there someone who would sweep in and solve all of our problems (and thereby save us from doing the work of solving them ourselves), but there isn’t. We have to do the work.[1] I think a lot of people treat a source (a media outlet, a pundit, a political figure) as a Santa Claus who has saved them from the hard work of continually assessing credibility. They believe everything that a particular person or outlet says. If they “do their own research,” it’s often within the constraints of “motivated reasoning” and “confirmation bias” (more on that later).[2]

I mentioned in the first post in this series that I’m not sure that there’s anything that shows up in every single train wreck, except the wreck. Something that does show up is a particular way of assessing credibility, but I don’t think that causes the train wreck. I think it is the train wreck.

This way of assessing credibility is another situation that has a kind of Möbius strip quality (what elsewhere I’ve called “if M.C. Escher drew an argument”): a source is credible if and only if it confirms what we already believe to be true; we know that what we believe is true because all credible sources confirm it.

This way of thinking about credibility is comforting; it makes us feel comfortable with what we already believe. It silences uncertainty.

The problem is that it’s wrong.

McNamara and others didn’t think they were making a mistake in ignoring what military advisors told them; they dismissed that advice on the grounds of motivism, and that’s pretty typical. They said that military advisors were opposed to graduated pressure because they were limited in their thinking, too oriented toward seeking military solutions, too enamored of bombing. The military advisors weren’t univocal in their assessment of Vietnam and the policy options—there weren’t only two sides on what should be done—but they had useful and prescient criticism of the path LBJ was on. And that criticism was dismissed.

It’s interesting that even McNamara would later admit he was completely wrong in his assessment of the situation, yet wouldn’t admit that he was told so at the time. His version of events, in retrospect, was that the fog of war made it impossible for him to get the information he needed to have advocated better policies. But that simply isn’t true. McNamara’s problem wasn’t a lack of information—he and the other advisors had so very, very much information. In fact, they had all the information they needed. His problem was that he didn’t listen to anyone who disagreed with him, on the grounds that they disagreed with him and were therefore wrong.

McNamara read and wrote reports that listed alternatives for LBJ’s Vietnam policies, but those reports were exercises in “poisoning the well.” The alternatives other than graduated pressure were not the strongest alternative policies; they were described in nearly straw man terms and dismissed in a few sentences.

We don’t have to listen to every person who disagrees with us, and we can’t possibly read every disconfirming source, let alone assess them. But we should be aware of the strongest criticisms of our preferred policy, and the strongest arguments for the most plausible of alternative policy options. And, most important, we should know how to identify if we’re wrong. That doesn’t mean wallowing in a morass of self-doubt (again, that’s binary thinking).

But it does mean that we should not equate credibility with in-group fanaticism. Unless we like train wrecks.

[1] Sometimes people who’ve had important conversion experiences take issue with saying there is no Santa Claus, but I think there’s a misunderstanding—many people believe that they’ve accomplished things post-conversion that they couldn’t have done without God, and I believe them. But conversion didn’t save them from doing any work; it usually obligates a person to do quite a bit of work. The desire for a “Santa Claus” is a desire for someone who doesn’t require work from us.

[2] Erich Fromm talked about this as part of the attraction of authoritarianism—stepping into that kind of system can feel like an escape from the responsibilities of freedom. Many scholars of cults point to the ways that cults promise that escape from cognitive work.

Seeds Over a Wall: Binary Thinking

[image: primroses]

Imagine that we’re disagreeing about whether I should drive the wrong way down a one-way street, and you say, “Don’t go that way—you could get in an accident!” And I say, “Oh, so no one has ever driven down a one-way street without getting into an accident?” You didn’t say anything about always or never. You’re talking in terms of likelihood and risk, about probability. I’m engaging in binary thinking.

What’s hard about talking to people about binary thinking is that, if someone is prone to it, they’re likely to respond with, “Oh, so you’re saying that there’s never a binary?” Or, they’ll understand you as arguing for what they think of as relativism—they imagine a binary of binary thinking or relativism.

(In other words, they assume that there’s a binary in how people think: a person either believes there’s always an obvious and clear absolutely good choice/thing and an obvious and always clear absolutely bad choice/thing OR a person believes there’s no such thing as good v. bad ever. That latter attitude is often called “relativism,” and binary thinkers assume it’s the only possibility other than their approach. So, they’re binary thinkers about thinking, and that makes talking to them about it difficult.)

“Binary thinking” (also sometimes called “splitting” or “dichotomous thinking”) is a cognitive bias that encourages us to sort people, events, ideas, and so on into two mutually exclusive categories. It’s thinking in terms of extremes like always or never—so if something doesn’t always happen, then it must never happen. Or if someone says you shouldn’t do something, you understand them to be saying you should never do it. Things are either entirely and always good, or entirely and always bad.

We’re particularly prone to binary thinking when stressed, tired, or faced with an urgent problem. What it does is reduce our options, and thereby seems to make decision-making easier; it does make decision-making easier, but easy isn’t always good. There’s some old research suggesting that people faced with too many options get paralyzed in decision-making, and so find it easier to make a decision if there are only two options. There was a funny study long ago in which people had an option to taste salsas—if there were several options, more people walked by than if there were only two. (This is why someone trying to sell you something—a car, a fridge, a house—will try to get you to reduce the choice to two.)

Often, it’s a false dichotomy. For instance, the small circle of people making decisions about Vietnam during the LBJ Administration kept assuming that they should either stick with the policy of “graduated pressure” (which wasn’t working) or pull out immediately. It was binary thinking. While there continues to be considerable disagreement about whether the US could have “won” the Vietnam conflict, I don’t know of anyone who argues that graduated pressure could have done it. Nor does anyone argue there was actually a binary–there were plenty of options other than either graduated pressure or an immediate pull-out, and they were continually advocated at the time.

Instead of taking seriously the options advocated by others (including the Joint Chiefs of Staff), what LBJ policy-makers assumed was that they would either continue to do exactly what they were already doing or give up entirely. And that’s a common false binary in the train wrecks I’ve studied: stick with what we’re doing or give up. It’s important to keep in mind that this is a rhetorical move, not an accurate assessment of options.

I think we’ve all known people who, if you say, “This isn’t working,” respond with, “So, you think we should just give up?” That isn’t what you said.

“Stick with this or give up” is far from the only binary that traps rhetors into failure. When Alcibiades argued that the Athenians either had to invade Sicily or betray Egesta, he was invoking the common fallacy of brave v. coward (and ignoring Athens’ own history). A Spartan rhetor used the same binary (go to war with Athens or you’re a coward) even while disagreeing with a brave general who clearly wasn’t a coward, and who had good reasons for arguing against war with Athens at that moment.

One way of defining binary thinking is: “Dualistic thinking, also known as black-and-white, binary, or polarized thinking, is a general tendency to see things as good or bad, right or wrong, and us or them, without room for compromise and seeing shades of gray” (20). I’m not wild about that way of defining it, because it doesn’t quite describe how binary thinking contributes to train wrecks.

It isn’t that there was a gray area between graduated pressure and an immediate pull-out that McNamara and others should have considered (if anything, graduated pressure was a gray area between what the JCS wanted and pulling out entirely). The Spartan rhetor’s argument wouldn’t have been a better one had he argued that the general was sort of a coward. You can’t reasonably solve the problem of which car you should buy by buying half of one and half of the other.

The mistake is assuming that initial binary—of imagining there are only two options, and you have to choose between them. That’s binary thinking—of course there are other options.

When I point out the problems of binary thinking to people, I’m often told, “So, you’re saying we should just sit around forever and keep talking about what to do?”

That’s binary thinking.



Seeds Over a Wall: Thoughts on Train Wrecks in Public Deliberation

[image: a path through bluebonnet flowers]

I’ve spent my career looking at bad, unforced decisions. I describe them as times that people took a lot of time and talk to come to a decision they later regretted. These aren’t times when people didn’t know any better—all the information necessary to make a better decision was available, and they ignored it.

Train wrecks aren’t particular to one group, one kind of person, one era. These incidents I’ve studied are diverse in terms of participants, era, consequences, political ideologies, topics, and various other important qualities. One thing that’s shared is that the interlocutors were skilled in rhetoric, and relied heavily on rhetoric to determine and advocate policies that wrecked the train.

That’s how I got interested in them—a lot of scholars of rhetoric have emphasized times that rhetors and rhetoric saved the day, or at least pointed the way to a better one. But these are times that people talked themselves into bad choices. They include incidents like: pretty much every decision Athens made regarding the Sicilian Expedition; Hitler’s refusal to order a fighting retreat from Stalingrad; the decision to dam and flood the Hetch Hetchy Valley (other options were less expensive); eugenics; the LBJ Administration’s commitment to “graduated pressure” in Vietnam; Earl Warren’s advocacy of race-based mass imprisonment; US commitment to slavery; and the Puritans’ decision to criminalize Baptists and Quakers.

I’ve deliberately chosen bad decisions on the part of people who can’t be dismissed as too stupid to make good decisions. Hitler’s military decisions in regard to invading France showed considerable strategic skill—while he wasn’t as good a strategist as he claimed, he wasn’t as bad as his generals later claimed. Advocates of eugenics included experts with degrees from prestigious universities—until at least WWII, biology textbooks had a chapter on the topic, and universities had courses if not departments of Eugenics. It was mainstream science. Athenians made a lot of good decisions at their Assembly, and a major advocate of the disastrous Sicilian Expedition was a student/lover of Socrates’. LBJ’s Secretary of Defense Robert McNamara was a lot of things, but even his harshest critics say he was smart.

The examples also come from a range of sorts of people. One temptation we have in looking back on bad decisions is to attribute them to out-group members. We try to dismiss in-group decisions that turned out badly on the grounds that they weren’t really bad decisions, that there was no choice, or that an out-group is somehow really responsible for what happened.[1] (It’s interesting that that way of thinking about mistakes actively contributes to train wrecks.) The people who advocated the damming and flooding of the Hetch Hetchy Valley were conservationists and progressives (their terms for themselves, and I consider myself both[2]). LBJ’s social agenda got us the Voting Rights Act, the Civil Rights Act, and Medicare, all of which I’m grateful for. Earl Warren went on to write the Brown v. Board decision, for which I admire him.

In short, I don’t want these posts to be in-group petting that makes Us feel good about not being Those People. This isn’t about how They make mistakes, but how We do.

A lot of different factors contributed to each of these train wrecks; I haven’t determined some linear set of events or decisions that happened in every case, let alone the one single quality that every incident shares (I don’t think there is, except the train wrecking). It’s interesting that apparently contradictory beliefs can be present in the same case, and sometimes held by the same people.

So, what I’m going to do is write a little bit about each of the factors that showed up at least a few times, and give a brief and broad explanation. These aren’t scholarly arguments, but notes and thoughts about what I’ve seen. In many cases (all?) I have written scholarly arguments about them in which I’ve cited chapter and verse, as have many others. If people are interested in my chapter and verse version, then this is where to start. (In those scholarly versions, I also cite the many other scholars who have made similar arguments. Nothing that I’m saying is particularly controversial or unique.)

These pieces aren’t in any particular order—since the causality is cumulative rather than linear, there isn’t a way to begin at the beginning. It’s also hard to write about this without at least some circularity, or at least backtracking. So, if someone is especially interested in one of these, and would like me to get to it, let me know.

Here are some of the assumptions/beliefs/arguments that contribute to train wrecks and that I intend to write about, not necessarily in this order:

Bad people make bad decisions; good people make good ones
Policy disagreements are really tug-of-war contests between two sides
Data=proof; the more data, the stronger the proof
The Good Samaritan was the villain of the story
There is a single (but not necessarily simple) right answer to every problem
That correct course of action is always obvious to smart people
What looks true (to me) is true—if you don’t believe that, then you’re a relativist
Might makes right, except when it doesn’t (Just World Model, except when not)
The ideal world is a stable hierarchy of kiss up/kick down
All ethical stances/critiques are irrational and therefore equally valid
Bad things can only be done by people who consciously intend to do them
Doing something is always better than doing nothing
Acting is better than thinking (“decisiveness” is always an ideal quality)
They cherry-pick foundational texts, but Our interpretations distinguish the transient from the permanent
In-group members and actions shouldn’t be held accountable (especially not to the same degree as out-group members and actions)

There are a few other qualities that often show up:
Binary thinking
Media enclaves
Mean girl rhetoric
Short-term thinking (Gus Johnson and the tuna)
Non-falsifiable conspiracy theories that exempt the in-group from accountability
Sloppy Machiavellianism
Tragic loyalty loops


[1] I’m using “in-“ and “out-“ groups as sociologists do, meaning groups we’re in, and groups against whom we define ourselves, not groups in or out of power. We’re each in a lot of groups, and have a lot of out-groups. Here’s more information about in- and out-groups. You and your friend Terry might be in-group when it comes to what soccer teams you support but out-group when it comes to how you vote. Given the work I do, I’m struck by how important a third category is: non in-group (but not out-group). For instance, you might love dogs, and for you, dog lovers are in-group. Dog-haters would be out-group. But people who neither love nor hate dogs are not in-group, yet not out-group. One of the things that happens in train wrecks is that the non in-group category disappears.

[2] For me, “conservatives” are not necessarily out-group. Again, given the work I do, I’ve come to believe that public deliberations are best when there is a variety of views considered, and “conservatism” is a term used in popular media, and even some scholarship, to identify a variety of political ideologies which are profoundly at odds with each other. Libertarianism and segregation–both called “conservative” ideologies by popular media–are not compatible. Our political world is neither a binary nor a continuum of ideologies.