Seeds Over a Wall: The Pyramid of Harm

flowers in front of a wall

My grandmother had a “joke” (really more of a parable) about a guy who sees a pie cooling in the window and steals it. Unfortunately, he leaves a perfect handprint on the sill, so he sneaks into the house to wash off the handprint. But then it’s obvious that the sill has been washed, since it’s so much cleaner than the wall. So he washes that wall. It’s still obvious that something has happened, because that one wall is so much cleaner than the others. When the police come, he’s repainting the attic.

You can tell this as a shaggy dog joke, with more of the steps between the sill and the attic. And, in a way, that’s how such situations often play out, at least in regard to bad choices. Rather than admit the initial mistake, we get deeper and deeper into a situation; the more energy we expend to deflect the consequences of that first mistake, the more committed we are to making that expenditure worthwhile. So, we’re now in the realm of the “sunk cost” fallacy/cognitive bias. Making decisions on the basis of trying to retrieve sunk costs—also known as “throwing good money after bad”—enables us to deny that we made a bad decision.

In the wonderful book Mistakes Were Made (But Not by Me), Carol Tavris and Elliot Aronson call this process “the pyramid of choice.” It’s usefully summarized here:

“The Analogy of the Pyramid (Tavris and Aronson, 2015). An initial choice—which is often triggered by the first “taking the temperature” vote—amounts to a step off on one side of the pyramid. This first decision sets in motion a cycle of self-justification which leads to further action (e.g., taking a public stance during the group discussion) and further self-justification. The deeper down participants go, the more they can become convinced and the more the need arises to convince others of the correctness of their position.”

The example used by Tavris and Aronson is of two students who are faced with the choice of cheating or getting a bad grade on an exam. They are, initially, both in the same situation. One decides to cheat, and one decides to get the bad grade. But, after some time, each will find ways not only of justifying their decision, but of becoming “convinced that they have always felt that way” (33).

In the equally wonderful book Denial, Jared Del Rosso describes a similar process for habituating a person to behaviors they would previously have condemned (such as engaging in torture). A prison guard or police officer is first invited to do something a little bit wrong; that small bad act is normalized, and then, once they’ve done it, it becomes easier to get them to do a little worse (Chapter 4). Christopher Browning describes a similar situation for the ordinary German policemen who participated in genocide; Hannah Arendt makes that argument about Adolf Eichmann; Robert Gellately makes it about Germans’ support for Hitler.

It’s like an upside-down pyramid—the one little bad act enables and requires more and worse ones, since refusing to continue doing harm would require admitting to oneself and others that the first act was bad. It means saying, “I did this bad (or stupid) thing,” and that’s hard. It’s particularly hard for people who equate identity and action, and who believe that only bad people do bad things and only stupid people do stupid things—that is, people who believe in a stark binary of identity.

This way of thinking also causes people to “double down” on mistakes. In late 1942, about 250,000 German soldiers in and around Stalingrad were in danger of being encircled by Soviet troops. Hitler refused to allow a retreat, instead opting for Goering’s plan of airlifting supplies. David Glantz and Jonathan House argue that Hitler was “trapped” by his previous decisions—to acknowledge the implausibility of Goering’s proposal (and it was extremely implausible) would amount to Hitler admitting that various decisions he had made were wrong, and that his generals had been right. Glantz and House don’t mean he was actually trapped—other decisions could have been made, but not by Hitler. He was trapped by his own inability to admit that he had been wrong. Rather than admit that his previous decisions were bad, he proceeded to make worse ones. That’s the pyramid of harm.

The more walls the thief washes, the harder it is to say that the theft of the pie was a one-time mistake.

Don’t be the thief.


Seeds Over a Wall: Credibility

blooming cilantro

tl;dr Believing isn’t a good substitute for thinking.

As mentioned in the previous post, Secretary of Defense Robert McNamara, LBJ, Dean Rusk, McGeorge Bundy, and various other decision-makers in the LBJ Administration were committed to the military strategy of “graduated pressure” with, as H.R. McMaster says, “an almost religious zeal” (74). Graduated pressure was (is) the strategy of increasing military force in small steps in order to pressure the opponent into giving up. It’s supposed to “signal” to the opponent that we are absolutely committed, but open to negotiation.

It’s a military strategy, yet the people in favor of it had little (or sometimes no) military training or experience. There were various channels through which people with military experience could advise the top policy-makers. Giving such advice is the stated purpose of the Joint Chiefs of Staff, for instance. There were also war games, assessments, memos, and telegrams, and the military experts’ reactions to “graduated pressure” ranged from dubious to completely opposed. The civilian advisors were aware of that hostility, but dismissed the judgments of military experts on the issue of military strategy.

It did not end well.

In the previous post, I wrote about binary thinking, with emphasis on the never/always binary. When it comes to train wrecks in public deliberation, another important (and false) binary is trustworthy/untrustworthy. That binary is partially created by other false binaries, especially the fantasy that complicated issues really have two and only two sides.

Despite what people think, there aren’t just two sides to every major policy issue—you can describe an issue that way, and sincerely believe the description, but doing so requires misdescribing the situation and forcing it into a binary. “The Slavery Debate,” for instance, wasn’t between two sides; there were at least six different positions on the issue of what should happen with slavery, and even that number requires lumping together people who were actually in conflict.

(When I say this to people, I’m often told, “There are only two sides: the right one and the wrong one.” That pretty much proves my point. And, no, I am not arguing for all sides being equally valid, “relativism,” endless indecision, compulsive compromise, or whatever the Other term is in that false binary.)

I’ll come back to the two sides point in other posts, but here I want to talk about the binary of trustworthy/untrustworthy (aka, the question of “credibility”). What the “two sides” fallacy fosters is the tendency to imagine credibility as a binary of Us and Them: civilian v. military advisors; people who advocate “graduated pressure” and people who want us to give up.

In point of fact, the credibility of sources is a very complicated issue. There are few (probably no) sources that are completely trustworthy on every issue (everyone makes mistakes), and some that are trustworthy on pretty much nothing (we all have known people whom we should never trust). Expertise isn’t an identity; it’s a quality that some people have about some things, and it doesn’t mean they’re always right even about those things. So, there is always some work necessary to figure out how credible a source is on this issue or with this claim.

There was a trendy self-help movement at one point that was not great in a lot of ways, but there was one part of it that was really helpful: the insistence that “there is no Santa Claus.” The point of this saying was that it would be lovely were there someone who would sweep in and solve all of our problems (and thereby save us from doing the work of solving them ourselves), but there isn’t. We have to do the work.[1] I think a lot of people treat some source (a media outlet, a pundit, a political figure) as a Santa Claus who has saved them from the hard work of continually assessing credibility. They believe everything that a particular person or outlet says. If they “do their own research,” it’s often within the constraints of “motivated reasoning” and “confirmation bias” (more on that later).[2]

I mentioned in the first post in this series that I’m not sure that there’s anything that shows up in every single train wreck, except the wreck. Something that does show up is a particular way of assessing credibility, but I don’t think that causes the train wreck. I think it is the train wreck.

This way of assessing credibility is another situation that has a kind of Möbius-strip quality (what elsewhere I’ve called “if MC Escher drew an argument”): a source is credible if and only if it confirms what we already believe to be true; we know that what we believe is true because all credible sources confirm it.

This way of thinking about credibility is comforting; it makes us feel comfortable with what we already believe. It silences uncertainty.

The problem is that it’s wrong.

McNamara and others didn’t think they were making a mistake in ignoring what military advisors told them; they dismissed that advice on the grounds of motivism, and that’s pretty typical. They said that military advisors were opposed to graduated pressure because they were limited in their thinking, too oriented toward seeking military solutions, too enamored of bombing. The military advisors weren’t univocal in their assessment of Vietnam and the policy options—there weren’t only two sides on what should be done—but they had useful and prescient criticism of the path LBJ was on. And that criticism was dismissed.

It’s interesting that even McNamara would later admit he was completely wrong in his assessment of the situation, yet wouldn’t admit that he was told so at the time. His version of events, in retrospect, was that the fog of war made it impossible for him to get the information he needed to advocate better policies. But that simply isn’t true. McNamara’s problem wasn’t a lack of information—he and the other advisors had so very, very much information. In fact, they had all the information they needed. His problem was that he didn’t listen to anyone who disagreed with him, on the grounds that they disagreed with him and were therefore wrong.

McNamara read and wrote reports that listed alternatives for LBJ’s Vietnam policies, but they poisoned the well: the alternatives other than graduated pressure were not the strongest alternative policies; they were described in nearly straw-man terms and dismissed in a few sentences.

We don’t have to listen to every person who disagrees with us, and we can’t possibly read every disconfirming source, let alone assess them. But we should be aware of the strongest criticisms of our preferred policy, and the strongest arguments for the most plausible of alternative policy options. And, most important, we should know how to identify if we’re wrong. That doesn’t mean wallowing in a morass of self-doubt (again, that’s binary thinking).

But it does mean that we should not equate credibility with in-group fanaticism. Unless we like train wrecks.


[1] Sometimes people who’ve had important conversion experiences take issue with saying there is no Santa Claus, but I think there’s a misunderstanding—many people believe that they’ve accomplished things post-conversion that they couldn’t have done without God, and I believe them. But conversion didn’t save them from doing any work; it usually obligates a person to do quite a bit of work. The desire for a “Santa Claus” is a desire for someone who doesn’t require work from us.

[2] Erich Fromm talked about this as part of the attraction of authoritarianism—stepping into that kind of system can feel like an escape from the responsibilities of freedom. Many scholars of cults point to the ways that cults promise that escape from cognitive work.

Seeds Over a Wall: Binary Thinking

primroses

Imagine that we’re disagreeing about whether I should drive the wrong way down a one-way street, and you say, “Don’t go that way—you could get in an accident!” And I say, “Oh, so no one has ever driven the wrong way down a one-way street without getting into an accident?” You didn’t say anything about always or never. You’re talking in terms of likelihood and risk, about probability. I’m engaging in binary thinking.

What’s hard about talking to people about binary thinking is that, if someone is prone to it, they’re likely to respond with, “Oh, so you’re saying that there’s never a binary?” Or, they’ll understand you as arguing for what they think of as relativism—they imagine a binary of binary thinking or relativism.

(In other words, they assume that there’s a binary in how people think: either a person believes there’s always an obvious, clear, absolutely good choice/thing and an obvious, clear, absolutely bad choice/thing, OR a person believes there’s no such thing as good v. bad ever. That latter attitude is often called “relativism,” and binary thinkers assume it’s the only possibility other than their own approach. So, they’re binary thinkers about thinking, and that makes talking to them about it difficult.)

“Binary thinking” (also sometimes called “splitting” or “dichotomous thinking”) is a cognitive bias that encourages us to sort people, events, ideas, and so on into two mutually exclusive categories. It’s thinking in terms of extremes like always or never—so if something doesn’t always happen, then it must never happen. Or if someone says you shouldn’t do something, you understand them to be saying you should never do it. Things are either entirely and always good, or entirely and always bad.

We’re particularly prone to binary thinking when stressed, tired, or faced with an urgent problem. What it does is reduce our options, and thereby seems to make decision-making easier; it does make decision-making easier, but easy isn’t always good. There’s some old research suggesting that people faced with too many options get paralyzed in decision-making, and so find it easier to make a decision if there are only two options. There was a funny study long ago in which people had the option to taste salsas—if there were several options, more people walked by than if there were only two. (This is why someone trying to sell you something—a car, a fridge, a house—will try to get you to reduce the choice to two.)

Often, it’s a false dichotomy. For instance, the small circle of people making decisions about Vietnam during the LBJ Administration kept assuming that they should either stick with the policy of “graduated pressure” (which wasn’t working) or pull out immediately. It was binary thinking. While there continues to be considerable disagreement about whether the US could have “won” the Vietnam conflict, I don’t know of anyone who argues that graduated pressure could have done it. Nor does anyone argue there was actually a binary—there were plenty of options other than either graduated pressure or an immediate pull-out, and they were continually advocated at the time.

Instead of taking seriously the options advocated by others (including the Joint Chiefs of Staff), LBJ’s policy-makers assumed that they would either continue to do exactly what they were already doing or give up entirely. And that’s a common false binary in the train wrecks I’ve studied—stick with what we’re doing or give up—and it’s important to keep in mind that this is a rhetorical move, not an accurate assessment of options.

I think we’ve all known people who, if you say, “This isn’t working,” respond with, “So, you think we should just give up?” That isn’t what you said.

“Stick with this or give up” is far from the only binary that traps rhetors into failure. When Alcibiades argued that the Athenians either had to invade Sicily or betray Egesta, he was invoking the common fallacy of brave v. coward (and ignoring Athens’ own history). A Spartan rhetor used the same binary (go to war with Athens or you’re a coward) even while disagreeing with a brave general who clearly wasn’t a coward, and who had good reasons for arguing against war with Athens at that moment.

One way of defining binary thinking is: “Dualistic thinking, also known as black-and-white, binary, or polarized thinking, is a general tendency to see things as good or bad, right or wrong, and us or them, without room for compromise and seeing shades of gray” (20). I’m not wild about that way of defining it, because it doesn’t quite describe how binary thinking contributes to train wrecks.

It isn’t that there was a gray area between graduated pressure and an immediate pull-out that McNamara and others should have considered (if anything, graduated pressure was a gray area between what the JCS wanted and pulling out entirely). The Spartan rhetor’s argument wouldn’t have been a better one had he argued that the general was sort of a coward. You can’t reasonably solve the problem of which car you should buy by buying half of one and half of the other.

The mistake is that initial binary assumption—imagining there are only two options, and that you have to choose between them. That’s binary thinking—of course there are other options.

When I point out the problems of binary thinking to people, I’m often told, “So, you’re saying we should just sit around forever and keep talking about what to do?”

That’s binary thinking.



Seeds Over a Wall: Thoughts on Train Wrecks in Public Deliberation

a path through bluebonnet flowers

I’ve spent my career looking at bad, unforced decisions. I describe them as times that people took a lot of time and talk to come to a decision they later regretted. These aren’t times when people didn’t know any better—all the information necessary to make a better decision was available, and they ignored it.

Train wrecks aren’t particular to one group, one kind of person, one era. These incidents I’ve studied are diverse in terms of participants, era, consequences, political ideologies, topics, and various other important qualities. One thing that’s shared is that the interlocutors were skilled in rhetoric, and relied heavily on rhetoric to determine and advocate policies that wrecked the train.

That’s how I got interested in them—a lot of scholars of rhetoric have emphasized times when rhetors and rhetoric saved the day, or at least pointed the way to a better one. But these are times when people talked themselves into bad choices. They include incidents like: pretty much every decision Athens made regarding the Sicilian Expedition; Hitler’s refusal to order a fighting retreat from Stalingrad; the decision to dam and flood the Hetch Hetchy Valley (other options were less expensive); eugenics; the LBJ Administration’s commitment to “graduated pressure” in Vietnam; Earl Warren’s advocacy of race-based mass imprisonment; US commitment to slavery; the Puritans’ decision to criminalize Baptists and Quakers.

I’ve deliberately chosen bad decisions on the part of people who can’t be dismissed as too stupid to make good decisions. Hitler’s military decisions in regard to invading France showed considerable strategic skill—while he wasn’t as good a strategist as he claimed, he wasn’t as bad as his generals later claimed. Advocates of eugenics included experts with degrees from prestigious universities—until at least WWII, biology textbooks had a chapter on the topic, and universities had courses in, if not departments of, eugenics. It was mainstream science. Athenians made a lot of good decisions at their Assembly, and a major advocate of the disastrous Sicilian Expedition was a student/lover of Socrates’. LBJ’s Secretary of Defense Robert McNamara was a lot of things, but even his harshest critics say he was smart.

The examples also come from a range of sorts of people. One temptation we have in looking back on bad decisions is to attribute them to out-group members. We try to dismiss in-group decisions that turned out badly on the grounds that they weren’t really bad decisions, that there was no choice, or that an out-group is somehow really responsible for what happened.[1] (It’s interesting that that way of thinking about mistakes actively contributes to train wrecks.) The people who advocated the damming and flooding of the Hetch Hetchy Valley were conservationists and progressives (their terms for themselves, and I consider myself both[2]). LBJ’s social agenda got us the Voting Rights Act, the Civil Rights Act, and Medicare, all of which I’m grateful for. Earl Warren went on to write the Brown v. Board decision, for which I admire him.

In short, I don’t want these posts to be in-group petting that makes Us feel good about not being Those People. This isn’t about how They make mistakes, but how We do.

A lot of different factors contributed to each of these train wrecks; I haven’t determined some linear set of events or decisions that happened in every case, let alone the one single quality that every incident shares (I don’t think there is one, except the train wrecking). It’s interesting that apparently contradictory beliefs can be present in the same case, and sometimes held by the same people.

So, what I’m going to do is write a little bit about each of the factors that showed up at least a few times, and give a brief and broad explanation. These aren’t scholarly arguments, but notes and thoughts about what I’ve seen. In many cases (all?) I have written scholarly arguments about them in which I’ve cited chapter and verse, as have many others. If people are interested in my chapter and verse version, then this is where to start. (In those scholarly versions, I also cite the many other scholars who have made similar arguments. Nothing that I’m saying is particularly controversial or unique.)

These pieces aren’t in any particular order—since the causality is cumulative rather than linear, there isn’t a way to begin at the beginning. It’s also hard to write about this without at least some circularity, or at least backtracking. So, if someone is especially interested in one of these, and would like me to get to it, let me know.

Here are some of the assumptions/beliefs/arguments that contribute to train wrecks and that I intend to write about, not necessarily in this order:

Bad people make bad decisions; good people make good ones
Policy disagreements are really tug-of-war contests between two sides
Data=proof; the more data, the stronger the proof
The Good Samaritan was the villain of the story
There is a single (but not necessarily simple) right answer to every problem
That correct course of action is always obvious to smart people
What looks true (to me) is true—if you don’t believe that, then you’re a relativist
Might makes right, except when it doesn’t (Just World Model, except when not)
The ideal world is a stable hierarchy of kiss up/kick down
All ethical stances/critiques are irrational and therefore equally valid
Bad things can only be done by people who consciously intend to do them
Doing something is always better than doing nothing
Acting is better than thinking (“decisiveness” is always an ideal quality)
They cherry-pick foundational texts, but Our interpretations distinguish the transient from the permanent
In-group members and actions shouldn’t be held accountable (especially not to the same degree as out-group members and actions)

There are a few other qualities that often show up:
Binary thinking
Media enclaves
Mean girl rhetoric
Short-term thinking (Gus Johnson and the tuna)
Non-falsifiable conspiracy theories that exempt the in-group from accountability
Sloppy Machiavellianism
Tragic loyalty loops


[1] I’m using “in-“ and “out-“ groups as sociologists do, meaning groups we’re in, and groups against whom we define ourselves, not groups in or out of power. We’re each in a lot of groups, and have a lot of out-groups. Here’s more information about in- and out-groups. You and your friend Terry might be in-group when it comes to what soccer teams you support but out-group when it comes to how you vote. Given the work I do, I’m struck by how important a third category is: non in-group (but not out-group). For instance, you might love dogs, and for you, dog lovers are in-group. Dog-haters would be out-group. But people who neither love nor hate dogs are not in-group, yet not out-group. One of the things that happens in train wrecks is that the non in-group category disappears.

[2] For me, “conservatives” are not necessarily out-group. Again, given the work I do, I’ve come to believe that public deliberations are best when there is a variety of views considered, and “conservatism” is a term used in popular media, and even some scholarship, to identify a variety of political ideologies which are profoundly at odds with each other. Libertarianism and segregation—both called “conservative” ideologies by popular media—are not compatible. Our political world is neither a binary nor a continuum of ideologies.

“Defeats will be defeats.”

copy of book--Foreign Relations of the US, Vietnam, 1964

“Defeats will be defeats and lassitude will be lassitude. But we can improve our propaganda.” (Carl Rowan, Director of the US Information Agency, June 1964, FRUS #189 I: 429).

In early June of 1964, major LBJ policy-makers met in Honolulu to discuss the bad and deteriorating situation in South Vietnam. SVN was on its third government in ten months (there had been a coup in November of 1963 and another in January of 1964), and advisors had spent the spring talking about how bad the situation was. In a March 1964 memo to LBJ, Secretary of Defense Robert McNamara reported that “the situation has unquestionably been growing worse” (FRUS #84). “Large groups of the population are now showing signs of apathy and indifference [….] Draft dodging is high while the Viet Cong are recruiting energetically and effectively [….] The political control structure extending from Saigon down into the hamlets disappeared following the November coup.” A CIA memo from May has this as the summary:

“The over-all situation in South Vietnam remains extremely fragile. Although there has been some improvement in GVN/ARVN performance, sustained Viet Cong pressure continues to erode GVN authority throughout the country, undercut US/GVN programs and depress South Vietnamese morale. We do not see any signs that these trends are yet ‘bottoming out.’ During the next several months there will be increasing danger that an assassination of Khanh, a successful coup, a grave military reverse, or a succession of military setbacks could have a critical psychological impact in South Vietnam. In any case, if the tide of deterioration has not been arrested by the end of the year, the anti-Communist position is likely to become untenable.” (FRUS #159)

At that June meeting, Carl Rowan presented a report as to what should be done, and he summarized it as: “Defeats will be defeats and lassitude will be lassitude. But we can improve our propaganda.” (FRUS #189). This is a recurrent theme in documents from that era, including military ones—the claim that effective messaging could solve what were structural problems. It didn’t. It couldn’t.

I was briefly involved in the MLA, and I spent far too much time at meetings listening to people say that declining enrollments in the humanities could be solved by better messaging about the values of a humanistic education; I heard the same thing in far too many English Department meetings.

Just to be clear (and to try to head off people telling me that a humanistic education is valuable), I do not disagree with the message. I disagree that the problem can be solved by getting the message right, or getting the message out there. I’m saying that the rhetoric isn’t enough.

I am certain that there are tremendous benefits, both to an individual and to a culture, in a humanistic education, especially studying literature and language(s). That’s why I spent a career as a scholar and teacher in the humanities. But enrollments weren’t (and aren’t) declining just because people haven’t gotten the message. There were, and are, declining enrollments for a variety of structural reasons, most of which are related to issues of funding for university educations. The fact is that the more college costs, and the more those costs are borne by students taking on crippling debt, the more students want a degree that lands them a job right out of college.

Once again, I am not arguing that’s a good way for people to think about college; I am saying that the reason for declining enrollments isn’t something we can solve by better messaging about the values of a liberal arts education. For the rhetorical approach to be effective (and ethical) it has to be in conjunction with solving the structural problems. Any solution has to involve a more equitable system of funding higher education.

I am tired of people blaming the Dems’ “messaging” for the GOP’s success. I thought that Dem messaging was savvy and impressive. They couldn’t get it to enough people because people live in media enclaves. If you know any pro-GOP voters, then you know that they get all their information from media that won’t let one word of that message reach them, and that those voters choose to remain in enclaves. How, exactly, were the Dems supposed to reach your high school friend who rejects as “librul bullshit” anything that contradicts or complicates what their favorite pundit, youtuber, or podcaster tells them? What messaging would have worked?

The GOP is successful because enough people vote for the GOP and not enough vote against them. Voter suppression helps, but what most helps is anti-Dem rhetoric.

Several times I had the opportunity to hear Colin Allred speak, and his rhetoric was genius. It was perfect. Cruz didn’t try to refute Allred’s rhetoric; all he had to do was say, over and over (and he did), that Allred supported transgender rights.

From the Texas Observer: “Cruz and his allied political groups blitzed the airwaves with ads highlighting that vote and Allred’s other stances in favor of transgender rights. The ads, often featuring imagery of boys competing against girls in sports, reflected what Cruz’s team had found from focus groups and polling: Among the few million voters they’d identified who were truly on the fence, the transgender sports topic was most effective in driving support to Cruz, said Sam Cooper, a strategist for Cruz’s campaign.”

Transphobia is not a rhetorical problem that can be ended by the Dems getting the message right. Bigotry is systemic. Any solution will involve rhetoric, and rhetoric is important. But it isn’t enough.

Writing is hard; publishing is harder.

marked up draft of a book ms


In movies, struggling writers are portrayed as trying to come up with ideas. In my experience as a writer and teacher of writing, that isn’t the hard part. Ideas are easy, and they’re much better in our heads than on the page, so a very, very hard part of writing is getting the smart and elegant ideas in our head to be comprehensible to someone else, let alone persuasive or admired. But the even harder part is submitting something we’ve written—sending it off to be judged. It feels like the first day of sending a child to middle school—will they be bullied? Will they make friends? Will they change beyond recognition?

And I think there’s another reason that submitting a piece of writing is so hard. Our fantasies about what is going to happen when we submit a piece of writing are always more pleasurable than any plausible reality.

Somerset Maugham has a story called “Mirage,” about a man named Grosely whom Maugham knew when he was a medical student. Grosely was spending his time and money on wine and women, and eventually came up with a scam to get more money. He was caught, arrested, and kicked out of school. He became a kind of customs official in China and, desperate to get back to partying in London, was as corrupt as possible: “He was consumed by one ambition, to save enough to be able to go back to England and live the life from which he had been snatched as a boy.”

After 25 years, he did go back to England, and he did try to live the life he’d lived at nineteen. But he couldn’t, of course. London was different, and so was he, and it was all a massive disappointment. He started to think about China, and what a great place it had been, and what a great time he could have there with all the money he’d made. So he headed back. He got almost to China, but stopped just shy of it (in Haiphong). Maugham explains:

“England had been such a terrible disappointment that now he was afraid to put China to the test too. If that failed him he had nothing. For years England had been like a mirage in the desert. But when he had yielded to the attraction, those shining pools and the palm trees and the green grass were nothing but the rolling sandy dunes. He had China, and so long as he never saw it again he kept it.”

I read that story as a graduate student trying to write a dissertation, and it resonated. As long as I didn’t finish the dissertation, I could entertain outrageous fantasies about its reception, quality, and impact. Once submitted, it was what it was. It was passable. (And unpublishable.) It was not anything like what I’d imagined it could be.

I have felt that way about every single piece of writing since (including this blog post)—I’m hesitant to finish it because of not wanting to give up the dream of what it could be.

Every writer of any genre has a lot of partially written things. I knew a poet who kept a drawer in his desk into which he dropped pieces of paper with lines that had come to him and seemed good, though he didn’t yet have the rest of the poem. I don’t know if he ever pulled any of those pieces of paper out and wrote the rest of the poem (he did publish quite a bit of poetry). There’s nothing wrong with having a lot of incomplete projects, and there are lots of good reasons to leave them incomplete.

I once pulled out a ten-year-old unsubmitted and unfinished piece of writing, revised it, and submitted it—it was published, and won an award. It took ten years for me to understand what that argument was really about, so leaving it unfinished for that long wasn’t a bad choice at all. There are others that will remain forever unfinished—also not a bad choice.

But there are times when one should just hit submit. The dreams may not come true, but there will be other pieces of writing about which we can dream.

I’m saying all this because I hope people who might be stuck in their writing will find it encouraging. Just hit submit.