John Muir and environmental demagoguery

One of the most controversial claims I make about demagoguery is that it isn’t necessarily harmful. When I make that argument, it’s common for someone to disagree with me by pointing out that some specific instance of demagoguery is harmful. But that isn’t a refutation of my argument, because I’m not arguing for a binary of demagoguery being always or never harmful. I’m saying that not every instance of demagoguery is necessarily harmful. Whether demagoguery is harmful depends, I think, on where it lies on multiple axes: how demagogic the text is; how powerful the media promoting the demagoguery are; how widespread that kind of demagoguery is.

(Yeah, yeah, I know, that means a 3d map, but I honestly think you need all three axes.)

And the best way to talk about harmless demagoguery is to talk more about one of the first examples of a failed deliberative process that haunted me. One spring, when I was a child, my family went to Yosemite Valley in Yosemite National Park. My family mostly tried (and failed) to teach one another bridge, and I wandered around the emerald valley. Having grown up in semi-arid southern California, I found the forested walks magical, and I was enchanted. One evening, my mother took me to a campfire, hosted by a ranger, who told the story of John Muir, a California environmentalist crucial in the preservation of Yosemite National Park. The last part of the ranger’s talk was about Muir’s final political endeavor, his unsuccessful attempt to prevent the damming and flooding of the Hetch Hetchy Valley, a valley the ranger said was as beautiful as the one by which I had been entranced. The ranger presented the story as a dramatic tragedy of Good (John Muir) versus Evil (the people who wanted to dam and flood the valley), with Evil winning and Muir dying of a broken heart. I was deeply moved, and fascinated. And years later, I would come back to the story when trying to think about whether and how people can argue together on issues with profound disagreement.

The ranger had told the story of Good versus Evil, but that isn’t quite right, in several ways. For one thing, it wasn’t a debate with only two sides (something I have since discovered to be true of most political issues). In this case, it is more accurate to say that there were three sides: the corrupt water company currently supplying San Francisco, which wanted to prevent San Francisco from getting any publicly-owned water supply; the progressive preservationists like John Muir, who wanted San Francisco to get an outside publicly-owned water supply, but not the Hetch Hetchy; and the progressive conservationists like Gifford Pinchot or Marsden Manson, who wanted an outside publicly-owned water supply that included the Hetch Hetchy.

And a little background on each of the major figures in this issue. Gifford Pinchot was head of the Forest Service, with close political ties to Theodore Roosevelt. Born in 1865, he was a strong advocate of conservation—that is, keeping large parts of land in public ownership, sustainable foresting practices, and what is called “multiple use.” The principle of conservation (as opposed to preservation) is that public lands should be available to as many different uses as possible, such as foresting, hunting, camping, and fishing. The consensus among scholars is that Pinchot’s support for the Hetch Hetchy dam was crucial to its success.

Marsden Manson was far less famous than Pinchot. Born in 1850, he was an engineer (trained at Berkeley) and a member of the Sierra Club who had camped in Yosemite, and, from 1897 till 1912, he worked as an engineer for the City of San Francisco, first serving on the San Francisco Drainage Committee, then in the Public Works Department, and finally as City Engineer. It was in that capacity that he wrote the pamphlet I’ll talk about in a bit. He was an avid conservationist.

John Muir is probably the most famous of the people heavily involved in the controversy, and still a hero among environmentalists. Born in Scotland in 1838, he emigrated with his family to Wisconsin when he was around ten. He arrived in California in 1868, and promptly went to Yosemite Valley (which was not yet a national park). He stayed there for several years, writing about the Sierras in what would become articles in popular magazines. His elegant descriptions of the beauties of the Sierra Nevada mountains were influential in persuading people to preserve the area, creating Yosemite National Park. He was the first President of the Sierra Club (formed in the early 1890s), which is still a powerful force in environmentalism. Muir was a preservationist, believing that some public lands should be preserved in as close to a wilderness state as possible.

Perhaps the most important character in the controversy is the Hetch Hetchy Valley. Part of the Yosemite National Park, it was less accessible than Yosemite Valley, and hence far less famous. Like many other valleys in the Sierra Nevada mountains, it was formed by glaciers. Two of its waterfalls are among the tallest waterfalls in North America.

The story the ranger told was one of right versus wrong, good versus evil. And even though I disagree with the stance Pinchot and Manson took, believe that the Hetch Hetchy Valley should not have been dammed, and think they used some pretty sleazy rhetorical and political tactics to make it happen, I don’t think they were bad people. I don’t think they were selfish or greedy, or even that they didn’t appreciate nature. I think they believed that what they were doing was right, they had some good arguments and good reasons, and they felt justified in some troubling rhetorical means because they believed their ends were good. I don’t think they were Evil.

After all, San Francisco had long been victimized by a corrupt water company, the Spring Valley Company, with a demonstrated record of exploiting users (particularly during the aftermath of the 1906 earthquake). San Francisco had a legitimate need for a new water supply, and the argument that such public goods should not be subject to the profit motive is a sensible one. The proponents of the dam argued that turning the valley into a reservoir would increase the public’s access to it, and the ability of the public to benefit. The dam, it was promised, would provide electric power that would be a public utility (that is, not privately owned), thereby benefiting the public directly. Thus, both the preservationists and conservationists were concerned about the public good, but they proposed different ways of benefiting the public.

Although John Muir was President and one of the founders of the Sierra Club, not everyone in the organization was certain the dam was a mistake, and so the issue was put to a vote—the Sierra Club at that point had both conservationists and preservationists. Muir wrote the case against, a pamphlet called “The Hetch Hetchy Valley,” which, along with Manson’s argument, “Statements of San Francisco’s Side of the Hetch Hetchy Reservoir Matter,” was distributed to members of the Sierra Club, and they were asked to vote.

For his pamphlet, Muir reused much of an 1873 article about Hetch Hetchy, originally written to persuade people to visit the Sierras. He kept much (but not all) of his highly poetical description of the Hetch Hetchy Valley, especially its two falls. His argument throughout the pamphlet is that the valley is beautiful, unique, and sacred; it wasn’t until the end of the pamphlet that he added a section specifically written for the dam controversy, and in that part he resorted to demagoguery, painting his opponents as motivated by greed and an active desire to destroy beauty, in the same category as the Merchants in the Temple of Jerusalem and Satan in the Garden of Eden: “despoiling gainseekers, — mischief-makers of every degree from Satan to supervisors, lumbermen, cattlemen, farmers, etc., eagerly trying to make everything dollarable […] Thus long ago a lot of enterprising merchants made part of the Jerusalem temple into a place of business instead of a place of prayer, changing money, buying and selling cattle and sheep and doves. And earlier still, the Lord’s garden in Eden, and the first forest reservation, including only one tree, was spoiled.” Muir presented the conflict as “part of the universal battle between right and wrong,” and characterized his opponents’ arguments as “curiously like those of the devil devised for the destruction of the first garden — so much of the very best Eden fruit going to waste; so much of the best Tuolumne water.” Muir called his opponents “Temple destroyers, devotees of ravaging commercialism,” saying they “seem to have a perfect contempt for Nature, and, instead of lifting their eyes to the mountains, lift them to dams and town skyscrapers.” And he ended the pamphlet with the rousing peroration:

Dam Hetch-Hetchy! As well dam for water-tanks the people’s cathedrals and churches, for no holier temple has ever been consecrated by the heart of man. (John Muir Sierra Club Bulletin, Vol. VI, No. 4, January, 1908)

Muir’s argument is demagoguery—he takes a complicated situation (with at least three different positions) and divides it into a binary of good versus evil people. The bad people don’t have arguments; they have bad motives.

But this, too, is a controversial claim on my part, and some people actually get really angry with me for “criticizing” Muir. The common response is that I shouldn’t criticize him because he was a good man and he was fighting for a good cause. In other words, the world is divided into good and bad people, and we shouldn’t criticize good people on our side. And I reject every part of that argument. I think we should criticize people on our side, especially if we agree with their ends (and especially if we’re looking critically at an argument in the past), because that’s how we learn to make better arguments. And I’m not even criticizing Muir in the sense those people mean—they mean I’m saying negative things about him, and that I believe he should have done things differently. The assumption is that demagoguery is bad, so by saying he engaged in demagoguery I am saying he was a bad person.

Like Muir’s argument, that presumes a binary (or even a continuum) between good and bad people. Whether there really is such a binary I don’t know, but I’m certain that it isn’t relevant. The debate wasn’t split into good and bad people, and we don’t have to make our heroes untouchable.

And, besides, I’m not criticizing Muir in the sense of saying he did the wrong thing. I’m not sure he did. His demagoguery did no particular harm. While his text (especially the last part) is demagoguery, and he was a powerful rhetor at the time, the kind of demagoguery in which he was engaged (against conservationists) wasn’t very widespread, so he wasn’t contributing to a broad cultural demonizing of some group. And I’m not even sure that his demagoguery did any harm (or benefit) to the effectiveness of his argument.

Muir was trying to get the majority of people in the Sierra Club—perhaps even all of them—to condemn the Hetch Hetchy scheme on preservationist grounds, so he already had the votes of preservationists like himself. What he had to do rhetorically is to move conservationists (or, at least, people drawn to that position) over to the preservationist side, at least in regard to the Hetch Hetchy Valley.

A useful step in an argument is identifying what, exactly, the issue is (or the issues are): why are we disagreeing? Called the “stasis” in classical rhetorical theory, this “hinge” of an argument points to a paradox: a productive disagreement requires agreement on several points, including on the geography of the argument: what is at the center, how broad an area can/should the argument cover, what areas are out of bounds? The stasis is the main issue in the argument, and arguments often go wrong because people disagree about what it is. In the case of the Hetch Hetchy, an ideal argument about the topic would be about whether damming and flooding that valley was the best long-term option for everyone who uses the valley—such a debate would require that people talk honestly and accurately about the actual costs and the various options, and as usefully as possible about the benefits (of all sorts) to be had from preserving the valley for camping (a big issue in California, where camping is very popular).

It’s conventional in rhetoric to say that you have to argue from your opposition’s premises to persuade your opposition, and that would have necessitated Muir arguing on the premises that informed conservation.

Muir’s rhetorical options included:

    1. condemning conservationism in the abstract, and trying to persuade his conservationist audience to abandon an important value;
    2. arguing that conservationism is not a useful value in this particular case, and that this is a time when preservationism is a better route;
    3. arguing that damming and flooding the valley does not really enact conservationist values (e.g., it’s actually expensive).

But, to pursue any of those strategies effectively, he’d have to make the case on the conservationist premise that it’s appropriate to think about natural resources in terms of costs and benefits. And Muir’s stance about nature—his whole career—was grounded in the perception that such a way of looking at nature is unethical.

Muir paraphrases (in quotes) the conservationist mantra: “Utilization of beneficent natural resources, that man and beast may be fed and the dear Nation grow great.” While I’ve never found any conservationist text with that precise wording, it’s a fair representation of the basic principle of conservation; i.e., “greatest good for the greatest number.” And, certainly, conservationists did (and do) believe that there is no point in preserving any wilderness areas—all forests should be harvested, all lakes should be used, all areas should be open to hunting. But they held that view not out of a desire for financial gain so much as from a different (and I would say wrong-headed) perception of how to define “the public.”

The conservationist argument in this case was pretty much bad faith, in that they claimed that they would improve the beauty of the valley by making it a lake. Muir argued they would destroy it. I agree with Muir, as it happens, and so my argument is not that Muir was factually wrong; the valley was destroyed by the damming. I also think some of the dam proponents, specifically Manson, knew that it would be destroyed, and that Manson was lying when he described a road, increased camping, and other features that, as an engineer, he must have known were impossible. But many of the people drawn to the conservationist plan didn’t know that Manson was describing technologically impossible conditions, and they believed the proponents’ argument that the resulting reservoir would not only benefit San Franciscans (by providing safe, cheap water and electric power) but would have no impact on camping; it would, the conservationists claimed, increase the accessibility of the area without interfering with the beauty of the valley at all. Again, that isn’t true, but it’s what people believed. And part of Aristotle’s point about rhetoric, and its reliance on the enthymeme, is that rhetoric begins with what people believe.

Manson’s response was fairly straightforward, and grounded, he insisted repeatedly, on facts. He argued:

    • San Francisco owned the valley floor.
    • Construction would not begin on the Hetch Hetchy dam until and unless San Francisco first developed Lake Eleanor (a water source not disputed by the preservationists) and then found that water source inadequate.
    • A photo he presented showed what the lake would look like when dammed and flooded—very little of the valley flooded, with no obstruction of the falls that Muir praised so heavily, a road around the edge enabling visitors to see more of the valley—so, he said, the valley will be more beautiful, reflecting the magnificent granite walls.
    • Keeping the reservoir water pure will not inhibit camping in any way.
    • The Hetch Hetchy plan is the least expensive option, and it will provide energy, thereby breaking the current energy monopoly.

Muir’s arguments, he says, “are not in reality based upon true and correct facts” (435).

Marsden Manson was City Engineer for San Francisco, and had done thorough reports on the issue. And so he had to know that almost all of what he was saying was “not in reality based upon true and correct facts.” San Francisco had bought the land, but, since it was within a national park, the seller had no right to sell it. Construction would begin immediately on the dam, flooding the entire valley, making the entire valley inaccessible, including the famous falls. It was not possible to build the roads that Manson drew on the photo and, being an engineer, he must have known that. The reservoir inhibited camping, and, most important, the Hetch Hetchy plan was the most expensive option available to San Francisco. Manson had muddled the numbers to make it appear less expensive.

In other words, either Manson lied, or he was muddled, uninformed, bad at arithmetic, and not a very good engineer.

Manson’s motives in all this are complicated, and ultimately irrelevant. He may have expected to benefit personally by the approval of the dam project, as he may have thought he would build it. But it would have been a benefit of glory but not money; I’ve never read anything to suggest that he was motivated by anything other than a sense that dominating nature is glorious, and that public projects providing water and power are better than preserving valleys. (He is reputed to have suggested damming and flooding Yosemite Valley.)

In other words, what presented itself as the pragmatic option was just as ideologically driven as what was rejected as the emotional one (I think the same thing happens now with arguments about the death penalty, welfare “reform,” the war on drugs, foreign policy, the deficit—there is a side that manages to be taken as more practical, but it might actually be the most ideologically driven).

Muir’s rhetorical options were limited by his opponent, an engineer, making claims about engineering issues that neither Muir nor his supporters had the expertise to refute. It took years for someone to look at the San Francisco reports and determine that the numbers were bad; preservationists didn’t know (and, presumably, many supporters of the dam didn’t know) that the numbers were misleading, and it was the most expensive option.

But would Muir have argued on such grounds anyway? To argue on the grounds of cost would have confirmed the major premise that public projects should be determined by cost—to say that the Hetch Hetchy dam should not be built because it was the most expensive option would seem to confirm the perception that you can make natural cathedrals “dollarable,” in Muir’s words. In other words, Muir rejected the very terms by which the conservationist argument was made—he rejected the premises. To argue from premises (except in rare circumstances) seems to confirm them, and so, in order to win the Hetch Hetchy argument, he would have had to argue against what he had spent a lifetime arguing for: that we should not look at nature in terms of money. Wilderness areas are, he insisted, sacred. And so he railed against his opposition.

As I mentioned above, I’m often attacked by people who think I’m attacking Muir. And I think that misunderstanding arises because of a particular perception of what the discipline of rhetoric is for: rhetorical analysis is often seen as implicitly normative; we do an analysis to say what a person should do or should have done. So, to say that Muir’s rhetorical strategies didn’t work is to say his rhetoric was bad, and it should have been different. Coupled with the notion that good people promote good things, if I say that Muir’s rhetoric was “demagoguery,” then I am saying he cannot have been a good person. There is, here, a theory of identity: that people are either good or bad; that good people say good things, and that bad people say bad things; that demagoguery is something only bad people do. That whole model of discourse and identity is wrong in too many ways to count, and I am not endorsing it.

I think Muir was a good man—he is a personal hero of mine—but that doesn’t mean he was perfect, and it certainly doesn’t mean we can’t learn from him. Muir did well within the Sierra Club (the Sierra Club vote was about 80% on Muir’s side and 20% in favor of the dam), but he ultimately lost the argument. And I think what we learn from his failure to persuade all conservationists to vote against the Hetch Hetchy project is not about Muir’s personal qualities or failings, but about rhetorical constraints and models of persuasion.

I’m arguing that, for Muir to have persuaded his opposition, he would have had to rely on premises that he rejected. This is sometimes called the “sincerity problem” in rhetoric: to what extent, and under what circumstances, should we make arguments we don’t believe in order to achieve an end in which we do believe? Muir didn’t argue from insincere premises; that may have weakened his effectiveness in the moment, but it definitely strengthened his effectiveness in the long run. His Hetch Hetchy pamphlet continues to be powerfully motivating for people, perhaps more motivating than it would have been had he compromised his rhetoric in order to be effective in the short term. Muir’s demagoguery did no harm, and it may have even done some good. Demagoguery isn’t necessarily harmful.

Demagoguery and Democracy

[image source: https://en.wikipedia.org/wiki/Hetch_Hetchy#/media/File:Hetch_Hetchy_Valley.jpg]

On career choices as mingling in Burke’s parlor

On Wednesday, I sent off the scholarly version of the demagoguery argument. It isn’t the book I once planned (that would involve a deeply theoretical argument about identity and the digital world), but it’s the one I really wanted to write, and the one that will (I think) reach more people than that other one would have.

And it’s the last scholarly book I’ll write. I intend to spend the rest of my career trying to solve the interesting intellectual problem of making scholarly concepts and debates more accessible to non-academics. But that isn’t because I reject highly specialized academic writing as, in any way, a bad thing.

I have no problem with highly theoretical and very specialized books. My books have barely grazed the 1000 sales point, and that’s pretty good for a scholarly book. People have told me that something I’ve written has had an impact on their scholarship, pedagogy, or program administration, so I’m really happy with my record as a scholar.

And I’m happy with the record of people who have sold both more and less because measuring impact is so very difficult. Publishing a book with an academic press is an extraordinary achievement, and measuring the impact of such books accurately is nigh impossible—a really powerful book is shared in pirated pdfs, checked out of libraries, passed from one person to another. Sales and impact are orthogonal in academia.

If you study the history of ideas even a little, you have to know that what seemed major in the moment was sometimes just a trend (like mullets) and sometimes a sea change (like the synthesizer). No one reads Northrop Frye anymore, though he was a big deal at one moment, and yet Hannah Arendt, who was also a big deal around the same time, is still in the conversation. And there are all those people who weren’t big deals in their era, but later came to have tremendous impact, such as Mikhail Bakhtin.

Some trade books on scholarly issues have had extraordinary sales (such as Mortimer Adler’s writings), but it’s hard to know what impact they had. Madison Grant’s racist book The Passing of the Great Race had poor sales, but appears to have had a lot of impact. And there are lots of trade books that have come and gone without leaving any impact, so there’s no good reason to conclude that trade books necessarily have more impact than scholarly ones. I don’t think there are a lot of (any?) necessary conclusions one can draw about whether trade or scholarly books have more impact, are more or less important, or are more or less valuable as intellectual activity.

I have always loved Kenneth Burke’s analogy of the parlor for what it means to be interested in major questions. You show up at a party, he says, and it’s been going on for a while, and you find some conversation that seems interesting. You listen for a while, and then you take a side or point out something new. You get attacked and defended, and some people leave the conversation, and others join, and eventually you too leave. And it goes on, with other people taking sides that may or may not have to do with what you were arguing.

What Burke fails to mention is that, if it’s a good party, there are a lot of conversations going on. You might choose to leave that particular conversation, but not leave the party.

I have loved writing scholarly pieces (although I didn’t initially think I would), and my work has placed me in some particular conversations. I’ve moved from one conversation to another, but all on the side of the parlor engaged in very scholarly arguments. I’d like to leave that side of the parlor, not because it’s a bad one—it’s a wonderful one—but because it’s a party with a lot of conversations going on. I’d like to mingle.

I think a lot of discussions of the public intellectual rest on odd assumptions that presume binaries—that either specialized or public scholarship is good, for instance. Scholarship that speaks with authority to a small group is neither better nor worse than scholarship that reaches a broad audience—it’s just a different conversation in Burke’s parlor. And I’m going to wander over there for a bit.

 

The easy demagoguery of explaining their violence

When James Hodgkinson engaged in both eliminationist and terroristic violence against Republicans, factionalized media outlets blamed his radicalization on their outgroup (“liberals”). In 2008, when James Adkisson committed eliminationist and terroristic violence against liberals, actually citing in his manifesto things said by “conservative” talk show hosts (namechecking some of the ones who blamed liberals for Hodgkinson), those media outlets and pundits neither acknowledged responsibility nor altered their rhetoric.[1]

That’s fairly typical of rabidly factional media: if the violence is on the part of someone who can be characterized as them (the outgroup), then outgroup rhetoric obviously and necessarily led to that violence. That individual can be taken as typical of them. If, however, the assailant was ingroup, then factionalized media either simply claimed that the person was outgroup (as when various media tried to claim that a neo-Nazi was a socialist and therefore lefty), or they insisted this person be treated as an exception.

That’s how ingroup/outgroup thinking works. The example I always use with my classes is what happens if you get cut off by a car with bumper stickers on a particularly nasty highway in Austin (you can’t drive it without getting cut off by someone). If the bumper stickers show ingroup membership, you might think to yourself that the driver didn’t see you, or was in a rush, or is new to driving. If the bumper stickers show outgroup membership, you’ll think, “Typical.” Bad behavior is proof of the essentially bad nature of the outgroup, while bad behavior on the part of ingroup members is not. That’s how factionalized media works.

So, it’s the same thing with ingroup/outgroup violence and factionalized media (and not all media is factionalized). For highly factionalized right-wing media, Hodgkinson’s actions were caused by and the responsibility of “liberal” rhetoric, but Adkisson’s were not the responsibility of “conservative” rhetoric. For highly factionalized lefty media, it was reversed.

That factionalizing of responsibility is an unhappy characteristic of our public discourse; it’s part of our culture of demagoguery, in which the same actions are praised or condemned not on the basis of the actions, but on whether it’s the ingroup or outgroup that does them. If a white male conservative Christian commits an act of terrorism, conservative media won’t call it terrorism, won’t mention his religion or politics, and will generally talk about mental illness; if someone even nominally Muslim commits the same act, they call it terrorism and blame Islam. In some media enclaves, the narrative is flipped, and only conservatives are acting on political beliefs. And factional media outlets of all kinds will condemn the other side for “politicizing” the incident.

While I agree that violent rhetoric makes violence more likely, the cause and effect is complicated, and the current calls for a more civil tone in our public discourse are precisely the wrong solution. We are in a situation in which public discourse is entirely oriented toward strengthening our ingroup loyalty and our loathing of the outgroup. And that is why there is so much violence now. It isn’t because of tone. It isn’t because of how people are arguing; it’s because of what people are arguing.

To make our world less violent, we need to make different kinds of arguments, not make those arguments in different ways.

Our world is so factionalized that I can’t even make this argument with a real-world example, so I’ll make it with a hypothetical one. Imagine a world in which some media insist that all of our problems are caused by squirrels. Let’s call them the Anti-Squirrel Propaganda Machine (ASPM). They persistently connect the threat of squirrels to end-times prophecies in religious texts, and they relentlessly connect squirrels to every bad thing that happens. Any time a squirrel (or anything that kind of looks like a squirrel to some people, like a chipmunk) does something harmful, it’s reported in these media; any good action is met with silence. These media never report any time that an anti-squirrel person does anything bad. They declare that the squirrels are engaged in a war on every aspect of their group’s identity. They regularly talk about the squirrels’ war on THIS! and THAT! Trivial incidents (some of which never happened) are piled up so that consumers of that media have the vague impression of being relentlessly victimized by a mass conspiracy of squirrels.

Any anti-squirrel political figure is praised; every political or cultural figure who criticizes the attack on squirrels is characterized as pro-squirrel. After a while, even simply refusing to say that squirrels are the most evil thing in the world and that we must engage in the most extreme policies to cleanse ourselves of them is showing that you are really a pro-squirrel person. So, in these media, there is anti-squirrel (which means the group that endorses the most extreme policies) and pro-squirrel. This situation isn’t just ingroup versus outgroup, because the ingroup must be fanatically ingroup, so the ingroup rhetoric demands constant performance of fanatical commitment to ingroup policy agendas and political candidates.

If you firmly believe that squirrels are evil (and chipmunks are probably part of it too), but you doubt whether the policy being promoted by the ASPM is really the most effective one, you will get demonized as someone trying to slow things down, as not sufficiently loyal, and as basically pro-squirrel. Even trying to question whether the most extreme measures are reasonable gets you marked as pro-squirrel. Trying to engage in policy deliberation makes you pro-squirrel.

We cannot have a reasonable argument about what policy we should adopt in regard to squirrels, because even asking for an argument about policy means that you are pro-squirrel. That is profoundly anti-democratic. It is un-American insofar as the very principles of how the Constitution is supposed to work show a valuing of disagreement and difference of opinion.

(It’s also easy to show that it’s a disaster, but that’s a different post.)

ASPM media will, in addition, insist on the victimization narrative, and also on the “massive conspiracy against us” argument, but that isn’t really all that motivating. As George Orwell noted in 1984, hatred is more motivating when it’s directed against an individual, and so these narratives end up fixating on a scapegoat. (Right now, for the right it’s George Soros, and for the left it’s Trump.) There can be institutional scapegoats—Adkisson tried to kill everyone in a Unitarian church because he’d believed demagoguery that said Unitarianism is evil.[1]

Inevitably, the more someone lives in an informational world in which they are presented as waging a war of extermination against us, the more that person will feel justified in using violence against them. If that person typically uses violence to settle disagreements, and there is easy access to weapons, it will end in violence against whatever institution, group, or individual that person has been persuaded is the evil incubus behind all of our problems.

At this point, I’m sure most readers are thinking that my squirrel example was unnecessarily coy, and that it’s painfully clear that I’m not talking about some hypothetical example about squirrels but the very real examples of the antebellum argument for slavery and the Stalinist defenses of mass killings of kulaks, most of the military officer class, and people who got on the wrong side of someone slightly more powerful.

And, yes, I am.

The extraordinary level of violence used to protect slavery as an institution (or that Stalin used, or Pol Pot, or various other authoritarians) was made to seem ordinary through rhetoric. People were persuaded that violence was not only justified, but necessary, and so this is a question of rhetoric—how people were persuaded. But, notice that none of these defenses of violence have to do with tone. James Henry Hammond, who managed to enact the “gag rule” (that prohibited criticism of slavery in Congress) didn’t have a different “tone” from John Quincy Adams, who resisted slavery. They had different arguments.

Demagoguery—rhetoric that says that all questions should be reduced to us (good) versus them (evil)—if given time, necessarily ends up in purifying this community of them. How else could it end? And it doesn’t end there because of the tone of dominant rhetoric. It ends there because of the logic of the argument. If they are at war with us, and trying to exterminate us, then we shouldn’t reason with them.

It isn’t a tone problem. It’s an argument problem. It doesn’t matter if the argument for exterminating the outgroup is made with compliments toward them (L. Frank Baum’s arguments for exterminating Native Americans), with bad numbers and the stance of a scientist (Harry Laughlin’s arguments for racist immigration quotas), or with religious bigotry masked as rational argument (Samuel Huntington’s appalling argument that Mexicans don’t get democracy).

In fact, the most effective calls for violence allow the caller plausible deniability—will no one rid me of this turbulent priest?

Lots of rhetors call for violence in a way that enables them to claim they weren’t literally calling for violence, and I think the question of whether they really mean to call for violence isn’t interesting. People who rise to power are often really good at compartmentalizing their own intentions, or saying things when they have no particular intention other than garnering attention, deflecting criticism, or saying something clever. Sociopaths are very skilled at perfectly authentically saying something they cannot remember having said the next day. Major public figures get a limited number of “that wasn’t my intention” cards for the same kind of rhetoric—after that, it’s the consequences and not the intentions that matter.

What matters is that whether it’s individual or group violence, the people engaged in it feel justified, not because of tone, but because they have been living in a world in which every argument says that they are responsible for all our problems, that we are on the edge of extermination, that they are completely evil, and therefore any compromise with them is evil, that disagreement weakens a community, and that we would be a better and stronger group were we to purify ourselves of them.

It’s about the argument, not the tone.

[A note about the image at the beginning: this is one of the stained glass windows in a major church in Brussels celebrating the massacre of Jews. The entire incident was enabled by deliberately inflammatory us/them rhetoric, but was celebrated until the 1960s as a wonderful event.]

[1] For more on Adkisson’s rhetoric, and its sources, see David Neiwert’s The Eliminationists (https://www.amazon.com/Eliminationists-Hate-Radicalized-American-Right/dp/0981576982)

For more about demagoguery: https://theexperimentpublishing.com/catalogs/fall-2017/demagoguery-and-democracy/

Making sure the poor don’t get any food they don’t deserve

“But when thou makest a feast, call the poor, the maimed, the lame, the blind”

In a recent interview, Kellyanne Conway said that “able-bodied” people who will lose Medicaid under the GOP health plan should “go find employment” and then get “employer-sponsored benefits.” Critics of Conway presented evidence that large numbers of adults on Medicaid do have jobs, as though that would prove her wrong. But that argument won’t work with the people who like the GOP plan, because their answer is that those people should get better jobs. The current GOP plan regarding health care is based on the assumption that benefits like health care should be restricted to working people.

For many, this looks like hardheartedness toward the poor and disadvantaged—exactly the kind of people embraced and protected by Jesus—so many people on the left have been throwing out the accusation of hypocrisy. That the same people who are, in effect, denying healthcare to so many have protected it for themselves seems, to many, to be the merciless icing on the hateful cake.

And so progressives are attacking this bill (and the many in the state legislatures that have the same intent and impact) as heartless, badly-intentioned, cynical, and cruel. And that is exactly the wrong way to go about this argument. The category often called “white evangelical” tends to be drawn to the just world hypothesis and prosperity gospel, and those two (closely intertwined) beliefs provide the basis for the belief that public goods should not be equally accessible (let alone evenly distributed) because, they believe, those goods should be distributed on the basis of who deserves (not needs) them more. And they believe that Scripture endorses that view, so they are not hypocrites—they are not pretending to have beliefs they don’t really have. This isn’t an argument about intention; this is an argument about Scriptural exegesis.

Progressives will keep losing the argument about public policy until we engage that Scriptural argument. People who argue that the jobless, underemployed, and government-dependent should lose health care will never be persuaded by being called hypocrites because they believe they are enacting Scripture better than those who argue that healthcare is a right.

1. The Just World Hypothesis and Prosperity Gospel

There are various versions of the prosperity gospel (and Kate Bowler’s Blessed elegantly lays them out), but they are all versions of what social psychologists call “the just world hypothesis.” That hypothesis is a premise that we live in a world in which people get what they deserve within their lifetimes—people who work hard and have faith in Jesus are rewarded. In some versions, it’s well within what Jesus says: that God will give us what we need. In others, however, it’s the ghost of Puritanism (as Max Weber called it) that haunts America: that wealth and success are perfect signs of membership in the elect. And it’s that second version that matters for understanding current GOP policies.

In that version, in this life, people get what they deserve, so that good people get and deserve good things, and bad people don’t deserve them—it is an abrogation of God’s intended order to allow bad people to get good things, especially if they get those good things for free. For people who believe that God perfectly and visibly rewards the truly faithful, there is a perfect match between faith and the goods such as health and wealth. People with sufficient faith are healthy and wealthy, and, because they have achieved those things by being closer to God, they deserve more of the other goods, such as access to political power. Rich people are just better, and their being rich is proof of their goodness. So, it’s a circular argument–good people get the good things, and that must mean that people with good things are good.

I would say that’s an odd reading of Scripture, but no odder than the defenses of slavery grounded in Scripture, nor of segregation, nor of homophobia. All of those defenders had their proof-texts, after all. And, in each case, the people who cited those texts and defended those practices had a conservative (sometimes reactionary) ideology. They positioned themselves as conserving a social order and set of practices they sincerely believed was intended by God, as against liberal, progressive, or “new” ways of reading Scripture.

[And here a brief note—they often didn’t know that their own readings were very new, but that’s a different post.]

Because they were reacting against the arguments they identified as liberal (or atheist), I’ll call them reactionary Christians for most of this post, and then in another post explain what’s wrong with that term.

In some cultures, political ideology and identity are identical, so that a person with a particular political belief automatically identifies everyone with that belief as in the category of “good person,” and anyone who doesn’t share that belief is a “bad person.” We’re in that kind of culture.

That easy equation of “believes what I do” and “good person” is enhanced by living within an informational enclave. In informational enclaves, a person only hears information that confirms their beliefs—antebellum Southern newspapers were filled with (false) reports of abolitionist plots, for instance—so it would sincerely seem to their readers as though “everyone” agreed that abolitionists were trying to sow insurrection. In an informational enclave, “everyone” agrees that the Jews stab the host for no particular reason (the subject of the stained glass above—a consensus that resulted in massacre).

Informational enclaves are self-regulating in that anyone who tries to disrupt the consensus is shamed, expelled, perhaps even killed. By the 1830s, it was common for slave states to impose the death penalty for advocating abolition, and “advocating abolition” might be understood as “criticizing slavery.” American Protestant churches split so that Southern churches could guarantee they would not have a pastor who might condemn slavery (the founding of the SBC, for instance), and proslavery pastors could rain down on their congregations proof-texts defending the actually fairly bizarre set of practices that constituted American slavery.

As Stephen Haynes has shown, the reliance of those pastors on an odd reading of Genesis IX became a Scriptural touchstone for defending segregation.

Southern newspapers were rabidly factional in the antebellum era, and (with a few exceptions) pro-segregation (or silent on segregation) in the Civil Rights era. (This was not, by the way, “true of both sides”: the major abolitionist newspaper, The Liberator, often published the full text of proslavery arguments.) Because those proof-texts were piled up as defenses, and reactionary Christianity was hegemonic in various areas, many people simply knew that three kings had visited the baby Jesus, that those three kings corresponded to the three races, and that the “black” race was condemned to slavery by Noah’s curse.

If you’d like to see how hegemonic that (problematic) reading of Scripture was, look at older nativity scenes, and you will see that there is always a white European, someone vaguely Semitic, and an African. Ask yourself: how many wise men visited Jesus? Try to prove that number through Scripture.

That whole history of reactionary Christianity is ignored, and even the SBC has tried to rewrite its own history, not acknowledging the role of slavery in their founding. My point is simply that, when a method of interpreting Scripture becomes ubiquitous in a community, then people don’t realize that they’re interpreting Scripture through a particular lens—they think they’re just reading what is there.

For years, the story of Sodom was taken as a condemnation of homosexuality, but there is really nothing about homosexuality in it—the Sodomites were more commonly condemned for oppressing the poor. There are rapes in it, and one of them would have been homosexual, but there is no indication that homosexuality was accepted as a natural practice in the community. Yet, for years, the story of Sodom was thumped on the podium as though it obviously condemned all same-sex relationships.

For readers of The New York Times, The Nation, or other progressive outlets, the Scriptural argument over homosexuality was under the radar, but it was crucial to how far we’ve gotten with the civil rights of people with sexualities stigmatized by reactionary Christians. The Scriptural argument about queer sexuality was always muddled—Sodom wasn’t really about gay sex, the word “homosexuality” is nowhere in Scripture, people who cite Leviticus about men lying with each other get that sentiment tattooed on themselves while wearing mixed fibers, and Paul was opposed to sex in general.

Reactionary Christians managed to promote their muddled view as long as no one raised questions about exegesis, and the Christian Left raised those questions over and over. And now even mainstream reactionary churches who argue that Scripture condemns homosexuality have abandoned the story of Sodom as a proof text. That success can be laid at the feet of progressive Christians.

One thing that turned large numbers of people, I think, was the number of bloggers, popular Christian authors, and pastors making the more sensible Scriptural argument: there isn’t a coherent method of reading Scripture that demonizes queer sexuality and allows the practices reactionary Christians want to allow (such as non-procreative sex, divorce, wildflower mixes, corduroy, oppressing the poor).

Similarly, an important realm of the Civil Rights movement was the one in which progressive Christians debated the Scriptural argument. One of the more appalling “down the memory hole” moments in American history is the role of reactionary Christians in civil rights. Segregation was a religious issue, supported by Genesis IX and various other texts (about God putting peoples where they belong, and all the texts about mixing). Even “moderate” Christians, like those who opposed King, and to whom he responded in his letter, opposed integration.

That’s important. The major white churches in the South supported segregation, and all of the reactionary ones did. The opponents of segregation (like the opponents of slavery) were progressive Christians, sometimes part of organizations (like the black churches) and sometimes on the edge of being disavowed by their organizations. And that is obscured, sometimes deliberately, as when reactionary Christians try to claim that “Christianity” was on the side of King—no, in fact, reactionary Christianity was on the side of segregation.

Right now, there is a complicated genus-species fallacy among many reactionary Christians: they are trying to claim the accomplishments of people like Jesse Jackson, Martin Luther King, Jr., and Stokely Carmichael on the grounds that King was Christian, while ignoring that their churches and leaders disavowed and demonized those people (and, in the case of Jackson and Carmichael, still do).

Reactionary Christianity has two major problems: one is a historical record problem, and the second, related, is an exegesis problem. They continually deny or rewrite their own participation in oppression, and they have thereby enabled the occlusion of the problems their method of exegesis presents. If their method of reading got them to support slavery and segregation, practices they now condemn, then their method is flawed. Denying the problems with their history enables them to deny the problems with their method.

Reactionary Christianity’s method of reading Scripture begins by assuming that the current cultural hierarchy is intended by God, that this world is just, and that everything they believe is right, and then goes in search of texts that will support those premises. And there is also a hidden premise that the world is easily interpretable, that uncertainty and ambiguity are unnecessary because they are signs of a weak faith, and that the world is divided into the good and the bad.

2. The Scriptural argument

The proof-text for the notion that poor people don’t deserve health care or other benefits is 2 Thessalonians 3:10, “For even when we were with you, this we commanded you: that if any would not work, neither should he eat.”

2 Thessalonians may or may not have been written by Paul (probably not), but it certainly contradicts what both Paul and Jesus said about how to treat the poor. There are far more texts that insist on giving without question, caring for the poor, tending to people without judging, and on humans not presuming to be God (that is, we are not perfect judges of good and evil, and our fall was precisely on the grounds of thinking we should be).

That we have a large amount of public policy resting on that single wobbly text of 2 Thessalonians 3:10 is concerning, but it isn’t new—the Scriptural arguments for slavery, segregation, and homophobia were and are similarly wobbly. The prosperity gospel has a very shaky Scriptural foundation, and the whole notion that Scripture supports an easy division into makers and takers isn’t any easier to argue than the readings that supported antebellum US practices regarding slavery.

Their reading of Scripture says that they should feel good about health insurance being restricted to people who have jobs (which is why Congress is cheerfully giving themselves benefits they’re denying to others—they see themselves as having earned those benefits by having the job of being in Congress). They can feel justified (in the religious sense) in cutting off people on Medicaid, those who are un- or underemployed, and those with pre-existing conditions because they believe that Scripture tells them that those people could simply stop being un- or underemployed, or have made different choices that wouldn’t have landed them on Medicaid, or could have prayed enough not to have those pre-existing conditions. They believe that they are, in this life, sitting by Jesus’ side and handing out judgments.

I think they’re wrong. But calling them hypocrites won’t work.

This is an argument about Scripture, and progressives need to understand that, as with other policy debates, progressive Christians will do some of the heavy lifting. And progressive Christians need to understand that it is our calling: to point, over and over, to Jesus’ passion for the poor and outcast, and to his insistence that the rewards of this world should never be taken as proof of much of anything.


King Lear and charismatic leadership

Recently, various highly factionalized media worked their audiences into a froth by reporting that New York’s “Shakespeare in the Park” had represented Julius Caesar as Trump. That these media were successful shows people are willing to get outraged on the basis of no information or misinformation. Shakespeare’s Caesar is neither a villain nor a tyrant.

And it’s the wrong Shakespeare anyway for a Trump comparison. Shakespeare was deeply ambivalent about what we would now consider democratic discourse (look at how quickly Marc Antony turns the crowd, or Coriolanus’ inability to maintain popularity). But he wasn’t ambivalent about leaders who insist on hyperbolic displays of personal loyalty. They are the source of tragedy.

The truly Shakespearean moment recently was Trump’s cabinet meeting, which he seemed to think would gain him popularity with his base, since it was his entire cabinet expressing perfect loyalty to him. And anyone even a little familiar with Shakespeare immediately thought of the scene in King Lear when Lear demands professions of loyalty. Trump isn’t Caesar; he’s Lear.

Lear’s insistence on loyalty meant that he rejected the person who was speaking the truth to him, and the consequence was tragedy. It isn’t exactly news, at least among people familiar with the history of persuasion and leadership, that leaders who surround themselves with people who make the leader feel great (or who worship the leader) make bad decisions. Ian Kershaw’s elegant Fateful Choices makes the point vividly, showing how leaders like Mussolini, Hitler, and Hirohito skidded into increasingly bad decisions because they treated dissent as disloyalty.

In business schools, this kind of leadership is called “charismatic,” and it is often presented as an unequivocal good—something that is surely making Max Weber (who initially described it in 1916) turn in his grave. Weber identified three sources of authority for leaders: traditional, legal, and charismatic; Hannah Arendt (the scholar of totalitarianism) added a fourth: someone whose authority comes from having demonstrated context-specific knowledge. Weber argued that charismatic leadership is the most volatile.

In business schools, charismatic leadership is praised because it motivates followers to go above and beyond; followers who believe in the leader are less likely to resist. And, while that might seem like an unequivocal good, it’s only good if the leader is leading the institution in a good direction. If the direction is bad, then disaster just happens faster.

Charismatic leadership is a relationship that requires complete acquiescence and submission on the part of the followers. It assumes that there is a limited amount of power available (thus, the more power that others have, the less there is for the leader to have). And so the charismatic leader is threatened by others taking leadership roles, pointing out her errors, or having expertise to which she should submit. It is a relationship of pure hierarchy, simultaneously robust and fragile, because it can withstand an extraordinary amount of disconfirming evidence (that the leader is not actually all that good, does not have the requisite traits, is out of her depth, is making bad decisions) by simply rejecting them; it is fragile, however, insofar as the admission of a serious flaw on the part of the leader destroys the relationship entirely. A leader who relies on legitimacy isn’t weakened by disagreement (and might even be strengthened by it), but a charismatic leader is.

Hence, leaders who rely on legitimacy encourage disagreement and dissent because that leader’s authority is strengthened by the expertise, contributions, and criticism of others, but charismatic leaders insist on loyalty.

Charismatic leadership is praised in many areas because it leads to blind loyalty, and blind loyalty certainly does make for an organization in which people work feverishly toward the leader’s ends. But what if those ends aren’t good?

Whether charismatic leadership is the best model for business is more disputed than best sellers on leadership might lead one to believe. There is no dispute, however, that it’s a model of leadership profoundly at odds with a democratic society. It is deeply authoritarian, since the authority of the leader is the basis of decision-making, and dissent is disloyalty.

Lear demanded oaths of blind loyalty, and, as often happens under those circumstances, the person who was committed to the truth wouldn’t take such an oath. And that person was the hero.

“Just Write!” and the Rhetoric of Self-Help

There is a paradox regarding the large number of scholars who get stalled in writing—and a large number do get stalled at some point (50% of graduate students drop out): they got far enough to get stalled because, for some long period of time, they were able to write. People who can’t write a second book, or a first one, or a dissertation, are people who wrote well enough and often enough to get to the point of needing to write a dissertation, first book, second book, grant, and so on. So, what happened?

The advice they’re likely to be given is, “Just write.” And the reason we give that advice (advice I gave for years) is that we have the sense that they’re overthinking things, that, when they sit down to write, they’re thinking about failure, and success, and shame, and all the things that might go wrong, and all the ways what they’re writing might be inadequate, and all the negative reactions they might get for what they’ve written. So, we say, “Just write,” meaning, “Don’t think about those things right now.”

The project of writing may seem overwhelming because it is existentially risky, and the fear created by all the anxiety and uncertainty is paralyzing. It can seem impossibly complicated, and so we give simple advice because we believe that persuading them to adopt a simpler view of the task ahead will enable them to write something. Once they’ve written something, once they’re unstuck, then they can write something more, and then revise, and then write more. Seeing that they have written will give them the confidence they need to keep writing.

And I think that advice often works, hence the (deserved) success of books like Writing Your Dissertation in Fifteen Minutes a Day or Destination Dissertation. They simplify the task initially, and present the tasks involved in ways that are more precise than accurate, but with the admirable goal of keeping people moving. Many people find those books useful, and that’s great. But many don’t, and I think the unhappy consequence of the “you just have to do this” rhetoric is an odd shaming of the people for whom that advice doesn’t work. While it’s great that it works for a lot of people, there are a lot for whom it doesn’t, and I’m not happy that they feel shamed.

These books have, as Barbara Kamler and Pat Thomson have argued, characteristics typical of the self-help genre (“The Failure of Dissertation Advice Books”), especially in that they present dissertation writing as “a series of linear steps” with “hidden rules” that the author reveals. While I am not as critical of those books, or of the genre of self-help, as Kamler and Thomson are, I think their basic point is worth taking seriously: this advice misleads students because it presents dissertation writing as a set of practices and habits rather than as cognitive challenges and developments.

Academic writing is hard because it’s hard. Learning to master the postures, steps, and dances of developing a plausible research question, identifying and mastering appropriate sources, determining necessary kinds of support, managing a potentially sprawling project, and positioning a new or even controversial claim in an existing scholarly conversation—all of that is hard and requires cognitive changes, not just writing practices.

Telling people academic writing “just” requires anything (“just write,” “just write every day,” “just ignore your fears”) is a polite and sometimes useful fiction. And self-help books’ reliance on simple steps and hidden rules is, I’d suggest, not necessarily manipulative, but based in the sense that telling people something hard is actually hard can discourage them. If you lie, and thereby motivate them to try doing it, then they might realize that, while hard, it isn’t impossible.

I think the implicit analogy is to something like telling a person who needs to exercise that they should “just get up off the couch.” Telling people that improving their health will be a long and slow process with many setbacks is unlikely to motivate someone to start the process; it makes the goal seem impossible, and unrewarding. Telling someone that getting healthier is simple, and they “just” need to increase their exercise slightly, or reduce portion size slightly, or do one thing differently will at least get them started. Having gotten a little healthier might inspire them to do more, but, even if it doesn’t, they are getting a little better.

But that’s the wrong analogy.

A scholar who is having difficulty writing is not analogous to someone who needs to get up off the couch: this is a person with a long record of successes as a writer. That is what we (and people who are stuck) so often lose track of when we give the “just write” advice. They are not a person sitting on a couch; they are someone with an exercise practice that has always worked for them in the past and isn’t working now.

The better analogy, I would suggest, is a sprinter who is now trying to run a marathon. Sprinting has worked for them in the past, and many academics have a writing process that is akin to sprinting—chunks of time in which we do nothing but write, and try to get as much done as quickly as we can. Writing a dissertation or book, on the other hand, is more like running a marathon.

It would be unethical to tell a sprinter who is unable to run a marathon that she should “just run.” She has been running; she’s quite good at it. But the way that she has been running is not working for this new distance. And if she does try to run a marathon the way she has always run short races, she will hurt herself.

My intuition is that people who have trouble writing are people who have always used the sprinting method, and have simply managed to develop the motivational strategies to sprint for longer, or to collapse from time to time during the race and pick themselves up. Often, it seems to me, that motivation relies on panic and negative self-talk—they manage to binge write because otherwise, they tell themselves, they are failures.

So I’m not saying that “Just write” is always bad advice. I am saying that it sometimes is; it is sometimes something that can send people into shame spirals. It works only for some people, the people who do find that polite fiction motivating. For others, though, telling them to “just write” is exactly like telling a person in a panic attack to “just calm down” or someone depressed to “just cheer up.”

The “just write” comes from a concern that lack of confidence will paralyze a student. But I think we might be solving the wrong problem.

Part of the problem is the myth of positive thinking, which has taken on an almost magical quality for some people. There is a notion that you should only think positive thoughts, as though thinking negative things brings on bad events. Since thinking clearly about how hard it is to write a book, dissertation, or grant (and, specifically, thinking clearly about how we might have habits or processes that inhibit our success) is thinking about “bad” things, about how things might go wrong or what troubles we might have, the myth of positive thinking says you shouldn’t do it. You should, instead, just imagine success.

This is a myth. It isn’t just a myth, but pernicious, destructive nonsense. A (sometimes secular) descendant of the prosperity gospel elegantly described by Bowler in Blessed, it is magical thinking pure and simple, and perfectly contrary to what research shows about how positive thinking actually affects motivation.

But here I should be clear. Some people who advocate wishful thinking do so because they believe that the only other possibility is wallowing in self-loathing and a sense that the task is impossible, and they believe that telling students that academic writing is hard will necessarily lead to their believing it is impossible. In other words, there is an assumption of a binary between thinking only and entirely about positive outcomes and thinking only and entirely about tragic outcomes: the former is empowering and the latter is paralyzing. That narrative is wrong on all three counts—positive thinking is not necessarily enabling, moments of despair are not necessarily disabling, and our attitude toward our own challenges is not usefully described as a binary between pure optimism and pure despair. Left out of that binary is being hopefully strategic: aware of possible failures, mindful of hurdles, with confidence in our resilience as much as in our talents.

As to the first, studies clearly show that refusing to think negative thoughts about possible outcomes is actively harmful, and frequently impairs achievement. That’s important to remember: telling students they shouldn’t think about their own flaws, the challenges ahead of them, and how things might go wrong is not helping them, and it is making it less likely they will do what they need to do.

Gabriele Oettingen’s considerable research (summarized in the very helpful book Rethinking Positive Thinking) shows that, while wishful thinking can be useful for maintaining hope in a bad situation or identifying long-term goals, it inhibits action. Fantasizing about how wonderful a dissertation or book will be doesn’t inspire us to write either one; for many people, it makes the actual, sometimes gritty work so much less attractive in comparison that it’s impossible to write. The fantasy is far more fun than writing a crummy first draft. Similarly, Carol Dweck’s research on mindsets shows that success depends on acknowledging what has gone wrong and identifying how one might grow and change to get a different outcome in the future.

A sense that the task is so hard as to be impossible is not inevitably and necessarily disabling. It is, however, inevitable. It is dishonest to tell students that we never feel that what we’re trying to do can’t be done or isn’t worth doing, because so many of us do. And most of us got (and get) through it. Sometimes it took time, therapy, medication, changing things in our personal lives, changing jobs, changing projects, all of the above. But I don’t know any productive scholar free from times of slogging through the slough of despond.

In my experience, academic writing gets easier, but it’s never easy. The hardest writing is probably finishing a dissertation while writing job materials—nothing after that is so hard. But it’s always hard. If we tell students that it’s easy, or that it gets easy, even if we do so with the intention of keeping them moving, we do them a disservice. If they believe us, if they believe that we find it easy, then, when it gets hard, as it necessarily will, they have to conclude that there is something wrong with them. They are unhappily likely to conclude that they have been exposed for the imposter they always worried they were.

The “just write” advice almost certainly works for some people in some situations, as does the “just write every day” or “just freewrite” or “just start with your thesis” or any of the other practices and rules that begin with “just.” They work for someone somewhere and maybe they work for everyone some of the time, and they always strike me as sensible enough to suggest that people experiment with them. But we shouldn’t pretend that they’re magical and can’t possibly fail, or that someone “just” needs to do them. The perhaps well-intentioned fiction that academic writing “just” requires certain practices is magical thinking, and we need to stop saying it.

In my experience, people who find the “just write” advice useless find it too abstract. So, I think we need to be clear that scholarly productivity is, for most people, hard, and it’s fine that a person finds it hard. And it takes practice, so there are some things a person might “just write”:

    • the methods section;
    • descriptions of an incident, moment in a text, interaction, or some other very, very specific epitome of their problem (Pirsig’s brick in the wall of the opera house);
    • summaries of their secondary materials with a discussion of how each text is and is not sufficient for their research;
    • a collection of data;
    • the threads from one datum to another;
    • a letter to their favorite undergrad teacher about their current research;
    • a description of their anxieties about their project;
    • an imitation of an introduction, abstract, conclusion, or transition paragraph they like, written by a junior scholar.

I’m not presenting that list as a magical solution. It would be odd for me to say that simplistic advice is not helpful and then give a list of the five (or seven, or ten) things we “just” have to do to become (or teach others to become) skilled and productive academic writers. What we have to do is acknowledge that the project requires significant and complicated cognitive changes: that, for most of us, scholarly writing is hard because it’s hard. Let’s be honest about that.

Arguments from identity and the easy demagoguery of everyday commenting

I recently had a piece published on Salon (http://www.salon.com/2017/06/10/demagoguery-vs-democracy-how-us-vs-them-can-lead-to-state-led-violence/), and it was thrilling. And the comments quickly skeeved off in the direction of whether “liberals” or “republicans” are better people. That was frustrating.

My argument about demagoguery has several parts:

    1. demagoguery shifts the stasis (as rhetoricians say) from policy arguments to identity arguments, relying on the assumption that all that matters is whether advocates/critics of a policy are ingroup or outgroup.
    2. therefore, in a culture of demagoguery all arguments about policy end up relying on two points: which group is better, and what group an advocate is in—in other words, it’s all identity politics.
    3. so, all arguments end up being deductive arguments from identity.
    4. this part is barely mentioned in either book I’ve done on the issue, but that reasoning from identity is done by homogenizing the outgroup, so that if a person seems to be a member of a group, you can attribute to them everything any other member of that group has said or done.

There are other characteristics, but these are the ones that seemed especially important in the comment section on the article.

And here I have to go back to some really old work, and say that I think we remain muddled on how public discourse operates—we flop around among models of expression, deliberation, and purchasing.

Lay theories of public deliberation aren’t expected to be entirely consistent—as social psychologists have noted, we all toggle between naïve realism and skepticism in our everyday lives. But I think there are important consequences of our failing to realize that we flop around among various models of arguing and various models of knowing.

There is a basic premise: major policy decisions shouldn’t be made on the basis of some kind of model of us versus them when we’re talking about a culture that includes us and them. The idea that only one group is entitled to determine policy isn’t democratic, sensible, or Christian.

If we want a thriving community (or nation state or world or even club) then we want enough disagreement that we can prevent the problems associated with what is often called groupthink—when a bunch of like-minded and ingroup people agree that what they think and who they are is, obviously, the best.

It’s clearly demonstrated that people have trouble admitting error, and therefore, if we want to make good decisions, we need people who will tell us we’re wrong. Good decisions rely on people contributing from various perspectives—not just people like us.

That’s the deliberative model of public argument: the point of Congress and state legislatures is to consider various points of view and the impacts on all communities, and then come to a decision. If we look at public decision-making from that perspective, then we would ensure that there is diverse representation in deliberative assemblies, such as the state legislature or Congress. (The notion that the best decisions involve various perspectives is a given in successful business decision-making models.)

There is another model: the expressive model. For many people, there is no such thing as persuasion, and public discourse is all about people expressing their opinions (usually their statements of commitment to their group). Public discourse isn’t about deliberation or communal reasoning—it’s a bunch of people shouting in a stadium, and the group that has the people who shout the loudest wins. You don’t go into that stadium intending to listen carefully to what other people are shouting in order to come to a new understanding of your own views: you come to shout down the others.

I can’t think of a time when this model of public discourse led to a community coming to a good decision.

The third model is that ideas/policies are products sold just like shampoo. The hope is that the market is rational, and so if a particular shampoo sells the best, it is the best product. This is a problematic model in many ways, not the least of which is that it’s circular: the market is assumed to be rational because it represents what people value, and it’s assumed that people’s values are rational. This is an almost religious belief in that it can’t be supported empirically, and has often been falsified (bubbles). The problem with the market model is three-fold: first, people buy products on the basis of short-term benefits and inadequate information, whereas policy decisions should be made in light of long-term consequences; second, it makes voters passive, able only to whinge about a candidate not being adequately sold (instead of seeing it as our responsibility to inform ourselves about candidates); finally, if I buy the wrong shampoo, my hair falls out, but if I buy the wrong candidate, my community is harmed.

The activity of the market always represents short-term choices, and assessments of “marketability” tend to be about short-term gains. Unless you have a circular argument (the market choice is rational because the market choice is defined as rational—which a surprising number of people on this issue assume), the market does not represent the long-term best interest of the people (think bubbles). In addition, the market, by definition, cannot represent the values of those without the resources to participate (future generations, for instance). The market is always the tragedy of the commons.

(You never get a defense of the inherent rationality of the market that isn’t logically circular, doesn’t assume the just world hypothesis, or doesn’t appeal to prosperity gospel.)

While I believe that the deliberative model is best for community decision-making, I think a healthy public sphere has places where each of these models is practiced. It’s fine if someone’s Facebook page (or Twitter feed) is entirely expressive. But, on the whole, there should be a place where people try to deliberate with one another, or, at least, acknowledge in the abstract that the inclusion of people with whom they disagree is valuable. The problem is that people are spending all of their time in expressive public spheres, and making decisions on the basis of group identity.

I was definitely one of the people who thought that the digitally-connected world would be the Habermasian public sphere, and that isn’t how it played out. I think there were moments (in the 80s) when it seemed to be something like what Habermas described—a realm in which argument and not identity mattered. But, what became clear is that identity does matter.

And so here is what I came to believe: in good arguments there are a lot of data. And identity is a datum. But that’s all it is. It isn’t a premise: it’s a datum.

[As an aside, I have to say that sometimes I think that public deliberation could be wonderful were we to understand five points: 1) a premise and datum are not the same thing; 2) don’t put always or never or necessarily into someone else’s argument; 3) treat others as you want to be treated; 4) there isn’t a binary between certainty and sloppy relativism; 5) a claim can be false and/or illogical even if the evidence for the claim is true.]

But, what happens in a lot of public discourse is that people assume that you can deduce the goodness of an argument from the goodness of the person making the argument, and you can make that determination on the basis of cues. That is, if a person says something that, for you, cues that they are a member of a particular group, you can assume that they believe all the things you think members of that group believe. If that particular group is one you share, then you’ll attribute all sorts of wonderful qualities and beliefs to them; if it’s an outgroup for you, then you’ll attribute all sorts of stupid beliefs, bad motives, and bad behavior to them.

That last point is simultaneously simple and complicated. We tend to homogenize the outgroup, and so if an outgroup member says that squirrels are awesome, and another outgroup member says that little dogs are the best, we’ll assume the second person thinks squirrels are awesome. People who are particularly drawn to thinking in terms of us versus them will take mere criticism of the ingroup as sufficient proof that the critic is a member of the outgroup, and will then attribute to that person all the things that are supposed to be true of outgroup members.

This is deductive reasoning—inferring the beliefs of individuals from our assumptions about the groups to which they belong. It’s pervasive in toxic publics.

And, no, it isn’t particular to any one “side” of the political spectrum. But, the fact that that question even comes up—who does this more?—is a sign of how uselessly committed to group loyalty our political world has become.

Democracy presumes that there is no single person, or single group, that knows all that is necessary to make good policy decisions. And that means that, while it isn’t necessary that people in a democracy believe that all views are equally valid (or even that all views are valid), it is necessary that we believe that we have something to learn from people with whom we disagree—we cannot delegitimate everyone who disagrees with us and continue to claim that we believe in democracy. (For me, this tendency to dismiss every other point of view as corrupt, servile, or in other ways illegitimate is especially troubling in people who self-identify as democratic socialists—c’mon, folks, it isn’t democratic if it’s a one-party system.) The tendency to insist that only one point of view is legitimate is profoundly anti-democratic—it assumes that the ideal situation is a one-party system. And that’s authoritarianism. And it has never ended well.

Comey’s testimony and identity politics

Comey, being a careful person, documented his deeply problematic meetings with Trump in the moment, and he’s released a statement with all anyone needs to know—Trump used his power to fire Comey in order to try to coerce him into closing down an investigation.

But that isn’t how it will play out in the hearing tomorrow.

For many years now (at least since the rise of Fox News), the GOP Political Correctness Machine has so consistently engaged in projection that you can tell the weakest point of a GOP candidate by noticing what accusations the Fox media (and other water carriers, as Limbaugh called himself) make about their opponents (think about their attack on Kerry for his war record).

For years, they’ve been flinging the accusation of political correctness at their opposition, and it’s a great example of projection.

Originally, the term came from the way that the Stalinist propaganda machine would decide what was the correct line to take on some event: Nazis are evil, Nazis are okay, Nazis are evil. To be politically correct meant that you were in line with what the higher-ups said was the right line to take on a political issue. And it was even better if you could pivot quickly.

To be politically correct means that you don’t have principles that operate across groups (adultery is bad whether the adulterer is GOP, Libertarian, Dem, or Green), but that you know what your beliefs are supposed to be. And the GOP is all about political correctness in that sense—that’s why they accuse others of it so often. Michelle Obama dishonored the office of First Lady by wearing a sleeveless dress—that was presented as a principle. But that they hadn’t objected to Nancy Reagan’s sleeveless dresses, nor to the current First Lady’s problematic sartorial choices of long ago, shows it was never about the principle. They pivoted to condemn Obama and then pivoted again not to condemn Trump.

So, what will be the politically correct thing to say about Comey?

While large numbers of people across the political spectrum make policy judgments on the basis of their perceptions of identity (if “good” people support a policy, it must be a “good” policy), loyalty to the group is more a value among people who self-identify as conservative (see Jonathan Haidt’s The Righteous Mind). Authoritarians also tend to reason from ingroup membership, and authoritarians are more likely to self-identify as conservative (Hetherington and Weiler’s Authoritarianism and Polarization in American Politics has a good summary of the research on this; so does John Jost’s work in political psychology).

In other words, the GOP Political Correctness Machine has also been engaged in projection in making it one of the politically correct things to say that lefties engage in identity politics. They’re all about identity politics.

So, what we can expect is that the politically correct Congresscritters will attack Comey’s identity. They’ll dodge any of his claims of what happened in favor of questions that enable them to present him as a bad person, especially as one disloyal to GOP values.

Of course the head of the FBI should not be a loyal Republican. The very same people who will condemn him for that disloyalty would fling themselves around in outrage were a Congress with a Dem majority and/or President to insist that he be loyal to Dems.

So, let’s be clear: this isn’t about a principle that operates across groups. This is purely and simply about factional politics. This is about loyalty only being a value when it’s a loyalty to their group.

It will be identity politics.

Charismatic leadership and this last week

I am hopeful about the last week, and that might seem odd.

Train wrecks in public deliberation happen when political issues become factional ones, so that decisions are weighed entirely in terms of whether the ingroup or outgroup wins. And decisions that hurt the outgroup, even if they hurt the ingroup more, are seen as wins. In those moments, it’s common for some narcissist to arise and become the object of a charismatic leadership relationship.

Under those circumstances, the leader’s claims about policies are irrelevant—all that matters are his (almost always) performances of decisive leadership. It doesn’t matter whether his decisions turn out to be right—what matters is that they were decisive. In these circumstances, rejecting expert advice, refusing to take time to come to a decision, refusing to listen to anyone who disagrees, turning away from disconfirming evidence—all those things contribute to the sense that the leader is decisive, and therefore good.

Of course, as far as actual evidence goes, that kind of leadership is a disaster. (https://hbr.org/2012/11/the-dark-side-of-charisma) Stalin, Hitler, Pol Pot, Mussolini—all got their power from charismatic leadership. Not all people who draw power from charismatic leadership are disasters for their countries (or regions), but everyone who only draws power from charismatic leadership is.

Here’s the difference. Every effective leader in the media-dominated world must be charismatic. But having charisma and drawing power exclusively from charismatic leadership are not points on a continuum—they are orthogonal (despite what writing on leadership says). A charismatic leader is one whose power comes entirely from his (again, almost always his) presenting himself as supernaturally wise and powerful and therefore above the normal standards of fairness, consistency, or reason. The charismatic leader being inconsistent, being unable to give reasons, violating all promises—all those things increase his power.

So, how do you know if a charismatic leader is following bad policies? You don’t. You can’t. That he appears to be following risky and unwise policies enhances his position as a charismatic leader, and calls on you to demonstrate your commitment to him by continuing to believe him despite his engaging in policies that all the experts say are wrong, that contradict what he said he’d do, and that might seem ill-considered. You must like that he is playing from the gut.

Once someone has entered into the charismatic leadership relationship, there is no way to admit that he is a shitty leader without admitting that you’re a shitty judge of character. Charismatic leadership is inherently toxic in that it connects the followers’ sense of self-worth to the possibly arbitrary policy agenda of the person they have decided really represents them.

In really nasty situations—ones in which demagoguery has become the norm for political discourse—all the ambitious political figures try to enact charismatic leadership. Anyone who doesn’t is seen as not “Presidential” (see the media coverage of the 2016 Presidential election). One problem we have to admit is that the dominant media love themselves some charismatic leadership—it’s great for ratings.

Communities in which charismatic leadership is the dominant relationship between voters and a leader don’t generally end well. They usually end up in an unnecessary war (the Sicilian Expedition, the Napoleonic wars, or WWII) if there is a single leader who is marshaling all the available energy. If it’s a situation with a lot of rhetors drawing power from the charismatic leadership relationship, you might have tremendous cultural commitment to an obviously unwise policy (the US commitment to slavery and, later, segregation, or current homophobic policies).

Trump is in the former category. He is not a person to give up power, and he doesn’t play well with others (and that’s what his base likes about him). He has already shown that he will enact policies that harm his base, and they have shown they don’t care. This isn’t about some kind of rational commitment on their part to his policy agenda. You can tell that because, if you ask them, they say, BUT THE DEMS DID SOMETHING. This isn’t about policies—this is about being on the winning side.

That’s an interestingly irrational argument. Let’s say that the question is whether Trump fired Comey because Comey was pursuing Trump’s reliance on Russia’s having interfered with the election. True believers will say, THIS DEM FIRED SOMEONE. That’s completely irrelevant. It doesn’t matter if Clinton or Obama engaged in human sacrifice at every full moon and therefore fired someone. For Trump true believers, the question (every question) is an opportunity to prove that Trump is better than others, and so any bad (even if irrelevant) action on the part of THEM is proof that he is good.

And so they don’t see that it doesn’t answer the question at all. Clinton might have kicked puppies and fired someone, and that’s actually irrelevant to whether Trump fired Comey because Comey was going to expose Trump’s reliance on Russia’s having interfered with the election. Both could be true.

In a charismatic leadership relationship, the followers don’t care if their leader did something bad; they only care whether (in some weird calculus in their minds) their leader can be positioned as better than the other.

And that’s how charismatic leaders screw over their followers. And they always do. Trump has done it faster than most, and his followers have shown themselves to be the most charismatic followers ever since they haven’t balked. He said he would release his returns, and didn’t. He said he would jail Hillary, and he didn’t. He said Obama’s birth certificate was an issue, and then he said it wasn’t. He said he would rid the government of lobbyists, and he filled his administration with them. He said he would end corruption, and he and his family are explicitly using his position to profit them more. He said he didn’t fire Comey because of Russia, and he said he did.

And his base stands by him.

They were thrown under the bus long ago, and there is no circumstance under which they will admit that. And you know that because those of you with them in your FB feed know that they don’t even try to defend him. They say, BUT THE DEMS….

And they can’t defend what he’s done in terms of what he said he’d do, or what they said he’d do. They can only say his team is better than that team.

So, how do we get out of it?

Unhappily, one way is war. The charismatic leader (again, not the leader who is charismatic, but the leader whose power comes entirely from the charismatic relationship) leads people into a stupid war (and, given enough power, they always do) and it’s a disaster (because leaders who depend entirely on charismatic leadership are disastrous in war). The war is a disaster, and many people (not all) decide that was bad. Unfortunately, they generally either say the leader wasn’t wrong, or they pretend they never supported the guy in the first place.

Another is that the leader is representing a minority group, and gets shut down by the legal or traditional authorities (the other two sources of power that Weber identified). That’s what happened with the various rhetors who tried to play charismatic leadership on behalf of white supremacy in the US South. The Supreme Court shut them down.

We have a Supreme Court that has a majority that is fine with authoritarianism, so that will not play well for democracy.

One of the premises of charismatic leadership is that the normal rules of fairness don’t apply. The narrative is that the ingroup has been SO victimized by all these fairness rules, or innovation has been SO hampered by all these rules about how to treat labor, about not destroying the environment, about not scamming investors, and by the “political correctness” that means you can’t just buy your way into the policies you want! The whole notion that balancing innovation and fairness might involve complicated compromises can be rejected in favor of all the decisions being thrown into the lap of the charismatic leader, whose judgment will instantly solve the dilemma. In other words, decisions are complicated—but a good leader sees the instantly obvious answer. (Notice that Trump has backtracked even on this, without any fallout from his base.)

Short of a disastrous war that shows the leader wrong (and even that doesn’t always work), the best way to undercut charismatic leadership is for ingroup rhetors to condemn the leader on procedural or policy grounds. That is, while outgroup rhetors should condemn the leader, it won’t work for too long because one of the first acts of the charismatic leader is to shut down or marginalize that criticism—ingroup members never hear it.

Trump has Fox and the GOP Noise Machine on his side. And the major argument that those sources make is that their listeners shouldn’t listen to any potentially disconfirming information. And you know how well it works. You all have friends or family members who repeat talking points from biased sources but who won’t look at anything that might disagree with them on the grounds that those sources must be biased.

They don’t care about biased sources—that’s all they listen to, read, or watch. They like biased sources.

They only object to sources that might complicate their biases. That’s important to note, as people can start to flip when they realize that they’re being suckered. And Fox suckers them. If Fox really were telling them the truth, then there’d be no harm in looking at other information. The more that Fox tries to tell people not to look at other sources, the more it’s acknowledging its version of the “truth” can’t withstand actual analysis of evidence.

Fox isn’t conservative. It has no coherent political philosophy other than being GOP (which, like the Dems, has flopped all over the place on policy). There are conservatives, and they’re beginning to fall away from Trump, and that gives me hope.

Oddly enough, what also gives me hope is that Trump is overplaying his hand. What sometimes undoes a charismatic leader is that his own belief in himself means that he doesn’t really believe he can permanently alienate any group, and so he just does whatever he wants thinking he can charm or bluff people back into his entourage regardless of his having screwed them over.

So, there are two hopeful signs in the last week. First, Trump’s problem is that many of the people on whom he relies hate him, and feel used by him. (https://www.nytimes.com/2017/05/12/us/politics/trump-sean-spicer-sarah-huckabee-sanders.html?smid=pl-share) And that number seems to be increasing. Trump, for all his ability to generate extraordinary public loyalty, doesn’t seem to have much ability to generate personal loyalty, and he never has.

That makes the actively bizarre relationship between him and his First Lady interesting. There has never been a First Lady who has signalled so much animosity toward her husband, and he has only one daughter who can manage to show public affection to him. That matters not because it shows he’s a bad person or blah blah blah. It matters because it shows that Trump, who will throw anyone under the bus, has managed to gather around him people with his same ethics. That’s a good thing for democracy. When the time comes that it looks as though spilling the beans on him is a good choice, there will be many people willing to do it.

The second hopeful sign is that outlets like The Economist, Forbes, and Wall Street Journal are publishing scathing articles about his incompetence. Neoliberal free-market fetishists will put up with anything other than random incompetence. (They’ll even tolerate strategic incompetence, such as the Bush Administration.)

But here is one more unhappy point. What does in people like Trump is overreach. And so, at best, we have months of his continuing to behave badly, the GOP Propaganda Machine spinning it as fine, and most of the GOP political figures selling their souls to Trump penny by penny. And they will try to consolidate their power (as every authoritarian government does) through voter suppression.

So, this is all about 2018, and every reasonable person voting against any figure who has supported Trump.

Blue lies matter

There is an odd moment in the description of the dinner that fired-FBI Director James Comey and Donald Trump had at the White House in January: “As they ate, the president and Mr. Comey made small talk about the election and the crowd sizes at Mr. Trump’s rallies.” (https://www.nytimes.com/2017/05/11/us/politics/trump-comey-firing.html?_r=0)

Or, in other words, Trump wanted Comey to talk about how wonderful and popular Trump is. And I want to know which rallies.

That last point matters because some of the rallies weren’t all that well-attended, including the most famous: the inauguration. Did they talk about the crowd at the inauguration? Trump has had a lot of trouble letting go of his lie about the crowd size, and he was, by all accounts, testing the loyalty of Comey that evening. Comey apparently thinks he failed the loyalty test because he wouldn’t explicitly pledge his loyalty to Trump, but I think the explicit request for a loyalty pledge came about because Comey had already failed the first loyalty test. And it’s a test most GOP political figures and all of his supporters are passing with ease, and that should worry us: it’s whether they will take Trump’s lies, and make them what are called “blue lies.”

“Blue lies” is the term some social psychologists use for what they call “pro-social lies”—that is, lies that help maintain a flattering narrative or sense of identity about the ingroup. They’re the group equivalent of “white” lies (“Of course you don’t look fat in that dress!”). And, like a lot of “white” lies, they can be inconsequential—we might decide to tell a person she gave a great speech even if she didn’t, simply because the speech is over and there’s nothing she can do about it anyway. Or we might tell a friend that the ex who dumped him was a total jerk anyway, and a complete fool, and our friend is completely in the right. A “blue” lie is a kid on a team saying that they lost only because the ref was out to get them, or that they actually played really well, or it’s members of a choir telling one another they did a great job even though no one got within a yard of the same key.

The inauguration had the best attendance of any inauguration; Trump didn’t (and did) fire Comey over the Russian investigation; Comey promised Trump three times he wasn’t under investigation; there were huge numbers of votes illegally cast by non-citizens; Trump hasn’t had (and has had) financial dealings with Russia—all of those are being handled as blue lies by politicians and media figures who propagate GOP talking points. And that’s troubling, because it means that lies that function almost exclusively to satisfy Trump’s ego are being given the powerful social force typically given to blue lies.

Social psychologists call these lies “pro-social” because, supposedly, they benefit the social group. But, as is clear from the white and blue lies mentioned above, that isn’t necessarily their consequence. We don’t necessarily tell a white lie because we don’t want to hurt someone—sometimes the lie will hurt them a lot in the long run, and we know it—but because we don’t want to hurt their ego right now, largely because we don’t want the conflict or drama that might ensue.

For instance, if a dress really is unflattering, and the person has a chance to change it, then the kind thing to do is to tell them—ideally, in an affirming way. If the person is going to give the speech again, or might need to give other speeches, then it might be helpful for someone to pass along some constructive criticism. If our friend keeps getting dumped because he’s doing something toxic or destructive in a relationship, such as always feeling like a victim, then lying about the situation and encouraging him to feel even more victimized is not helping. That isn’t to say that people have to tell the truth right here and now, or that everyone has to. The most helpful strategy might be to be comforting in the moment and to have a more honest conversation later. But it is to say that white lies prevent deliberation about an incident. It might be fine to prevent deliberation at that moment because it will happen elsewhere, or it might be that deliberation isn’t really necessary (the dress is a bridesmaid dress your friend must wear, and there’s no way to make it more flattering).

A parent might lie to a child about how well a game went, knowing that the coach will be more honest, and team members might similarly lie to one another without any particular harm for similar reasons. But, if there is no one to tell the truth, and if the lying will ensure that the friend will continue to get dumped, the team will continue to lose, the person will continue to make bad speeches that are bad in the same way, then the lies are harmful. If all or most of our information about something is blue or white lies, then we can’t deliberate effectively enough to make different choices in the future.

One of the characteristics I noticed in train wrecks in public deliberation was the prevalence of blue lies. It seems to me that these lies functioned in three ways (sometimes all three at once).

First, and most obviously, the lies that people told and shared helped them feel better about their group, often by reconciling some kind of cognitive dissonance, rationalizing a poor choice in the past, or excusing a decision to which they were already committed (e.g., the Civil War was not about slavery; Germany lost WWI because of a Jewish stab in the back when it was just about to win). And, after a while, people forget that these are group-affirming polite fictions, and pay attention only to their power to affirm the group.

Second, these lies came to constitute group identity, so that being willing to commit to them in public came to serve as a signal of group identity and loyalty. You show that you are a true Chesterian by insisting that bunnies are never fluffy. If you reject that belief, then your identity as a Chesterian is suspect—these lies are constitutive of group identity.

In the antebellum slave states, you weren’t a “Southerner” unless you supported slavery (which explains the bizarre usage still sometimes in action, when people use “Southerner” and “supporter of slavery” synonymously, as though the millions of people living in the South who objected to slavery didn’t exist). For the purpose of showing ingroup membership and loyalty, it’s actively helpful for the statements to be obviously untrue or easily falsifiable. For instance, in the antebellum era, one blue lie was that slaveholders didn’t rape slaves—that was not just false, but obviously so, and yet it was a falsehood supported through threats of violence; you simply did not mention it. Now, it’s a point of loyalty in some circles to insist on the blue lie that the Civil War was not about slavery. That’s an easily falsified claim (simply looking at pro-secession rhetoric or the declarations of causes shows that the CSA repeatedly identified their main motive as preserving slavery), and I have often found that people who make the statement refuse to look at the pro-secession rhetoric. Their insistence on the “true” causes isn’t something they’re willing to reconsider, and they know they’d have to if they looked at the evidence. They are more concerned with demonstrating loyalty to their group than with thinking about whether the group might have screwed up.

And that brings up the third function of these lies. As time goes on, people often forget that the blue lies were lies (although, as mentioned above about the pro-secession rhetoric, their aversion to looking at possibly disconfirming evidence suggests to me that they know it deep in their heart of hearts). The ability of the ingroup to get its lies to become the truth for a larger group becomes an important demonstration of power. It is pleasurable simply because it is simultaneously a demonstration of power and an effective threat. “The Civil War was not about slavery” was one of those lies, told initially by people who had, until after they lost, insisted it was about slavery; their ability to get that lie into the official histories of the event showed their power. Kenneth Greenberg (Honor and Slavery) tells an amazing story of a slaveholder who knowingly falsely accused a slave of having stolen something. He whipped the slave till the slave confessed. Then he whipped the slave back into denial, and back into confession. It never had anything to do with the theft—it had to do with the slaveholder’s demonstration that he controlled what could and couldn’t be said. Like the villain O’Brien forcing Winston Smith into saying that two plus two is five, this ability to force others into acquiescing in a blue lie is a consequence and demonstration of power.

(For some people, and this is an important point: it is the pleasure in having power.)

For the first function—making a group feel better about their past poor decisions or mistakes—the content of the lie matters, but it doesn’t for the other two. The lies don’t have to be useful lies, or, more accurately, may be most useful when the content of them is pretty nearly arbitrary. In fact, they function better as demonstrations of loyalty and power when the lies flip back and forth.

Of course, under those circumstances, they don’t function at all as useful bases for policy decisions. For instance, one of the blue lies during the buildup to the Iraq invasion was that the invasion was supported by the majority of the world’s powers, and another was that only the US and UK had the balls to take on the invasion. Both of those were blue lies insofar as they were pro-Bush Administration and pro-GOP, and they were put forward by the same people, and they prevented even an intra-GOP debate over the need for and solvency of the invasion plan. If everyone agreed we were justified, then we didn’t need to worry about whether the invasion would further alienate various Middle Eastern countries (or countries in general). We didn’t need to have a foreign policy oriented toward regaining goodwill. If, however, we were relatively isolated in our sense that the war was justified and necessary, then regaining goodwill was crucial to be able to benefit from even a successful deposing of Saddam Hussein. Those two different lies implied two different policy directions. Since the pro-invasion rhetors wouldn’t consistently hold to one or the other, there was no possibility of developing a plan that would respond to either contingency.

Similarly, it was common for proslavery rhetors to insist (sometimes in the same document) both that slavery was eternal and that slavery would die out on its own. Both of those were dicta in the proslavery statement of creed, and each implied different long-term policies for slave states. And neither could be debated, and therefore there couldn’t be a plan that would manage either contingency.

Thus, blue lies prohibit deliberation, and that’s probably why they’re associated with train wrecks. Blue lies rationalize precisely the decisions that got communities into bad situations in the first place (slaves love slavery! segregation is required by Christianity! everyone looks on the US as a liberator!). In the case of the contradictory blue lies (slaves love slavery, slaves are always about to engage in race war), they prevent a community from looking carefully at those contradictory premises, and so they enable the community to recommit to a bad policy (e.g., the war on drugs). The blue lie that we could have won in Vietnam if the liberal press hadn’t weakened our will was promoted especially by the same group that agitated for invading Iraq—because they believed the US could have succeeded, the most important disconfirming example for their policy was simply renarrated.

So, blue lies increase ingroup loyalty, and they enable ingroup ideological policing, and they tank deliberation. That’s bad enough. But what’s happening with Trump’s lies is even worse than that. It’s the way that Trump’s lies are becoming blue lies for the GOP and its propagandists.

The blue lies mentioned above made a large group feel better. The lie that we were about to win Vietnam, for instance, is often a sincere gesture to avoid dishonoring those who died or were severely wounded in the conflict; it functions to remove a stain from America and America’s military. The blue lies about slaves loving slavery functioned to make the entire class of supporters of slavery feel better about themselves and to demonstrate their informational power. That Germany could have won were it not for the Jewish press was comforting for the large number of Germans who felt shamed by Germany’s loss in WWI.

Trump’s lies don’t help a group. They are entirely about his ego, his achievements, and his ability to whip people to confession and denial and back. They are tests of his power over others, and their willingness to submit to whatever he wants to say at the moment. Loyalty to him is loyalty to the lies he tells himself. They don’t benefit others, except to the extent that those others see themselves as entirely dependent and submissive to him and his truth.

Trump’s lies demonstrate his ability to get anyone, even the GOP and media outlets that previously condemned him, to change their version of events at his whim. And it’s working. Republicans continue to support him, despite his having broken so many promises that he has resorted to scrubbing away evidence he ever made them.

I don’t know whether he’s conscious of that, and I don’t care. What matters is that the lies that have become blue lies for the GOP and major media are lies that function primarily (perhaps only) to make him feel better about himself, to get others to demonstrate loyalty to him, and to demonstrate his own power.

What matters is that, for whatever reason, the GOP and its propagandists have stopped flirting with authoritarianism. This is authoritarianism.