A few years ago, I was talking to one of those young people raised to believe that Reagan was pretty nearly God, and he said to me, “At first I was mad when people said Reagan has Alzheimer’s but then I decided that it didn’t matter.”
I thought that was interesting. He wasn’t mad because it was false; he was mad because it seemed mean. He didn’t change his mind about it because he went from thinking it was false to thinking it was true; he changed his mind because he found a way to explain why it didn’t matter.
This was in 2005 or so, but it perfectly reproduced my experience of arguing with Reagan supporters in 1980. Reagan said a lot of things about himself and his record that were untrue. He might have sincerely believed them or not–I think he did–but if you pointed out he was saying things that were untrue, his fans said you were mean. He declared his candidacy literally on the site of one of the most appalling pro-segregation murders of the 1960s, and said he was in favor of states’ rights, and his supporters were apoplectic if you said he was appealing to racism. “He isn’t racist,” they’d say. “He’s a good man.”
If you tried to point out that the economic model on which he was going to base US policy was thoroughly irrational in that it was completely unfalsifiable, you were rejected as some kind of egghead.
When I asked a few more questions (such as, if your policies are best advocated by a person with Alzheimer’s, maybe there are problems with the policies), it became clear that he saw Reagan’s inability to grasp complicated things as a virtue. That, he thought, was what made Reagan go for simple solutions, and it meant that Reagan cut through the bullshit.
That, too, was my experience of Reagan supporters in the 80s (except the Marxists I knew who voted for him because they said he would bring about the people’s revolution faster, and the Dems who voted for him as a protest vote against Carter and then Mondale). They liked that he didn’t seem to understand the complexities of political situations. They sincerely believed that political issues aren’t really complicated, but are made so by professional politicians and eggheads just trying to keep their jobs, and so a person who looked at things in black and white terms would get ‘er done.
I think we have the same situation now. Clearly, the WH is made up of people who don’t understand the law about any of the things they’re trying to enact or the things they’re doing (whose defense is that they don’t and never did), who never had clear plans for any of the things they said they would achieve, who don’t understand how government actually works, who don’t understand what it means to be President, who are mad that they’re being treated the way they treated the previous President, and who are just engaged in rabid infighting.
People with even a moderate understanding of history are worried because this never works out well (for anyone, including his own party). People with a cherrypicked version of history don’t think it matters because they think he’ll enact the GOP agenda (and they think that’s great). And his base thinks it’s great because they think that a person who doesn’t think anything is complicated and isn’t deeply informed is exactly what we need.
In the US, the term “bigot” is used interchangeably with “racist,” but its use for a long time involved religious, not racial, bigotry. At a certain point, it became more broadly used for someone who could not be persuaded out of a belief, religious or political. The OED gives the first three definitions as:
1. A religious hypocrite; (also) a superstitious adherent of religion. 2. A person considered to adhere unreasonably or obstinately to a particular religious belief, practice, etc. 3. In extended use: a fanatical adherent or believer; a person characterized by obstinate, intolerant, or strongly partisan beliefs. (OED, Third Edition, December 2008)
The OED notes that Smollett in 1751 condemned the political discourse of his era by referring to “The crazed tory, the bigot whig.”
And that’s what’s wrong with our political discourse. It isn’t whether people are “civil” or “hostile” or even “racist.” Our problem is that our political discourse is dominated by bigoted discourse. And a lot of those bigots pretend that their views are reasonable ones related to Scripture.
Democracy works when most people are open to persuasion, and it doesn’t work when too many of us are bigots. Being open to persuasion doesn’t mean that you’ll change your mind every time someone gives you new information (the test apparently used by some studies about persuasion), but it does mean that you can imagine changing your mind, and, ideally, you can identify the conditions under which you would change your mind.
A.J. Ayer famously argued that some beliefs are falsifiable (which he described as scientific) and some aren’t (which he defined as religious). I think he was wrong in the notion that science is always falsifiable and religion never is, and there are other quibbles with his claim, but, having spent a lot of time arguing with people in academic, nonacademic, fringe, and just fucking loony realms, I have come to think that, while there are lots of good criticisms of the specifics of his argument, his general point–that we have beliefs we are open to changing and beliefs we will not change—is a useful and accurate description. (In fact, a lot of assessments of whether an argument is useful or not begin with exactly that determination—are you open to changing your mind about the argument? Are you arguing with someone who is?)
A bigot is someone who cannot imagine circumstances under which she might change her mind. Or, more aptly, a bigot is someone who imagines himself as never wrong, and always able to summon evidence to support his position. What he can’t imagine (and this is what makes him irrational) is the evidence that would prove him wrong, and he condemns everyone who disagrees as so completely and obviously wrong that they should be silenced without his ever having carefully listened to their argument.
I do believe that Jesus is my savior, and in a God who is omniscient and omnipotent. That belief is not open to disproof. And I am comfortable with calling that a religious belief. And, so, in that regard, I am a bigot. On the other hand, I’ve read the arguments for atheism, and various other religions, and I don’t think advocates of those beliefs should be silenced.
In addition I don’t believe that those two claims necessarily attach me to beliefs about slavery or segregation—and it’s important to remember that, for much of American history, there were entire regions in which it was insisted that being Christian necessarily meant supporting slavery and segregation. When Christian scholars of Scripture pointed out that the Scripture-based defenses of slavery and segregation were problematic, they were condemned as having a prejudiced and politicized reading of Scripture by people who insisted the Scripture endorsed US slavery practices. The notion that Scripture justified slavery as practiced in the US South, especially after 1830 or so, was a bigoted reading of Scripture—not because I think it was wrong, but because its proponents refused to think carefully or critically about their own reasons and positions. They could “defend” slavery in that they could come up with (cherry-picked) proof texts, but they couldn’t (or wouldn’t) argue fairly with their critics, and they couldn’t (or wouldn’t) articulate the conditions under which they would change their minds. There were none. It isn’t what they argued, but how they argued, that earns them the title of bigot.
Furthermore, they banned criticisms of slavery, enforcing that ban with violence. So, they had both parts of the bigot definition—their views weren’t open to disproof, and they advocated refusing to listen to criticism of their views. They were bigots on steroids, in that they advocated violence against their critics.
Right now, we’re in a situation in which a lot of very powerful people are insisting you shouldn’t listen to criticisms of the current GOP political agenda, and they’re claiming that their views are grounded in Scripture, and they are implicitly and explicitly advocating violence against their critics. You should read them. (You can start with American Family Association, or Family Research Institute, or any expert cited on Fox News. Really—go read them.)
They call themselves conservative Christians. But being theologically conservative in Christianity does not necessarily involve the current GOP political agenda. For instance, there are conservative Christian arguments for gay marriage, for women working outside the home, against patriarchy, against the argument that charity should be entirely voluntary, and even the connection between conservative Christianity and abortion is fairly new. I’m not saying that true conservative Christians have this or that view–I’m saying that being conservative theologically doesn’t necessarily lead you to the GOP political agenda. After all, it was, for a long time, argued that being a conservative Christian necessarily led to endorsing slavery and segregation, and conservative Christians don’t make those connections anymore–why assume that current “necessary” connections (made with the same exegetical method as the “necessary” connections to slavery and segregation) are any better than those? And even many conservative Christians who argue for positions more or less in line with the current GOP political agenda don’t do so in a bigoted way. So, there’s nothing about being a conservative Christian that requires religious bigotry.
So, let’s stop using the term “conservative Christian” for people who insist that being a true Christian so necessarily means believing that the GOP agenda is right that everyone who disagrees should be threatened with violence till they shut up. Using “conservative Christian” for what is actually authoritarian bigotry is strategic misnaming. Whether the Founders imagined a Christian nation is open to argument; whether they imagined a nation without disagreement is not. They valued disagreement; they valued reconsideration, deliberation, and pluralist argument.
People who pant for a one-party state, who tell their audience not to listen to anyone who disagrees, and who threaten (or justify threatening) their critics with violence are violating what the Founders said our country means. They may or may not be Christian (since they’re explicitly violating the “do unto others” rule, I think that’s open to argument), but they are showing themselves to be anti-democratic authoritarian bigots.
And here is one last odd point about people like this (since I spend a lot of time arguing with them). They have a tendency to equate calling them authoritarian bigots with calling for silencing them, and that’s an interesting and important instance of projection. They believe that people who disagree with them should be silenced, so they really seem to hear all criticism of their views as an argument for silencing them. But that’s just projection.
We shouldn’t silence them. We should ask them to argue, not just engage in sloppy Jeremiads. I think our country is better if there are people who are participating in public discourse from the perspective of conservative Christianity. I think that’s a view that should be heard, and it can be heard without insisting all other views should be threatened into silence.
[The image at the top of the post is from a series of stained glass celebrating the massacre of Jews.]
This was originally part of another post, but I cut it from that one. There’s a bunch of stuff floating around these days about how we shouldn’t use the term neoliberalism, though, as well as a lot of flinging the term at fellow lefties with whom we disagree, so I thought I’d go ahead and post it.
Elsewhere, I argued that the GOP objection to the ACA is grounded in the just world hypothesis—the notion that good things happen to good people and bad things happen to bad people–and so good things (money, healthcare, food) should only be given to good people. If people want healthcare, for instance, they should get a job. If they don’t have a job, they aren’t a good person, after all.
There’s also the argument that many in the GOP objected to the ACA only because it was Obama who supported it. And that’s a reasonable argument. It was based, after all, on the recommendations of a very conservative think tank and Mitt Romney’s healthcare plan in Massachusetts. The argument is that they didn’t want any Democratic plan to succeed because our political landscape is so rabidly factionalized that parties are willing to do harm to the country as a whole rather than let the other side succeed.
And that rabid factionalism certainly mattered, but I think there is also a sincere ideological objection, having to do with hostility to third-way neoliberalism (explained below), and with the rise of what might be called neopurconliberalism, so named because it’s a muddle of various philosophies.
Loosely, Obama’s healthcare plan was a classic example of his tendency toward what political theory folks call “third-way neoliberal.” Although in popular usage, “liberal” means people who believe in a social safety net (and tend to vote Dem or Green), in political theory, “liberal” means people who accept the Enlightenment principles of universal rights (especially property, due process, and fair trial), a separation of church and state, minimal interference in the market, and a separation of public and private. Until very recently (the 2000s, really), most GOP and Dem voters were liberal, and it was the dominant lay political theory (meaning how non-specialists explained how a government should work). There were lots of arguments as to what “minimal interference” meant, and what is private (for instance, for years, wife-beating was considered a private act, and outside the realm of government “interference”). So, most people agreed on the principle but disagreed as to how the principle plays out in specific cases.
The other category that matters for thinking about hostility to Obamacare is democratic socialism, which is often used to describe systems in which the government is democratic (little d) and the government provides an extensive safety net. Democratic socialist countries tend to have high taxes and excellent infrastructures.
In the 1970s or so, a lot of economic theorists began arguing for what is often called “neoliberalism,” which is not “liberal” in the common sense–in fact, it’s deeply and profoundly opposed to the principles of someone like LBJ, JFK, or FDR. Neoliberalism says that the market is purely rational, and we should take as much as we can away from the government and put it into the private sector. Neoliberals don’t vote Dem, and they don’t fit the common usage of liberal–they tend to vote GOP or Libertarian. Supporting neoliberalism requires ignoring the whole field of behavioral economics and all the empirical critiques of the fantasy of the rational market, but neoliberalism and neoconservatism both got coopted by people whose political and economic theories are purely ideological (in the sense that their claims are deduced from their premises, and their premises are non-falsifiable–that is, there is no evidence they would accept to get them to reconsider their premises).
On the far right, there emerged an ideology that might be called neopurconliberal, a reemergence of one very specific aspect of early American Puritanism (that wealth is a sign of saintliness), entangled into the neocon assumption that the US is entitled to dictate to all other countries how they should do things–an entitlement that should be enforced through a domination-oriented “diplomacy” and the continual threat of intervention (so, shout a lot and carry a big stick)–and the neoliberal notion that as many social practices should be thrown into the market as possible (so there is no such thing as public goods that should not be sold). Or, more accurately, the far right thoroughly and completely endorsed the “just world hypothesis” (that everyone in this world gets exactly what we deserve).
Neoliberals (who aren’t necessarily religious at all) and neopurconliberals found common ground on public policies like deliberately underfunding public schools, universities, the arts, the USPS, Social Security, and Medicare–the neoliberals because they believe (in a non-falsifiable way) that the market is always better, and the neopurconliberals because they don’t want a secular government that provides goods, and want the goods of the world (healthcare, education, retirement benefits) connected to being what they consider a Christian.
Third-way neoliberalism has two defenses. One is that, given that we are in a post-Citizens United world, no one can win without a lot of money because low-information voters are persuaded by ads, no matter how misleading or how often rebutted. And while it might be nice to imagine that a political figure could get elected by getting all the necessary money from the 85% and members of the 15% who happen to be committed to democratic socialism (probably not a large number), the pragmatic solution is to make sure that the Dem candidate can make large numbers of very wealthy people believe that they will thrive under Dem policies. So, the pragmatic version of third-way neoliberalism says it is a compromise we need to make.
The other version says that the information economy changes everything, and that the Democratic values of honoring workers, having a strong social safety net, being inclusive, having a bright line between religion (private) and secular activities (public), investing in infrastructure, creating stable and productive relationships with other countries, and enabling social mobility can be achieved in partnership with the kinds of industries that would also benefit economically from such values being common.
If you think about it in terms of healthcare, you can see how these ideologies play out. Democratic socialism would have in place single-payer health care, most healthcare provided by the state, and paid for by taxes of some kind. Neoliberalism would leave it all up to the market with little or no governmental control of insurance companies or healthcare providers. Third-way neoliberalism would try to develop a system that created profit incentives for insurance and healthcare providers to serve everyone—more governmental control (such as mandates) than neoliberalism, but not by providing the insurance or healthcare directly (as would happen in democratic socialism).
I really like Bertrand Russell’s argument for socialized medicine. Here’s the problem every healthcare plan faces: it’s the problem of a gambling establishment, because insurance is just legalized gambling. If you are running a casino, you need to make a prediction as to how much you will pay out, and you need to ensure that you will take in more than you have to pay out. So, you have to have a system that collects enough from losers to pay out the winners.
Russell’s argument was exactly right: casinos work because losers pay into the system more than the winners take out. And that’s how insurance works. You have a lot of people who pay to play on the grounds that they might be someone who later gets a lot. You pay a dollar for a lottery ticket, not because you’re certain you’ll win the lottery, but because you’re willing to pay for the chance that you might win. You pay into a benefits pool, not because you’re certain you’ll win, but because you think you might.
The argument about healthcare is an argument about how to gamble. Russell saw that.
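To make that casino point concrete, here is a minimal sketch of the arithmetic in Python, with made-up numbers (the pool size, claim probability, and overhead below are hypothetical, not drawn from any actual insurer): whatever the pool expects to pay out, it has to collect at least that much, plus a margin, from the people paying in.

    # A minimal sketch of the casino/insurance arithmetic; all numbers are
    # hypothetical and chosen only to illustrate the structure of the bet.
    members = 1000            # people paying into the pool
    p_big_claim = 0.05        # chance any one member has an expensive year
    big_claim_cost = 20_000   # cost of an expensive year
    routine_cost = 500        # routine costs everyone incurs
    overhead = 0.10           # the "house edge": administration, reserves, profit

    expected_payout = members * (p_big_claim * big_claim_cost + routine_cost)
    required_collection = expected_payout * (1 + overhead)
    premium_per_member = required_collection / members

    print(f"Expected payout for the pool:   ${expected_payout:,.0f}")
    print(f"Must be collected (with edge):  ${required_collection:,.0f}")
    print(f"Premium per member:             ${premium_per_member:,.2f}")
    # Most members pay in more than they take out in any given year; that
    # surplus is exactly what covers the few who take out a lot.

The particular numbers don’t matter; the structure does, and it’s the structure Russell was pointing at: the many who take out little cover the few who take out a lot.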
What Russell didn’t predict is how ingroup/outgroup preferences would impact healthcare decisions. We always see outrageous expenses as justified when they’re incurred on behalf of beings with whom we identify. The GOP made a big deal about death panels, reframing the issue as Obama would kill your grandmother, at the same time that it was the party that had put such panels in place (http://www.nationalreview.com/corner/428426/death-panel-futile-care-law-texas), and many in their audience believed it because Obama is outgroup. They either never mentioned the GOP-supported death panels, denied they existed, or characterized them as just fiscal responsibility.
Looking at the issue the way Russell describes means just doing the math, and not worrying about whether the people winning at the tables are good or bad people, whether we think they “deserve” to win. Neoliberals hate the ACA because it doesn’t leave things to the market, and neopurconliberals hate it because it is not grounded in an obsession with whether healthcare is only going to people who “deserve” it.
So, this is also an argument about what we think the government should do, and how we should think about policies—in pragmatic terms, or in terms of punish/reward. Whether third-way neoliberalism is inherently bad or good from the perspective of social democrats is an interesting question, if not engaged in a purely ideological way. Can it be a bridge? Can it lead to social democratic policies? The right certainly thinks it can, and that’s why they oppose it. And we should engage the argument in pragmatic ways.
Over at The Resurgent, Senator Mike Lee (R-Utah) explains why he would not support the compromise health care plan, even with the (amended) amendment he and Cruz proposed. And I think that Lee is perfectly sincere in his argument, and I think that his argument shows why a lot of lefty critiques of Trumpcare just don’t quite work, but I’ll explain that after I try to be really, really fair to his argument.
Lee’s objection to the ACA is that “Millions of middle-class families are being forced to pay billions in higher health insurance premiums to help those with pre-existing conditions.” He calls it a “hidden tax,” since it’s “paid every month to insurance companies instead of to the government,” and he maintains that hidden tax is “one of the most crushing financial burdens middle-class families deal with today.”
Lee’s proposal is not, as many say, that people with pre-existing conditions and expensive medical costs would get thrown off insurance entirely. Instead, this plan would split insurees into two groups: people who already have high medical costs, and are bad risks for insurers, and people who have not yet developed expensive medical costs (whom Lee consistently identifies as “the middle class”–that’s an important point, since it implies that he thinks the middle class and people with serious medical costs are different groups). The people with high medical costs, Lee argues, shouldn’t be protected through price-fixing: “We don’t have to use price controls to force middle-class families to bear the brunt of the cost of helping those who need more medical care. We could just give those with pre-existing conditions more help to get the care they need.” So, insurers are “free” to charge whatever they want, and consumers are “free” to get insurance or not (hence the name “Consumer Freedom Amendment”), and this plan will not put the financial burden of others’ healthcare on “middle-class families.”
There are a few points about Lee’s plan that are interesting. The first is that my social media has had a lot of criticisms of Trumpcare and this amendment, and none of them explained it correctly. The main criticism has been that this will throw large numbers of people with serious medical issues to the wolves–that millions will be unable to get insurance. The impression I had gotten from various articles was that Cruz, Lee, and others were cheerfully and knowingly ensuring that millions of people would lose access to their healthcare. And that isn’t quite right, and I think it’s important to get opposition arguments right (both because it’s more rhetorically effective, and because it’s more important for policy deliberation).
Jordan Weissman has a nice article at Slate that does an unusually good job of explaining the various proposals, especially Lee’s argument: “Lee doesn’t believe that healthy Americans should help pay for sick ones through their insurance premiums, and he doesn’t want to put his name on a bill that might—in theory, depending on regulatory decisions, maybe, one day—allow that to happen.”
So, what’s at stake for Lee (and many others) is the notion that paying for healthcare is paying for someone else–for a different group. The really tragic failure here is the failure to imagine an “us” that includes all Americans.
Lee’s argument is a little inconsistent on that point, though. He admits that the subsidies will be paid for in taxes, so the healthy will, in fact, still be paying for the unhealthy. Even if it’s done through tax breaks rather than subsidies, we all pay, since we will pay in the form of less infrastructure and lower funding of all public “goods.” While I do think I understand (but don’t agree with) the reasoning behind the insistence that people who don’t have jobs don’t “deserve” healthcare, I’m not sure I understand this theme that comes up a lot in current conservative talk about public goods–it’s as though they don’t understand that publicly-owned things aren’t owned by no one; they’re owned by everyone. And public goods aren’t given to them; they’re given to us.
The math on how healthcare expenses work out is not complicated. It might be worrisome (e.g., how can we pay for an aging citizenry?), but it isn’t really complicated: for every person who takes a dollar out, there must be someone who puts slightly more than a dollar in (so the insurance company can make some money, and let’s all start with the fact that they’re all doing pretty damn well). That dollar in/out might be direct (it’s a thing on your paystub, and you put it in) or it might be indirect–sales tax, user taxes, sin taxes–but (and this is important) if health care happens, someone pays for it.
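As a sketch of that dollar-in/dollar-out point (hypothetical numbers again, and a made-up dollars_needed helper, not a model of any actual policy), relabeling the channel doesn’t shrink the bill: whether the money shows up as premiums, taxes, or foregone public goods, the total someone puts in has to cover the total taken out plus a margin.

    # Hypothetical illustration: however the funding channel is labeled, the
    # total paid in has to cover the total care delivered, plus a margin.
    total_care_delivered = 1_000_000   # dollars of healthcare actually used
    margin = 0.05                      # administrative/insurer margin

    def dollars_needed(premium_share: float) -> dict:
        """Split the required funding between premiums and everything else
        (taxes, or public goods that go unfunded instead)."""
        total_in = total_care_delivered * (1 + margin)
        return {
            "premiums": total_in * premium_share,
            "taxes_or_foregone_public_goods": total_in * (1 - premium_share),
            "total_paid_by_someone": total_in,
        }

    # Shifting the label doesn't change the total that someone pays:
    print(dollars_needed(premium_share=1.0))
    print(dollars_needed(premium_share=0.6))
    print(dollars_needed(premium_share=0.0))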
A US Senator recently told this story. He was mowing his lawn, and a constituent came up to talk to him (because he is the kind of guy who sees every resident in his state as a constituent, unlike, say, Ted Cruz). That guy said he shouldn’t be forced to pay for health care because he never got sick. “Oh really,” said the Senator. “You’ve never been to the ER?” “Oh, sure,” the constituent said, “but that’s free.”
That’s an important story–that you are not charged in the moment does not make a service free. Lee hasn’t learned that lesson. (And here I’ll make a generalization and say that I’ve yet to argue with a neoconservative who understands that point–you can see it in the twitterfluffle over Grover Norquist’s failure to explain taxes to his daughter.)
I don’t think Lee’s approach is practical, as I don’t think it’s possible for public policy to make such clear distinctions between good and bad people, and I certainly don’t think that Lee’s “middle class” versus “people with pre-existing conditions” distinction is sensible. But it’s an attractive argument to a lot of people because it’s simple, satisfying, and has just enough punitive spice in it to be pleasing. And, as in all us v. them rhetoric, it’s flattering. If we’re going to try to argue against these sorts of policies, and I think we should, we need to do it while understanding what their argument is, and it’s more complicated (and attractive) than is being acknowledged in a lot of lefty rhetoric.
In May, where I work, a young man with mental health issues stabbed several people (including a person of color). He was immediately subdued by some police officers who arrived quickly because they were on bikes. The politically useful narratives of this event arrived just about as fast as the police officers.
In July of 1835, some gamblers were lynched in Vicksburg, Mississippi, after a typical pre-lynching “trial.” An early account says it was because they behaved badly at a Fourth of July celebration. There were other versions later.
The incident at my university quickly became a datapoint about the victimization of white males, the inherent violence of black males, and the failure of the liberal media to be sufficiently alarmist about the black/liberal conspiracy to exterminate white males. That it was such a datapoint was proven by several other claims that turned out to be false (that the attacker went after “Greek” white males, and that there was another stabbing of a “Greek” white male at the same time—in fact, the attacker stabbed a non-white victim, wasn’t targeting “Greeks,” and the other attack never happened). In the incident of the gamblers, the gamblers morphed into abolitionists, and then got glommed onto a non-existent conspiracy led by a guy called Murrell.
These two incidents seem to me extremely similar, and the similarity between them is why I’ve been worried for some time about American political discourse—the way the public “knows” things is worryingly similar to the rhetoric that got us into a war.
That’s obviously a strange argument, but not deliberately perverse. I mean it.
The short version is that how both incidents were quickly renarrated and used signifies larger problems with the normal political discourse of the day. I could have picked another pair—the Charleston pamphlet mailing and Benghazi, for instance—and the similarity would remain. The similarity isn’t about the incidents, but about how they were transformed into an entirely and obviously false narrative that resisted all attempts at refutation. It’s about the easy demagoguery of everyday politics.
This is a sort of complicated argument in that there is so much demagoguery about demagoguery that I have to do a lot of ground-clearing before I can make the argument I want to make. And I worry about starting with a statement of my argument, since my whole point is that what caused the Civil War was that a large number of people refused to listen to anything that might contradict their central beliefs, and stating my conclusion baldly up front invites exactly that refusal. The Civil War is, unhappily, a great example of how the narration of historical events gets glued on to current issues of ingroup identification (so that whether a particular narration is “true” is determined by whether it is loyal to the ingroup).
In a culture of demagoguery, all issues are reduced to a competition between the ingroup and outgroup. A claim is “true” if it shows that the ingroup is better than the outgroup.
Popular understandings of the Civil War (really, a failed revolution) are dominated by ingroup/outgroup thinking. For many people, admitting that secession and the firing on Fort Sumter were bad ideas would entail admitting that ingroup members behaved badly. There isn’t a way to look clearly at primary documents about slavery, the declarations of secession, the proslavery provocation of war, segregation, and “the South” that doesn’t involve acknowledging that “the South” (an instance of strategic misnaming, explained below) judged things badly.
The South—that is, the entirety of people in the southern regions of the US–never supported the fairly bizarre system that was US slavery. While Native American tribes in the southern regions had slaves, the system was nothing like the dominant version (which was primarily lucrative because of selling slaves); it’s reasonable to think the large number of slaves didn’t support slavery; Quakers and others were sometimes opposed to slavery, and often opposed to the dominant system (which, by the 1830s, largely prohibited teaching slaves to read the Bible, and which violated property rights by the amount of state control over what slaveholders could do with what the law said was their property). The equation of “the South” and “proslavery” is an example of the “no true Scotsman” fallacy, in which disconfirming examples are simply not counted.
[This is NOT to say that the large number of white politicians who criticized slavery opposed it, by the way, since many of them were making either the “necessary evil” or “wolf by the ears” argument. The first of those was that slavery was bad, but it was necessary for some vague greater good, and the second (most famously promoted by Jefferson) was that slavery was a crime against Africans, and they were so justified in being angry about slavery that we couldn’t free them. Slavery enabled us to hold them down, and any release of that hold would result in their killing whites in an act of justified rage. So, the argument went, we must maintain slavery.]
When we talk about “the South” we generally mean the white proslavery political leaders, and their motives in secession were absolutely clear: they were protecting and promoting slavery. And that is what they said, over and over, every time the issue came up. Speeches in Congress, speeches for secession, declarations of secession, speeches at Fourth of July celebrations, sermons, judicial decisions—the South was about slavery.
So, anyone who wants to argue that the Civil War was about “states’ rights” and not slavery has to argue that the people who wrote the declarations of secession and the people arguing in favor of secession were lying. [They also have to explain how the Dred Scott decision and the Fugitive Slave Laws respected the principle of states’ rights—I’ve always found it entertaining how a CSA apologist will, when presented with that argument, either go silent or threaten violence—both responses are admissions that they have no rational response.]
There is a more complicated argument about secession not really being about slavery per se, but about how Southern political and intellectual leaders wove slavery into Southern culture. That argument is that proslavery rhetoric had become a staple of American politics, with oneupsmanship about loyalty to slavery requiring that Southern politicians (and their non-Southern allies, called “doughfaces” because proslavery politicians bragged they could make them have any emotion they wanted) get increasingly extreme in arguments about what should be done to ensure the expansion of slavery. So, it wasn’t slavery, but rhetoric about slavery that caused many slave states to engage in the extraordinarily unwise and unnecessary act of secession.
What people often don’t realize is that slavery was safe, even under Lincoln. Slavery was well-ensconced in US politics, with proslavery majorities on the Supreme Court and in Congress, and, until Lincoln, control of the Presidency. Lincoln’s election was a glitch, in that he was only able to win the Presidency because the proslavery forces split. And he was willing to support a constitutional amendment to protect slavery in the existing slave states. The rational choice on the part of slave states would have been to sit tight until the next election, resolve their internal divisions, and elect another proslavery President.
Thus, were secession really about slavery as an economic institution, it wouldn’t have happened—slavery as an economic institution was safe, unless you believe the evidence (which is pretty compelling) that slavery was not an economically efficient way to grow sugar or cotton. There are some who argue that slavery was deliberately uneconomic in that owning slaves wasn’t about making money, but it was a marker of success. So, just as driving an unnecessarily large car with poor gas mileage is a marker of masculine success in our culture, and not a rational economic choice, so owning slaves was a marker of masculine success in the antebellum South. A different argument is that slavery wasn’t profitable as an economic system, but it was profitable as a sales system—the profit in slavery came from selling slaves, so slavery was only profitable as an economic institution if there were expanding markets for slaves. If you put both these arguments together, then the otherwise irrational behavior of proslavery rhetors makes more sense, in that, while Lincoln was willing to allow slavery to exist eternally in slave states, he wouldn’t let it expand. Certainly, a lot of primary documents of the era insist on the importance of opening new markets to slavery. (If you want to see a longer review of scholarship on this argument, and my own take, see Fanatical Schemes.)
Whatever the motivations—and perhaps all three arguments are right about some set of people—from the 1820s until the Civil War, proslavery rhetoric was consistent: every single political issue was about ingroup (proslavery) and outgroup (not proslavery), and any success on the part of the outgroup meant the extermination of the ingroup. And that is our situation now. And while all parties engage in it too much, not all sides do so to the same degree.
And, no, “both sides” aren’t equally guilty, because saying there are two sides is part of the problem.
I’m saying that the Civil War wasn’t about slavery, per se, but was the consequence of proslavery rhetoric. Slavery can’t cause a war, but how people value it, what they connect it to, what it means to them, how central it is to their sense of identity, how they think they would look if they were seen as not supporting slavery—all those things can cause people to go to war because those things cause people to believe that their identity is threatened with extermination if this policy passes. And that’s what pro-secession rhetoric said (which went back into the 1820s): if we don’t get this policy passed, then the Federal Government will send troops into the South and force abolition on us and then we’ll have race war (it’s disturbingly similar to NRA rhetoric about the Federal Government knocking down doors, taking guns, and then the riot of criminals that will ensue).
In a world in which you’re hearing the same claims and the same kinds of claims repeated everywhere, the fact that none of them are true doesn’t matter as far as the kind of impact that those rumors can have. There is a Chesterton story in which Father Brown says that people think that 0 + 0 + 0 + 0 equals more than 0, and I’ve always thought it’s a sweet description of how antebellum proslavery rhetoric worked (and how much rhetoric works now): a long series of non-events is taken as proof of something for many people simply because the series is so long, and they forget it’s a series of false predictions. If the media we’re consuming is repeatedly wrong, the rational choice is to abandon it as unreliable. But, if the media keeps making predictions we want to be true, then the fact that those predictions are always false doesn’t make us mistrust the media—we trust them more because we perceive them as media that want the same things we do. [I’ll mention two examples: Charles and Camilla breaking up, the world ending this year. The fact that those predictions are always wrong doesn’t destroy the credibility of the predicting media for many people because those media keep making the prediction—that the media is always wrong triggers the cognitive bias of no smoke without fire.]
Tremendous numbers of people who didn’t financially benefit from slavery personally identified with slavery, and so they sincerely believed that an end to slavery meant an extermination of their identity.
And none of that was true: Lincoln wouldn’t end slavery; the end of slavery wouldn’t mean race war; as was demonstrated in the non-slave states, it was quite easy to maintain white supremacy without slavery. But the proslavery claim that the choice was either supporting slavery in the most extreme ways possible or race war would have seemed true to someone reading southern newspapers, because those papers were full of reports of events that never happened. And that argument signified what was, to me, the most striking characteristic of antebellum Southern newspaper rhetoric—it was rabidly factional.
It wasn’t a binary. In the 1830s (the era in which I dredged deep), there were multiple parties. And each party had its newspaper system, and each system reprinted articles from others in the system. Some reports were shared (fabricated reports about abolitionist conspiracies would be reported in all the factions hoping to benefit from anti-abolitionist fear-mongering, for instance), and some weren’t, but an article was printed or not on the basis of whether it helped the faction. And all those papers had mottos like “free of faction.”
In rhetoric, that’s called strategic misnaming. You simply declare that you’re doing the opposite of what you’re doing. It works to a disturbing degree, mostly with people who make political decisions on the basis of political faction (or ingroup favoritism).
Someone reading southern newspapers could list all sorts of times that abolitionists engaged in conspiracies of extermination against them. The very real incidents of mass killings of “them”—Native Americans, African Americans, anyone accused of abolitionism—were not mentioned, or were not framed as incidents of ingroup violence. They were self-defense, even if the incidents that supposedly justified the revenge hadn’t actually happened (and that was common). Consumers of that media couldn’t have a reasonably accurate understanding of who was committing violence against whom. There were in the antebellum era (and in the postbellum) communally insane acts of violence against the bodies of Others (mostly African or Native American, but with other kinds of Other thrown in), all of which were rhetorically rationalized as self-defense, and none of which were. Some, like the gambler incident, had nothing to do with politics, and some were political only in the sense that the people enacting the rhetorically-framed “revenge” violence were motivated by racist or pro-slavery politics. So, in the antebellum era, everything was politicized. And even when, as in the case of the gamblers, the correct version of the incident was available to the media, the false version lumbered around the public sphere, crushing any accurate version.
And here we return to the tragedy of my campus. The incident on my campus was not racially motivated, and it was not part of some massive conspiracy against privileged white males. The notion that it was part of a May Day revolution, that an antiracist group had anything to do with it, or that there were other attacks has been thoroughly and completely refuted in any media open to reason. But we live in a world so rabidly factionalized that many of the media that promoted the false version either continue to repeat the false one, or have never repudiated the false one. And so the fear-mongering one lumbers around the internet confirming people in that informational cave that black people and liberals are conspiring against them, that whites are the real victims here, and that the “liberal media” won’t report the truth about the war on whites. And so, as in the antebellum public sphere, there are people roused to violent levels of self-defense over incidents that never actually happened.
In other words, those two incidents worry me because they indicate eras with similar ways of arguing about politics. Then, as now, many people believe that you should get all your information from people who are like you, who share your values, and who remain in a state of permanently charged outrage about them. You only trust people who, like you, insist that we are inherently and essentially good and they are inherently and essentially bad.
Since the dominant method of political argument didn’t play out well in the antebellum era—it ended in a war that was unnecessary–maybe we should rethink doing it that way now.
Americans love the Hitler analogy, the claim that their political leader is just like Hitler. And it’s almost always very badly done—their leader (let’s call him Chester) is just like Hitler because…. and then you get trivial characteristics, such as characteristics that don’t distinguish either Hitler or Chester from most political leaders (they were both charismatic, they used Executive Orders), or that flatten the characteristics that made Hitler extraordinary (Hitler was conservative). That process all starts with deciding that Chester is evil, and Hitler is evil, and then looking for any ways that Chester is like Hitler. So, for instance, in the Obama is Hitler analogy, the argument was that Obama was charismatic, he had followers who loved him, he was clearly evil (to the person making the comparison–I’ll come back to that), and he maneuvered to get his way.
Bush was Hitler because he was charismatic, he had followers who loved him, he was clearly evil (to the people making the comparison), and he used his political powers to get his way. And, in fact, every effective political figure fits those criteria in that someone thought they were clearly evil: Lincoln, Washington, Jefferson, FDR, Reagan, Bush, and Trump, for instance.
“He was clearly evil.” In the case of Hitler, it means he killed six million Jews; in the case of Obama, it means he tried to reduce abortions in a way that some people didn’t like (he didn’t support simply outlawing them); in the case of Bush, it was that he invaded Iraq; for Lincoln, it was that he tried to end slavery; and so on. In other words, in the case of Hitler, every reasonable person agrees that the policies he adopted six or seven years into his time as Chancellor were evil. But not everyone who wants to reduce abortions to the medically necessary agrees that Obama’s policies were evil, and not everyone who wants peace in the Middle East agrees that Bush was evil.
So, what does it mean to decide a political leader is evil?
For instance, people who condemned Obama as evil often did so on grounds that would make Eisenhower and Nixon evil (support for the EPA, heavy funding for infrastructure, high corporate taxes, a social safety net that included some version of Medicare, secular public education), and on grounds that would make Eisenhower, Nixon, Reagan, and the first Bush evil (faith in social mobility, protection of public lands, promoting accurate science education, support for the arts, an independent judiciary, funding for infrastructure, good relations with other countries, the virtues of compromise). So, were the people condemning Obama as evil doing so on grounds that would cause them to condemn GOP figures as evil? No—their standards didn’t apply to figures they liked. It was just a way of saying he wasn’t GOP.
Every political figure has some group of people who sincerely believe that leader is obviously evil. And every political figure who gets to be President has mastered the arts of being charismatic (not everyone gets power from charismatic leadership, but that’s a different post), compromising, manipulating, and engaging followers. So, is every political leader just like Hitler?
Unhappily, we’re in a situation in which people make the Hitler analogy to everyone else in their informational cave, and the people in that cave think it’s obviously a great analogy. Since we’re in a culture of demagoguery in which every disagreement is a question of good (our political party) or evil (their political party), any effective political figure of theirs is Hitler.
We’re in a culture in which a lot of media says, relentlessly, that all political choices are between a policy agenda that is obviously good and a policy agenda that is obviously evil, and, therefore, nothing other than the complete triumph of our political agenda is good. That’s demagoguery.
The claim that “he was clearly evil” is important because it raises the question of how we decide whether something is true or not. And that is the question in a democracy. The basic principle of a democracy is that there is a kind of common sense, that most people make decisions about politics in a reasonable manner, and that we all benefit because we get policies that are the result of the input of different points of view. Democracy is a politics of disagreement. But, if some people are supporting a profoundly anti-democratic leader, who will use the power of government to silence and oppress, then we need to be very worried. So the question of whether we are democratically electing someone who will, in fact, make our government an authoritarian one-party state is important. But, how do you know that your perception that this leader is just like Hitler is reasonable? What is your “truth test” for that claim?
1. Truth tests, certainty, and knowledge as a binary
Talking about better and worse Hitler analogies requires a long digression into truth tests and certainty for two reasons. First, the tendency to perceive the other side’s effective political leaders as evil because their policies are completely evil is based on and reinforces the tendency to think of political questions as between obvious good and obvious evil, and that perception is reinforced by and reinforces what I’ll explain as the two-part simple truth test (does this fit with what I already believe, and do reliable authorities say this claim is true). Second, believing that all beliefs and claims can be divided into obvious binaries (you are certain or clueless, something is right or wrong, a claim is true or false, there is order or chaos) correlates strongly with authoritarianism, and one of the most important qualities of Hitler was that he was authoritarian (and that’s where a lot of these analogies fail—neither Obama nor Bush was an authoritarian).
And so, ultimately, as the ancient Greeks realized, any discussion about democracy quickly gets to the question of how common people make decisions as to whether various claims are true or false. Democracies fail or thrive on the single point of how people assess truth. If people believe that only their political faction has the truth and every other political faction is evil, then democracies collapse and we have an authoritarian leader. Hitlers arise when people abandon democratic deliberation.
That’s the most important point about Hitler: leaders like Hitler come about because we decide that diversity of opinion weakens our country and is unnecessary.
The notion that authoritarian governments arise from assumptions about how people argue might seem counterintuitive, since that seems like some kind of pedantic question only interesting to eggheads (not what you believe but how you believe beliefs work) and therefore off the point. But, actually, it is the point—democracies turn into authoritarian systems under some circumstances and thrive under others, and it all depends on what is seen as the most sensible way to assess whether a claim is true or not. The difference between democracy and authoritarianism is the practice of testing claims—truth tests.
For instance, some sources say that Chester is just like Hitler, and other sources say that Hubert is just like Hitler. How do you decide which claim is true?
One truth test is simple, and it has two parts: does perceiving Chester as just like Hitler fit with what you already believe? do sources you think are authorities tell you that Chester is just like Hitler? Let’s call this the simple two-part truth test, and the people who use it are simple truth-testers.
Sometimes it looks as though there is a third (but it’s really just the first reworded): can I find evidence to show that Chester is just like Hitler?
For many people, if they can confirm a claim through those three tests (does it fit what I believe, do authorities I trust say that, can I find confirming evidence), then they believe the claim is rational.
(Spoiler alert: it isn’t.)
That third question is really just the same as the first two. If you believe something—anything, in fact—then you can always find evidence to support it. If you are really interested in knowing whether your beliefs are valid, then you shouldn’t look to see whether there is evidence to support what you believe; you should look to see whether there is evidence that you’re wrong. If you believe that someone is mad at you, you can find a lot of evidence to support that belief—if they’re being nice, they’re being too nice; if they’re quiet, they’re thinking about how angry they are with you. You need to think about what evidence would persuade you that they aren’t mad. (If there is none, then it isn’t a rational belief.) So, those three questions are two: does a claim (or political figure) confirm what I believe; do the authorities I trust confirm this claim (or political figure)?
Behind those two questions is a background issue of what decisions look like. Imagine that you’re getting your hair cut, and the stylist says you have to choose between shaving your head or not cutting your hair at all—how do you decide whether that person is giving you good advice?
And behind that is the question of whether it’s a binary decision—how many choices do you have? Is the stylist open to other options? Do you have other options? Once the stylist has persuaded you that you either do nothing to your hair or shave it, then all he has to do is explain what’s wrong with doing nothing. And you’re trapped by a logical fallacy, because leaving your hair alone might be a mistake, but that doesn’t actually mean that shaving your head is a good choice. People who can’t argue for their policy like the fallacy of the false division (the either/or fallacy) because it hides the fact that they can’t persuade you of the virtues of their policy.
The more that you believe every choice is between two absolutely different extremes, the more likely it is that you’ll be drawn to political leaders, parties, and media outlets that divide everything into absolutely good and absolutely bad.
It’s no coincidence that people who believe that the simple truth test is all you need also insist (sometimes in all caps) that anyone who says otherwise is a hippy dippy postmodernist. For many people, there is an absolute binary in everything, including how to look at the world—you can look and make a judgment easily and clearly or else you’re saying that any kind of knowledge at all is impossible. And what you see is true, obviously, so anyone who says that judgment is vexed, flawed, and complicated is a dithering weeny. They say that, for a person of clear judgment, the right course of action in all cases is obvious and clear. It’s always black (bad) or white (good, and what they see). Truth tests are simple, they say.
In fact, even the people who insist that the truth is always obvious and it’s all black or white go through their day in shades of grey. Imagine that you’re a simple truth tester. You’re sitting at your computer and you want an ‘e’ to appear on your screen, so you hit the ‘e’ key. And the ‘e’ doesn’t appear. Since you believe in certainty, and you did not get the certain answer you predicted, are you now a hippy-dippy relativist postmodernist (had I worlds enough and time I’d explain why that term is incredibly sloppy and just plain wrong) who is clueless? Are you paralyzed by indecision? Do you now believe that all keys can do whatever they want and there is no right or wrong when it comes to keys?
No, you decide you didn’t really hit the ‘e’ or your key is gummed up or autocorrect did something weird. When you hit the ‘e’ key, you can’t be absolutely and perfectly certain that the ‘e’ will appear, but that’s probably what will happen, and if it doesn’t you aren’t in some swamp of postmodern relativism and lack of judgment.
Your experience typing shows that the binary promoted by a lot of media between absolute certainty and hippy dippy relativism is a sloppy social construct. They want you to believe it, but your experience of typing, or making any other decision, shows it’s a false binary. You hit the ‘e’ key, and you’re pretty near certain that an ‘e’ will appear. But you also know it might not, and you won’t collapse into some pile of cold sweat of clueless relativism if it doesn’t. You’ll clean your keyboard.
It’s the same situation with voting for someone, marrying someone, buying a new car, making dinner, painting a room. You can feel certain in the moment that you’re making the right decision, but any honest person has to admit that there are lots of times we felt totally and absolutely certain and turned out to have been mistaken. Feeling certain and being right aren’t the same thing.
That isn’t to say that the hippy-dippy relativists are right and all views are equally valid and there is no right or wrong—it’s to say that the binary between “the right answer is always obviously clear” and hippy-dippy relativism is wrong. For instance, in terms of the assertion that many people make that the distinction between right and wrong is absolutely obvious: is killing someone else right or wrong? Everyone answers that it depends. So, does that mean we’re all people with no moral compass? No, it means the moral compass is complicated, and takes thought, but it isn’t hopeless.
Our world is not divided into being absolutely certain and being lost in clueless hippy dippy relativism. But, and this is important, that is the black and white world described by a lot of media—if you don’t accept their truth, then you’re advocating clueless postmodern relativism. What those media say is that what you already believe is absolutely true, and, they say, if it turns out to be false, you never believed it, and they never said it. (The number of pundits who advocated the Iraq invasion and then claimed they were opposed to it all along is stunning. Trump’s claiming he never supported the invasion fits perfectly with what Philip Tetlock says about people who believe in their own expertise.)
And that you have been and always will be right is a lovely, comforting, pleasurable message to consume. It is the delicate whipped cream of citizenship—that you, and people like you, are always right, never wrong, and you can just rely on your gut judgment. Of course, the same media that says it's all clear has insisted that something is absolutely true that turned out not to be (Saddam Hussein has weapons of mass destruction, voting for Reagan will lead to the people's revolution, Trump will jail Clinton, Brad Pitt is getting back together with Angelina Jolie, studies show that vaccines cause autism, the world will end in 1987). The paradox is that people continue to consume and believe media that have been wrong over and over, and yet are accepted as trusted authorities because they have sometimes been right, or, more often, because, even if wrong, what they say is comforting and assuring.
But, what happens when media say that Trump has a plan to end ISIS and then it turns out his plan is to tell the Pentagon to come up with a plan? What happens when the study that people cite to say autism is caused by vaccines turns out to be fake? Or, as Leon Festinger famously studied, what happens when a religion says the world will end, and it doesn’t? What happens when something you believe that fits with everything else you believe and is endorsed by authorities you believe turns out to be false? You could decide that maybe things aren’t simple choices between obviously true and obviously false, but that isn’t generally what people do. Instead, we recommit to the media because now we don’t want to look stupid.
Maybe it would be better if we all just decided that complicated issues are complicated, and that’s okay.
There are famous examples that show the simple truth test—you can just trust your perception—is wrong.
For instance, consider color. If you're looking at paint swatches, and you want a darker color, you can look at two colors and decide which is darker. You might be wrong: famous optical illusions show our tendency to interpret color by context.
Those examples look like special cases, and they (sort of) are: if you know that you have a dark grey car, and there is a grey car and a dark grey car in the parking lot, you don't stand in the parking lot paralyzed by not knowing which car is yours because you saw something on the internet that showed your perception of darkness might be wrong. Those illusions show that you might be entirely wrong, but you will not go on in your life worrying about it.
But you have been wrong about colors. And we've all tried to get into the wrong car, but in those cases we get instant feedback that we were wrong. With politics it's more complicated, since media that promoted what turns out to have been a disastrous decision can insist they never promoted it (when Y2K turned out not to be a thing, various radio stations that had been fear-mongering about it just never mentioned it again), claim it was the right decision, or blame it on someone else. They can continue to insist that their "truth" is always the absolutely obvious decision and that there is a binary between being certain and being clueless. But, in fact, our operative truth test in the normal daily decisions we make is one that involves skepticism and probability. Sensible people don't go through life with a yes/no binary. We operate on the basis of a yes/various degrees of maybe/no continuum.
What’s important about optical illusions is that they show that the notion central to a lot of argutainment—that our truth tests for politics should involve being absolutely certain that our group is right or else you’re in the muck of relativistic postmodernism—isn’t how we get through our days. And that’s important. Any medium, any pundit, any program, that says that decisions are always between us and them is lying to us. We know, from decisions about where to park, what stylist to use, what to make for dinner, how to get home, that it isn’t about us vs. them: it’s about making the best guesses we can. And we’re always wrong eventually, and that’s okay.
We tend to rely on what social psychologists call heuristics—meaning mental shortcuts—because you can't thoroughly and completely think through every decision. For instance, if you need a haircut, you can't possibly thoroughly investigate every single option you have. You're likely to have a method for reducing the uncertainty of the decision—you rely on reviews, you go where a friend goes, you just pick the closest place. If a stylist says you have to shave your head or do nothing, you'll walk away.
You might tend to have the same thing for breakfast, or generally take the same route to work, campus, the gym. Your route will not be the best choice some percentage of the time because traffic, accidents, or some random event will make your normal route slower than others from time to time (if you live in Austin, it will be wrong a lot). Even though you know that you can’t be certain you’re taking the best route to your destination, you don’t stand in your apartment doorway paralyzed by indecision. You aren’t clueless about your choices—you have a lot of information about what tends to work, and what conditions (weather, a football game, time of day, local music festivals, roadwork) are likely to introduce variables in your understanding of what is the best route. You are neither certain nor clueless.
And there are dozens of other decisions we make every day that are in that realm of neither clueless nor certain: whether you’ll like this movie, if the next episode of a TV program/date/game version/book in a series/cd by an artist/meal at a restaurant will be as good as the last, whether your boss/teacher will like this paper/presentation as much as the previous, if you’ll enjoy this trip, if this shirt will work out, if this chainsaw will really be that much better, if this mechanic will do a good job on your car, if this landlord will not be a jerk, if this class/job will be a good one.
We all spend all of our time in a world in which we must manage uncertainty and ambiguity, but some people get anxious when presented with ambiguity and uncertainty, and so they talk (and think) as though there is an absolute binary between certain and clueless, and every single decision falls into one or the other.
And here things get complicated. The people who don’t like uncertainty and ambiguity (they are, as social psychologists say, “drawn to closure”) will insist that everything is this or that, black or white even though, in fact, they continually manage shades of grey. They get in the car or walk to the bus feeling certain that they have made the right choice, when their choice is just habit, or the best guess, or somewhere on that range of more or less ambiguous.
So, there is a confusion between certainty as a feeling (you feel certain that you are right) and certainty as a reasonable assessment of the evidence (all of the relevant evidence has been assessed and alternative explanations disproven)—as a statement about the process of decision-making. Most people use it in the former way, but think they’re using it in the latter, as though the feeling of certainty is correlated to the quality of evidence. In fact, how certain people feel is largely a consequence of their personality type (On Being Certain has a great explanation of that, but Tetlock’s Expert Political Judgment is also useful). There’s also good evidence that the people who know the most about a subject tend to express themselves with less certainty than people who are un- or misinformed (the “Dunning-Kruger effect”).
What all that means is that people who get anxious in the face of ambiguity and uncertainty resolve that anxiety by feeling certain, and using a rigid truth test. So, the world isn’t rigidly black or white, but their truth test is. For instance, it might have been ambiguous whether they actually took the best route to work, but they will insist that they did, and that they obviously did. They managed uncertainty and ambiguity by denying it exists. This sort of person will get actively angry if you try to show them the situation is complicated.
They manage the actual uncertainty of situations by, retroactively, saying that the right answer was absolutely clear.[1] That sort of person will say that “truth test” is just simply asking yourself if something is true or not. Let’s call that the simple truth test, and the people who use it simple truth testers.
The simple truth test has two parts: first, does this claim fit with what I already believe? and, second, do authorities I consider reliable promote this claim?
People who rely on this simple truth test say it works because, they believe, the true course of action is always absolutely clear, and, therefore, it should be obvious to them, and it should be obvious to people they consider good. (It shouldn't be surprising that they deny having made mistakes in the past, simply refashioning their own history of decisions—try to find someone who will admit to having supported the Iraq invasion or to having panicked about Y2K.)
The simple truth test is comfortable. Each new claim is assessed in terms of whether it makes us feel good about things we already believe. Every time we reject or accept a claim on the basis of whether it confirms our previous beliefs, it confirms our sense of ourselves as people who easily and immediately perceive the truth. Thus, this truth test isn't just about whether the new claim is true, but about whether we and people like us are certainly right.
The more certain we feel about a claim, the less likely we are to double-check whether we were right, and the more likely we are to find ways to make ourselves have been right. Once we get to work, or the gym, or campus, we don't generally try to figure out whether we really did take the fastest route unless we have reason to believe we might have been mistaken and we're the sort of person willing to consider that we might have been mistaken.
There’s a circle here, in other words: the sort of person who believes that there is a binary between being certain and being clueless, and who is certain about all of her beliefs, is less likely to do the kind of work that would cause her to reconsider her sense of self and her truth tests. Her sense of herself as always right appears to be confirmed because she can’t think of any time she has been wrong. Because she never looked for such a time.
Here I need to make an important clarification: I’m not claiming there is a binary between people who believe you’re either certain or clueless and people who believe that mistakes in perception happen frequently. It’s more of a continuum, but a pretty messy one. We’re all drawn to black or white thinking when we’re stressed, frightened, threatened, or trying to make decisions with inadequate information. Most people have some realms or sets of claims they think are certain (this world is not a dream, evolution is a fact, gravity happens). Some people need to feel certain about everything, and some people don’t need to feel certain much at all, and a lot of people feel certain about many things but not everything.
Someone who believes that her truth tests enable certainty on all or most things will be at one end of the continuum, and someone who managed to live in a constant state of uncertainty would be at the other. Let's call the person at the "it's easy to be certain about almost everything important" end an authoritarian (I'll explain the connection better later).
Authoritarians have trouble with the concept of probabilities. For instance, if the weather report says there will be rain, that’s a yes/no. And it’s proven wrong if the weather report says yes and there is no rain. But if the weather report says there is a 90% chance of rain and it doesn’t rain, the report has not been proven wrong.
Authoritarians believe that saying there is a 90% chance is just a skeezy way to avoid making a decision—that the world really is divided into yes or no, and some people just don’t want to commit. And they consume media that says exactly that.
This is another really important point: many people spend their time consuming media that says that every decision is divided into two categories: the obviously right decision, and the obviously wrong one. And that media says that anyone who says that the right decision might be ambiguous, unclear, or a compromise is promoting relativism or postmodernism. So, as those media say, you're either absolutely clear or you're deep in the muck of clueless relativism. Authoritarians who consume that media are like the example above of the woman who believes that her certainty is always justified because she never checks to see whether she was wrong. They live in a world in which their "us" is always right, has always been right, and will always be right, and the people who disagree are wrong-headed ditherers who pretend that it's complicated because they aren't man enough to just take a damn stand.
(And, before I go on, I should say that, yes, authoritarianism isn’t limited to one political position—there are authoritarians all over the map. But, that isn’t to say that “both sides are just as bad” or authoritarianism is equally distributed. The distribution of authoritarianism is neither a binary nor a constant; it isn’t all on one side, but it isn’t evenly distributed.)
I want to emphasize that the authoritarian view—that you’re certain or clueless—is often connected to a claim that people are either authoritarians or relativists (or postmodernists or hippies) because there are two odd things about that insistence. First, a point I can’t pursue here, authoritarians rarely stick to principles across situations and end up fitting their own definition of relativist/postmodern. (Briefly, what I mean is that authoritarians put their group first, and say their group is always right, so they condemn behavior in them that they praise or justify in us. In other words, whether an act is good or bad is relative to whether it’s done by us or them—that’s moral relativism. So, oddly enough, you end up with moral relativism attacked by people who engage in it.) Second, even authoritarians actually make decisions in a world of uncertainty and ambiguity, and don’t use the same truth test for all situations. When their us turns out to be wrong, then they will claim the situation was ambiguous, there was bad information, everyone makes mistakes, and go on to insist that all decisions are unambiguous.
So, authoritarians say that all decisions are clear, except when they aren’t, and that we are always right, except when we aren’t. But those unclear situations and mistakes should never be taken as reasons to be more skeptical in the future.
2. Back to Hitler
Okay, so how do most people decide whether their leader is like Hitler? (And notice that it is never about whether our leader is like Hitler.) If you believe in the simple two-part truth test, then you ask yourself whether their leader seems to you to be like Hitler, and whether authorities you trust say he is. And you’re done.
But what does it mean to be like Hitler? What was Hitler like?
There is the historical Hitler who was, I think, evil, but didn’t appear so to many people, and who had tremendous support from a lot of authoritarians, and there is the cartoon Hitler. Hitler was evil because he tried to exterminate entire peoples (and he started an unnecessary war, but that’s often left out). The cartoon version assumes that his ultimate goals were obvious to everyone from the beginning—that he came on the scene saying “Let’s try to conquer the entire world and exterminate icky people” and always stuck to that message, so that everyone who supported him knew they were supporting someone who would start a world war and engage in genocide.
But that isn’t how Hitler looked to people at the time. Hitler didn’t come across as evil, even to his opponents (except to the international socialists), until the Holocaust was well under way. Had he come across as evil he would never have gotten into power. While Mein Kampf and his “beerhall” speeches were clearly eliminationist and warmongering, once he took power his recorded and broadcast speeches never mentioned extermination and were about peace. (According to Letters to Hitler, his supporters were unhappy when he started the war.) Hitler had a lot of support, of various kinds, and his actions between 1933 and 1939 actually won over a lot of people, especially conservatives and various kinds of nationalists, who had been skeptical or even hostile to him before 1933. His supporters ranged from the fans (the true believers), through conservative nationalists who wanted to stop Bolshevism and reinstate what they saw as “traditional” values and conservative Christians who objected to some of his policies but also liked a lot of them (such as his promotion of traditional roles for women, his opposition to abortion and birth control, his demonizing of homosexuality), to people of various political ideologies who liked that (they thought) he was making Germany respected again, had improved the economy, had ended the bickering and instability they associated with democratic deliberation, and was undoing a lot of the shame associated with the Versailles Treaty.
Until 1939, to his fans, Hitler came across as a truth-teller, willing to say politically incorrect things (that “everyone” knew were true), cut through all the bullshit, and be decisive. He would bring honor back to Germany and make it the military powerhouse it had been in recent memory; he would sideline the feckless and dithering liberals, crush the communists, and deal with the internal terrorism of the large number of immigrants in Germany who were stealing jobs, living off the state, and trying to destroy Germany from within; he would clean out the government of corrupt industrialists and financiers who were benefitting from the too-long deliberations and innumerable regulations. He would be a strong leader who would take action and not just argue and compromise like everyone else. He didn’t begin by imprisoning Jews; he began by making Germany a one-party state, and that involved jailing his political opponents.
Even to many people willing to work with him, Hitler came across as crude, as someone pandering to popular racism and xenophobia, a rabble-rouser who made absurd claims, who didn’t always make sense, and whose understanding of the complexities of politics appeared minimal. But conservatives thought he would enable them to put together a coalition that would dominate the Reichstag (the German Congress, essentially) and they could thereby get through their policy agenda. They thought they could handle him. While they granted that he had said some pretty racist and extreme things (especially about immigrants and non-Christians, although his own record on Christian behavior wasn’t exactly great), they thought that was rabble-rousing he didn’t mean, a rhetoric he could continue to use to mobilize his base for their purposes, or that he could be their pitbull whom they could keep on a short chain. He instantly imposed a politically conservative social agenda that made a lot of conservative Christians very happy—he was relentless in his support for the notion that men earn money and women work in the home, homosexuality and abortion are evil [2], sexual immorality weakens the state, and his rhetoric was always framed in “Christian terms” (as Kenneth Burke famously argued—his rhetoric was a bastardization of Christian rhetoric, but it still relied on Christian tropes).
Conservative Christians (Christians in general, to be blunt) had a complicated reaction to him. Most Christian churches of the era were anti-Semitic, and that took various forms. There were the extreme forms—the passion plays that showed Jews as Christ-killers, who killed Christians for their blood at Passover, even religious festivals about how Jews stabbed consecrated hosts (some of which only ended in the 1960s).
There were also the “I’m not racist but” versions of Christian anti-Semitism promoted by Catholic and Protestant organizations (all of this is elegantly described in Antisemitism, Christian Ambivalence, and the Holocaust). Mainstream Catholic and Lutheran thought promoted the notion that Jews were, at best, failed Christians, and that the only reason not to exterminate them was so that they could be converted. There was, in that world, no explicit repudiation of the sometimes pornographic fantasies of greedy Jews involved in worldwide conspiracies, stabbing the host, drinking the blood of Christian boys at Passover, and plotting the downfall of Germany. And there was certainly no sense that Christians should tolerate Jews in the sense of treating them as we would want to be treated; it simply meant that they shouldn’t be killed. As Ian Kershaw has shown, a lot of German Christians didn’t bother themselves about the oppression (even killing) of Jews, as long as it happened out of their ken; they weren’t in favor of killing Jews, but, as long as they could ignore that it was happening, they weren’t going to do much to protest (Hitler, The Germans, and the Final Solution).
Many of his skeptics (even international ones) were won over by his rhetoric. His broadcast speeches emphasized his desire for peace and prosperity; they liked that he talked tough about Germany’s relations to other countries (but didn’t think he’d lead them into war), they loved that he spent so much of his own money doing good things for the country (in fact, he got far more money out of Germany than he put into it, and he didn’t pay taxes—for more on this, see Hitler at Home), and they loved that he had the common touch, and didn’t seem to be some inaccessible snob or aristocrat, but a person who really understood them (Letters to Hitler is fascinating for showing his support). They believed that he would take a strong stance, be decisive, look out for regular people, clear the government of corrupt relationships with financiers, silence the kind of people who were trying to drag the nation down, and cleanse the nation of that religious/racial group that was essentially ideologically committed to destroying Germany.
There were a lot of people who thought Hitler could be controlled and used by conservative forces (von Papen) or was a joke. In middle school, I had a teacher who had been in the Berlin intelligentsia before and during the war, and when asked why people like her didn’t do more about Hitler, she said, “We thought he was a fool.” Many of his opponents thought he would never get elected, never be given a position of power.
But still, some students say, you can see in his early rhetoric that there was a logic of extermination. And, yes, I think that’s true, but, and this is important, what makes you think you would see it? Smart people at the time didn’t see it, especially since, once he got a certain level of attention, he only engaged in dog whistle racism. Look, for instance, at Triumph of the Will—the brilliant film of the 1934 Nazi rally in Nuremberg—in which anti-Semitism appears absent. The award-winning movie convinced many that Hitler wasn’t really as anti-Semitic as Mein Kampf might have suggested. But, by 1934, true believers had learned their whistles—everything about bathing, cleansing, purity, and health was a long blow on the dog whistle of “Jews are a disease on the body politic.” Hitler’s first speech on the dissolution of the Reichstag (March 1933) never used the word Jew and looked reasonable (he couldn’t control himself, however, and went back to his non-dog whistle demagoguery in what amounted to the question and answer period—Kershaw’s Hubris describes the whole event).
We focus on Hitler’s policy of extermination, but we don’t always focus enough on his foreign policy, especially between 1933 and 1939. Just as we think of Hitler as a raging antisemite (because of his actions), so we think of him as a warmonger, and he was both at heart and eventually, but he managed not to look that way for years. That’s really, really important to remember. He took power in 1933, and didn’t show his warmongering card till 1939. He didn’t show his exterminationist card till even later.
Hitler’s foreign policy was initially tremendously popular because he insisted that Germany was being ill-treated by other nations, was carrying a disproportionate burden, and was entitled to things it was being denied. Hitler said that Germany needed to be strong, more nationalist, more dominating, more manly in its relations with other nations. Germany didn’t want war, but it would, he said, insist upon respect.
Prior to being handed power, Hitler talked like an irresponsible war-monger and raging antisemite (especially in Mein Kampf), but his speeches right up until the invasion of Poland were about peace, stability, and domestic issues about helping the common working man. Even in 1933-4, the Nazi Party could release a pamphlet with his speeches and the title Germany Desires Work and Peace.
What that means is that from 1933 to 1939 Hitler managed a neat rhetorical trick, and he did it by dog whistles: he persuaded his extremist supporters that he was still the warmongering raging antisemite they had loved in the beerhalls and for whom Streicher was a reliable spokesman, and he persuaded the people frightened by his extremism that he wasn’t that guy, he would enable them to get through their policy agenda. (His March 1933 speech is a perfect example of this nasty strategy, and some day I intend to write a long close analysis of it.)
And even many of the conservatives who were initially deeply opposed to him came around because he really did seem to be effective at getting real results. He got those results by mortgaging the German economy, and setting up both a foreign policy and economic policy that couldn’t possibly be maintained without massive conquest; it had short-term benefits, but was not sustainable.
Hitler benefitted from the culture of demagoguery of Weimar Germany. After Germany lost WWI, the monarchy was ended, and a democracy was imposed. Imposing democracy is always vexed, and it doesn’t always work because democracy depends on certain cultural values (a topic for a different post). One of those values is seeing pluralism—that is, diversity of perspective, experience, and identity—as a good thing. If you value pluralism, then you’ll tend to value compromise. If you believe that a strong community has people with different legitimate interests, points of view, and beliefs, then you will see compromise as a success. If, however, you’re an authoritarian, and you believe that you and only you have the obvious truth and everyone else is either a knave or a fool, then you will see refusing to compromise as a virtue.
And then democracy stalls. It doesn’t stall because it’s a flawed system; it stalls when people reject the basic premises of democracy, when, despite how they make decisions about how to get to work in the morning, or whether to take an umbrella, they insist that all decisions are binaries between what is obviously right (us) and what is obviously wrong (them).
And, in the era after WWI, Germany was a country with a democratic constitution but a rabidly factionalized set of informational caves. People could (and did) spend all their time getting information from media that said that all political questions are questions of good (us) and evil (them). Those media promoted conspiracy theories—the Protocols of the Elders of Zion, for instance—insisted on the factuality of non-events, framed all issues as apocalyptic, and demonized compromise and deliberating. They said it’s a binary. The International Socialists said the same thing, that anything other than a workers’ revolution now was fascism, that the collapse of democracy was great because it would enable the revolution. Monarchists wanted the collapse of the democracy because they hoped to get a monarchy back, and a non-trivial number of industrialists wanted democracy to collapse because they were afraid people would vote for a social safety net that would raise their taxes.
It was a culture of demagoguery.
But, in the moment, large numbers of people didn’t see it that way because, if you were in a factional cave, and you used the two-step test, everything you heard in your cave would seem to be true. Everything you heard about Hitler would fit with what you already believed, and it was being repeated by people you trusted.
Maybe what you heard confirmed that he would save Germany, that he was a no-bullshit decisive leader who really cared about people like you and was going to get shit done, or maybe what you heard was that he was a tool of the capitalists and liberals and that you should refuse to compromise with them to keep him out of power. Whether what you heard was that Hitler was awesome or that he was completely wrong, what you heard was that he was obviously one or the other, and that anyone who disagreed with you was evil. What you heard was that the disagreement itself was proof that evil was present. And you heard that democracy was a failure.
And that helped Hitler, even the attacks on him. As long as everyone agreed that the truth is obvious, that disagreement is a sign of weakness, and that compromise is evil, an authoritarian like Hitler would come along and win.
There were a lot of people who more or less supported the aims he said he had—getting Germany to have a more prosperous economy, fighting Bolshevism, supporting the German church, avoiding war, renegotiating the Versailles Treaty, purifying Germany of anti-German elements, making German politics more efficient and stable—but who thought Hitler was a loose cannon and a demagogue. Many of those were conservatives and centrists.
And, once Hitler was in power they watched him carefully. And, really, all his public speeches, especially any ones that might get international coverage, weren’t that bad. They weren’t as bad as his earlier rhetoric. There wasn’t as much explicit anti-Semitism, for instance, and, unlike in Mein Kampf, he didn’t advocate aggressive war. He said, over and over, he wanted peace. He immediately took over the press, but, still and all, every reader of his propaganda could believe that Hitler was a tremendously effective leader, and, really, by any standard he was: he effected change.
There wasn’t, however, much deliberation as to whether the changes he effected were good. He took a more aggressive stance toward other countries (a welcome change from the loser stance adopted from the end of WWI, which, technically, Germany did lose), he openly violated the deliberately shaming aspects of the Versailles Treaty, he appeared to reject the new terms of the capitalism of the era (he met with major industrial leaders and claimed to have reached agreements that would help workers), he reduced disagreement, he imprisoned people who seemed to many people to be dangerous, he enacted laws that promoted the cultural “us” and disenfranchised “them.” And he said all the right things. At the end of his first year, Germany published a pamphlet of his speeches, with the title “The New Germany Desires Work and Peace.” So, by the simple two-part truth test (do the claims support what you already believe? do authorities you trust confirm these claims?) Hitler’s rhetoric would look good to a normal person in the 30s. Granted, his rhetoric was always authoritarian—disagreement is bad, pluralism is bad, the right course of action is always obvious to a person of good judgment, you should just trust Hitler—but it would have looked pretty good through the 30s. A person using that third test—can I find evidence to support these claims?—would have felt that Hitler was pretty good.
3. So, would you recognize Hitler if you liked what he was saying?
What I’m trying to say is that asking the question of “Is their political leader just like Hitler” is just about as wrong as it can get as long as you’re relying on simple truth tests.
If you get all your information from sources you trust, and you trust them because what they say fits in with your other beliefs, then you’re living in a world of propaganda.
If you think that you could tell if you were following a Hitler because you’d know he was evil, and you are in an informational cave that says all the issues are simple, good and evil are binaries and easy to tell one from another, there is either certainty or dithering, disagreement and deliberation are what weak people do, compromise is weakening the good, and the truth in any situation is obvious, then, congratulations, you’d support Hitler! Would you support the guy who turned out to start a disastrous war, bankrupt his nation, commit genocide? Maybe—it would just be random chance. Maybe you would have supported Stalin instead. But you would definitely have supported one or the other.
Democracy isn’t about what you believe; it’s about how you believe. Democracy thrives when people believe that they might be wrong, that the world is complicated, that the best policies are compromises, that disagreement can be passionate, nasty, vehement, and compassionate–that the best deliberation comes when people learn to perspective shift. Democracy requires that we lose gracefully, and it requires, above all else, that we don’t assess policies purely on whether they benefit people like us, but that we think about fairness across groups. It requires that we do unto others as we would have them do unto us, that we pass no policy that we would consider unfair if we were in all the possible subject positions of the policy. Democracy requires imagining that we are wrong.
[1] That sort of person often subscribes to the “just world model” or “just world hypothesis,” which is the assumption that we are all rewarded in this world for our efforts. If something bad happens to you, you deserved it. People who claim that is Scriptural will cherry-pick quotes from Proverbs, ignoring what Jesus said about rewards in this world, as well as various other important parts of Scripture (Ecclesiastes, Job, Paul).
[2] There is a meme circulating that Hitler was pro-abortion. His public stance was opposition to abortion at least through the thirties. Once the genocides were in full swing, Nazism supported abortion for “lesser races.”
When I teach about the Holocaust, one of the first questions students ask is: why didn’t the Jews leave? The answer is complicated, but one part isn’t: where would they go? Countries like the US had such restrictive immigration quotas for the parts of Europe from which the Jews were likely to come that we infamously turned back ships. And, so, students ask, why did we do that?
We did it because of that era’s version of the peanut argument.
The peanut argument (more recently presented with a candy brand name attached to it, but among neo-Nazis the analogy used is a bowl of peanuts) has been shared by many, including by members of our administration, as a mic-drop strong defense of a travel ban on people from regions and of religions considered dangerous because, as the analogy goes, would you eat from a bowl of peanuts if you knew that one was poisoned?
People who make that argument insist that they are not being racist, because their objection is, they say, not based in an irrational stereotype about this group. They say it is a rational reaction to what members of this group have really done. And, they say, for the same reason, that they are not being hypocritical: as descendants of immigrants, they are open to safe immigrant groups. These immigrants, unlike their forebears, have dangerous elements.
What they don’t know is that every ethnicity and religion that has come to America has had members that struck large numbers of existing citizens as dangerous—the peanut argument has always been around. And it’s exactly the argument that was used for sending Jews back to death. The tragedies of the US immigration policy during Nazi extermination were the consequence of the 1924 Immigration Act, a bill that set race-based immigration quotas grounded in arguments that this set of immigrants (at that point, Italians and eastern and central Europeans) was too fundamentally and dangerously antagonistic to American traditions and institutions to admit. Architects of that act (and defenders of maintaining the quotas, in the face of people escaping genocide) insisted that they weren’t opposed to immigration, just this set of immigrants.
At least since Letters from an American Farmer (first published in 1782), Americans have taken pride in being a nation of immigrants. And, since around the same time, large numbers of Americans who took pride in being descended from immigrants have stoked fear about this set of immigrants.
Arguments about whether Catholics were a threat to democracy raged throughout the nineteenth century, for instance. Samuel Morse (of the Morse code) wrote a tremendously popular book arguing that German and Irish Catholics were conspiring to overthrow American democracy, which appealed to popular notions about Catholics’ religion being essentially incompatible with democracy. Hostility towards the Japanese and Chinese (grounded in stereotypes that their political and religious beliefs necessarily made them dangerous citizens) resulted in laws prohibiting their naturalization, property ownership, and repatriation and, ultimately, their immigration (and, in the case of the Japanese, it led to race-based imprisonment). After the revolutions of 1848, and especially with the rise of violent political movements in the late nineteenth century (anarchism, Sinn Fein, various anti-colonial and independence movements), large numbers of politicians began to focus on the possibility that allowing this group would mean that we were allowing violent terrorists bent on overthrowing our government.
And that’s exactly what it did mean. Every one of those groups did have individuals who advocated violent change.
A large number of the defendants in the Haymarket Trial (concerning a fatal bomb-throwing incident at a rally of anarchists) were immigrants or children of immigrants; by the early 20th century, people arguing that this group had dangerous individuals could (and did) cite examples like Emma Goldman (a Jewish anarchist imprisoned for inciting to riot), Nicola Sacco and Bartolomeo Vanzetti (Italian anarchists executed for a murder committed during a robbery), Jacob Abrams and Charles Schenck (Jews convicted of sedition), and Leon Czolgosz (the son of Polish immigrants, who shot McKinley). Even an expert like Harry Laughlin, of the Eugenics Record Office, would testify that the more recent set of immigrants were genetically dangerous (they weren’t—his math was bad).
History has shown that the fear-mongers were wrong. While those groups did all have advocates of violence, and individuals who advocated or committed terrorism, the peanut analogy was fallacious, unjust, and unwise. Those groups also contributed to America, and they were not inherently or essentially un-American.
Looking back, we should have let the people on those ships disembark. Looking forward, we should do the same.
One of the most controversial claims I make about demagoguery is that it isn’t necessarily harmful. When I make that argument, it’s common for someone to disagree with me by pointing out that some specific instance of demagoguery is harmful. But that isn’t refuting my argument because I’m not arguing for a binary of demagoguery being always or never harmful. I’m saying that not every instance of demagoguery is necessarily harmful. Whether demagoguery is harmful depends, I think, on where it lies on multiple axes: how demagogic the text is; how powerful the media promoting the demagoguery is; how widespread that kind of demagoguery is.
(Yeah, yeah, I know, that means a 3d map, but I honestly think you need all three axes.)
And the best way to talk about the harmless demagoguery is to talk more about one of the first examples of a failed deliberative process that haunted me. One spring, when I was a child, my family went to Yosemite Valley in Yosemite National Park. My family mostly tried (and failed) to teach one another bridge, and I wandered around the emerald valley. Having grown up in semi-arid southern California, the forested walks seemed to me magical, and I was enchanted. One evening, my mother took me to a campfire, hosted by a ranger, who told the story of John Muir, a California environmentalist crucial in the preservation of Yosemite National Park. The last part of the ranger’s talk was about Muir’s final political endeavor, his unsuccessful attempt to prevent the damming and flooding of the Hetch Hetchy Valley, a valley the ranger said was as beautiful as the one by which I had been entranced. The ranger presented the story as a dramatic tragedy of Good (John Muir) versus Evil (the people who wanted to dam and flood the valley), with Evil winning and Muir dying of a broken heart. I was deeply moved, and fascinated. And years later, I would come back to the story when trying to think about whether and how people can argue together on issues with profound disagreement.
The ranger had told the story of Good versus Evil, but that isn’t quite right, in several ways. For one thing, it wasn’t a debate with only two sides (something I have since discovered to be true of most political issues). In this case, it is more accurate to say that there were three sides: the corrupt water company currently supplying San Francisco that wanted to prevent San Francisco getting any publicly-owned water supply; the progressive preservationists like John Muir, who wanted San Francisco to get an outside publicly-owned water supply, but not the Hetch Hetchy; and the progressive conservationists like Gifford Pinchot or Marsden Manson, who wanted an outside publicly-owned water supply that included the Hetch Hetchy.
And a little background on each of the major figures in this issue. Gifford Pinchot was head of the Forest Service, with close political ties to Theodore Roosevelt. Born in 1865, he was a strong advocate of conservation—that is, keeping large parts of land in public ownership, sustainable foresting practices, and what is called “multiple use.” The principle of conservation (as opposed to preservation) is that public lands should be available to as many different uses as possible, such as foresting, hunting, camping, and fishing. The consensus among scholars is that Pinchot’s support for the Hetch Hetchy dam was crucial to its success.
Marsden Manson was far less famous than Pinchot. Born in 1850, he was an engineer (trained at Berkeley) and a member of the Sierra Club who had camped in Yosemite, and, from 1897 till 1912, he was an engineer for the City of San Francisco, first serving on the San Francisco Drainage Committee, then in the Public Works Department, and finally as City Engineer. It was in that capacity that he wrote the pamphlet I’ll talk about in a bit. He was an avid conservationist.
John Muir is probably the most famous of the people heavily involved in the controversy, and still a hero among environmentalists. Born in 1838 in Scotland, he emigrated with his family to the United States, to Wisconsin, when he was around ten. He arrived in California in 1868, and promptly went to Yosemite Valley (which was not yet a national park). He stayed there for several years, writing about the Sierras, in what would become articles in popular magazines. His elegant descriptions of the beauties of the Sierra Nevada mountains were influential in persuading people to preserve the area, creating Yosemite National Park. He was the first President of the Sierra Club (formed in the early 1890s), which is still a powerful force in environmentalism. Muir was a preservationist, believing that some public lands should be preserved in as close to a wilderness state as possible.
Perhaps the most important character in the controversy is the Hetch Hetchy Valley. Part of the Yosemite National Park, it was less accessible than Yosemite Valley, and hence far less famous. Like many other valleys in the Sierra Nevada mountains, it was formed by glaciers. Two of its waterfalls are among the tallest waterfalls in North America.
The story the ranger told was one of right versus wrong, good versus evil, and, even though I disagree with the stance Pinchot and Manson took, and believe that the Hetch Hetchy Valley should not have been dammed (and I believe they used some pretty sleazy rhetorical and political tactics to make it happen), I don’t think they were bad people. I don’t think they were selfish or greedy, or even that they didn’t appreciate nature. I think they believed that what they were doing was right, and they had some good arguments and good reasons, and they felt justified in some troubling rhetorical means because they believed their ends were good. I don’t think they were Evil.
After all, San Francisco had long been victimized by a corrupt water company, the Spring Valley Company, with a demonstrated record of exploiting users (particularly during the aftermath of the 1906 earthquake). San Francisco had a legitimate need for a new water supply, and the argument that such public goods should not be subject to the profit motive is a sensible argument. The proponents of the dam argued that turning the valley into a reservoir would increase the public’s access to it, and the ability of the public to benefit. The dam, it was promised, would provide electric power that would be a public utility (that is, not privately owned), thereby benefiting the public directly. Thus, both the preservationists and conservationists were concerned about public good, but they proposed different ways of benefitting the public.
Although John Muir was President and one of the founders of the Sierra Club, not everyone in the organization was certain the dam was a mistake, and so the issue was put to a vote—the Sierra Club at that point had both conservationists and preservationists. Muir wrote the case against, a pamphlet called “The Hetch Hetchy Valley,” which, along with Manson’s argument, “Statements of San Francisco’s Side of the Hetch Hetchy Reservoir Matter,” was distributed to members of the Sierra Club, and they were asked to vote.
For Muir’s pamphlet, he reused much of an 1873 article about Hetch Hetchy, originally written to persuade people to visit the Sierras. He kept much (but not all) of his highly poetical description of the Hetch Hetchy Valley, especially its two falls. His argument throughout the pamphlet is that the valley is beautiful, unique and sacred; it isn’t until the end of the pamphlet that he added a section specifically written for the dam controversy, and in that part he resorted to demagoguery, painting his opponents as motivated by greed and an active desire to destroy beauty, in the same category as the Merchants in the Temple of Jerusalem and Satan in the Garden of Eden: “despoiling gainseekers, — mischief-makers of every degree from Satan to supervisors, lumbermen, cattlemen, farmers, etc., eagerly trying to make everything dollarable […] Thus long ago a lot of enterprising merchants made part of the Jerusalem temple into a place of business instead of a place of prayer, changing money, buying and selling cattle and sheep and doves. And earlier still, the Lord’s garden in Eden, and the first forest reservation, including only one tree, was spoiled.” Muir presented the conflict as “part of the universal battle between right and wrong,” and characterized his opponents’ arguments as “curiously like those of the devil devised for the destruction of the first garden — so much of the very best Eden fruit going to waste; so much of the best Tuolumne water.” Muir called his opponents “Temple destroyers, devotees of ravaging commercialism,” saying, they “seem to have a perfect contempt for Nature, and, instead of lifting their eyes to the mountains, lift them to dams and town skyscrapers.” And he ended the pamphlet with the rousing peroration:
Dam Hetch-Hetchy! As well dam for water-tanks the people’s cathedrals and churches, for no holier temple has ever been consecrated by the heart of man. (John Muir Sierra Club Bulletin, Vol. VI, No. 4, January, 1908)
Muir’s argument is demagoguery—he takes a complicated situation (with at least three different positions) and divides it into a binary of good versus evil people. The bad people don’t have arguments; they have bad motives.
But this, too, is a controversial claim on my part, and some people actually get really angry with me for “criticizing” Muir. The common response is that I shouldn’t criticize him because he was a good man and he was fighting for a good cause. In other words, the world is divided into good and bad people, and we shouldn’t criticize good people on our side. And I reject every part of that argument. I think we should criticize people on our side, especially if we agree with their ends (and especially if we’re looking critically at an argument in the past) because that’s how we learn to make better arguments. And I’m not even criticizing Muir in the sense those people mean—they mean I’m saying negative things about him, and that I believe he should have done things differently. The assumption is that demagoguery is bad, so by saying he engaged in demagoguery I’m saying he was a bad person.
Like Muir’s argument, that presumes a binary (or even continuum) between good and bad people. Whether there really is such a binary I don’t know, but I’m certain that it isn’t relevant. The debate wasn’t split into good and bad people, and we don’t have to make our heroes untouchable.
And, besides, I’m not criticizing Muir in the sense of saying he did the wrong thing. I’m not sure he did. His demagoguery did no particular harm. While his text (especially the last part) is demagoguery, and he was a powerful rhetor at the time, the kind of demagoguery in which he was engaged (against conservationists) wasn’t very widespread, so he wasn’t contributing to a broad cultural demonizing of some group. And I’m not even sure that his demagoguery did any harm (or benefit) to the effectiveness of his argument.
Muir was trying to get the majority of people in the Sierra Club—perhaps even all of them—to condemn the Hetch Hetchy scheme on preservationist grounds, so he already had the votes of preservationists like himself. What he had to do rhetorically is to move conservationists (or, at least, people drawn to that position) over to the preservationist side, at least in regard to the Hetch Hetchy Valley.
A useful step in an argument is identifying what, exactly, is the issue (or are the issues): why are we disagreeing? Called the “stasis” in classical rhetorical theory, the “hinge” of an argument points to the paradox that a productive disagreement requires agreement on several points—including on the geography of the argument: what is at the center, how broad an area can/should the argument cover, what areas are out of bounds? The stasis is the main issue in the argument, and arguments often go wrong because people disagree about what it is. In the case of the Hetch Hetchy, an ideal argument about the topic would be about whether damming and flooding that valley was the best long-term option for everyone who uses the valley—such a debate would require that people talk honestly and accurately about the actual costs, the various options, and as usefully as possible about the benefits (of all sorts) to be had from preserving the valley for camping (this is a big issue in California, in which camping is very popular).
It’s conventional in rhetoric to say that you have to argue from your opposition’s premises to persuade your opposition, and that would have necessitated Muir arguing on the premises that informed conservation.
Muir’s rhetorical options included:
condemning conservationism in the abstract, and trying to persuade his conservationist audience to abandon an important value;
arguing that conservationism is not a useful value in this particular case, and that this is a time when preservationism is a better route;
arguing that damming and flooding the valley does not really enact conservationist values (e.g., it’s actually expensive).
But, to pursue any of those strategies effectively, he’d have to make the case on the conservationist premise that it’s appropriate to think about natural resources in terms of costs and benefits. And Muir’s stance about nature—his whole career—was grounded in the perception that such a way of looking at nature is unethical.
Muir paraphrases (in quotes) the conservationist mantra: “Utilization of beneficent natural resources, that man and beast may be fed and the dear Nation grow great.” While I’ve never found any conservationist text that has that precise wording, it’s a fair representation of the basic principle of conservation; i.e., “greatest good for the greatest number.” And, certainly, conservationists did (and do) believe that there is no point in preserving any wilderness areas—all forests should be harvested, all lakes should be used, all areas should be open to hunting. But they didn’t hold this view out of a desire for financial gain so much as out of a different (and I would say wrong-headed) perception of how to define “the public.”
The conservationist argument in this case was pretty much made in bad faith, in that they claimed that they would improve the beauty of the valley by making it a lake. Muir argued they would destroy it. I agree with Muir, as it happens, and so my argument is not that Muir was factually wrong; the valley was destroyed by the damming. I also think some of the dam proponents—specifically Manson—knew that it would be destroyed, and Manson was lying when he described a road, increased camping, and other features that, as an engineer, he must have known were impossible. But many of the people drawn to the conservationist plan didn’t know that Manson was describing technologically impossible conditions, and they believed the proponents’ argument that the resulting reservoir would not only benefit San Franciscans (by providing safe cheap water and electric power) but would have no impact on camping; it would, the conservationists claimed, increase the accessibility of the area without interfering with the beauty of the valley at all. Again, that isn’t true, but it’s what people believed. And part of Aristotle’s point about rhetoric, and its reliance on the enthymeme, is that rhetoric begins with what people believe.
Manson’s response was fairly straightforward, and grounded, he insisted repeatedly, on facts. He argued:
San Francisco owned the valley floor.
Construction would not begin on the Hetch Hetchy dam until and unless San Francisco first developed Lake Eleanor (a water source not disputed by the preservationists) and then found that water source inadequate.
A photo he presented showed what the valley would look like when dammed and flooded—very little of the valley flooded, with no obstruction of the falls that Muir praised so heavily, and a road around the edge enabling visitors to see more of the valley—so, he said, the valley would be more beautiful, reflecting the magnificent granite walls.
Keeping the reservoir water pure would not inhibit camping in any way.
The Hetch Hetchy plan was the least expensive option, and it would provide energy, thereby breaking the current energy monopoly.
Muir’s arguments, he says, “are not in reality based upon true and correct facts” (435).
Marsden Manson was City Engineer for San Francisco, and had done thorough reports on the issue. And so he had to know that almost all of what he was saying was “not in reality based upon true and correct facts.” San Francisco had bought the land, but, since it was within a national park, the seller had no right to sell it. Construction would begin immediately on the dam, flooding the entire valley, making the entire valley inaccessible, including the famous falls. It was not possible to build the roads that Manson drew on the photo and, being an engineer, he must have known that. The reservoir inhibited camping, and, most important, the Hetch Hetchy plan was the most expensive option available to San Francisco. Manson had muddled the numbers to make it appear less expensive.
In other words, either Manson lied, or he was muddled, uninformed, bad at arithmetic, and not a very good engineer.
Manson’s motives in all this are complicated, and ultimately irrelevant. He may have expected to benefit personally from the approval of the dam project, since he may have thought he would build it. But it would have been a benefit of glory, not money; I’ve never read anything to suggest that he was motivated by anything other than a sense that dominating nature is glorious, and that public projects providing water and power are better than preserving valleys. (He is reputed to have suggested damming and flooding Yosemite Valley.)
In other words, what presented itself as the pragmatic option was just as ideologically driven as what was rejected as the emotional one (I think the same thing happens now with arguments about the death penalty, welfare “reform,” the war on drugs, foreign policy, the deficit—there is a side that manages to be taken as more practical, but it might actually be the most ideologically driven).
Muir’s rhetorical options were limited by his opponent, an engineer, making claims about engineering issues that neither Muir nor his supporters had the expertise to refute. It took years for someone to look at the San Francisco reports and determine that the numbers were bad; preservationists didn’t know (and, presumably, many supporters of the dam didn’t know) that the numbers were misleading and that the Hetch Hetchy plan was the most expensive option.
But would Muir have argued on such grounds anyway? To argue on the grounds of cost would have confirmed the Major Premise that public projects should be determined by cost; to say that the Hetch Hetchy dam should not be built because it was the most expensive option would seem to confirm the perception that you can make natural cathedrals “dollarable,” in Muir’s words. In other words, Muir rejected the very terms by which the conservationist argument was made; he rejected its premises. To argue from premises one rejects (except in rare circumstances) seems to confirm them, and so, to win the Hetch Hetchy argument on those grounds, he would have had to argue against what he had spent a lifetime arguing for: that we should not look at nature in terms of money. Wilderness areas are, he insisted, sacred. And so he railed against his opposition.
As I mentioned above, I’m often attacked by people who think I’m attacking Muir. And I think that misunderstanding arises because of a particular perception of what the discipline of rhetoric is for: rhetorical analysis is often seen as implicitly normative; we do an analysis to say what a person should do or should have done. So, to say that Muir’s rhetorical strategies didn’t work is to say his rhetoric was bad and should have been different. Couple that with the notion that good people promote good things, and to say that Muir’s rhetoric was “demagoguery” is to say he cannot have been a good person. There is, here, a theory of identity: that people are either good or bad; that good people say good things, and bad people say bad things; that demagoguery is something only bad people do. That whole model of discourse and identity is wrong in too many ways to count, and I am not endorsing it.
I think Muir was a good man (he is a personal hero of mine), but that doesn’t mean he was perfect, and it certainly doesn’t mean we can’t learn from him. Muir did well within the Sierra Club (the Sierra Club vote was about 80% on Muir’s side and 20% in favor of the dam), but he ultimately lost the argument. And I think what we learn from his failure to persuade all conservationists to vote against the Hetch Hetchy project is not about Muir’s personal qualities or failings, but about rhetorical constraints and models of persuasion.
I’m arguing that, for Muir to have persuaded his opposition, he would have had to rely on premises that he rejected. This is sometimes called the “sincerity problem” in rhetoric: to what extent, and under what circumstances, should we make arguments we don’t believe in order to achieve an end in which we do believe? Muir didn’t argue from insincere premises; that may have weakened his effectiveness in the moment, but it definitely strengthened his effectiveness in the long run. His Hetch Hetchy pamphlet continues to be powerfully motivating for people, perhaps more motivating than it would have been had he compromised his rhetoric in order to be effective in the short term. Muir’s demagoguery did no harm, and it may even have done some good. Demagoguery isn’t necessarily harmful.
On Wednesday, I sent off the scholarly version of the demagoguery argument. It isn’t the book I once planned (that would have involved a deeply theoretical argument about identity and the digital world), but it’s the one I really wanted to write, and one that will (I think) reach more people than that other one would have.
And it’s the last scholarly book I’ll write. I intend to spend the rest of my career trying to solve the interesting intellectual problem of making scholarly concepts and debates more accessible to non-academics. But that isn’t because I think highly specialized academic writing is, in any way, a bad thing.
I have no problem with highly theoretical and very specialized books. My books have barely grazed the 1,000-sales mark, and that’s pretty good for a scholarly book. People have told me that something I’ve written has had an impact on their scholarship, pedagogy, or program administration, so I’m really happy with my record as a scholar.
And I’m happy with the record of people who have sold both more and fewer copies, because measuring impact is so very difficult. Publishing a book with an academic press is an extraordinary achievement, and measuring the impact of such books accurately is nigh impossible: a really powerful book is shared in pirated pdfs, checked out of libraries, passed from one person to another. Sales and impact are orthogonal in academia.
If you study the history of ideas even a little, you have to know that what seemed major in the moment was sometimes just a trend (like mullets) and sometimes a sea change (like the synthesizer). No one reads Northrop Frye anymore, although he was a big deal at one moment; Hannah Arendt, who was a big deal around the same time, is still in the conversation. And there are all those people who weren’t big deals in their era, but later came to have tremendous impact, such as Mikhail Bakhtin.
Some trade books on scholarly issues have had extraordinary sales (such as Mortimer Adler’s writings), but it’s hard to know what impact they had. Madison Grant’s racist book The Passing of the Great Race had poor sales, but appears to have had a lot of impact. And there are lots of trade books that have come and gone without leaving any impact, so there’s no good reason to conclude that trade books necessarily have more impact than scholarly ones. I don’t think there are a lot of (any?) necessary conclusions one can draw about whether trade or scholarly books have more impact, are more or less important, or represent more or less valuable intellectual activity.
I have always loved Kenneth Burke’s analogy of the parlor for what it means to be interested in major questions. You show up at a party, he says, and it’s been going on for a while, and you find some conversation that seems interesting. You listen for a while, and then you take a side or point out something new. You get attacked and defended, and some people leave the conversation, and others join, and eventually you too leave. And it goes on, with other people taking sides that may or may not have to do with what you were arguing.
What Burke fails to mention is that, if it’s a good party, there are a lot of conversations going on. You might choose to leave that particular conversation, but not leave the party.
I have loved writing scholarly pieces (although I didn’t initially think I would), and my work has placed me in some particular conversations. I’ve moved from one conversation to another, but all on the side of the parlor engaged in very scholarly arguments. I’d like to leave that side of the parlor, not because it’s a bad one—it’s a wonderful one—but because it’s a party with a lot of conversations going on. I’d like to mingle.
I think a lot of discussions of the public intellectual rest on odd binaries: that either specialized or public scholarship is good, for instance. Scholarship that speaks with authority to a small group is neither better nor worse than scholarship that reaches a broad audience; it’s just a different conversation in Burke’s parlor. And I’m going to wander over there for a bit.
When James Hodgkinson engaged in both eliminationist and terroristic violence against Republicans, factionalized media outlets blamed his radicalization on their outgroup (“liberals”). In 2008, when James Adkisson committed eliminationist and terroristic violence against liberals, actually citing in his manifesto things said by “conservative” talk show hosts (namechecking some of the very ones who would later blame liberals for Hodgkinson), those media outlets and pundits neither acknowledged responsibility nor altered their rhetoric.[1]
That’s fairly typical of rabidly factional media: if the violence is on the part of someone who can be characterized as them (the outgroup), then outgroup rhetoric obviously and necessarily led to that violence. That individual can be taken as typical of them. If, however, the assailant was ingroup, then factionalized media either simply claimed that the person was outgroup (as when various media tried to claim that a neo-Nazi was a socialist and therefore lefty), or they insisted this person be treated as an exception.
That’s how ingroup/outgroup thinking works. The example I always use with my classes is what happens if you get cut off by a car with bumper stickers on a particularly nasty highway in Austin (you can’t drive it without getting cut off by someone). If the bumper stickers show ingroup membership, you might think to yourself that the driver didn’t see you, or was in a rush, or is new to driving. If the bumper stickers show outgroup membership, you’ll think, “Typical.” Bad behavior is proof of the essentially bad nature of the outgroup, but bad behavior on the part of ingroup members is not. That’s how factionalized media works.
So, it’s the same thing with ingroup/outgroup violence and factionalized media (and not all media is factionalized). For highly factionalized right-wing media, Hodgkinson’s actions were caused by and the responsibility of “liberal” rhetoric, but Adkisson’s were not the responsibility of “conservative” rhetoric. For highly factionalized lefty media, it was reversed.
That factionalizing of responsibility is an unhappy characteristic of our public discourse; it’s part of our culture of demagoguery, in which the same actions are praised or condemned not on the basis of the actions, but on whether it’s the ingroup or outgroup that does them. If a white male conservative Christian commits an act of terrorism, the conservative media won’t call it terrorism, won’t mention his religion or politics, and will generally talk about mental illness; if someone even nominally Muslim commits the same act, they call it terrorism and blame Islam. In some media enclaves, the narrative is flipped, and only conservatives are acting on political beliefs. And all factional media outlets will condemn the other side for “politicizing” the incident.
While I agree that violent rhetoric makes violence more likely, the cause and effect is complicated, and the current calls for a more civil tone in our public discourse are precisely the wrong solution. We are in a situation in which public discourse is entirely oriented toward strengthening our ingroup loyalty and our loathing of the outgroup. And that is why there is so much violence now. It isn’t because of tone. It isn’t because of how people are arguing; it’s because of what people are arguing.
To make our world less violent, we need to make different kinds of arguments, not make those arguments in different ways.
Our world is so factionalized that I can’t even make this argument with a real-world example, so I’ll make it with a hypothetical one. Imagine that we are in a world in which some media insist that all of our problems are caused by squirrels. Let’s call them the Anti-Squirrel Propaganda Machine (ASPM). They persistently connect the threat of squirrels to end-times prophecies in religious texts, and they relentlessly connect squirrels to every bad thing that happens. Any time a squirrel (or anything that kind of looks like a squirrel to some people, like chipmunks) does something harmful, it’s reported in these media; any good action is met with silence. These media never report any time that an anti-squirrel person does anything bad. They declare that the squirrels are engaged in a war on every aspect of their group’s identity. They regularly talk about the squirrels’ war on THIS! and THAT! Trivial incidents (some of which never happened) are piled up so that consumers of that media have the vague impression of being relentlessly victimized by a mass conspiracy of squirrels.
Any anti-squirrel political figure is praised; every political or cultural figure who criticizes the attack on squirrels is characterized as pro-squirrel. After a while, even simply refusing to say that squirrels are the most evil thing in the world and that we must engage in the most extreme policies to cleanse ourselves of them shows that you are really a pro-squirrel person. So, in these media, there is anti-squirrel (which means the group that endorses the most extreme policies) and pro-squirrel. This situation isn’t just ingroup versus outgroup, because the ingroup must be fanatically ingroup, and so the ingroup rhetoric demands constant performance of fanatical commitment to ingroup policy agendas and political candidates.
If you firmly believe that squirrels are evil (and that chipmunks are probably part of it too), but you doubt whether the policy being promoted by the ASPM is really the most effective one, you will get demonized as someone trying to slow things down, not sufficiently loyal, and basically pro-squirrel. Even trying to question whether the most extreme measures are reasonable gets you marked as pro-squirrel. Trying to engage in policy deliberation makes you pro-squirrel.
We cannot have a reasonable argument about what policy we should adopt in regard to squirrels, because even asking for an argument about policy means that you are pro-squirrel. That is profoundly anti-democratic. It is un-American insofar as the very principles of how the Constitution is supposed to work show a valuing of disagreement and difference of opinion.
(It’s also easy to show that it’s a disaster, but that’s a different post.)
ASPM media will, in addition, insist on the victimization narrative, and also on the “massive conspiracy against us” argument, but those narratives aren’t, by themselves, all that motivating. As George Orwell noted in 1984, hatred is more motivating when it’s directed against an individual, and so these narratives end up fixating on a scapegoat. (Right now, for the right it’s George Soros, and for the left it’s Trump.) There can be institutional scapegoats, too: Adkisson tried to kill everyone in a Unitarian church because he’d believed demagoguery that said Unitarianism is evil.
Inevitably, the more that someone lives in an informational world in which they are presented as engaged in a war of extermination against us, the more that person will feel justified in using violence against them. If it’s someone who typically uses violence to settle disagreements, and there is easy access to weapons, it will end in violence against whatever institution, group, or individual that person has been persuaded is the evil incubus behind all of our problems.
At this point, I’m sure most readers are thinking that my squirrel example was unnecessarily coy, and that it’s painfully clear that I’m not talking about some hypothetical example about squirrels but the very real examples of the antebellum argument for slavery and the Stalinist defenses of mass killings of kulaks, most of the military officer class, and people who got on the wrong side of someone slightly more powerful.
And, yes, I am.
The extraordinary level of violence used to protect slavery as an institution (or that Stalin used, or Pol Pot, or various other authoritarians) was made to seem ordinary through rhetoric. People were persuaded that violence was not only justified but necessary, and so this is a question of rhetoric: how people were persuaded. But notice that none of these defenses of violence have to do with tone. James Henry Hammond, who managed to enact the “gag rule” (which prohibited criticism of slavery in Congress), didn’t have a different “tone” from John Quincy Adams, who resisted slavery. They had different arguments.
Demagoguery—rhetoric that says that all questions should be reduced to us (good) versus them (evil)—if given time, necessarily ends up in purifying this community of them. How else could it end? And it doesn’t end there because of the tone of dominant rhetoric. It ends there because of the logic of the argument. If they are at war with us, and trying to exterminate us, then we shouldn’t reason with them.
It isn’t a tone problem. It’s an argument problem. It doesn’t matter if the argument for exterminating the outgroup is made with compliments toward them (L. Frank Baum’s arguments for exterminating Native Americans), bad numbers and the stance of a scientist (Harry Laughlin’s arguments for racist immigration quotas), or religious bigotry masked as rational argument (Samuel Huntington’s appalling argument that Mexicans don’t get democracy).
In fact, the most effective calls for violence allow the caller plausible deniability: “Will no one rid me of this turbulent priest?”
Lots of rhetors call for violence in a way that enables them to claim they weren’t literally calling for violence, and I think the question of whether they really mean to call for violence isn’t interesting. People who rise to power are often really good at compartmentalizing their own intentions, or saying things when they have no particular intention other than garnering attention, deflecting criticism, or saying something clever. Sociopaths are very skilled at perfectly authentically saying something they cannot remember having said the next day. Major public figures get a limited number of “that wasn’t my intention” cards for the same kind of rhetoric—after that, it’s the consequences and not the intentions that matter.
What matters is that whether it’s individual or group violence, the people engaged in it feel justified, not because of tone, but because they have been living in a world in which every argument says that they are responsible for all our problems, that we are on the edge of extermination, that they are completely evil, and therefore any compromise with them is evil, that disagreement weakens a community, and that we would be a better and stronger group were we to purify ourselves of them.
It’s about the argument, not the tone.
[A note about the image at the beginning: this is one of the stained glass windows in a major church in Brussels celebrating the massacre of Jews. The entire incident was enabled by deliberately inflammatory us/them rhetoric, but was celebrated until the 1960s as a wonderful event.]
[1] For more on Adkisson’s rhetoric, and its sources, see Neiwert’s Eliminationists (https://www.amazon.com/Eliminationists-Hate-Radicalized-American-Right/dp/0981576982)
For more about demagoguery: https://theexperimentpublishing.com/catalogs/fall-2017/demagoguery-and-democracy/