A sociologist friend and I were talking about how deeply entrenched it is for people to think in terms of in- and out-groups (Us v. Them), and he joked that the only thing that could unite humanity was an attack from outer space. And there’s something to that—in rhetoric, it’s sometimes called “unification through a common enemy.” The rhetoric scholar Kenneth Burke pointed out, in a 1939 article (“The Rhetoric of Hitler’s ‘Battle’”), that this was one of Hitler’s strategies for uniting Germans. It’s how a lot of families function—everyone is mad at each other until they can agree on how much they hate Aunt Agnes. I’ve seen fractious departments unify against an upper administrator. Churchill unified a deeply divided country when its existence was threatened by Nazism—his speeches continually spoke to a common, shared identity and a common effort (FDR was much the same).
Those four examples show that the impulses that cause us to unite in our shared division can range from the trivial (the family dislikes the aunt; the department dislikes the dean) to the somewhat more important (the aunt is trying to defraud the family; the dean is trying to defund the department) to the existential (as in the case of the UK, whose very existence was threatened). But what of the missing fourth example—Germany?
Germany is a strange case, because many Germans felt deeply threatened by various things: a world economic collapse that threatened large numbers of people with poverty and unemployment; the secularization of education; losses of privilege; modernization of various kinds. And their sense of group esteem was threatened by the disastrous outcome of the Great War. But their existence wasn’t threatened; their prestige as a nation was, but not their existence as a nation.
But they became persuaded it was. The irony, of course, was that this belief in existential threat was a self-fulfilling prophecy. Germans, persuaded that the Reichstag Fire demonstrated an existential threat, handed dictatorial power to a leader and party that would, actually, lead to the destruction of Germany as a nation, and the deaths of between five and eight million Germans (with about 500,000 killed as part of the racial and political purification programs).
Athens, in the fifth century BCE, was facing an existential threat in the form of Sparta. Instead of uniting as a city-state to fight that threat, Athenians were more concerned with the existential threat to their faction, with the possibility that the other faction might exterminate them, and so focused their energy on exterminating the other faction. And they lost the war to Sparta.
What I’m saying is that the existential threat doesn’t have to be real for it to be really effective at unifying, and having a real existential threat doesn’t necessarily unify. What makes the difference is the rhetoric of the leadership.
Churchill and FDR responded to existential threat with a rhetoric that tried to unify the entire country, even the political parties that had recently been their worst critics. Both had opposition members in their cabinets. Both listened to people who disagreed (Kershaw’s Fateful Choices elegantly describes their decision-making processes and how much they relied on thoughtful attention to the opposition). FDR and Churchill used the existential threat to transcend factionalism. Hitler and the demagogues of Athens manufactured or used the existential threat in order to amplify the factionalism, to equate opposition groups and critics with the external threat, and thereby enable elimination of fellow citizens. Instead of trying to unify a people, their goal was purification through extermination of the opposition.
In a way, COVID-19 is the external threat my sociologist friend joked about. It could be the moment of unification, a moment when we transcend factional disagreement in order to unify against this disease. It could be that moment if political leadership decides to make it one.
Promoting unity is hard, and nobody does it perfectly, but some do it better than others. FDR allowed a rhetoric of internal purification to lead to the mass race-based imprisonment of Japanese Americans, and Churchill treated India as only sort of unified with the UK (enemy enough to be left to mass starvation). But they were better than Hitler or the Athenian demagogues, and they resisted even more extreme forms of internal purification.
We’re in a culture of demagoguery, in which every issue is treated not just as us v. them, but as a zero-sum war of existential threat between us and them. Someone saying “Happy Holidays” threatens Christians with extermination because it’s part of the “war on Christmas.” Requiring vaccines is a war on liberty. Trying to reduce poverty is a war. Treating every issue as a war means treating people who disagree with our policy agenda as traitors. That’s a bad idea.
We do have a common enemy in the form of COVID-19; we need a leadership that enables us to transcend our differences to work together. The last thing we need is a leader who exacerbates internal animosity, who openly tries to exterminate dissent, who has a fragile ego that has to be stroked, who refuses to listen to anyone who disagrees, and who is now openly toying with exterminating democracy itself. We need someone even better than FDR, not someone even worse than Cleon.
Bad math, belief, and half Nazis
The above are two very popular tweets (as you can see from the likes), and they rely on a popular way of thinking about political choices. The argument is that you shouldn’t vote for this person because s/he still falls into a category of evil people.
You see it all over the political spectrum (we need to stop talking about either a binary or single-line continuum of political positions—it’s false and damaging, and it fuels demagoguery). In 2016, there were informational enclaves that said people should vote against HRC because she was a socialist (and therefore no different from Stalin), a fascist (no different from Hitler), or a neoliberal (no different from Thatcher).
It’s a way of arguing that eats its own premises, and yet it’s so often persuasive. For instance, the argument that you shouldn’t vote for Biden because he’s half the nazi that Trump is has, as its major premise, that you should never choose the thing that is twice as good.
Of course you should choose the thing that is twice as good. You should buy the car that is twice as good, rent the apartment that is twice as good, take the job that is twice as good. When we’re deciding about a car, apartment, or job, we can do that math, but, when it comes to politics, suddenly people can’t see that half a fascist is twice as good as a full fascist, let alone whether Biden is half a fascist.
So why do people who can take an imperfect apartment that is twice as good as their other option nevertheless, when it comes to politics, reject an option that is twice as good as the alternative?
There are a lot of reasons. Here, I want to mention two. First, politics is tied up with identity in a way that getting an apartment usually isn’t (although people I’ve known whose apartment is closely attached to their identity fall into the same bad math—an apartment twice as good as the other is treated as just as bad as the other); second, people who reason deductively often have false narratives about the past, or don’t care about what has happened. A politics of purity is often connected to a belief in belief.
The first move in that argument is to put everyone who disagrees with us into the Other category. There are good arguments that Trump is fairly high on the fascism scale (although with some important caveats, particularly about individualism), but Biden is not a fascist. He’s a third-way neoliberal. But, really, when people are making this kind of argument—HRC is basically Stalin, Sanders is Castro, HRC is Trump—they aren’t putting the argument forward as some kind of invitation to a nuanced discussion about political ideologies. It’s a hyperbolic appeal to purity politics.
Like all hyperbole, the main function of the claim is that it is a performance of in-group fanatical commitment, a demonstration of loyalty on the part of the speaker. The point is to demonstrate that they think in terms of us or them, and they are purely opposed to them.
That seems like a responsible political posture because, in cultures of demagoguery, there are a lot of people (who are bad at math) who decide that being purely committed to the in-group is the right course of action, regardless of whether that has ever worked in the past. They believe that we can succeed if we purely commit to a pure in-group set of pure policies. That way of thinking about politics—that the way to win in politics is to refuse to compromise—is all over the political spectrum.
And, I just want to emphasize: the math is bad. A half-nazi is actually better than a full nazi. A leader who would have done half what Hitler did would have been better than Hitler. Unless you are thinking in terms of purity, and so you don’t actually care about how many people are killed, in which case you’ve fallen into what George Orwell, the democratic socialist, called the fallacy of saying that half a loaf is the same as nothing at all. If you’re hungry, half a loaf is still half a loaf.
A friend once compared it to the trolley problem, in which a person refuses to pull the lever that involves being a participant in an action they really dislike in order to prevent a much worse outcome. I’m not a big fan of the trolley problem as an actual test of ethical judgment, but I think the metaphor is good—it’s a question of whether a person who refused to act (pull a lever that would cause one person to die rather than five) feels that this failure to act is more ethical than acting. When I talk to people who are in this kind of ethical dilemma, it’s clear that they are balking at that moment of their grabbing the lever—they want the trolley to shift tracks; they don’t want Trump to get reelected; they just don’t want to pull the lever.
That was complicated, but all I’m saying is that it’s a question of whether people recognize sins of omission. They don’t object to Biden getting elected; they object to voting for him.
So, how has that worked out in the past? I can’t think of a time when refusing to vote because one candidate was merely half as bad as the other has led to a better political situation (but I’m open to persuasion on this), but I can think of a lot of times when it hasn’t. I’ll mention one. It happens to be a time when people could vote for half-nazis, and liberals tried to persuade voters to do exactly that.
It’s important to remember that the Weimar Communists could have prevented Hitler from coming to power by being willing to form a coalition government, but they wouldn’t because, they said, every other political party (including the democratic socialists) was, basically, fascist.
I’m not saying that compromising principles is always a good choice; a lot of people made the mistake of thinking that they could work with Hitler, that they should stay in his administration (or on his military staff) so that they could try to control him or, at least, direct him toward better actions. They couldn’t. Within a couple of years of his being installed as Chancellor, all the people in his administration who were going to try to moderate him were either fired or radicalized. It took longer with the military, and in that case the people who tried to control him were fired, strategically complacent, or radicalized. But it was the same outcome. There was no working with Hitler—there was only working for him.
If we want to prevent another Hitler, then we have to vote against him.
Time management for associate professors
I posted something about time management for graduate students and assistant professors, and so now I should write something about associate professors, and that means writing about imposter syndrome.
The presumption, not always true, is that associate professors are oriented toward promotion to full. The advice I’m giving here is oriented toward finding a manageable and sustainable career, whether the goal is to get promoted or to remain at the associate level.
My crank theory is that people who developed a sustainable set of work practices (that is, ones not driven by panic or binge writing) as graduate students or assistant professors just need to keep doing what they were doing once they get tenure. They’ll face many of the same decisions—whether to take on a leadership position in the department, college, or discipline, what the next set of scholarly projects should be, how many new courses to develop—but, if they negotiated those shoals well as an assistant professor, things should be okay.
There is a lot of shaming rhetoric about people who remain at the level of associate professor, and that shaming makes me ragey. An awful lot of departments (not my current one, btw—the full profs have heavy service responsibilities) enable full professors to focus on scholarship because the whole department is functioning on the backs of those “stalled” associate professors. There are lots of reasons that people lose the thread of their scholarly life, many of which I’m not talking about here (ranging from bad, such as a family health crisis, to good, such as deciding that promotion isn’t desirable), but one of them is that there are some very toxic narratives about writing and scholarly productivity.
A lot of people say our world is oriented toward extraverts, but it really isn’t; it’s oriented toward narcissists. A lot of narcissists flame out in grad school; a lot flame out as assistant professors. But, in my experience, narcissists who make it to associate make it to full.
So, this leaves us with non-narcissists, and the question of why so many really good and smart people, who have produced enough good writing to get where they are, have trouble producing enough to get any further. One common explanation is imposter syndrome, but I don’t think that’s the problem; I think the problem is how people try to get past it.
Every reasonably accomplished person I have met has imposter syndrome—feeling that they have gotten more rewards and praise than their work actually merits, that they only got where they were through luck. The only people I have ever met who don’t have imposter syndrome are narcissistic fucks. So, there is no “getting over” imposter syndrome. In fact, we are always pretending to be more sure than we are; we fling ourselves into new projects when we don’t know what we’re doing; we make claims we aren’t entirely sure are accurate; we decide we can make a contribution to a field even when we haven’t actually read everything in that field. And people who succeed haven’t done so entirely on merit—only narcissists think that; hard work is necessary but not sufficient for success. People with imposter syndrome are honest about the intellectual precarity of our work; narcissists don’t know they’re imposters, but they are. They don’t know they’re imposters because narcissists can never really look at themselves from the position of a reasonably skeptical group of people who know things they don’t; they dismiss those people as fools. People with imposter syndrome know there is that group, although we don’t always know who they are.
One way that people manage imposter syndrome is through perfectionism. Some people spend years working on things that they refuse to submit for publication until the work is perfect—that is, beyond criticism—and so they never submit it. That way, no one can expose them as an imposter. Or they don’t write at all, and just imagine the perfect thing they would write if they weren’t so swamped by the obligations they keep taking on.
Another way that people manage imposter syndrome (and fear of failure, and various other related issues) is by letting panic take the wheel. People who have succeeded in writing through high school, college, and coursework often have a truncated writing process: they are faced with an assignment, and they first decide on their argument, and then they decide on the organization for that argument, and then they write it out. (A lot of writing teachers think they’re teaching “the writing process” by teaching this linear method. They aren’t.) If you’re not a narcissist, and you’re trying to follow the “process” you’ve been taught, then, when you sit down to write, you’re trying to write, critique, and revise all at the same time.
And that’s how you get a writing block.
One of my crank theories is that some people have gotten to associate professor through generating enough sheer panic to make it past the crunch points. But that doesn’t mean the solution for either associate professors or people who want to mentor them is to panic them. (I’ve had full professors tell me that the reason that associates can’t publish is that they aren’t panicked enough—a sweet example of how Strict Father Morality is a pond into which supposedly lefty academics dip their toes from time to time). People who let panic take the wheel seem to think that people should spend their entire career in a panic in order to produce enough.
A lot of “stalled” associate professors are people who have been given that advice, and told that narrative, and have said, “Fuck that shit.”
And so they should. So should we all. It makes sense to reject a toxic narrative about productivity.
If you’ve never developed a long-term sustainable work practice—if your only method of motivating yourself to write is to be in a white-hot panic about your situation (and it appears that the only other method is to be an asshole narcissist)—then the decision to remain a permanent associate professor seems not only sensible, but compassionate to the people in your life.
The problem isn’t that associate professors are insufficiently panicked—the problem is that far too many people promote a writing process dependent on panic and valorize a toxic narrative about success.
Once you get tenure, you get committee assignments. The challenges look different from those of being an assistant professor, but they really aren’t—you still have to figure out what scholarly projects to pursue, what committee assignments to take, what new classes to develop. The difference is one I have a hard time describing. Despite academics’ reputation for being lefty, far too many academics (including several department chairs I’ve known) have thoroughly embraced the neoliberal narrative of what it means to be a good worker—you throw yourself on the pyre of your own career to meet your institution’s standards of “good work.” You live and breathe in a world of panic, 60-hour work weeks, and self-congratulation for having no boundaries around work.
There is another option. It’s about creating a sustainable relationship to work.
And the first step in that creation of a sustainable relationship to work is stepping away from a writing process that relies on panic. A responsible graduate program would ensure that first step happens in graduate school, but we aren’t in that world (although there are many graduate advisors who are trying to do exactly that).
The best way to respond to imposter syndrome is to stop approaching every step in the writing and publication process as the moment we might be exposed as imposters; instead, we can be comfortable writing shitty stuff, submit things that someone might slam, and know that we will never reach a point in our careers when no one is telling us that what we wrote is shitty. And they may be right. So?
That response involves a lot of possible moves. Most of them involve abandoning the idea that each publication risks everything, and working instead because you want the outcomes the work will get, because you’re interested in the crafting of the work, because you want others to know about the insights you have. It also involves breaking the writing process into at least three different kinds of work that don’t happen all at once—creating, critiquing, revising. It involves walking away from perfectionism. It involves rejecting (and getting help rejecting) toxic narratives about how much we should be working; it involves finding allies and mentors. It doesn’t necessitate giving up on scholarship, although that might be a viable and joyful choice (some people decide they really love administration, for instance), and it certainly doesn’t necessitate living life in a state of panic.
Time management for assistant professors
In an earlier post, about time management for graduate students, I mentioned that there is a limit to how much a person can write in a day. I also think that a lot of people get burned out working day after day on the same topic, and, if they don’t get burned out, they lose their ability to think critically about what they’re writing. Some people manage that second problem by working on multiple projects at the same time: when they just can’t work on one any longer, they switch to another for the next three weeks or so, and then come back. I can’t do that.
In many fields, a graduate student teaches one class (perhaps two), is on very few committees, and has one or two major scholarly obligations (finishing the dissertation and trying to get something published). The kinds of classes that graduate students teach often have fairly established syllabi (or, at least, course requirements).
There’s a post here where I talk some about the challenges. The time management challenges for assistant professors are, I think (and I was an assistant professor for a long time—at three different institutions), very different from either graduate student or full professor, but they are much like the issues for associates (with a big exception I’ll mention).
These challenges are: much more open-ended teaching opportunities, the vagaries of establishing a professional identity, service requirements, multiple scholarly obligations, and (if it wasn’t already a challenge in graduate school) often a family or just very different sorts of living conditions.
Perhaps somewhat paradoxically, one of the challenges of being an assistant professor is the freedom regarding teaching. Often, departments rely on new hires to create new courses, modify curriculum, or in other ways be the innovators. There are good reasons for that reliance—assistant professors are likely to be trained in ways that are very different from the older faculty, simply because they were recently at a very different program. And it can be intoxicating to teach entirely new courses, to have the chance to work in programs (such as honors or mentoring programs) that are often overload. But it can be tempting to create too many new courses; a strategic choice is to spend the first year creating a repertoire of courses, and then tinker with them for a while.
There’s a similar problem with service—assistant professors want to make themselves central to the department, and want to be liked. It’s important to make strategic choices about obligations. And, it’s also important to keep in mind that women and POC get a lot more pressure to take on service-heavy responsibilities, for both good (representation) and bad (tokenism) reasons. Learning to say, “I’d love to do that after I have finished my book” (or “enough for tenure” or “have tenure”) in a genuinely enthusiastic way can be very useful.
It’s important to go to conferences, since it’s good to network (find other scholars working on similar projects, find out who might be a good co-panelist, co-author, co-editor of a collection), and also good to get a sense of who people are citing a lot, where the field appears to be going.
But it’s often hard to figure out which conferences, and how many, and it isn’t a good idea to spend a lot of time writing conference papers that aren’t candidates for articles or chapters. Conferences used to be good for chatting with editors (to try to figure out if a project has a market), but presses are attending fewer conferences, so it’s hard to say whether that still holds.
Many students (especially ones who took some time between undergrad and grad school) have children while in graduate school; many don’t until they’re assistant professors. Some people get tired of crappy student apartments and really want a house. Those kinds of choices have some odd consequences—I became much more productive when I reduced my commute, something I hadn’t expected. So choices to live far from campus (because it’s more affordable, schools are better, or other reasons) can introduce variables.
In short, being an assistant professor is a challenge in terms of time management because, even more than in graduate school, it involves making decisions without enough information to make them well: without knowing what all the options really are, their relative advantages and disadvantages, or their potential consequences. It’s just as much uncertainty as graduate school, but with more choices.
The most obvious course of action is to get good mentoring, but even that is choosing among several paths in a forest of unknowns. While I feel comfortable giving advice in the abstract, I don’t think I know enough about conditions now for junior scholars to make a lot of specific recommendations. I think it’s useful to have several mentors—someone just one rank above at a different institution, someone high up at your institution, someone just one rank above at your institution.
Because I am none of those things, the advice I’m about to give should be taken with a grain of salt (or more). Regardless of the publication standards for tenure at your institution, publish. I know that isn’t easy, but publication is the scholarly equivalent of “fuck you” money. It gives you the ability to move (which, paradoxically, makes it easier to stay). If you’re at an institution that requires a book for tenure, you have to have a manuscript ready to submit to a publisher by your third year.
A lot of graduate students spend the year or two (or three) that they’re writing their dissertation in a white-hot panic; they develop back problems; they sleep badly. Sometimes there is a six-month period when they are basically alternating between terror and panic. That happens because very few programs prepare students well for that last marathon of dissertation-writing (and an unhappy number of faculty believe that their job is to make that last stretch boot camp).
As I’ve tried to write about elsewhere, the unfortunate consequence is that people come to rely on a writing process that is driven by panic. That is not sustainable as an assistant professor. But, for some people, that’s the only way they know to write—they only know how to run sprints, and so they spend some amount of time (perhaps the last two years, when it’s publish or get fired) in that same white-hot panic, making everyone around them miserable, but most of all themselves.
That’s an emergency, not a career. The goal during graduate school should be to find a work process that is sustainable for life. But there really isn’t a lot of incentive to do that. Graduate courses inevitably reward treating paper writing as a sprint, and, despite the best efforts of the best advisors, so many documents leading up to the dissertation are written out of panic—because of fear of failure, imposter syndrome, panic-driven writing processes, decisional ambiguity. Good writers, and anyone who gets into graduate school is a good writer, are people accustomed to sitting down and producing a product. That they might have to draft, revise, and cut can feel like a failure. Graduate students spend a lot of time trying to reproduce the writing processes that got them into graduate school, even though those processes are no longer working. This problem of remaining committed to panic-driven writing processes isn’t helped by the unpleasant fact that there are advisors who actively work to keep students sprinting (they deliberately work their advisees into panics, they delay reading material, they believe their job is to “toughen up” students, they have panic-driven writing processes and can’t imagine any other).
Since it is so very possible to write a dissertation in a year of sheer panic, as a series of exhausting sprints, a lot of assistant professors treat trying to publish enough to get tenure as the same world of panic and sprinting that got them to finish their dissertation. That is a very bad decision.
Here’s what I wish someone had told me when I got my first job: create the work life you want to have for your entire career; stop treating your work responsibilities as a series of crises.
Trump’s border rhetoric/policies and COVID-19
Right now, I’m seeing a lot of people say that the COVID-19 crisis proves that Trump was right in his controversial policies to shut down the borders. I’m seeing it in enough different places that it’s clearly become a talking point getting repeated as a truism in pro-Trump media and communities. It’s a really interesting argument because many people think it’s a clobber argument—one that should end the discussion. But critics of Trump don’t find it all that persuasive. Why not?
There are a lot of reasons, including that some people won’t grant Trump credit for anything (just as there are Trump supporters who won’t acknowledge any criticism of him)—that’s just rabid factionalism.
Another reason has to do with how people think about politics (and lots of other things). Many people reason associatively. There’s a famous quiz for testing thinking processes that has questions like this:
There is a group of women, 30% of whom are librarians, and 70% of whom are nurses. Mary is one of those women, and she is 35. What are the chances that she is a librarian?
A. 10-40%
B. 40-60%
C. 60-80%
D. 80-100%
A fair number of people will pick A, reasoning from the base rate: 30% of the women are librarians.
If the example is:
There is a group of women, 30% of whom are librarians, and 70% of whom are nurses. Mary is one of those women, and she is 35 and wears glasses. What are the chances that she is a librarian?
A. 10-40%
B. 40-60%
C. 60-80%
D. 80-100%
Under those circumstances, a fair number of people will pick a higher percentage, as though the added detail “wears glasses” changes the chances of her being a librarian. But that detail doesn’t change the chances—there are, as far as I know, no studies showing that librarians are more likely to wear glasses than nurses. Wearing glasses is something we associate with librarians, largely because of movies and TV. The glasses are related to librarianship not logically, but associatively.
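For readers who want to see the arithmetic, here is a minimal sketch in Python (the glasses-wearing rates below are made-up numbers for illustration, not data from any study): if the detail is equally likely for both groups, Bayes’ rule leaves the answer exactly at the 30% base rate.

```python
# Bayes' rule: P(librarian | glasses).
# All glasses-wearing rates here are hypothetical illustrations.

def posterior_librarian(p_librarian, p_glasses_given_librarian, p_glasses_given_nurse):
    p_nurse = 1 - p_librarian
    numerator = p_glasses_given_librarian * p_librarian
    return numerator / (numerator + p_glasses_given_nurse * p_nurse)

# If librarians and nurses wear glasses at the same (assumed) rate,
# the detail is uninformative and the answer stays at the base rate.
print(posterior_librarian(0.30, 0.5, 0.5))  # 0.3

# The detail would matter only if the rates actually differed, e.g. if
# librarians really were twice as likely to wear glasses (made-up numbers):
print(posterior_librarian(0.30, 0.8, 0.4))  # ~0.46
```

The quiz gives us no reason to think the rates differ, so the associative detail should leave the estimate alone.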
Another example of that kind of thinking is to ask one group of people how many calories a meal has, such as a meal consisting of 6 ounces of poached chicken breast and 1 cup of rice, and to ask another group of people about the calories of a meal consisting of 6 ounces of poached chicken breast, 1 cup of rice, and a salad (4 ounces mixed green lettuces, 3 cherry tomatoes, and 1 tablespoon oil and vinegar dressing). A lot of people will give the meal with the salad fewer calories than the one without. (Sometimes even the same people will give the meal with the salad fewer calories than the one without.)
Of course, the meal with the salad has more calories, but people think it doesn’t because salads are associated with healthy food, and healthy eating is associated with consuming fewer calories.
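To make the arithmetic explicit, here is a toy sketch with rough, assumed calorie values (the numbers are illustrative, not nutrition data); the point is simply that adding food can only add calories.

```python
# Rough, assumed calorie values for illustration only.
meal = {"poached chicken breast, 6 oz": 280, "rice, 1 cup": 200}
salad = {
    "mixed greens, 4 oz": 20,
    "cherry tomatoes, 3": 10,
    "oil and vinegar dressing, 1 tbsp": 70,
}

print(sum(meal.values()))                        # 480
print(sum(meal.values()) + sum(salad.values()))  # 580
# The meal with the salad necessarily has more calories, even though
# "salad" is associated with consuming fewer calories.
```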
A few years ago, I had a funny conversation with someone about McDonald’s—they said that they got the fried chicken sandwiches rather than any of the hamburgers (even though they liked the hamburgers more) because the chicken sandwiches had fewer calories. Actually, they don’t. Again, it’s a question of association—chicken is associated with healthy food, and so this person was simply assuming that chicken sandwiches had fewer calories. I had a similar conversation with someone who bragged that she didn’t let her children drink milk for health reasons; she gave them fruit juice instead.
I once lived somewhere that, several years before, had had a series of burglaries that took place in the middle of the day, while people were away at work. Several of the neighbors responded by leaving very bright outdoor lights on all night, and that’s an interesting response. It wasn’t going to make any difference as far as preventing the burglaries—they happened during the day. But daytime burglaries are burglaries, and burglaries are associated with danger. And leaving lights on at night is associated with safety: safety against a different kind of burglary, but a kind still associated, in people’s minds, with the daytime burglaries actually happening.
So, did the policy of leaving lights on protect those neighbors against the burglars who were active in the neighborhood? No, but it protected them against something, and so seemed like a good policy.
When we’re frightened, we have a tendency to believe that protecting our borders (physical, biological, ideological) is a good plan, simply because it’s associated with protection—regardless of whether that particular way of protecting our borders will actually prevent the outcome about which we’re frightened. We protect our house against one kind of burglary, but not the one actually threatening us.
Trump’s policies regarding “borders” have as much logical relevance to COVID-19 as leaving lights on all night had for daytime burglaries. Trump’s policies were (and are) about blocking land-based immigration from Mexico and any immigration (or travel) from various Muslim countries. He never did anything about Americans travelling to and from China, and that’s how we got COVID-19. As Jeff Goodell says, “In fact, the travel ban was a failure before it began. ‘You can’t hermetically seal the United States off from the rest of the world,’ Rice says. For one thing, the ban only applied to Chinese citizens, not to Americans coming home from China or other international travelers, or to cargo that was coming into the U.S. from China.”
His rhetoric painted various Others as evil and dangerous, but never in a way that would have kept the US safe from this virus. And, despite what many people repeating this talking point seem to think, Trump got his way with his travel bans. They went into effect.
So, this talking point is simply saying that Trump was right to make Americans fearful about our borders, but he didn’t make Americans fearful about borders. He made Americans fearful about Mexicans and Muslims, and now he’s trying to make us fear the Chinese. Viruses don’t have a race, and they don’t see race. Building the wall wouldn’t have prevented COVID-19. His travel ban (which was instituted) didn’t prevent COVID-19. His second travel ban (about which he bragged) was ineffective.
That Trump’s rhetoric is a rhetoric of fear of Others, and that his policies are associated with that fear, doesn’t mean his policies were effective. That two (or more) things are associated in our minds is not actually proof that they are either causally or logically connected. They’re just associated in our minds, and sometimes in someone’s rhetoric.
Time management for graduate students
Time management as a graduate student is really hard. It’s hard to calendar effectively, set deadlines, and manage your time for a kind of project you’ve never done before. Even if you are in a program that is ethical about time off, it’s hard to figure out how to use that time, for a few reasons.
First, far too many faculty endorse toxic notions about how much people should be working, and advocate irresponsible and unethical relationships to work, talking as though we’re a gaming startup or high-powered law firm whose members should be grateful to get an afternoon off every couple of weeks. People in those fields get paid a lot more than graduate students (or faculty) do, and just because there are fields that are unethical and exploitative doesn’t mean we should be.
Not only is that model unethical, it’s unsustainable. The little research there is suggests that people who thrive in academia don’t work sixty-hour weeks and sacrifice any life other than work. They make strategic decisions about their time (including deciding to do some things badly).
So, one thing that makes time management as a graduate student vexed is that people give bad advice about it.
Second, graduate students were excellent undergraduates, and undergraduates are actively rewarded for having shitty time management practices. It’s conventional in time management to use a process that, I’m told, Eisenhower made famous (but Covey has written a lot about it): sorting tasks along two axes, urgent versus not urgent and important versus not important. It’s generally considered bad time management to spend most of your time dealing with tasks that are urgent and important and to ignore important but not-urgent tasks till they become urgent, but that’s what undergraduates have to do, and it’s what graduate students have to do while in coursework.
Third (or maybe this is really part of the first), far too many graduate advisors tell their students they have to do all the things, and do them all beautifully, rather than teaching students how to be strategic about choices. It’s important to understand that faculty, especially in the humanities, are in a terrible position ethically. But that’s a different post. The short version is that a lot of faculty can’t deal with the cognitive dissonance of wanting to have a lot of graduate students (so that we can teach graduate classes, which are hella fun) and the fact that those students are going into debt to get a degree that won’t get them a job. And they resolve that dissonance by telling students that “if you get a magic feather, you will be fine.”
There is a fourth problem, true even in programs with good placement. There are no good studies on the issue of scholarly productivity, as far as I can tell, and that absence of research means that it’s a problem to give specific advice about how much time a person can spend a day writing. Many ethical programs give graduate students a teaching-free semester for completing their dissertations, and I completely support that effort. As I said, no studies to support what I’m saying, but I’ve consistently found that it’s hard for anyone to write more than 3-4 hours a day. In my experience (and I tracked this pretty carefully), writing for 3-4 hours a day in the morning (with breaks) enables about 90 minutes of editing in the afternoon. Graduate students, even ones on fellowship, often feel that they should be writing their dissertation eight hours a day, but I don’t think that’s possible.
The fifth problem is that faculty are too often dogmatic that graduate students must follow a writing process that isn’t actually working for the faculty members insisting on it. Throughout my career, and at every institution, there have been faculty with wicked bad writing blocks—who haven’t published in years—insisting that students follow the writing process that is clearly not working for them.
My point is that time management as a graduate student is vexed because there are institutional constraints (including, possibly, an advisor with toxic notions about work and writing processes) such that much of the advice graduate students are likely to be given is useless.
So, what is my advice for graduate students?
Calendar back from your deadlines, don’t expect to write for more than four hours a day, find your best four hours (which for a lot of people is ridiculously early), have at least one day a week and at least a couple of hours every day when you feed your soul—walk, run, play basketball, hang out with beings you like (and don’t talk about your work), do yoga, cook something interesting, garden, read shitty novels.
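As a toy illustration of “calendar back from your deadlines,” here is a minimal sketch in Python; the pace (300 words per writing day, six writing days a week) is an assumption for the example, not a recommendation.

```python
from datetime import date, timedelta

def start_date(deadline: date, words_needed: int,
               words_per_day: int = 300, writing_days_per_week: int = 6) -> date:
    """Work backward from a deadline to the day writing has to start."""
    writing_days = -(-words_needed // words_per_day)  # ceiling division
    # Stretch over the calendar to allow the weekly day off.
    calendar_days = round(writing_days * 7 / writing_days_per_week)
    return deadline - timedelta(days=calendar_days)

# A 12,000-word chapter due June 1, at ~300 words per writing day:
print(start_date(date(2021, 6, 1), 12000))  # 2021-04-15
```

The exact numbers matter less than the habit: working backward makes visible, months out, whether a deadline requires a sustainable pace or a panic.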
Thesis statements, topic sentences, and “good” writing
In something I’ve written about writing, I have a footnote, and a smart person, noticing that I had packed an awful lot into that footnote, asked me about it. Their question was, more or less, whuh? This is the footnote:
“Here you’re in a bind. American writing instructors, and many textbooks, mis-use the term “thesis statement.” The thesis statement is a summary of the main point of the paper; it is not the same as the topic statement. Empirical research shows that most introductions end with a statement of topic, not the thesis. But, our students are taught to mis-identify the topic sentence as the thesis statement (e.g., so they think that “What are the consequences of small dogs conspiring with squirrels?” is a thesis statement). This is not a trivial problem, and I would suggest is one reason that students have so much trouble with reasoning and critical reading. I’m not kidding when I say that I also think it contributes significantly to how bad public argument is. You can insist on the correct usage (which is pretty nearly spitting into the wind), or you can come up with other terms—proposal statement, main claim, main point.”
I wrote it badly (I said “most paragraphs end …” rather than “most introductions end …”). It’s now corrected. Still and all, what did I mean? I was saying that we should distinguish between thesis statements and other kinds of contracts, but why does that distinction matter? Before I can persuade anyone that it matters, I have to persuade people there is a distinction to be made.
Many teachers and textbooks tell students that “the introduction has to tell ‘em what you’re gonna tell ‘em, or your reader won’t know what the paper is about.” And they identify the thesis statement (the last sentence in a summary introduction) as the way to do that. Certainly, there is a sense in which that is good advice. You can see that students who have followed that advice get excellent scores on the SAT. Here are two sample “excellent” introductions for the SAT:
“In response to our world’s growing reliance on artificial light, writer Paul Bogard argues that natural darkness should be preserved in his article “Let There be dark”. He effectively builds his argument by using a personal anecdote, allusions to art and history, and rhetorical questions.”
“In the article, “Why Literature Matters” by Dana Gioia, Gioia makes an argument claiming that the levels of interest young Americans have shown in art in recent years have declined and that this trend is a severe problem with broad consequences. Strategies Gioia employs to support his argument include citation of compelling polls, reports made by prominent organizations that have issued studies, and a quotation from a prominent author. Gioia’s overall purpose in writing this article appears to be to draw attention towards shortcomings in American participation in the arts. His primary audience would be the American public in general with a significant focus on millenials.”
Those are summary introductions, with the thesis statement (which is simultaneously a partition) very clearly stated. Thus, as far as helping students get good SAT scores, it’s pretty clear that teachers and textbooks are right to tell students to write summary introductions, and land that thesis hard in the introduction. I would say, based on my experience, that, although college teachers make fun of the “five-paragraph essay,” a non-trivial number of them do still want a summary introduction with that thesis landing hard, and a paper that is a list of reasons. Given that the thesis-driven format for a paper is rewarded, it might seem that I’m being a crank to say there is a difference between a thesis statement and a topic sentence (or, more accurately, a “contract”). So, am I?
Or, to put it the other way, are teachers and textbooks who insist that “good” writing has a summary introduction right? Is the SAT testing “good” writing?
One way to test those hypotheses is to look at essays that are valued in English classes, such as Martin Luther King, Jr.’s “Letter from Birmingham Jail” or George Orwell’s “Politics and the English Language.” Here’s the introduction from King:
“My Dear Fellow Clergymen:
While confined here in the Birmingham city jail, I came across your recent statement calling my present activities “unwise and untimely.” Seldom do I pause to answer criticism of my work and ideas. If I sought to answer all the criticisms that cross my desk, my secretaries would have little time for anything other than such correspondence in the course of the day, and I would have no time for constructive work. But since I feel that you are men of genuine good will and that your criticisms are sincerely set forth, I want to try to answer your statement in what I hope will be patient and reasonable terms.”
Here is the introduction from Orwell:
“Most people who bother with the matter at all would admit that the English language is in a bad way, but it is generally assumed that we cannot by conscious action do anything about it. Our civilization is decadent and our language — so the argument runs — must inevitably share in the general collapse. It follows that any struggle against the abuse of language is a sentimental archaism, like preferring candles to electric light or hansom cabs to aeroplanes. Underneath this lies the half-conscious belief that language is a natural growth and not an instrument which we shape for our own purposes.
“Now, it is clear that the decline of a language must ultimately have political and economic causes: it is not due simply to the bad influence of this or that individual writer. But an effect can become a cause, reinforcing the original cause and producing the same effect in an intensified form, and so on indefinitely. A man may take to drink because he feels himself to be a failure, and then fail all the more completely because he drinks. It is rather the same thing that is happening to the English language. It becomes ugly and inaccurate because our thoughts are foolish, but the slovenliness of our language makes it easier for us to have foolish thoughts. The point is that the process is reversible. Modern English, especially written English, is full of bad habits which spread by imitation and which can be avoided if one is willing to take the necessary trouble. If one gets rid of these habits one can think more clearly, and to think clearly is a necessary first step toward political regeneration: so that the fight against bad English is not frivolous and is not the exclusive concern of professional writers. I will come back to this presently, and I hope that by that time the meaning of what I have said here will have become clearer.
“These five passages have not been picked out because they are especially bad — I could have quoted far worse if I had chosen — but because they illustrate various of the mental vices from which we now suffer.”
Neither of those is a summary introduction, and neither has a thesis statement in it.
When I point this out to people who advocate the “you must have your thesis in your introduction” rule, they say that “I want to try to answer your statement in what I hope will be patient and reasonable terms” and “they illustrate various of the mental vices from which we now suffer” are thesis statements. But they aren’t—or, more accurately, it isn’t useful to use “thesis statement” in such a broad way. A “thesis statement” is (or should be used for) the statement of the thesis—that is, the sentence (or, more often, series of sentences) that clearly states the main argument the author is making.
If we use it that way, then it’s clear that neither King nor Orwell has the thesis in the introduction. King doesn’t have a single sentence that summarizes his argument. It’s a complicated argument, stated most clearly in the eleven paragraphs near the very end of the piece (from “I have traveled” to “Declaration of Independence”).
Orwell looks as though he’s giving a thesis, but he isn’t—he gives a really clear partition. (“These five passages have not been picked out because they are especially bad — I could have quoted far worse if I had chosen — but because they illustrate various of the mental vices from which we now suffer.”) He gives a kind of hypo-thesis (“Now, it is clear that the decline of a language must ultimately have political and economic causes”), something much less specific than what he actually argues. His thesis is most clearly stated at the end (from “What is above all needed” through his six rules).
I could give other examples (and often do) of scholarly articles, even abstracts, of long-form journalism, and of discourse oriented toward an opposition audience, all of which show that clever rhetors delay their thesis when what they’re saying is controversial. That’s Cicero’s advice—if you have a controversial argument, delay it till after the evidence.
But if “I want to try to answer your statement in what I hope will be patient and reasonable terms” is not a thesis statement, what is it? It’s more accurately called a topic sentence, but some people call it a “contract.” It states, very clearly, what the topic of the letter will be. It establishes expectations with the reader about the rest of the piece.
At this point, it might seem that I’m being a pedant to insist on the distinction, but I think it makes a difference (one I can’t go into here). Here, I’ll just make a couple of other points. This advice—“tell ‘em what you’re gonna tell ‘em”—isn’t just presented as a way to write a particular genre (teachers and test writers like that genre because it is extremely easy to grade); it’s presented as “good” writing. And it isn’t. No one would read the sample student introductions and think, “Oh boy, I want to read this whole paper” unless we were being paid to read them. But we’d read King or Orwell. So, it isn’t good writing—it’s easy-to-grade writing.
What I’m saying is that there is a genre (“student writing”) that is not the same as writing we actually value. We’re teaching students to write badly.
I have sometimes taught a course on how high school teachers should teach writing. At one point, I had a class of genuinely good people who were nevertheless very focused on enforcing prescriptive grammar and the genre of student writing, regardless of my trying to tell them about the problems with both. I don’t have a problem with people teaching students how to perform the genre of student writing, but I do have a problem with people teaching anyone that that genre is not just student writing, but “good” writing. And that’s what this group of students kept doing.
So, I gave them a passage of writing, and asked them to assess it, and they all trashed it. It didn’t have a summary introduction, it didn’t start with a thesis, it didn’t have paragraphs that began with main claims. They agreed that it was badly written. And then I told them that they were the high school teacher who told James Baldwin he was a bad writer.
“I sent you a rowboat”: Prosperity gospel and throwing others into the flood
The fundagelical Governors of Mississippi and Alabama have decided to resist expert recommendations about COVID-19, with the Governor of Mississippi going so far as to prevent any cities or counties from enacting policies grounded in expert opinion. And many people are shocked that governors would reject expert opinion, but, from within those governors’ imagined world, it makes perfect sense.
I’ve spent a non-trivial amount of time arguing with fundagelicals, and they are yet another set of people who argue so badly that their consistent inability to argue well should make them reconsider their beliefs. But they don’t, because they think they’re arguing well.
They believe that they’re arguing well because they are making claims that they feel certain are true, and they can find evidence to support those claims. [As a side note, I’ll say that far too many high school and college courses in argumentation would confirm that sense of what it means to make a good argument.]
What fundagelicals can’t see (nor can other people who reason badly) is that their way of reasoning is one that even they reject as bad; they just only reject it when other people reason that way.
For the sake of argument, I’ll stick with fundagelicals, but this toxic approach to deliberation is all over the political spectrum (and also slithers through other fields in which people make bad decisions, such as people who keep having disastrous relationships that don’t make them rethink their way of thinking about relationships).
Fundagelicals believe that everything about your life can be changed if you have enough faith. New Age grifters who have killed people advocate that narrative too, as do get-laid-quick and make-money-fast grifters. Nazis also made that argument. So did Maoists. And Stalinists.
Fundagelicals believe that Scripture is not just soteriological, but politically eschatological. That is, many Christians believe that Scripture tells us about the spiritual journey we as individuals must make (soteriology). Fundagelicals believe that Scripture tells the story of political history (political eschatology). For people who read Scripture as eschatological, Revelation is neither a time-specific political allegory, nor a celebration of individual faith, but a perfectly accurate narrative of what is yet to come. The notion that Revelation is a codebook that, if we read it correctly, will tell us when the world is ending, is much more controversial than many people realize.
Fundagelicals have an oddly flat reading of Scripture—Scripture means what it seems to mean, as long as that meaning supports the political agenda they now have. Thus, when conservative Christians supported slavery and segregation, they cheerfully dismissed “Do unto others” (fundagelicals still evade that one) and the very clear rules about treatment of slaves, and they equally cheerfully insisted on odd readings in order to justify racism. In my experience, fundagelicals opt for the literal reading, except when they don’t—there is no coherence to their exegetical method, except political. That is, when reading literally gets them the “proof” they want, they read literally; when it doesn’t, they read metaphorically (or dismiss the passage as a cultural blip).
For instance, arguing for Hell on literal grounds is more vexed than many people realize, and so people who want to argue for it have to read a fair number of verses in a non-literal way. They’re literal (literal to the English translation, which is a serious problem when you’re claiming a literal reading) when it comes to “homosexuality” (neither a word nor a concept that is in Scripture), but they dismiss as “cultural” the equally clear rules regarding women wearing makeup, people wearing mixed fibers, and the death penalty.
When I’ve argued with fundagelicals about this point, the argument gets hung up at exactly the same place. For instance, on the issue of homosexuality, they cite the clobber verses, and I give them various links showing they’re relying on vexed readings of those verses, and they say, “That is what it says.” (In English, of course, not in Greek. Let’s set that aside.) I point out that they are citing one item from a list of condemned behaviors, and that those lists always include behavior they allow (such as divorce, women wearing makeup in church, wearing mixed fibers, or benefitting from money loaned with interest). And they say, “Those are just cultural values of that moment.” And then I say, “So were the practices you translate as ‘homosexuality,’” and they say, “No, those are universal.” They can’t say why they’re universal without engaging in a simultaneously narcissistic and circular way of reasoning: they’re universal because I think they’re universal, and these other things are culturally specific because I think they’re culturally specific.
They can’t identify an exegetical method that they apply consistently, other than the narcissistic and circular one, because they read Scripture in a politicized and narcissistic way: they approach Scripture expecting to see their political agenda confirmed, and so they treat as real every interpretation that confirms their political agenda, and dismiss as mere appearance every one that doesn’t. In rhetoric, this is called dissociation. In psychology, it’s considered an instance of “motivated reasoning,” and most of us do it. I’m saying that, in my experience, fundagelicals—again, like many people—won’t admit that’s what they’re doing, and that is the problem.
That their exegetical method is politicized from the beginning is why they accuse their opponents of politicizing Scripture. Projection is the first move of people who can’t reflect on their own processes.
This discussion of exegesis might seem a long way from why fundagelicals are dismissing the advice of experts (except when they aren’t), but it isn’t.
What I’m saying is that fundagelicals are yet one more instance of conservative Christians for whom being conservative matters more than being Christian. Here’s the best evidence that they are in-group first, and thoughtful exegesis second: when people try to criticize their reading of Scripture, they dismiss those criticisms on the grounds that the critics are bad people. That isn’t Scriptural exegesis—that’s demagoguery. That’s an admission that they are thinking about protecting their political in-group more than about being honest and reflective about their methods of reading Scripture.
Or, tl;dr: they cherry-pick data. They cherry-pick Scripture; they cherry-pick “science.” And, just as their interpretation of Scripture is not defensible as anything other than “whatever supports our political agenda is true,” regardless of method, neither is their way of citing “science.” They’ll cite a bad study as true because it agrees with them, while critiquing a study with the same (or better) methodology on methodological grounds (Family Research Association is a great site for seeing this contradiction).
This cherry-picking of data while pretending to have a principled stance is not restricted to evangelicals. (Do not get me started about raw foodies.) But their cherry-picking of data is important because fundagelicals are politically powerful right now, despite their perpetual and ridiculous whingeing about being victims (talk about “snowflakes”—another instance of projection).
What I think a lot of non-fundagelicals are having trouble understanding about our current political moment is the dominance of the prosperity gospel (an example of the “just world model”).
The prosperity gospel is a non-falsifiable interpretive frame that says that, if you have enough faith, you can get anything you want. It’s non-falsifiable in two ways. First, if you don’t get what you want, then you didn’t have enough faith—there’s no way to disprove this explanation of the relationship between success and faith. Second, if something happens that simply cannot be explained as a lack of faith, it’s just a temporary setback, just God testing our faith. (Although most people tie it back to 19th-century movements, it’s close to the muddled 17th-century New England Puritan doctrine of signs.)
Just to be clear: I am a person of faith, and I think faith enables us to do extraordinary things. It also enables us to put one foot in front of the other through difficult times, because faith is the belief that things will turn out all right. I also tithe. But I don’t believe that faith guarantees us the outcome we want—that we are entitled to having all of our desires fulfilled by having perfect faith (or giving enough money). Such a belief substitutes our will (our desires, really) for God’s; that seems blasphemous to me.
I’ve also seen that kind of faith, not in God, but in our ability to get our way if we have enough faith, do great damage. It’s the old joke about the person of faith who refused to heed warnings, with the “punchline” of a drowned person of great faith asking God, “Why did you let me drown–I had perfect faith in you?” and God answering, “I sent you a warning, a rowboat, a motorboat, and a helicopter–what more did you want?”
Paradoxically, the just world model, especially when coupled with the notion that we can get whatever we want if we have enough faith, leads to tragedy. People don’t help others because we blame the victims. We ignore systemic failings on the assumption that any problem is always a failure of individual faith. Thus, people who believe in the just world model tend not to recognize systemic problems like poverty, racism, and sexism, and they don’t support systemic solutions, such as communities supporting infrastructure (good schools, roads, healthcare). The just world model increases us v. them thinking. The paradox of the just world model is that it leads to an unjust world—whether religious or not (as mentioned above, the idea that you can get whatever you want if you have enough faith/will/confidence is the basis of philosophies as diverse as Libertarianism, Nazism, get-rich-quick schemes, and pseudo-mystical success schemes).
Once a person or community has stepped into this ideology, it’s hard to get out. Rejecting the rowboat and the helicopter becomes how one demonstrates faith. The difference between our situation and that of the guy who rejected the flood warnings is that he only drowned himself; if we sit on the roof, and reject the epidemiologists, public health experts, social distancing, and ventilators to demonstrate our individual (or church’s) faith, we aren’t the only ones who drown. We may not drown at all. But health workers will. Police, EMTs, the vulnerable.
We aren’t just sitting on the roof risking our lives. We’re throwing others into the water. Being Christian should mean we care for the vulnerable—we’re being given that chance. God sent us the epidemiologists; let’s listen to them.
Revised syllabus for RHE330D Spring 2020
I’ve put the revised syllabus here in case Canvas crashes. All policies of the previous syllabus remain in effect except for those explicitly changed by this syllabus.
What do Followers want?
I’ve known a lot of people, both personally and virtually, who were Followers. Sometimes they changed churches multiple times, sometimes philosophies, political ideologies, identities (like the guy I knew in college who flailed around from preppie to Che-Marxist to tennis fanatic—each with an entirely new wardrobe), with each new identity/community the one to which they were fully committed.
I’m not saying there’s anything wrong with changing wardrobes, identities, churches, even religions. People should change. What made (and makes) these people different is how they talk(ed) about each new conversion—this group was perfect, this group made them feel complete, this group/ideology answered all their questions, gave meaning to their lives, was something to which they could commit with perfect certainty.
That they went through this process multiple times, and kept failing to find a community/ideology that continued to satisfy them, never made them doubt the quest, nor doubt that this time they had found it. And I thought that was interesting. Each of these people was just someone in my circle of acquaintance for a few years, and I eventually lost track of them—in three cases because they’d joined cults.
There were (are) a lot of interesting things about these people, not just that their continued disenchantment never made them reconsider their goals, but also that they didn’t see themselves as followers at all, let alone Followers. They saw themselves as independent people, critical thinkers, autonomous individuals of good judgment—who were continually searching for, and temporarily finding, a group or ideology that enabled them to surrender all judgment and doubt. That’s a paradox.
In the mid-thirties, Theodore Abel, an American sociologist, offered a prize of 400 German marks for “the best” personal narrative of someone who had joined the Nazis prior to 1933—essentially a conversion narrative. In 1938, he published a book about it. The Nazis who wrote those narratives sounded like my various acquaintances, not in terms of being Nazis, but in simultaneously seeing (and representing) themselves as autonomous individuals of purely independent judgment who were seeking a totalizing group experience—one that demanded pure loyalty and complete submission.
They were Followers.
In the Platonic dialogue Gorgias, Socrates gets into an argument with two people who want to study rhetoric so that they can control the masses and thereby become powerful, perhaps even tyrants. When Socrates asks why, one of them answers, more or less, “For the power. D’uh.” And Socrates says, “Does the tyrant really have the power?” Socrates points out that the tyrant is, in a way, being controlled by the masses he’s trying to control. He can’t, for instance, advocate what he really thinks is best, but only what he thinks his base will go along with.
It’s a typically paradoxical Socratic argument, but there’s something to it. The tyrant can only succeed as long as he (or she—not an option Socrates and the others considered) gives the Followers what they want. In other words, if we care about tyrants, we should see Followers as the source of their power. Instead of asking why tyrants (or demagogues) do what they do, we should ask what Followers want. So, what do Followers want?
Here’s the short version. They want a leader who speaks and acts decisively for them, who is a “universal genius,” and whose continued success at crushing and shaming opponents not only gives them “agency by proxy” in that shaming and crushing, but confirms the followers’ excellent judgment in having chosen to follow, and who is supported by total loyalty.
That’s the short version. Here’s the longer.
They want a leader who is a “universal genius,” not in the sense of a polymath (someone trained or educated in multiple fields), but in the sense of a person who has a capacity for seeing the right answer in any situation, without training, or expertise, or prior knowledge. This genius can lecture actual experts on those experts’ fields, correct their errors, see solutions they’ve overlooked simply because of his extraordinarily brilliant ability to see.
Followers’ model of leadership assumes that there is a right answer, and that’s something else that Followers want—the erasure of a particular kind of uncertainty. They don’t mind the uncertainty of a gamble, as long as the leader expresses confidence in his ability to succeed at what is obviously (to him) the right course of action. They mind the uncertainty of a situation that might not have a single right answer, or in which an answer isn’t obvious to them, or, even more triggering, in which the right answer isn’t obvious to anyone. The anger and anxiety such situations provoke are heightened if they are responsible for making the choice, since they then face the prospect of being shamed if they turn out to be wrong.
Avoiding shame is important to Followers, and they often associate masculinity with decisiveness. Not just the decisiveness of making a decision quickly (they don’t always require quick decisions), but of deciding to take action, to do something powerful, dramatic, clear. Followers like things to be black and white, and they want a leader whose actions are similarly stark, and who advocates those actions in similarly stark terms. Followers don’t like nuance, hedging, or subtlety, but that doesn’t mean they reject all kinds of complexity.
They don’t mind complexity of a particular kind. Followers can enjoy it if the leader explains things in ways that don’t quite make sense, or endorses an incoherently complicated conspiracy theory—the leader’s ability to understand things they can’t confirms their faith that he is a genius. That the leader is confidently saying something that doesn’t quite make sense is taken by Followers to mean that, while things might seem complicated to the follower, they are clear to the leader. Thus, the leader has a direct connection to the ways of the universe–universal genius. Not quite making sense confirms the perception of the leader as a person who clearly sees what is unclear to others, but hedging or nuance would suggest that the leader does not perceive things perfectly clearly, and that is unacceptable in a leader.
Followers’ sense of themselves as people with excellent judgment—autonomous thinkers who are completely submitting their judgment to the leader—requires that the leader always be confident and clear, and that he describe issues in black and white terms.
This part is hard to explain. The Followers I knew kept looking for a system of belief that would mean they were not only never wrong, but never unsure, never in danger of being wrong, of being shamed. And, like many people, they equated clarity with certainty and certainty with being right, and they equated nuance and hedging with uncertainty, and uncertainty with being more likely to be wrong. Thus, a leader who says, “This is absolutely true” (even if it isn’t) is more trustworthy than one who hedges, because the first leader has more confidence. Being confident is more important than being accurate. (“It’s a higher truth,” Followers tell me.)
It’s interesting that, sometimes, a leader can take a while to make a decision, but, when he does make it, he has to announce his decision in unequivocal terms and enact it immediately, since that signals clarity of purpose and confidence. To put his decision into the world of deliberation and disagreement would be to allow the decision of a genius to be muddled, compromised, and dithered over. Followers mistake quick action justified by over-confidence for a masculine and decisive response. They mistake recklessness for decisiveness–because they admire recklessness, since it signals faith, will, and commitment.
Followers need the leader to give them plausible narratives that guarantee success through strength, will, and commitment. So, what happens when the leader fails? At that point, we get scapegoating and projection. Oddly enough, Followers can tolerate complicated conspiracy narratives, even ones they can’t entirely follow, as long as the overall gist of the narrative is simple: we are good people entitled to everything we want, and They are the ones keeping us from getting it. We are blameless.
Followers don’t care if the leader lies. They like it. They don’t feel personally lied to, and they like that the leader can get away with lying—they admire that degree of confidence, and the shamelessness. They want a shameless leader. They want a leader who isn’t accountable; they want one without restraints. They don’t see the leader engaging in quid pro quo, violating the law, or even openly lining his pocket as a problem, let alone corruption. They think that’s what power is for. And, as with the lying, they admire the shamelessness.
They also like it if the leader says ridiculously impossible things; they like the hyperbole. They think it signals passionate commitment to their cause because it is unrestrained.
They don’t expect the leader to be loyal to individuals, although the leader demands perfect loyalty from individuals, and Followers demand perfect loyalty to the leader from the leader’s subordinates. If the leader’s aides betray the leader in any way, such as revealing that the leader is incompetent, Followers are outraged, even if everything the aide says is true.
This part is also hard to explain, so I’ll give an example. A Follower I knew was on the edges of a cult run by a man who called himself various things, including Da Free John. At one point, Da Free John had followers who came forward and accused him of, among other things, egregious sexual harassment. Those accusations inspired my friend to get more involved with the cult. When I asked about the accusations, he was angry with the people who had made them. His argument was something along the lines of, “They knew what they were getting into, and they betrayed him.” In other words, as far as I could tell, he was willing to grant the sexual harassment, but blamed the victims, not just for being victims, but for being disloyal enough to complain about it.
Albert Speer was condemned for his disloyalty, as though he should not have admitted to any flaws in Hitler (I think condemning him for his being a lying liar who lied is reasonable criticism, but not disloyalty). Victims of abuse by church officials are regularly condemned for their disloyalty, as though that’s the biggest problem.
Followers pride themselves on their ability to be loyal, and they will remain loyal as long as the leader continues to be a beacon of confidence, certainty, and decisiveness. That commitment can even withstand some serious failures on the part of the leader, for a few reasons. The most important, mentioned above, is that they refuse to listen to any criticism of the leader, even if it is made by informed people (such as close aides). Followers only pay attention to pro-leader media, and they dismiss as “biased” any media (or figure) critical of their leader. This dismissal of criticism of the leader as “biased” is not only motivism; it also ensures that Followers remain in informational enclaves, ones that will spiral into in-group amplification (aka “rhetorical radicalization“).
If the leader does completely fail, they are likely to blame his aides, rather than him (as happened with a large number of Germans in regard to Hitler). To admit the leader was fallible would be to admit that the Follower had bad judgment, and that’s not acceptable.
So, what I’m saying is that Followers are people who put perfect faith in a leader, a faith that is impervious to disproof, and they refuse to look at any evidence that their loyalty might be misplaced. The conventional way to describe that kind of relationship is blind loyalty, but they don’t think they have blind loyalty (they think the out-group does). They think their loyalty is rational and clear-eyed because they believe they have the true perception of the leader, one that comes from an accurate assessment of his traits and accomplishments. They believe the leader is transparent to them.
But, if this isn’t blind loyalty, it’s certainly blindered loyalty, since they refuse to look at anything outside of their pro-leader media. And it generally ends badly.