Time management for assistant professors

weekly work schedule

In an earlier post about time management for graduate students, I mentioned that there is a limit to how much a person can write in a day. I also think that a lot of people get burned out working day after day on the same topic, and, if they don’t get burned out, they lose their ability to think critically about what they’re writing. Some people manage that second problem by working on multiple projects at the same time. When they just can’t work on one project any longer, they switch to another for the next three weeks or so, and then come back. I can’t do that.

In many fields, a graduate student teaches one class (perhaps two), is on very few committees, and has one or two major scholarly obligations (finishing the dissertation and trying to get something published). The kinds of classes that graduate students teach often have fairly established syllabi (or, at least, course requirements).

There’s a post here where I talk some about the challenges. The time management challenges for assistant professors are, I think (and I was an assistant professor for a long time—at three different institutions), very different from those of either graduate students or full professors, but they are much like the issues for associate professors (with a big exception I’ll mention).

These challenges are: much more open-ended teaching opportunities, the vagaries of establishing a professional identity, service requirements, multiple scholarly obligations, and (if it wasn’t already a challenge in graduate school) often a family or just very different sorts of living conditions.

Perhaps somewhat paradoxically, one of the challenges of being an assistant professor is the freedom regarding teaching. Often, departments rely on new hires to create new courses, modify the curriculum, or in other ways be the innovators. There are good reasons for that reliance—assistant professors are likely to be trained in ways that are very different from the older faculty, simply because they were recently at a very different program. It can be tempting to create too many new courses—a strategic choice is to spend the first year creating a repertoire of courses, and then tinkering with them for a while. It can be intoxicating to teach entirely new ones, and to have the chance to work in programs (such as honors or mentoring programs) that are often overload.

There’s a similar problem with service—assistant professors want to make themselves central to the department, and want to be liked. It’s important to make strategic choices about obligations. And, it’s also important to keep in mind that women and POC get a lot more pressure to take on service-heavy responsibilities, for both good (representation) and bad (tokenism) reasons. Learning to say, “I’d love to do that after I have finished my book” (or “enough for tenure” or “have tenure”) in a genuinely enthusiastic way can be very useful.

It’s important to go to conferences, since it’s good to network (find other scholars working on similar projects, find out who might be a good co-panelist, co-author, co-editor of a collection), and also good to get a sense of who people are citing a lot, where the field appears to be going.

But it’s often hard to figure out which conferences, and how many, and it isn’t a good idea to spend a lot of time writing conference papers that aren’t candidates for articles or chapters. Conferences used to be good for chatting with editors (to try to figure out if a project has a market), but presses are attending fewer conferences, so it’s hard to say.

Many students (especially ones who took some time between undergrad and grad school) have children in graduate school; many don’t until they’re assistant professors. Some people get tired of crappy student apartments and really want a house. Those kinds of choices have some odd consequences—I became much more productive when I reduced my commute, something I hadn’t expected. So, choices to live far from campus (because it’s more affordable, schools are better, or other reasons) can introduce variables.

In short, being an assistant professor is a challenge in terms of time management because, even more than graduate school, it involves making choices without knowing what all the options really are, their relative advantages and disadvantages, or their potential consequences. It’s just as much uncertainty as graduate school, but with more choices.

The most obvious course of action is to get good mentoring, but even that is choosing among several paths in a forest of unknowns. While I feel comfortable giving advice in the abstract, I don’t think I know enough about conditions now for junior scholars to make a lot of specific recommendations. I think it’s useful to have several mentors—someone just one rank above at a different institution, someone high up at your institution, someone just one rank above at your institution.

Because I am none of those kinds of mentors, the advice I’m about to give should be taken with a grain of salt (or more). Regardless of the publication standards for tenure at your institution, publish. I know that isn’t easy, but publication is the scholarly equivalent of “fuck you” money. It gives you the ability to move (which, paradoxically, makes it easier to stay). If you’re at an institution that requires a book for tenure, you have to have a manuscript ready to submit to a publisher by your third year.

A lot of graduate students spend the year or two (or three) that they’re writing their dissertation in a white-hot panic; they develop back problems; they sleep badly. Sometimes there is a six-month period when they are basically alternating between terror and panic. That happens because very few programs prepare students well for that last marathon of dissertation-writing (and an unhappy number of faculty believe that their job is to make that last stretch a boot camp).

As I’ve tried to write about elsewhere, the unfortunate consequence is that people come to rely on a writing process that is driven by panic. That is not sustainable as an assistant professor. But, for some people, that’s the only way they know to write—they only know how to run sprints, and so they spend some amount of time (perhaps the last two years, when it’s publish or get fired) in that same white-hot panic, making everyone around them miserable, but most of all themselves.

That’s an emergency, not a career. The goal during graduate school should be to find a work process that is sustainable for life. But there really isn’t a lot of incentive to do that. Graduate courses inevitably reward treating paper writing as a sprint, and, despite the best efforts of the best advisors, so many documents leading up to the dissertation are written out of panic—because of fear of failure, imposter syndrome, panic-driven writing processes, decisional ambiguity. Good writers—and anyone who gets into graduate school is a good writer—are people accustomed to sitting down and producing a product. That they might have to revise, draft, and cut can feel like a failure. Graduate students spend a lot of time trying to reproduce the writing processes that got them into graduate school, even though those processes are no longer working.

This problem of remaining committed to panic-driven writing processes isn’t helped by the unpleasant fact that there are advisors who actively work to keep students sprinting (they deliberately work their advisees into panics, they delay reading material, they believe their job is to “toughen up” students, or they have panic-driven writing processes themselves and can’t imagine any other).

Since it is so very possible to write a dissertation in a year of sheer panic, as a series of exhausting sprints, a lot of assistant professors treat trying to publish enough to get tenure as the same world of panic and sprinting that got them to finish their dissertation. That is a very bad decision.

Here’s what I wish someone had told me when I got my first job: create the work life you want to have for your entire career; stop treating your work responsibilities as a series of crises.




Trump’s border rhetoric/policies and COVID-19

A four-person bomb shelter in Munich

Right now, I’m seeing a lot of people say that the COVID-19 crisis proves that Trump was right in his controversial policies to shut down the borders. I’m seeing it in enough different places that it’s clearly become a talking point getting repeated as a truism in pro-Trump media and communities. It’s a really interesting argument because many people think it’s a clobber argument—one that should end the debate. But critics of Trump don’t find it all that persuasive. Why not?

There are a lot of reasons, including that some people won’t grant Trump credit for anything (just as there are Trump supporters who won’t acknowledge any criticism of him)—that’s just rabid factionalism.

Another reason has to do with how people think about politics (and lots of other things). Many people reason associatively. There’s a famous quiz for testing thinking processes that has questions like this:

There is a group of women, 30% of whom are librarians, and 70% of whom are nurses. Mary is one of those women, and she is 35. What are the chances that she is a librarian?

A. 10-40%
B. 40-60%
C. 60-80%
D. 80-100%

A fair number of people will pick A (10-40%), reasoning from the base rate of 30%.

If the example is:

There is a group of women, 30% of whom are librarians, and 70% of whom are nurses. Mary is one of those women, and she is 35 and wears glasses. What are the chances that she is a librarian?

A. 10-40%
B. 40-60%
C. 60-80%
D. 80-100%

Under those circumstances, a fair number of people will pick a higher percentage, as though the added detail “wears glasses” changes the chances of her being a librarian. But that detail doesn’t change the chances—there are, as far as I know, no studies showing that librarians are more likely to wear glasses than nurses. Wearing glasses is something we associate with librarians, largely because of movies and TV. The detail is related associatively, not logically.
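The quiz point is, at bottom, Bayes’ rule: if a detail is equally likely in both groups, the posterior probability equals the base rate. A minimal sketch in Python, where the 40% glasses-wearing rate is an invented figure assumed (for illustration only) to be the same for both groups:

```python
# Base rates from the quiz: 30% librarians, 70% nurses.
p_librarian = 0.30
p_nurse = 0.70

# Invented assumption, purely for illustration: glasses-wearing is
# equally common (40%) in both groups, since there's no evidence otherwise.
p_glasses_given_librarian = 0.40
p_glasses_given_nurse = 0.40

# Bayes' rule: P(librarian | glasses)
p_glasses = (p_glasses_given_librarian * p_librarian
             + p_glasses_given_nurse * p_nurse)
posterior = (p_glasses_given_librarian * p_librarian) / p_glasses

# The irrelevant detail leaves the base rate unchanged.
print(f"{posterior:.2f}")  # prints 0.30
```

Any value could be substituted for 0.40; as long as the two conditional probabilities are equal, the posterior stays at 30%.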

Another example of that kind of thinking is to ask one group of people how many calories a meal has, such as a meal consisting of 6 ounces of poached chicken breast and 1 cup of rice, and to ask another group of people about the calories of a meal consisting of 6 ounces of poached chicken breast, 1 cup of rice, and a salad (4 ounces mixed green lettuces, 3 cherry tomatoes, and 1 tablespoon oil and vinegar dressing). A lot of people will give the meal with the salad fewer calories than the one without. (Sometimes even the same people will give the meal with the salad fewer calories than the one without.)

Of course, the meal with the salad has more calories, but people think it doesn’t because salads are associated with healthy food, and healthy eating is associated with consuming fewer calories.
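The calorie version of the same mistake is just addition: adding food can only add calories. A quick sketch with rough, made-up calorie figures (assumed for illustration, not nutrition data):

```python
# Rough, assumed calorie values (for illustration only, not nutrition data)
chicken = 280      # 6 oz poached chicken breast
rice = 200         # 1 cup rice
greens = 20        # 4 oz mixed green lettuces
tomatoes = 10      # 3 cherry tomatoes
dressing = 70      # 1 tbsp oil and vinegar dressing

meal_plain = chicken + rice
meal_with_salad = chicken + rice + greens + tomatoes + dressing

# Whatever the exact numbers, the salad meal is strictly larger;
# the association with "healthy" doesn't subtract anything.
print(meal_plain, meal_with_salad)  # prints 480 580
```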

A few years ago, I had a funny conversation with someone about McDonald’s—they said that they got the fried chicken sandwiches rather than any of the hamburgers (even though they liked the hamburgers more) because the chicken sandwiches had fewer calories than any of the hamburgers. Actually, they don’t. Again, it’s a question of association—chicken is associated with healthy food, and so this person was simply assuming that chicken sandwiches had fewer calories. I had a similar conversation with someone who bragged that she didn’t let her children drink milk for health reasons; she gave them fruit juice instead.

I once lived somewhere that, several years before, had had a series of burglaries that took place in the middle of the day, while people were away at work. Several of the neighbors responded by leaving very bright outdoor lights on all night, and that’s an interesting response. It wasn’t going to make any difference as far as preventing the burglaries—they happened during the day. But daytime burglaries are burglaries, and they’re associated with danger. And leaving lights on during the night is associated with safety, with safety against a different kind of burglary, but one that’s still associated with daytime burglaries.

So, did the policy of leaving lights on protect those neighbors against the burglars who were active in the neighborhood? No, but it protected them against something, and so seemed like a good policy.

When we’re frightened, we have a tendency to believe that protecting our borders (physical, biological, ideological) is a good plan, simply because it’s associated with protection—regardless of whether that particular way of protecting our borders will actually prevent the outcome about which we’re frightened. We protect our house against one kind of burglary, but not the one actually threatening us.

Trump’s policies regarding “borders” have as much logical relevance to COVID-19 as leaving lights on all night had for daytime burglaries. Trump’s policies were (and are) about blocking land-based immigration from Mexico and any immigration (or travel) from various Muslim countries. He never did anything about Americans travelling to and from China, and that’s how we got COVID-19. As Jeff Goodell says, “In fact, the travel ban was a failure before it began. ‘You can’t hermetically seal the United States off from the rest of the world,’ Rice says. For one thing, the ban only applied to Chinese citizens, not to Americans coming home from China or other international travelers, or to cargo that was coming into the U.S. from China.”

His rhetoric associated various Others as evil and dangerous, but never in a way that would have kept the US safe from this virus. And, despite what many people who are repeating the talking point about his policies being right seem to think, Trump got his way with his travel bans. They went into effect.

So, this talking point is simply saying that Trump was right to make Americans fearful about our borders, but he didn’t make Americans fearful about borders. He made Americans fearful about Mexicans and Muslims, and now he’s trying to make us fear the Chinese. Viruses don’t have a race, and they don’t see race. Building the wall wouldn’t have prevented COVID-19. His travel ban (which was instituted) didn’t prevent COVID-19. His second travel ban (about which he bragged) was ineffective.

That Trump’s rhetoric is a rhetoric of fear of Others, and that his policies are associated with that fear, doesn’t mean his policies were effective. That two (or more) things are associated in our minds is not actually proof that they are either causally or logically connected. They’re just associated in our minds, and sometimes in someone’s rhetoric.

Time management for graduate students

Ideal weekly schedule

Time management as a graduate student is really hard. It’s hard to do things like calendar effectively, set deadlines, and manage your time when it’s for a kind of project you’ve never done before. Even if you are in a program that is ethical as far as time off, it’s hard to figure out how to use that time, for a few reasons.

First, far too many faculty endorse toxic notions about how much people should be working, and advocate irresponsible and unethical relationships to work, talking like we’re a gamer startup or high-powered law firm, and should be grateful to get an afternoon off every couple of weeks. Those people get paid a lot more than graduate students (or faculty) do, and just because there are fields that are unethical and exploitative doesn’t mean we should be.

Not only is that model unethical, it’s unsustainable. The little research there is suggests that people who thrive in academia don’t work sixty-hour weeks or sacrifice any life other than work. They make strategic decisions about their time (including deciding to do some things badly).

So, one thing that makes time management as a graduate student vexed is that people give bad advice about it.

Second, graduate students were excellent undergraduates, and undergraduates are actively rewarded for having shitty time management practices. It’s conventional in time management to use a process that, I’m told, Eisenhower made famous (but Covey has written a lot about it): thinking about tasks in terms of urgent versus important. In terms of the lives of graduate students it looks like this.

chart of important v. urgent tasks

It’s generally considered bad time management to spend most of your time dealing with tasks that are urgent and important and to ignore important but not-urgent tasks till they become urgent, but that’s what undergraduates have to do, and it’s what graduate students have to do while in coursework.

Third, (or maybe this is really part of the first), far too many graduate advisors tell their students they have to do all the things, and do them all beautifully, rather than teaching students how to be strategic about choices. It’s important to understand that faculty, especially in the humanities, are in a terrible position ethically. But that’s a different post. The short version is that a lot of faculty can’t deal with the cognitive dissonance of wanting to have a lot of graduate students (so that we can teach graduate classes, which are hella fun) and the fact that those students are going into debt to get a degree that won’t get them a job. And they resolve that dissonance by telling students that “if you get a magic feather, you will be fine.”

There is a fourth problem, true even in programs with good placement. There are no good studies on the issue of scholarly productivity, as far as I can tell, and that absence of research means that it’s a problem to give specific advice about how much time a person can spend a day writing. Many ethical programs give graduate students a teaching-free semester for completing their dissertations, and I completely support that effort. As I said, no studies to support what I’m saying, but I’ve consistently found that it’s hard for anyone to write more than 3-4 hours a day. In my experience (and I tracked this pretty carefully), writing for 3-4 hours a day in the morning (with breaks) enables about 90 minutes of editing in the afternoon. Graduate students, even ones on fellowship, often feel that they should be writing their dissertation eight hours a day, but I don’t think that’s possible.

The fifth problem is that faculty are too often dogmatic that graduate students must follow a writing process that isn’t actually working even for the faculty members insisting on it. Throughout my career, and at every institution, there have been faculty with wicked bad writing blocks (who haven’t published in years) who insist that students follow the writing process that is clearly not working for them.

My point is that time management as a graduate student is vexed because there are institutional constraints (including, possibly, an advisor with toxic notions about work and writing processes) such that much of the advice graduate students are likely to be given is useless.

So, what is my advice for graduate students?

Calendar back from your deadlines, don’t expect to write for more than four hours a day, find your best four hours (which for a lot of people is ridiculously early), have at least one day a week and at least a couple of hours every day when you feed your soul—walk, run, play basketball, hang out with beings you like (and don’t talk about your work), do yoga, cook something interesting, garden, read shitty novels.

Thesis statements, topic sentences, and “good” writing

marked up draft

In some advice I’ve written about writing, I have a footnote, and a smart person who noticed that I had packed an awful lot into that footnote asked me about it. Their question was, more or less: whuh? This is the footnote:

“Here you’re in a bind. American writing instructors, and many textbooks, mis-use the term “thesis statement.” The thesis statement is a summary of the main point of the paper; it is not the same as the topic statement. Empirical research shows that most introductions end with a statement of topic, not the thesis. But, our students are taught to mis-identify the topic sentence as the thesis statement (e.g., so they think that “What are the consequences of small dogs conspiring with squirrels?” is a thesis statement). This is not a trivial problem, and I would suggest is one reason that students have so much trouble with reasoning and critical reading. I’m not kidding when I say that I also think it contributes significantly to how bad public argument is. You can insist on the correct usage (which is pretty nearly spitting into the wind), or you can come up with other terms—proposal statement, main claim, main point.”

I wrote it badly (I said “most paragraphs end…” rather than “most introductions”). It’s now corrected. Still and all, what did I mean? I was saying that we should distinguish between thesis statements and other kinds of contracts, but why does that distinction matter? Before I can persuade anyone that it matters, I have to persuade people there is a distinction to be made.

Many teachers and textbooks tell students that “the introduction has to tell ‘em what you’re gonna tell ‘em, or your reader won’t know what the paper is about.” And they identify the thesis statement (the last sentence in a summary introduction) as the way to do that. Certainly, there is a sense in which that is good advice. You can see that students who have followed that advice get excellent scores on the SAT. Here are two sample “excellent” introductions for the SAT:

“In response to our world’s growing reliance on artificial light, writer Paul Bogard argues that natural darkness should be preserved in his article “Let There be dark”. He effectively builds his argument by using a personal anecdote, allusions to art and history, and rhetorical questions.”

“In the article, “Why Literature Matters” by Dana Gioia, Gioia makes an argument claiming that the levels of interest young Americans have shown in art in recent years have declined and that this trend is a severe problem with broad consequences. Strategies Gioia employs to support his argument include citation of compelling polls, reports made by prominent organizations that have issued studies, and a quotation from a prominent author. Gioia’s overall purpose in writing this article appears to be to draw attention towards shortcomings in American participation in the arts. His primary audience would be the American public in general with a significant focus on millenials.”

Those are summary introductions, with the thesis statement (that is simultaneously a partition) very clearly stated. Thus, as far as helping students get good SAT scores, it’s pretty clear that teachers and textbooks are right to tell students to write summary introductions, and land that thesis hard in the introduction. I would say, based on my experience, that, although college teachers make fun of the “five-paragraph essay,” a non-trivial number of them do still want a summary introduction with that thesis landing hard, and a paper that is a list of reasons. Given that the thesis-driven format for a paper is rewarded, it might seem that I’m being a crank to say there is a difference between a thesis statement and a topic sentence (or, more accurately, a “contract”). So, am I?

Or, to put it the other way, are teachers and textbooks who insist that “good” writing has a summary introduction right? Is the SAT testing “good” writing?

One way to test those hypotheses is to look at essays that are valued in English classes, such as Martin Luther King, Jr.’s “Letter from Birmingham Jail” or George Orwell’s “Politics and the English Language.” Here’s the introduction from King:

“My Dear Fellow Clergymen:
While confined here in the Birmingham city jail, I came across your recent statement calling my present activities “unwise and untimely.” Seldom do I pause to answer criticism of my work and ideas. If I sought to answer all the criticisms that cross my desk, my secretaries would have little time for anything other than such correspondence in the course of the day, and I would have no time for constructive work. But since I feel that you are men of genuine good will and that your criticisms are sincerely set forth, I want to try to answer your statement in what I hope will be patient and reasonable terms.”

Here is the introduction from Orwell:
“Most people who bother with the matter at all would admit that the English language is in a bad way, but it is generally assumed that we cannot by conscious action do anything about it. Our civilization is decadent and our language — so the argument runs — must inevitably share in the general collapse. It follows that any struggle against the abuse of language is a sentimental archaism, like preferring candles to electric light or hansom cabs to aeroplanes. Underneath this lies the half-conscious belief that language is a natural growth and not an instrument which we shape for our own purposes.

“Now, it is clear that the decline of a language must ultimately have political and economic causes: it is not due simply to the bad influence of this or that individual writer. But an effect can become a cause, reinforcing the original cause and producing the same effect in an intensified form, and so on indefinitely. A man may take to drink because he feels himself to be a failure, and then fail all the more completely because he drinks. It is rather the same thing that is happening to the English language. It becomes ugly and inaccurate because our thoughts are foolish, but the slovenliness of our language makes it easier for us to have foolish thoughts. The point is that the process is reversible. Modern English, especially written English, is full of bad habits which spread by imitation and which can be avoided if one is willing to take the necessary trouble. If one gets rid of these habits one can think more clearly, and to think clearly is a necessary first step toward political regeneration: so that the fight against bad English is not frivolous and is not the exclusive concern of professional writers. I will come back to this presently, and I hope that by that time the meaning of what I have said here will have become clearer.

“These five passages have not been picked out because they are especially bad — I could have quoted far worse if I had chosen — but because they illustrate various of the mental vices from which we now suffer.”

Neither of those is a summary introduction, and neither has a thesis statement in it.

When I point this out to people who advocate “you must have your thesis in your introduction,” they say that “I want to try to answer your statement in what I hope will be patient and reasonable terms” and “they illustrate various of the mental vices from which we now suffer” are thesis statements. But they aren’t—or, more accurately, it isn’t useful to use “thesis statement” in such a broad way. A “thesis statement” is (or should be used for) the statement of the thesis—that is, the sentence (or, more often, series of sentences) that clearly states the main argument the author is making.

If we use it that way, then it’s clear that neither King nor Orwell has the thesis in the introduction. King doesn’t have a single sentence that summarizes his argument. It’s a complicated argument, but stated most clearly in eleven paragraphs almost at the very end of the piece (from “I have traveled” to “Declaration of Independence”).

Orwell looks as though he’s giving a thesis, but he isn’t—he gives a really clear partition. (“These five passages have not been picked out because they are especially bad — I could have quoted far worse if I had chosen — but because they illustrate various of the mental vices from which we now suffer.”) He gives a kind of hypo-thesis (“Now, it is clear that the decline of a language must ultimately have political and economic causes”), something much less specific than what he actually argues. His thesis is most clearly stated at the end (from “What is above all needed” through his six rules).

I could give other examples (and often do) of scholarly articles, even abstracts, long-form journalism, discourse oriented toward an opposition audience of various kinds that show that clever rhetors delay their thesis when what they’re saying is controversial. That’s Cicero’s advice—if you have a controversial argument, delay it till after the evidence.

But if “I want to try to answer your statement in what I hope will be patient and reasonable terms” is not a thesis statement, what is it? It’s more accurately called a topic sentence, but some people call it a “contract.” It states, very clearly, what the topic of the letter will be. It establishes expectations with the reader about the rest of the piece.

At this point, it might seem that I’m being a pedant to insist on the distinction, but I think it makes a difference (one I can’t go into here). Here, I’ll just make a couple of other points. This advice—“tell ‘em what you’re gonna tell ‘em”—isn’t just presented as a way to write a particular genre (teachers and test writers like that genre because it is extremely easy to grade); it’s presented as “good” writing. And it isn’t. No one would read the sample student introductions and think, “Oh boy, I want to read this whole paper,” unless they were being paid to read them. But we’d read King or Orwell. So, it isn’t good writing—it’s easy-to-grade writing.

What I’m saying is that there is a genre (“student writing”) that is not the same as writing we actually value. We’re teaching students to write badly.

I have sometimes taught a course on how high school teachers should teach writing. At one point, I had a class of genuinely good people who were very focused on enforcing prescriptive grammar and the genre of student writing, despite my trying to tell them about the problems with both. I don’t have a problem with people teaching students how to perform the genre of student writing, but I do have a problem with people teaching anyone that that genre is not just about student writing, but about “good” writing. And that’s what this group of students kept doing.

So, I gave them a passage of writing, and asked them to assess it, and they all trashed it. It didn’t have a summary introduction, it didn’t start with a thesis, it didn’t have paragraphs that began with main claims. They agreed that it was badly written. And then I told them that they were the high school teacher who told James Baldwin he was a bad writer.






“I sent you a rowboat:” Prosperity gospel and throwing others into the flood

chart of deaths from covid
https://coronavirus.1point3acres.com/en?fbclid=IwAR0ooEsBuC0WlYcZ3byJ1Sz7CA2WfFEuMSYp3rkvPuMHNDiN0otLnErBRA4

The fundagelical Governors of Mississippi and Alabama have decided to resist expert recommendations about COVID-19, with the Governor of Mississippi going so far as to prevent any cities or counties from enacting policies grounded in expert opinion. And many people are shocked that governors would reject expert opinion, but, from within those governors’ imagined world, it makes perfect sense.

I’ve spent a non-trivial amount of time arguing with fundagelicals, and they are yet another set of people who argue so badly that their consistent inability to argue well should make them reconsider their beliefs. But they don’t, because they think they’re arguing well.

They believe that they’re arguing well because they are making claims that they feel certain are true, and they can find evidence to support those claims. [As a side note, I’ll say that far too many high school and college courses in argumentation would confirm that sense of what it means to make a good argument.]

What fundagelicals can’t see (nor can other people who reason badly) is that their way of reasoning is one that even they reject as a bad way to reason, but they reject it only when other people reason that way.

For the sake of argument, I’ll stick with fundagelicals, but this toxic approach to deliberation is all over the political spectrum (and also slithers through other fields in which people make bad decisions, such as people who keep having disastrous relationships that don’t make them rethink their way of thinking about relationships).

Fundagelicals believe that everything about your life can be changed if you have enough faith. New Age grifters who have killed people advocate that narrative too, as do get-laid-quick and make-money-fast grifters. Nazis also made that argument. So did Maoists. And Stalinists.

Fundagelicals believe that Scripture is not just soteriological, but politically eschatological. That is, many Christians believe that Scripture tells us about the spiritual journey we as individuals must make (soteriology). Fundagelicals believe that Scripture tells the story of political history (political eschatology). For people who read Scripture as eschatological, Revelation is neither a time-specific political allegory, nor a celebration of individual faith, but a perfectly accurate narrative of what is yet to come. The notion that Revelation is a codebook that, if we read it correctly, will tell us when the world is ending, is much more controversial than many people realize.

Fundagelicals have an oddly flat reading of Scripture—Scripture means what it seems to mean, as long as that meaning supports the political agenda they now have. Thus, when conservative Christians supported slavery and segregation, they cheerfully dismissed “Do unto others” (fundagelicals still evade that one) and the very clear rules about treatment of slaves, and they equally cheerfully insisted on odd readings in order to justify racism. In my experience, fundagelicals opt for the literal reading, except when they don’t—there is no coherence to their exegetical method, except political. That is, when reading literally gets them the “proof” they want, they read literally; when it doesn’t, they read metaphorically (or dismiss the passage as a cultural blip).

For instance, arguing for Hell on literal grounds is more vexed than many people realize, and, so, people who want to argue for it have to read a fair number of verses in a non-literal way.  They’re literal (to the English translation, a serious problem when you’re talking about a literal reading) when it comes to “homosexuality” (neither a word nor concept that is in Scripture), but dismiss as “cultural” the equally clear proscriptions regarding women wearing makeup, people wearing mixed fibers, the death penalty.

When I’ve argued with fundagelicals about this point, the argument gets hung up at exactly the same place. For instance, on the issue of homosexuality, they cite the clobber verses, and I give them various links showing they’re relying on vexed readings of those verses, and they say, “That is what it says.” (In English, of course, not in Greek. Let’s set that aside.) I point out that they are citing one item from a list of behaviors that are condemned, and those lists always include behavior they allow, such as divorce, women wearing makeup in church, wearing mixed fibers, or benefitting from money loaned with interest). And they say, “Those are just cultural values of that moment.” And, then I say, “So were the practices you translate as ‘homosexuality,’” and they say, “No, those are universal.” They can’t say why they’re universal without engaging in a kind of simultaneously narcissistic and circular way of reasoning: they’re universal because I think they’re universal, and these other things are culturally specific because I think they’re culturally specific.

They can’t identify an exegetical method that they apply consistently, other than the narcissistic and circular one, because they read Scripture in a politicized and narcissistic way: they approach Scripture expecting to see their political agenda confirmed, and so they treat every interpretation/meaning as real that confirms their political agenda, and dismiss every one as just an appearance that doesn’t. In rhetoric, this is called dissociation. In psychology, it’s considered an instance of “motivated reasoning,” and most of us do it. I’m saying that, in my experience, fundagelicals–again, like many people–won’t admit that’s what they’re doing, and that is the problem.

That their exegetical method is politicized from the beginning is why they accuse their opponents of politicizing Scripture. Projection is the first move of people who can’t reflect on their own processes.

This discussion of exegesis might seem a long way from why fundagelicals are dismissing the advice of experts (except when they aren’t), but it isn’t.

What I’m saying is that fundagelicals are yet one more instance of conservative Christians for whom being conservative matters more than being Christian. Here’s the best evidence that they are in-group first, and thoughtful exegesis second: when people try to criticize their reading of Scripture, they dismiss those criticisms on the grounds that the critics are bad people. That isn’t Scriptural exegesis—that’s demagoguery. That’s an admission that they are thinking about protecting their political in-group more than being honest and reflective of their methods of reading Scripture.

Or, tl;dr: they cherry-pick data. They cherry-pick Scripture; they cherry-pick “science.” And, just as their interpretation of Scripture is not defensible as anything other than “whatever supports our political agenda is true,” regardless of method, neither is their way of citing “science.” They’ll cite a bad study as true because it agrees with them, while critiquing a study with the same (or better) methodology—on methodological grounds (Family Research Association is a great site for seeing this contradiction).

This cherry-picking of data while pretending to have a principled stance is not restricted to evangelicals. (Do not get me started about raw foodies.) But their cherry-picking of data is important because fundagelicals are politically powerful right now, despite their perpetual and ridiculous whingeing about being victims (talk about “snowflakes”—another instance of projection).

What I think a lot of non-fundagelicals are having trouble understanding about our current political moment is the dominance of prosperity gospel (an example of the “just world model”).

Prosperity gospel is a non-falsifiable interpretive frame that says that, if you have enough faith, you can get anything you want. It’s non-falsifiable in two ways. First, if you don’t get what you want, then you didn’t have enough faith—there’s no way to disprove this explanation of success/faith. Second, if something happens that simply cannot be explained as a lack of faith, it’s just a temporary setback, just God testing our faith. (Although most people tie it back to 19th century movements, it’s close to the muckled 17th century New England Puritan doctrine of signs.)

Just to be clear: I am a person of faith, and I think faith enables us to do extraordinary things. It also enables us to put one foot in front of another through difficult times, because faith is the belief that things will turn out all right. I also tithe. But I don’t believe that faith guarantees us the outcome we want—that we are entitled to all of our desires being fulfilled by having perfect faith (or giving enough money). Such a belief substitutes our will (our desires, really) for God’s; that seems blasphemous to me.

I’ve also seen that kind of faith, not in God, but in our ability to get our way if we have enough faith, do great damage. It’s the old joke about the person of faith who refused to heed warnings, with the “punchline” of a drowned person of great faith asking God, “Why did you let me drown–I had perfect faith in you?” and God answering, “I sent you a warning, a rowboat, a motorboat, and a helicopter–what more did you want?”

Paradoxically, the just world model, especially when coupled with the notion that we can get whatever we want if we have enough faith, leads to tragedy. People don’t help others because we blame the victims. We ignore systemic failings on the assumption that any problem is always a failure of individual faith. Thus, people who believe in the just world model tend not to recognize systemic problems like poverty, racism, and sexism, and they don’t support systemic solutions, such as communities supporting infrastructures (good schools, roads, healthcare). The just world model increases us v. them thinking. The paradox of the just world model is that it leads to an unjust world—whether religious or not (as mentioned above, the idea that you can get whatever you want if you have enough faith/will/confidence is the basis of philosophies as diverse as Libertarianism, Nazism, get rich quick schemes, pseudo-mystical success schemes).

Once a person or community has stepped into this ideology, it’s hard to get out. Rejecting the rowboat and helicopter becomes how one demonstrates faith. The difference between our situation and the guy who rejects the flood warnings is that he drowned; if we sit on the roof, and reject the epidemiologists, public health experts, social distancing, and ventilators to demonstrate our individual (or church’s) faith, we aren’t the only ones who drown. We may not drown at all. But health workers will. Police, EMT, the vulnerable.

We aren’t just sitting on the roof risking our lives. We’re throwing others into the water. Being Christian should mean we care for the vulnerable—we’re being given that chance. God sent us the epidemiologists; let’s listen to them.

What do Followers want?

Eichmann on trial in Israel

I’ve known a lot of people, both personally and virtually, who were Followers. Sometimes they changed churches multiple times, sometimes philosophies, political ideologies, identities (like the guy I knew in college who flailed around from preppie to Che-Marxist to tennis fanatic—each with an entirely new wardrobe), with each new identity/community the one to which they were fully committed.

I’m not saying there’s anything wrong with changing wardrobes, identities, churches, even religions. People should change. What made (and makes) these people different is how they talk(ed) about each new conversion—this group was perfect, this group made them feel complete, this group/ideology answered all their questions, gave meaning to their lives, was something to which they could commit with perfect certainty.

That they went through this process multiple times, and kept failing to find a community/ideology that continued to satisfy them never made them doubt the quest, nor doubt that this time they found it. And I thought that was interesting. Each of these people was just someone in my circle of acquaintance for a few years, and I eventually lost track of them—in three cases because they’d joined cults.

There were (are) a lot of interesting things about these people, not just that their continued disenchantment never made them reconsider their goals, but also that they didn’t see themselves as followers at all, let alone Followers. They saw themselves as independent people, critical thinkers, autonomous individuals of good judgment—who were continually searching for, and temporarily finding, a group or ideology that enabled them to surrender all judgment and doubt. That’s a paradox.

In the mid-thirties, Theodore Abel, an American sociologist, offered a prize of 400 German marks for “the best” personal narrative of someone who had joined the Nazis prior to 1933—essentially a conversion narrative. In 1938, he published a book about it. The Nazis sounded like my various acquaintances, not in terms of being Nazis, but in terms of simultaneously seeing (and representing) themselves as autonomous individuals of purely independent judgment who were seeking a totalizing group experience—one that demanded pure loyalty and complete submission.

They were Followers.

In the Platonic dialogue Gorgias, Socrates gets into an argument with two people who want to study rhetoric so that they can control the masses and thereby become powerful, perhaps even a tyrant. When Socrates asks why, one of them answers, more or less, “For the power. D’uh.” And Socrates says, “Does the tyrant really have the power?” Socrates points out that the tyrant is, in a way, being controlled by the masses he’s trying to control. He can’t, for instance, advocate what he really thinks is best, but only what he thinks his base will go along with.

It’s a typically paradoxical Socratic argument, but there’s something to it. The tyrant can only succeed as long as he (or she—not an option Socrates and the others considered) gives the Followers what they want. In other words, if we care about tyrants, we should see the source of power as Followers. Instead of asking why tyrants (or demagogues) do what they do, we should ask what Followers want. So, what do Followers want?

Here’s the short version. They want a leader who speaks and acts decisively for them, who is a “universal genius,” and whose continued success at crushing and shaming opponents not only gives them “agency by proxy” in that shaming and crushing, but confirms the followers’ excellent judgment in having chosen to follow, and who is supported by total loyalty.

That’s the short version. Here’s the longer.

They want a leader who is a “universal genius,” not in the sense of a polymath (someone trained or educated in multiple fields), but in the sense of a person who has a capacity for seeing the right answer in any situation, without training, or expertise, or prior knowledge. This genius can lecture actual experts on those experts’ fields, correct their errors, see solutions they’ve overlooked simply because of his extraordinarily brilliant ability to see.

Followers’ model of leadership assumes that there is a right answer, and that’s something else that the followers want—the erasure of a particular kind of uncertainty. They don’t mind the uncertainty of a gamble, as long as the leader expresses confidence in his ability to succeed at what is obviously to him the right course of action. They mind the uncertainty of a situation that might not have a single right answer, or in which an answer isn’t obvious to them, or, even more triggering, in which the right answer isn’t obvious to anyone. That anger and anxiety are heightened if they are responsible for making the choice, since now they face the prospect of being shamed if they turn out to be wrong.

Avoiding shame is important to Followers, and they often associate masculinity with decisiveness. Not just the decisiveness of making a decision quickly (they don’t always require quick decisions), but of deciding to take action, to do something powerful, dramatic, clear. Followers like things to be black and white, and they want a leader whose actions are similarly stark, and who advocates those actions in similarly stark terms. Followers don’t like nuance, hedging, or subtlety, but that doesn’t mean they reject all kinds of complexity.

They don’t mind complexity of a particular kind. Followers can enjoy it if the leader explains things in ways that don’t quite make sense, or endorses an incoherently complicated conspiracy theory—the leader’s ability to understand things they can’t confirms their faith that he is a genius. That the leader is confidently saying something that doesn’t quite make sense is taken by Followers to mean that, while things might seem complicated to the follower, they are clear to the leader. Thus, the leader has a direct connection to the ways of the universe–universal genius. Not quite making sense confirms that perception of the leader as a person who clearly sees what is unclear to others, but hedging or nuance would suggest that the leader does not perceive things perfectly clearly, and that is unacceptable in the leader.

Followers’ sense of themselves as people with excellent judgment—autonomous thinkers who are completely submitting their judgment to the leader–requires that the leader always be confident, clear, and describe issues in black and white terms.

This part is hard to explain. These Followers I knew kept looking for a system of belief that would mean they were not only never wrong, but never unsure, never in danger of being wrong, of being shamed. And, like many people, they equated clarity with certainty and certainty with being right, and they equated nuance and hedging with uncertainty, and uncertainty with being more likely to be wrong. Thus, a leader who says, “This is absolutely true”–even if it isn’t–is more trustworthy than one who hedges, because the first leader has more confidence. Being confident is more important than being accurate. (“It’s a higher truth,” Followers tell me.)

It’s interesting that, sometimes, a leader can take a while to make a decision, but, when he does make it, he has to announce his decision in unequivocal terms and enact it immediately, since that signals clarity of purpose and confidence. To put his decision into the world of deliberation and disagreement would be to allow the decision of a genius to be muddled, compromised, and dithered over. Followers mistake quick action justified by over-confidence for a masculine and decisive response. They mistake recklessness for decisiveness–because they admire recklessness, since it signals faith, will, and commitment.

Followers need the leader to give them plausible narratives that guarantee success through strength, will, and commitment. So, what happens when the leader fails? At that point, we get scapegoating and projection. Oddly enough, Followers can tolerate complicated conspiracy narratives, even ones they can’t entirely follow, as long as the overall gist of the narrative is simple: we are good people entitled to everything we want, and They are the ones keeping us from getting it. We are blameless.

Followers don’t care if the leader lies. They like it. They don’t feel personally lied to, and they like that the leader can get away with lying—they admire that degree of confidence, and the shamelessness. They want a shameless leader. They want a leader who isn’t accountable; they want one without restraints. They don’t see the leader engaging in quid pro quo, violating the law, or even openly lining his pocket as a problem, let alone corruption. They think that’s what power is for. And, as with the lying, they admire the shamelessness.

They also like if the leader says ridiculously impossible things; they like the hyperbole. They think it signals passionate commitment to their cause because it is unrestrained.

They don’t expect the leader to be loyal to individuals, although the leader demands perfect loyalty from individuals, and Followers demand perfect loyalty from the leader’s subordinates. If the leader’s aides betray the leader in any way, such as revealing that the leader is incompetent, Followers are outraged, even if everything the aide says is true.

This part is also hard to explain, so I’ll give an example. A Follower I knew was on the edges of a cult run by a man who called himself various things, including Da Free John. At one point, Da Free John had followers who came forward and accused him of, among other things, egregious sexual harassment. Those accusations inspired my friend to get more involved with the cult. When I asked about the accusations, he was angry with the people who had made the accusations. His argument was something along the lines of, “They knew what they were getting into, and they betrayed him.” In other words, as far as I could tell, he was willing to grant the sexual harassment, but blamed the victims, not just for being victims, but for being disloyal enough to complain about it.

Albert Speer was condemned for his disloyalty, as though he should not have admitted to any flaws in Hitler (I think condemning him for his being a lying liar who lied is reasonable criticism, but not disloyalty). Victims of abuse by church officials are regularly condemned for their disloyalty, as though that’s the biggest problem.

Followers pride themselves on their ability to be loyal, and they will remain loyal as long as the leader continues to be a beacon of confidence, certainty, decisiveness. That commitment can even withstand some serious failures on the part of the leader, for a few reasons. The most important, mentioned above, is that they refuse to listen to any criticism of the leader, even if made by informed people (such as close aides). Followers only pay attention to pro-leader media, and they dismiss as “biased” any media (or figure) critical of their leader. This dismissing of criticism of the leader as “biased” is not only motivism, but ensures that Followers remain in informational enclaves, ones that will spiral into in-group amplification (aka, “rhetorical radicalization”).

If the leader does completely fail, they are likely to blame his aides, rather than him (as happened with a large number of Germans in regard to Hitler). To admit the leader was fallible would be to admit that the Follower had bad judgment, and that’s not acceptable.

So, what I’m saying is that Followers are people who put perfect faith in a leader, a faith that is impervious to disproof, and they refuse to look at any evidence that their loyalty might be displaced. The conventional way to describe that kind of relationship is blind loyalty, but they don’t think they have blind loyalty (they think the out-group does). They think their loyalty is rational and clear-eyed because they believe they have the true perception of the leader, one that comes from an accurate assessment of his traits and accomplishments. They believe the leader is transparent to them.

But, if this isn’t blind loyalty, since they refuse to look at anything outside of their pro-leader media, it’s certainly blindered loyalty. And it generally ends badly.

Narcissism and bad political outcomes

Like many teachers trying to shift to online teaching and still provide a useful experience for students, I’ve got way too much to do this week, and so I don’t have the time I’d like for writing about Trump’s putting a “strong” economy over the health of the people he is supposed to care for. I don’t even have the time to point out that his first moves were not to protect the economy, but the stock market. (They are not the same thing.)

All I have time for is to make a few quick points that others have already said. There are lots of ways that Trump could help the economy that would, in fact, raise all boats (something that boosting the stock market does not do)—FDR figured them out. This isn’t about fixing the economy; this is about fixing the perception of the economy (since so many people do associate “the economy” with “the stock market”).

I’m about to give an example of how that way of thinking worked out for another world leader, in order to make the point that it isn’t a good way to approach the situation. That is, I want to use an example to make a claim about process: whether this way of approaching a situation is a good one.

My rhetorical problem is that a lot of people (especially authoritarians) have trouble making that shift to the more abstract question of process. It has nothing to do with how educated someone is—I’ve known lots of people with many advanced degrees who couldn’t grasp the point, and many people with no degrees who could. It’s about authoritarian thinking, not education. (Expert Political Judgment and Superforecasting are two books about this phenomenon.)

To complicate things further, authoritarians (who exist all over the political spectrum) not only have trouble thinking about process, but understand an example as a comparison, and a comparison as an analogy, and an analogy as an equation.

For instance, imagine that you and I are arguing about whether Chester’s proposal that we pass a law requiring that everyone tap dance down the main street of town is a good one, and you point out that the notoriously disastrous leader of the squirrels, Squirrely McSquirrelface, passed a similar law, and it ended disastrously. If I’m an authoritarian, then I’ll sincerely believe that you just said that Chester is Squirrely McSquirrelface, and, in a sheer snitfit of moral outrage, I will point out all the ways he isn’t. For extra points, I will accuse you of being illogical.

All that I will have thereby shown is that I don’t understand how examples about processes work.

I’ll give one more example. I often get into disagreements with people about “protest voting” (or “protest nonvoting”). I think that’s a bad way to think about voting, since I don’t know of any example of a time it’s worked to get the kinds of political changes the people who advocate it want. And, instead of providing me with examples, the people with whom I’m disagreeing dismiss me for not having sufficient faith (a Follower move). They only argue about process deductively (from a presumption that purity of intent is not only necessary but sufficient for a good outcome—a premise I think is indefensible historically).

So, let’s get back to the question of privileging the stock market and “the economy” over what experts on health say. Here’s an example of that way of thinking. (There are a lot, but I’ll pick one.)

I think Trump, who didn’t want to be President, now can’t stand the idea of not being re-elected, because he is ego first and foremost (as indicated by all his lies, even on stupid stuff, like his height). And he believes that he can’t get reelected if the economy sucks in October. And that’s a reasonable assumption. People will vote against a President (Carter) or party (GOP in 2008) if the economy sucks at that moment, regardless of whether it sucks because the President did the right thing (Carter), or the economy tanked because of processes in which both Dems and GOP were complicit (2008). Hell, people vote on the basis of shark attacks.

There are many problems with Trump, but one is that he sincerely believes he is a “universal genius”—a person so smart he can see the right course of action, regardless of having no training in it. This is important to his sense of self, and that’s why he keeps firing people who make it clear that they are more knowledgeable than he is about anything. Not only can’t he be wrong, but he can’t have anyone in his administration smarter than he is.

This isn’t the first administration like that. It doesn’t end well. It can’t end well. The notion of “universal genius” is nonsense. Intelligent (as opposed to raging narcissist) people know that they don’t know everything, and so need people around them who know more than they do about all sorts of things.

Intelligent people know that disagreement is useful. Raging narcissists fire people for disloyalty if they dissent, and then they make bad decisions. Firing people for “disloyalty” (i.e., dissent) doesn’t play out well in the business world (e.g., Enron, Theranos) in the long run (although it can in the short run), nor does it in the political world, nor the military.

Making decisions about the economy purely on the basis of how it will play out for a regime also doesn’t lead to good long-term outcomes. How Democracies Die shows how authoritarians shift from democracy to authoritarianism through disastrous manipulation of the economy.

There’s another example.

Germans, on the whole, never really admitted that they’d lost WWI. The dominant narrative was that they were winning, and could have won had people been willing to stick it out, but the willingness to stick it out collapsed for two reasons. First, there was the “stab in the back” myth—the notion that Jewish media lied to the Germans and said they couldn’t win. Second was the narrative that people on the homefront lost hope because they were suffering in basic ways, such as food, housing, and coal. And they were.

It’s important to note that the dominant narrative was wrong on both points. There wasn’t a stab in the back, and Germany didn’t lose the war because of homefront morale. The homefront morale could have stayed strong, and they would still have lost. It just would have taken longer and cost even more lives.

But Hitler believed that narrative, and both its points.

As Adam Tooze shows in his thorough book (that I can’t recommend highly enough), Hitler’s economic (and military) decisions were gambles. And those decisions were also at odds. He wanted to prevent the stab in the back by, as much as possible,  ensuring that his base was comfortable. He made bad decisions about the economy because he wanted to preserve his support and win a war he probably shouldn’t have taken on.

Hitler’s way of deliberating was bad. He wanted outcomes he wasn’t smart enough to realize were incompatible. And by “smart enough,” I mean “willing to listen to people more expert than he.” Hitler’s rejection of his military experts’ advice is infamous, as is his firing anyone who disagreed with him.

What matters about Hitler, from the perspective of thinking about process, about the way an administration or leader deliberates, is how he decided. As Albert Speer said, Hitler sincerely believed himself to be a universal genius, and the paradoxical consequence was that he only allowed around him third-rate intellects. Hitler was obsessed with world domination and purifying the Germans. But he was even more obsessed with being the smartest person in the room, with having around him people who flattered him, with silencing dissent (on the grounds that it was disloyalty), with firing anyone who actually knew more than he did. He hired and fired on the basis of loyalty, not expertise.

That ended with people huddled in bomb shelters like the one in the photo.

When has it ended well?

Research on businesses says it doesn’t end well; I can’t think of a single historical example when it’s turned out well.

I’m making a falsifiable claim. I’m saying that Trump’s way of handling decision making is bad, and I’m using Hitler as an example.

When I pose this question to people who support the model of a “universal genius” who silences dissent and relies on his (almost always his) gut instinct, and who themselves only get their information from in-group media, the response is always some version of “Hey, I’m winning—screw you for asking.” They say I’m biased for criticizing Trump, Obama was worse, abortion is bad. They say, in other words, that because they like what they’re getting, they don’t care that this has never worked out well in the past.

Guess who else thought that way.

What they don’t say is “here is an example of a leader who claimed to be a universal genius, who fired anyone who criticized him, who wouldn’t allow anyone in the room who was more an expert than he, who made every issue about him, who lied about big and small things, who used his power to reward people and states who were loyal to him and punish ones who weren’t, who openly declared himself above the law, and it worked out great.”

That’s because there is no such example.

In other words, they can’t come up with an example of a time when an administration that reasons that way has been successful. They are committed to a way of deliberating that has never worked. They are committed to the way in which people supported Hitler.

When Germany was finally conquered, 25% of the population thought it was right to have followed Hitler, and that he had been badly served by the people below him. 25% of the German population were so committed to believing that Hitler was their savior that no evidence could prove them wrong.

I’m not saying that Trump is Hitler.

I’m saying something much more troubling: I’m saying that the people who support Trump reason the way that people who supported Hitler reasoned. I’m not talking about Trump. I’m talking about his supporters. I’m not saying they would have supported Hitler. I’m asking them to consider whether their way of supporting Trump is a good way to support a political figure.

This is two-part: can they give examples of times when this kind of support for this kind of leader has worked out well? And, can they identify the evidence that would persuade them their support for Trump is wrong? What is it?

And, if their answer is that there is nothing that would make them question their loyalty to Trump, and nothing that would persuade them to venture outside of the pro-Trump media, then they aren’t just admitting their political position is irrational, but they’re committing to a way of thinking about politics that has never ended well.

It wasn’t that long ago that that way of thinking about politics ended up with Germans huddled in rubble.

You can’t know what you don’t know because you don’t know that you don’t know it

[Image: someone texting and driving, from https://www.safewise.com/faq/auto-safety/danger-texting-driving/]

In another post, I mentioned that we don’t know what we don’t know.

This is the central problem in rational deliberation, and why so many people (such as anti-vaxxers) sincerely believe their beliefs are rational. They know what they know, but they don’t know what they don’t know. People have strong beliefs about issues, about which they sincerely believe they are fully informed, because all of the places they go for information tell them that they’re right, and those sites provide a lot of data and information (much of which is technically true). But that information is often incomplete—out of context, misleading, outdated, not logically related to the policy or argument being proposed.

There is a way in which we’re still the little kid who thinks that something that disappears ceases to exist—the world consists of what we can see.

I first became dramatically aware of this when I was commuting to Cedar Park, or Cedar Fucking Park, as I called it. I saw people talking on their cell phones drift into other lanes, and other drivers would prevent an accident, and the driver would continue with their phone call. They didn’t know that they had been saved from an accident by the behavior of people not on the phone. They thought that they were good at talking on the phone and driving because they never saw themselves in near-accidents. They never saw those near accidents because they were distracted by their conversation.

I have had problems with students who think they’re parallel-processing in class—who think they can play a game on their computer and still pay attention to class—but they can’t. We really aren’t as good at parallel-processing as we think. The problem is that the students would miss information, and not know that they had, because, like the distracted drivers, they never saw the information they’d missed. They couldn’t—that’s the whole problem.

I eventually found a way to explain it. I took to asking students how many of them have a friend whom they think can safely drive and text at the same time—that, as they’re sitting in the passenger seat, and the driver is texting and driving, they feel perfectly safe. None of them raise their hands. Sometimes I ask why, and students will describe what I saw on the drive to Cedar Park—the driver didn’t see the near-misses. Then I ask, how many of you think you can text and drive safely? Some raise their hands. And I ask, “Do any of your friends who’ve been passengers while you text and drive think you can do it safely?”

That works.

For years, I’ve begun the day by walking the dogs up to a walk-up/drive-through coffee place (in a converted 24-hour photo booth—remember those?), and used to get there very early while it was still dark. There was one barista who didn’t notice me (the light was bad, in her defense). I would let her serve two cars before I’d tap on the window. She would say, “Be patient! I’m helping someone!” She sincerely thought that I arrived at the moment she noticed me, and immediately tapped on the window. It never occurred to her that I was there long before she noticed me.

When I talk to people who live in informational enclaves, and mention some piece of information their media didn’t tell them, they’ll far too often say something along the lines of, “That can’t be true—I’ve never heard that.”

That’s like the bad drivers who didn’t notice the near misses and so thought they were good drivers.

That you’ve never heard something is a relevant piece of information if you live in a world in which you should have heard it. If, however, you live in an informational in-group enclave, that you’ve not heard something is to be expected. There’s a lot of stuff you haven’t heard.

What surprises me about that reaction is that it’s generally an exchange on the internet. They’re connected to the internet. I’ve said something they haven’t heard. They could google it. They don’t; they say it isn’t true because they haven’t heard it.

That they haven’t heard it is fine; that they won’t google it is not. And, ideally, they’ll google in such a way that they are getting out of their informational enclave (a different post yet to be written).

In that earlier post, apparently about anti-vaxxers but really about all of us, I mentioned several questions. One of them is: If the out-group was right in an important way, or the in-group wrong in an important way, am I relying on sources of information that would tell me?

And that’s important now.

If you live in informational worlds that are profoundly anti-Trump, and he did something really right in regard to the covid-19 virus, would you know? To answer “he couldn’t have, or, if he did, it was minor” is to say no.

And the answer is also no if you’re relying on the arguments that Rachel Maddow says his supporters are making, or on your dumbass cousin on Facebook. Unless you deliberately try to find pro-Trump arguments made by the smartest available people, you wouldn’t.

If you live in a pro-Trump informational world, and Trump really screwed up in regard to the covid-19 virus, would you know? To answer “no, or perhaps there were some minor glitches with his rhetoric” is to say no.

And the answer is also no if you’re relying on the arguments that Fox, Limbaugh, Savage, and so on say critics of Trump are making, or on the dumbass arguments your cousin on Facebook makes. Unless you deliberately try to find arguments critical of Trump made by the smartest available people, you wouldn’t.

People are dying. We need to know what we don’t know, and remaining in an informational enclave will make more people die.