Batboy and democratic deliberation

[Image: Batboy]

One of my several useless superpowers is picking the wrong line, especially at the grocery store. And it isn’t because the people ahead of me are jerks trying to pay with pennies or something; it’s just that the moment I get in that line is the moment that bar codes are wrong, or the computer can’t handle some kind of payment reasonably, or toads start falling from the ceiling. Okay, not that last one, but close enough.

And, because of this really sucky superpower, I have spent a lot of time looking at, and sometimes reading, magazines in the checkout line. And The National Enquirer had a kind of bad car crash fascination for me. It seemed to me the Etch-a-Sketch of news sources. The Etch-a-Sketch, if you don’t know, was a really fun device on which you could create various drawings (within limits) and then shake it and the previous drawing would disappear.

That, it seemed to me, perfectly described The National Enquirer. Every issue wiped clean the slate of a previous one. And, yet, every issue presented its information as obviously true. I remember—even read—the issue when a major star died of cancer. The previous week had the headline that he had been completely cured of cancer through a miracle treatment! The issue announcing his death didn’t mention that previous error.

That failure to admit error is important because admitting error is at the heart of effective decision-making—whether you’re thinking about what car to buy, what media you consume, how you behave at work, what kind of relationship you want, what movie reviewers you should believe, how you treat others, and how you should vote. You can’t get better unless you admit you were wrong. If you never admit you have made a mistake, then you’ll keep making that mistake.

If you’re willing to admit you’ve made a mistake, that’s great. But if you treat that mistake as a one-off, and not really relevant, then you’re still not learning from your mistake.

The point is not just that The National Enquirer was wrong about that actor, but that it was wrong to present its information as certain. Learning from mistakes doesn’t just mean that we learn that this claim was wrong (that actor had not been miraculously cured) but that our source is imperfect and its information is not certainly true.

When I mention this to students, about various sources (all over the political spectrum), some of them will say something along the lines of, “Well, yeah, but they got this right.” When I argue with people (again, all over the political spectrum) who are citing completely false information (claims on which their source has been shown to be completely wrong), I can sometimes get them to admit that error, but they still intend to rely on that source. They still refuse to admit their source is unreliable because, they say, “they got this other thing right.”

And that’s assuming I can even get them to admit that their source was wrong. Too often, they’ll refuse to look at any source that says their favored source is wrong simply on the grounds that it disagrees with them. That’s kind of shocking if you think about it.

Here is a person claiming something is true, and they refuse to consider any evidence that they might be wrong, on the grounds that the source is biased because it says they might be wrong. It’s a perfect circle of ignorance.

Good decision-making isn’t about getting some things right; it’s about being willing to admit to being wrong. No matter what your profession, if you go through that profession refusing to consider any criticism of you, your actions, and/or your policies on the grounds that only “biased” people would criticize you, you’re running your business into the ground.

Imagine, for instance, being a doctor. You were trained to believe that infections are the consequence of miasma. Would it be reasonable for you to refuse to read any studies that said that you were wrong about infections? Would you be a good doctor if you refused to pay attention to anything that complicated or contradicted your understanding of infection?

You’d be a lousy doctor.

You’d be a lousy doctor not because you’re a bad person, or because you mean to hurt people, or even because you’re stupid, but because being right means being willing to be wrong. Far too many people reason on the basis of in-group loyalty (I’m right because this seems right to me, and everyone like me agrees about this), and won’t admit that they’ve ever been wrong, let alone that they rely on sources that have been wrong. There are major media sources that regularly engage in the equivalent of “this actor is cured and whoops, now he’s dead but we’re still a reliable source!” And the consumers of those sources never conclude that the persistent inaccuracy of a source is a reason to doubt its reliability.

And that is what is wrong with our current state of public discourse. Too many people aren’t willing to admit to being wrong, and if they do grant a fact or two here and there, they aren’t willing to give up on sources.

It doesn’t matter where on the political spectrum your sources are; what matters is
1) Are you getting your information from a source that links to opposition sources (that is, is the source so confident in its representation of the opposition that it gives you direct access to the opposition’s arguments, instead of its own mediated version)?
2) Do your sources admit when they’re wrong, and admit corrections clearly and unequivocally, without scapegoating? A source that never admits error is not a more reliable source—it’s bigoted propaganda;
3) Does your source make falsifiable claims? That is, does your source spend all its time ranting about how evil the other side is, rather than making falsifiable claims about what your side will do?

Again, imagine that you’re a doctor, or that you’re a patient seeing a doctor, and you’re trying to decide whether to get surgery, try medications, or perhaps make major lifestyle changes. Would you think that the way the pundits on Fox, or Rachel Maddow, or various tremendously popular people on YouTube argue would be a good way to make a decision about your health?

They all argue different things, but they all argue the same way: the correct course of action is obvious, and everyone who disagrees is spit from the bowels of Satan, and if you’re a good [in-group] member, you’ll make this choice and refuse to listen to anyone who says it’s the wrong choice.

Refusing to listen to out-group sources, dismissing as biased anyone who tells you that you’re wrong, believing that the only problem is that we have to commit more purely to the in-group—those are terrible ways to make decisions, in every aspect of a life.

Imagine that you’re in a hospital bed, and you’re presented with a variety of options, or you’re a surgeon, and you’re trying to decide what to do, and a doctor comes to you and says, “I support Trump [or Warren, or Biden, or whoever], so this is the right kind of surgery for you.” Or, perhaps, “I’m a Republican, so I’m going to choose this surgery.” As a patient or surgeon, you’d recognize that’s a terrible way to make decisions. A good surgeon would assess the choices regardless of politics; no even remotely competent surgeon would make a decision about a surgical practice on the basis of the political affiliation of the people advocating this practice versus that.

Since we recognize that loyalty to party would be a terrible way to make decisions about policies regarding our bodies, why not admit it’s equally terrible when it comes to policies about our body politic?

The deficit model of education and unintentional racism

If you stop someone on the street, and ask them about what it means for something to be racist, it’s pretty likely that they’ll tell you that racism is what racist people do, and that racist people are people who consciously hate everyone of some race, or, perhaps, everyone of every other race. Racists are evil, deliberately evil, intentionally evil. Racist acts are acts that are done by people who intend to be racist.

If you’re reasoning from within this (inadequate) understanding of racism, as long as you do not consciously hate every single member of some race, or if you don’t intend to do them harm, or you do not intend to be racist, you aren’t racist. And, therefore, you didn’t do anything racist. You are not evil. So, whatever you did that someone is saying is racist is now off the table of consideration, since the real issue is whether you’re a mustache-twirling racist who gets up in the morning and thinks about how to harm people of other races. You aren’t. You don’t even have a mustache.

Therefore, anyone calling you racist is engaging in defamation of character, since they’re saying you’re deliberately evil, and, if someone calls you racist, your losing your temper is justified, since what they did to you is so offensive.

That’s a little muddled, but it’s how far too many arguments about racism play out:

Chester: “You did a racist thing.”
Hubert: “You’re calling me a racist. And here are all the ways I’m a good person (and therefore not racist). You’re the real racist here for making it an issue of race.”

I’ve seen this flawed understanding of racism, and then the same domino effect of fallacious reasoning (I didn’t do anything racist because I’m not a racist because I sometimes do non- or anti-racist things) all over the political spectrum, and on scholarly mailing lists, at meetings of scholars, at faculty meetings—so, this problem isn’t just something They do.

This common notion of racism is wrong because the issue of a racist world is not usefully reduced to the problem of individuals who consciously feel hostility to members of other races or intend to be racist (nor is racism bad just because racist words “offend” people). There isn’t some binary between racist and non-racist, and, therefore, that a person has done something non-racist doesn’t mean they can never do anything racist. That you hate racism doesn’t mean you are magically immune from doing anything racist. In fact, you can be trying to do something you think is anti-racist, and unintentionally be making things worse.

Take, for instance, the deficit model of education. The deficit model of education says that some students struggle because they lack things that good teachers should pour into their heads. Rather than presenting students as people who are bringing a lot of knowledge and skills, it describes them as little jars of absence.

Often, that absence is described as a consequence of their coming from a deficient culture. Their cultural (racial) background is inferior to the dominant culture, and so we need to pour into their brains (or drill them on) the things they don’t know, the habits they don’t have. This model means that teachers work with their students from an assumption that they need to pull some students up to the mean, and, too often, that assumption is racist, even when the teacher is trying to do the right thing. I’ve seen teachers respond with so much enthusiasm to the contribution of a student of color that I wanted to crawl under a table and hide.

Basically, the “deficit model” appears to be anti-racist insofar as it’s saying that students of color who are underperforming (or not—they might be performing just fine) need extra care from a white teacher because they lack certain things the (white) teacher can pour into them. The white teacher is trying to save them, so isn’t racist (racists hate people of other races). The white teacher feels compassion for these students and is trying to save them.

But it is racist in so many ways. For instance, it reinforces the racist cultural narrative that all important stories about reducing racism are about how white people use their agency to save POC, and POC are (or should be) grateful subjects of how good white people use their agency. A narrative about people of color who succeed is about the white people who helped them.

A racist act is one that reinforces the racist hierarchies of a culture or society; and racist hierarchies are hierarchies built on socially constructed categories that are claimed to be essential and inherent in groups.

As the Encyclopedia of Educational Psychology (Vol. 1) says,
[T]he deficit model asserts that racial/ethnic minority groups do not achieve as well as their White majority peers in school and life because their family culture is dysfunctional and lacking important characteristics compared to the White American culture. […]

Criticisms of the deficit model are numerous. First, the deficit model is unfair to minority children and their families, focusing the blame on their culture. The deficit model is also inaccurate because it deemphasizes the powerful effects of poverty on the families, schools, and neighborhoods, which synergistically affect academic achievement and occupational attainment. It also strongly implied that White middle-class values are superior. Fourth, the deficit model became equated with pathology in which a group’s cultural values, families, or lifestyles transmit the pathology. Finally, the deficit model has limitations for scholarship because it is too narrow as an explanatory model (i.e., rigidly blames the family) for the academic underachievement of poor minority children. In short, the deficit model’s negative effects are that children were narrowly viewed as “deprived” and their families became “disadvantaged,” “dysfunctional,” and “pathological.”

The deficit model is racist in impact, and not intent. It implies that good teachers would identify students whom they think are deficient (likely to be POC) and try to save them by pouring into their heads the things they lack. Those teachers would mean well, and still be engaging in actions that reinforce our racist culture. Does that mean they’re racist? Yes. Does that mean they wear a hood and burn crosses and intend to be racist? No. Is it useful to identify the problem as their being racist people? No. Does it matter that they’re (unintentionally) promoting a racist narrative about students? Yes.

Are some students lacking skills important for success in college? Yes.

In fact, students should be lacking the skills we intend to teach in our class—otherwise, why are they in the class? Every person, including the teacher, walks into a classroom deficient. A good class makes everyone in that room better. Every person, including the teacher, walks into a classroom with an excess of gifts, skills, and knowledge. Assuming that the deficiencies map onto (or are explained by) race or culture is inevitably going to put white students at an advantage. But teachers with the deficit model are likely to pay more attention to “grammar” errors on the part of students of color (or multi-lingual students) than white students; they’ll over-identify errors (noticing ones they wouldn’t notice in a white student’s paper, identifying as “grammar” errors things that are orthographic, stylistic, or rhetorical). They’ll also explain the errors differently, as the consequence of gaps in knowledge (whereas they’re likely to explain white students’ errors as typos). By conveying the expectation that POC students will perform badly, and need saving, they deny students something all students need: confidence.

Racism isn’t about intent, or whether people have their feelings hurt. Racism is about actions, policies, structures, practices, systems, institutions that reinforce the racial hierarchies of a culture. A narrative that students from certain cultures are deficient reinforces the very narratives and tropes central to our current racist world.

This is no time for compromise

When confronted with a world in which decisions that seemed certainly and obviously right (think of the arguments for invading Iraq as a policy option we should feel certain was correct) turn out to be wrong, things get a little vexed for the people who insisted that what they’d been saying was obviously true. It turns out their claims were not so obviously true after all. In fact, they were false.

Fox and various other media outlets relentlessly promoted the WMD argument, as well as the argument that even Bush said was false (that Saddam Hussein was responsible for 9/11). When those media and pundits were faced with the problem that meeting even the lowest bar of journalistic responsibility would mean admitting they had been either fools or liars, they either stopped talking about it or claimed that Bush was responsible.

Their argument was often a little odd, though. They sometimes said that they couldn’t be blamed for being loyal to a person who had turned out to lie. I think that’s interesting. They were admitting that they saw their job as supporting the Republican Party, and not promoting the truth. The traditional distinction between a medium of party propaganda and a medium that is at least trying to be above faction is the willingness to investigate and report on information that hurts its preferred party.

Fox not only didn’t investigate the WMD claims, but it slammed anyone who said what turned out to be true. It promoted, relentlessly, a claim that was obviously a lie (that Iraq was behind 9/11; even Bush said so), and another set of claims that were deeply problematic (such as the WMD accusation, or various arguments Colin Powell made before the UN). Fox didn’t do that investigation, or, if it did, it gleefully promoted what it knew to be a lie. (At this point, people who are deeply immersed in the tragic narrative that our complicated and vexed political options reduce to the fallacious question of whether Dems or Republicans are better will say, “But the Dems do it too!” Maybe, but the Dems lying doesn’t mean that what Fox said was true. Fox was either irresponsible or dishonest, and no behavior on the part of the Dems changes that. If I rob a bank, the fact that someone else also robbed a bank doesn’t change what I did.)

The failure to investigate was spread all over the political spectrum of media. For instance, Colin Powell’s speech before the UN was deeply problematic, but, instead of doing responsible investigation, or even reporting accurately (such as saying “Powell claimed” when the report instead said “Powell showed”), media endorsed his problematic argument. His argument was so problematic that even the conservative (and pro-invasion) British periodical The Economist noted his case was thin in some places. But, in most media, his argument wasn’t reported as wobbly (and, again, that failure wasn’t limited to any one place on the political spectrum).

Fox and various other media outlets were, from the perspective of someone who studies demagoguery, pretty extreme. It wasn’t just that they promoted various false claims (again, even ones Bush said were false), but that they promoted those false claims as the only thing a reasonable person could believe. The amount of propaganda—that is, the factional promotion of false claims—is one reason that 40% of the American public believed that it should be legal to prohibit dissent from the invasion.

What that means is that 40% of the American public were fine with silencing the point of view that turned out to be right. And that is really worrisome for democracy.

Even more worrisome is that the people I know who were part of that 40% have yet to admit that they were wrong to want to silence the people who turned out to be right. And their having been completely wrong about Iraq didn’t cause them to question the sources that led them astray, nor, more important, the underlying (and false) narrative that the correct course of action is so obvious to good people that dissent should be dismissed as biased or duped.

And that’s my experience with people all over the political spectrum–that people who believe that it is obvious that we should do this thing now, and that everyone who disagrees should be dismissed (as biased, ignorant, duped, dishonest) never admit that their having been wrong in the past is any reason to reconsider their narrative about political decision making.

When people are frightened, faced with uncertainty, or have failed, in-group entitativity increases. Group entitativity is what social psychologists call the sense a person has 1) that their mental categories of kinds of people (Christians, liberals, Texans) are Real; and 2) that their loyalty and commitment to their in-group is essential and unarguable. (Scholars in rhetoric would say that their sense of group identification is constitutive.)

Fear, uncertainty, and failure all increase the belief that The In-Group is Real, and thereby paradoxically encourage people to feel that the solution to our current problem is to purify the in-group. Politically, this means that a failure encourages people to believe that the solution is for the political group not to be a coalition of various interests, but for every member of the in-group, who is Really in-group, to commit more purely to a more pure vision of the in-group.

The train wrecks in public deliberation that I study all have calls for purer commitment to the pure in-group. But, at times, a group’s decision to stop disagreeing and just work together has been effective. So, how do you distinguish between the irrational response (that what we need now is purity, because the in-group has failed) and the reasonable judgment that what we need now is to stop disagreeing?

You don’t do it through deductive reasoning. You don’t do it through the circular reasoning process of deciding that only commitment to your narrative is right, and so only people who agree to that narrative can be right. You reconsider the narrative.

Or you don’t. Instead, you engage in Machiavellian unifying strategies.

The problem is that no political party can win an election without gathering together people with wildly different narratives. So, a party needs what rhetoricians call “a unifying device.” There are a lot (Kenneth Burke listed them pretty effectively in 1939).

The easiest strategy is to unify by opposition to a common enemy. Burke says that Hitler unified Germans (who were a very disparate group) by opposition to the Jews, and, while that was true in Mein Kampf (and Hitler’s ideology generally), when it came to the Nazis’ best electoral successes, it was by unifying voters against “Bolsheviks”—he included any form of socialism in that category (and his base knew he meant Jews). Hitler argued for purifying the community of dissenters.

William Lloyd Garrison made a similar argument in the era before the Civil War. Abolitionists couldn’t count on the government to help them, and they suffered a lot of failures. And so Garrison decided there was one right way to think about the vexed question of whether the Constitution allowed slavery, and he thereby alienated Frederick Douglass.

Hitler was evil; Garrison was not. In other words, the notion that the solution to our problem is to insist on one narrative and crush all dissent is something that both good and bad people share.

Good decision-making requires that, at some point, people stop arguing, and commit to the plan. If my unit has decided that we’re going to issue red balls to all dogs, then we need to go all-in on issuing red balls. But there needs to be an opportunity for the people who think that issuing red balls is a dumb plan to say so. In other words, every good plan makes falsifiable claims.

In the decisions I’ve studied, when communities have made disastrous decisions, or even made good decisions that ended badly, they have gotten feedback that their decisions were bad, and they decided that the response to that setback was increased in-group purity.

Responding to failure by believing that our problem is that our in-group was not pure enough, and that therefore the solution is to be more pure in our ideological commitment, is a natural human bias.

But it isn’t a useful way to deliberate.

Can dogs eat… your head?

The whole process whereby we got Clarence remains a little unclear to me. We had had three dogs for a while, and Duke died. Jim got in touch with a group that did mastiff rescue, and then had his heart stolen by Louis, so we had three dogs. And then the mastiff rescue people got in touch with us. They had a four-year old mastiff. And so we ended up with four dogs.

So, we took the pack—Ella, Louis, Marquis—up to a neutral place where they could all meet (basically a barn). And they all wandered around and sniffed each other and things, and Clarence came up and put his head in my lap, and, well, that was that. We would later find out this was odd—Clarence didn’t like strange dogs, and really didn’t want to be approached by them. He wasn’t always okay with strangers. But he was fine with this pack, and he was fine with us.

Having passed the adoption test (they have to be careful about people who are getting dogs because of dog fighting), Jim and I went up and got him.

Louis was dubious about Clarence, but Louis was pretty much dubious about everyone (and kind of the fun police). And Louis ended up getting along fine with Clarence, basically because Ella was actually in charge of the pack.

When we adopt a new dog, we set up a bed on the floor in some room in such a way that I and the pack are all sleeping together. For Clarence, we set that up in the living room, but it happened to be a night with a major thunderstorm, something that always agitates dogs. And that’s when I discovered that Clarence’s previous owner had, for reasons that remain obscure to me, decided it would be a great idea to teach a 160 lb. dog to jump on people and nom their arms. So, I found myself with a 160 lb. (or maybe 170 since he’s thinner now than he was then, and we’re pretty sure he’s now around 165) dog who was leaping around, especially leaping on me, and trying to hold my arm in his mouth.

I threw the other dogs out of the room and was, for the first time in my life, edging on intimidated by one of my dogs. But it was so clearly high spirits, and—and this continued to be the case—although he was grabbing my arm in his massive mouth and holding it tight, I didn’t feel any teeth. I still don’t know how he did that. He spent the first night across the room from me. The next night he was closer. The third night he was spooning with me.

The storm passed, in both senses.

I’m calling him Clarence, but we hadn’t decided on his name. We were considering the names of various big guys, such as Charlie Mingus, but also guys with wrinkled faces, like Willie Nelson or Levon Helm. Clarence “Gatemouth” Brown was a small, wiry guy, so no resemblance, but Clarence felt like a Clarence (and I do love me some Clarence “Gatemouth” Brown). Also, Brown had, as far as I know, a good and long life, and we wished that for our Clarence. He came to us with the clear signs of having been fed the wrong food for four years, but no real signs of abuse (except that he was afraid of a male carrying something in the backyard).

He was a momma’s boy from the beginning. We discovered that it was cheaper to buy twin mattresses (he required two, on top of each other) than dog beds. We discovered that he got cold at night, so we took to putting a blanket over him when he went to bed. He often created a doggy burrito. It was hilarious.

We discovered that he got lonely at around 5 am, and wanted me to roll off my bed onto his. If he had a bad dream in the night, he would insist that I move over and give him room to get in bed with us. We have a ritual of waking up and cuddling with all the dogs and whatever cats choose to show up first thing in the morning. Clarence would wake us up, and then pretend to be asleep. He would, and I’m not kidding, fake snore.

We often joked, or perhaps it wasn’t entirely a joke, that he would wait till we were asleep, and then unzip his dog suit and emerge as a really empathic and mildly neurotic human.

He loved walks. He hated strange dogs approaching him. He loved stuffed animals, and would cuddle with them. He was intimidated by the cats. If a cat lay on his bed, he would come and get one of us and look sad. Or just lie on the floor and hope the cat would move. He would, if they wanted, let them boop him, but he was always at least a little worried that they would kick his ass.

In other words, he was intimidated by a being that weighed about 6% of what he did. Our cats weigh less than his head. He could have eaten either of our cats in one gulp. But, instead, he was sad and hoped he could get his bed back.

The whole “no strange dogs” thing was fraught. It’s really common for dogs who are totally comfortable with other dogs off-leash to get freaky on-leash. The problem is: if you have a dog who weighs 165 lbs, who can swallow some dogs whole, you can’t risk that he’s got an on- or off-leash distinction. So, after someone lost control of their dog, and it charged Clarence, and he alpha rolled it, Clarence (and Jim and I) spent a day every week with a really good dog trainer, who got him to be okay with other dogs. As long as I wasn’t holding the leash.

As I said, Clarence was a momma’s boy. So, for years, I was the one who held his leash. And, when we saw a strange dog, I got nervous because I was afraid that Clarence would get agitated, and then Clarence sensed my agitation, and he thought he needed to protect me. It was a nasty spiral of anxiety about the anxiety of each other. The solution was for Jim to hold the leash, but still, when things got twitchy, Clarence attached himself to me. So, for Clarence’s sake, I had to learn to manage my anxiety more effectively than the method on which I’d relied for 40+ years–pretending I wasn’t anxious. He needed me to recognize when I was anxious, even when I “thought” I wasn’t. I did that for Clarence, but it turns out that it applied in all sorts of other areas. Clarence demanded that I learn something about myself. Clarence made me a better person.

Clarence did that with Pearl too. She came to us a dog who didn’t like to eat, who didn’t like people, but who loved Clarence. And Pearl, on walks, checked in with Clarence (and Jim—she’s a daddy’s girl) in order to be a little bit more brave. And she is. Because of him.

Clarence tolerated Louis, but he loved Ella and Pearl. He was the gruff older brother who was sweetly grumpy about their getting up in his face. On walks, when Pearl was upset (by airsocks, people with yellow vests, really scary leaves, that asshole Labradoodle) she checked in with him, and he had this move that always gave me a catch in my throat. It was a kind of shoulder bump, and it calmed her down. We all need that shoulder bump. I miss that shoulder bump.

Clarence loved rolling in the grass, and his method made me laugh every time. His roll started from his nose. He rolled in various places along busy streets, and it was fun to watch drivers laugh. He had a few favorite spots—we really don’t know why. Sometimes he wouldn’t roll on a favorite spot, and we never figured out the criteria.

Clarence’s previous owner probably paid a lot of money for him (since he appeared to be a purebred bull mastiff, and they’re pricey) and then fed him the wrong food (as is clear from his paws), taught him to jump on people, nom arms, mistrust males holding things while in the yard, and yet gave him enough love that he came into our home expecting to be loved. So they did something very important very right.

Mastiffs have a lifespan of 8-10 years. Given that he had clear signs of having been fed the wrong food, we figured he’d be on the short side. About a month after we got him, I gave him a corncob (something we used to do—the dogs nom on it for a while, and then cheerfully lose interest). He swallowed it whole. It was an obstruction. We ended up at the emergency vet. They stapled down his stomach (thereby preventing bloat—what kills a lot of big dogs), so I’ll admit I had hopes that he might live longer. But last summer was hot and long.

We used to walk the dogs for two miles every day. And Clarence had three places where he stopped to roll: near the coffee place, where he got a treat; in front of an auto repair place (where people driving by would laugh); and on a particular lawn (sometimes two). In summer, Jim would wear a pack that had water and water bowls, and we’d stop halfway through and give them water. But, even so, Clarence was panting way too much (we all were—it was a long summer), so we took to taking a one-mile walk with him—up to the coffee place, where he got a treat—and then back home where we dropped him off, and then took the girls for another mile. He was always thrilled, to his last day, to go on a walk, but also quite happy to be dropped off.

He was stoical. In the four years we had him, he never yelped. He once flinched (this last week, when I touched a sensitive spot). But, he stopped eating, and seemed to be holding himself as though he was in pain, and so we took him to the vet, discovered he had cancer that had metastasized, and we were in the realm of palliative care. So we were. And we got lots of great advice from friends who had been through the same thing, some very recently, even at the same time (take lots of photos and videos, offer scrambled eggs, indulge). We gave him lots of pain meds, and were getting up twice during the night in order to ensure he was always medicated. And then it was time. Pearl and Ella saw him after he died, but we put them away while they took his body away, and I watched them track the path of his body.

And so, here we are, without him, but blessed and better because of him.

Winston and Louis

cat and dog cuddling

[I posted this originally in January of 2018, but took it down when it became part of a book. Since the book has been out a while, I’m putting it back up.]

Today we lost a 14-year-old cat and a 2-year-old dog.

We got Winston Churchill and Emma Goldman on the same day around 14 or 15 years ago because someone in the Cedar Park neighborhood we were then living in (big mistake) was influenced by the “Secret Life of Dogs” (I assume) and so let his dogs out at night. They killed little dogs and cats, among them a neighbor’s dog and two cats of ours. One of many reasons I’m glad we moved out of Cedar Park.

We got the two kittens from different rescue groups, and they bonded instantly. Winston was (we found out quickly) ill, but before we figured that out, he was waking us up around 4 am to harangue us, so we named him Winston Churchill (who was famous for the same behavior). It turns out Winston had a virus, which he passed to Emma Goldman (named that because she was clearly a total anarchist), and so I had to pill him multiple times a day. One of my secret superpowers is pilling animals (I also include fixing wonky toilets, getting total strangers to tell me their life stories, and losing things), so I was pilling this poor kitten all the effing time. I can do it, but I can’t do it in a way that animals like.

Yet, he forgave me.

We took to calling him Winston, and not Winston Churchill, because in many ways he was closer to Winston Smith. He disappeared whenever strangers appeared (there are people who’ve been over to our house many times who’ve never seen him), and we had to start working with an in-home vet because if we got out the cat carrier, he simply evaporated.

On the other hand, he could be incredibly brave. When we got him, we had a Great Dane and two mutts. Winston loved Emma, but he loved the dogs more. He spent his whole life sincerely believing he was a dog. He had complicated medical issues—he couldn’t eat fish, or eat anything from plastic. Because the Marquis de Lafayette was his best bud, he ate from the Marquis dish, and so the Marquis had to eat out of non-plastic containers and we couldn’t add fish to Marquis’ bowl. And Winston, at all of 12 pounds at most, snuggled with Hubert (120 lbs) and Duke (100 lbs).

For cats, head-rubbing is submission. Cats are not pack animals, and so normally the whole pack configuration isn’t really something to which it’s worth paying attention when you’re talking about cats. But it was interesting with Winston. Winston, after a while, took to beating up on Emma, so she dumped him, but he was entirely submissive to the dogs—to all the dogs. Most of the dogs tolerated him, but Hubert, George, Marquis, and Louis were actively sweet with him and allowed him to rub heads (which doesn’t mean the same thing in dog language).

After a while, the three cats each claimed domains, and Winston claimed the bedroom. He always slept with us on the bed, exerting the cat gravity power so that a 12-pound cat is actually an immovable force. He was probably the single most affection-loving cat I’ve ever had. For a while, he allowed Emma to sleep in the bedroom, but at some point that ended, and he allowed Sapphira to come in and get morning snuggles (Louis put an end to that, oddly enough). So, morning snuggles was Winston and the dogs. When we fed the dogs, he would head into the study, and eat out of Marquis’ bowl. Winston LOVED dogs. He especially loved licking their faces and ears. Hubert and Duke kind of liked it, and Ella and Clarence barely tolerated it, but Louis loved Winston. When we knew we were putting Winston down, I worried about how Louis would react.

Winston was always an indoor cat (with the exception of the catio), and he was until recently a beefy guy (and ended up being kind of a bully with Emma). The last year has been vexed in that we knew he was losing weight and something was going on, but he remained his dog-loving cuddle all night self. When definitive tests were done, he had major intestinal tumors and cancer that had metastasized to his paws. And so, today, we had an appointment with a vet to come and put him down. He was still, even with the damn cone on his head, cuddling with the dogs, and sleeping with us at night, but he was clearly unhappy. And he died, in the lap of someone who loves him, purring. He died about 90 minutes after Louis.

Louis was really sweet with Winston. Winston had a cancer that metastasized quickly, and gave him bloody tumors in his paws. He continued to sleep on the bed, and Louis (who always slept on the bed) accommodated him endlessly.

When Duke (a 100 lb Great Dane) died, we put in for rescuing a Mastiff. We’re good with big dogs, and they’re often hard to place. That mastiff rescue process wasn’t working well, and Jim knew I was a wreck about having lost Duke, and one day he said we should look at dogs. I assumed Jim was being sweet with me. We went to where APA was showing a few dogs, including what they said was a rottie mix (they marked him as large or extra large). I thought he was adorable, but I also thought Jim was looking at dogs for my sake, and so I took his enthusiasm for that dog as being supportive of my grief. I said we needed to look other places, and we did. And he kept saying, what about that rottie-mix, and I kept thinking he was just being kind to me, and so, when, after having looked at dogs at various other places, I said, “Yeah, I think that rottie mix is the best choice,” he rushed me to the car and drove like a maniac back to the place we’d seen him. He actually jumped a curb. That was the dog that would be named Louis.

We had had a dog, Duke Ellington, who was a wonderful dog, but a little bit staid. And then we got a puppy who adored him (and whom he adored) and who made him a little bit more playful, so we named her Ella Fitzgerald. And Duke died.

And then this rottie mix (he wasn’t) came home and bonded so thoroughly with Ella Fitzgerald that he was obviously Louis Armstrong.

And he was the most hilarious dog we have ever had. Austin is so good at getting dogs adopted that Austin now takes dogs from the shelters of other cities (and even counties), and Louis came from Bastrop. He had abrasions on his leg and neck suggesting he’d been thrown from a car (which is what people around here do to get rid of unwanted puppies—don’t get me started), and they thought he was going to get to be a large or extra-large dog. He thought so, too. He got to be fifty pounds.

He was hilarious.

He hated mornings. He loved morning walks, but he never wanted to get up. He was the most talkative dog I’ve ever had. We’ve had dogs with strong opinions (Marquis is very clear that he thinks we should build a fire, nap, give him Dasuquin, rearrange the dog beds), but Louis gave six-part Greek orations. We’ve had dogs with whom you could have conversations, but never a dog with as much to say as Louis. You could have a long conversation with him. Even I thought he could out-argue me.

We took him through all the Petsmart training, and he was a gem. My plan was, when I retired, that he and Ella would be our nursing home dogs.

He would have been great. He worried about other beings. If I sneezed, he would put his paws on me. He worried about Winston (especially once Winston got sick), and he worried about whether Clarence was going to get upset at seeing another dog (he sometimes does), and he worried about whether Ella was going to jump on me (she shouldn’t, and she does).

And he ate everything. He was the “can dogs eat…” dog. He ate the bark off our firewood, and he once ate a large part of an organic firestarting log. He ate arugula, watercress, lettuce, and all the things no other dog (even Clarence, who wouldn’t eat arugula) would eat.

And he cuddled. I have a high tolerance for sleeping surfaces, so our practice is that, when we get a new dog or cat, I sleep on some dog beds on the floor with them, and then we transition into the bedroom, and then into their finding their own space. The first night with Louis, he slept across my neck. Literally. The next night he slept across my chest, then legs, and then we were in the bedroom. And every night after that he slept cuddled in either my arms or Jim’s. And the night before he died, he crawled under the blankets, and had to be rescued because he got so hot he was panting. He was, without a doubt, the single most affection-loving dog I’ve ever had.

He and Ella were terrors—they were total siblings (although not littermates), with a hilarious game. Louis would dig a little bit in the ground, and then his job was to keep Ella from taking that little spot, and the two of them would tear around the yard with him keeping her from home. They jumped on each other at certain marked points on the morning walk (why those points, neither Jim nor I ever figured out).

We really worried about Louis because, although he was terrified of tires, he had NO sense about traffic. And he had a tendency to slip out behind someone who opened the front door. And we live somewhere where, whether it’s raining or not, the front door might or might not entirely close. More than once we realized he had slipped out and we had to chase him down. It was our nightmare that he would get out and get into traffic. And our nightmare came true. He ran half a mile in order to get on a fucking freeway.
We had come to the difficult decision that we would put down Winston today, and so we were spending all our time cuddling with him, and thinking about him. Louis slipped out, and we didn’t notice. This breaks my heart.

And, for reasons we don’t understand, he ended up half a mile away on a major freeway. A vet saw him just after he’d gotten hit, and tried to save him. And that vet (whose name we never got) took him to an emergency vet, but Louis was DOA. And someone called Jim, and he called me, and so the vet, Jim, and I all stood in a room and sobbed together over this hilarious dog who was now dead.

And so, today, we sent along their way a hilarious and young dog and the old cat he loved. I don’t believe in Hell (the scriptural basis for it is weak), but I believe in heaven, and I believe that these two are frolicking together. And the grief is for those of us who are left to mourn for them.

You’re the one with epistemic crisis

circular reasoning works because circular reasoning works

For many years, I had a narrative about what makes a good relationship, and I had a lot of relationships that ended in exactly the same kind of car crash. I decided, each time, not that my narrative about relationships was wrong, but that I was wrong to think this guy was the protagonist in that narrative. I kept telling myself that I wasn’t wrong about the narrative; I was just wrong about the guy.

In fact, I was wrong about the narrative. When I changed the narrative, I found the guy.

We all have narratives: explanations of how things happen, how to get what we want, how political figures operate, how dogs make decisions. And, as it was with me, it’s really easy to operate within a narrative without question, perhaps without even knowing that we have a narrative. I didn’t see my narrative about relationships as one of several possible ones, but as The True Narrative.

Relationship counselors often talk about how narratives constrain problem solving. Some people believe that people come to a relationship with stable identities—you get into a box (the marriage), and perhaps it works and perhaps it doesn’t. Some people have a narrative of a relationship being at its height when you marry, and it goes downhill from there. Some people believe that a relationship is a series of concessions you make with each other. Some people think that marriages are an authoritarian system in which the patriarch needs to control the family. Some people see a relationship as an invitation to go on a journey that neither of you can predict but during which each of you will try to honor one another. There are others; there are lots of others. But I think it’s clear that people in each of those narratives would handle conflict in wildly different ways. People with the “people in a box” narrative would believe that you either put up with the other person, or you leave. People with the “concessions” narrative would believe that you try to negotiate conditions, like lawyers writing a contract. Patriarchs would believe that the solution is more control. Our narratives limit what we imagine to be our possible responses to problems.

If you ask people committed to any of those narratives if those narratives are true, they’ll say yes, and they’ll provide lots of evidence that the narrative is true. That evidence might be cultural (how it plays out in movies and TV), it might be arguments from authority (advice counselors, pundits, movie and TV plots). Or they might, as I would have, simply insist they were right by reasoning deductively from various premises—all relationships have a lot of conflict, for instance. That this relationship has a lot of conflict is not, therefore, a problem—in fact, it’s a good thing! (Think about the number of movies, plays, or novels that are the story of a couple with a lot of conflict who “really” belong together, from Oklahoma to When Harry Met Sally.)

If I thought of myself as someone who had relationships that ended badly because I got involved with the wrong person, I didn’t have to face the really difficult work of rethinking my narrative. And I was kind of free of blame, or only to blame for things that aren’t really flaws—being naïve, trusting, loyal. I could blame them for misleading me, or take the high road, and say that we were mismatched.

If, however, I looked back and saw that I kept getting involved with someone with whom it could not possibly work because I kept trying to make an impossible narrative work, then the blame is on me. And it was. And it is.

I don’t really want to say what my personal narrative was, although I’ll admit that Jane Eyre might have been involved, but it was the moment that I stopped reasoning from within the world of that narrative and started to question the narrative itself that I was able to move to a better place. I had to question the narrative—otherwise I was going to keep getting “duped.” (That is, I would keep making only slightly varied iterations of the same mistakes which I would blame on having been misled by a person I thought would save me.)

Our current cultural narrative about politics is just as vexed as my Jane Eyre based narrative about relationships. We are in a world in which, paradoxically, far too many people all over the political spectrum share the same—destructive—narrative about what’s wrong with our current political situation. That narrative is that there is an obviously correct set of policies (or actions), and it is not being enacted because there are too many people who are beholden to special interests (or dupes of those special interests). If we just cut the bullshit, and enacted those obviously right policies, everything would be fine. Therefore, we need to elect people who will refuse to compromise, who will cut through the bullshit, and who will simply get shit done.

This way of thinking about politics—there is a clear course of action, and people who want to enact it are hampered by stupid rules and regulations—is thoroughly supported in cultural narratives (most action movies, especially any that involve the government setting rules; every episode of Law and Order; political commentary all over the political spectrum; comment threads; Twitter). It’s also supported deductively (if you close your eyes to the fallacies): This policy is obviously good to me; I have good judgment; therefore, this policy is obviously good.

It’s more complicated than that, of course. Staying within our narrative doesn’t look like it’s limiting options. It feels rational. The narrative gives us premises about behavior—if you think someone is a good man, then you can make a relationship work; the way to stop people from violating norms is to punish them; high taxes make people not really want to succeed—and we can reason deductively from those premises to a policy. If the narrative is false, or even inaccurately narrow, then we’ll deliberate badly about our policy options.

But what if that narrative—there is a correct course of action, and it’s obvious to good people—is wrong?

And it is. It obviously is. There is no group on any place on the political spectrum that has always been right. Democrats supported segregation; Republicans fought the notion that employers should be responsible if people died on the job because the working conditions were so unsafe. Libertarians don’t like to acknowledge that libertarianism would never have ended slavery, and there is that whole massive famine in Ireland thing. Theocrats have trouble pointing to reliable sources saying that theocracy has ever resulted in anything other than religicide and the suppression of science (Stalinists have the same problem). The narrative that there is a single right choice in regard to our political situation, and every reasonable person can see it is a really comfortable narrative, but it’s either false (there never has been a perfect policy, let alone a perfect group) or non-falsifiable (through no true Scotsman reasoning).

This narrative—the correct course of action is obvious to all good people—is, as I said, comfortable, at least in part because it means that we don’t have to listen to anyone who disagrees. In fact, we can create a kind of informational circle: because our point of view is obviously right, we can dismiss as “biased” anyone who disagrees with us, and, we thereby never hear or read anything that might point out to us that we’re wrong.

If we’re in that informational circle, we’re in a world in which “everyone” agrees on some point, and we can find lots of evidence to support our claims. We can then say, and many people I know who live in such self-constructed bubbles do say, “I’m right because no one disagrees with me because I’ve never seen anyone who disagrees with me.” And they really haven’t—because they refused to look. When we’re in that informational circle, we’re in a world of in-group reasoning. We don’t think we are; we think we’re reasoning from the position of truth.

But, since we’re only listening to information that confirms our sense that we’re right, we’re in an in-group enclave.

It’s become conventional in some circles to say that we’re in an epistemic crisis, and we are. But it’s often represented as: we’re in an epistemic crisis because they refuse to listen to reason—meaning they refuse to agree with us. We aren’t in an epistemic crisis because they are ignoring data. We are in an epistemic crisis because people—all over the political spectrum—reason from in-group loyalty, and no one is teaching them to do otherwise. We live in different informational worlds, and taking some time to inhabit some other worlds would be useful.

More useful is the simple set of questions:
• What evidence would cause me to change my mind?
• Are my arguments internally consistent?
• Am I holding myself and out-groups to the same standards?

Our epistemic crisis is not caused by how they reason, but how we do.

Windsocks and the epistemological/ontological distinction

Were I Queen of the Universe, no one could graduate from high school without being able to explain the difference between causation and correlation, and no one could graduate from college without being able to distinguish between an epistemological and ontological claim. (I have moments when I think that people should also understand the difference between eschatology and soteriology, but that’s a different post.)

Here, I just want to talk about the difference between an epistemological and ontological claim. That distinction is more important than you think. [1]

Earl Warren argued for the race-based mass imprisonment of Japanese Americans, and he provided evidence to support his claim that “the Japanese” must be engaged in nefarious activities. Among his evidence was a collection of letters from various police, sheriff, and other peace officer groups saying that they believed “the Japanese” to be dangerous.

An ontological claim is a claim about Reality. It’s a claim about the fabric of the universe, about what is Really True. Warren was making an ontological claim—that “the Japanese” were essentially and Really dangerous.

And he supported that claim with statements on the part of various (racist) people as to their beliefs. Epistemological claims are claims about belief. So, he was trying to support an ontological claim (“the Japanese are dangerous”) with an epistemological claim (“various (racist) people believe the Japanese to be dangerous”).

It’s like my saying that squirrels are evil (ontological claim) because my dogs agree that squirrels are evil (epistemological claim). They really do agree that squirrels are evil—that’s true. There is complete consensus on that point. They also agree that windsocks, plastic bags blowing down the street, that asshole labradoodle, and possums are all evil. Like Warren, my dogs make an ontological argument (windsocks are evil) on the basis of an epistemological claim (I am afraid of windsocks).

The difference, of course, is that the various people Warren polled had more prestige than my dogs, but did they have better judgment? Many people assume that if “good” people agree on a claim—if they all make the same epistemological claim—that’s an indication that the epistemological claim is also ontologically true. So, if everyone you value agrees on some claim—squirrels are evil—you think that claim has been proven. It hasn’t. All that’s been proven is that you’re loyal to your in-group.

My point is that the way that people decide who is “good” is just in-group reasoning, as in the case of Warren’s testimonies about “the Japanese” being dangerous. “The Japanese” also had beliefs—they had epistemological claims. But Warren didn’t worry about them. He took the claims of the police as reliable, and the claims of opponents as not worth considering. And how Warren assessed claims is the dominant way of assessing claims in our current culture–decide whether a claim is true on the basis of whether the person making it (or the media reporting it) is someone we think is “good.” In other words, whether they’re in-group.

That’s a bad way to think about reliability—it just pushes the question back one step. If everyone in my family agrees on something, every pastor I’ve known, everyone with whom I interact on a regular basis, the talk radio host or pundit I like, my group of like-minded friends—in other words, if my in-group agrees on a claim—then I take that agreement to be a sign the claim is a claim about Reality. And this reasoning—my claim is true because my in-group has perfect agreement on this point—isn’t something restricted to any point on the political spectrum, or even restricted to politics. I’ve had colleagues tell me that, although their claims are either non-falsifiable or actually falsified, they’re true because everyone in their discipline or sub-field (i.e., in-group) agrees that they’re true, and I must be wrong because they are experts in that field (and, yes, I’m thinking especially of various economists, anglo-American analytic philosophers, and neo-conservative political scientists with whom I’ve been on committees).

We are in a world in which media—all over the political, cultural, and religious spectrum—hammer home to their audiences that we are fighting for our very ability to exist. We are about to lose it all right now. We are, therefore, in a state of exception in which all concerns about the rule of law, fairness, and accuracy are suspended. That’s an epistemological claim.

That everyone in the in-group agrees on a claim doesn’t mean it’s true. That the in-group feels threatened, that all the in-group media say we are threatened with extinction doesn’t mean we are.

The reason people should understand the difference between an ontological claim (about Reality) and an epistemological one (I am certain this is true) is that, as long as we uncritically take epistemological claims as proof about the world, we’re only deliberating within in-group beliefs. We’re Warren, who only took the epistemological claims of people like him as relevant to ontological conclusions.

We’re people banning windsocks because my dogs don’t like them.

[1] For the pedants in the audience, I’m not saying that epistemology and ontology are, so to speak, ontologically different. I’m saying that epistemological and ontological claims are rhetorically different—they have different standards of proof in an argument because they imply different rhetorical burdens.

Chances are good that how you assess bias is irrational

Many people believe that a biased argument is irrational, and vice versa, and, so, one way to assess the rationality of an argument is to see whether it’s biased. That’s an irrational way to assess an argument, and one that nurtures irrationality.

What I have long found difficult about getting people to think in a more accurate way about how we think is that many people assume that you either believe there is a truth, and we all see it (naïve realism), or you believe that all points of view are equally valid.

It’s grounded in an old and busted model of how perception can work—that “rational” people just look at the world and see it in an unmediated (unbiased) way. And that direct perception of the world enables them to make judgments that are accurate and ring true.

One of the ways that our media (all over the political spectrum) engage in inoculation is to promote the false binary of one position being that kind of “unbiased” and obviously true position, and “biased” positions (all others). They point out to their choir that this position seems obviously true to them, so it must be the unbiased position. That’s the confusion that Socrates pointed out—that you believe something to be true doesn’t mean you know it to be true. You just believe you do.

Imagine that two dogs, Chester and Hubert, disagree as to whether little dogs are involved in the squirrel conspiracy to get to the red ball. Chester, and his loyal media, says to their base, “Hubert media is biased because it says little dogs aren’t conspiring with squirrels.” Chesterian media is inoculating its base against listening to any contradictory information. To the extent that it successfully equates “disagrees with us” and “biased,” any media—regardless of its place on the political spectrum—ensures that its audience can’t assess policies rationally.

That’s what far too much of our political media says—any source of information that gives information that contradicts or complicates our position is “biased” and therefore should be dismissed without consideration. And, as I said, that’s irrational.

It’s irrational because it’s saying that having a strong political commitment is irrational, but only if it’s an out-group political commitment. So, this isn’t really about the rationality of an argument, in terms of its internal consistency, quality of evidence, logical relationship of claims, but whether it’s in- or out-group.

It’s saying that people who believe what I believe are rational because they believe what I believe and I believe that my beliefs are rational and so I believe that anyone who disagrees with me is irrational because they don’t believe what I believe and what I believe is rational because it’s what I believe.

A rational position on an issue is one that is argued:
• via terms and claims that can be falsified,
• internally consistently in terms of its claims and assumptions,
• by fairly representing opposition arguments,
• by holding all interlocutors to the same standards.

Rationality has nothing to do with the tone of an argument, whether it appeals to emotions, whether the people making arguments are good people, or even whether you can find evidence to support your claims.

So, the argument that out-group media sources should be dismissed on the grounds that those sources are biased is irrational because it violates every one of those criteria. It’s a circular argument; it doesn’t consistently condemn bias (only out-group bias); it frames all out-group arguments as biased by bad motives; and it privileges in-group arguments.

To say that all media are biased is not to say that they are all equally reliable (or unreliable). It is to say that we are all biased, and we can assess sources to see if their biases cause them to engage in irrational argumentation. If we find that a source is consistently irrational, then it’s fine to dismiss the source as unreliable—not because it’s out-group, but because we’ve found it to fail so often. We should assess arguments on whether they’re rational, not whether they seem true to us.

That you believe, sincerely, deeply, and profoundly, that what you are saying is true doesn’t mean it is, let alone that it’s a belief you can defend rationally. Just because you sincerely believe you’re right doesn’t mean you’re Rosa Parks, refusing to give up your seat; you might be George Wallace, committing to segregation forever.

[Btw, if any of you would like to put pressure on cafepress to make the circular reasoning visual a t-shirt, count me in.]

“This decision by ‘the government’ is obviously wrong” as factional demagoguery

My poor husband. This weekend, we went to a farmer’s market because it was a beautiful day, and I didn’t have to work, and the farmer’s market is fun, and, long story short, a person from whom I was buying earrings said to me and Jim, “Some people think government is the problem, and some people think government is the solution.” Jim, being a sensible person, just stepped back a bit. I don’t really remember what I said after that (I was in a white-hot rage), but I know I said a lot.

I have spent my career working for big (and public) institutions, and got all my degrees at a big (and public) institution. And I spent far too much of my life irritated (and sometimes outraged) by various decisions that those institutions made—decisions that were, to me, not just wrong but obviously wrong.

There were, loosely, three categories of wrongness. There were decisions that were irritating and time-consuming (such as providing physical documentation of every article I claimed to have published, having students sign for getting a small gift card, having to provide travel receipts). There were decisions that obviously ignored considerations central to the teaching of writing, for instance, or ethical practices regarding staffing. There were others that seemed to strike at the very notion of college education as a public good. All of those decisions were, to me, outrageously short-sighted. I was right. I was also short-sighted.

I’m really sorry about all that time I spent bloviating about how obviously dumb my administration was; it turns out that my administration was not necessarily being dumb. It turns out I was often the short-sighted one. I was right about some decisions being unethical, and I was right about the harm some decisions did to the teaching of writing, but I was wrong to think that my Dean was the problem. Because I saw every entity above me as “administration,” I falsely identified the source of the problem, and therefore I never identified a workable solution.

And this is another post about the neighborhood mailing list, and how it exemplifies what’s wrong with American political deliberation. (Although, to be fair, I could use departmental faculty meetings to make the same point, with me as the person arguing very badly. I’ve also done my share of this on the neighborhood mailing list and various other places. I’ve loved me some pleasurable outrage about how obviously wrong the government, my university administration, or the city is.)

Anytime there is a change in our neighborhood, we look at the proposed policy from our perspective, and we think about how it will affect us. That’s a valid datapoint. But that’s all it is–one datapoint. I earlier wrote about how the Big Bike narrative assumed that cyclists in our neighborhood are outsiders, when in fact a lot of the people cycling in our neighborhood (including some of the cyclists who are jerks) are neighbors. They are us.

And, let’s be clear, we are in a neighborhood with streets paid for by all citizens of Austin. The notion that these are “our” streets is no more rational than the belief that the trash can loaned to you by the city of Austin is your trash can.

In the case of Big Bike, the assumption is that there is a policy that is obviously right to all sensible people of goodwill, and it happens to be the one I hold. Thus, anyone who advocates a different policy is stupid, corrupt, duped, selfish, or shortsighted. I’m saying that, for years, I thought that way about my universities’ policies that differed from the ones I thought we should have.

At every university, there have been irritating, complicated, time-consuming, and, to me, obviously dumb requirements about submitting documentation for travel, absences of students, rewarding students for participating in a study, hiring student workers, keeping track of purchases, and disclosing personal data about sources of income. It turns out that, in many cases, the policies I thought were obviously stupid were a response (perhaps not the best response, but often good enough) to a real problem I didn’t know existed. Because, at every university, those irritating, complicated, and time-consuming requirements were put in place because someone was an asshole. Someone filed false documentation, failed to note a conflict of interest, embezzled, falsely accused a student (or a student was a jerk and refused to admit to absences), exploited student workers, or filed a lawsuit.

I’m not saying that the university is always right, but I have been wrong as to who was wrong. I have been at three universities with unethically low salaries for staff (the University of Texas at Austin is one of them). I care about staff; that is part of my viewpoint. I’m not looking out for me; I’m looking out for others. And the salary structure at three of my universities was (and is) obviously ethically and rationally indefensible. I was (and am) right about all that.

I was, however, wrong to think that these unethical salary structures for staff were the consequence of my university administration being short-sighted in its policies about staff salaries. In two cases (I’m still unclear about UT-Austin), the salaries of staff were set by the legislature, not the university.

I was right that the decision was wrong, but I was wrong as to who was wrong.

There is a different kind of decision in which I thought I was completely right, and the university was being stupid and short-sighted, and I was wrong.

When, for complicated reasons, I ended up on Faculty Council, I learned that most of what I thought about how the university ran was wrong, in all sorts of ways. Here’s one example: I had long thought it was obviously wrong to make the day before Thanksgiving a class day. A lot of students had to miss that class in order to get flights, and others risked their lives driving on a day with terrible traffic and accidents.

I sat at a Faculty Council meeting, and listened to someone explain that, because the fall semester is already shorter than spring (which I’d never noticed), and because of various legislated weirdnesses about the UT calendar, taking away that class day would mean that some of the Engineering departments would lose accreditation. Accrediting organizations require a certain number of labs, and removing that class day would mean they wouldn’t have enough labs.

We would, they said, have to refigure the entire calendar to ensure that they could have enough labs, and any decision about that Wednesday should be delayed till that refiguring could happen. And I listened to faculty stand up and talk about how we should, right now, cancel that Wednesday class because of what it meant for them personally. Of course, were UT to lose its engineering accreditation, all those faculty would suffer far more than they were suffering by having a Wednesday class day. But they didn’t think of that because they assumed that their perspective was the only valid one.

And I realized I was them. I also assumed that the policies of the university should enable my way of teaching. And suddenly I empathized with engineers. I was engaged in epistemological selfishness, only assessing a situation from my perspective. A decision that was obviously wrong from my perspective (such as requiring that the day before Thanksgiving be a class day) was a great decision for a university that wanted to ensure its engineering programs were accredited.

My perspective about the day before Thanksgiving—enable students to leave earlier—was a legitimate one. But the perspective of the Engineering faculty concerned about losing accreditation was also legitimate. In fact, I’d say that, since my university would be seriously hurt by losing Engineering accreditation, and my students would be hurt, that my interests and the concerns of the Engineering faculty were intertwined. That my perspective was legitimate doesn’t mean it was the only one that should be considered. That the Engineering faculty had a legitimate concern doesn’t mean it was the only one that should be considered.

The University worked it out.

I’m not saying that all positions are equal, nor that we should never decide our administration has made a bad decision. I have twice been at universities with an ambitious Provost who made every decision on the basis of what would enable them to have great things on their cv because they saw this job as a stepping stone to being Chancellor. Try as I might (and I did try), there was no perspective from which their decisions were the best for the university—they were (are) splashy projects that look great on a resume but aren’t thought through in terms of principles like sustainability, shared governance, financial priorities.

I also sat at a Faculty Council meeting and listened to various faculty from business, math, and economics explain that a report arguing for major changes in various university practices had numbers that literally did not add up. And they didn’t, and those major changes never did save anywhere near the predicted amount. The changes were eventually abandoned.

Three times I have been at universities that had a state legislature actively hostile to my university, that made decisions designed to get the university to fail.

Big institutions make bad decisions. But they also make decisions that aren’t bad–they’re the best decisions within the various constraints, or good enough decisions within the constraints. If we spend our lives outraged that the university, or city, or government isn’t enacting the policies we believe to be right, then we’re spending our lives in the pleasurable orgy of outrage. We aren’t doing good political work.

What I’m saying is that just looking at a policy, and assessing it from your perspective as a good or bad policy, doesn’t mean it is a good or bad policy. You have to look at it from the perspective of the various stakeholders, after which you might decide it’s a terrible policy (because it might be). My university should not make every decision on the basis of what is best for me, or even people like me. My university has people with genuinely different needs from mine. My university makes bad decisions, but that a decision is not the best one for me is not sufficient proof that it is a bad decision. My university should not be designed for me.

And, similarly, the government should not be designed for me. Or you. Or us.

The notion that, in regard to any question, there is an obviously right answer is epistemological selfishness. The notion that, because you can see flaws in a policy, that policy is obviously dumb and wrong, is just bad reasoning.

Every policy has flaws. You have to decide how to get to work. That’s a policy argument—you are deliberating the policy of getting to work. Is there a perfect route? Nope. Parenting, having a dog, gardening, buying a car—those are all policy deliberations. Is there a perfectly right decision? No. You have to deliberate among various pressing concerns—cost, size, resale value, gas mileage, loan options. Any big institution has to do the same weighing.

Despite the fact that we all navigate vexed and nuanced decisions moment to moment, when it comes to what we think of as “political decisions,” a troubling number of us reason the way I did for far too many years—that, when it comes to policy, my perspective is obviously right. Even though my personal life was not a series of perfect decisions, from the day-to-day (whether to bring an umbrella, wear a heavy coat, take that route to work) through the slightly more important (whether to grant an extension to my students, how to manage my time, whether to agree to that commitment) to the big ones (whether to marry that guy, take that job, get that haircut), somehow I was convinced that I knew the right thing for my university, city, state, or country to do. I had made the wrong decision about a haircut multiple times, but, when it came to politics, my belief was some kind of perfect insight spit from the forehead of God?

My model of political deliberation–that, despite my long and documented history of being wrong, even when it came to major policy decisions in my personal life, I was magically infallible–is unhappily common.

My experience with big institutions—that they make policies that are ridiculous from my perspective, and even burdensome—is how most people experience the government. And that mantra–this big institution is terrible because its decisions don’t make sense from my perspective–recurs constantly on my neighborhood mailing list. Every decision “they” make is not just dumb, but obviously dumb. And there are no good reasons or legitimate perspectives that might make “their” decision make sense.

According to many people on my neighborhood mailing list, everything the city does is wrong. It isn’t just flawed, but completely, obviously, and pointlessly dumb.

And, unhappily, my neighborhood mailing list exemplifies how even smart, well-intentioned, good people who are deeply committed to thinking about the public good can reason badly.

My neighborhood mailing list is, ostensibly, non-partisan. But it isn’t. A recurrent (perhaps even dominant) topos (as people in rhetoric say) is that “the government” (an out-group) is making an obviously bad decision because “the government” is dominated by “special interests.”

That’s as political and factional as political discourse gets. It’s toxic populism. It’s the false assumption that there is some group (us) made up of “regular people” who see what really needs to happen. If anything happens that “regular people” (us) don’t like, or that hurts us in any way, then this is the government being dumb, oblivious, or corrupt.

Toxic populism dismisses the possibility that the policy we hate might help some other group of people by saying those people aren’t “real Americans.” For complicated reasons, I had to listen to some guy repeat what he said he had heard on Rush Limbaugh, about how Native Americans were getting “special” benefits from the government (those “special” benefits were simply honoring agreements). There was something about Native Americans not being “real” Americans that caused steam to come out of my ears.

My neighborhood mailing list claims to be non-factional, but it tolerates dog whistle racism and demagoguery about graffiti. It also tolerates the “the government always fucks things up” rhetoric that is, actually, profoundly factional.

As various studies have shown (Ideology in America summarizes a lot of them), the public, on the whole, supports policies that we tend to identify as “liberal,” but votes for anyone who plausibly performs the identity of “conservative.” And “conservative” is associated with being opposed to government intervention—“the government” is associated with Democrats. This association explains why so many people complain about aspects of Obamacare that Republicans are responsible for (such as the refusal to expand Medicaid).

It’s also irrational.

In all those years when I was whingeing that the huge institution wasn’t enacting policies that were the best from my perspective, I was engaging in profoundly anti-democratic rhetoric. It was political, and it was factional. Rhetoric about how government sucks isn’t just anti-democratic; it’s pro-Republican.

The government screws things up, and we should engage in loud and vehement criticism when it does. But “the government” making a decision that inconveniences us and “the government” screwing up are not necessarily the same thing—the first is not evidence of the second. Good governmental policies inconvenience everyone at least a little.

After Proposition 13 passed in California (which greatly reduced the state budget), I frequently found myself in situations in which—in the same conversation—someone celebrated the passage of Prop 13 and bemoaned that government services had declined. They shot themselves in the foot and then complained they had a limp.

Americans, till Reagan, lived within a world of well-financed government projects—roads, bridges, water services, public schools, non-partisan science research. Since Reagan, the infrastructure has deteriorated. We now have people complaining both that taxes are too high and that the infrastructure sucks (and concluding, somehow, that the solution is to take even more money away from government).

We need to stop assuming that “the government” is always deliberately, stupidly, and obviously wrong. “The government” is neither the problem nor the solution; voters are.

I don’t remember much about what I said when I lost my temper with the guy at the farmer’s market, but I do remember one thing. I said, “If you think the government is the problem, then why haven’t you moved to Somalia?” (And, yes, I know, the situation in Somalia is more complicated than that, but, by that time, I’d figured out his sources of information, and that those sources said Somalia is hell.)

And then he did start talking about how the government should stick to what it does well and leave other things aside.

That’s the fallback position for people repeating Libertarian positions that are internally inconsistent but sound good as long as you don’t think too hard. I made no headway with him.

But what I did see is that his position was logically indefensible, and it was the position I have taken far too often in far too many situations. He thought the government was stupid because it made some decisions that he didn’t like. He didn’t notice that “the government” paved the roads that got customers to his place, enabled the trade that got him what he needed for his shop, ensured that he didn’t get robbed, and enabled him to do something if someone wrote a bad check. He wants a government that gets him everything he wants and nothing he doesn’t.

And so do I. And that’s a bad way to think about government.

That a policy seems wrong to me doesn’t actually mean it’s wrong. I am not (yet) Queen of the Universe with perfect and universal insight. None of us is. People all over the political spectrum need to stop talking as though the government is the problem. It isn’t. We are.

“OK, Boomer” and intergenerational demagoguery

Growing up with relatives prone to saying really offensive and bigoted things, I quickly learned the rule: saying something offensive, even if it clearly insults someone sitting there at the table, is okay, as long as you’re older than the people who might object. The person who calls attention to how offensive that statement was, especially if they’re younger, is the one people blame for “starting the conflict.” Calling attention to demagoguery that other people haven’t noticed is seen as “confrontational,” and perhaps even “aggressive.” That is “divisive.”

Someone saying out loud that something was racist isn’t what started the problem—the racist (or otherwise bigoted) person did. But, time and again, I saw someone directly insult someone else at the table, sometimes openly, sometimes passive-aggressively, almost always through insulting generalizations about a group of which the other person was a member. Someone might say something like, “Well, young people today just don’t know how to work, and […]” and then tell a rambling story about how they had to walk eight miles to school, uphill both ways. Most of the people at the table wanted to let all that demagoguery go by unnoticed. They got upset if the person who had, in fact, very clearly been insulted said, “I was just insulted.”

This is the “OK, Boomer” controversy, I think.

Divisiveness about generations has been around for a long time; it isn’t new. But I have to say that demagoguery about “young people today” (in current public discourse oddly often mis-identified as “millennials”) is pernicious, ubiquitous, and damaging. Demagoguery about how awful this generation is shows up in everything from comment threads to best-sellers, and it’s often engaged in by boomers, probably the most privileged generation ever. For instance, consider that this profoundly incoherent book about what’s wrong with young people is a best seller. It actually argues that this generation is the dumbest because they’re on their phones all the time, and therefore not reading.

It’s available on Kindle.

And it’s worth remembering that Culture of Narcissism was written about boomers.

If you’re now outraged about generational divisiveness because of the “ok, boomer” meme, then you are blaming the person at the table who says, “Wow, that was racist” for “starting the conflict.” You didn’t notice all the divisive demagoguery about young people today.

If you haven’t called out that pernicious and pervasive boomer demagoguery about kids these days, and you are condemning “ok, boomer,” then either put “I’m a demagogue” on your sleeve, or STFU. If you think that the “ok, boomer” meme has called attention to how boomers have been profiting from demagoguery about kids today, and you’d like to reduce the generational demagoguery by acknowledging the role of authors like Bauerlein, then go for it. But don’t pretend for a second that the “ok, boomer” people started intergenerational demagoguery.

They’re responding to it.

And I think it’s a pretty good response.

I think it’s asking boomers to hold young people today to the same standards they had to meet when they were 20. And good luck with that.