One of my criticisms of conventional definitions of demagoguery is that they enable us to identify when others are getting suckered by demagoguery, but not when we are. They aren’t helpful for seeing our own demagoguery because they emphasize the “irrationality” and bad motives of demagogues, and both of those strategies are deeply flawed, and generally circular. Here I’ll discuss a few problems with conventional notions of rationality/irrationality; later I’ll talk about the problems of motivism.
Definitions of “irrationality” imply a strategy for assessing the rationality of an argument, and many common definitions of “rational” and “irrational” imply methods that are muddled, even actively harmful. Most of our assumptions about what makes an argument “rational” or “irrational” imply strategies that contradict one another.

For instance, “rationality” is sometimes used interchangeably with “reasonable” and “logical,” and sometimes used as a larger term that incorporates “logical” (a stance is rational if the arguments made for it are logical, or a person is rational if s/he uses logical processes to make decisions). That common usage contradicts another common usage, although people don’t necessarily realize it: many people assume that an argument is rational if you can support it with reasons, whether or not the reasons are logically connected to the claims. So, in the first usage, a rational argument has claims that are logically connected, but in the second it just has to have sub-claims that look like reasons.

There’s a third usage: many people assume that “rational” and “true” are the same, and/or that “rational” arguments are immediately seen as compellingly true, so that to judge whether an argument is rational, you just have to ask yourself whether it seems compellingly true. Of course, that conflation of rational and true means that “rational” is another way of saying “I agree.”

A fourth usage is the consequence of many people equating “irrational” with “emotional”: it can seem that the way to determine whether an argument is rational is to infer whether the person making the argument is emotional, and that’s usually inferred from the number of emotional markers: how many linguistic “boosters” the rhetor uses (words such as “never” or “absolutely”), or verbs of affect (“love,” “hate,” “feel”). Sometimes it’s determined through sheer projection, or through deduction from stereotypes (that sort of person is always emotional, and therefore their arguments are always emotional).
Unhappily, many argumentation textbooks throw in a fifth usage: it’s not uncommon for a “logical” argument to be characterized as one that appeals to “facts, statistics, and reason,” which are surface features of a text. Sometimes, though, we use the term “logical” to mean, not an attempt at logic, or a presentation of self as engaged in a logical argument, but a successful attempt: an argument is logical if the claims follow from the premises, the statistics are valid, and the facts are relevant. That usage (how the term is used in argumentation theory) is in direct conflict with the vaguer uses that rely on surface features (“facts, statistics, and reason,” or the linguistic features we associate with emotionality). Much of the demagoguery discussed in this book appeals to statistics, facts, and data, and much of it is presented without linguistic markers of emotionality, but generally in service of claims that don’t follow, or that appeal to inconsistent premises, or that contradict one another. Thus, for the concept of rationality to be useful for identifying demagoguery, it has to be something other than any of the contradictory ones above: surface features; inferred, projected, or deduced emotionality of the rhetor; presence of reasons; audience agreement with claims.
Following scholars of argumentation, I want to argue for using “rationality” in a relatively straightforward way. Frans van Eemeren and Rob Grootendorst identify ten rules for what they call rational-critical argument. While useful, those rules can, for purposes of assessing informal and lay arguments, be reduced to four:
- Whatever the rules for the argument are, they apply equally across interlocutors; so, if a kind of argument is deemed “rational” for the ingroup, then it’s just as “rational” for the outgroup (e.g., if a single personal experience counts as proof of a claim, then a single appeal to personal experience suffices to disprove that claim);
- The argument appeals to premises and/or definitions consistently, or, to put it in the negative, the claims of an argument don’t contradict each other or appeal to contradictory premises;
- The responsibilities of argumentation apply equally across interlocutors, so that all parties are responsible for representing one another’s arguments fairly, and for striving to provide internally consistent evidence to support their claims;
- The issue is up for argument—that is, the people involved are making claims that can be proven wrong, and that they can imagine changing.
Not every discussion has to fit those rules; some topics are not open to disproof, and therefore can’t be discussed this way. Those sorts of discussions can be beneficial, productive, enlightening. But they’re not rational; they’re doing other kinds of work.
In the teaching of writing, it’s not uncommon for “rationality” and “logical” to be compressed into Aristotle’s category of “logos” (with “irrational” and “emotional” getting shoved into his category of “pathos”), and then very recent notions about logic and emotion are projected onto Aristotle. As is clear even in popular culture, recent ideas assume a binary between logical and emotional, so saying something is an emotional argument is, for us, saying it is not logical. That isn’t what Aristotle meant: he didn’t see appeals to emotion and appeals to reason as opposed; for him, they coexist. Nor did he mean “facts” as we understand them, and he had no interest in statistics. For Aristotle, ethos, pathos, and logos are always operating together: logos is the content, the argument (the enthymemes); pathos incorporates the ways we try to get people to be convinced; ethos is the person speaking. So, were we to use an Aristotelian approach to an argument, we would look at a set of statistics about child poverty, and the logos would be that poverty has gotten worse (or is worse in certain areas, or for some people, whatever the claims are); the pathos would be how it’s presented (what’s in bold, how it’s laid out, and also that it’s about children); and the ethos is partly situated (what we know about the rhetor prior to the discourse) but also a consequence of the person using statistics (she’s well-informed, she’s done research on this) and of its being about children (she is compassionate). For Aristotle, unlike for post-logical-positivist thinkers, pathos and logos and ethos can’t operate alone.
I think it’s better just to avoid Aristotle’s terms, since they slide into a binary so quickly. More important, they enable people to conflate “a logical argument” (the evaluative claim that the argument is logical) with “an appeal to logic” (the descriptive claim that the argument purports to be logical).
What this means for teaching
People generally reason syllogistically (that’s Arie Kruglanski’s finding), and so it’s useful for people to learn to identify major premises. I think either Toulmin’s model or Aristotle’s enthymeme works for that strategy, but it is important that people be able to identify unexpressed premises (a toy sketch of that reconstruction follows the examples below).
Syllogism:
All men are mortal. [universally valid Major Premise]
Socrates is a man. [application of a universally valid premise to specific case: minor premise]
Therefore, Socrates is mortal. [conclusion]
Enthymeme:
Socrates is mortal [conclusion]
because he is a man. [minor premise]
The Major Premise is implied (all men are mortal).
Or, syllogism:
A = B [Major Premise]
A = C [minor premise]
Therefore, B = C. [conclusion]
Enthymeme:
B = C because A = B. This version of the argument implies that A = C.
Chester hates squirrels because Chester is a dog.
Major Premise (for the argument to be true): All dogs hate squirrels.
Major Premise (for the argument to be probable): Most dogs hate squirrels.
Batman is a good movie because it has a lot of action.
Major Premise: Action movies are good.
Preserving wilderness in urban areas benefits communities
because it gives people access to non-urban wildlife.
Major Premise: Access to non-urban wildlife benefits communities.
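A toy sketch in Python (my own illustration; the template, function name, and wording are invented for teaching, not anything from Toulmin or Aristotle) shows how mechanical the reconstruction of a missing major premise can be, at least for enthymemes of the simple shape “X does P because X is an M”:

```python
# Toy sketch: reconstruct the unexpressed major premise of an enthymeme
# of the form "SUBJECT PREDICATE because SUBJECT is (a) MIDDLE TERM."
# The template is a deliberate oversimplification, for illustration only.

def implied_major_premise(middle: str, predicate: str,
                          strength: str = "All") -> str:
    """Return the unstated premise the enthymeme relies on.

    strength: "All" if the argument claims to be true,
              "Most" if it claims only to be probable.
    """
    return f"{strength} {middle} {predicate}."

# "Chester hates squirrels because he is a dog."
print(implied_major_premise("dogs", "hate squirrels"))          # All dogs hate squirrels.
print(implied_major_premise("dogs", "hate squirrels", "Most"))  # Most dogs hate squirrels.

# "Batman is a good movie because it has a lot of action."
print(implied_major_premise("action movies", "are good"))       # All action movies are good.
```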
Many fallacies come from some glitch in the enthymeme; for instance, non sequitur happens when the conclusion doesn’t follow from the premises, as in the examples below (a toy tally of their terms follows).
- Chester hates squirrels because bunnies are fluffy. (Notice that there are four terms—Chester, hating squirrels, bunnies, and fluffy things.)
- Squirrels are evil because they aren’t bunnies.
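That “four terms” diagnosis can be made concrete with a tally: a categorical syllogism needs exactly three terms, with the middle term linking premise to conclusion, so counting four distinct terms means nothing connects them. A toy tally in Python (again my own illustration, with the terms hand-labeled):

```python
# Toy sketch: count the distinct terms in an enthymeme. Three terms can
# form a syllogism; four means nothing links conclusion to premise.

def distinct_terms(*propositions: tuple[str, str]) -> set[str]:
    """Each proposition is a (subject, predicate) pair of terms."""
    return {term for proposition in propositions for term in proposition}

# "Chester hates squirrels because bunnies are fluffy."
terms = distinct_terms(("Chester", "things that hate squirrels"),
                       ("bunnies", "fluffy things"))
print(len(terms), terms)  # 4 terms: no middle term connects them
```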
Before going on to describe other fallacies, I should emphasize that identifying a fallacy isn’t the end of a conversation, or it doesn’t have to be. It isn’t like a ref making a call; it’s something that can be argued, and that’s especially true with the fallacies of relevance. If I make an emotional argument, and you say that’s argumentum ad misericordiam, then a good discussion will probably have us arguing about whether my emotional appeal was relevant.
Appealing to inconsistent premises comes about when you have at least two enthymemes whose major premises contradict each other.
For instance, someone might argue: “Dogs are good because they spend all their time trying to gather food” and “Squirrels are evil because they spend all their time trying to gather food.” You’ll rarely see it that explicit; usually the slippage is unnoticed because you use dyslogistic terms for the outgroup and eulogistic terms for the ingroup: “Dogs are good because they work hard trying to gather food to feed their puppies” and “Squirrels are evil because they spend all their time greedily trying to get to food.”
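One way to catch the slippage is to strip off the eulogistic and dyslogistic wrapping and restate both major premises in the same neutral language; a throwaway sketch (my illustration, not anyone’s method):

```python
# Toy sketch: restate both major premises in neutral language. Once the
# loaded wording is removed, the two premises evaluate the identical
# behavior in opposite ways.

behavior = "spend their time gathering food"
ingroup_premise = f"All creatures that {behavior} are good."   # from the dog enthymeme
outgroup_premise = f"All creatures that {behavior} are evil."  # from the squirrel enthymeme

print(ingroup_premise)
print(outgroup_premise)  # same premise pattern, contradictory evaluations
```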
Another fallacy that comes about because of glitches in the enthymeme is circular reasoning (aka “begging the question”). This is a very common fallacy, but surprisingly difficult for people to recognize. It looks like an argument, but it is really just an assertion of the conclusion over and over in different language. The “evidence” for the conclusion is actually the conclusion in synonyms: “The market is rational because it lets the market determine the value of goods rationally.” “This product is superior because it is the best on the market.”
Genus-species errors (aka over-generalizing, ignoring exceptions, stereotyping) happen when, hidden in the argument (often in the major premise), there is a slip from “one” (or “some”) to “all.” They result from assuming that what is true of a specific thing is true of every member of its genus, or that what is true of the genus is true of every individual member of that genus. “Chester would never do that because he and I are both dogs, and I would never do that.” “Chester hates cats because my dog hates cats.”
Fallacies of relevance
Really, all of the following could be grouped under red herring, which consists of dragging something so stinky across the trail of an argument that people take the wrong track. Also called “shifting the stasis,” it’s an attempt to distract attention from what is really at stake between two people toward something else: usually something inflammatory, but sometimes simply easier ground for the person dragging the red herring. Sometimes it arises because one of the interlocutors sees everything in one set of terms: if you disagree with them, and they take the disagreement personally, they might drag in the red herring of whether they are a good person, simply because that’s what they think all arguments are about.
Ad personam (sometimes distinguished from ad hominem) is an irrelevant attack on the identity of an interlocutor. It generally involves some kind of name-calling, usually of such an inflammatory nature that the person must respond (such as calling a person an abolitionist in the 1830s, a communist in the 1950s and 60s, or a liberal now). Not all “attacks” on a person or their character are ad hominem: accusing someone of being dishonest, or of making a bad argument, or of engaging in fallacies, is not ad hominem, because it’s attacking their argument, and even attacking the person (“you are a liar”) is not fallacious if it’s relevant. Ad personam is really a kind of red herring, as it’s generally irrelevant to the question at hand and is an attempt to distract the attention of the audience.
Ad verecundiam is the term for a fallacious appeal to authority. There’s nothing inherently fallacious about appealing to authority; the appeal is a fallacy when the cited authority isn’t relevant, and having a good conversation might mean that the relevance of the authority/expertise now has to become the stasis. Bandwagon appeal is a kind of fallacious appeal to authority; it isn’t fallacious to appeal to popularity if it is a question in which popular appeal is a relevant kind of authority.
Ad misericordiam is the term for an irrelevant appeal to emotion, such as saying you should vote for me because I have the most adorable dogs (even though I really do). Emotions are always part of reasoning, so merely appealing to emotions is not fallacious; the fallacy lies in the irrelevance of the appeal.
Scare tactics (aka apocalyptic language) is a fallacy if the scary outcome is irrelevant, unlikely, or inevitable regardless of the actions. For instance, if I say you should vote for me and then give you a terrifying description of how our sun will someday go supernova, that’s scare tactics (unless I’m claiming I’m going to prevent that outcome somehow).
Straw man is dumbing down the opposition argument; because the rhetor is now responding to arguments their opponent never made, most of what they have to say is irrelevant. People engage in this one unintentionally through not listening, through projection, and through a fairly interesting process: we have a tendency to homogenize the outgroup and assume that they are all the same. So, if you say “Little dogs aren’t so bad,” and I once heard a squirrel lover praise little dogs, I might decide you’re a squirrel lover. Or, more seriously, if I believe that anyone who disagrees with me about gun ownership and sales wants to ban all guns, then I might respond to your argument about requiring gun safes with something about the government kicking through our doors and taking all of our guns (an example of slippery slope).
Tu quoque is usually (but not always) a kind of red herring; sometimes it’s the fallacy of false equivalency (what George Orwell called the notion that half a loaf is no better than none). One argues, “you did it too!” While it’s occasionally relevant, as it can point to a hypocrisy or inconsistency in one’s opposition, and might be the beginning of a conversation about inconsistent appeals to premises, it’s fallacious when it’s irrelevant. For instance, if you ask me not to leave dirty socks on the coffee table, and I say, “But you like squirrels!” I’ve tried to shift the stasis. It can also involve my responding with something that isn’t equivalent, as when I try to defend myself against a charge of embezzling a million dollars by pointing out that my opponent didn’t try to give back extra change from a vending machine.
False dilemma (aka false binary, either/or) occurs when a rhetor sets out a limited number of options, generally forcing the audience’s hand so that they choose the option s/he wants. Were all the options laid out, the situation would be more complicated, and his/her proposal might not look so good. It’s often an instance of scare tactics, because the other option is typically a disaster (we either fight in Vietnam, or we’ll be fighting the communists on the beaches of California). It shades into straw man when it’s achieved by dumbing down the opponent’s proposal.
Misuse of statistics is self-explanatory. Statistical analysis is far more complicated than one might guess, given common uses of statistics, and there are certain traps into which people often fall. One common trap is the deceptively large number: the number of people killed every year by sharks looks huge, until you consider the number of people who swim in shark-infested waters every year, or compare it to the number of people killed yearly by bee stings. Another is to shift the basis of comparison, such as comparing the number of people killed by sharks in the last ten years with the number killed by car crashes in the last five minutes. (With some fallacies, it’s possible to think there was a mistake involved rather than deliberate misdirection; with this one, that’s a pretty hard claim to make.) People often get brain-freeze when they try to deal with percentages, and make all sorts of mistakes: if the GNP goes from one million to five hundred thousand one year, that’s a fifty per cent drop; if it goes back up to one million the next year, that is a hundred per cent increase, not fifty.
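The percentage trap is easy to check with arithmetic, since percent change is always measured against the starting value; a short Python check (my own illustration):

```python
# Percent change is relative to the starting value, which is why a
# fifty per cent drop is not undone by a fifty per cent rise.

def pct_change(old: float, new: float) -> float:
    return (new - old) / old * 100

print(pct_change(1_000_000, 500_000))   # -50.0: a 50% drop
print(pct_change(500_000, 1_000_000))   # 100.0: a 100% increase, not 50%
```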
The post hoc ergo propter hoc fallacy (aka confusing causation and correlation) is especially common in the use of social science research in policy arguments. If two things are correlated (that is, exist together), that does not necessarily mean that one can be certain which caused the other, or whether both were caused by something else. It generally arises when people have failed to include a “control” group in a study. So, for instance, people used to spend huge amounts of money on orthopedic shoes for kids because the shoes correlated with various foot problems improving. When a study was finally done that involved a control group, it turned out that it was simply time that was causing the improvement; the shoes were useless.
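The orthopedic-shoes case is also easy to mock up: if improvement depends only on time, a randomly assigned “shoes” group improves exactly as much as a control group, which is what the control group exists to reveal. The simulation below is entirely my invention (made-up numbers, purely illustrative):

```python
import random

# Made-up simulation: foot problems improve with time regardless of
# treatment, so the "shoes" group looks successful only until it is
# compared with a control group.
random.seed(0)

def improvement_after_a_year() -> float:
    """Improvement driven by time alone, plus individual noise."""
    return 1.0 + random.gauss(0, 0.1)

kids = range(1000)
got_shoes = {kid: random.random() < 0.5 for kid in kids}  # random assignment

treated = [improvement_after_a_year() for kid in kids if got_shoes[kid]]
control = [improvement_after_a_year() for kid in kids if not got_shoes[kid]]

print(sum(treated) / len(treated))  # ~1.0: "the shoes worked!"
print(sum(control) / len(control))  # ~1.0 too: time did all the work
```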
Some lists of fallacies have hundreds of entries, and subtle distinctions can matter in particular circumstances (for instance, the prosecutor’s fallacy is really useful in classes about statistics), but the ones above seem to be the most useful.