Loosely, here is the chain of events that got me banned. In March, I shared Nazi propaganda about euthanasia, in order to make the point that social Darwinism (which is what people were advocating in regard to covid) was exactly the line of argument used by Nazis. Personally, I would encourage everyone to share this image, as it is a very effective way to answer people who are (still) arguing to “let covid run its course as it will only hurt the weak.” If hundreds of people get banned for it, that would be good.
I got a “you’re banned for 24 hours for this violation,” and then “you keep violating our standards, so your penalties are escalating,” all for the same post. It happened at least three and maybe four times. When a human finally looked at it, the decision was made (correctly) that I wasn’t promoting Nazism, but using a Nazi image for legitimate purposes of discussion. Therefore, I was let out of Facebook jail for the last violation. To make the whole thing more irritating, I’m still on record as having violated Facebook standards for a post they said was not a violation of their standards.
It happened again on Friday—banned twice, with increasing penalties—for the same post, and then I got a notice that the post is fine. But, since it was reported twice, I’m still unable to post on Facebook till the three days are up. Only one of those violations was removed from my permanent record.
I’ve often posted about how I think we should use good old policy argumentation when trying to solve problems, and this is a great example. It might be tempting to say that my problem is Facebook, and there are lots of things to say about what’s wrong with Facebook. If “Facebook” is the problem, then the solution is to refuse to participate in Facebook, but my refusing to participate in Facebook doesn’t mean they handle issues of crappy censorship any better. If I quit Facebook, I have solved my problem of Facebook banning me, but I’ve solved it by banning myself.
Facebook banned my post because its policies rest on three assumptions: 1) that policing hate speech can be automated with bots; 2) that racism and hate speech are evident from surface features; 3) that sharing is supporting. The first follows neatly from the second and third.
Around the time I was banned last spring, Facebook was being sued by the people it paid to review posts, on the grounds that the work was so awful it was giving them serious health issues. Having spent a tiny amount of my time over the years trying to engage with the kind of rhetoric those people would have had to read, I can say that their claims were completely legitimate. It would be awful work.
The people who sued argued that the pay should be better and there should be more support, and those claims seem reasonable to me. What puzzles me is why Facebook would decide that someone’s job should be to wade into that toxic fecal matter for forty hours a week at $16–18 an hour. I assume that settlement is why, last spring, they started relying heavily on bots.
The bots don’t work very well, so people can complain and get the posts reviewed by humans, but the system is still gerfucked (as in the multiple penalties for the same post). It’s also indicative of how people think about “hate speech.” It’s long been fascinating to me that people use “hate speech” and “offensive speech” as though those terms are interchangeable. They aren’t—they don’t even necessarily overlap.
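To make the failure mode concrete, here is a minimal sketch of how a report queue could avoid it. Everything in it is invented (Facebook’s actual system isn’t public); the point is only that keying cases to posts rather than to reports, and letting a single human ruling settle the whole case, would prevent one post from generating escalating penalties.

```python
from dataclasses import dataclass

@dataclass
class ModerationCase:
    post_id: str
    reports: int = 0
    ruling: str | None = None  # None = pending; "ok" or "violation" once a human rules

class ModerationQueue:
    def __init__(self) -> None:
        self.cases: dict[str, ModerationCase] = {}

    def report(self, post_id: str) -> ModerationCase:
        # Key cases by post, not by report: a second report of the same post
        # joins the existing case instead of opening a new violation.
        case = self.cases.setdefault(post_id, ModerationCase(post_id))
        case.reports += 1
        return case

    def human_ruling(self, post_id: str, ruling: str) -> None:
        # One human decision settles every report of the post; a post ruled
        # "ok" leaves nothing on the user's record.
        self.cases[post_id].ruling = ruling

# Ten reports of the same post still make exactly one case, and one "ok"
# ruling clears them all.
queue = ModerationQueue()
for _ in range(10):
    queue.report("post-123")
queue.human_ruling("post-123", "ok")
assert queue.cases["post-123"].reports == 10
assert queue.cases["post-123"].ruling == "ok"
```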
People assume that the problem with “hate speech” is that it expresses hate, and that’s bad. It’s bad on its own (because you shouldn’t hate anyone), and it’s bad because it hurts someone else’s feelings. So, “hate speech” is bad because of feeeeelings. I’m not sure hate is necessarily bad—I think there are some things we should hate. In addition, you can hurt my feelings without expressing hate—if you tell me that I’ve hurt your feelings, I’ll feel bad, so does that make what you did “hate speech”? Approaching the whole issue this way is what makes people think that telling someone they’re racist is just as bad as saying something racist. They’re wrong.
“Hate speech” is bad because it encourages, enables, and causes violence against a scapegoated out-group.
And it isn’t necessarily offensive. I’ve known a lot of people who didn’t intervene (or think any intervention should happen) when faced with passive-aggressive hate speech, because they didn’t notice that it was hate speech. It didn’t seem “hateful” because they, personally, didn’t find it offensive. If we define hate speech as offensive speech, then we either aspire to a realm of discourse in which no one is ever offended (and that is neither possible nor desirable) or we only care about whether the dominant group is offended.
If we think of hate speech as hateful and offensive, then we’re likely to rely on surface features—that is, whether the speech is vehement and/or has what linguists call “boosters” (emphatic words or phrases). Vehement speech isn’t necessarily hate speech (although it makes people very uncomfortable, so they’re likely to find it offensive and mischaracterize it as hate speech), and hate speech isn’t necessarily vehement. It’s hard to notice passive-aggressive attacks on a scapegoat (or scapegoated out-group) because we don’t feel attacked. Thus, the most effective hate speech doesn’t have a lot of boosters, but instead seems calm and even hedging. Praeteritio and deflection are useful strategies for maintaining plausible deniability while rousing a base to violence against a scapegoated out-group, because people not in that out-group won’t be offended by it. (“I don’t know if it’s true, but I’ve heard very smart people say…”)
Thus, surface features aren’t good indicators of whether something is hate speech, nor is whether we are offended by it.
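Here is a deliberately crude illustration of that point (the word list and scoring are invented, not anyone’s real classifier): a filter keyed to surface features flags heated-but-harmless speech while passing calm, hedged scapegoating untouched.

```python
# Hypothetical surface-feature "detector": count boosters and exclamation
# marks. The booster list below is invented for illustration.
BOOSTERS = {"absolutely", "totally", "disgusting", "always", "never"}

def surface_feature_score(text: str) -> int:
    words = text.lower().split()
    score = sum(w.strip(".,!?") in BOOSTERS for w in words)
    score += text.count("!")  # treat vehement punctuation as a signal
    return score

# Vehement disagreement scores high; hedged scapegoating scores zero.
angry_but_fine = "That policy is absolutely, totally disgusting!!"
calm_hate = "I don't know if it's true, but I've heard very smart people say they are behind it."
print(surface_feature_score(angry_but_fine))  # 5 -> flagged
print(surface_feature_score(calm_hate))       # 0 -> passes
```

The exact scoring doesn’t matter; any detector built on vehemence markers will have this shape, which is why the most effective hate speech sails right through it.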
The third bad assumption in this whole dumb process is that sharing is supporting. There’s a genuine dilemma about whether we should share hate speech at all, even to criticize it, since we’re thereby boosting the signal (and there are people, like Ann Coulter, who are, I think, deliberately offensive for publicity purposes). But I’m not really talking about that particular dilemma. It struck me, when I was working with graduate students, how many of them refused to teach a book or essay with which they disagreed, or which they disliked. We still see teaching as fundamentally inculcation, as presenting students with admirable things they should like. There are a lot of problems with that way of thinking about teaching (it presumes, for one thing, that the teacher has infallible judgment), and one of those problems is shared with the larger culture—the desire to live in a comfortable world of like-minded people and pleasurable things. That is why Facebook is such an informational enclave—because we choose to use it that way.
So, unfortunately, Facebook is probably right that most of the times someone shares an image or post, they’re indicating agreement. I don’t, therefore, object to a post of Mussolini’s headquarters being stuck in timeout for an hour till a human can look and see if it’s approving or disapproving of Mussolini. I do object to the fact that, because of their incompetent system, I’m banned from posting for three days for a post they have decided doesn’t violate their standards. I also object to how difficult it is to get my (not) penalties removed from my permanent record, and I do wish they had smarter bots, and I do wish we were in a world that was smarter about hate speech.