Starting with the Assumption that I’m Wrong

So I’ve been trying this thing. If I’m contemplating a change in my thinking or my life—especially for ethical reasons—I shift my perspective for a bit, and start with the assumption that I’m wrong.

I don’t mean this in a “proof by contradiction” sort of way, like in logic or math, where you assume that the thing you’re trying to prove is wrong so you can arrive at a contradiction and thus find out that it’s really right. I mean it in a more practical way. I mean actually living and thinking, temporarily, as if my old ideas are wrong and the new ones I’m considering are right. I mean living with the new ideas for a little while, to see if my thinking gets clearer. And I mean experimenting to find out: If I were wrong, if I had to change—what would my life look like?

We all have a tendency to start with the assumption that we’re right. It’s just how our human brains work. We start with the assumption that we’re right, that we’re smart, that we’re good—and we work backwards from there. We come up with rationalizations for why the things we do, and the things we want to do, are right, smart, and good. (In fact, unusually intelligent people can be unusually good at this.) And when we’re challenged on our rightness and smartness and goodness, we get defensive. No matter how skeptical we are, no matter how conscious we are of cognitive biases—including this one—we still do this. It doesn’t make us bad people; in fact, there are very good reasons our brains work this way (among other things, if we constantly questioned every decision, large or small, we’d become frozen, unable to do anything). This is just part of the unconscious background machinery of our minds.

But when it comes to important questions that I really want to look at clearly, rationalization can be a real problem. I’ve been looking at ways to hijack it. And it’s helped to start with the assumption that I’m wrong, to temporarily live as if I’m wrong and need to change.

Let me give you two examples. A little while ago I did a charity fundraiser on my blog, where I promised that if I raised a certain amount of money, I’d go vegetarian for a month. I did this partly as a publicity stunt for the charity—but I also did it as a test. I’ve been wrestling for a while with the ethics of meat and animal products, and I’ve gone back and forth on the vegetarian spectrum for years. I’d tried to go fully veggie before, and I wasn’t able to sustain it—but that was twenty years ago, and vegetarianism has become a lot easier since then. So I wondered: if I lived for a month acting as if I thought eating meat was always wrong under all circumstances, would that change my thinking? And if I did decide that eating meat was always wrong—what would my life look like?

And I found that my thinking did change. When I wasn’t eating meat—and thus, wasn’t unconsciously rationalizing it—I thought differently about it. I won’t go into all the details here, but my thinking changed, both about the ethics of meat and the ease of avoiding it, and I’ve slid much further towards the veggie end of the spectrum. I’ll soon be going vegan for a week, for the same reason.

A more recent example is the “Ableism Challenge.” On the blog Alex and Ania ‘Splain You a Thing, Ania Onion Cebulla asks people to go for one month without using ableist language—which, for those not aware, means words for physical or mental disabilities used as insults, including “lame,” “dumb,” “crazy,” “retard,” and more. The problem with a lot of this language is very clear to me; it’s obvious that using “lame” to suggest something is ineffectual or unenjoyable stigmatizes disability, and using “crazy” in place of, say, “preposterous” stigmatizes mental illness. But with some of this language, I was baffled. I was, for instance, struggling to find alternatives to the word “stupid”—and I didn’t really understand the problem with it. (Google “Ableism Challenge” if you want to learn more about these problems, both with ableist language in general and with specific examples of it.)

I knew, though, that if I tried to logically examine the question from first principles, I’d probably wind up rationalizing why what I was already doing was okay, why I should keep doing it, and why the people trying to convince me I was wrong were doo-doo faces. And as a general principle, I listen to marginalized people when they say certain language is marginalizing, and I take their word for it, even if I don’t fully understand the reasoning. (After all, I want other people to do that for me.)

So, instead of trying to reason this from first principles, I decided to just try it. After all, it wouldn’t be so difficult to use different words for a month. And I thought it might shake my brain loose from its reflexive position that I am always right about everything.

And yes, this experiment has changed my viewpoint. It’s made me realize how much I use ableist language without even thinking about it. It’s made me realize how I use it in ways that are careless or sloppy or lazy. Dropping ableist language is occasionally annoying, but for the most part it’s made my language more precise. And this experiment—and the conversations I had with myself and others about it—made me realize that my reasons for wanting to hang on to certain words were seriously weak. They amounted to, “I want to do this, it’s easy and familiar, and that’s more important than the fact that it hurts people.”

I don’t think I would have come to those conclusions if I’d just tried to reason them from first principles. For one thing, I simply needed to try the new idea in the real world to see how it played out; I wouldn’t have realized how often I use ableist language if I hadn’t made a conscious effort to stop. But some of it, I think, was that loosening my attachment to the old idea made it easier to seriously examine it.

Sometimes—okay, a lot of the time—when I’m considering making a change, I have the sneaking suspicion that the real reason I don’t want to change is, “It’s hard.” Now, sometimes “It’s hard” is a valid reason not to change. If you support public transportation for environmental reasons, but you work ten-hour days and public transit would add two hours every day to an already long commute, I think “I’d like to quit driving but it’s too hard” is totally legit. But if I’m going to decide to not make a change purely because it’s hard, I want that to be a conscious part of my decision making—not an unconscious force warping my thinking. (If for no other reason: If the thing gets less hard later, I want to take that into account. I’ve skewed a lot more towards the vegetarian end of the spectrum since veggie options became easier to find.)

This doesn’t just apply to the things we do. It can apply to the things we think, the conclusions we come to about what’s true. I keep thinking about Julia Sweeney in her one-woman performance piece, Letting Go of God, when she describes seriously questioning her religious belief, and decides to try putting on the “there is no God” glasses, just for a minute.

Like Mr. Darcy in Pride and Prejudice, we love to think that our investigations and decisions are not usually influenced by our hopes or fears. We love to think that we don’t reach conclusions because we wish it, but that we believe on impartial conviction. And like Mr. Darcy, we are full of it. Our thinking is always weighted towards the conclusion that the things we want to be true really are true. And we can’t counterweight that bias simply by being aware of it, or by pinky-swearing that we won’t do it. We have to find ways to lighten the weight of our bias—or to create counterweights on the other side.

And if I want a counterweight to assuming I’m right, one of the weightiest ones I can think of is assuming I’m wrong—and seeing where it leads me.

  • Stephen Foster

    Excellent! I think this healthy humility may be one of the more important tools for a good future.

  • Some good points very well made, but I would add, how does this assumption of incorrectness differ from scepticism in general terms? I like to think I am sceptical of everything I hold to be true, and everything I am unconvinced of. Certainly, I hold cognitive biases, but my rationalisations are based on consistent and coherent arguments regardless of my biases. If this were not so, I would have no basis for believing those things I hold true, to be true.

    • Mark Hall

      I think the difference is that between an intellectual examination and actually living the issue in question. You say that your rationalisations are based on rational arguments. But it seems likely that Greta might have once said much the same thing about staying omnivorous, or using the term “stupid.” She’s an influential and inspiring skeptic, after all, so basing her life on such a rational foundation is what we would expect of her, and what she would likely expect of herself. Nevertheless, she found that living the issue instead of merely thinking it through led to different conclusions. The results are illuminating. Perhaps you should try the same technique, and see if your “consistent and coherent arguments” still seem consistent and coherent.

      Perhaps I should, too.