A couple of years ago, I remember talking to someone online (let's call him Bill), and he questioned the death toll of the Holocaust. I remember feeling ill at ease when he said this, even though I had never seen him express any racist or white supremacist views. In fact, claiming that the Holocaust was less severe than reported does not logically entail that the person making the claim supports Nazis or is racist.
And that's the interesting thing. My brain automatically made a series of connections and produced an emotional response. I would describe the process as follows:
Bill claims the Holocaust was not as bad as reported → If the Holocaust was not as bad as reported, then Nazis are less evil than originally assumed → But the Nazis were evil! Why is this man trying to defend Nazis?! → I must counter his viewpoint that the Nazis aren't that evil → Defensive response.
It seems that we have automatic associations between certain viewpoints and implications about the character and beliefs of those who hold them. Those associations are not deductively valid; that is, there is no airtight logical proof linking Holocaust denial to racism on the part of the denier.
In the case of my recent post about feminism, some people responded with claims that are directly supported by what I wrote, but they presented these claims as though they were either contradicting me or making sure I wasn't committing an error in my thinking. I believe the chain of associations, in this case, went like this:
Shawn is being critical of feminism → People critical of feminism are often misogynists and sexists → Shawn is a well-meaning guy and likely doesn't intend to be a sexist → Provide Shawn with arguments contradicting sexism and misogyny.
At least, that was often the chain of associations for those who responded respectfully. Once again, as in the Holocaust example, sexism and misogyny cannot be logically deduced from the fact that I am being critical of feminism.
So why do we have this function at all? Why do we seem driven to do this automatically, and why does it take a certain amount of effort and self-awareness to not automatically commit these logical fallacies? It can't have been an accident that we do this kind of inductive, imperfect thinking regarding issues like these.
I think we have a pretty sophisticated intuition, and that much of it (maybe even most of it) exists so that we can function well in a social environment. Even if we're not explicitly aware of it, our subconscious automatically assumes that people with certain kinds of values and ideas are dangerous to us and our tribe, and it motivates us either to act to modify those values and ideas so that the person can function better in our tribe, or to act to remove them from the tribe.
In order for this function to be evolutionarily and pragmatically useful, it does not have to be perfect. In fact, an intuition that demanded solid, irrefutable proof before assuming things about other people would almost certainly be at a distinct disadvantage compared to intuitions willing to act on imperfect evidence. The optimal risk/reward ratio for acting on intuition no doubt lies somewhere between demanding absolute certainty and acting on the faintest suspicion. The implications for the evolutionary development of our intuition follow from that.
I actually think there is nothing wrong with this function. I do think, however, that it behooves us to be aware of processes like this so that we can have a better handle on situations where they kick in. It's fine to wonder whether a Holocaust denier is a racist, but it's certainly helpful to be consciously aware that you don't yet have solid evidence that he is.