“Politics is a strife of interests masquerading as a conflict of principles.”
— Ambrose Bierce (1842-1914).
The tragedy in politics is that real principles are at stake, but people get so obsessed with “winning” that they lose sight of them. The same applies to moral discussions in general.
That’s not going to change, but we can at least try to avoid simple errors in reasoning.
Toward that end, I here present five common errors that distort political, moral, and religious arguments: four fallacies and an oversight.
1: The Naturalistic Fallacy
The naturalistic fallacy1 is based on a simple and seductive belief:
Reality determines morality.
The idea is that just by looking at facts about the world, you can deduce what is right or wrong. The reasoning seems so simple and clear that its validity appears obvious. Anyone who disagrees must be either evil, stupid, or mentally ill.2
The biggest practical problem is that different people use the same facts to justify vastly different moral prescriptions. For example, some people are rich and others are poor. From that, you can argue that the rich should be wealthy because they are more virtuous than the poor. You can also argue that the rich (virtuous or not) have more than they need, and they should pay higher taxes to improve the lives of the poor.
You can make similar arguments about almost anything: sex, race, religion, nationality, and so forth. Until the late 20th century, people of European ancestry dominated the world. Should they have? Did their success mark them as somehow superior to the peoples they conquered? They thought so. Others disagreed. Societies throughout history have persecuted gays. Does that mean they deserve to be persecuted? Or does it just mean that majorities will latch onto any excuse to persecute minorities? Evolutionary psychologist Satoshi Kanazawa remarks:
“From a purely scientific perspective, murder and rape are completely natural for humans, and getting a Ph.D. in evolutionary psychology is completely unnatural … Natural decidedly does not mean good, valuable, or desirable, and unnatural does not mean their opposites.”3
In practical terms, the fallacy enables anyone to argue for almost anything on the ground that it’s natural. That makes it pretty useless as an argument.
Getting from “Is” to “Should”
But that’s only the practical problem. There’s also a logical problem in the naturalistic fallacy.
To clarify, let’s contrast a simple non-moral argument with a simple moral argument. Here’s the non-moral argument:
- Premise 1: John is in the kitchen.
- Premise 2: The kitchen is in the house.
- Conclusion: John is in the house.
The conclusion contains “John,” “is,” “kitchen,” and “house,” all of which occur in the premises.
Here’s the moral argument:
- Premise 1: John is in the house.
- Premise 2: The house is on fire.
- Conclusion: John should get out of the house.
Do you see anything missing?
The conclusion refers to John and the house, both of which are in the premises. But where did that “should” come from?
“Should” is a moral word. It has no factual counterpart. Nothing in the world corresponds to “should,” nor is there any action you can take that would constitute “should-ing.” What you really have is an argument like this:
- Premise 1: John is in the house.
- Premise 2: The house is on fire.
- Premise 3: If John stays in the house, he will be burned.
- Premise 4: John should avoid being burned.
- Conclusion: John should get out of the house.
The more accurate version of our argument has three factual premises and one moral (“should”) premise, all leading to a moral conclusion. But the moral premise isn’t a logical consequence of the factual premises. What does it mean, and where does it come from?
A short explanation goes like this: If John is burned, he will suffer pain and possibly die. We can imagine it happening to us. We have felt pain and we didn’t like it. We have probably suffered the loss of someone who died. So all those things recall unpleasant feelings in us. We prefer to avoid such feelings. When we imagine John in the burning house, we imagine the pain he might suffer. We want him to get out of the house. We think he should. David Hume puts it this way:
“[Reason is] sufficient to instruct us in the pernicious or useful tendency of qualities and actions; it is not alone sufficient to produce any moral blame or approbation … A sentiment should here display itself, to give a preference to the useful above the pernicious tendencies. This sentiment can be no other than a feeling for the happiness of mankind, and a resentment of their misery.”4
The “should” expresses our feelings about the situation. It’s what we would say to John if he were close enough to hear us: “John, get out of the house!” And getting out of the house is reasonable, but it’s not proven by the non-moral premises of the argument.5
Committing the Fallacy by Accident
If you’re not careful when you criticize the naturalistic fallacy, you might commit it yourself.
Suppose you say, “We shouldn’t deduce moral conclusions from non-moral premises.” If someone asks why not, you reply that because of the way the world is, such deductions are unreliable. But why should we care if conclusions are unreliable? You derived a moral conclusion (“We should not do X”) from non-moral premises.
What you need to say is that deriving moral conclusions from non-moral premises can lead to contradictory results or to moral statements with which we disagree. Therefore, such arguments are unreliable by the standards of consistency with logic and with our moral beliefs. Whether that’s good or bad is up to the individual. The answer seems obvious, but it’s not proven. It’s a choice.
And as an aside, that’s what morality is: It’s a choice. It’s not proven based on non-moral facts. It asks all of us the question: “What kind of person do you want to be?” Our answer determines how we will try to live our lives.
2: The Moralistic Fallacy
The moralistic fallacy is the opposite of the naturalistic fallacy. It assumes that:
Morality determines reality.
The moralistic fallacy assumes that whatever is morally desirable must be true.
For example, suppose we believe (as I do) that all people should be treated equally by the law. From that idea, we might conclude that all people are in fact equal in every respect.
Unfortunately, it’s not true. I could train for 18 hours a day but could never become a good gymnast, simply because I lack the innate ability. Others could make similar efforts and never become good mathematicians. Still others, even if they have the ability, just aren’t interested in such careers. People differ. That used to be called “diversity” before we redefined the word to mean something else entirely.
But that’s not the worst consequence of the moralistic fallacy. The worst consequence comes from a logically valid type of argument called Modus Tollens. It goes like this:
- Premise 1: If X is true, then Y is true.
- Premise 2: Y is not true.
- Conclusion: Therefore, X is not true.
An example of Modus Tollens is:
- Premise 1: If it is raining, then the streets are wet.
- Premise 2: The streets are not wet.
- Conclusion: Therefore, it is not raining.
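The validity of this form can even be checked mechanically. Here is a minimal sketch in Python (my own illustration, not part of the original argument) that enumerates every truth assignment and confirms that whenever both premises hold, the conclusion holds too:

```python
from itertools import product

def implies(x, y):
    """Material implication: 'if x then y' is false only when x is true and y is false."""
    return (not x) or y

# Check Modus Tollens over every truth assignment:
# whenever (X -> Y) and (not Y) both hold, (not X) must also hold.
valid = all(
    (not x)                      # conclusion: X is not true
    for x, y in product([True, False], repeat=2)
    if implies(x, y) and not y   # both premises hold
)
print(valid)  # True: the form is valid under every assignment
```

Note that validity is a property of the form alone: the conclusion is guaranteed only when both premises are actually true, which is exactly where the moralistic fallacy goes wrong below.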
The moralistic fallacy makes it seem as if any denial of politically correct dogma is a denial of more reasonable moral beliefs. For example:
- Premise 1: If all people should be treated equally by the law, then all people are equal in every respect.
- Premise 2: It is not true that all people are equal in every respect.
- Conclusion: Therefore, it is not true that all people should be treated equally by the law.
Premise 1 is false, so the argument is unsound: even though its form is valid, the conclusion is not established. We can believe that people differ but also believe that they should be treated equally by the law.
But since the moralistic fallacy makes them believe morality determines reality, “social justice” mobs scream for the heads of any infidels who deny Sacred Doctrine. They think such denials imply immoral ideas, and that people who hold such ideas should be fired, vilified, and put under a P.C. fatwa for the rest of their mortal existence.
3: The Rationalistic Fallacy
The rationalistic fallacy assumes that:
Logic and evidence determine my beliefs.
People who commit the fallacy assume that they, themselves, hold beliefs based solely on logic and evidence. Other people are within the golden circle only if they agree with the self-styled rationalists. If they disagree, they are presumed to be fools or worse.
This fallacy betrays a curious lack of self-awareness. Everyone who has ever believed much of anything has sometimes turned out to be wrong, and people who disagreed with them sometimes turned out to be right.
Even in our cosmopolitan era, most of us work and socialize with people similar to us. Our friends and co-workers tend to have comparable education, similar jobs, similar backgrounds, and to live in similar neighborhoods. More than our co-workers, our friends tend to be the same race, religion, and nationality as we are. Other people in our group tend to think like we do, have the same values as we do, and believe most of the same things as we do. Group members reinforce each other’s beliefs and make it seem as if almost everyone believes the same things.
In simple cases, our beliefs sometimes are based solely on logic and evidence. If you believe that there are 10 apples in a barrel, but I count them in front of you and show that there are only nine, you will change your belief. Counting apples is a simple case, with no other factors that introduce any uncertainty. Moreover, the number of apples in a barrel doesn’t matter to you emotionally unless you’re starving or we have a bet. If the barrel has 10 apples or nine, either is okay with you.
In complex cases, our beliefs depend on a larger amount of evidence. We can’t personally verify most of the evidence, and some pieces of evidence conflict with others. We have to decide which evidence to believe and how significant it is to our conclusion. Our emotions bias our judgment, as do our previous experiences and beliefs.
Moral and social issues are especially vulnerable to the rationalistic fallacy. People want to think of themselves as morally good, and they also want to be seen by others as morally good. Because most of their associates have the same beliefs, they want to adopt conforming beliefs so they are accepted by the group. The desire for acceptance biases their judgment and makes them evaluate evidence differently than they would otherwise, but they still believe they’re just being rational.
Our existing stock of concepts and stories also biases how we understand new information. If we see immigration through the lens of Europe in 1939, then all immigrants look like Jews fleeing the Nazis. On the other hand, if we see it through the lens of terrorist attacks in Belgium, France, and the United States, then all immigrants look like Islamic terrorists. Such initial perceptions exert a powerful bias on how we assess evidence and on the conclusions we reach.
We can partially overcome such bias, but we must make a deliberate effort to do so. We can’t do it if we think we have no bias to overcome.
4: The Existentialist Fallacy
The existentialist fallacy6 is based on another seductive assumption:
Reality is whatever you want it to be.
The fallacy is only loosely derived from the philosophy of existentialism, which says that humans can and must define the meaning of their lives.
Inanimate objects cannot define themselves. They simply are what they are. For example, a coffee cup must have certain characteristics in order to be a coffee cup: those characteristics are its “essence.” Before a coffee cup can exist, its essence must exist; otherwise you can’t make a coffee cup. In existentialist argot, the cup’s essence precedes its existence.
Existentialists say that humans have no fixed essence as people. Humans must define their essence by the choices they make. Therefore, their existence precedes their essence. In a sense, human beings have the power to choose what they are, at least mentally. They choose what kind of character they have, how they live, and what their lives mean. But that’s it. As far as I know, existentialism never said they could choose to be bunny rabbits, have 17 toes, or fly like Superman.
Don’t feel bad if your eyes are glazing over. Existentialism has that effect on people. However, in spite of its eye-glazing obscurity, it does have some valid insights. The existentialist fallacy makes a long leap from those valid insights all the way to what psychologists call magical thinking.
According to the fallacy, if you’re a man who wants to be a woman, then you’re a woman. If we wish everyone had the ability and interest for STEM careers, then they do. If it would be nice for large multi-ethnic, multi-cultural, multi-national, multi-religious societies to be cohesive and harmonious, then they can be. And so forth.
This fallacy also functions as a kind of “get out of jail free” card for other fallacies such as the moralistic fallacy. If you think that moral idea X implies reality Y, but Y obviously isn’t true, then the existentialist fallacy makes it all better: “If you want Y to be true, then it’s true.” Anyone who says otherwise is a hateful bigot who should be ignored.
The fallacy leads to cases such as students who are too intimidated or brainwashed to disagree with a middle-aged white man when he claims to be a Chinese woman or to be seven years old.
Lest you accuse me of committing the naturalistic fallacy, I’m not saying it’s bad for people to be completely unhinged from reality. I’m just saying that such a society can’t last very long. Whether it’s good or bad is up to the people in the society.
5: Overlooking Opportunity Cost
Opportunity cost is an economic concept that people almost never think about. We often hear statements such as:
- “We should bring more refugees to our country.”
- “We should spend more money on education.”
- “We should spend more money on helping the poor.”
- “We should allow anyone who wants a job to come to America legally.”
In the abstract, those are nice ideas. It’s nice to want to help people. But unless our resources are infinite, which they are not, then helping some people means not helping others. If we spend $10 million to help the poor in Baltimore, for example, it’s $10 million we no longer have to spend on disease prevention or other worthy causes. That’s opportunity cost:
“Choosing one thing in a world of scarcity means giving up something else. The opportunity cost [of a particular choice] is the value of the most valuable good or service foregone.”7
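The definition is easy to make concrete. The sketch below uses entirely hypothetical dollar figures (the option names and benefit estimates are my own illustration, not data from the text) to show that the opportunity cost of a choice is the value of the best alternative it forecloses:

```python
# Hypothetical, illustrative values: estimated benefit of spending the
# same fixed $10 million on each of several mutually exclusive options.
options = {
    "help the poor in Baltimore": 14_000_000,
    "disease prevention":         12_000_000,
    "education programs":          9_000_000,
}

def opportunity_cost(options, chosen):
    """Value of the most valuable option forgone by picking `chosen`."""
    forgone = [value for name, value in options.items() if name != chosen]
    return max(forgone)

# Choosing Baltimore forgoes disease prevention's estimated benefit.
print(opportunity_cost(options, "help the poor in Baltimore"))  # 12000000
```

Whatever the real numbers are, the point stands: the cost of any choice is measured against the best alternative, not against doing nothing.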
Opportunity costs are not just monetary. For example, rapes and terrorist attacks in Germany, Belgium, France, and the United States have shown that well-meaning compassion for Islamic migrants can endanger citizens of the countries that allow migrants entry. It might be worth it, but we need to consider that cost in evaluating our policies.
Similarly, U.S. black unemployment is extremely high, which hurts black Americans and causes many social problems. Allowing immigration by millions of Hispanics who compete for the same jobs makes black unemployment even worse. That’s an opportunity cost. It might be worth it, but we need to consider that cost in evaluating our policies.
Ignoring opportunity cost is related to the political problem of concentrated benefits and diffuse costs. When members of special interest groups get enormous benefits from changes in the law, but the costs are widely dispersed so that non-members each pay only a little, the groups want to have everyone ignore the costs and just focus on the benefits. A small number of people each get large benefits, so they are organized and motivated to push for what they want. The majority of people each pay only a little (whether in money or quality of life), so they are unorganized, less motivated, and are easily defeated by the special interest groups. Contemporary society has many examples of the problem.
So there they are: four fallacies and an oversight. Please do not commit them:
- Don’t assume that the facts determine the moral answers.
- Don’t assume that the moral answers determine the facts.
- Don’t assume that your reasoning or anyone else’s is error-proof.
- Don’t assume that reality is whatever you want it to be.
- Don’t overlook opportunity costs.
Kanazawa, S. (2012), The Intelligence Paradox: Why the Intelligent Choice Isn’t Always the Smart One. Hoboken: John Wiley & Sons, Inc.
Samuelson, P. and Nordhaus, W. (2001), Economics, 17th edition. New York: McGraw-Hill Higher Education.
Schneewind, J. B., ed. (1983), David Hume: An Enquiry Concerning the Principles of Morals. Indianapolis: Hackett Publishing Company. Kindle edition.