Posted by: N.S. Palmer | July 23, 2016

The Trouble with Libertarianism


“Libertarianism can and does work.”

That’s the conclusion of a guest post on the “Ex-Army” blog site, which I recommend.

I agree that libertarianism can and does work. It works (at least somewhat and for a while) with groups of people who have:

  • Above-average intelligence,1
  • Adequate impulse control,
  • A common language,
  • A good education, and
  • Shared history, values, and traditions.

Shared ethnicity helps but is not essential.

Those things provide the respect for personal autonomy, tolerance of disagreement, and commitment to the common good (!) that make libertarianism possible.

Libertarians usually have those qualities at least to some degree. So do all their friends and associates.

As a result, like fish who are unable to see the water in which they swim, libertarians take those qualities for granted. They assume that libertarianism will work with any group of people, no matter how diverse and bitterly divided the group might be, or how lacking its people might be in the essential qualities that would make a libertarian society possible. Political philosopher Edmund Burke zeroed in on the main problem:

“Men of intemperate minds cannot be free. Their passions forge their fetters.”2


Even among libertarians, libertarianism is a utopian idea because libertarians themselves vary in the qualities that make libertarianism possible.

When I worked on Capitol Hill (for the honest and decent Ron Paul, among other people), I worked with some libertarians who were committed idealists. I worked with others who were simply careerists or who used libertarianism primarily as a means to power, wealth, and personal aggrandizement. They would have been just as happy spouting socialist arguments or selling used cars, but they apparently thought that libertarianism would pay better.

By the nature of human groups, power sooner or later tends to come into the hands of those who want it and seek to acquire it. The idealists, who don’t care about power, end up taking orders from the people who do care about it.

Ironically, libertarians commit the same mistake as liberals and Marxists: They assume that human beings and human society are perfectible. That error stems from a deeper one: Libertarians base their beliefs on abstract ideology instead of looking at real people and real societies.

They start with a definition: “Man is a rational animal.” Then they might throw in a little Randian mumbo-jumbo about how “A is A” implies free-market economics.

From that, they deduce how they think things ought to work. They assume that how things ought to work is how they in fact do work. They see no need to check their conclusions against reality because their premises seemed sound and their reasoning seemed logical. As a result, it escapes them that there has never been a libertarian society that lasted for any significant length of time.3

Which brings to mind a joke about economists: If you show an economist that an idea works in practice, he objects: “Yes, but does it work in theory?”

The biggest problem with libertarianism is not that it’s mistaken, historically oblivious, or based on wildly inaccurate notions of human nature and human society. The biggest problem is that it leads intelligent, educated, well-meaning people on a political wild goose chase. It causes them to waste their time and effort pursuing an unattainable utopian ideal instead of working for attainable goods that would benefit real people. The French philosopher Voltaire diagnosed the problem:

“The perfect is the enemy of the good.”

The more complete quote from Edmund Burke alludes to the good that libertarianism spurns in favor of unattainable perfection:

“Society cannot exist, unless a controlling power upon will and appetite be placed somewhere; and the less of it there is within, the more there must be without. It is ordained in the eternal constitution of things, that men of intemperate minds cannot be free. Their passions forge their fetters.”4

Works Cited

Burke, E. (2014), The Complete Works of Edmund Burke. Hastings: Delphi Classics. Kindle edition.


  1. As measured by any of the standard IQ tests, some of which are designed to eliminate cultural bias. Various human groups have different mean IQs, but an IQ of 100 is a reasonable minimum for a viable libertarian society. 
  2. Burke, E. (2014), loc. 69542. 
  3. Dr. Rinth de Shadley gave an excellent analysis of libertarianism. Sadly, she is a liberal, but she is still a nice person and is quite brilliant. 
  4. Ibid., loc. 69542. 
Posted by: N.S. Palmer | June 30, 2016

A Friend in Politics


“If you want a friend in politics, buy a dog.”

That adage is sometimes attributed to Everett Dirksen (1896-1969), who represented Illinois in the U.S. House of Representatives (1933-1949) and then in the Senate (1951-1969). Dirksen is best known for helping write the Civil Rights Act of 1964 and for staunchly supporting the Vietnam War.

Decades after Dirksen passed away, I arrived on Capitol Hill as a freshly minted college graduate who wanted to make a better world — and thought he could. I’d done well in school, had read lots of books, and believed that I knew how the world worked. I was chock-full of moral and political principles, suffering from what the humanist philosopher Paul Kurtz jokingly called “principle-itis.”

I thought that the problem with Washington was that politicians didn’t understand economics, history, or the Constitution. They needed to be educated by someone who’d read a lot of books. And I was just the person to do it.

Do I need to point out that being well-read is compatible with being incredibly naive?

I knew what books said about how American government worked, but I was unprepared for the reality. Otto von Bismarck (1815-1898), who was Chancellor of Germany, put it best:

“Laws are like sausages, it is better not to see them being made.”1

My first job interviews should have given me a clue. Because I was a libertarian, most of the interviews were with Republican politicians’ staff or Republican-leaning political groups. I wanted to tell them about freedom, economics, and the Constitution.

They only wanted the answer to one question: Was I a loyal Republican?

I always had trouble with that one.

Washington’s Moral Inversion

I eventually realized that loyalty, not principle, was how Washington worked. It worked that way because loyalty was a winning strategy. Minor parties that wouldn’t compromise their principles stayed morally pure but couldn’t get anything done, at least in the short run. The major parties couldn’t remember their principles, if indeed they’d ever had any.

The situation causes a strange moral inversion that you wouldn’t think of unless you had seen it at your job every day. It’s this: In politics, morally serious people are considered untrustworthy, while amoral careerists are trusted.

Suppose that you’re a U.S. Senator who supports policy X. It doesn’t matter what X is: for example, war, abortion, or religious freedom. And you have two staff members:

  • One sincerely and enthusiastically believes in X. It’s why he wanted to work for you.
  • The other doesn’t care at all about X. He cares only about money, power, and career advancement. (This type is much, much more common than the sincere believer.)

Both of your staff members support X. Then, for some reason, you decide to stop supporting X and throw your support to the other side. What happens then?

The staff member who sincerely believes in X becomes unreliable. His commitment is not to you, but to X. You don’t trust him anymore.

On the other hand, the staff member whose only interest is in money, power, and personal advancement will support you as long as (a) you provide those things and (b) nobody else makes him a better offer. You can trust him, at least as much as you can ever trust anyone like that. Your flip-flop on policy X won’t make any difference to him. He’ll support your new policy position just as vigorously as the old one.

In Washington, loyalty to your boss and your party is almost everything. On one occasion, I was working for a libertarian political group and discovered that it was using pirated copies of software. For better or worse, I was enough of a team player that I didn’t report it to anyone except my boss. That, however, put me under suspicion because I was considered an honest person who might call the software company. The situation got uncomfortable enough that I shortly thereafter took another job. I was better off than a friend of mine, who reported embezzlement at a government agency and then was himself framed for the crime. (I have no first-hand knowledge of it, but I believe my friend.)

Books, Speeches, and Degrees

Most of the books “authored” by politicians are not written by them. Instead, they’re written by professional ghostwriters, and occasionally by the politician’s staff. The same applies to politicians’ speeches, though I think most people know that already.

And at least some of the advanced academic degrees held by political players are based on work that other people did for them. I know first-hand of one case and second-hand of another. Beyond that, it’s all rumor because nobody involved in academic fraud wants to admit to it.

First and Second Bananas

Presidents and vice-presidents of political groups were another oddity. I can’t explain it, but I saw it often enough to recognize a pattern.

The presidents of political groups were sometimes awful human beings who were only in it for themselves. They didn’t believe in what their groups were doing and they treated their staffs badly. But if big-money donors walked into the room, the very same sociopathic tyrants suddenly turned into the most likeable and idealistic people you ever saw. I witnessed that transformation several times.

The vice presidents of political groups were another story. They were often idealists, truly decent people who believed in what their groups were doing. They were the ones who did the work and got good-faith cooperation from others. They kept the staffs from quitting. Very strange.

And the U.S. Postal Service …

After a few years, I’d had enough of Washington politics and switched to a job as a news reporter. I covered several federal agencies, including the FBI, NASA, and the U.S. Postal Service. Some things would surprise you, and others wouldn’t.

At the FBI, all of my experience was positive. They seemed like an honorable bunch of people. There were certain things that they wouldn’t tell you, and certain things they wouldn’t do for you, but it wasn’t arbitrary. You were not allowed to go anywhere in FBI headquarters without an escort, and you weren’t going to interview any FBI officials without a public-affairs person being present. They knew I had a job to do, and I knew the same about them. I’m sure that there were and are bad apples, black ops, and all that, but the FBI people who I knew were aces.

The NASA people: What can I say? As you would expect, they were wicked smart. For what we think of as a “space agency,” they devoted a surprising amount of attention to climate change. One of them, James Hansen, had been a climate-change skeptic before I met him. He ended up becoming a climate-change activist and quitting NASA. That made an impression on me.

And the U.S. Postal Service? As you might not expect, there were a lot of very smart people at the USPS. Most of the people I interviewed worked on artificial-intelligence projects to automate mail management. Their technical ideas were very impressive; it’s just too bad that they never had the money to implement most of them.


  1. There is some doubt about whether Bismarck actually said it. According to Fred Shapiro, editor of The Yale Book of Quotations, it more likely originated with 19th-century lawyer John Godfrey Saxe and was attributed to Bismarck because he was famous. 
Posted by: N.S. Palmer | June 22, 2016

What’s Changed Since 1916


Some things change, and some don’t.

That was the central argument between the Greek philosophers Heraclitus and Parmenides (both ca. 500 BCE). Heraclitus thought that the only constant was change. He argued that you can’t step into the same river twice, because by the time you step into it a second time, it’s already changed. Parmenides, on the other hand, believed that change was only apparent and that real things did not change. We’re still having the same argument over two millennia later.

The fairest judgment is that both Heraclitus and Parmenides were right, but that they emphasized different things. Looking back 100 years at how America was in 1916, we find that some things have changed and others have stayed the same.

My bookshelf provides many such windows into the past. One item on the shelf is a tattered copy of The American Review of Reviews from January 1916. Founded by Albert Shaw, who had been a classmate of future U.S. President Woodrow Wilson, the Review published from 1890 to 1937. Its January 1916 issue shows much of what has changed and what hasn’t changed since then.

Front Cover

The front cover shows an issue price of 25 cents, or $3 for a year’s subscription. It lists articles by various luminaries. Notable is the now-forgotten Lothrop Stoddard, who was a popular public intellectual at the time — so much so that a character in The Great Gatsby (1925) refers to one of his books.1

The School Advertisements


One section has school advertisements grouped by geographic region. The schools are mostly single-sex, and all the boarding schools are single-sex. A page of text introduces the section and extols the virtues of private boarding schools:

“LOYALTY. There is a feeling we have for our native land: It is called patriotism. We have a similar feeling toward our friends and the institutions we hold dear … If as school boys and girls we are not heart and soul loyal to some one particular school, then we have lost a great opportunity to develop a true spirit of loyalty and appreciation not only of the individual but of groups of people and of communities — that big side of our character upon which in later life is built our ideals and our ambitions.”

The girls’ school ads emphasize languages, arts, household science (home economics), “special finishing courses” (i.e., how to act like a lady), and graduation certificates that provide a fast track to Vassar and other women’s colleges.

The boys’ school ads are often for military schools, since that was still considered a respectable upper-class career. They emphasize business, science, character-building, athletics, and college preparation. I was surprised not to find an ad for the school I attended, since it existed at the time and the magazine lists two of its rival schools.

Advertisement for History Books


The Great American Crisis was a 20-volume history of the American Civil War, “without bias or prejudice” and promising “justice to both North and South.”

It’s worth remembering that in 1916, plenty of Civil War veterans were aged but still alive. They probably occupied a place of honor in America similar to that of World War II veterans today. Each volume in the series was written by a different author; contributors included Booker T. Washington, the pre-eminent black intellectual of his time and an adviser to several American presidents.

The whole set cost $25. Purchasers sent in $1 up front and then paid $2 a month for a year.

Advertisement for Diet Books


One thing that hasn’t changed is the market for diet books. Do you want to know “what foods cause constipation, indigestion, fermentation, and rheumatism?” For only $3, you can get a little set of diet books by Eugene Christian, “recognized as the world’s greatest authority on food and its relation to the human system.” The ad doesn’t say if he’s recognized as an authority by anyone other than himself. But send no money: “Either return the books within five days or send $3.” That’s called the honor system, folks. It hasn’t been seen in America for a long time.

Better known than Eugene Christian was Horace Fletcher (1849-1919), who advocated chewing food 32 times before swallowing it, a practice called “Fletcherizing.” His nickname was “The Great Masticator.” To this day, nobody knows if the nickname was intended to make fun of him. Ruminate on that, if you cud.

Progress of the World


The news analysis section discusses the Great War, of course: It had started in July 1914 and didn’t become “World War I” until there was a “World War II.” The article refers to it as “the European war,” as Americans often called it before the United States entered the war.

It’s true that when the Great War started, a lot of people thought it would be short: “German troops were assured that they would be home in time for Christmas.” Nobody expected it to turn into the continent-wide slaughterhouse that was arguably the beginning of the end for Western civilization. So many of our best men were killed that the name people applied to the generation of the 1920s — “the lost generation” — might more aptly have been given to the ones who died.

Even worse, the draconian surrender terms that the Allies imposed on Germany in the Treaty of Versailles (1919) made World War II almost inevitable, as John Maynard Keynes warned in The Economic Consequences of the Peace. The article quotes the German Chancellor addressing the Reichstag (the German parliament) in 1915:

“If our enemies make peace proposals compatible with Germany’s dignity and safety, then we shall always be ready to discuss them.”

The entry of America into the conflict tilted the balance enough that the victorious Allies could and did humiliate the Germans. Wounded pride is always dangerous, and in that case, it proved to be deadly beyond imagination.

War Profiteers


Profit is not the only reason wars keep occurring, but it’s one of them. There’s money to be made by selling arms to all sides of every conflict.

War profiteers get richer and politicians pose as war heroes without ever getting near a battlefield. The only losers are voiceless: the dead, the wounded, and the taxpayers.

Educating Immigrants for America


Americans in 1916 assumed that immigrants would forsake their lands of origin and become Americans. The popular metaphor was that of the United States as a “melting pot,” in which distinct nationalities and cultures joined the dominant American culture and added to it.

The idea that immigrants should remain separate from or even hostile to the American mainstream would have been dismissed as foolish and harmful since it leads directly to social strife. We see the results all around us in 2016. Of course, assimilation was easier to wish for than to achieve:

“The process has too often been irregular and haphazard. Many who should have become citizens have failed to qualify because of the lack of proper encouragement and assistance.”

Reducing Illiteracy


Another thing that hasn’t changed much is the problem of illiteracy. We live in an interconnected world and an increasingly technological culture. People who can’t read adequately or at all are excluded from participation in most events and issues. That was less true in 1916, but it was still a problem for a democratic republic:

“The illiteracy of millions of unschooled men and women — children in mind, though adult in years.”

Current events on both sides of the Atlantic provide ample evidence that plenty of “children in mind, though adult in years” are still with us.

Finding a Sensible Cigarette


Educated people seldom smoke anymore. That’s more because of social pressure than because tobacco is unhealthful, a fact that has been known at least since the early 1600s. In 1604, King James VI of Scotland (James I of England) wrote about the dangers of tobacco, and he probably wasn’t the first one to do so.

Until the 1960s, smoking was as much in fashion as it is now out of fashion. An unintentionally funny radio commercial of the late 1940s reported on a survey of 114,000 doctors. The survey discovered that more doctors smoked Camel cigarettes than any other brand. The commercial suggested that for good health, everyone should “do what doctors do” and smoke Camels.

Back Cover: Buying a Good Car


Finding a good car is still a problem: That hasn’t changed. Reliability is crucial, especially as the car gets older:

“A Pierce-Arrow grows old as gracefully as a good oriental rug or a Chippendale chair.”

In 1916, automobiles were still a luxury item: notice the chauffeur in the picture. It wasn’t until the 1920s that millions of average families owned cars.


  1. Stoddard’s ideas are shocking today, but they were entirely mainstream in his time. He held views similar to those of U.S. President Woodrow Wilson and Planned Parenthood founder Margaret Sanger
Posted by: N.S. Palmer | June 18, 2016

Harvard Classics: An Education on a Bookshelf


They are little remembered today, but they were a publishing sensation in the early 20th century and sold almost half a million copies.

The Harvard Classics’ 51 volumes include some of the greatest achievements of Western thought and literature up to the end of the 19th century. They cover science, philosophy, literature, poetry, religion, history, economics, medicine, and a range of other topics.

You can still get the complete set of printed books on the web for $300 or so. And now they’re available in e-book format, easy to download and dirt cheap.

The idea for the set came from a speech by Charles Eliot (1834-1926), a chemist who was president of Harvard University from 1869 to 1909. In a public lecture, Dr. Eliot had said:

“A five-foot shelf would hold books enough to afford a good substitute for a liberal education to anyone who would read them with devotion, even if he could spare but fifteen minutes a day.”1

Two book editors from the Collier publishing company challenged him to make good on his claim. In response, Dr. Eliot worked with a team of scholars to select the best works and the best translations for a general audience. He then got permission from Harvard to use its name on the set of books. Collier published the set.


In addition to the set’s 51 volumes, a 52nd volume serves as a “Reading Guide.” It lists a short reading for each day; at the end of the year, readers have learned something about all the subjects of the set. The 51st volume is one of my favorites: it collects lectures by leading thinkers of the late 19th century on history, science, philosophy, and other subjects.

It’s also a marvelous reference set. If you combine it with The Great Books of the Western World (published by the Encyclopaedia Britannica) and a few other books, you have at your fingertips most of the greatest achievements of human thought.

My bookshelf has both sets, along with books by Pico della Mirandola, F. Scott Fitzgerald, Boethius, Thomas Aquinas, and a Hebrew Bible. That’s almost everything a person would need to acquire a liberal education, though I have more specialized books (mathematics, economics, and so forth) in other bookcases.

If you’ve got a little money and 15 minutes a day, very few investments can give you a more valuable result.

Works Cited

Eliot, C., editor (1909), The Harvard Classics. New York: P.F. Collier & Son.


  1. Eliot, C. (1909), Reading Guide, p. 7. At the time Dr. Eliot wrote, the vast majority of Americans did not go to college and many did not finish high school. 
Posted by: N.S. Palmer | June 16, 2016

Rawls: What’s Old Is New Again


In his book Political Liberalism, the influential 20th-century philosopher John Rawls channeled some common sense from the 19th century:

“The most intractable struggles … are confessedly for the sake of the highest things: for religion, for philosophical views of the world, and for different moral conceptions of the good. We should find it remarkable that, so deeply opposed in these ways, just cooperation among free and equal citizens is possible at all.”

Rawls adds:

“In fact, historical experience suggests that it rarely is.”1

Our recent experience also suggests that it rarely is.

But it’s nothing new. As Walter Bagehot wrote in Physics and Politics (1872):

“A nation means a LIKE body of men, because of that likeness capable of acting together, and because of that likeness inclined to obey similar rules.”2

He adds that to mix incompatible cultures and worldviews in the same society:

“… tended to confuse all the relations of human life, and all men’s notions of right and wrong; or by compelling men to tolerate in so near a relation as that of fellow-citizens differences upon the main points of human life, led to a general carelessness and scepticism, and encouraged the notion that right and wrong had no real existence, but were mere creatures of human opinion.”3

Exactly as we see in contemporary America and Europe.

Paraphrasing George Santayana, “Those who do not learn from history are doomed to repeat it.”

Unfortunately, those who do learn from history are doomed to watch those who don’t learn from it commit the same avoidable and catastrophic errors that brought down great nations of the past.

Works Cited

Bagehot, W. (2007), Physics and Politics. New York: Cosimo Classics. Kindle edition.

Rawls, J. (1993), Political Liberalism. New York: Columbia University Press. Kindle edition.


  1. Rawls, J. (1993), loc. 727. 
  2. Bagehot, W. (2007), loc. 171. 
  3. Ibid., loc. 335. 
Posted by: N.S. Palmer | June 9, 2016

Things Intolerant People Never Say


Everyone claims to favor tolerance. And they do.

Unfortunately, the tolerance they favor is only for them, and for ideas of which they approve. Anyone or anything they don’t like is fair game to be vilified and suppressed.

I don’t want to beat up on any particular group of people, though some seem more hypocritical and intolerant than others. The real problem is human nature.

We all have our own narrow viewpoints. We tend to believe that anyone who disagrees with us is wrong. And since our biological nature evolved in conditions vastly different from those today, we often react to disagreement as if it were a threat to our physical safety. Our nervous system lights up like a Christmas tree. Adrenaline pours into our bloodstream. Our heartbeat quickens. Our circulatory system diverts oxygen from our brains to our muscles.

We are ready to fight or flee, depending on how we evaluate the danger from what suddenly seems like a mortal enemy.

And all he said was, “The Bruins might lose to Notre Dame this time.”

There are certain things that intolerant people — that is, most people most of the time, and almost all of us some of the time — never say. Here are a few of them.

“I might be wrong.”

“I might be wrong” simply recognizes that we don’t know everything and our judgment isn’t perfect. When we’re absolutely certain of our own rightness, we often feel entitled to persecute those who disagree or who act contrary to our belief.

“I might be wrong” is not the same as saying “I am wrong now,” “I am usually wrong,” or “There’s no right or wrong opinion.” We can stand by our beliefs. Sometimes we are right, and there often is a correct opinion. But we are blindly arrogant if we deny that maybe, possibly, sometimes, we might be wrong and the other people might be right.

We should be especially suspicious of beliefs that we want to be true, that give us money and status, or that fit what we already believe. In those cases especially, we are biased and we might be wrong.

“It’s none of my business.”

The American writer H.L. Mencken defined Puritanism as “the haunting fear that someone, somewhere might be happy.” Most things that other people do are none of our business.

If someone wants to have a hundred body piercings, I will think (but not say) that he or she is nuts; but I won’t try to stop it, nor will I claim that I have any right to do so. As long as they don’t harm others, what people do with their own bodies is none of my business unless they’re a close family relation of mine. Even then, the decision usually must be left up to them.

There are borderline cases, of course, and practicality is sometimes an issue. I don’t approve of how Islamic countries treat women, nor of the barbaric punishments they inflict for minor crimes and even for non-crimes. On the other hand, I don’t live in an Islamic country, so I am not required to oppose those things as I would if they happened in my own country, nor do I have any power to stop them even if I did oppose them. For all practical purposes, they are none of my business.

The Constitution of the United States originally took the same approach. Apart from basic rights and issues that affected the whole country, it left most decisions to state and local governments. That prevented the kind of bitter disputes we see today, since there’s no way to reach a national consensus on some moral issues. But the U.S. national government is now controlled by people who never think that they might be wrong, so they think that everything is their business. That is, by the way, a textbook example of a totalitarian government.

“What’s the harm?”

When people do things we don’t like, we unconsciously start looking for reasons to stop them from doing what we don’t like. Unsurprisingly, we find them.

That kind of moral rationalization is familiar to psychologists. Researcher Jonathan Haidt told people stories that were morally troubling but in which no one suffered harm, such as a morgue attendant taking home choice cuts to cook for dinner. When the people condemned the actions, Haidt challenged them to justify the condemnation:

“The biggest surprise was that so many subjects tried to invent victims. I had written the stories carefully to remove all conceivable harm to other people, yet in 38 percent of the 1,620 times that people heard a harmless-offensive story, they claimed that somebody was harmed … Most of these supposed harms were post hoc fabrications. People usually condemned the actions very quickly. But it often took them a while to come up with a victim …”1

That kind of post hoc rationalization was on display after a recent disruption at DePaul University. Conservative writer Milo Yiannopoulos was giving a speech in which he challenged a variety of leftist ideas. Several people disagreed, so they disrupted the event and took over the stage while campus security guards did nothing. The ringleader of the disruption later said that Yiannopoulos’s opinions “threatened his safety” — i.e., he fabricated harm to justify his actions.

“Would my actions cause greater harm?”

Stealing is wrong. It causes harm. In Saudi Arabia, the punishment is to have your hand cut off. What’s wrong with that picture?

What’s wrong is that the punishment causes vastly greater harm than the crime itself.

Likewise, suppose that some people do things of which we don’t approve and, unlike at DePaul, they cause a small amount of real harm. Does that entitle us to threaten them, slander them, and burn down their houses? It might prevent repetition of their harmful acts, but we would be guilty of causing far greater harm than the people we “punished.” We want to have our cake and sue it, too.

“Why do they believe that?”

This is last because it’s arguably the most difficult. Other people’s ideas, psychology, values, and behavior have been influenced by their life histories. We don’t know what experiences led them to think and act as they do. It’s very difficult for us to put ourselves in their place and imagine how the world looks to them. It’s even more difficult to imagine how they feel about situations.

In a way, this is the converse of “I might be wrong.” Just as our viewpoints have been shaped by our experiences, their viewpoints have been shaped by theirs. Even if they are wrong, we should do our best to understand why they think they’re right.

In peaceful situations, it’s the moral thing to do. In violent situations, such as repeated terrorist attacks, it’s a practical thing to do. If we understand people, their viewpoints, motives, hurts, and fears, we can deal with them better, more fairly and effectively.

So the next time you encounter disagreement, don’t just get emotional and start yelling. Stop and think:

  • Could you be wrong?
  • Is it any of your business anyway?
  • What’s the harm?
  • Would you cause greater harm?
  • Why do they believe that?

Works Cited

Haidt, J. (2012), The Righteous Mind. New York: Pantheon Books.


  1. Haidt, J. (2012), p. 28. 
Posted by: N.S. Palmer | June 8, 2016

Follow Your Heart, But Use Your Head


In life, should you follow your heart or your head?

It’s an old dilemma. In his dialogue Phaedrus, the ancient Greek philosopher Plato likened human nature to a chariot, pulled by two powerful horses but guided by a human driver. The horses represent our passions, while the driver represents our reason. Plato thought that a good person followed reason and kept the passions under careful control.

That dilemma came to mind earlier this week. One of my favorite bloggers wrote an eminently reasonable analysis of the controversy over the shooting of a gorilla at the Cincinnati Zoo. A child had climbed into the gorilla’s enclosure. Zookeepers did not know if the gorilla would hurt the child, but they were unwilling to take a chance. They made the right decision.

I commented that the blog post was sensible and compassionate, “as always,” but changed it to say “as usual.” It seems to me that the blogger sometimes lets her good heart overrule her good judgment. I probably have the opposite flaw. Reasonable people can disagree.

Later that day, a YouTube channel posted a video that seemed to embody the contrary error: “Don’t Follow Your Passion.” The speaker, television personality Mike Rowe, warned viewers:

“Just because you’re passionate about something doesn’t mean you won’t suck at it.”

Instead, Rowe advised viewers to seek “dirty jobs” that pay well and that they eventually could come to love.

It’s not terrible advice, but it misses something important. If you have a real passion, you should follow it as long as you are aware of the risks, willing to accept them, and won’t unfairly burden other people.

Rowe gives the example of contestants on the television show “American Idol,” who he says are genuinely shocked when their passion is not met with success. But he doesn’t seem to ask what they have a passion for.

Many people have a passion not for music but for fame and approval. They doubt their own worth, so they need constant reassurance from an audience. If that is true of the contestants, then they often do “fail” to get what they want.

However, if they have a passion for music, then fame and approval are merely nice, not necessary. Their success lies in the music they create. If other people like it, that’s great. If they don’t like it, then it’s still the music that counts. That is a true passion.

Music critics hated Beethoven’s Third Symphony at first, but eventually they came to appreciate it. Beethoven didn’t care one way or the other what the critics thought. He had a passion.

At the premiere of Stravinsky’s “Rite of Spring,” the audience was so offended by the music that they almost rioted. Stravinsky didn’t care what they thought. In that case, I think the audience was right, but my opinion wouldn’t have mattered to Stravinsky. He had a passion, and he had to follow it. As long as I don’t have to listen to his music, I say, “good for him.”

Plato’s analogy is informative. Without a driver (reason), the horses (passion) might run the chariot off a cliff. But without horses, the chariot can’t go anywhere. Reason by itself isn’t enough: passion gives us the motive power to get somewhere.

The same principle applies to most situations in life. If we make life decisions and ignore reason, we get unpleasant results. But if we ignore passion, we can’t know which results will be pleasant or unpleasant. Likewise, if we make moral judgments and ignore reason, then our passions lead to bad conclusions. But if we ignore passion, we can’t know what a “good” conclusion would be.

If you have a passion, don’t deny it. First, use your head. Understand the risks. Are you willing to accept “failure” if that occurs? What would you do then?

Is your passion strong enough to sustain you no matter what happens? Then by all means, follow your passion. Most people aren’t lucky enough to have a passion that strong. You are blessed. You are truly alive.

Posted by: N.S. Palmer | May 29, 2016

What Helps or Hurts Free Speech


Mathematicians are lazy.

I’m allowed to say that, because I’m a mathematician. I’ve got the degree, I’ve got the calculator, and I’ve got a refereed publication. Yes, just one. I didn’t claim that I was an important mathematician. I’m much too lazy for that.

Because mathematicians are lazy, we like to solve problems in ways that are simple and elegant. That’s a lot less work than complicated and clumsy.

Freedom of speech isn’t a mathematical problem, but it is a problem with multiple solutions. Some solutions are simple and elegant, some are complicated and clumsy.

Western governments now enthusiastically promote solutions that are complicated and clumsy. The results are disastrous for anyone who cares about freedom.

Those thoughts came to mind as I was reading Mick Hume’s article in The Spectator: “No, thank you, officer, I will not think before I speak.” Hume writes:

The Greater Glasgow Police (who some might imagine would have their hands full pursuing actual offenders) recently tweeted a warning to all social media users: ‘THINK before you post or you may receive a visit from us this weekend.’ The force spelt out what it wanted Glasgow tweeters and posters to THINK about in order to avoid the knock on the door: T – is it true? H – is it hurtful? I – is it illegal? N – is it necessary? K – is it kind?

Now, I’m generally in favor of obeying the law. I support telling the truth, avoiding gratuitous insults, and being kind. But there are two issues here:

  • Apart from breaking the law, none of those things should concern the police.1
  • What qualifies as insulting, hurtful, or unkind depends on the context and on who’s listening.

Consider a relatively homogeneous society, such as Japan (98.5 percent ethnic Japanese) or China (91.5 percent Han Chinese). Or you could consider the United States of 50 years ago (88 percent European-American).

Are there problems and injustices? Certainly. All societies have them. You can’t have a perfect society of imperfect people. Ever. The real questions are about how to minimize injustice to minorities while maximizing the welfare of the majority. Utilitarianism tells you to maximize welfare, since every person’s welfare counts equally and there are more people in the majority than in minorities. Minimizing injustice acts as what the late Robert Nozick called a “side constraint” on what you are allowed to do: it prohibits things like murder even if they would maximize the welfare of the majority.
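The interplay between the utilitarian rule and the side constraint can be sketched in a few lines of Python. The policies and welfare numbers here are invented purely for illustration:

```python
# A toy sketch of utilitarian choice under a Nozick-style side constraint:
# pick the option with the greatest total welfare, but rule out any option
# that violates a constraint, no matter how much welfare it would produce.
# All names and numbers below are hypothetical.
options = {
    "policy_a": {"welfare": 90, "violates_rights": False},
    "policy_b": {"welfare": 120, "violates_rights": True},  # forbidden outright
    "policy_c": {"welfare": 75, "violates_rights": False},
}

# Side constraint first: options that violate rights are simply off the table.
permissible = {name: o for name, o in options.items() if not o["violates_rights"]}

# Utilitarian rule second: maximize welfare among what remains.
best = max(permissible, key=lambda name: permissible[name]["welfare"])
print(best)  # policy_a: highest welfare among the permissible options
```

Note that policy_b would win on welfare alone; the constraint, not the arithmetic, is what excludes it.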

Because one culture and one part of the population are overwhelmingly dominant, the societies are more harmonious than they would be otherwise. The vast majority of people belong to the same group and they agree on most of the big questions of life. As British writer Walter Bagehot observed in Physics and Politics:

“A nation means a LIKE body of men, because of that likeness capable of acting together, and because of that likeness inclined to obey similar rules.”2

You can even leave out ethnicity. If a dominant majority of a society’s people agree about big issues, embrace a common history, celebrate the same holidays, and follow religions that are at least amicable relatives of each other, then they agree on enough that they can speak their minds in public. The odds of causing offense are very low, and social trust is very high.3

However, if a society has multiple non-dominant groups with incompatible values, different histories, different holidays, and different religions, they almost can’t help offending each other. All you can do to keep the peace is try to intimidate people into shutting up about anything that might offend the most easily offended groups. And what about the nicer groups that are less likely to take offense and almost never riot? An adage says “the squeaky wheel gets the grease.” They don’t squeak, so they don’t get any grease. They get shut up.

Having a society is something like being married. You don’t want a spouse who’s exactly like you and agrees with you about everything, because that’s just boring. On the other hand, you also don’t want a spouse who almost always disagrees with you, who prefers a different lifestyle, and who loves and hates different things. That’s a prescription for non-stop marital discord. You want some difference, but you want it with a lot of basic agreement and similarity.

May I suggest that trying to make society even more diverse and bitterly divided while trying to make nicer groups shut up about it is not a simple and elegant solution to the free-speech issue? It’s too much work and it makes almost everyone unhappy.

Works Cited

Bagehot, W. (2007), Physics and Politics. New York: Cosimo Classics.

Putnam, R. (2001), Bowling Alone: The Collapse and Revival of American Community. New York: Simon & Schuster.


  1. Note that slander and libel are against the law, although they are civil rather than criminal offenses. 
  2. Bagehot, W. (2007), loc. 170. 
  3. Putnam, R. (2001), loc. 5468ff. 
Posted by: N.S. Palmer | May 21, 2016

Dogs, Cats, and Concepts


“Don’t it always seem to go, That you don’t know what you’ve got ‘til it’s gone.”
— Joni Mitchell

Dogs and cats can teach us something about how humans think.

And about how to prevent them from thinking.

Consider the statement that “Dogs bark and cats meow.”

How many ways are there to say it?

Apart from stylistic variations, there’s only one way to say it. It’s easy. It’s efficient. It takes five seconds and not much brainpower.

Using a single word to denote a large number of similar things makes it easy to talk about them. Using a single idea corresponding to the word makes it easy to think about them. That’s a big part of why we have words and ideas.

But suppose for a moment that a few Chihuahuas find that statement insulting. They don’t like being lumped together with German Shepherds, Bulldogs, and Collies.

A few Siamese cats are also in high dudgeon about it. They don’t like being lumped together with British Shorthairs, Persians, and American Bobtails.

Therefore, to avoid offending Chihuahua dogs and Siamese cats, we will henceforth identify the dog and cat breeds any time we talk about dogs and cats.

There are 340 breeds of dog and 73 breeds of cat. Suppose that you now want to say (or think) “Dogs bark and cats meow.”

You can’t say that anymore. It’s hurtful and politically incorrect. It means you hate Chihuahuas and Siamese cats.

Instead, you must enumerate all the combinations of breeds:

  • “Labrador Retrievers bark and British Shorthairs meow.”
  • “Siberian Huskies bark and Bengal cats meow.”
  • “Beagles bark and Turkish Angora cats meow.”
  • … and so on.

By the fundamental counting principle, that’s 340 x 73 = 24,820 statements you’ll have to make in order to say “Dogs bark and cats meow.” Any shortcuts will invite the full wrath of the Unholy Inquisition.
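If you doubt the arithmetic, a few lines of Python will generate the statements (with placeholder breed names, since listing all of the real ones would prove my point a little too well) and count them:

```python
from itertools import product

# Placeholder breed names standing in for the real lists of 340 and 73.
dog_breeds = [f"dog breed {i}" for i in range(340)]
cat_breeds = [f"cat breed {i}" for i in range(73)]

# Fundamental counting principle: one statement per (dog breed, cat breed) pair.
statements = [f"{d}s bark and {c}s meow." for d, c in product(dog_breeds, cat_breeds)]
print(len(statements))  # 24820
```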

In order to make Chihuahuas and Siamese cats feel better about themselves, we have made it almost impossible to speak or think effectively about dogs and cats.

Humanity has the concepts it does because they have proven practical for millennia in all kinds of societies. The concepts that weren’t practical have disappeared because nobody wanted to use them, or because societies that used them died out.

What does this teach us about human thought? Let me give you another example. Here’s a list of numbers:

8 3 5 0 9 2 7 4 1 6 2 9 4 7 0 5 8 1 6 3
Did you read it? Now cover the list and write it on a piece of paper. Very few people can do it. But let me give you the same numbers again:

00 11 22 33 44 55 66 77 88 99
You can do it this time. Why? Two reasons.

First, you were able to group the numbers, which reduced from 20 to 10 the number of items you had to remember. Concepts are a way you can group related cases, just as you did with the list of numbers. It makes your thinking easier and more efficient.

Second, you saw the pattern (0 to 9, with each number repeated). It further reduced the number of items you had to remember from 10 to three: the starting number, the ending number, and the pattern.

You could do that because you’re intelligent. Together with opposable thumbs and pornography, it’s what distinguishes humans from the rest of the animal kingdom.
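The grouping and pattern tricks can be sketched in a few lines of Python; the digit list below simply follows the pattern described above (0 to 9, each number repeated):

```python
# Chunking sketch: the same compression the reader performed mentally.
digits = [d for d in range(10) for _ in range(2)]  # 0,0,1,1,...,9,9
assert len(digits) == 20                           # 20 raw items

# Grouping: pairs cut the count of items to remember from 20 to 10.
pairs = [tuple(digits[i:i + 2]) for i in range(0, len(digits), 2)]
assert len(pairs) == 10

# Pattern: the whole list compresses to three facts.
start, end, repeats = 0, 9, 2
rebuilt = [d for d in range(start, end + 1) for _ in range(repeats)]
assert rebuilt == digits  # three remembered facts reproduce all 20 items
```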

The bottom line is this:

  • There is a presumption in favor of using concepts that have proven themselves over millennia in all kinds of societies. The reason they’re still around is that they work well.
  • Long-established concepts typically simplify our thinking and make it more efficient.
  • If you want to prevent people from thinking clearly about a subject, then require them to use concepts that are unintuitive, numerous, and vastly more complicated.

Copyright 2016 by N.S. Palmer. May be reproduced as long as byline, copyright notice, and URL are included.

Posted by: N.S. Palmer | May 7, 2016

Four Fallacies and an Oversight


“Politics is a strife of interests masquerading as a conflict of principles.”
— Ambrose Bierce (1842-1914)

The tragedy in politics is that real principles are at stake, but people get so obsessed with “winning” that they lose sight of them. The same applies to moral discussions in general.

That’s not going to change, but we can at least try to avoid simple errors in reasoning.

Toward that end, I here present five common errors that distort political, moral, and religious arguments: four fallacies and an oversight.

1: The Naturalistic Fallacy

The naturalistic fallacy1 is based on a simple and seductive belief:

Reality determines morality.

The idea is that just by looking at facts about the world, you can deduce what is right or wrong. Your reasoning is so simple and clear that its validity is obvious. Anyone who disagrees with you must be either evil, stupid, or mentally ill.2

The biggest practical problem is that different people use the same facts to justify vastly different moral prescriptions. For example, some people are rich and others are poor. From that fact, you can argue that the rich deserve their wealth because they are more virtuous than the poor. You can also argue that the rich (virtuous or not) have more than they need, and should pay higher taxes to improve the lives of the poor.

You can make similar arguments about almost anything: sex, race, religion, nationality, and so forth. Until the late 20th century, people of European ancestry dominated the world. Should they have? Did their success mark them as somehow superior to the peoples they conquered? They thought so. Others disagreed. Societies throughout history have persecuted gays. Does that mean they deserve to be persecuted? Or does it just mean that majorities will latch onto any excuse to persecute minorities? Evolutionary psychologist Satoshi Kanazawa remarks:

“From a purely scientific perspective, murder and rape are completely natural for humans, and getting a Ph.D. in evolutionary psychology is completely unnatural … Natural decidedly does not mean good, valuable, or desirable, and unnatural does not mean their opposites.”3

In practical terms, the fallacy enables anyone to argue for almost anything on the ground that it’s natural. That makes it pretty useless as an argument.

Getting from “Is” to “Should”

But that’s only the practical problem. There’s also a logical problem in the naturalistic fallacy.

To clarify, let’s contrast a simple non-moral argument with a simple moral argument. Here’s the non-moral argument:

  • Premise 1: John is in the kitchen.
  • Premise 2: The kitchen is in the house.
  • Conclusion: John is in the house.

The conclusion contains “John,” “is,” “kitchen,” and “house,” all of which occur in the premises.

Here’s the moral argument:

  • Premise 1: John is in the house.
  • Premise 2: The house is on fire.
  • Conclusion: John should get out of the house.

Do you see anything missing?

The conclusion refers to John and the house, both of which are in the premises. But where did that “should” come from?

“Should” is a moral word. It has no factual counterpart. Nothing in the world corresponds to “should,” nor is there any action you can take that would constitute “should-ing.” What you really have is an argument like this:

  • Premise 1: John is in the house.
  • Premise 2: The house is on fire.
  • Premise 3: If John stays in the house, he will be burned.
  • Premise 4: John should avoid being burned.
  • Conclusion: John should get out of the house.

The more accurate version of our argument has three factual premises and one moral (“should”) premise, all leading to a moral conclusion. But the moral premise isn’t a logical consequence of the factual premises. What does it mean, and where does it come from?

A short explanation goes like this: If John is burned, he will suffer pain and possibly die. We can imagine it happening to us. We have felt pain and we didn’t like it. We have probably suffered the loss of someone who died. So all those things recall unpleasant feelings in us. We prefer to avoid such feelings. When we imagine John in the burning house, we imagine the pain he might suffer. We want him to get out of the house. We think he should. David Hume puts it this way:

“[Reason is] sufficient to instruct us in the pernicious or useful tendency of qualities and actions; it is not alone sufficient to produce any moral blame or approbation … A sentiment should here display itself, to give a preference to the useful above the pernicious tendencies. This sentiment can be no other than a feeling for the happiness of mankind, and a resentment of their misery.”4

The “should” expresses our feelings about the situation. It’s what we would say to John if he were close enough to hear us: “John, get out of the house!” And getting out of the house is reasonable, but it’s not proven by the non-moral premises of the argument.5

Committing the Fallacy by Accident

If you’re not careful when you criticize the naturalistic fallacy, you might commit it yourself.

Suppose you say, “We shouldn’t deduce moral conclusions from non-moral premises.” If someone asks why not, you might reply that because of the way the world is, such deductions are unreliable. But that reply commits the very fallacy it criticizes: you have derived a moral conclusion (“We should not do X”) from non-moral premises about reliability.

What you need to say is that deriving moral conclusions from non-moral premises can lead to contradictory results or to moral statements with which we disagree. Therefore, such arguments are unreliable by the standards of consistency with logic and with our moral beliefs. Whether that’s good or bad is up to the individual. The answer seems obvious, but it’s not proven. It’s a choice.

And as an aside, that’s what morality is: It’s a choice. It’s not proven based on non-moral facts. It asks all of us the question: “What kind of person do you want to be?” Our answer determines how we will try to live our lives.

2: The Moralistic Fallacy

The moralistic fallacy is the opposite of the naturalistic fallacy. It assumes that:

Morality determines reality.

The moralistic fallacy assumes that whatever is morally desirable must be true.

For example, suppose we believe (as I do) that all people should be treated equally by the law. From that idea, we might conclude that all people are in fact equal in every respect.

Unfortunately, it’s not true. I could train for 18 hours a day but could never become a good gymnast, simply because I lack the innate ability. Others could make similar efforts and never become good mathematicians. Still others, even if they have the ability, just aren’t interested in such careers. People differ. That used to be called “diversity” before we redefined the word to mean something else entirely.

But that’s not the worst consequence of the moralistic fallacy. The worst consequence comes from a logically valid type of argument called Modus Tollens. It goes like this:

  • Premise 1: If X is true, then Y is true.
  • Premise 2: Y is not true.
  • Conclusion: Therefore, X is not true.

An example of Modus Tollens is:

  • Premise 1: If it is raining, then the streets are wet.
  • Premise 2: The streets are not wet.
  • Conclusion: Therefore, it is not raining.
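Because Modus Tollens is purely a matter of truth-functions, its validity can be verified by brute force over every truth assignment. A short Python check (an illustration, not anything you’d find in a logic text):

```python
from itertools import product

def implies(a: bool, b: bool) -> bool:
    # Material implication: "if a then b" is false only when a is true and b is false.
    return (not a) or b

# Modus Tollens: from "X implies Y" and "not Y", conclude "not X".
# Check every truth assignment: whenever both premises hold, the conclusion holds.
valid = all(
    not x                          # conclusion: X is not true
    for x, y in product([True, False], repeat=2)
    if implies(x, y) and not y     # both premises hold
)
print(valid)  # True
```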

The moralistic fallacy makes it seem as if any denial of politically correct dogma is a denial of more reasonable moral beliefs. For example:

  • Premise 1: If all people should be treated equally by the law, then all people are equal in every respect.
  • Premise 2: It is not true that all people are equal in every respect.
  • Conclusion: Therefore, it is not true that all people should be treated equally by the law.

Premise 1 is false, so the argument is unsound and the conclusion doesn’t follow. We can believe that people differ but also believe that they should be treated equally by the law.

But since the moralistic fallacy makes them believe morality determines reality, “social justice” mobs scream for the heads of any infidels who deny Sacred Doctrine. They think such denials imply immoral ideas, and that people who hold such ideas should be fired, vilified, and put under a P.C. fatwa for the rest of their mortal existence.

3: The Rationalistic Fallacy

The rationalistic fallacy assumes that:

Logic and evidence determine my beliefs.

People who commit the fallacy assume that they, themselves, hold beliefs based solely on logic and evidence. Other people are within the golden circle only if they agree with the self-styled rationalists. If they disagree, they are presumed to be fools or worse.

This fallacy betrays a curious lack of self-awareness. Everyone who has ever believed much of anything has sometimes turned out to be wrong, and people who disagreed with them sometimes turned out to be right.

Even in our cosmopolitan era, most of us work and socialize with people similar to us. Our friends and co-workers tend to have comparable education, similar jobs, similar backgrounds, and to live in similar neighborhoods. More than our co-workers, our friends tend to be the same race, religion, and nationality as we are. Other people in our group tend to think like we do, have the same values as we do, and believe most of the same things as we do. Group members reinforce each other’s beliefs and make it seem as if almost everyone believes the same things.

In simple cases, our beliefs sometimes are based solely on logic and evidence. If you believe that there are 10 apples in a barrel, but I count them in front of you and show that there are only nine, you will change your belief. Counting apples is a simple case, with no other factors that introduce any uncertainty. Moreover, the number of apples in a barrel doesn’t matter to you emotionally unless you’re starving or we have a bet. If the barrel has 10 apples or nine, either is okay with you.

In complex cases, our beliefs depend on a larger amount of evidence. We can’t personally verify most of the evidence, and some pieces of evidence conflict with others. We have to decide which evidence to believe and how significant it is to our conclusion. Our emotions bias our judgment, as do our previous experiences and beliefs.

Moral and social issues are especially vulnerable to the rationalistic fallacy. People want to think of themselves as morally good, and they also want to be seen by others as morally good. Because most of their associates have the same beliefs, they want to adopt conforming beliefs so they are accepted by the group. The desire for acceptance biases their judgment and makes them evaluate evidence differently than they would otherwise, but they still believe they’re just being rational.

Our existing stock of concepts and stories also biases how we understand new information. If we see immigration through the lens of Europe in 1939, then all immigrants look like Jews fleeing the Nazis. On the other hand, if we see it through the lens of terrorist attacks in Belgium, France, and the United States, then all immigrants look like Islamic terrorists. Such initial perceptions exert a powerful bias on how we assess evidence and on the conclusions we reach.

We can partially overcome such bias, but we must make a deliberate effort to do so. We can’t do it if we think we have no bias to overcome.

4: The Existentialist Fallacy


The existentialist fallacy6 is based on another seductive assumption:

Reality is whatever you want it to be.

The fallacy is only loosely derived from the philosophy of existentialism, which says that humans can and must define the meaning of their lives.

Inanimate objects cannot define themselves. They simply are what they are. For example, a coffee cup must have certain characteristics in order to be a coffee cup: those characteristics are its “essence.” Before a coffee cup can exist, its essence must exist; otherwise you can’t make a coffee cup. In existentialist argot, the cup’s essence precedes its existence.

Existentialists say that humans have no fixed essence as people. Humans must define their essence by the choices they make. Therefore, their existence precedes their essence. In a sense, human beings have the power to choose what they are, at least mentally. They choose what kind of character they have, how they live, and what their lives mean. But that’s it. As far as I know, existentialism never said they could choose to be bunny rabbits, have 17 toes, or fly like Superman.

Don’t feel bad if your eyes are glazing over. Existentialism has that effect on people. However, in spite of its eye-glazing obscurity, it does have some valid insights. The existentialist fallacy makes a long leap from those valid insights all the way to what psychologists call magical thinking.

According to the fallacy, if you’re a man who wants to be a woman, then you’re a woman. If we wish everyone had the ability and interest for STEM careers, then they do. If it would be nice for large multi-ethnic, multi-cultural, multi-national, multi-religious societies to be cohesive and harmonious, then they can be. And so forth.

This fallacy also functions as a kind of “get out of jail free” card for other fallacies such as the moralistic fallacy. If you think that moral idea X implies reality Y, but Y obviously isn’t true, then the existentialist fallacy makes it all better: “If you want Y to be true, then it’s true.” Anyone who says otherwise is a hateful bigot who should be ignored.

The fallacy leads to cases such as students who are too intimidated or brainwashed to disagree with a middle-aged white man when he claims to be a Chinese woman or to be seven years old.

Lest you accuse me of committing the naturalistic fallacy, I’m not saying it’s bad for people to be completely unhinged from reality. I’m just saying that such a society can’t last very long. Whether it’s good or bad is up to the people in the society.

5: Overlooking Opportunity Cost

Opportunity cost is an economic concept that people almost never think about. We often hear statements such as:

  • “We should bring more refugees to our country.”
  • “We should spend more money on education.”
  • “We should spend more money on helping the poor.”
  • “We should allow anyone who wants a job to come to America legally.”

In the abstract, those are nice ideas. It’s nice to want to help people. But unless our resources are infinite, which they are not, then helping some people means not helping others. If we spend $10 million to help the poor in Baltimore, for example, it’s $10 million we no longer have to spend on disease prevention or other worthy causes. That’s opportunity cost:

”Choosing one thing in a world of scarcity means giving up something else. The opportunity cost [of a particular choice] is the value of the most valuable good or service foregone.”7
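That definition is easy to sketch in code. The spending options and dollar values below are hypothetical, echoing the Baltimore example above:

```python
# Hypothetical uses for the same scarce $10 million, with their (invented) values.
options = {
    "help_the_poor_in_baltimore": 10_000_000,
    "disease_prevention": 9_000_000,
    "road_repair": 7_500_000,
}

def opportunity_cost(chosen: str, values: dict) -> int:
    """Value of the most valuable alternative foregone by picking `chosen`."""
    return max(v for name, v in values.items() if name != chosen)

# Choosing to help the poor in Baltimore forgoes disease prevention,
# the most valuable alternative.
print(opportunity_cost("help_the_poor_in_baltimore", options))  # 9000000
```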

Opportunity costs are not just monetary. For example, rapes and terrorist attacks in Germany, Belgium, France, and the United States have shown that well-meaning compassion for Islamic migrants can endanger citizens of the countries that allow migrants entry. It might be worth it, but we need to consider that cost in evaluating our policies.

Similarly, U.S. black unemployment is extremely high, which hurts black Americans and causes many social problems. Allowing immigration by millions of Hispanics who compete for the same jobs makes black unemployment even worse. That’s an opportunity cost. It might be worth it, but we need to consider that cost in evaluating our policies.

Ignoring opportunity cost is related to the political problem of concentrated benefits and diffuse costs. When members of special interest groups get enormous benefits from changes in the law, but the costs are widely dispersed so that non-members each pay only a little, the groups want to have everyone ignore the costs and just focus on the benefits. A small number of people each get large benefits, so they are organized and motivated to push for what they want. The majority of people each pay only a little (whether in money or quality of life), so they are unorganized, less motivated, and are easily defeated by the special interest groups. Contemporary society has many examples of the problem.


So there they are: four fallacies and an oversight. Please do not commit them:

  • Don’t assume that the facts determine the moral answers.
  • Don’t assume that the moral answers determine the facts.
  • Don’t assume that your reasoning or anyone else’s is error-proof.
  • Don’t assume that reality is whatever you want it to be.
  • Don’t overlook opportunity costs.

Works Cited

Kanazawa, S. (2012), The Intelligence Paradox: Why the Intelligent Choice Isn’t Always the Smart One. Hoboken: John Wiley & Sons, Inc.

Samuelson, P. and Nordhaus, W. (2001), Economics, 17th edition. New York: McGraw-Hill Higher Education.

Schneewind, J. B., ed. (1983), David Hume: An Enquiry Concerning the Principles of Morals. Indianapolis: Hackett Publishing Company. Kindle edition.


  1. Among professional philosophers, the “naturalistic fallacy” comes in two versions. The first version deduces moral rules from non-moral facts. The second version defines “good” in terms of natural, non-moral properties. The two versions are really the same fallacy, with the first looking at morality in terms of action and the second looking at morality in terms of value. 
  2. A variation on the argument is that God made the world, so whatever exists is good. Calvinist Christians divide humanity into two groups: the saved and the damned. Your membership in one group or the other is proven by the circumstances of your life. If you are rich, then God has chosen you for good fortune. If you’re not rich, then God has rejected you. The rich like that argument better than the poor. 
  3. Kanazawa, S. (2012), loc. 529. 
  4. Schneewind, J.B. (1983), loc. 1777. 
  5. A number of philosophers including John Searle, Hilary Putnam, and Sam Harris have argued that the naturalistic fallacy is not really a fallacy. They contend that we can deduce moral conclusions from non-moral premises. Their arguments are ingenious but ultimately inadequate, and beyond the scope of this blog post. 
  6. I apologize in advance to any experts on existentialism. Existentialism has never made much sense to me, but I’m explaining it as accurately as I can. Corrections will be gratefully received. 
  7. Samuelson, P. and Nordhaus, W. (2001), p. 137. 
