Transcript

Michael Huemer: Nature of Knowledge, Foundations of Morality

On the nature of knowledge, the foundations of morality, the naturalistic fallacy, and political authority

Below is a transcript of an episode of “Vatsal’s Podcast”. You can listen to it using the player above or on Apple, Spotify, YouTube or wherever you get your podcasts.


Vatsal: The following is my conversation with Michael Huemer, professor of philosophy at the University of Colorado, author of such books as Understanding Knowledge, The Problem of Political Authority, and Knowledge, Reality, and Value, and a blogger at Fake Nous. We talked about the nature of knowledge and the foundations of morality.

You are listening to Vatsal’s Podcast, where I, Vatsal, host philosophical conversations with thinkers from a range of backgrounds. This podcast is part of a newsletter, where I also publish original essays on morality, knowledge, AI, and more. I am also the founder of the Universal Open Textbook Initiative, building the world’s largest repository of free, multilingual textbooks. You can find links to all my projects at vatsal.info.

And now, here’s my conversation with Michael Huemer.

Michael Huemer, thank you so much for being here. It’s great to see you.

Michael Huemer: Thanks. Thanks for having me. It’s great to be here.

Vatsal: If I had to describe one theme in all of your philosophical work, it would be the search for a foundation of knowledge, in the tradition of Descartes and Thomas Reid. Where do you see yourself as correcting or updating them?

Michael Huemer: Let’s see. How am I correcting Descartes? Descartes was too much of a rationalist. Descartes’ epistemological view was more this sort of top-down view that you start with some very general abstract principles, and then you try to infer everything starting from those principles. And most knowledge works the other way. You start from particular concrete judgments and then you generalize from there. Usually when you start from an abstract generalization, it’s wrong. And when you try deducing things from it, then you get a whole bunch of errors.

And that includes when you think that you started from something that’s completely self-evident. Like Descartes — he thought there’s a completely self-evident principle that you can’t have more reality in an idea than in the cause of the idea or something like that. And that presupposes that there’s this thing called degrees of reality. And then that’s used to infer the existence of God. And then there’s another self-evident principle that God cannot be a deceiver. And so this is all part of his story about how you’re going to get knowledge of the external world. But the things that philosophers think are completely self-evident are often outright rejected by other philosophers. Almost no one today thinks that Descartes’ arguments for the existence of God are good.

So it’s not just about that particular issue about the existence of God. It’s that in general, this is not the way to build a system of knowledge. It’s not to start from some completely general principle. At one point he says, “I’ve identified some truths like I think and I exist, and now I can infer a criterion of truth,” which is something like if you clearly and distinctly perceive something, then it must be true. And this is also not the right approach. You don’t start with a criterion of truth. You start by learning particular individual things.

What about Thomas Reid? I mostly just agree with Thomas Reid. I think maybe his view was not completely explicit, so I don’t know if he would completely agree with what I’ve said. But I claim that there’s a general epistemological principle: justified beliefs are based upon appearances. So it makes sense to assume that things are pretty much the way they appear until you get specific reasons for thinking otherwise.

Now why is this different from Descartes? It’s partly because I don’t think that the foundations have to be absolutely certain, indubitable, incorrigible truths. The foundations of our belief system are things that we can presume to be true unless and until we have specific reasons for doubting them. You can have reasons for doubting things that were foundational, in which case you can revise them. But you stick with things until you have reason for doubting them.

Also, I didn’t come up with this as a self-evident starting principle. I thought about how I knew things. First, I knew a lot of things in the ordinary way, and then I started reflecting on examples of things that I knew. And I noticed that when I formed beliefs, when I’m trying to find the truth, my beliefs are based upon what seems right to me, which I claim is true of everyone whenever they’re trying to figure out the truth about anything. If you reflect on your own beliefs, you’re going to see that.

Vatsal: How would you describe what “seemings” are to someone hearing the term for the first time? And how do they differ from other sources of knowledge?

Michael Huemer: Well, the second question is they don’t differ from any other sources of knowledge, because there are no other sources of knowledge — everything that you would count as a reasonable candidate for a source of knowledge is some kind of appearance.

You have your knowledge of the external world, which comes from observation. What’s observation? Well, it’s having certain kinds of appearances — sensory appearances. Things look a certain way, sound a certain way, smell a certain way, and you assume that they are that way unless you get reasons for doubting it.

There’s knowledge by memory. That’s how you know about the past. Memory consists of having memory appearances. Going back for a second — perception isn’t just having the appearances, it’s having sensory appearances which are caused by the object. But you could have hallucinations in which you have sensory appearances which are not caused by the real object. And they could look exactly the same — they could be the same experience. So the appearance is the internal state; we only call it a perception if that state was caused by a real object, and we call it a hallucination if it was caused by something else, something in your brain.

So the way that you know about the past is by memory. Remembering things consists of having memory appearances — there’s a certain kind of experience where you seem to remember something, where the memory appearances are hopefully caused by a real event that really happened.

When you reason, that consists of having a certain kind of appearances. There are inferential appearances in which something seems true to you in the light of something else. You consider some information that you call a premise, and then there’s some other idea that you call a conclusion, and it seems to you like the conclusion would have to be true in light of the premise. This is just what’s always going on.

Vatsal: Do other animals have access to it?

Michael Huemer: Animals have sensory appearances. I don’t know if they have inferential appearances, but obviously they see things, they hear things — most of them, except for clams or something like that.

These appearances are a type of mental state which normally causes you to form beliefs, but does not have to. Normally when you believe something, it’s because something appeared a certain way to you. But the appearance is distinct from the belief — it causes the belief. It’s also possible to have a case where you don’t believe the appearances. Suppose that you think that you’re suffering from a sensory illusion — then you would not believe the appearances, but you still have those appearances. There are cases where you know that something is an illusion, but it still appears that way. That shows that the appearance is distinct from the belief. It’s also not just the disposition to form a belief — it’s the mental state that explains why you have a disposition to form a belief.

Vatsal: How do we distinguish genuine, intuitive appearance from temperamental or culturally conditioned reaction?

Michael Huemer: I’m not using the word appearance or intuition as a factive term. Having the intuition that P doesn’t entail P. If somebody has a mistaken intuition, that’s still an intuition. There’s not really a question of how do you distinguish the real intuitions from the fake intuitions — there aren’t any fake intuitions. There are intuitions that are mistaken, but they are still intuitions.

Now if your question is how do we distinguish true from false — well, there’s not a general criterion of truth. What could that be like? Tell me the procedure by which you will always be correct and never make a mistake. There’s no such procedure. The rational procedure is: assume that things are the way they appear until you have specific reasons to doubt it. If you have no specific reasons for doubting your appearances, then you stick with them. But will that guarantee that you will always be correct? No. Nothing will guarantee that you will always be correct. This is human life.

Vatsal: It seems unfalsifiable to say that X is true unless it is false. So is it inevitable that all foundations are like this? Like, for example, in mathematics?

Michael Huemer: First, I didn’t say X is true unless it’s false. I said it’s reasonable to assume that things are the way they appear, unless you have specific reasons for doubting it. Does the appearance then become unfalsifiable? No, exactly the opposite. We just accepted that it was possible for there to be grounds for doubt, in which case you would give it up. So that’s exactly what falsifiability is.

Then you might say, but the claim that you’re justified in believing appearances unless you have a reason for doubting — is that claim itself falsifiable? I don’t know, because it’s necessarily true. There’s some sense in which it can’t be falsified — there’s no way that it could be false.

But is there something that counts as a test of it? Well, what’s the test for epistemological claims in general? What’s the test for philosophical claims? It’s something like: you could consider examples of things that we consider to be justified and things we consider to be not justified, and you could see how well this theory does at classifying them. So there’s this theory of phenomenal conservatism. Some things we think are justified — we think the theory of evolution is justified, we think I’m justified in believing that I have two hands. You could go through some examples and say, does my theory explain these things? If it does, that’s good for the theory. If it doesn’t, that would be bad for the theory. Also give examples of things that are unjustified — I’m not justified in believing that there’s a pink unicorn in front of me. Does the theory explain that? Yes.

Vatsal: On what points is your framework different from naturalism?

Michael Huemer: Well, what do you mean by naturalism?

Vatsal: Given the enormous success of modern science, many philosophers think that some form of naturalism is now part of common sense — that we should assume everything, including morality and consciousness, fits into a scientific physicalist picture unless forced otherwise. So what is wrong with treating methodological naturalism as default?

Michael Huemer: I was trying to get what naturalism is, and then for a second it sounded like naturalism is physicalism — everything is physical. Is that the view? Because I don’t think everything is physical.

Why not? I’m conscious. How do I know that I’m conscious? Because I’m immediately aware of it. In fact, I know that better than I know anything else. As far as I can tell, that’s not physical. If you tell me that consciousness is physical, then I don’t know what you mean by physical anymore, because I would think that’s the paradigm example of something that’s not physical. If that counts as physical, then does physical just mean anything? In which case physicalism would be an empty view.

Also, I think there are abstract objects. I don’t think that the number seven is a physical object, but I think the number seven exists. So not everything is physical. I guess I’m a non-naturalist.

The other thing you mentioned was something about natural science. Sometimes people say naturalism is the view that everything that exists is subject to scientific investigation. I guess that’s true — it depends on what you mean by scientific. If the claim is everything is empirical — no, not everything is empirical. Some things are known a priori. Some things are known by thinking about it rationally. I know that the shortest path between any two points is a straight line, a priori. I know that seven is prime just by thinking about it — I don’t have to do any experiments. So if that counts as non-scientific, then I guess naturalism is false.

Vatsal: But consciousness is affected by changes in the body. Would you agree with that? Your experiences are affected by what happens in your brain?

Michael Huemer: Yeah. That’s not really under dispute.

Vatsal: What do you think about the idea that our abstract concepts like the unicorn arise from natural objects like horses and wings, and we combine them in our mind to derive the concept of a unicorn? And so numbers can also be explained in that way.

Michael Huemer: Well, first, that’s a little bit odd because the concept of a unicorn is not a particularly abstract concept. An abstract concept — I think of things like the concept of the derivative in calculus. That’s pretty abstract. But unicorn is pretty concrete.

Anyway, you have concepts that appear not to refer to anything observable. You have concepts where you have not directly observed that something satisfies that concept. You could say they’re derived from experience in some way. Now this isn’t really on the topic of what we were talking about before — I didn’t say anything about where concepts come from.

But is it in fact true that all concepts come from observation? I don’t know — there could be innate ideas. You have to ask the cognitive psychologists.

You have the concept of numbers. How do you teach a child what the number two is? I guess you show them pairs of things — two fingers, two cough drops. You show them enough pairs of things and say “that’s two,” and then they get the idea of what you mean by two. I would say there’s a property that’s present in all of the examples — the property of twoness, which is the property had by those objects. That’s where the concept comes from.

The concept applies to things in the physical world. It also applies to anything. You can have two apples, but you could also have two ideas. Numbers apply to physical things, mental things, and other abstract objects — “I have two formulas here.” It’s a completely general concept. And when you’re thinking about this, you’re thinking about this really abstract property, which doesn’t appear to be a physical property per se.

Vatsal: What do you think about Plato’s forms? What is their nature?

Michael Huemer: Plato had some confusion. Modern-day Platonists are generally understood as people who think that universals exist necessarily. Universals are things that could be present in more than one thing — things that multiple objects could have in common. Blue is a universal because multiple particular objects can be blue. They’re sharing in something — they share blueness. There’s blueness here and blueness over there, in different places at the same time. These universals exist necessarily, meaning you can’t do anything to destroy blueness. You could destroy the particular blue objects, but you can’t destroy blueness.

Here’s an argument that blueness exists. Premise one: blue is a color. Premise two: the statement that blue is a color is a statement of the form F(a), using predicate logic terminology — a predicate-subject statement where the subject is blue and the predicate is being a color. The truth conditions for statements of that form require that the subject term refer to something for the statement to be true. So the subject term “blue” refers to something. If it refers to something, it must refer to blue, and so blue must exist.
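The steps Huemer just stated can be laid out in the predicate-logic form he mentions (a schematic reconstruction of his argument, not his exact wording):

```latex
% P1: Blue is a color -- a subject-predicate statement of the form F(a).
\mathrm{Color}(\mathrm{blue})
% P2: A true statement of the form F(a) requires the subject term to refer.
F(a)\ \text{is true} \;\Rightarrow\; a\ \text{refers to something}
% Conclusion: the term "blue" refers, so blue exists.
\therefore\ \exists x\,(x = \mathrm{blue})
```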

Furthermore, there’s a question of whether it exists necessarily — meaning there’s no way that you could rearrange the world so that it would stop existing. The reason for thinking that is: blue is necessarily a color. It’s a necessary truth. You can’t rearrange the world such that blue will be a shape instead of a color, or that it’ll stop being true that blue is a color. So it’s a necessary truth, and therefore it’s necessary that blue exists.

Now about Plato himself — he had some more confused views. The most confused view was that he apparently thought the universals instantiate themselves. He apparently thought that there are perfect circles and perfect triangles existing in some realm called Plato’s Heaven, and that the form is a perfect example of the property you’re talking about. That is confused. And then there would be remarks like, “there’s nothing more beautiful than beauty itself.” No, that is false — that is confused. The universal doesn’t instantiate itself. The property of blueness is not blue. To be blue, you have to be a physical object. The property is an abstract object — it’s not blue.

The form of beauty could itself be beautiful — that’s a special case. But most properties cannot instantiate themselves.

Vatsal: And what about the property of goodness?

Michael Huemer: That’s also a universal. You can give examples of good things, and then you’ll see that they have something in common, which is goodness. It’s a universal. It also exists necessarily. It’s not a physical property. No matter how you describe the physical properties of something, it doesn’t logically follow that it’s good.

However, people have ethical intuitions. They have experiences in which they think about something and it seems good or it seems bad. Or similarly, they think about an action and it seems right or it seems wrong. Per our previous discussion, we should assume that things are good, bad, right, or wrong as they seem to be, unless we have specific reasons for thinking otherwise.

Vatsal: On Moore’s naturalistic fallacy, what do you think he gets right, and what do you think he got wrong?

Michael Huemer: I basically agree with Moore. In ethics they distinguish between evaluative and descriptive statements. This terminology might be a little bit tendentious — when I make this distinction, I’m not implying that evaluations don’t really describe something. It’s just a stipulative use: descriptive statements mean statements that are not evaluative. And evaluative statements are statements that say that something is good or bad or right or wrong — they’re inherently positive or negative about something.

It’s true that you can’t deduce evaluative statements purely from descriptive statements — I think that’s correct. Also, Moore is correct that you can’t give a definition of good in purely descriptive terms, using purely descriptive concepts, where it genuinely captures the meaning of good. You can’t do that. You can only explain evaluative terms using other evaluative terms. If somebody doesn’t know what good means — it means valuable, it means positive. But those are all value terms. You can’t explain it using descriptive terms. I think that’s all correct.

Vatsal: Do you see any similarities between the property of goodness and the property of largeness?

Michael Huemer: They’re both properties that a lot of things could have, but they’re very different properties.

Vatsal: Both of them have objective and subjective dimensions. When someone says X is large, another person may say, no, that’s small. It depends on which category we are talking about. A small rocket is larger than a large pencil. But we know that largeness is not a non-naturalistic property.

Michael Huemer: One feature of the concept “large” is that the conditions for largeness are relative to a sortal term, as you just mentioned — a large pencil is smaller than a small planet. Even though the pencil is large and the planet is small, the pencil is smaller. But that’s just relativity to a sortal term, which is a little bit of an unusual property for an adjective to have, but not totally bizarre — the meaning of the adjective sort of changes depending upon what noun it was attached to. I would not describe that as subjectivity. Maybe “large” means something like “larger than the average thing of this type.”

Now there might also be this kind of sortal term relativity with good. There’s a good pencil versus a good person — the conditions for being a good person are very different from the conditions for being a good pencil. Bizarrely, you can also apply good to things that are bad. You could say “John is a really good assassin.” Assassins are bad — that’s a bad thing to be. But you could be good at being an assassin, meaning you serve the goals of assassins in an effective way. So there is that kind of similarity — the conditions for application switch depending on the term you’re applying it to.

But you might also think there’s a sort of subjectivity to a lot of terms where when somebody applies it, you might think, “that’s a matter of opinion.” Large is not a great example of that — “that’s a large dog” is usually thought of as pretty objective. It’s larger than most dogs. But you could have things like “frightening” — “that was a frightening event.” A lot of people’s reaction would be that’s subjective, that’s a matter of opinion.

So then you could ask whether good is that way. Maybe some uses of good are subjective — “that was a good movie.” I don’t think good in “good person” is subjective like that. I don’t think it’s just a matter of opinion. You have to think more about what we mean by this. Do you mean there’s no truth, there’s no fact? No, there is a fact about whether somebody is a good person or an action is a good action.

Vatsal: No, what I meant is that we think there is a subjective dimension, but it is tracking something objective when we talk about largeness. That is a subjective way of describing a pencil, but we can also describe it in an objective way, like ten centimeters or kilometers. So it is something objective in the world, but it is a subjective way of describing it when we say large. I was thinking if there was an analogy between largeness and goodness in the sense that eating a chocolate is good, but saving a life is also good. Likewise, a pencil can be large and a planet can also be large.

Michael Huemer: Yeah, well, the two different kinds of objects being large — that’s analogous to different kinds of things being evaluated as good. A person could be good and a pencil could be good, and it appears that those mean different things.

But what you pointed to was that there’s the term “large,” but then there’s this underlying dimension of maybe length in which everything has a specific measure. And largeness refers to being greater than some threshold on that dimension. The dimension just exists objectively — we don’t have to argue about it. But then different people could have different places where they put the threshold for something counting as large.

That’s not true for goodness. There is a dimension of goodness — there are things that are better and worse, and something could be twice as good as another thing. But there’s no variation in where people put the threshold. There’s the balance point between good and bad, and anything that’s better than nothing is good. Any positive amount of goodness counts as good. It could be slightly good. So it’s not a concept with a subjective threshold where people would disagree about where the threshold is.

Vatsal: Thomas Reid raises this objection against David Hume: that if goodness and badness depend on our constitution, then wouldn’t a change in that constitution make bad things good and good things bad? To me, it seems obviously true — we have to bite the bullet because there is a reason why we care so much about pain. It’s due to our constitution. And if our constitution was different, then maybe things would be different. What is wrong with that argument?

Michael Huemer: Yeah, the example of pain is interesting. If you imagine our constitution being different such that we liked pain — that is already a little bit odd, just an odd thought experiment. But there are in fact cases where people like pain. Some people enjoy spicy food, which is a kind of pain. So it is in fact possible to enjoy pain.

Would pain then be good? The answer is yes, it would. But actually, in the actual world, the cases in which people enjoy the pain are also good. The person who enjoys spicy food — it’s good to give them spicy food. But then you have to imagine a change in our disposition. I can’t imagine a case where a person thinks that their own suffering is good. Well, maybe they could have an abstract theoretical belief that their suffering is good, like because they’re a sinner and sinners deserve to be punished.

But going back to the thought experiment — imagine a society in which people approve of torturing babies. The baby comes out and they just burn the baby to make it cry. They know that the baby is suffering, and they just think that’s a funny thing to do. If you live in that society, is it good to torture the baby? Should you torture the babies? Your baby comes out and let’s say nobody’s watching, but people in your society generally think that it’s good to torture babies. Should you torture the baby? The answer is no.

So it’s not subjective. Subjective means that it depends on the attitudes of observers. What if you personally like torturing babies — should you torture a baby? No. Still no. It’s still bad. So it appears that it’s objective, as far as I understand what objective means.

Vatsal: If moral disagreements persist even after we account for biases, then does moral intuitionism not lead to subjectivism?

Michael Huemer: No. First, let’s say what these things are. Subjectivism is the view that the truth of moral statements depends upon the attitudes of observers towards the things being evaluated. Something’s being good depends upon people having some kind of positive attitude towards it. A typical subjectivist view is: an action is right provided that it’s approved of by your society — so different things are right in different societies. This is cultural relativism. Or it could be the view that to say something is good just means that you personally approve of it — you’re just reporting your own attitudes.

The intuitionist view does not say that. The intuitionist view is that there’s just a fact about what’s good or bad, and it’s not dependent on you. This is an objectivist realist view.

What is the relationship between intuitions and the facts? The relationship between our intuitions and the moral facts is like the relationship between all of our appearances and all of the facts that they’re about. What’s the relationship between our sensory experiences and the facts of physical reality? It’s not that our sensory experiences create the facts of physical reality. It is that our sensory experiences are our only way of knowing about the facts of physical reality. Our sensory experiences represent the physical facts, and we assume that the physical facts correspond to them, unless we have reason to think otherwise. But the sensory experiences don’t create the facts. And our ethical intuitions are the way that we would know about the ethical facts, but they don’t create the ethical facts. It’s just like that.

Now you asked about disagreements. Do disagreements persist after accounting for biases? Well, do we ever fully account for our biases? That may not have in fact ever happened. But let’s say that we have disagreements that we can’t resolve. Then what? Nothing really — we have a bad situation, a misfortune. Which is true in any subject. If you have disagreements that you can’t resolve, that is unfortunate, but that doesn’t make you doubt that the facts exist.

If you have unresolvable disagreements between people who seem about equally qualified — both highly qualified — then that should cause you to doubt your own judgment about those specific things. That should not cause you to doubt your judgment about everything in the subject. It shouldn’t cause you to doubt the things that there is not disagreement about. And it shouldn’t cause you to doubt that there are facts in the subject.

I mention this because people talk a lot about particular ethical issues that there’s disagreement about, and then they don’t talk about the things that there’s not disagreement about. There’s disagreement about abortion, and philosophers talk on and on about that issue. But there’s no disagreement about whether you should just kill somebody on the street for the fun of it. Because there’s no disagreement, we don’t talk about it. But that is a thing there’s agreement about. So that is probably a fact — that it’s wrong to kill people for fun.

Vatsal: The benefit of a naturalistic framework seems to be that in science, at least, there seem to be some ways of resolving disagreements. If someone disagrees in physics, you can just carry out an experiment. So that seems to be the benefit of naturalistic frameworks.

Michael Huemer: We don’t generally adopt views because they have benefits — we adopt views because they’re true. It’s not that something is false because it’s inconvenient. If we can’t resolve certain ethical disagreements — some can be resolved, but maybe some can’t — that’s inconvenient, and too bad. But that doesn’t mean it’s not true.

Also, can all scientific disagreements be resolved? That’s not really clear. There are scientific disagreements that have not been resolved — it’s really not clear that they can be. If you look at the interpretation of quantum mechanics, there are multiple different theories that appear to explain the same data, and there are disagreements about which is more plausible. It’s not obvious that there is a way of resolving that. But then so what? It’s a misfortune. But that doesn’t mean there’s no objective fact or that we don’t have any way of knowing about the subject matter.

Vatsal: Is there such a thing as the point of view of the universe?

Michael Huemer: You mean literally?

Vatsal: Henry Sidgwick uses this phrase.

Michael Huemer: Yeah. Literally, there’s only a point of view of the universe if the universe is conscious or there’s a God and the universe is part of God or something like that. I don’t know whether there’s a God — I’m agnostic. But I don’t think Sidgwick meant it literally. I think it’s just a metaphor. He just meant the objective point of view — a point of view that would take into account everybody’s perspective in a fair way.

There’s my perspective where I’m the center, I’m the most important thing, and I’m thinking in terms of my desires and my plans and the way things look to me. But then I should try to be aware of other people’s perspectives. And then there’s the idea of a person who would be able to understand everybody’s perspective. There may not actually be such a person, but you can sort of talk about what would appear to be the right thing to do from that perspective. And that is the moral point of view.

Vatsal: Henry Sidgwick famously arrived at the dualism of practical reason. Rational self-interest and universal benevolence seem equally self-evident. Where did he go wrong?

Michael Huemer: I didn’t completely understand why he thought that egoism was potentially a rational position. You have reasons for pursuing your self-interest, and you also have reasons for taking into account other people’s interests. But I just don’t see why he thought, or why anybody thought, that it could be rational to only pursue your self-interest, to only consider your own interests. That seems irrational because you know that other people have interests, and there’s no reason why yours would be objectively more important. You feel your interests more, you feel your desires and you see your perspective. But that doesn’t mean that it is in fact more important.

Vatsal: You often start with common-sensical, widely shared premises, but sometimes you end up with very radical conclusions about authority and immigration, for example. A utilitarian also begins with common-sensical views, like the view that altruism is good. Then he uses reason, but arrives at very counterintuitive prescriptions about self-sacrifice and the repugnant conclusion. Why do you not agree with them, if arriving at a radical conclusion itself is not a problem for you?

Michael Huemer: Yeah, good. I have an article called “Revisionary Intuitionism,” which describes how an intuitionist could arrive at revisionary views, even though the starting point is that things are probably the way they appear. You can wind up thinking that things are different from the way they appear in some pretty significant respects.

Also, intuitionism gives a chance for things like utilitarianism turning out to be true, which I think most other meta-ethical theories don’t really account for. If you have a subjectivist view, then I don’t see how a radically revisionary ethics can come out of it. Or a non-cognitivist view. If you have a nihilist view, then you shouldn’t have any ethics.

But if you have an intuitionist view, there are objective facts independent of our attitudes, and it could be that our attitudes are radically misguided. That’s not a crazy thing to think. And then you could get reason for thinking that. This is what the utilitarian needs to do — their position is radically counterintuitive in certain respects. I could give a list of very counterintuitive implications. You have a healthy patient, and you have five people who need organ transplants for five different organs. You could kill the healthy patient and give his organs to the five people — save five lives and only cost one life. And suppose you can make it look like he died by accident, so you won’t get in trouble and can continue doing this. Then it looks like the application of utilitarianism is that that would be the right thing to do.

Why I’m not a full-on utilitarian is that there are multiple counterintuitive implications like that. Now, one thing the utilitarian can do is try to explain away our intuitions — give explanations of specific biases that we would have. And I don’t mean just saying in general that we’re biased — I mean identifying the specific bias. Like status quo bias: people have a bias towards approving of the way things are done in their society. And the way you show the relevance of this is by looking at societies where things are done differently, and seeing that people in those societies have different normative intuitions that match their conventions.

Another thing they can do is give more general ethical constraints. Utilitarianism is a very general ethical theory, but you can have even more abstract and general conditions that are going to be widely accepted. Like transitivity of “better than” — if A is better than B and B is better than C, then A is better than C. That’s a more general, more abstract thing that almost everyone agrees with, even if you’re not a utilitarian. And it turns out that some common-sense ethical views conflict with that, or they’ll conflict with a combination of general abstract principles like that. So you might be required to give up some very general, abstract, and extremely plausible ethical axioms by some of the common-sense ethical views.

This is the sort of thing a utilitarian needs to do, which is a reasonable project to attempt. I’m not fully convinced yet because there are a lot of really counterintuitive things that come out of utilitarianism. I’m not sure that they’ve explained them all away.

Vatsal: How does moral progress occur? You have said that it’s always a few people in a society who lead moral progress. What is it that they can access that others are not able to see?

Michael Huemer: People go wrong about a lot of things because they have biases. People have self-interested bias, so they’ll be tempted to make ethical judgments that approve of themselves and approve of what they want to do for other reasons, or what would be in their interest to do. Another thing is the status quo bias, where they’re just biased towards the conventions of their society and judging those conventions to be right.

But people vary in how susceptible they are to these biases. There are going to be some people who are less biased than others — not completely unbiased, but just less biased. Those people will have a tendency to get closer to the ethical truth. They will see problems in their society that other people don’t see. And then they will try to improve things on those dimensions. If they succeed, the conventions will move — they will move to be a little bit better. And then the next generation will start from a better starting point. The people who are less biased in the next generation will see further — they will see things that are still a little bit wrong.

A good example I like is John Locke and his Letter Concerning Toleration, where he was advocating for religious tolerance, which was not at all widely accepted in his society. People were fighting over religion. People were thinking that if somebody has the wrong religious views, you should persecute them, maybe throw them in jail, maybe in extreme cases execute them. And John Locke was saying, no, we should tolerate people — we should not kill people who disagree with us.

However, in that book, he could not see his way to tolerating atheists. He could see that you should tolerate other Christians, but he couldn’t see that you should tolerate atheists. But once the conventions of society had moved to tolerating different religious people, the people of that time could then see that maybe we should also tolerate atheists. Because it’s difficult for a person to see a point of view that’s radically at odds with their society — the people who are less biased than average will see something that’s a little bit at odds, but there will be something that’s too far. It was too far for Locke to see all the way to tolerating atheists.

Vatsal: Once we have discovered moral truths, what moves us to act? David Hume famously said that reason is inert. In your view, where does motivation come from?

Michael Huemer: Reason is not inert. It’s not purely instrumental. You can reason about what’s morally right, and you can have a motivation to do things because they’re morally right.

However, the majority of people have very low moral motivation. It’s not zero, but it’s really low — meaning they will not make great sacrifices for moral reasons. This is often disguised because most people do not violate moral norms in obvious ways in their day-to-day life. When they see something that they want, they don’t just grab it and steal it. When they get mad at someone, they don’t just punch the person. So you can see cases where a person wants to do something that would serve their self-interest but would be morally wrong, and they refrain from doing it. Then it looks like people have moral motivation.

But what’s actually going on in most of those cases is they’re respecting the social conventions. It’s probably just an instinct that evolved in us to respect social conventions. The way you can tell this is that there are cases in which the social conventions are at odds with morality, and most people then act according to the conventions.

A good example: you can convince a lot of people that it’s wrong to buy animal products from factory farms — almost all of the meat sold in the US is from factory farms. It’s not that hard to convince people that it’s morally wrong to buy that. And then after convincing them, almost no one will stop doing it. The reason is it accords with the conventions of their society — everybody else is doing it, so they’re just going to keep doing it even if they believe it’s wrong.

But it’s not that moral motivation is zero — it’s just really low. If there was a product that tasted the same and was about the same price — equally nutritious, like synthetic meat — then maybe they would switch over. They’re just not going to make a significant sacrifice to their interests.

So social convention is a large part. It’s not the only thing that makes people act decently — sometimes they just have emotions where they care about other people, and then they don’t abuse others because of that. Most of what looks like moral behavior is caused by something other than conscience. That’s not to say there is no conscience. There’s variation among human beings on almost every psychological trait — some people are more conscientious than others. Some people are strongly morally motivated, and then a lot of people are only slightly morally motivated.

Vatsal: Suppose we start by assuming that ought implies can, and that we cannot help but increase our own good even when we are being altruistic, and we try to explain altruism the same way celibacy and suicide are explained in the theory of evolution. What is wrong with that starting point?

Michael Huemer: I’m a little bit confused as to what you’re saying.

Vatsal: It’s a sophisticated version of rational egoism based on psychological egoism.

Michael Huemer: Oh. We know that psychological egoism is false. Usually people think the issue is whether people are inherently selfish or altruistic, but there are many cases of non-selfish actions that are not altruistic either.

There are cases of so-called altruistic punishment — where somebody has done something wrong and you will take on a cost to punish them because you’re mad at them. It’s not driven by self-interest. There are experiments in which people are playing some game with money involved. People in the game will spend their own money to cause somebody who they think acted wrongly to lose money. And it’s black and white in the experiment that you do not get a reward for doing that — you lose money, and people will do it. They will lose their own money to make sure that the other person doesn’t profit from being bad — because they were mad or offended.

The people who are inclined to psychological egoism are more persuaded by these kinds of cynical examples — where a person is harming someone in a way that’s not in their self-interest, just because they don’t like them.

Here’s an example: Adolf Hitler killed a lot of people. Why did he do that? Was it for his self-interest? No. He hated the Jews and a bunch of other groups — that’s why he wanted to kill them. Towards the end of World War Two, he knew that he was losing the war. He had all of these people in concentration camps. He could use them to do labor, but instead he kills them, which harms his own chances of winning the war. Why is he doing that? It’s against his interest. He’s doing it because he hates them. That’s not egoism. People have emotions that make them want to do things which can be independent of their interests.

So I think the argument was something like: people can only act selfishly, and ought implies can, so it can’t be true that you’re obligated to act unselfishly. That’s not true. People have free will. The first premise was not true — we don’t always act selfishly. Also, determinism is false. We do have free will. There’s a range of things that you can do. You often have conflicting motivations — reasons for doing one thing and reasons for doing another. You’re often aware of the conflicting reasons, and you can decide which reasons to act on.

Vatsal: Does political authority differ from other kinds of authority, like that of a physician or a parent?

Michael Huemer: Usually in cases where you think somebody has authority, there’s some explanation for why they would have it. In the case of the government, political authority is the authority that the government is supposed to have, which they may not in fact have. It may be that no government actually has authority.

The reasons people give for why the state would have authority are different from the reasons they would give for other cases. Your doctor has a sort of authority — you trust his advice because he’s an expert. Is the government like that? Are they experts on what’s good for society? No. Is Donald Trump an expert on what’s good for society, and that’s why we have to listen to him? No, very far from it. Am I just saying this because I’m pro-Democrat? No. The Democratic politicians are not experts either. Kamala Harris was not an expert on what’s good for society.

Then you have your parents. There’s a small child — the child doesn’t know what he’s doing. The parents know how the world works, and the parents have instinctive motivations to protect the child. That’s why the parents take care of the child and control the child to stop him from hurting himself. Is the government like that? No. The government knows a lot more than we do and they love us, and that’s why they’re going to protect us? No, that’s not true either. They’re just people just like us. They’re not any smarter, they’re not wiser, and they don’t love us.

So you think about why the state would have authority — it’s got to be something completely different. And none of the explanations for why the state has authority are very good. None of them are persuasive.

Vatsal: Are you sympathetic to epistocracy, like Garett Jones argues in his book?

Michael Huemer: I’m really an anarcho-capitalist, so I think the ideal society would not have a monopolistic central authority structure. I have some concern about epistocracy in that I’m concerned about who would decide, or how they would decide, who gets to vote — who is qualified, whatever the restrictions are. I’d be worried that somebody would game the system.

Vatsal: Final question. Where do you feel the most genuine philosophical uncertainty right now?

Michael Huemer: I’m not sure how much it’s philosophical and how much it’s scientific. I think there are two great mysteries about the universe.

One is the mystery of consciousness — just why is there consciousness at all? I really don’t know. I don’t know why that should exist. From everything else that we know about the universe and how it was in the past — it started out apparently with no conscious beings — it’s just bizarre. It’s very strange and surprising that there would be conscious beings now.

The other mystery is about the origin of the universe itself. People are saying there was a Big Bang around fourteen billion years ago — why did that happen? Why was it like that? And more particularly, why did it start out in an extremely low entropy state? There’s a law of physics that entropy is constantly increasing and low entropy states are intrinsically less probable. The reason entropy is increasing over time is that higher entropy states are more intrinsically probable — if you describe the mathematical space of all possible configurations of the universe, higher entropy states are larger regions in that space. In that sense, they’re intrinsically more probable. Low entropy states are smaller.
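The counting idea here can be made concrete with a toy model (an editorial sketch, not part of the conversation): treat the “configurations of the universe” as sequences of N fair coin flips, and take the number of heads as the macrostate. Macrostates near N/2 heads are realized by vastly more individual sequences than extreme macrostates like all heads, so a configuration picked at random is overwhelmingly likely to be a high-entropy one.

```python
from math import comb

# Toy model: the "space of configurations" is all 2^N sequences of
# N fair coin flips. A macrostate is the number of heads; its "size"
# is the count of microstates (sequences) realizing it, comb(N, heads).
# High-entropy macrostates (near N/2 heads) occupy far more of the
# space than low-entropy ones (e.g. all tails), which is the sense in
# which high entropy is "intrinsically more probable".
N = 100
total = 2 ** N
for heads in (0, 25, 50):
    microstates = comb(N, heads)
    print(f"{heads:3d} heads: {microstates} microstates "
          f"({microstates / total:.3e} of the space)")
```

Running this shows the fraction of the space growing enormously as the macrostate approaches 50 heads, the same asymmetry that makes an extremely low-entropy starting state for the universe so surprising.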

So it’s very strange that the universe started in this extremely low entropy state fourteen billion years ago, which according to Roger Penrose has a probability of about one over ten to the ten to the one hundred and twenty-third power — if you picked a state of the universe at random, for it to be that low entropy, that’s the probability. Why was it like that? Which is part of why anything interesting exists. Why are we not in thermal equilibrium, which is overwhelmingly the most likely state for a universe to be in?

Vatsal: Thank you so much for answering my questions and giving me your time. I really appreciate it. Have a nice day.

Michael Huemer: Thanks. It’s been great talking to you. Thanks for having me.


Keep Reading:

Overcoming the Naturalistic Fallacy

Bryan Caplan on Ethical Intuitionism (podcast)

Michael Shermer on Morality and Science (podcast)

Other Projects:

Universal Open Textbook Initiative (free, multilingual textbooks)

Aesthete (curate your culture — iOS app)


Thanks for reading Vatsal’s Newsletter! Subscribe for free to receive new posts and support my work.
