Forming beliefs on moral issues by deferring to someone or something else is problematic because these beliefs don’t integrate with our moral character.
Imagine that Google has just released its latest app, Google Morals. (The important question, of course, is whether it would be $0.99 or $4.99 on the Play Store.) Just as you can ask Google how to get somewhere, with Google Morals you can ask what to do about any moral issue. Should I eat this steak? Should I be an organ donor? Google Morals will answer any of these questions with a simple “Yes” or “No.”
Immediately, it seems to me that there’s something off about using Google Morals. I do defer to Google on other things, like the radius of the Earth or how to get to the airport, but something seems different with moral issues.
Robert J. Howell provides a novel account of what exactly is wrong with deferring on moral issues.1 We’ll get to this in a minute.
A significant part of the paper is devoted to spelling out what exactly moral deference is and its many types. Some of it is technical, so I’ll leave it out to make my job easier. Howell thinks that deference includes (i) forming a belief in some statement p based on another person’s believing p and (ii) sustaining that belief in p based on another person’s believing p. For contrast, here’s the definition of deference from The American Heritage® Dictionary: “Submission or courteous respect given to another, often in recognition of authority.” So we’re not using whatever definition of deference is in the dictionary.
Why should we use Howell’s definition of deference? Howell thinks that it covers at least the essential aspects of deference: we believe, and continue to believe, p because someone told us so.
Let’s take a look at how this definition works. I might not know what the closest planet to the Sun is, but I know my friend who’s really into astronomy does. I ask my friend, and she tells me it’s Mercury. That’s p, the statement that Mercury is the closest planet to the Sun. Now, I believe p because my friend told me so; this is (i) of Howell’s definition. I’m assuming here that my friend telling me p also means that she believes p (I don’t think she would lie to me about this). I haven’t actually looked it up on Wikipedia or seen it for myself, so I believe p based on what my friend believes.
The next day someone asks me what the closest planet to the Sun is (it’s some new trend, I guess). I tell them it’s Mercury, and when they ask me how I know that, I tell them my friend who’s into astronomy told me. I continue to believe that Mercury is the closest planet to the Sun solely on my friend’s word; I myself don’t understand why it’s true. This is (ii) of Howell’s definition. (Howell sometimes seems to mean that deferring on p involves sustaining our belief in p indefinitely on someone else’s word, but at other points he seems not to mean this, which confuses me; see his discussion of moral development at the end of the paper.) You might think this is already problematic, but Howell thinks the bigger problem is with deferring on moral issues.
Imagine that I’m discussing with my friend over the phone whether I should register as an organ donor, and I just can’t reach a conclusion. After I hang up, I ask Google Morals. Suppose it says that I should register. So now I believe I should register as an organ donor because Google Morals told me so.
Already, this might seem off to you, but wait, it gets worse! The next day my friend asks me what I ended up doing, and I tell him I registered as an organ donor. He asks me why, and I tell him it’s because Google Morals told me so. Wait, what? This seems very wrong. Deferring to your friend on whether Mercury is the closest planet to the Sun might be perfectly fine, but deferring to Google Morals on whether I should be an organ donor seems much worse. If your intuition doesn’t line up with mine, Howell’s account gives some reasons to think moral deference is especially bad.
Howell looks at several possible explanations of why moral deference is bad, but he thinks that only his account gives the full story of what’s at the heart of the problem.
I won’t go through all of the possible explanations, but here are the ones that I think touch on an important aspect of the problem.
If you think about it, Google Morals tells you what you should do and nothing more. It doesn’t tell you why you should do it or what sort of reasoning led to its conclusion. I might know what I should do, but the problem is that I don’t understand why I should (or shouldn’t) register as an organ donor. The why might include the benefits of donating my organs or my right to decide what happens to my body. If I don’t understand why I should register as an organ donor, how would I know what to do in a similar case? Would I always have to ask Google Morals? On this explanation, our lack of understanding is what’s wrong with moral deference.
Howell argues that this explanation only partially captures what’s wrong with moral deference. There are cases where we understand why we should do what Google Morals tells us to do, but it still seems problematic to defer. Suppose we defer to Google Morals on which moral theory is right, and it says utilitarianism. We would then understand the reason behind whatever Google Morals tells us to do: we might know why it says we should be vegetarians (because that would minimize the suffering of animals). But we still wouldn’t know why utilitarianism is correct, so the deference hasn’t disappeared; it has just moved up a level.
Another explanation Howell considers is that the problem lies in acting a certain way only because someone else told us to.
We might object that the real problem with moral deference is believing something is the right thing to do only because someone else told us so. But Howell might reply that if we think Google Morals is reliable, then there shouldn’t be a problem with believing what it says is the right thing to do. Howell thinks there is a missing link here: we act some way because we believe it is the right thing to do, and we believe it is the right thing to do because someone else (whom we think is reliable) told us. I don’t register as an organ donor simply because Google Morals told me to. I register because I think it is the right thing to do, and I believe it’s the right thing to do because Google Morals told me so.
The last explanation Howell considers is about doing our job.
Consider Gary the Googler. Gary doesn’t trust himself to do the right thing, so whenever a moral question comes up he asks Google Morals what to do.
We might think something seems off about what Gary is doing. On this explanation, that’s because it’s everyone’s own job to figure out what to do (morally speaking); we can’t let Google Morals do our job for us.
Howell thinks this explanation doesn’t actually explain anything, since we’re now left with the question of why we are responsible for figuring out moral truths; we’ve just posed another question in response to the first question of why moral deference is bad. I’m not convinced by Howell’s objection here, because the new question does seem to dig deeper. If we are, in fact, responsible for figuring out moral truths ourselves, that would nicely explain why moral deference is bad.
So now that we’ve considered a list of (supposedly) inadequate explanations, let’s take a look at Howell’s.
In one sentence, Howell thinks the problem with moral deference is that it indicates a lack of moral virtue or makes it harder to acquire moral virtue. In Howell’s own words, “the beliefs sustained by deference are largely isolated from the moral character of the agent” (p.402). There’s a lot squeezed into that sentence so let’s unpack it.
First, “moral character” is used in much the same way that we talk about someone’s character generally: it has to do with what kinds of virtues we have.
So what is a virtue? “To have a virtue is to have a reliable disposition to act and feel in certain ways” (p.403). You can think of virtue as being made up of two components: the disposition to act a certain way and the disposition to feel a certain way. For example, if I have the virtue of courage, then I (nearly all the time) act and feel courageously. This might seem circular, since having the virtue of courage is defined in terms of acting and feeling courageously, but I think this can be safely ignored for Howell’s account; let’s assume we know what courage is and what a courageous action is.
The core of Howell’s account is that by deferring we don’t properly integrate these “virtuous features” (the actions and feelings) with our moral character. We only learn what the right thing to do is, without acquiring the other components of having a virtue. As a result, we don’t have the relevant virtue (because we’re missing one of its components), and/or it’s harder to acquire that virtue in the future (p.403).
For example, suppose I’m walking down the street and I see a homeless man. If I have the virtue of compassion or generosity, I might try to help him with food, shelter, or money. But it’s not just the action: as I give him some money, I will feel relieved or glad that I’m helping him. It’s also the case that I would almost always do the same thing in a similar situation. This is what it takes to have the virtue; the action alone is not enough.
Now compare this to an alternate situation. I’m walking down the street and I see a homeless man. Not sure whether I should just pass by or try to help, I ask Google Morals what to do. Google Morals tells me I should help this man, so, like before, I give him some money. But this time I feel torn about it: I was going to use that money to buy ice cream, so I feel frustrated (we might think I’m a jerk for feeling so). And in a similar situation, I might try to ignore the man, since I already know what Google Morals will tell me (even more of a jerk move). In this case, I do not have the virtue of compassion or generosity; I don’t feel the right feelings, and I can’t be relied on to act generously. Since all I do is act correctly this one time, I don’t satisfy all the conditions needed to integrate the relevant virtue into my moral character.
But moral deference can also make it harder to acquire moral virtue. This is where Howell defers to the understanding account (I had to). We don’t understand the reasons for the beliefs we get from moral deference, so we can’t apply them to similar situations. I might learn from Google Morals that I should give money to the homeless man, but if I don’t understand why, I will most likely not do the same in similar future encounters. This means that acquiring the virtue of compassion or generosity will be more difficult, because I can’t consistently apply the knowledge I get from moral deference.
So that’s it! I think Howell’s virtue account is very insightful, but there’s definitely more out there in the literature about the problem with moral deference.2