Weakness of Will: Puzzle and Solution

A long-standing puzzle in ethics is how someone can “know” what is the right thing to do but still not do it. Think about it. Someone says, “I know I should do this” and then doesn’t. “Then why aren’t you doing it?” you ask. They reply, “Weakness of will.” They just can’t bring themselves to do what they believe they ought to do. But, you think to yourself, “If they really believed they ought to do it – and isn’t that exactly what it means to use the verb ‘know’ in this context? – how could they not do it? Otherwise, what would it even mean to say that they know they ought to do it?” That’s the puzzle.

            I have a solution. And it’s one that has implications far beyond just this puzzle. My general solution is to recommend that, in lieu of talking about what one ought to do or about what is right or wrong, etc., we speak (and think) in nonmoral terms such as “If you thought about it, you would (or wouldn’t) do it.” Thus, in the case of weakness of will, my proposal is that the reason one can believe that x is the right thing to do and that they ought to do x but still not do it is that the cash value of their (let us grant sincere and even true) assertion is that they believe that if they thought about it, they would do x. So why don’t they do x? Because they haven’t thought about it. That solves the puzzle.

            Of course, they have thought about it enough to be able to make their hypothetical declaration. So by “thought about it” in this context what I mean more specifically is that they have considered the matter in optimal circumstances for rational reflection. This implies that they have scrutinized relevant considerations in a logical way and with full attention to relevant evidence vividly presented. Also, since “would do it” is an empirical claim, there is an implicit “other things equal” clause, since in the real world all sorts of things can intervene between forming an intention and acting on it, such as having a heart attack.

            So for example, suppose x is giving up eating animals. To think about this proposition in the manner I have in mind would mean to consider the nature of animal sentience; the way in which animals are bred and killed for human consumption; the relative effects on nutrition and on the environment of animal versus nonanimal diets; and the gustatory (will it taste good enough to motivate continued compliance?), labor (cooking time, etc.), and economic (household income) implications for individuals; and to do all this with a clear mind in unrushed circumstances.

            So the individual who says, “I know I should stop eating meat” but doesn’t, is, on my scheme, saying, “I believe that if I undertook that extensive examination of animal sentience and agriculture and nutrition and recipes, etc., I would stop eating animals,” while also not yet having undertaken, or at least not yet having completed, that examination. But if and when they do complete it, they will (other things equal) stop eating animals. And if they don’t stop (and other things are not “unequal,” such as having no food source but carrion during a famine), then their prediction was mistaken; so in fact they did not believe that it is wrong to eat animals.

            My overarching recommendation is that we avoid “ought” and all other moral talk altogether and speak simply in terms of concrete predictions and recommendations about psychology and behavior, since that is all we could intelligibly be talking about anyway. This avoids sidetracks, misunderstandings, and paradoxes and, most importantly, the social sclerosis that moral assertions tend to induce and the heightened conflicts that moral disagreements tend to engender.
