It is often said that intuitions serve as the data of
philosophy. Of course, as with any concept, the notion of intuition is vexed.
(And I think the “simple” solution to all the vexation is that our
concepts are polysemous. But that’s another story.) But bracketing that problem,
perhaps you will find it acceptable for the present purpose to agree that if,
say, an ethical theory implied that everyone has an obligation to cut off her pinky
finger for no further reason, we would trust our intuition that this makes no
sense, and hence conclude that the theory is false or at least needs more work.
The point I
want to make now is that sometimes it is the intuition that must yield. Obvious
examples abound here too. A classic one is the intuition that the Sun
travels around the Earth, which had to yield to the heliocentric theory. (Note: Often
there is a consolation prize. Thus, it remains perfectly kosher to say that the
Sun moves across the sky.) However,
because in ethics the relevant intuitions are often tied to deeply felt
commitments or desires, they won’t always easily give up the fight. Indeed, I
am now convinced that often they never will, so we are left with a relativity of
values, and hence also of whatever ethical theories seem “confirmed” by those
intuitions (although in meta-theory, maybe all of them could be, given the right
additional assumptions or intuitions).
A particular example
that keeps exercising me is the moral intuition – that is, the strong and
abiding sense that some things are (“absolutely”) wrong and other things
are not and may even be obligatory. The ethical theory I now favor implies, or
at least suggests, that this intuition is flat-out false. But it won’t quit … not
even in my breast. Still, as I have often pointed out, I have come to view
it the way I view an illusion: I will always see the two long lines of the Müller-Lyer
figure as unequal in length, while also knowing that they are equal. Just so, I will always
feel that eating animals is wrong, but I now believe I simply don’t
like people doing it (a lot!). (Note: Motivationally I see little if any loss
here; see for example “Dispelling
the Illusion of Motivational Inadequacy in Ethics.” And ethically I see a
definite gain; that is, without the bugaboo of morality to rile us, I think the
kind of world most of us would like to live in would become more likely.)
But
another “data”/theory problem is perhaps even more intractable. It happens
again and again in my experience that someone will hear out my amoralist thesis
and arguments and agree with them. But then I observe that they behave in ways quite
contrary to that professed agreement; that is, they remain moralistic in their actual
feelings, attitudes, and behavior. This goes beyond what I mentioned in the
previous paragraph, for there I noted that something can be an illusion yet be
recognized as such, so that ameliorative action can be taken. What I am talking
about now is being taken in by the illusion, such that it becomes a delusion,
despite the professed acceptance of the theory that labels it an illusion.
For
example, having just engaged in a delightful and agreeable conversation about
amoralism, a friend and I suddenly found ourselves in disagreement about some particular
practical matter; and before we knew it, we were arguing (in the nonrational,
emotional sense) about which of us was right or good or wrong or bad!
Fortunately, I was able to put a halt to this downward spiral by recalling
myself to my amoralist senses and recognizing that my own fiery feelings were “just
feelings” (and hence quite “unjust” in the moral sense); so I withdrew from them
and stopped casting recriminations back at my friend. However, my friend,
no doubt because he had had only a cursory introduction to the theory he had just
acquiesced to, as opposed to my more than ten years of immersion in it, remained firmly entrenched
in his judgmentalism. Therefore, applying the central precept of my theory, I
simply did (said) what I felt stood the best chance of ameliorating the discord
in order to achieve the concord I desired more than proving myself “right” or “good.”
In a word, I apologized for doing what he didn’t like. And it worked like a
charm. (Note: The apology was sincere, but in my amoralist sense of “apology.”)
Thus
Q.E.D.: Sometimes the “data” we rely on for judging the adequacy of a theory would
itself (themselves) profitably yield to the theory. For the data in my
interaction/altercation with my friend were moralist feelings; but, at least in
my own case, the power of the theory of amoralism overcame them, and the result
was vastly preferred by both of us to what the alternative would have been,
viz., each of us remaining angry at the other in our respective self-righteousness.