Good People and Going Wrong, Part 3
One of the reasons I worry about this sort of thing is that I feel generally underequipped to persuade people that they are holding the wrong values. So far, what training I have in philosophy has done little to help me here, because philosophy rarely talks about values. It talks about philosophical systems.
I am used to framing ethical disagreements in terms
of deontology, consequentialism, virtue ethics, and moral nihilism, largely as
a result of my philosophy classes. But a brief look at Wikipedia's page on
morality indicates that there are a lot of other ways of measuring ethics. (Or,
I should say, other ways of slicing up the moral thoughtscape.) Further, I've
never been entirely sure why deontology is so often set against consequentialism as though they were opposites. Utilitarianism strikes me as a type of deontology, one with only two duties: maximize pleasure and minimize pain. (You can tweak any consequentialism to fit.) The point is that the terms "deontology," "consequentialism," and "virtue ethics" do not tell me much. None of these terms tells me what values you have.
Let's say Alice is a deontologist. That means that
Alice believes we have specific duties; the completion of these duties,
regardless of their effects, is a moral good. This tells me that she will be
strictly principled, and that insofar as she adheres to her own sense of
morality she will obey those rules she takes to be true. But I do not know what
duties she supposes are inherently good. Deontology itself does not determine
these duties. She must have done further work to determine them. (Or cribbed
them from somewhere else. But I still don't know where she cribbed them
from.)
Let's say Bob is a consequentialist. This tells me
that Bob will be interested not in the kinds of actions you take, but in the
consequences you think your actions will have. He will not say things like,
"Lying is wrong. Telling the truth is an ethical imperative." But he
might say something like, "Don't lie to your girlfriend about this because
it will just wind up hurting her." What I do not know is what kinds of
consequences Bob cares about. Does he care about maximizing pleasure and
minimizing pain in a system? Or does he also care about how equally that pleasure
and pain are distributed throughout the system? And what kind of system is he
concerned with: local, national, global; present day, within our lifetimes, all
generations? The label "consequentialism" does not tell me what he
values.
Let's say Carol is a virtue ethicist. This tells me
that Carol is interested in being a good person. She wants her actions to help
her develop her character such that she will be better at doing the right thing
in the future. Carol treats ethical action as though it were a skill, something
for which you train--sometimes by practicing the skill directly, sometimes by doing exercises adjacent to it, like learning dance to become a better athlete. When she decides on an action, she asks herself, "What kind of person will this make me?" But when she says she's a virtue ethicist, I do
not know what kind of person she thinks it would be good to be. I do not know
what skills she considers virtuous. I can imagine an Epicurean as a kind of virtue ethicist: Epicureans were very interested in becoming skillful at
maximizing pleasure and avoiding pain because they were aware that
over-indulgence is destructive.
In other words, I've been talking in these posts
about values rather than moral philosophies because they seem to make a bigger
difference on a strictly interpersonal level than the conventional philosophy
labels. (Of course, there are other labels I have not discussed: some that come to mind are Divine Command Theory, moral nihilism, and moral relativism--no, nihilism and relativism are not the same thing. But these seem to have many of the same weaknesses I have already mentioned, and they can be reformulated as versions of the others, just as the others can be reformulated as versions of these.) I've seen ethical arguments that come down to deontologically flavoured Divine Command Theory versus consequentialism, but for the most part I get into arguments over values, not systems.
The trouble with arguing about values instead of
systems, though, is that values are usually harder to argue for. If I value
something, in many ways I just value it. (cf. Richard Beck's The Theology of Calvin and Hobbes, at his blog Experimental Theology.)
How do I get someone else to value something like I do? Or how do I learn to
value something like someone else--and how do I know when to try? Maybe there
are ways of doing this, but I don't currently see them. So, ultimately, I worry
about whether a person with what I feel to be misplaced values can still be a
good person; if a person can be excused only if they have the right values,
what sense is there in a world where values are largely arbitrary? (And,
pressingly, what if my values are skewed?)
Series Index