
Against Deep Questions: Morality

Morality As a Community of Ideally Rational Desires

Although I stand by my error-theory position about folk morality, that doesn’t exhaust the possible questions about morality. It would be desirable to find a substitute for the term that has sound philosophical grounds but discards the unsatisfiable intuitions behind the word. This paper makes two claims. The first, philosophical, claim sets out a framework for what an ideally rational person should desire, and consequently how an ideally rational person ought to live and act, in the context of a community. The second, more speculative, empirical claim is that fleshing out this framework for human communities coheres well enough with our rough, commonsense understanding of morality to deserve the name. If not, we should simply discard morality as a cluster of outdated notions and abide by the framework set out in the first claim, because that is what we have the most reason to follow.

Definitions:

Desires: reasons for action

Meta-desire: a desire about desires

Desire Satisfaction: a psychological state that can, in principle, be quantified: the amount of “satisfaction” attained by fulfilling a desire, which is the driving force behind what we do. “What drives us” could be happiness, pride, pleasure, etc.; I am making no claim about what it actually is. (For example, the desire satisfaction of eating a good meal is less than the desire satisfaction of accomplishing something on your bucket list.)

The Ultimate Desire: the desire to have the highest amount of desire satisfaction. I call this the Ultimate Desire because, although desires must often compete against each other and one must “lose” (the case of eating chocolate cake vs. staying fit), it can never be the case that the Ultimate Desire loses to another desire. Proof: if it did lose to a desire, it would only do so because that desire gave the agent more desire satisfaction than the Ultimate Desire, which is a contradiction, since the Ultimate Desire just is the desire for the most desire satisfaction.
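To make the reductio explicit (treating desire satisfaction as a numerical measure DS is my own shorthand, not something the definitions above strictly require): suppose the Ultimate Desire U lost out to some desire d. A desire only wins such a competition by offering more desire satisfaction, so we would have

DS(d) > DS(U).

But by definition U aims at the highest attainable desire satisfaction, so

DS(U) = max_{d′} DS(d′) ≥ DS(d),

and the two inequalities contradict each other.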

Rational Desires:

The term “rational” has most often been taken to describe actions rather than desires. An action is said to be the most rational action available at time t just in case it is the action most likely to give the greatest desire satisfaction relative to one’s desires and beliefs at time t. How are we to apply rationality to desires? We will define it analogously: a set of desires is the most rational set available at time t just in case (a) those desires are the ones most likely to give the greatest desire satisfaction if one acted rationally on them given one’s beliefs at time t, and (b) it is psychologically possible for the agent to hold those desires at time t. We can immediately see a way to define this notion in terms of degrees: the higher the expected desire satisfaction, the more rational it is to hold those desires.
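Put schematically (the symbols are my own shorthand, not part of the definition: DS for desire satisfaction, B_t for the agent’s beliefs at t, D_t for his desires at t, A_t for the actions available at t, and P_t for the desire sets psychologically possible at t):

most rational action at t: a* = argmax_{a in A_t} E[ DS | a, B_t, D_t ]

most rational desire set at t: D* = argmax_{D in P_t} E[ DS | B_t, D, acting rationally on D ]

The graded notion then falls out naturally: the rationality of holding a desire set D is measured by that expected value itself, not merely by whether D is the maximizer.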

The Simplest Case:

The human case proves much more complicated for showing what I want to show, because it depends on many empirical accidents that need to be worked out. So, instead, I will abstract away from the human case. The first point I would like to make is that all possible agents that have desires share one desire in common: the Ultimate Desire. I take this to be an a priori claim based on the nature of desire and desire satisfaction. Now, I will abstract away to the simplest possible case – an agent whose only desire is the Ultimate Desire.

The second complication revolves around the “is psychologically possible” clause in the definition of rational desires. In the case of humans, desires don’t seem to be the kind of thing that can simply be adopted by an act of will (if they were, why doesn’t everyone just end their desires for unhealthy food and will new desires for healthy food into existence?). Desires seem to be things that happen to us rather than things we choose. However, this does not seem to be a necessary fact about the psychology of all possible agents. To simplify, then, the agent I will focus on is one who can will himself into having desires. What ought this agent, with only the Ultimate Desire and the capability to will desires into existence, desire? He ought to conjure up desires that give the highest amount of desire satisfaction and that are the easiest to satisfy, which of course is contingent upon his circumstances. Breathing, for example, may give him an infinite amount of desire satisfaction (if that notion is even coherent).

Rational Communities:

Now, let us move on to the case of a community of agents, which bears much more resemblance to the human case. These agents, however, can still will desires into existence. I claim that no matter what the distribution of desires among these agents is, adding the following two desires will always increase the desire satisfaction experienced by each agent.

The First Moral Desire: To maximize the desire satisfaction of others.

If everybody in the community had this desire, then of course all of their other desires would be satisfied much more easily, since everyone would want to help them. Furthermore, the act of helping others would no longer count as “work” that contributes negative desire satisfaction; it would itself add desire satisfaction. If this community is anything like the human case, its members would have to “work” to help others anyway, in hope of reciprocation, to achieve many of their desires. With this desire in place, what they had to do anyway now yields positive instead of negative desire satisfaction. An agent in this community would therefore be irrational not to desire this desire.
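As a toy calculation (the quantities here are invented for illustration): suppose each agent must spend h units of effort helping others in hope of reciprocation, experienced as −h desire satisfaction. With the First Moral Desire in place, those same helping acts are themselves desired and yield, say, +s. Each agent’s desire satisfaction from helping then changes by

ΔDS = s − (−h) = s + h > 0,

before even counting the gain from everyone else now wanting to satisfy his other desires.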

The Second Moral Desire: To learn true propositions about the world.

The amount of desire satisfaction that any particular creature will eventually receive during his life is a function of his beliefs about the world, his desires, and the way the world actually is. Therefore, in general, the “more true” an agent’s beliefs are, the higher the desire satisfaction. An agent has good reason to desire truths about the world, and therefore would be irrational to either desire falsehoods or to not care either way whether his beliefs were true or false.
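Schematically (the function notation is mine): an agent’s lifetime desire satisfaction can be written DS = f(B, D, W), where B is his beliefs, D his desires, and W the way the world actually is. Holding D and W fixed, the expected value of f rises as B more closely approximates W, since actions chosen under accurate beliefs are more likely to actually secure what is desired.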

I will call a society ideally rational just in case each agent’s desire set is the most rational one available (as defined above). Now, if the above claims are true, then the First and Second Moral Desires will be part of this desire set.

The Human Case:

Now on to the human case. A lot of empirical, psychological work is needed here on just what the mechanism for “desire conjuring” in humans is. I imagine that the range of possible desire sets for humans is constrained by genetics, but some malleability is almost certainly present in the early stages of development; praise and blame, for example, are probably two mechanisms that shape desires. This project, of course, is not really one for a philosopher to speculate on, so the next part won’t depend on any specific mechanism holding true.

One possibility is that the utopian, perfectly altruistic society that would result from an ideally rational society is in fact not a psychological possibility for human beings. While this may or may not be the case, examples of people who have come close to the ideal can be found. The more pragmatic question of how to implement such an ideally rational society would probably turn to methods such as moral education in early childhood.

What Morality Is:

Given this preamble, I will now stipulate what I mean by morality and explain how it could satisfy the intuitions behind the common use of the word. A set of desires is moral just in case it is the most rational one to have. An action is moral just in case it is the most rational action available relative to moral desires. Given these definitions, it is easy to see that acting morally is the same thing as acting rationally, provided one has ideally rational desires. This preserves the philosophical notion that rationality and morality go hand in hand. Furthermore, it focuses not only on rational actions but, crucially, on rational desires.
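In the shorthand introduced earlier, the stipulation is simply: the moral desire set is D* (the most rational desire set), and an action a is moral just in case a = argmax_{a in A_t} E[ DS | a, B_t, D* ]. Acting morally and acting rationally then coincide exactly when one’s actual desires are D*.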

As a caveat, I am in fact not saying that it is rational to act morally (in either the traditional sense or the newly defined sense) here and now; any theory of morality that does not admit this manifestly true statement must be false. What I am saying is that, relative to rational desires, we ought to act morally; and furthermore, we have reason to adopt rational desires. Unfortunately, as a matter of psychology, we cannot adopt rational desires by an act of will. However, if we want future human generations to live in an ideally rational society, which would by definition be a utopia of altruism, we ought to find ways to give them moral desires. To me, at least, the notion of morality stipulated in this paper coheres well enough with the common sense notion of morality to deserve the name.

Conclusion:

I have presented a notion of morality that rests only on harmless naturalistic assumptions, ones a person of any metaphysical persuasion can accept. Furthermore, it preserves the idea that it is rational to be moral. Earlier in the paper, I also presented some reasons why this morality would yield ideas similar to the pre-theoretical ones, such as altruism, given to us by common sense morality. Another consequence of this kind of morality is that its rules are not set in stone, because of the “psychologically possible” clause: what it is moral for agents to do and want will vary depending on what kind of agents they are. That is why figuring out whether this new kind of morality tells us similar things to common sense morality is largely an empirical, psychological project that cannot be given a final ruling by philosophy. One intuition about common sense morality that this sort of morality does violate, held mainly by academic philosophers, is Kant’s categorical imperative, which says that one should be moral regardless of one’s desires. I take the opposite stand: one should be moral because of one’s desires. I do this with no apologies, since I think desires must be appealed to in order to justly say that an agent ought to do X. In my opinion, I have provided good reasons why this precise concept of morality deserves to replace the notoriously imprecise, prescriptive concept of common sense morality. Let’s call this new theory of morality “ideal morality”.
