Exploring Ontology

It's all about the deep questions.

Free Will And Why We Don’t Have It: Galen Strawson’s argument

Galen Strawson’s Argument

1. To be responsible for what we do, we must be responsible for the way we are (at least in certain crucial mental respects).

2. We are not responsible for the way we are.

3. Therefore, we are not responsible for what we do.
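
For readers who like to see the bare logical form, here is a minimal sketch of the argument as a simple modus tollens. The shorthand is mine, not Strawson's: D(x) abbreviates "x is ultimately responsible for what x does" and W(x) abbreviates "x is ultimately responsible for the way x is (in the crucial mental respects)".

\begin{align*}
\text{(1)}\quad & \forall x\,\bigl(D(x) \rightarrow W(x)\bigr) \\
\text{(2)}\quad & \forall x\,\neg W(x) \\
\text{(3)}\quad & \therefore\ \forall x\,\neg D(x) \qquad \text{(from (1) and (2))}
\end{align*}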

Premise 1

The motivation for this premise comes from the uncontroversial fact that we do what we do because of the way we are. Given this, premise 1 at least seems to follow. Some people may maintain that the mere act of conscious deliberation is enough for free will, and that whether one is responsible for one’s mental nature is irrelevant. This intuition can easily be combated, however. When one acts out of self-conscious deliberation, certain reasons for action ultimately win out over others precisely because of one’s mental nature and dispositions. If our mental nature is entirely a matter of blind luck over which we have zero control, however (as argued in premise 2), it is very hard to see what sensible notion of free will is left.

Explicitly, the bridging principle between “We do what we do because of the way we are” and premise 1 is something like: if X happens because of Y, then to be ultimately responsible for X, one must have at least some control over Y. This type of position has been deemed “source incompatibilism,” since it denies free will on the grounds that we are not the true source of our actions, and the thesis makes no explicit reference to determinism. Can one plausibly hold that one can be ultimately responsible for X even though one has absolutely no control over Y?

Consider a remote-controlled robot. Its actions X occur because of certain remote-control inputs Y. Can one maintain that it is possible for this robot to be ultimately responsible for its actions? Hopefully one would not be inclined to assert that. In fact, more parallels can be drawn between ourselves and this robot, especially considering that the dominant view of the mind among researchers today identifies the mind as a type of machine. One can even give the robot a set of desires, such as the desire to walk when button 1 is pressed; similarly, we can have the desire to walk when we want to lose weight to look good, for example (a desire that is a product of our mental nature, over which we have zero control, much like the pressing of button 1). Granted, our sets of desires are more complicated than those of the robot under consideration, but what does an increase in complexity have to do with the metaphysical issue of agency and responsibility? We can imagine making a remote-controlled robot as complex as we wish, but few people would want to grant it free will! Even if it were the case that complexity entails free will (which is a huge “if”), would there then be a continuum of “degrees of free will” as complexity increases, or would there be a sudden jump where one robot design lacks free will while another just a tiny bit more complex suddenly has it? Neither option sounds satisfactory.
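
To make the structural point vivid, here is a toy sketch in Python (entirely my own invention, and no part of Strawson’s argument) of a robot whose “actions” are fully fixed by its inputs plus a disposition table it never chose. The class and names are hypothetical illustrations only.

class RemoteControlledRobot:
    def __init__(self):
        # The robot's "mental nature": a fixed mapping from inputs to actions.
        # The robot did not choose this mapping; it was simply built with it.
        self.dispositions = {
            "button_1": "walk",
            "button_2": "stop",
        }

    def act(self, button_press):
        # Action X occurs because of input Y plus the built-in dispositions;
        # nothing in this step is up to the robot itself.
        return self.dispositions.get(button_press, "do nothing")

robot = RemoteControlledRobot()
print(robot.act("button_1"))  # -> walk

Making the disposition table arbitrarily larger or more layered adds complexity, but the structure is unchanged: outputs follow from inputs plus a nature the agent never chose, which is exactly the point pressed above.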

Premise 2

To be ultimately responsible for the way you are, you would have to have intentionally brought it about that you are the way you are. The impossibility of this is shown as follows. Suppose that you have somehow intentionally brought it about that you are the way you now are, in certain mental respects: suppose that you have intentionally brought it about that you have a certain mental nature N, and that you have brought this about in such a way that you can now be said to be ultimately responsible for having nature N. For this to be true you must already have had a certain mental nature N-1, in the light of which you intentionally brought it about that you now have nature N. (If you did not already have a certain mental nature, then you cannot have had any intentions or preferences, and even if you did change in some way, you cannot be held responsible for the way you now are.) But then, for it to be true that you and you alone are truly responsible for how you now are, you must be truly responsible for having had the nature N-1 in the light of which you intentionally brought it about that you now have nature N. To be responsible for nature N-1, you would have to be responsible for nature N-2, and so on ad infinitum, which is impossible. One would have to be causa sui, the originator of one’s self, which is absurd.
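
The regress can be displayed compactly as follows; the indexing (the present nature as N_0, with earlier natures N_{-1}, N_{-2}, and so on) is simply my own way of writing what the paragraph above says in prose, with Resp(N) read as "being ultimately responsible for nature N".

\begin{align*}
\mathrm{Resp}(N_{0}) &\;\Rightarrow\; \mathrm{Resp}(N_{-1}) \\
\mathrm{Resp}(N_{-1}) &\;\Rightarrow\; \mathrm{Resp}(N_{-2}) \\
&\;\;\vdots \\
\mathrm{Resp}(N_{-k}) &\;\Rightarrow\; \mathrm{Resp}(N_{-(k+1)}) \qquad \text{for every } k \geq 0
\end{align*}

So ultimate responsibility for N_0 would require completing an infinitely long chain of prior self-formations, which no finite agent can do.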

Conclusion

I can personally see no way in which premise 2 can be denied, since it simply follows from the logic given in the above paragraph, which relies only on very plausible assumptions. Whether we have ultimate free will, then, seems to rest on premise 1. The only plausible response I have seen, really, is to accept premise 1 and thereby accept that we have no “ultimate” free will. Some maintain, however, that this standard is too high and that we should be comfortable with a weaker sense of free will. Although this does seem to me to be the most plausible move on the pro-free-will side, I still maintain that ultimate free will is a necessary condition for moral responsibility and culpability.


4 responses to “Free Will And Why We Don’t Have It: Galen Strawson’s argument”

  1. cl August 21, 2011 at 10:37 pm

    Or we could always challenge the determinist presumption inherent in 1.

  2. tarrobread August 21, 2011 at 11:31 pm

    Cl, thanks for the first comment!

    How would you challenge it, exactly? Also, this argument is not an argument that free will is incompatible with determinism. In fact, Galen Strawson (who professionally advances this argument) presents it as an a priori argument against free will. Note that premise 1 doesn’t require that we sufficiently or deterministically cause “the way we are”; it only requires that we have some reasonable degree of control over it. Besides that, I don’t see where you got a determinist flavor anywhere. The argument remains sound even if an indeterministic interpretation of quantum activity at the micro scale wins out. Do you disagree with any specific point in the brief defense of premise 1? It seems reasonable to me that if we have no control over the way we are, then we can’t possibly be morally responsible for what we do.

  3. e.e. October 9, 2011 at 1:43 pm

    The difference between a human and a programmed robot (on the topic of free will) is that the robot has an active programmer, someone (or something) that understands and knows all its workings and, therefore, can 100% predict the reactions of the robot to any given situation. Although humans are controlled (whether partially or completely) by the computation of their brains, there is no other being (in this case there being no existent God) that understands and can 100% predict the human’s actions. The moment that other being appears, there is no longer free will, but as long as human actions cannot be 100% predicted, a sliver of free will still exists.

  4. tarrobread October 27, 2011 at 12:39 am

    I don’t think it really matters whether there is in fact someone who can predict our actions. It only matters whether our actions are predictable in principle. If they are predictable in principle, then we would be in the same position as the robot with regard to this point. We could even drum up a thought experiment. Suppose that an omniscient God existed every other minute (so he would exist for one minute, not exist for the next, exist for the one after that, etc.). Would our free will flicker on and off every other minute of our lives? It seems highly improbable that some mysterious supernatural action over and above the universe could so closely affect a question whose answer seems intuitively to be “internal” to us.
