Free Will And Why We Don’t Have It: Galen Strawson’s argument
Galen Strawson’s Argument
1. To be responsible for what we do, we must be responsible for the way we are (at least in certain crucial mental respects).
2. We are not responsible for the way we are.
3. Therefore, we are not responsible for what we do.
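Stated bare, the argument is a simple modus tollens. Here is a hedged formal sketch (the symbols are my labels, not Strawson's: D for what we do, W for the way we are, and R(·) for "is ultimately responsible for"):

```latex
\begin{align*}
&\text{1. } R(D) \rightarrow R(W) && \text{(responsibility for what we do requires responsibility for the way we are)}\\
&\text{2. } \neg R(W) && \text{(we are not responsible for the way we are)}\\
&\text{3. } \therefore\; \neg R(D) && \text{(modus tollens on 1 and 2)}
\end{align*}
```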
The motivation for premise 1 comes from the uncontroversial fact that we do what we do because of the way we are. Given this, premise 1 at least seems to follow. Some people may maintain that the mere act of conscious deliberation is enough for free will, and that whether one is responsible for one's mental nature is irrelevant. This intuition can easily be resisted, however. When one acts out of self-conscious deliberation, certain reasons for action ultimately win out over others precisely because of one's mental nature and dispositions. If our mental nature is entirely a matter of blind luck over which we have no control, however (as premise 2 argues), it is very hard to see what sensible notion of free will remains.
Explicitly, the bridging principle between "We do what we do because of the way we are" and premise 1 is something like: if X occurs because of Y, then to be ultimately responsible for X, one must have at least some control over Y. This type of position has been called "source incompatibilism", since it denies free will on the grounds that we are not the true source of our actions, and the thesis makes no explicit reference to determinism. Can one plausibly hold that one can be ultimately responsible for X even though one has absolutely no control over Y? Consider a remote-controlled robot. Its actions X occur because of certain remote-control inputs Y. Can one maintain that this robot could be ultimately responsible for its actions? Hopefully one would not be inclined to assert that. In fact, further parallels can be drawn between ourselves and the robot, especially since the dominant view of the mind among researchers today treats the mind as a kind of machine. One can even give the robot a set of desires, such as the desire to walk when button 1 is pressed; similarly, we can have the desire to walk when, for example, we want to lose weight to look good (a product of our mental nature, over which we have no more control than the robot has over the pressing of button 1). Granted, our sets of desires are more complicated than the robot's, but what does an increase in complexity have to do with a metaphysical issue of agency and responsibility? We can imagine making a remote-controlled robot as complex as we wish, but few people would want to grant it free will! Even if complexity did entail free will (a huge "if"), would there then be a continuum of "degrees of free will" as complexity increases, or would there be a sudden jump where one robot design lacks free will while another, just a tiny bit more complex, suddenly has it? Neither option sounds satisfactory.
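The bridging principle can also be put schematically. In this hedged rendering in first-order notation, the predicate names are my own labels, not Strawson's:

```latex
\forall X\,\forall Y\ \bigl[\,\mathrm{Because}(X, Y) \rightarrow
  \bigl(\mathrm{UltResp}(X) \rightarrow \mathrm{Control}(Y)\bigr)\,\bigr]
```

Substituting our actions for X and our mental nature for Y: since we do what we do because of the way we are, and (per premise 2) Control(Y) fails, UltResp(X) fails with it.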
To be ultimately responsible for the way you are, you would have to have intentionally brought it about that you are the way you are. The impossibility of this is shown as follows. Suppose that you have somehow intentionally brought it about that you are the way you now are, in certain mental respects: suppose that you have intentionally brought it about that you have a certain mental nature N, and that you have brought this about in such a way that you can now be said to be ultimately responsible for having nature N. For this to be true you must already have had a certain mental nature N-1, in the light of which you intentionally brought it about that you now have nature N. (If you did not already have a certain mental nature, then you cannot have had any intentions or preferences, and even if you did change in some way, you cannot be held responsible for the way you now are.) But then, for it to be true that you and you alone are truly responsible for how you now are, you must be truly responsible for having had the nature N-1 in the light of which you intentionally brought it about that you now have nature N. To be responsible for nature N-1, you would have to be responsible for N-2, and so on ad infinitum, which is impossible. One would have to be causa sui, the originator of one's self, which is absurd.
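The regress can be laid out schematically. In this sketch (the notation R(·) is mine, meaning "one is ultimately responsible for having had that nature"), each arrow marks a further responsibility claim that the previous one requires:

```latex
R(N) \;\rightarrow\; R(N_{-1}) \;\rightarrow\; R(N_{-2}) \;\rightarrow\; \cdots
```

Each link holds because intentionally bringing about a nature means acting from a prior nature one would also have to be responsible for. Since one's chain of prior natures is finite — it stops at whatever unchosen nature one first found oneself with, for which R fails — the failure propagates all the way back to R(N). Hence premise 2.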
I can personally see no way in which premise 2 can be denied, since it follows from the argument in the paragraph above, which rests only on very plausible assumptions. Whether we have ultimate free will, then, seems to turn on premise 1. The only plausible response I have seen, really, is to accept premise 1 and thereby accept that we have no "ultimate" free will. Some maintain, however, that this standard is too high and that we should be content with a weaker sense of free will. Although this seems to me the most plausible move on the pro-free-will side, I still maintain that ultimate free will is a necessary condition for moral responsibility and culpability.