Exploring Ontology

It's all about the deep questions.


My Philosophical Beliefs

As a bit of background to my other posts, it often helps to know where I'm coming from, so I've decided to chronicle my personal beliefs in all matters philosophical. Philosophical jargon is usually linked to a corresponding article (usually Wikipedia, when applicable) to give a brief overview of the term for convenience. For those who want to look into a topic further, I highly recommend the corresponding entry in The Stanford Encyclopedia of Philosophy. This post will probably be updated every now and then, since my beliefs change fairly often, relatively speaking. Here goes!

Formal Epistemology: Bayesian. Although I recognize that some of the details need scrutiny (the problem of old evidence, for example, presupposes logical omniscience), I largely believe it is on the right track: an excellent "updater" for your beliefs given the evidence.

Traditional Epistemology: Some work in traditional epistemology I find useful (combating skeptical arguments and adjudicating between various epistemological theories), and some I find not so useful (conceptual analysis of the word "knowledge" and debates over the viability of meaningful a priori knowledge). Overall, I find a modest foundationalism most attractive, even though it can result in a weak form of skepticism.

Ethics: For me, there are deep unsolved questions in meta-ethics that must be tackled before we can accept any view in normative ethics. Mackie's classic Ethics: Inventing Right and Wrong has been very influential on me, however, so I identify as a tentative error theorist. Even though I hold this view, morality is definitely a useful fiction that we should all abide by (see here). So I do think that moral issues are important in this regard; I became a vegetarian, for example, through arguments from the animal rights movement.

Free Will: Source incompatibilist (the first couple of paragraphs there define it). It's unfortunate that this problem has dragged on for millennia in philosophy. It almost seems to me that philosophers agree on the facts of the matter; they just disagree on whether the facts are sufficient for a robust form of free will. People who believe in free will acknowledge that if determinism is true, then our actions are sufficiently caused by events thousands of years ago (by definition). Most also agree that we have no ultimate control over the way we are. Further, most agree that a large bulk of our actions seem to have their source in unconscious processes. Similarly, those who reject free will recognize that we behave in an orderly way according to our desires and beliefs, and that we have the power to deliberate over "potential" (or perceived) outcomes and choose among them. They also grant that, intuitively, it is obvious that we have free will (they just reject those intuitions).

Philosophy of Math: Structuralist with an anti-realist bent.

Metametaphysics: In metaphysics there seem to be two types of questions: those that are substantial and based on non-linguistic truths, such as "Does God exist?", and those that are not (mereological composition, anyone?). Unfortunately, in my personal reading it seems as though there are more of the latter. I am skeptical of most things metaphysical.

Consciousness: Surprisingly, I find myself increasingly persuaded by the property dualist view (this is very tentative). Hence, I am driven to reject physicalism (tentatively).

Philosophy of Religion: Strong atheist, weak a-deist (if I can coin that word). I certainly do not find any argument for the existence of God persuasive. I do think that there are strong arguments against the traditional conception of God as actively performing miracles, as having a peculiar fascination with one species in the universe (Homo sapiens, lucky us), and as being all-loving, all-good, etc. A deistic conception of God, however, is sufficiently vague that I think it cannot be strongly refuted. Hence, weak a-deist.

Philosophy of Time: B-theory.

Free Will And Why We Don’t Have It: Galen Strawson’s argument

Galen Strawson’s Argument

1. To be responsible for what we do, we must be responsible for the way we are (at least in certain crucial mental respects).

2. We are not responsible for the way we are.

3. Therefore, we are not responsible for what we do.
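
Stated this baldly, the inference is just modus tollens. As a minimal sketch of the argument's validity (the proposition names are mine, purely illustrative), it can be checked in Lean:

```lean
-- Proposition names are mine, purely illustrative:
--   RespDo  : "we are responsible for what we do"
--   RespAre : "we are responsible for the way we are (in the crucial mental respects)"
theorem strawson (RespDo RespAre : Prop)
    (p1 : RespDo → RespAre)   -- premise 1
    (p2 : ¬RespAre)           -- premise 2
    : ¬RespDo :=
  fun h => p2 (p1 h)          -- modus tollens: RespDo would give RespAre, contradicting p2
```

The logic is unimpeachable; all the philosophical action is in the two premises.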

Premise 1

The motivation for this premise comes from the uncontroversial fact that we do what we do because of the way we are. Given this, premise 1 at least seems to follow. Some people may maintain that the mere act of conscious deliberation is enough for free will, and that whether one is responsible for one's mental nature is irrelevant. This intuition can be combated, however. When one acts out of self-conscious deliberation, certain reasons for action ultimately win out over other reasons for action precisely because of one's mental nature or disposition. If, however, our mental nature is entirely a matter of blind luck over which we have zero control (as argued in premise 2), it is very hard to see what sensible notion of free will is left.

Explicitly, the bridging principle between "We do what we do because of the way we are" and premise 1 is along these lines: if X because Y, then to be ultimately responsible for X, one must have (at least) some control over Y. This type of position has been deemed "source incompatibilism," since it denies free will on the grounds that we are not the true source of our actions; the thesis makes no explicit reference to determinism.

Can one plausibly hold that one can be ultimately responsible for X even though one has absolutely no control over Y? Consider a remote-controlled robot. Its actions X occur because of certain remote-control inputs Y. Can one maintain that this robot could be ultimately responsible for its actions? Hopefully one would not be inclined to assert that. In fact, further parallels can be drawn between ourselves and this robot, especially considering that the dominant view of the mind among researchers today identifies the mind as a type of machine. One can even give the robot a set of desires, such as the desire to walk when button 1 is pressed; similarly, we can have the desire to walk when, for example, we want to lose weight to look good (a product of our mental nature, over which we have zero control, much like the pressing of button 1).

Granted, our sets of desires are more complicated than this robot's, but what does an increase in complexity have to do with a metaphysical issue of agency and responsibility? We can imagine making a remote-controlled robot as complex as one wishes, but few people would want to grant it free will! Even if it were the case that complexity entails free will (which is a huge "if"), would there then be a continuum of "degrees of free will" as complexity increases, or would there be a sudden jump where one robot design lacks free will while another, just a tiny bit more complex, suddenly has it? Neither option sounds satisfactory.
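
The bridging principle itself can be rendered schematically. This is my own rendering, not Strawson's formulation, and the predicate names are purely illustrative:

```lean
-- All names are mine and purely schematic:
--   Because x y : "x obtains because y obtains"
--   UltResp x   : "one is ultimately responsible for x"
--   Control y   : "one has at least some control over y"
theorem bridge (Fact : Type)
    (Because : Fact → Fact → Prop)
    (UltResp Control : Fact → Prop)
    -- the bridging principle: if x because y, then ultimate
    -- responsibility for x requires some control over y
    (principle : ∀ x y, Because x y → UltResp x → Control y)
    (X Y : Fact)
    (h1 : Because X Y)    -- we do what we do because of the way we are
    (h2 : ¬Control Y)     -- we have no control over the way we are
    : ¬UltResp X :=
  fun h => h2 (principle X Y h1 h)
```

Put this way, the robot intuition is just an instance of the schema: deny the robot control over its inputs and you have denied it ultimate responsibility for its outputs.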

Premise 2

To be ultimately responsible for the way you are, you would have to have intentionally brought it about that you are the way you are, and this is impossible, as follows. Suppose that you have somehow intentionally brought it about that you are the way you now are, in certain mental respects: suppose that you have intentionally brought it about that you have a certain mental nature N, and that you have brought this about in such a way that you can now be said to be ultimately responsible for having nature N. For this to be true, you must already have had a certain mental nature N-1, in the light of which you intentionally brought it about that you now have nature N. (If you did not already have a certain mental nature, then you cannot have had any intentions or preferences, and even if you did change in some way, you cannot be held responsible for the way you now are.) But then, for it to be true that you and you alone are truly responsible for how you now are, you must be truly responsible for having had the nature N-1 in the light of which you intentionally brought it about that you now have nature N. To be responsible for nature N-1, you would have to be responsible for nature N-2, and so on ad infinitum, which is impossible. One would have to be causa sui, the originator of one's self, which is absurd.
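
Strawson presses this as an infinite regress. As a sketch of how the finite-agent version goes through (the stage indexing is my own framing, not Strawson's), the chain of required responsibilities marches back to an initial nature that one simply did not choose:

```lean
-- Indexing scheme is mine: Resp k reads "one is ultimately responsible for
-- one's mental nature at stage k", where stage 0 is the nature one started
-- with and stage k + 1 is formed in light of the nature at stage k.
theorem no_resp (Resp : Nat → Prop)
    -- Strawson's step: responsibility for a later nature requires
    -- responsibility for the earlier nature that produced it
    (step : ∀ k, Resp (k + 1) → Resp k)
    -- no one intentionally brought about the nature they started with
    (base : ¬Resp 0)
    : ∀ k, ¬Resp k := by
  intro k
  induction k with
  | zero => exact base
  | succ n ih => exact fun h => ih (step n h)
```

For a finite being there is always a stage 0, so the only escape would be an agent with no first stage at all: the causa sui that Strawson dismisses as absurd.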

Conclusion

I can personally see no way in which premise 2 can be denied, since it simply follows from the logic given in the paragraph above, which resorts only to very plausible assumptions. Whether we have ultimate free will, then, seems to rest on premise 1. The only plausible response I have seen, really, is to accept premise 1 and thereby accept that we have no "ultimate" free will. Some maintain, however, that this standard is too high and that we should be comfortable with our weaker sense of free will. Although this seems to me the most plausible move on the pro-free-will side, I still maintain that ultimate free will is a necessary condition for moral responsibility and culpability.