When someone asks 'why p rather than q?', it is sometimes a good answer to say, 'p is far more probable than q.' When someone asks, 'why is p more probable than q?', it is sometimes a good answer to say, 'there are many more ways for p to be true than for q to be true.' According to a well-known paper by Peter van Inwagen, the question 'why is there something rather than nothing?' can be answered in just this fashion: something is far more probable than nothing, because there are infinitely many ways for there to be something, but there is only one way for there to be nothing. In his contribution to The Puzzle of Existence, Matthew Kotzen argues that this sort of answer is only sometimes a good one, and that we cannot know a priori whether it is a good answer to the question of something rather than nothing.
Kotzen's general line of response is a standard one: he argues that there are many possible measures, and not all of them assign probability 0 to the empty world. Van Inwagen is perfectly aware of this problem, but argues that a priori considerations allow us to select a natural measure. Kotzen's strategy is to identify some everyday examples where this pattern of explanation looks good, and some where it looks bad, and show that van Inwagen's a priori considerations don't draw the line between good and bad in the right place. Furthermore, he argues (p. 228) that van Inwagen's considerations may not actually be sufficient to assign unique probabilities in the relevant cases, since it is not always clear what space the measure should be assigned over.
I think Kotzen's argument against van Inwagen is quite compelling. The best thing about Kotzen's article, though, is that it does a great job explaining these complex issues at a moderate level of rigor and detail while assuming hardly any background. This would be a great article to assign to undergraduate students.
In the rest of this post, I'm going to do two things. First, I'm going to explain the issue about measures at a much lower level of rigor and detail than Kotzen does, just to make sure we are all up to speed. Second, I am going to raise the question of whether van Inwagen's argument might have an even bigger problem: whether, instead of too many equally eligible measures, there might be none.
The simplest and most familiar cases where the probabilistic pattern of explanation with which we are concerned works are finite and discrete. This is the case, for instance, with dice rolls or coin flips. The coin either comes up heads or tails; each die shows one of its six faces. So, as one learns in one's very first introduction to probability, in the case of the dice roll, the probability of any particular proposition about that roll is the number of cases in which the proposition is true divided by the total number of possible cases (for two six-sided dice, 36). In real life, when we divide the outcomes into discrete cases like this, we care about certain factors (which face is up) and not about others (e.g., where on the table the dice land). This division into discrete cases is called a partition. The reason the probabilities are so simple in the dice case, with each case in the partition being equally likely, is that we chose a good partition. (Well, actually, it's because a fair die is defined as one that makes each of those outcomes equally probable, but let's ignore that for now and imagine that fair dice just occur in nature rather than being made by humans on purpose.) Suppose that, on one of our dice, the face with six dots is painted red rather than white and, for some reason, what we really care about is whether the red face is up. Well, then we might partition the outcomes accordingly, into the red outcomes and the non-red outcomes. But these two cases (red and non-red) are not equally probable.
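To make the counting concrete, here is a minimal sketch of the finite, discrete case. It is my own illustration in Python (the function and variable names are just for exposition, not anything from Kotzen or van Inwagen): it enumerates the 36 equally likely outcomes of two fair dice and computes the probability of a proposition as the number of favorable cases over the total number of cases, first for a cell of the 'which faces show' partition and then for the red/non-red partition.

```python
# Minimal sketch: probability by counting cases among the 36 outcomes of two fair dice.
from fractions import Fraction
from itertools import product

outcomes = list(product(range(1, 7), repeat=2))  # all 36 (die1, die2) results

def probability(proposition):
    """Probability of a proposition, assuming each of the 36 cases is equally likely."""
    favorable = sum(1 for o in outcomes if proposition(o))
    return Fraction(favorable, len(outcomes))

# The 'good' partition: which faces show. Each cell gets probability 1/36.
print(probability(lambda o: o == (3, 5)))   # 1/36

# Suppose the six-face of the first die is painted red and all we care about
# is whether the red face is up. That partition has two unequal cells:
print(probability(lambda o: o[0] == 6))     # 1/6  (red face up)
print(probability(lambda o: o[0] != 6))     # 5/6  (red face not up)
```

The point of the sketch is just that the red cell gets probability 1/6 and its complement 5/6: equal probability is a feature of the particular partition we chose, not of partitions as such.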
Sometimes the thing we care about is not a discrete case like this, but a fundamentally continuous case like (in a standard example) where on a dartboard a perfectly thin dart lands. A measure is basically the equivalent, in this continuous case, of the partition in the discrete case. For the dartboard, there is a natural measure, one that 'just makes sense', and this is provided by our ordinary spatial concepts. So if, for instance, the bullseye takes up 1/10 of the area of the dartboard, then a randomly thrown dart will have a 1/10 chance of landing there. (Again, this is really just what it means for the dart to be thrown randomly; a quick simulation of this example is sketched below.) This isn't the only possible measure, but it's the one that, in some sense, 'just makes sense.' But the question is, is there a natural measure on the space of possible worlds? That is, is there some 'correct' or 'sensible' or 'natural' way of saying how 'far apart' two possible worlds are? This is far from clear. The Lewis-Stalnaker semantics for counterfactuals supposes that we can talk about some worlds being 'closer together' than others, but this is not enough to define a measure. Furthermore, Lewis, at least, thinks that the closeness of worlds might change based on contextual factors (which respects of similarity we most care about), so it seems like there's a plurality of measures there. Perhaps one could claim that all of these reasonably natural measures agree in assigning nothing probability 0, but that's not clear either. For instance, Leibniz seems to think that one reason why the existence of something cries out for explanation is that "a nothing is simpler and easier than a something" ("Principles of Nature and Grace," tr. Woolhouse and Francks, sect. 7). So maybe we should adopt a measure in which worlds get lower probability the more complicated they are. (I think Swinburne might also have a view like this.) On this kind of view, the empty world (if there is such a world) will be the most probable world. So the plurality of measures seems like a problem.
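Here is the promised sketch of the dartboard example. It is again my own illustration (the board radius, trial count, and names are arbitrary choices for the sketch): under the natural area measure, the chance of a uniformly random dart hitting the bullseye just is the bullseye's share of the board's area, and a quick simulation agrees.

```python
# Rough sketch: the 'natural' area measure on the dartboard. A dart thrown
# uniformly at random hits a region with probability equal to that region's
# share of the board's total area.
import math
import random

BOARD_RADIUS = 1.0
# Choose the bullseye radius so the bullseye covers exactly 1/10 of the board's area:
# pi * r^2 = (1/10) * pi * R^2  =>  r = R / sqrt(10)
BULLSEYE_RADIUS = BOARD_RADIUS / math.sqrt(10)

def random_dart():
    """A point uniformly distributed over the circular board (rejection sampling)."""
    while True:
        x, y = random.uniform(-1, 1), random.uniform(-1, 1)
        if x * x + y * y <= BOARD_RADIUS ** 2:
            return x, y

trials = 100_000
hits = sum(1 for _ in range(trials)
           if math.hypot(*random_dart()) <= BULLSEYE_RADIUS)
print(hits / trials)  # approximately 0.1, the bullseye's share of the area
```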
It's not the only problem, though. Kotzen notes that "the Lebesgue measure can be defined only in spaces that can be represented as Euclidean n-dimensional real-valued spaces" (222). (The Lebesgue measure is the standard measure used, for instance, in the dartboard case: the more space a region takes up, the bigger its measure.) But the space of possible worlds is not like this! David Lewis has argued that the cardinality of the space of possible worlds must be greater than the cardinality of the continuum (Plurality of Worlds, 118). The reason is relatively simple: suppose that it is possible that there should be a two-dimensional Euclidean space in which every point is either occupied or unoccupied. The set of possible patterns of occupied and unoccupied points in such a space (each representing a distinct possibility) will be larger than the continuum. But if this is right, then there can be no Lebesgue measure on the possible worlds because there are too many worlds. (I spell out the cardinality reasoning below.) Even if this exact class of worlds is not really possible (for reasons such as the considerations about space in modern physics I raised last time) it seems likely that there are too many worlds for the space of possible worlds to have a Lebesgue measure. Yet Kotzen attributes to van Inwagen the view "that we ought to associate a proposition's probability with its Lebesgue measure in the relevant space" (227).
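For what it's worth, here is how I would reconstruct Lewis's cardinality argument as a worked equation (the notation is mine, not Lewis's or Kotzen's):

```latex
% My reconstruction of the cardinality argument (notation is mine).
% Let P be the set of points of a two-dimensional Euclidean space, so |P| = 2^{\aleph_0}.
% A pattern of occupied and unoccupied points is a function from P to {0,1},
% and each such pattern represents a distinct possibility. By Cantor's theorem,
\[
  \bigl|\{0,1\}^{P}\bigr| \;=\; 2^{|P|} \;=\; 2^{2^{\aleph_0}} \;>\; 2^{\aleph_0} \;=\; |\mathbb{R}^n|,
\]
% so there are more such possibilities than there are points in any Euclidean
% n-dimensional real-valued space, which is what a Lebesgue measure would require.
```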
Maybe van Inwagen is not in quite this much trouble. He doesn't actually seem to say anything about a Lebesgue measure in the paper, so I'm not sure exactly why Kotzen thinks van Inwagen is committed to this. In fact, in the paper Kotzen is discussing, van Inwagen cites his earlier discussion in Daniel Howard-Snyder's collection, The Evidential Argument from Evil. In endnote 3 (pp. 239-240) of that article, van Inwagen says "the notion of the measure of a set of worlds gets most of such content as it has from the intuitive notion of the proportion of logical space that a set of worlds occupies." I find it a little bit ironic that van Inwagen says this, because he's always denying that he has intuitions about things! I don't have intuitions about proportions of logical space. In any event, it seems to me that van Inwagen is here disavowing the project of giving a well-defined measure in the mathematician's sense.
Suppose one did want to identify a natural measure that was well-defined in the mathematician's sense. I'm not sure about all the technicalities of trying to do this for sets of larger-than-continuum cardinality, and whether it can be done at all. Even if it can, though, it's going to be hard to say that one measure is more intuitive or natural than another in such an exotic realm. Things might be even worse: Pruss thinks (PSR, p. 100) that, for any cardinality k, it is possible that there be k many photons. If this is true, then there is a proper class of possible worlds, and one certainly can't define a measure on a proper class. (This is another thing I don't think I have intuitions about.)
All this to say: anyone who wants to assign a priori probabilities to all propositions (as van Inwagen does) is fighting an uphill battle, and if such probabilities cannot be assigned, then it does not seem that the probabilistic pattern of explanation can be used to tell us why there is something rather than nothing.
(Cross-posted at The Prosblogion.)
Posted by Kenny at February 26, 2014 4:33 PM