There is always a more fundamental question. Kenny Easwaran at Antimeta has written a response to my post "What Does Bayesian Epistemology Have to Do With Probabilities?" In his post, he raises the question: just what is a probability? I want to take a look at my own assumptions about what a probability is, and at what he has to say, and see whether this has any relevance for our discussion of Bayesian epistemology.
I will not attempt here to develop a philosophy of probability, like Bayesianism, or frequentism, or anything of that sort. These are accounts of what probabilities mean, but not of what probabilities are. Easwaran and I agree that probabilities, in the sense in which we are using the term, are certain formal constructions. I was assuming a particular set-theoretic construction, because that's what I was taught (although what I'm about to present is slightly different from what I was assuming before, because I didn't remember things quite right).
I had assumed that a probability was defined over what I called a "state space" (which is actually a computer science term, but is not totally inapplicable here) which is a set of equally likely outcomes.
The correct term is, in fact, "sample space," and, according to my textbook (Mathematics: A Discrete Introduction by Edward R. Scheinerman), a sample space is an ordered pair (S, P) where S is a set of outcomes and P is a function from S to the real numbers between 0 and 1, inclusive (in some formulations, the power set of S is used instead, but that makes everything else more complicated, and I think all it buys you is a simpler notation), such that the sum of P(s) over every s ∈ S is 1.
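Written out symbolically, the textbook definition just described amounts to this:

```latex
\text{A sample space is an ordered pair } (S, P) \text{ where } S \text{ is a set of outcomes and}
\[
  P \colon S \to [0, 1]
  \qquad\text{such that}\qquad
  \sum_{s \in S} P(s) = 1.
\]
```

The constraint that the values sum to 1 is what lets us read P(s) as the probability of the single outcome s, and the probability of any event (any subset of S) as the sum of the P-values of its members.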
Once we've got this, we give the interpretation of 1 as certain truth and 0 as certain falsity, and so we can map things back to a Boolean algebra. Easwaran constructs this in reverse:
My understanding of the word is that "probability" refers to any function from a Boolean algebra to the real numbers satisfying the following three properties: (1) it is never negative; (2) the tautology is assigned value 1; (3) finite additivity (that is, given two elements whose conjunction is the contradiction, the probability of their disjunction is the sum of their probabilities).
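In symbols, writing B for the Boolean algebra, ⊤ for its tautology, and ⊥ for its contradiction, Easwaran's three conditions come out as:

```latex
P \colon \mathcal{B} \to \mathbb{R} \text{ such that}
\begin{align*}
  &(1)\quad P(a) \ge 0 \quad \text{for all } a \in \mathcal{B} \\
  &(2)\quad P(\top) = 1 \\
  &(3)\quad \text{if } a \wedge b = \bot, \text{ then } P(a \vee b) = P(a) + P(b)
\end{align*}
```

These are just the finitely additive Kolmogorov axioms, stated over a Boolean algebra of propositions rather than over a field of subsets of a sample space.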
Now, my discussion before was built on this sample space construction, and I was discussing what the members of the set were. Easwaran's construction has the benefit of allowing us to deal directly with propositions, without introducing the possible worlds semantics. This, I think, is why he seems to describe his view as in between (P) and (KPW): he can hold that there is a real sample space, and construct it out of propositions. With his construction, he doesn't need to go much further than what Kripke says explicitly. Ignoring the facts we're not interested in isn't a simplification for practical purposes: it's actually what we want to do.
Now, a benefit of (KPW) proper (that is, the view I originally dubbed (KPW)) over Easwaran's view is that it explains where these probabilities come from, at least in the case of an abstract ideal reasoner: we assign the same probability to every epistemically possible world, and look at how many worlds the proposition in question comes out true in. As Easwaran points out, this may run into trouble, because these probabilities may not be defined. Things get tricky with infinite sample spaces: if they are similar enough to the real line (or plane, etc.), then things work out, but otherwise they may not. So my (KPW) may be in trouble. I wonder, though, on Easwaran's view or on (P), where the probabilities are supposed to come from.
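For a finite toy version of the (KPW) picture, the recipe is mechanical enough to sketch in a few lines of code. Here each assignment of truth values to a handful of atomic propositions counts as one "epistemically possible world," every world gets the same weight, and a proposition's probability is the fraction of worlds in which it comes out true. The atoms and propositions are my own hypothetical illustrations, not anything from Easwaran's post.

```python
from itertools import product

# Each combination of truth values for the atoms is one epistemically
# possible world, represented as a dict from atom name to truth value.
atoms = ["rain", "cold"]
worlds = [dict(zip(atoms, values))
          for values in product([True, False], repeat=len(atoms))]

def probability(proposition):
    """Fraction of equally weighted worlds in which the proposition holds."""
    return sum(1 for w in worlds if proposition(w)) / len(worlds)

print(probability(lambda w: w["rain"]))                    # 0.5
print(probability(lambda w: w["rain"] or w["cold"]))       # 0.75
print(probability(lambda w: w["rain"] and not w["rain"]))  # contradiction: 0.0
```

The infinite case is exactly where this recipe breaks down: with infinitely many worlds there is no uniform counting measure, and unless the space has extra structure (like the real line's), "the fraction of worlds where p is true" may simply be undefined.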
Posted by Kenny at December 12, 2007 6:22 PM
