In one of my computer science classes in undergrad, we discussed a particular way of thinking about the efficiency of an algorithm, which the professor called 'adversarial upper bounds'. The idea was to suppose that someone knows the 'guts' of your algorithm - exactly how it works - and that person is trying to make your algorithm take as many steps to complete as possible. The upshot was that sometimes, under this kind of analysis, inserting some randomness gives you a better expected value. For instance, suppose I am trying to find a route (just any route) from A to B on a map. Suppose I do this in 'breadth-first' fashion: I try all of the roads leaving A, until they get to the next intersection, then I try all the paths from each of those intersections, etc., until something hits B. If the adversary knows the order in which I try each possibility, he can set up a map and a pair of points such that I will always make the right guess LAST, so that I would always perform badly for that input. But if I try the possibilities in RANDOM order, then the expected time for any input where the solution involves a certain number of 'hops' is the same as the expected time for any other input with the same number of 'hops'. If we were writing a MapQuest-like application, we could otherwise accidentally end up with a case where finding a route from Los Angeles to Irvine was always very slow, while finding a route from Los Angeles to Riverside was very fast. By randomizing, we make sure that the system is 'fair': it is only very rarely slow, and the slowness is not consistently reproduced on any particular input.
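The breadth-first idea above can be sketched in code. This is a minimal illustration I am adding for concreteness, not anything from the original class; the graph, function name, and step-counting are my own assumptions. The only point it demonstrates is that shuffling the order in which neighbors are tried deprives an adversary (who knows the fixed order) of the ability to force the worst case every time.

```python
import random
from collections import deque

def bfs_route(graph, start, goal, randomize=False):
    """Breadth-first search for any route from start to goal.

    graph maps each intersection to a list of neighboring intersections.
    If randomize is True, neighbors are tried in random order, so an
    adversary who knows the code cannot arrange for the right road to
    always be tried last. Returns (path, steps), where steps counts the
    intersections examined.
    """
    frontier = deque([[start]])
    visited = {start}
    steps = 0
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        steps += 1
        if node == goal:
            return path, steps
        neighbors = list(graph.get(node, []))
        if randomize:
            random.shuffle(neighbors)  # adversary cannot predict this order
        for nxt in neighbors:
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None, steps

# An 'adversarial' map: with the fixed ordering, the road that actually
# leads to B is always tried last.
graph = {
    'A': ['X1', 'X2', 'B'],
    'X1': [], 'X2': [],
    'B': [],
}
path, steps = bfs_route(graph, 'A', 'B')
```

With the fixed order, the dead ends X1 and X2 are always examined before B; with `randomize=True`, B is examined first a third of the time, so the expected number of steps drops, and no single input is reliably the slow one.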
All this by way of illustration. My real purpose here is to consider how Cartesian demon skepticism resembles this type of thought pattern. The game works like this: I have the capacity to form beliefs on the basis of reason combined with certain 'inputs', especially sense perceptions. I want to maximize true belief and minimize false belief. Descartes asks us to imagine this game in 'adversarial' fashion: suppose there is another player in the game, an adversary called 'the evil demon'. The evil demon's purposes are precisely contrary to mine: he intends to maximize my false beliefs and minimize my true ones. The evil demon controls all of the inputs to my belief formation process; I choose the process itself. We ask the following four questions: (1) What is the optimal strategy for me? (2) What is the optimal strategy for the evil demon? (3) If we both follow the optimal strategies, what are my total quantities of true belief and false belief? (4) Can I have any beliefs which are guaranteed to be true, regardless of which strategy the evil demon employs?
Not much attention has been paid to (2). I suppose this is because, given representative realism (which leads inevitably to this kind of skepticism), the demon's strategy is obvious: he should imagine the possible world which is least similar to the actual world in respect of observable phenomena, and cause the inputs which would occur if that world were actual and my senses were reliable. Descartes concludes that, as long as we are playing this game, there is no strategy by which I can reliably arrive at true beliefs about the external world, and I can avoid false ones only by suspending judgment. He is able to escape only by claiming that he knows by the faculty of pure reason that he is NOT playing this game: i.e., that one of the beliefs guaranteed to be true is the belief that we are dealing not with an evil demon but with a benevolent God who is "not a deceiver." Given the truth of representative realism, a Cartesian demon has a winning strategy with respect to external world beliefs.
This perspective on the matter seems to me to be helpful in understanding certain responses to this sort of skepticism. In particular, we should note that, while Descartes argues that he can establish a priori that we are not playing this sort of game, according to the classical response of George Berkeley and the 20th-century responses of Hilary Putnam and Donald Davidson, the evil demon (or 'mad scientist' in the more recent literature) does not have a winning strategy. The claim is that even if the inputs to our belief formation process are intentionally malicious, once we understand the nature of our external world beliefs and how they acquire their content, it becomes clear that it can nevertheless be guaranteed that we are not radically mistaken.

Posted by Kenny at December 22, 2008 1:35 PM