Archive for the ‘philosophy’ Category
Here’s a tired old subject that I’m going to have a go at. First, a confession: I’ve never read Lewis’ Counterfactuals. So maybe the following thrown-together worries are easily dealt with. But here goes. I am going to argue that the existence of what I call “as-if” possible worlds means that Lewis is committed to determinism: all and only the counterfactuals with true consequents are true. That is, a counterfactual is true only if its consequent is true in the actual world.
Take your favourite counterfactual: I’ll be using “If Oswald hadn’t shot Kennedy, he would still be alive”. (Linguistic point: this is ambiguous. Is the “he” Kennedy or Oswald? I’m pretending it means Kennedy.) What we want is a false antecedent, a false consequent, but a true counterfactual. (We’ll come back to counterfactuals with true consequents later.) We want A □→ B where A and B are both false, but the conditional is true. [Wordpress’ LaTeX features don’t extend to the counterfactual conditional symbol, so I’ll write it as “A □→ B”.]
Your typical Lewis semantics for counterfactuals goes like this: find the nearest possible world where A is true. If this is also a world where B is true, then the conditional is true. For our Oswald/Kennedy case, the story goes like this: find the nearest possible world where Oswald didn’t shoot Kennedy. In that possible world, Kennedy didn’t get shot, and so he is still alive. So the conditional is true.
But here’s another possible world. Oswald didn’t shoot Kennedy, but a bullet spontaneously appeared right outside the window where Oswald would have been, and this bullet has just the right velocity and so on that it hits and kills Kennedy, just like Oswald’s bullet does in the actual world. Now, this is also a world where the antecedent of the conditional is true. But this is a world where Kennedy dies. So which world is closer to the actual one? Well, I claim that the “bullet appears as if Oswald had shot” world is closer. Why? Because it is exactly the same as the other world except in one crucial respect (the bullet), and in that respect it is closer to the actual world, because what is different is precisely what happens in the actual world.
So the counterfactual is false. And we can easily generalise to any counterfactual whose components are both false. Take any A □→ B which we intuitively think is true: any claim like “If A had been true, things would have been different in the way B describes”. Now I let Lewis tell the intuitive possible world story that makes this seem right. Lewis says: “Take the nearest A world: it is also a B world. QED”. But I reply: “But what about the world where everything happens just as you say such that A is true, but everything else happens as if B were false?” This has to be a closer possible world, since B is false in the actual world.
One might object that I’m just misunderstanding what closeness of possible worlds amounts to. Worlds where bullets just spontaneously appear, where A is true but everything else happens just as if not-B? These are not plausible worlds. So it seems that there’s a way out if you interpret closeness of possible worlds in terms of intuitive likelihood, rather than intuitive similarity. But our intuitions about how likely something is are tied to our notions of how the laws of nature work.
So if you want to take this route out, you need to have pre-existing ideas about laws of nature to hang your similarity relation on. And you’re welcome to do that. But then what you’re doing isn’t really Lewis’ semantics any more. Why’s that?
Lewis was a systematic philosopher. He wanted all his ideas to work together. So Lewis would want to appeal to his account of laws of nature. But Lewis’ view on laws made them supervene on the non-modal properties of spacetime points. So any similarity relation in terms of Lewis-laws is going to supervene on a relation among spacetime points across worlds.
Now, if we’re comparing worlds at this level, then the worlds where bullets spontaneously appear just at the end of where Oswald’s gun barrel would have been are closer worlds to the actual world: more of the spacetime points are the same as the actual in that world than in the “intuitively” close world where our laws of physics apply. So it seems if you want to stay close to the idea of Lewis’ project, then you can’t take this “similarity on the level of the laws of nature” route out of the problem.
I’m not totally sure about this “laws of nature” stuff, so let’s ignore that for now and go back to the original problem. The basic problem is that a certain kind of possible world, the “A, but everything happens as if not-B” world, seems to make any counterfactual with an actually false consequent false. So if this analysis of counterfactuals is right, the conclusion is that things could not have been otherwise: all and only counterfactuals with true consequents are true. Determinism.
Another attitude you might adopt is to just take the “possible worlds” semantics of counterfactuals as a way of talking about counterfactuals. “There aren’t really concrete possible worlds, the whole approach is just a façon de parler. And so what matters are not those weird “A, but as if not-B” worlds, but the intuitively nearby worlds that we started with.” This seems a perfectly reasonable way out, and it’s one I endorse. But again, it’s not an escape route that Lewis can endorse. He does seem committed to this idea that there are concrete possible worlds with a genuine similarity relation between them: there is a fact of the matter about which worlds are closer. So again, this route is closed to Lewis.
Alan Hájek has a paper about Lewis’ semantics of counterfactuals where he ends up concluding that “most counterfactuals are false”. He does this by using some odd looking “might counterfactuals” to defeat any intuitively plausible counterfactual claim. I think my approach is relevantly different. Hájek’s might-counterfactuals say at-best-controversial things about possibilities left open in quantum mechanics: “this glass might just hover in the air”. That is, this possibility has non-zero probability. I guess he’s trying to give some sort of gloss of plausibility to the “A but as if not-B”-worlds that I talk about. I don’t think that’s necessary for the argument: you don’t need to make things fit with our (actual) laws of nature to undermine the truth of the counterfactuals. Also, it looks like Hájek wants to make “Even if not-A, B still would have happened”-style counterfactuals false too, while I make them all come out true. (Disclaimer: I’m basing this on vague memories of Hájek’s paper. I guess I should reread it before making these claims “in print”, but life’s too short for fact-checking blog posts).
What have I shown? That Lewis’ actual view on possible world semantics is crazy? But we all knew that already. That the idea of “similarity” of possible worlds is very tricky. Again, we all knew that already. So I don’t think the above comments are all that interesting in the sense of furthering our understanding of anything, but it does seem like there’s an oddity there that I don’t believe has been addressed.
A number of things have recently led me to wonder about the future of academic peer review. First there was George Monbiot on academic publishers. I think he’s probably right that something has to give, especially in these days when every academic has a website, and many papers are available on their websites, or on preprint servers like arXiv and the PhilSci Archive. Indeed, Princeton has made its faculty put research in an open access site! And some have even gone so far as to suggest academics boycott journals that make research inaccessible: Michael P. Taylor suggests academics stop refereeing for publishers who put stuff behind a paywall.
So the question is, how will peer review change? Will the driver of change be the big academic publishers like Springer and Elsevier? No, I doubt it. As Monbiot points out, the current scheme suits them rather well. They have no incentive to change. Will it be the consumers? The libraries? Again, I doubt they are in a position to negotiate. The academics they serve need the journals they pay for. So I find it hard to imagine that libraries could begin boycotting the publishers. I think the change will have to come from the producers: the academics themselves must do what they can to drive this change. This is more or less what Taylor seems to be driving at.
Of course, it’s a catch-22. The academics aren’t just acting as referees for these journals. They also try and publish in them. So even if Taylor’s suggestion of a boycott were taken up, the short term result would be a slow-down in the rate of publications. And this wouldn’t suit the researchers either.
So there needs to be some alternative scheme in place for quality research to migrate to before we can start to wean ourselves off the big publishers. I’m wondering what kind of form that alternative might take. There are already some good open access journals, like the Public Library of Science journals and (closer to home) Philosopher’s Imprint. There is also an interesting new project called Sympoze which should soon start publishing a philosophy journal, I believe. I am interested to see how these projects develop, and whether they will ever become mainstream. Until they (or similar projects) do become more common and more respected, I can’t see much hope of escaping the clutches of the big academic publishers.
Fun fact: Elsevier is named after the original Dutch publisher of Galileo’s books.
I’m thinking of taking an epistemology course this year. Which means that I’ve been thinking about knowledge. Now, a lot of epistemology starts from the assumption that whatever knowledge is, it must be true. This is called the “factivity of knowledge”. Basically, I am supposed to have the intuition that attributing knowledge of a falsehood to someone sounds a little odd. Consider “Ptolemy knew the Sun orbited the Earth”. We should be inclined to think that this sounds odd, and what we should say is “Ptolemy thought he knew the Sun orbited the Earth” or “Ptolemy believed the Sun orbited the Earth.” This is an intuition I just do not share. I take the point that perhaps it is a little odd to suggest Ptolemy knew the Sun orbited the Earth, but take modern instances of scientific “knowledge”: I know there are electrons; I know that nothing can go faster than light. Accepting that scientific knowledge is fallible, does that mean that it is not knowledge? Or rather, does my accepting that any piece of scientific knowledge might be wrong mean I cannot be confident that I have any scientific knowledge? After all, knowledge is not defined as “justified, approximately true belief”… It seems that epistemology tries too hard to accommodate the intuition that there’s something fishy about attributing knowledge of falsehoods to people, while ignoring the use of words like “knowledge” in science, for example. Any theory that fails to live up to the “folk” use of knowledge in science had better be damn good at what it does elsewhere…
And what’s the point of an analysis of “knowledge” that makes it so inaccessible? Given my acceptance of my own fallibility, I can never know I have knowledge: accepting that any belief of mine could be false makes the KK principle radically false. Do knowledge attributions help us understand or assess the rationality of someone’s reasoning or decision? No: once we’ve a picture of their doxastic state (their beliefs), and the relevant truths, then attributing knowledge doesn’t seem to add anything. I can never say “I merely believe this proposition, so I will act in this way in respect of it; but this proposition I know and can therefore act in this different way…” since I never have access to which of my strong justified beliefs are also true. So what role does this kind of knowledge play?
Maybe this attitude is caused by my being corrupted by LSE’s interest in decision theory. Perhaps I am too focussed on using a theory of knowledge to assess the rationality of some action or set of beliefs. Maybe the real problem is to understand what is special about knowledge, over and above mere belief. And maybe one thing that sets knowledge apart is that knowledge is true. But, to my (hopelessly pragmatically based) mind, that’s not an interesting distinction, since it’s one that never makes a difference to me. But maybe there are some belief states that do have practically privileged status: maybe some kinds of belief (e.g. justified beliefs) allow me to act differently. And if this sort of belief looks a bit like knowledge, then maybe we should adopt that label.
Perhaps the best course of action is just to give up the word “knowledge” to the epistemologists and to focus on the practically useful propositional attitudes like belief. Obviously, truth is still important. Not only is having true beliefs often pragmatically the best way to go, but having true beliefs may well have some kind of “epistemic value” in and of itself. But to make the truth of some belief part of characterising what kind of belief it is seems wrong. Maybe the misunderstanding I have of epistemology (or at least of analyses of the concept of knowledge) is that I want to focus on those aspects of my propositional attitudes that can influence my behaviour, that I can be aware of.
This post grew out of something I posted on Twitter, and thus thanks are due to all the people who argued epistemology with me over there. I’m beginning to think that Twitter is uniquely unsuited to philosophical discussions, but I’ve had some interesting conversations on there nonetheless. Thanks to:
This also marks my third blog post of the day. The others being here and here. I must have a deadline or something. (In my defense, the other two were already substantially written before today). I will be at EPSA so I will continue to not post here.
I do not post here very much, do I? In my defense, I have been posting:
- At the PhilTeX blog I mentioned
- At the TeX.stackexchange blog
- And at this philosophy of science blog
So I’ve not been slacking. Oh I’ve also had that whole “thesis” thing I’m supposed to be working on. I’ve nearly finished working on a paper about imprecise probabilities and decision making. It still needs some work, but once it’s out of the way, I hope to spend a little time working on the disagreement thing I mentioned in my last post…
So, I haven’t read up on the “epistemic significance of disagreement” literature (as may become obvious below). I do intend to, but I currently have several other things on the go. I’ve seen a couple of talks/blog posts that seem to add to this sort of discussion, so I have a rough idea of what it’s about.
The idea is that if you and someone you take to be an epistemic peer disagree, then you should give their opinion equal weight: call this the Equal Weight view (EW). What “equal weight” means is something I’m not going to explore. But I’m worried that the notion of an “epistemic peer” makes EW a trivial claim. How do you decide whether someone is an epistemic peer?
I’ve been thinking about bets recently. As you do. I’m interested in the Dutch book theorem, and central to the proof is having a method to formally represent what a bet is. Borrowing from this paper by Frank Döring, we can think of a bet as a list of ordered pairs of an event and a number: {(E_1, s_1), (E_2, s_2), …, (E_n, s_n)}. The E_i are the events and the s_i are the stakes. So if Alice and Bob are betting on whether a coin will land heads or tails, with the winner gaining €1 from the loser, the bets they are accepting are as follows (let’s say Alice bets the coin will land heads): Alice accepts {(Heads, 1), (Tails, −1)}. Bob is taking the other side of the bet, so he is accepting the bet {(Heads, −1), (Tails, 1)}. Flipping the sign on each stake turns one bet into its “complement”, if you like. Taking both bets (i.e. taking Alice’s bet and Bob’s bet) would mean a net profit of 0, however the world turns out. This is a kind of “neutral bet”. We can stipulate that the E_i form a partition of the space of possible outcomes. That is, they are mutually exclusive and exhaustive. A bet has to say what happens in all eventualities. Say you were betting on the roll of a die, and the deal was that you won if a 1, 2, or 3 came up and that your opponent won if a 5 or 6 came up. What would happen if a 4 was rolled? The bet as it stands doesn’t say… So for simplicity let’s say bets specify what happens in all eventualities.
Perhaps a better formulation of Alice’s and Bob’s bets is as follows: {(Heads, 1), (Tails, −1), (Rest, 0)} for Alice and the “signs reversed” version for Bob. The event Rest here is supposed to cover all remaining possibilities, and stipulates that in the event that the coin lands on its edge, say, then “all bets are off”.
I like Döring’s framework for bets. It’s wonderfully general. J.Y. Halpern’s book Reasoning about Uncertainty has a different framework which is less general, but still sufficient for proving the Dutch book theorem. Döring’s framework allows him to show that no (nontrivial) kind of conditional probability measure can be justified by a Dutch book argument. (A Dutch book in Döring’s framework is a bet where all the stakes are negative.)
I was thinking about this framework for talking about bets, and I realised a couple of things. Döring discusses combining bets by taking pairwise intersections of the events, and summing the corresponding stakes, to get a new bet. The events will still partition the event space, and the net profit will be the same for all outcomes, whether you take each bet individually or take the combined bet. We’ve already seen that there’s a neutral bet, and it should be obvious that combining any bet with the neutral bet doesn’t change anything. And for each bet, flipping the signs on all the stakes gives you a bet which, when combined with the original bet, gives you the neutral bet. Basically, the set of all bets with this rule of combination forms a group. (Associativity of the group operation follows from associativity of taking intersections and of addition.) An abelian group, in fact.
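Here is a minimal sketch of that group structure. The encoding is my own toy choice, not Döring’s formalism: I fix a finite outcome space and represent a bet by its stake at each outcome, so that combining bets by intersecting events and summing stakes reduces to pointwise addition.

```python
# Toy model: fix a finite outcome space; a bet assigns a stake to each outcome.
OUTCOMES = ("heads", "tails", "edge")

def combine(b1, b2):
    """Group operation: stakes add, outcome by outcome."""
    return {w: b1[w] + b2[w] for w in OUTCOMES}

NEUTRAL = {w: 0 for w in OUTCOMES}  # identity element: the neutral bet

def complement(b):
    """Inverse element: flip the sign of every stake."""
    return {w: -s for w, s in b.items()}

alice = {"heads": 1, "tails": -1, "edge": 0}
bob = complement(alice)  # Bob takes the other side of Alice's bet

# Abelian group laws: identity, inverses, commutativity.
assert combine(alice, NEUTRAL) == alice
assert combine(alice, bob) == NEUTRAL
assert combine(alice, bob) == combine(bob, alice)
```

Taking Alice’s and Bob’s bets together really does give the neutral bet, as claimed above.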
For each event, we can define a function that maps a bet to the bet’s net gain if that event occurs; this function is a homomorphism from the group of bets to the additive reals. Likewise, for each probability function over the events, there is a homomorphism from bets to the additive reals that returns the bet’s expected value given that probability distribution.
I don’t know if there is any actual point to this, but I found it neat that you can do this stuff. I wonder if you can put the existence of a Dutch book in terms of some group theoretic structure. A Dutch book in this framework is a bet with all stakes being the same sign (that is, whatever happens, it’s always the same party that benefits). So perhaps one can exploit the fact that this is the same as the maximum stake being negative or the minimum stake being positive, and neither of these functions from bets to additive reals is a homomorphism.
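A quick sketch of these last two observations, in the same toy per-outcome-stakes encoding (my own, not Döring’s notation): net gain at a fixed outcome is a homomorphism, max-of-stakes is not, and a Dutch book is a bet whose stakes all share a sign.

```python
# Toy encoding (my own, hypothetical): a bet is a dict from outcomes to stakes;
# combination is pointwise addition of stakes.

def combine(b1, b2):
    return {w: b1[w] + b2[w] for w in b1}

def net_gain(bet, outcome):
    """Net payoff of the bet if `outcome` occurs."""
    return bet[outcome]

def is_dutch_book(bet):
    """One party wins whatever happens: every stake has the same sign."""
    stakes = list(bet.values())
    return all(s > 0 for s in stakes) or all(s < 0 for s in stakes)

b1 = {"heads": 1, "tails": -1}
b2 = {"heads": -1, "tails": 1}

# net_gain(-, "heads") is a homomorphism: the gain of a combination
# is the sum of the gains.
assert net_gain(combine(b1, b2), "heads") == net_gain(b1, "heads") + net_gain(b2, "heads")

# max-of-stakes is not a homomorphism: combining these two bets gives
# max stake 0, while the maxima sum to 2.
assert max(combine(b1, b2).values()) == 0
assert max(b1.values()) + max(b2.values()) == 2

# Dutch book check: same-signed stakes.
assert is_dutch_book({"heads": -1, "tails": -2})
assert not is_dutch_book(b1)
```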
Just a quick update to say that I am a contributor to the PhilTeX group blog for philosophers who use LaTeX. If you fit into that (rather niche) category, chances are you’ve already heard of PhilTeX, so this update is almost certainly completely superfluous.
That is all.
[Caveat lector: I use a whole bunch of different labels for people who prefer sharp credences versus people who prefer imprecise credences. I hope the context makes it obvious which group I'm referring to in each instance. Also, this was all written rather quickly as a way for me to get my ideas straight. So I might well have overlooked something that defuses the problems I discuss. Please do tell me if this is the case.]
On two occasions now people have told me that there’s this paper by Roger White that gives a pretty strong argument against having imprecise degrees of belief. Now, I like imprecise credence, so I felt I needed to read and refute this paper. So I sat out on my tiny balcony in a rare spell of London sunshine and I read the paper. I feel slightly uneasy about it for two different reasons. Reasons that seem to pull in different directions. First, I do think the argument is pretty good, but I don’t like the conclusion. So that’s one reason to be uneasy. The other reason is that it feels like this argument can be turned against sharp probabilists as well…
The puzzle goes like this. You don’t know whether the proposition “P” is true or false. Indeed, you don’t even know what proposition “P” is, but you know that either P is true or ¬P is true. I write whichever of those propositions is true on the “Heads” side of a coin, after having painted over the coin so that you can’t tell which side is heads. I write the false proposition on the tails side. I am going to flip the coin, and show you whichever side lands upwards. You know the coin is fair. Now we want to know what sort of degrees of belief it is reasonable to have in various propositions.
It seems clear that your degree of belief in the proposition “The coin will land heads” should be a half. I’m not in the business of arguing why this is so. If you disagree with that, I take that to be a reductio of your view of probability. Whatever else your degrees of belief ought to do, they ought (ceteris paribus) to make your credence in a fair coin’s landing heads 1/2.
What ought you believe about P? Well, the set up is such that you have no idea whether P. So your belief regarding P should be maximally non-committal. That is, your representor should be such that C(P)=[0,1], the whole interval. This is, I think, the strength of imprecise probabilities over point probabilities: they do better at representing total ignorance. Your information regarding P and regarding ¬P is identical, symmetric. So, if you wanted sharp probabilities, the Principle of Indifference (PI, sometimes called the Principle of Insufficient Reason) suggests that you ought to consider those propositions equally likely. That is, if you have no more reason to favour one outcome over any other, all the outcomes ought to be considered equally likely. In this case C(P)=1/2=C(¬P). In sharp probabilities, you can’t distinguish total ignorance from strong statistical evidence that the two propositions are equally likely. Consider proposition M: “the 1000th child born in the UK since 2000 is male”. We have strong statistical evidence that supports assigning this proposition equal weight to the proposition F (that that child is female). I’ll come back to that later.
So what’s the problem with imprecise probabilities according to White? Imagine that I flip the coin and the “P” side is facing upward. What degrees of belief ought you have now in the coin’s being heads up? You can’t tell whether the heads or tails face is face up, so it seems like your degree of belief should remain unchanged: 1/2. Given that you can’t see whether it’s heads or tails, you’ve learned nothing that bears on whether P is the true proposition. So it seems that your degree of belief in P should remain the same full unit interval: [0,1].
But: you know that the coin landed heads IF AND ONLY IF P is true. This suggests that your degree of belief in heads should be the same as your belief in P. But they are thoroughly different: 1/2 and [0,1]. So what should you do? Dilate your degree of belief in heads to [0,1]? Squish your degree of belief in P to 1/2? Neither proposal seems particularly appetising. So this is a major problem, right?*
What I want to do now is modify the problem, and try and explore intuitions about what sharp credencers should do in similar situations. First I should note that the original problem is no problem for them, since PI tells them to have C(P)=1/2 anyway, so the credences match up. But I worry about this escape clause for sharp people, since it is still the case that the reasons for their having 1/2 in each case are quite different, and it seems almost an accident or a coincidence that they escape…
Consider a more general game. I have N cards, each with a number from 1 to N on one side. The reverse sides are identical. Now, on those reverse sides I write a proposition as before. On card number 1 I write the true proposition out of P and ¬P. On the remaining cards 2 to N I write the false one. Again you don’t know which is which etc. Now I shuffle the cards well, pick one and place it “proposition side up” on the table. For simplicity, let’s say it says “P”. I take it as obvious that your credence in the proposition “This is card number 1” should be 1/N. What credence should you have in P? Well, PI says it should be 1/2. But it should also be 1/N, since we know that P is true IF AND ONLY IF this is card number 1.
Imagine the case where N is large, 1 million, say. I get the feeling that in this case, you would want to say that it is overwhelmingly likely that P is false: 999,999 cards have the false proposition on, so it’s really likely that one of them has been picked. So my credence in P being true should be something like 1/1,000,000. Put it the other way round. Say there’s only 1 card. Then if you see the card says “P”, that as good as tells you that P is true, so your credence should move to 1.
On the other hand, if we’re thinking about a proposition we are very confident in, say A: “Audrey Hepburn was born in Belgium” (it’s true, look it up.). Let’s say C(A)=0.999 (not 1 because of residual uncertainty regarding wikipedia’s accuracy.). Now, if we have 2 cards and the one drawn says A, that’s good reason to believe that that card is the number 1 card. So in this case, it’s the belief regarding the card that moves, not the belief regarding the proposition.
What about the same game but with a million cards? Despite my strong conviction that Audrey Hepburn was, if briefly, an Ixelloise (?), the chance of the card drawn being number 1 is so small that maybe that should trump my original confidence and cause me to revise down my belief in A.
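For what it’s worth, here’s an exact Bayesian sanity check of these intuitions, assuming a sharp prior c in the shown proposition (the function and its encoding are mine, purely for illustration, and of course assume conditioning is the right thing to do here): if the proposition is true, only card 1 displays it, so it is shown with chance 1/N; if it is false, the other N−1 cards display it.

```python
# Hypothetical sketch (my own encoding): exact Bayesian update for the
# N-card game, with sharp prior c in the proposition that ends up shown.
def posterior_in_proposition(c, n):
    shown_if_true = 1 / n          # only card 1 carries the true proposition
    shown_if_false = (n - 1) / n   # the other n-1 cards carry the false one
    return (c * shown_if_true) / (c * shown_if_true + (1 - c) * shown_if_false)

# Indifference-style sharp prior (1/2), a million cards:
# credence in the proposition collapses to roughly 1/N.
assert abs(posterior_in_proposition(0.5, 10**6) - 1e-6) < 1e-9

# Strong prior (0.999), two cards: the prior survives; it's the belief
# about which card was drawn that moves instead.
assert abs(posterior_in_proposition(0.999, 2) - 0.999) < 1e-9

# Strong prior, a million cards: the sheer number of false cards trumps it.
assert posterior_in_proposition(0.999, 10**6) < 0.001
```

The numbers line up with the intuitions above: with two cards a confident prior stands firm, while a million cards swamp even a 0.999 conviction.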
Here’s another trickier case. Now imagine playing the same game with some proposition I have strong reason to believe should have credence 1/2, like M defined above. And let’s say we’re playing with only 3 cards. For simplicity, let’s imagine that M is shown. How should your credences change in this situation? Again, it seems that C(M) = C(card 1) is required by the set up of the game. But I’m less sure which credence should move.
In any case, is there a principled way to decide whether it’s your belief about the card or your belief about the proposition that should change? And if there isn’t, doesn’t this tell against the sharp credentist as much as against the imprecise one?
One objection you might have is that this is all going to be cleared up by applying some Bayes’ theorem in these circumstances, since in these cases (as opposed to White’s original one) seeing which proposition is drawn really does count as learning something. I don’t buy this, since the set up requires that your degrees of belief be identical in the two propositions. Updating on one given the other is going to shift the two closer together, but I don’t think that’s going to solve the problem.
* The third option, a little squish and a little dilate to make them match seems unappealing, and I ignore it for now, since it seems to have BOTH problems that the above approaches do…
I am organising this year’s LSE Philosophy of Probability Graduate Conference. This has been at least one of the reasons I have failed to blog for ages. But fear not! I am pondering some things that could well become blog posts. Here is a list of them:
- More stuff about a logic of majority
- Dutch book arguments (what they actually prove, and under what conditions)
- Pluralism (species concept pluralism, logical pluralism, pluralism about interpretations of probability)
- More fallibilist realism (following on from reading Kyle Stanford’s book and discussions with some LSE chaps)
- A couple more awesome headlines
- Perhaps some stuff about learning emacs, auctex and reftex. (And biblatex, tikz, beamer…)
As and when these posts achieve maturity, I’ll link to them here.