I recently looked back over all my old posts. Most of them are shit. A couple are worth updating and transporting over to the new blog. So over the next few weeks, I shall be adding new versions of the more interesting posts I wrote here.
Here’s a tired old subject that I’m going to have a go at. First, a confession: I’ve never read Lewis’ Counterfactuals. So maybe the following thrown-together worries are easily dealt with. But here goes. I am going to argue that the existence of what I call “as-if” possible worlds means that Lewis is committed to determinism: all and only the counterfactuals with true consequents are true. That is, a counterfactual is true only if its consequent is true in the actual world.
Take your favourite counterfactual: I’ll be using “If Oswald hadn’t shot Kennedy, he would still be alive”. (Linguistic point: this is ambiguous. Is the “he” Kennedy or Oswald? I’m pretending it means Kennedy.) What we want is a false antecedent, a false consequent, but a true counterfactual. (We’ll come back to counterfactuals with true consequents later.) That is, we want A □→ B where A and B are both false, but the conditional is true.
Your typical Lewis semantics for counterfactuals go like this: find the nearest possible world where A is true. If this is also a world where B is true, then the conditional is true. And, for our Oswald/Kennedy case, the story goes like this: find the nearest possible world where Oswald didn’t shoot Kennedy. In this possible world, Kennedy didn’t get shot, and so he is still alive. So the conditional is true.
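For readers who like the condition in symbols: on the simplifying assumption (the “Limit Assumption”) that there is a closest antecedent-world, the truth condition just sketched can be written like this (the notation here is the standard one, but this is my gloss rather than Lewis’ official clause):

```latex
% A \Box\!\!\rightarrow B ("if A were the case, B would be the case")
% is true at world w iff the closest A-world to w, by the
% similarity ordering \preceq_w, is also a B-world:
w \models A \mathbin{\Box\!\!\rightarrow} B
  \quad\Longleftrightarrow\quad
  \min\nolimits_{\preceq_w}\,\{\, v : v \models A \,\} \models B
```

Lewis’ official clause avoids the Limit Assumption (roughly: some A-and-B world is closer than any A-and-not-B world), but the simple version above is all the argument below needs.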
But here’s another possible world. Oswald didn’t shoot Kennedy, but a bullet spontaneously appeared right outside the window where Oswald would have been, and this bullet has just the right velocity and so on that it hits and kills Kennedy, just like Oswald’s bullet does in the actual world. Now, this is also a world where the antecedent of the conditional is true. But it is a world where Kennedy dies. So which world is closer to the actual one? Well, I claim that the “bullet appears as if Oswald had shot” world is closer. Why? Because it is exactly the same as the other world except in one crucial respect (the bullet), and in that respect it matches the actual world: what happens there is exactly what happens in the actual world.
So the counterfactual is false. And we can easily generalise to any counterfactual whose components are both false. Take any A □→ B which we intuitively think is true: any claim like “If A had been true, things would have been different in the way B describes”. Now I let Lewis tell the intuitive possible world story that makes this seem right. Lewis says: “Take the nearest A world: it is also a B world. QED”. But I reply: “But what about the world where everything happens just as you say such that A is true, but everything else happens as if B were false?” This has to be a closer possible world, since B is false in the actual world.
One might object that I’m just misunderstanding what closeness of possible worlds amounts to. Worlds where bullets spontaneously appear, where A holds but everything else goes just as if not-B? These are not plausible worlds. So it seems that there’s a way out if you interpret closeness of worlds in terms of intuitive likelihood, rather than intuitive similarity. But our intuitions about how likely something is are tied to our notions of how the laws of nature work.
So if you want to take this route out, you need to have pre-existing ideas about laws of nature to hang your similarity relation on. And you’re welcome to do that. But then what you’re doing isn’t really Lewis’ semantics any more. Why’s that?
Lewis was a systematic philosopher. He wanted all his ideas to work together. So Lewis would want to appeal to his account of laws of nature. But Lewis’ view on laws made them supervene on the non-modal properties of spacetime points. So any similarity relation in terms of Lewis-laws is going to supervene on a relation among spacetime points across worlds.
Now, if we’re comparing worlds at this level, then the worlds where bullets spontaneously appear just at the end of where Oswald’s gun barrel would have been are closer to the actual world: in such a world, more of the spacetime points match the actual world than in the “intuitively” close world where our laws of physics apply. So it seems that if you want to stay close to the idea of Lewis’ project, you can’t take this “similarity on the level of the laws of nature” route out of the problem.
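To see why comparing worlds point by point favours the “as-if” world, here is a deliberately crude toy model (entirely my own construction, not anything from Lewis): worlds are assignments of facts to a handful of labelled “points”, and similarity is just the number of points on which two worlds agree.

```python
# Toy model of pointwise world-similarity. The "points" and facts are
# hypothetical stand-ins for regions of spacetime.

# The actual world: Oswald shoots, and history unfolds as it did.
actual = {"oswald_shoots": True, "bullet_flies": True,
          "kennedy_dies": True, "funeral": True, "lbj_president": True}

# The law-abiding no-shot world: no shot, so everything downstream differs.
lawful = {"oswald_shoots": False, "bullet_flies": False,
          "kennedy_dies": False, "funeral": False, "lbj_president": False}

# The "as-if" world: no shot, but a bullet appears anyway and history
# then proceeds exactly as in the actual world.
as_if = {"oswald_shoots": False, "bullet_flies": True,
         "kennedy_dies": True, "funeral": True, "lbj_president": True}

def similarity(w1, w2):
    """Count the points on which two worlds agree."""
    return sum(w1[p] == w2[p] for p in w1)

print(similarity(actual, lawful))  # 0 -- every point differs
print(similarity(actual, as_if))   # 4 -- only the shot itself differs
```

On this (admittedly blunt) measure, the “as-if” world comes out closer to actuality than the law-abiding world, which is exactly the worry in the text.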
I’m not totally sure about this “laws of nature” stuff, so let’s ignore that for now and go back to the original problem. The basic problem is that a certain kind of possible world (the “A, but everything happens as if not-B” worlds) seems to make any counterfactual with an actually false consequent false. So if this analysis of counterfactuals is right, the conclusion is that things could not have been otherwise: all and only counterfactuals with true consequents are true. Determinism.
Another attitude you might adopt is to just take the “possible worlds” semantics of counterfactuals as a way of talking about counterfactuals. “There aren’t really concrete possible worlds, the whole approach is just a façon de parler. And so what matters are not those weird “A, but as if not-B” worlds, but the intuitively nearby worlds that we started with.” This seems a perfectly reasonable way out, and it’s one I endorse. But again, it’s not an escape route that Lewis can endorse. He does seem committed to this idea that there are concrete possible worlds with a genuine similarity relation between them: there is a fact of the matter about which worlds are closer. So again, this route is closed to Lewis.
Alan Hájek has a paper about Lewis’ semantics of counterfactuals where he ends up concluding that “most counterfactuals are false”. He does this by using some odd looking “might counterfactuals” to defeat any intuitively plausible counterfactual claim. I think my approach is relevantly different. Hájek’s might-counterfactuals say at-best-controversial things about possibilities left open in quantum mechanics: “this glass might just hover in the air”. That is, this possibility has non-zero probability. I guess he’s trying to give some sort of gloss of plausibility to the “A but as if not-B”-worlds that I talk about. I don’t think that’s necessary for the argument: you don’t need to make things fit with our (actual) laws of nature to undermine the truth of the counterfactuals. Also, it looks like Hájek wants to make “Even if not-A, B still would have happened”-style counterfactuals false too, while I make them all come out true. (Disclaimer: I’m basing this on vague memories of Hájek’s paper. I guess I should reread it before making these claims “in print”, but life’s too short for fact-checking blog posts).
What have I shown? That Lewis’ actual view on possible world semantics is crazy? But we all knew that already. That the idea of “similarity” of possible worlds is very tricky. Again, we all knew that already. So I don’t think the above comments are all that interesting in the sense of furthering our understanding of anything, but it does seem like there’s an oddity there that I don’t believe has been addressed.
A number of things have recently led me to wonder about the future of academic peer review. First there was George Monbiot on academic publishers. I think he’s probably right that something has to give. Especially in these days where every academic has a website, and many papers are available on their websites, or on preprint servers like arXiv and the PhilSci Archive. Indeed, Princeton has made its faculty put research in an open access site! And some have even gone so far as to suggest academics boycott journals that make research inaccessible. Michael P. Taylor suggests academics stop refereeing for publishers who put stuff behind a paywall.
So the question is, how will peer review change? Will the driver of change be the big academic publishers like Springer and Elsevier? No. I doubt it. As Monbiot points out, the current scheme suits them rather well. They have no incentive to change. Will it be the consumers? The libraries? Again, I doubt they are in a position to negotiate. The academics they serve need the journals they pay for. So I find it hard to imagine that libraries could begin boycotting the publishers. I think the change will have to come from the producers: the academics themselves must do what they can to drive this change. This is more or less what Taylor seems to be driving at.
Of course, it’s a catch-22. The academics aren’t just acting as referees for these journals. They also try and publish in them. So even if Taylor’s suggestion of a boycott were taken up, the short term result would be a slow-down in the rate of publications. And this wouldn’t suit the researchers either.
So there needs to be some alternative scheme in place for quality research to migrate to before we can start to wean ourselves off the big publishers. I’m wondering what kind of form that alternative might take. There are already some good open access journals, like the Public Library of Science journals and (closer to home) Philosophers’ Imprint. There is also an interesting new project called Sympoze which should soon start publishing a philosophy journal, I believe. I am interested to see how these projects develop, and whether they will ever become mainstream. Until they (or similar projects) do become more common and more respected, I can’t see much hope of escaping the clutches of the big academic publishers.
Fun fact: Elsevier is named after the original Dutch publisher of Galileo’s books.
I’m thinking of taking an epistemology course this year. Which means that I’ve been thinking about knowledge. Now, a lot of epistemology starts from the assumption that whatever knowledge is, it must be true. This is called the “factivity of knowledge”. Basically, I am supposed to have the intuition that attributing knowledge of a falsehood to someone sounds a little odd. Consider “Ptolemy knew the Sun orbited the Earth”. We should be inclined to think that this sounds odd, and what we should say is “Ptolemy thought he knew the Sun orbited the Earth” or “Ptolemy believed the Sun orbited the Earth.”

This is an intuition I just do not share. I take the point that perhaps it is a little odd to suggest Ptolemy knew the Sun orbited the Earth, but take modern instances of scientific “knowledge”: I know there are electrons; I know that nothing can go faster than light. Accepting that scientific knowledge is fallible, does that mean that it is not knowledge? Or rather, does my accepting that any piece of scientific knowledge might be wrong mean I cannot be confident that I have any scientific knowledge? After all, knowledge is not defined as “justified, approximately true belief”…

It seems that epistemology tries too hard to accommodate the intuition that there’s something fishy about attributing knowledge of falsehoods to people, while ignoring the use of words like “knowledge” in science, for example. Any theory that fails to live up to the “folk” use of knowledge in science had better be damn good at what it does elsewhere…
And what’s the point of an analysis of “knowledge” that makes it so inaccessible? Given my acceptance of my own fallibility, I can never know I have knowledge: accepting that any belief of mine could be false makes the KK principle radically false. Do knowledge attributions help us understand or assess the rationality of someone’s reasoning or decision? No: once we’ve a picture of their doxastic state (their beliefs), and the relevant truths, then attributing knowledge doesn’t seem to add anything. I can never say “I merely believe this proposition, so I will act in this way in respect of it; but this proposition I know and can therefore act in this different way…” since I never have access to which of my strong justified beliefs are also true. So what role does this kind of knowledge play?
Maybe this attitude is caused by my being corrupted by LSE’s interest in decision theory. Perhaps I am too focussed on using a theory of knowledge to assess the rationality of some action or set of beliefs. Maybe the real problem is to understand what is special about knowledge, over and above mere belief. And maybe one thing that sets knowledge apart is that knowledge is true. But, to my (hopelessly pragmatically based) mind, that’s not an interesting distinction, since it’s one that never makes a difference to me. But maybe there are some belief states that do have practically privileged status: maybe some kinds of belief (e.g. justified beliefs) allow me to act differently. And if this sort of belief looks a bit like knowledge, then maybe we should adopt that label.
Perhaps the best course of action is just to give up the word “knowledge” to the epistemologists and to focus on the practically useful propositional attitudes like belief. Obviously, truth is still important. Not only is having true beliefs often pragmatically the best way to go, but having true beliefs may well have some kind of “epistemic value” in and of itself. But to make the truth of some belief part of characterising what kind of belief it is seems wrong. Maybe the misunderstanding I have of epistemology (or at least of analyses of the concept of knowledge) is that I want to focus on those aspects of my propositional attitudes that can influence my behaviour, that I can be aware of.
This post grew out of something I posted on Twitter, and thus thanks are due to all the people who argued epistemology with me over there. I’m beginning to think that Twitter is uniquely unsuited to philosophical discussions, but I’ve had some interesting conversations on there nonetheless. Thanks to:
This also marks my third blog post of the day. The others being here and here. I must have a deadline or something. (In my defense, the other two were already substantially written before today). I will be at EPSA so I will continue to not post here.
I do not post here very much, do I? In my defense, I have been posting:
- At the PhilTeX blog I mentioned
- At the TeX.stackexchange blog
- And at this philosophy of science blog
So I’ve not been slacking. Oh I’ve also had that whole “thesis” thing I’m supposed to be working on. I’ve nearly finished working on a paper about imprecise probabilities and decision making. It still needs some work, but once it’s out of the way, I hope to spend a little time working on the disagreement thing I mentioned in my last post…
So, I haven’t read up on the “epistemic significance of disagreement” literature (as may become obvious below). I do intend to, but I currently have several other things on the go. I’ve seen a couple of talks/blog posts that seem to add to this sort of discussion, so I have a rough idea of what it’s about.
The idea is that if you and someone you take to be an epistemic peer disagree, then you should give their opinion equal weight. What “equal weight” means is something I’m not going to explore. But I’m worried that the notion of an “epistemic peer” makes the Equal Weight view (EW) a trivial claim. How do you decide whether someone is an epistemic peer?
Say you have 20 essay topics. You know that there will be 9 exam questions and you’ll be expected to answer 4 of them. How many topics ought you to revise? The simplest answer is 15. If you study 15 topics, then even if all 5 topics you didn’t study get picked, there’ll be 4 left that you did study (because 9 get picked, remember). But that’s still a lot of topics! If you studied 14 topics, what are the odds that you’d have only 3 revised topics on the exam? That can happen only if all 6 unstudied topics turn up among the 9 questions picked.
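This is a hypergeometric question: 9 questions are drawn from 20 topics, and we want at least 4 of them to land among the topics we studied. Here’s a quick sketch to check the numbers (the function name and defaults are mine):

```python
from math import comb

def prob_enough(studied, total=20, drawn=9, needed=4):
    """Probability that at least `needed` of the `drawn` exam questions
    fall among the `studied` topics (hypergeometric distribution)."""
    unstudied = total - studied
    favourable = sum(
        comb(studied, k) * comb(unstudied, drawn - k)
        for k in range(needed, min(studied, drawn) + 1)
        if drawn - k <= unstudied  # skip impossible splits
    )
    return favourable / comb(total, drawn)

# Study 15 topics: at least 4 studied questions is guaranteed.
print(prob_enough(15))  # 1.0

# Study 14: the only failure is all 6 unstudied topics appearing,
# leaving exactly 3 studied questions among the 9.
print(prob_enough(14))  # ≈ 0.9978
```

So dropping to 14 topics costs you a guarantee, but the chance of coming up short is only C(14,3)/C(20,9) = 364/167960, roughly 0.2%.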