Posts Tagged ‘philosophy’
I’m thinking of taking an epistemology course this year. Which means that I’ve been thinking about knowledge. Now, a lot of epistemology starts from the assumption that whatever knowledge is, it must be true. This is called the “factivity of knowledge”. Basically, I am supposed to have the intuition that attributing knowledge of a falsehood to someone sounds a little odd. Consider “Ptolemy knew the Sun orbited the Earth”. We should be inclined to think that this sounds odd, and what we should say is “Ptolemy thought he knew the Sun orbited the Earth” or “Ptolemy believed the Sun orbited the Earth.” This is an intuition I just do not share. I take the point that perhaps it is a little odd to suggest Ptolemy knew the Sun orbited the Earth, but take modern instances of scientific “knowledge”: I know there are electrons; I know that nothing can go faster than light. Accepting that scientific knowledge is fallible, does that mean that it is not knowledge? Or rather, does my accepting that any piece of scientific knowledge might be wrong mean I cannot be confident that I have any scientific knowledge? After all, knowledge is not defined as “justified, approximately true belief”… It seems that epistemology tries too hard to accommodate the intuition that there’s something fishy about attributing knowledge of falsehoods to people, while ignoring the use of words like “knowledge” in science, for example. Any theory that fails to live up to the “folk” use of knowledge in science had better be damn good at what it does elsewhere…
And what’s the point of an analysis of “knowledge” that makes it so inaccessible? Given my acceptance of my own fallibility, I can never know I have knowledge: accepting that any belief of mine could be false makes the KK principle radically false. Do knowledge attributions help us understand or assess the rationality of someone’s reasoning or decision? No: once we’ve a picture of their doxastic state (their beliefs), and the relevant truths, then attributing knowledge doesn’t seem to add anything. I can never say “I merely believe this proposition, so I will act in this way in respect of it; but this proposition I know and can therefore act in this different way…” since I never have access to which of my strong justified beliefs are also true. So what role does this kind of knowledge play?
Maybe this attitude is caused by my being corrupted by LSE’s interest in decision theory. Perhaps I am too focussed on using a theory of knowledge to assess the rationality of some action or set of beliefs. Maybe the real problem is to understand what is special about knowledge, over and above mere belief. And maybe one thing that sets knowledge apart is that knowledge is true. But, to my (hopelessly pragmatically based) mind, that’s not an interesting distinction, since it’s one that never makes a difference to me. But maybe there are some belief states that do have practically privileged status: maybe some kinds of belief (e.g. justified beliefs) allow me to act differently. And if this sort of belief looks a bit like knowledge, then maybe we should adopt that label.
Perhaps the best course of action is just to give up the word “knowledge” to the epistemologists and to focus on the practically useful propositional attitudes like belief. Obviously, truth is still important. Not only is having true beliefs often pragmatically the best way to go, but having true beliefs may well have some kind of “epistemic value” in and of itself. But to make the truth of some belief part of characterising what kind of belief it is seems wrong. Maybe the misunderstanding I have of epistemology (or at least of analyses of the concept of knowledge) is that I want to focus on those aspects of my propositional attitudes that can influence my behaviour, that I can be aware of.
This post grew out of something I posted on Twitter, and thus thanks are due to all the people who argued epistemology with me over there. I’m beginning to think that Twitter is uniquely unsuited to philosophical discussions, but I’ve had some interesting conversations on there nonetheless. Thanks to:
This also marks my third blog post of the day. The others being here and here. I must have a deadline or something. (In my defense, the other two were already substantially written before today). I will be at EPSA so I will continue to not post here.
So, I haven’t read up on the “epistemic significance of disagreement” literature (as may become obvious below). I do intend to, but I currently have several other things on the go. I’ve seen a couple of talks/blog posts that seem to add to this sort of discussion, so I have a rough idea of what it’s about.
The idea is that if you and someone you take to be an epistemic peer disagree, then you should give their opinion equal weight. What “equal weight” means is something I’m not going to explore. But I’m worried that the notion of an “epistemic peer” makes the Equal Weight view (EW) a trivial claim. How do you decide whether someone is an epistemic peer?
Just a quick update to say that I am a contributor to the PhilTeX group blog for philosophers who use LaTeX. If you fit into that (rather niche) category, chances are you’ve already heard of PhilTeX, so this update is almost certainly completely superfluous.
That is all.
[Caveat lector: I use a whole bunch of different labels for people who prefer sharp credences versus people who prefer imprecise credences. I hope the context makes it obvious which group I'm referring to in each instance. Also, this was all written rather quickly as a way for me to get my ideas straight. So I might well have overlooked something that defuses the problems I discuss. Please do tell me if this is the case.]
On two occasions now people have told me that there’s this paper by Roger White that gives a pretty strong argument against having imprecise degrees of belief. Now, I like imprecise credence, so I felt I needed to read and refute this paper. So I sat out on my tiny balcony in a rare spell of London sunshine and I read the paper. I feel slightly uneasy about it for two different reasons. Reasons that seem to pull in different directions. First, I do think the argument is pretty good, but I don’t like the conclusion. So that’s one reason to be uneasy. The other reason is that it feels like this argument can be turned against sharp probabilists as well…
The puzzle goes like this. You don’t know whether the proposition “P” is true or false. Indeed, you don’t even know what proposition “P” is, but you know that either P is true or ¬P is true. I write whichever of those propositions is true on the “Heads” side of a coin, after having painted over the coin such that you can’t tell which side is heads. I write the false proposition on the tails side. I am going to flip the coin, and show you whichever side lands upwards. You know the coin is fair. Now we want to know what sort of degrees of belief it is reasonable to have in various propositions.
It seems clear that your degree of belief in the proposition “The coin will land heads” should be a half. I’m not in the business of arguing why this is so. If you disagree with that, I take that to be a reductio of your view of probability. Whatever else your degrees of belief ought to do, they ought (ceteris paribus) to make your credence in a fair coin’s landing heads 1/2.
What ought you believe about P? Well, the set up is such that you have no idea whether P. So your belief regarding P should be maximally non-committal. That is, your representor should be such that C(P)=[0,1], the whole interval. This is, I think, the strength of imprecise probabilities over point probabilities: they do better at representing total ignorance. Your information regarding P and regarding ¬P is identical, symmetric. So, if you wanted sharp probabilities, the Principle of Indifference (PI, sometimes called the Principle of Insufficient Reason) suggests that you ought to consider those propositions equally likely. That is, if you have no more reason to favour one outcome over any other, all the outcomes ought to be considered equally likely. In this case C(P)=1/2=C(¬P). In sharp probabilities, you can’t distinguish total ignorance from strong statistical evidence that the two propositions are equally likely. Consider proposition M: “the 1000th child born in the UK since 2000 is male”. We have strong statistical evidence that supports assigning this proposition equal weight to the proposition F (that that child is female). I’ll come back to that later.
So what’s the problem with imprecise probabilities according to White? Imagine that I flip the coin and the “P” side is facing upward. What degrees of belief ought you have now in the coin’s being heads up? You can’t tell whether the heads or tails face is face up, so it seems like your degree of belief should remain unchanged: 1/2. Given that you can’t see whether it’s heads or tails, you’ve learned nothing that bears on whether P is the true proposition. So it seems that your degree of belief in P should remain the same full unit interval: [0,1].
But: you know that the coin landed heads IF AND ONLY IF P is true. This suggests that your degree of belief in heads should be the same as your belief in P. But they are thoroughly different: 1/2 and [0,1]. So what should you do? Dilate your degree of belief in heads to [0,1]? Squish your degree of belief in P to 1/2? Neither proposal seems particularly appetising. So this is a major problem, right?*
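The dilation horn of the dilemma can be made vivid with a little arithmetic. Here is a minimal sketch, on the standard assumption that an imprecise credence is modelled by a set of precise credence functions (a representor), one for each prior x = C(P) in [0,1], and that each member conditionalises separately on seeing the “P” face:

```python
def heads_given_shown_p(x):
    """Posterior probability of heads given that the shown face reads 'P',
    for a precise prior x = C(P).  The true proposition is written on the
    heads side, so you see 'P' iff (heads and P) or (tails and not-P)."""
    p_shown_p = 0.5 * x + 0.5 * (1 - x)  # always 1/2, whatever x is
    p_heads_and_shown_p = 0.5 * x        # heads up and P true
    return p_heads_and_shown_p / p_shown_p

# Each member of the representor updates to a different value:
posteriors = [heads_given_shown_p(x / 10) for x in range(11)]
print(posteriors)
```

Each precise prior x yields posterior x for heads, so the set of posteriors sweeps out the whole of [0,1]: the sharp 1/2 credence in heads dilates to the full interval, exactly the unappetising option above.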
What I want to do now is modify the problem, and try and explore intuitions about what sharp credencers should do in similar situations. First I should note that the original problem is no problem for them, since PI tells them to have C(P)=1/2 anyway, so the credences match up. But I worry about this escape clause for sharp people, since it is still the case that the reasons for their having 1/2 in each case are quite different, and it seems almost an accident or a coincidence that they escape…
Consider a more general game. I have N cards, numbered 1..N on one side; the reverse sides are identical. Now, on those reverse sides I write a proposition as before. On card number 1 I write the true proposition out of P and ¬P. On the remaining cards, 2..N, I write the false one. Again you don’t know which is which etc. Now I shuffle the cards well, pick one and place it “proposition side up” on the table. For simplicity, let’s say it says “P”. I take it as obvious that your credence in the proposition “This is card number 1” should be 1/N. What credence should you have in P? Well, PI says it should be 1/2. But it should also be 1/N, since we know that P is true IF AND ONLY IF this is card number 1.
Imagine the case where N is large, 1 million, say. I get the feeling that in this case, you would want to say that it is overwhelmingly likely that P is false: 999,999 cards have the false proposition on, so it’s really likely that one of them has been picked. So my credence in P being true should be something like 1/1,000,000. Put it the other way round. Say there’s only 1 card. Then if you see the card says “P”, that as good as tells you that P is true, so your credence should move to 1.
On the other hand, if we’re thinking about a proposition we are very confident in, say A: “Audrey Hepburn was born in Belgium” (it’s true, look it up). Let’s say C(A)=0.999 (not 1 because of residual uncertainty regarding Wikipedia’s accuracy). Now, if we have 2 cards and the one drawn says A, that’s good reason to believe that that card is the number 1 card. So in this case, it’s the belief regarding the card that moves, not the belief regarding the proposition.
What about the same game but with a million cards? Despite my strong conviction that Audrey Hepburn was, if briefly, an Ixelloise (?), the chance of the card drawn being number 1 is so small that maybe that should trump my original confidence and cause me to revise down my belief in A.
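For what it’s worth, these intuitions fall out of routine conditionalisation. Here is a sketch of the sharp Bayesian bookkeeping, assuming a precise prior p for the proposition and a uniformly drawn card:

```python
def card1_given_shown(p, n):
    """Posterior probability that the shown card is card number 1, given
    that it displays the proposition.  Card 1 shows the proposition iff
    the proposition is true; each of the other n-1 cards shows it iff
    the proposition is false.  Prior in the proposition is p."""
    num = p * (1 / n)                    # card 1 drawn and proposition true
    den = num + (1 - p) * ((n - 1) / n)  # or another card drawn, prop. false
    return num / den  # also the posterior in the proposition itself

print(card1_given_shown(0.5, 10**6))    # ~1e-6: P now looks almost surely false
print(card1_given_shown(0.999, 2))      # ~0.999: the card belief is what moved
print(card1_given_shown(0.999, 10**6))  # ~0.001: a million cards trump C(A)
```

Since the proposition is true iff the card is number 1, both beliefs end up at this one posterior; which of them “moves” is just a matter of how far each prior sat from the common answer.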
Here’s another trickier case. Now imagine playing the same game with some proposition I have strong reason to believe should have credence 1/2, like M defined above. And let’s say we’re playing with only 3 cards. For simplicity, let’s imagine that M is shown. How should your credences change in this situation? Again, the set up of the game requires that C(M) equal your credence that this is card number 1. But I’m less sure which credence should move.
In any case, is there a principled way to decide whether it’s your belief about the card or your belief about the proposition that should change? And if there isn’t, doesn’t this tell against the sharp credentist as much as against the imprecise one?
One objection you might have is that this is all going to be cleared up by applying some Bayes’ theorem in these circumstances, since in these cases (as opposed to White’s original one) seeing which proposition is drawn really does count as learning something. I don’t buy this, since the set up requires that your degrees of belief be identical in the two propositions. Updating on one given the other is going to shift the two closer together, but I don’t think that’s going to solve the problem.
* The third option, a little squish and a little dilate to make them match seems unappealing, and I ignore it for now, since it seems to have BOTH problems that the above approaches do…
I am organising this year’s LSE Philosophy of Probability Graduate Conference. This has been at least one of the reasons I have failed to blog for ages. But fear not! I am pondering some things that could well become blog posts. Here is a list of them:
- More stuff about a logic of majority
- Dutch book arguments (what they actually prove, and under what conditions)
- Pluralism (species concept pluralism, logical pluralism, pluralism about interpretations of probability)
- More fallibilist realism (following on from reading Kyle Stanford’s book and discussions with some LSE chaps)
- A couple more awesome headlines
- Perhaps some stuff about learning emacs, auctex and reftex. (And biblatex, tikz, beamer…)
As and when these posts achieve maturity, I’ll link to them here.
The pessimistic induction (PI) says something like this: Previous scientific theories have been wrong, so we shouldn’t believe what current science tells us. But let’s modify that with an “optimistic induction” that there is continuity through theory change (the wave nature of light survived the abandonment of the ether theory…) and our methods are improving. The PI then seems to be saying something like this: It might be the case that this or that particular theoretical entity will be discarded some time in the future. Well, this “might” claim looks a lot like Descartes’ evil demon argument for radical scepticism.
Descartes’ argument says that you might be being tricked by a powerful evil demon. The upshot is supposed to be a radical scepticism about the reality of the things we think we see. So I see a table, but I might be being tricked, so I should not believe the table exists. But obviously this brand of radical scepticism is not the orthodoxy. Why? Because another way to approach the evil demon is a kind of “fallibilism” that holds that I should believe that what I see exists, while accepting that I might be wrong, I might be being tricked by this demon.
In much the same way, I think the right response to the PI (as moderated by the optimistic comments made above) is a “fallibilist realism” which says that while I can be confident that some element of current science will be discarded, on the whole I should believe in theoretical entities.
I think this picture fits nicely with scientific practice as well. Doing science whilst not endorsing the reality of the entities one deals with seems difficult. I mean, if I were a scientist and I didn’t believe in electrons, I’d find it difficult to theorise about them… Or to put it another way, if I were a young-Earth creationist, I would not become a paleontologist. (OK, cheap shot. Sorry). The point is that on the whole, scientists will believe in what they study, but will of course accept they might be wrong.
So this point seems obvious enough that I’m surprised I haven’t read about it before. I’m interested in hearing about any precedents of this position in the literature.
Following Benacerraf’s rightly famous paper “What numbers could not be” a variety of authors have written papers entitled “What * could not be”. Here is a list of the ones I’ve come across:
- What numbers could not be, P. Benacerraf
- What conditional probabilities could not be, A. Hajek
- What structures could not be, J. Busch
- What possible worlds could not be, R. Stalnaker
- What chances could not be, J. Ismael
- What justification could not be, M.T. Nelson
- What mathematical truth could not be, P. Benacerraf
- What unarticulated constituents could not be, L. Clapp
- What equality of opportunity could not be, M. Risse
OK, I haven’t read most of these – I just did a Google Scholar search for “What * could not be” – but it’s interesting to see how a good title gets “remixed”… (I’ve only read the top three or four.) Also worthy of mention is “Numbers can be just what they have to” by Colin McLarty, another way to refer obliquely to Benacerraf. Another good title in this vein is “What is it like to be a bat?” My favourite title to play on that classic paper is “What is it like to be boring and myopic?”
I didn’t know Google had a “blog search” thing. So I tried it out. I searched for “philosophy of science” (obviously). One of the results was about experimental philosophy. I read it and decided I don’t like “X-Phi” (which is the annoying name that these people have chosen to give to their approach). Bristol University has its own x-phi page here. I mention this out of some kind of misplaced loyalty, not out of any approval for such an endeavour.
Colleagues in biology have P.C.R. machines to run and microscope slides to dye; political scientists have demographic trends to crunch; psychologists have their rats and mazes. We philosophers wave them on with kindly looks. We know the experimental sciences are terribly important, but the role we prefer is that of the Catholic priest presiding at a wedding, confident that his support for the practice carries all the more weight for being entirely theoretical.
I don’t take issue with what these people are doing. I just wonder whether the label philosophy really applies… I mean, what makes this new experimental philosophy philosophy, rather than, say, anthropology or psychology or political science? Obviously philosophers should take empirical findings into consideration. Kant’s view that Euclidean geometry is necessarily the only possible geometry, and therefore necessarily the geometry of the world, is clearly untenable in our post-elliptic geometry, post-Minkowski spacetime world. Any number of examples could be given here. But philosophy is still somehow separated off from empirical science. Of course you could just criticise me for having an old-fashioned view of the nature of philosophy. Armchair theorising is out of favour.
OK, here is the problem: what can the results of these polls tell us? If they tell us that some notion, X, is actually pretty much universal, then philosophers are allowed to say things like “it is clear that X” or “most people would agree that X.” But if everyone agrees with X, it’s already clear that we’re allowed to assert it. No experiment is needed to demonstrate that everyone thinks X is true. Everybody already knows that: X is pretty damn obvious, right? If the poll gives more ambiguous results, it is unlikely that the notion under scrutiny is the kind of thing philosophers would go about asserting without justification. So this Experimental Philosophy might take its questions from philosophical literature, but its methods don’t seem to fit with philosophical practice.
I’m stressing the philosophical part. Obviously it is interesting to see what people think about the truth value of sentences like “the king of France is bald” or “the king of France is not on a state visit to Prussia” but is it really philosophy to go around asking people?