Sound and Fury

Signifying nothing


Factivity of knowledge makes it redundant


I’m thinking of taking an epistemology course this year. Which means that I’ve been thinking about knowledge. Now, a lot of epistemology starts from the assumption that whatever knowledge is, it must be true. This is called the “factivity of knowledge”. Basically, I am supposed to have the intuition that attributing knowledge of a falsehood to someone sounds a little odd. Consider “Ptolemy knew the Sun orbited the Earth”. We should be inclined to think that this sounds odd, and what we should say is “Ptolemy thought he knew the Sun orbited the Earth” or “Ptolemy believed the Sun orbited the Earth.” This is an intuition I just do not share. I take the point that perhaps it is a little odd to suggest Ptolemy knew the Sun orbited the Earth, but take modern instances of scientific “knowledge”: I know there are electrons; I know that nothing can go faster than light. Accepting that scientific knowledge is fallible, does that mean that it is not knowledge? Or rather, does my accepting that any piece of scientific knowledge might be wrong mean I cannot be confident that I have any scientific knowledge? After all, knowledge is not defined as “justified, approximately true belief”… It seems that epistemology tries too hard to accommodate the intuition that there’s something fishy about attributing knowledge of falsehoods to people, while ignoring the use of words like “knowledge” in science, for example. Any theory that fails to live up to the “folk” use of knowledge in science had better be damn good at what it does elsewhere…

And what’s the point of an analysis of “knowledge” that makes it so inaccessible? Given my acceptance of my own fallibility, I can never know I have knowledge: accepting that any belief of mine could be false makes the KK principle radically false. Do knowledge attributions help us understand or assess the rationality of someone’s reasoning or decision? No: once we’ve a picture of their doxastic state (their beliefs), and the relevant truths, then attributing knowledge doesn’t seem to add anything. I can never say “I merely believe this proposition, so I will act in this way in respect of it; but this proposition I know and can therefore act in this different way…” since I never have access to which of my strong justified beliefs are also true. So what role does this kind of knowledge play?

Maybe this attitude is caused by my being corrupted by LSE’s interest in decision theory. Perhaps I am too focussed on using a theory of knowledge to assess the rationality of some action or set of beliefs. Maybe the real problem is to understand what is special about knowledge, over and above mere belief. And maybe one thing that sets knowledge apart is that knowledge is true. But, to my (hopelessly pragmatically based) mind, that’s not an interesting distinction, since it’s one that never makes a difference to me. But maybe there are some belief states that do have practically privileged status: maybe some kinds of belief (e.g. justified beliefs) allow me to act differently. And if this sort of belief looks a bit like knowledge, then maybe we should adopt that label.

Perhaps the best course of action is just to give up the word “knowledge” to the epistemologists and to focus on the practically useful propositional attitudes like belief. Obviously, truth is still important. Not only is having true beliefs often pragmatically the best way to go, but having true beliefs may well have some kind of “epistemic value” in and of itself. But to make the truth of some belief part of characterising what kind of belief it is seems wrong. Maybe the misunderstanding I have of epistemology (or at least of analyses of the concept of knowledge) is that I want to focus on those aspects of my propositional attitudes that can influence my behaviour, that I can be aware of.

This post grew out of something I posted on Twitter, and thus thanks are due to all the people who argued epistemology with me over there. I’m beginning to think that Twitter is uniquely unsuited to philosophical discussions, but I’ve had some interesting conversations on there nonetheless. Thanks to:

This also marks my third blog post of the day. The others being here and here. I must have a deadline or something. (In my defense, the other two were already substantially written before today). I will be at EPSA so I will continue to not post here.

Written by Seamus

October 3, 2011 at 4:23 pm

Is there a version of the equal weight view of disagreement that is reasonable and non-trivial?


So, I haven’t read up on the “epistemic significance of disagreement” literature (as may become obvious below). I do intend to, but I currently have several other things on the go. I’ve seen a couple of talks/blog posts that seem to add to this sort of discussion, so I have a rough idea of what it’s about.

The idea is that if you and someone you take to be an epistemic peer disagree, then you should give their opinion equal weight. What “equal weight” means is something I’m not going to explore. But I’m worried that the notion of an “epistemic peer” makes the equal weight view (EW) a trivial claim. How do you decide whether someone is an epistemic peer?


Written by Seamus

June 20, 2011 at 3:21 pm

White’s coin puzzle for imprecise probabilities


[Caveat lector: I use a whole bunch of different labels for people who prefer sharp credences versus people who prefer imprecise credences. I hope the context makes it obvious which group I’m referring to in each instance. Also, this was all written rather quickly as a way for me to get my ideas straight. So I might well have overlooked something that defuses the problems I discuss. Please do tell me if this is the case.]

On two occasions now people have told me that there’s this paper by Roger White that gives a pretty strong argument against having imprecise degrees of belief. Now, I like imprecise credence, so I felt I needed to read and refute this paper. So I sat out on my tiny balcony in a rare spell of London sunshine and I read the paper. I feel slightly uneasy about it for two different reasons. Reasons that seem to pull in different directions. First, I do think the argument is pretty good, but I don’t like the conclusion. So that’s one reason to be uneasy. The other reason is that it feels like this argument can be turned against sharp probabilists as well…

The puzzle goes like this. You don’t know whether the proposition “P” is true or false. Indeed, you don’t even know what proposition “P” is, but you know that either P is true or ¬P is true. I write whichever of those propositions is true on the “Heads” side of a coin, after having painted over the coin such that you can’t tell which side is heads. I write the false proposition on the tails side. I am going to flip the coin, and show you whichever side lands upwards. You know the coin is fair. Now we want to know what sort of degrees of belief it is reasonable to have in various propositions.

It seems clear that your degree of belief in the proposition “The coin will land heads” should be a half. I’m not in the business of arguing why this is so. If you disagree with that, I take that to be a reductio of your view of probability. Whatever else your degrees of belief ought to do, they ought (ceteris paribus) to make your credence in a fair coin’s landing heads 1/2.

What ought you believe about P? Well, the set up is such that you have no idea whether P. So your belief regarding P should be maximally non-committal. That is, your representor should be such that C(P)=[0,1], the whole interval. This is, I think, the strength of imprecise probabilities over point probabilities: they do better at representing total ignorance. Your information regarding P and regarding ¬P is identical, symmetric. So, if you wanted sharp probabilities, the Principle of Indifference (PI, sometimes called the Principle of Insufficient Reason) suggests that you ought to consider those propositions equally likely. That is, if you have no more reason to favour one outcome over any other, all the outcomes ought to be considered equally likely. In this case C(P)=1/2=C(¬P). With sharp probabilities, you can’t distinguish total ignorance from strong statistical evidence that the two propositions are equally likely. Consider proposition M: “the 1000th child born in the UK since 2000 is male”. We have strong statistical evidence that supports giving this proposition the same credence as the proposition F (that that child is female). I’ll come back to that later.
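To make that contrast concrete, here is a tiny Python sketch. It is my own illustration (nothing from White’s paper), and the names representor_P, representor_M and credence_interval are just mine: it models a representor as a set of probability values, with a sharp credence as the one-element special case.

```python
# A toy illustration (mine, not White's) of the difference an imprecise
# representor can mark that a single sharp number cannot.

# Model a representor as a set of probability values for a proposition;
# a sharp credence is just the one-element case.
representor_P = [i / 100 for i in range(101)]  # total ignorance about P: C(P) = [0, 1]
representor_M = [0.5]                          # strong statistical evidence: C(M) = 1/2

def credence_interval(representor):
    """Summarise a representor by its lower and upper probability."""
    return min(representor), max(representor)

print("C(P) =", credence_interval(representor_P))  # (0.0, 1.0)
print("C(M) =", credence_interval(representor_M))  # (0.5, 0.5)

# A sharp Bayesian who applies the Principle of Indifference to P assigns
# the single number 1/2 to both P and M, and the difference in evidence
# between the two cases is no longer visible in the credences themselves.
```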

So what’s the problem with imprecise probabilities according to White? Imagine that I flip the coin and the “P” side is facing upward. What degrees of belief ought you have now in the coin’s being heads up? You can’t tell whether the heads or tails face is face up, so it seems like your degree of belief should remain unchanged: 1/2. Given that you can’t see whether it’s heads or tails, you’ve learned nothing that bears on whether P is the true proposition. So it seems that your degree of belief in P should remain the same full unit interval: [0,1].

But: you know that the coin landed heads IF AND ONLY IF P is true. This suggests that your degree of belief in heads should be the same as your belief in P. But they are thoroughly different: 1/2 and [0,1]. So what should you do? Dilate your degree of belief in heads to [0,1]? Squish your degree of belief in P to 1/2? Neither proposal seems particularly appetising. So this is a major problem, right?*
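For what it’s worth, here is the dilation worry done in numbers. This is my sketch, not White’s presentation, and it assumes each member of the representor treats the coin as fair, treats the flip as independent of P’s truth, and updates by conditionalising on the evidence that the face shown reads “P”.

```python
# A numerical check (my sketch) of the dilation worry. Each member of the
# representor assigns Pr(P) = p for some p in [0, 1], assigns
# Pr(Heads) = 1/2, treats the flip as independent of P's truth, and
# conditionalises on "the face shown reads 'P'".

priors_in_P = [i / 10 for i in range(11)]  # the representor: C(P) = [0, 1]

for p in priors_in_P:
    # The shown face reads "P" iff (Heads and P) or (Tails and not-P).
    pr_shows_P = 0.5 * p + 0.5 * (1 - p)              # = 1/2 for every p
    pr_heads_given_shows_P = (0.5 * p) / pr_shows_P   # = p
    pr_P_given_shows_P = (0.5 * p) / pr_shows_P       # = p ("P" is up and P is true only on Heads)
    print(f"p = {p:.1f}:  Pr(Heads | shows P) = {pr_heads_given_shows_P:.2f},  "
          f"Pr(P | shows P) = {pr_P_given_shows_P:.2f}")

# Across the representor the posterior in Heads ranges over the whole of
# [0, 1]: the initially sharp 1/2 in Heads "dilates", while C(P) stays put.
```

Note that on each individual sharpening the two posteriors agree, as the biconditional demands; the clash between 1/2 and [0,1] only shows up when you summarise across the whole representor.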

What I want to do now is modify the problem, and try and explore intuitions about what sharp credencers should do in similar situations. First I should note that the original problem is no problem for them, since PI tells them to have C(P)=1/2 anyway, so the credences match up. But I worry about this escape clause for sharp people, since it is still the case that the reasons for their having 1/2 in each case are quite different, and it seems almost an accident or a coincidence that they escape…

Consider a more general game. I have N cards, each numbered from 1 to N on one side; the reverse sides are identical. On those reverse sides I write a proposition as before: on card number 1 I write the true proposition out of P and ¬P, and on the remaining cards, 2 to N, I write the false one. Again you don’t know which is which, etc. Now I shuffle the cards well, pick one and place it “proposition side up” on the table. For simplicity, let’s say it says “P”. I take it as obvious that your credence in the proposition “This is card number 1” should be 1/N. What credence should you have in P? Well, PI says it should be 1/2. But it should also be 1/N, since we know that P is true IF AND ONLY IF this is card number 1.
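For concreteness, here is the arithmetic of the card game for a sharp prior p in P, assuming one simply conditionalises on the evidence that the drawn card says “P”. Whether conditionalisation is the right response is exactly what gets questioned at the end of this post, and the function name is mine.

```python
# My sketch of the N-card game, for a sharp prior p in P, assuming we
# conditionalise on the evidence "the drawn card says 'P'". Whether
# conditionalisation is the right move here is questioned later in the post.

def posterior_given_shows_P(p, n):
    """Posterior in P (equivalently, in "this is card number 1") once the
    drawn card is seen to say "P".

    The drawn card says "P" iff it is card 1 and P is true, or it is one
    of the other n - 1 cards and P is false; card 1 is drawn with chance 1/n.
    """
    pr_shows_P = (1 / n) * p + ((n - 1) / n) * (1 - p)
    return ((1 / n) * p) / pr_shows_P

# With the Principle of Indifference prior p = 1/2, the posterior is 1/n:
for n in (1, 2, 3, 10, 1_000_000):
    print(n, posterior_given_shows_P(0.5, n))
```

In general this comes to p / (p + (n − 1)(1 − p)); with p = 1/2 that is 1/n, so at least the sharp credences in P and in “this is card number 1” end up agreeing after the update, even though they started out different.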

Imagine the case where N is large, 1 million, say. I get the feeling that in this case, you would want to say that it is overwhelmingly likely that P is false: 999,999 cards have the false proposition on, so it’s really likely that one of them has been picked. So my credence in P being true should be something like 1/1,000,000. Put it the other way round. Say there’s only 1 card. Then if you see the card says “P”, that as good as tells you that P is true, so your credence should move to 1.

On the other hand, if we’re thinking about a proposition we are very confident in, say A: “Audrey Hepburn was born in Belgium” (it’s true, look it up), let’s say C(A)=0.999 (not 1 because of residual uncertainty regarding Wikipedia’s accuracy). Now, if we have 2 cards and the one drawn says A, that’s good reason to believe that that card is the number 1 card. So in this case, it’s the belief regarding the card that moves, not the belief regarding the proposition.

What about the same game but with a million cards? Despite my strong conviction that Audrey Hepburn was, if only briefly, an Ixelloise (if that’s the right word), the chance of the card drawn being number 1 is so small that maybe that should trump my original confidence and cause me to revise down my belief in A.
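If it helps to put numbers on that: running the Hepburn example through the conditionalisation sketch above (same caveats), with p = 0.999 and two cards the posterior is 0.999/(0.999 + 0.001) = 0.999, so it is the belief about which card was drawn that does all the moving; with a million cards it is 0.999/(0.999 + 999.999) ≈ 0.001, so the sheer number of cards does indeed trump the prior confidence in A.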

Here’s another, trickier case. Now imagine playing the same game with some proposition I have strong reason to believe should have credence 1/2, like M defined above. And let’s say we’re playing with only 3 cards. For simplicity, let’s imagine that M is shown. How should your credences change in this situation? Again, it seems that C(M)=C(card 1) is required by the set up of the game. But I’m less sure which credence should move.
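For what it’s worth, the same conditionalisation arithmetic gives (1/2)/((1/2) + 2 × (1/2)) = 1/3 here: the credence in “this is card number 1” stays at its prior of 1/3, and it is the statistically well-grounded C(M) = 1/2 that moves down to 1/3. Whether that is the right verdict is, again, another matter.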

In any case, is there a principled way to decide whether it’s your belief about the card or your belief about the proposition that should change? And if there isn’t, doesn’t this tell against the sharp credencer as much as against the imprecise one?

One objection you might have is that this is all going to be cleared up by applying some Bayes’ theorem in these circumstances, since in these cases (as opposed to White’s original one) seeing which proposition is drawn really does count as learning something. I don’t buy this, since the set up requires that your degrees of belief be identical in the two propositions. Updating on one given the other is going to shift the two closer together, but I don’t think that’s going to solve the problem.

________________

* The third option, a little squish and a little dilate to make them match, seems unappealing, and I ignore it for now, since it seems to have BOTH problems that the above approaches do…

Written by Seamus

July 19, 2010 at 1:04 pm