Sound and Fury

Signifying nothing


Factivity of knowledge makes it redundant


I’m thinking of taking an epistemology course this year. Which means that I’ve been thinking about knowledge. Now, a lot of epistemology starts from the assumption that whatever knowledge is, it must be true. This is called the “factivity of knowledge”. Basically, I am supposed to have the intuition that attributing knowledge of a falsehood to someone sounds a little odd. Consider “Ptolemy knew the Sun orbited the Earth”. We should be inclined to think that this sounds odd, and what we should say is “Ptolemy thought he knew the Sun orbited the Earth” or “Ptolemy believed the Sun orbited the Earth.” This is an intuition I just do not share. I take the point that perhaps it is a little odd to suggest Ptolemy knew the Sun orbited the Earth, but take modern instances of scientific “knowledge”: I know there are electrons; I know that nothing can go faster than light. Accepting that scientific knowledge is fallible, does that mean that it is not knowledge? Or rather, does my accepting that any piece of scientific knowledge might be wrong mean I cannot be confident that I have any scientific knowledge? After all, knowledge is not defined as “justified, approximately true belief”… It seems that epistemology tries too hard to accommodate the intuition that there’s something fishy about attributing knowledge of falsehoods to people, while ignoring the use of words like “knowledge” in science, for example. Any theory that fails to live up to the “folk” use of knowledge in science had better be damn good at what it does elsewhere…

And what’s the point of an analysis of “knowledge” that makes it so inaccessible? Given my acceptance of my own fallibility, I can never know I have knowledge: accepting that any belief of mine could be false makes the KK principle radically false. Do knowledge attributions help us understand or assess the rationality of someone’s reasoning or decision? No: once we’ve a picture of their doxastic state (their beliefs), and the relevant truths, then attributing knowledge doesn’t seem to add anything. I can never say “I merely believe this proposition, so I will act in this way in respect of it; but this proposition I know and can therefore act in this different way…” since I never have access to which of my strong justified beliefs are also true. So what role does this kind of knowledge play?

Maybe this attitude is caused by my being corrupted by LSE’s interest in decision theory. Perhaps I am too focussed on using a theory of knowledge to assess the rationality of some action or set of beliefs. Maybe the real problem is to understand what is special about knowledge, over and above mere belief. And maybe one thing that sets knowledge apart is that knowledge is true. But, to my (hopelessly pragmatically based) mind, that’s not an interesting distinction, since it’s one that never makes a difference to me. But maybe there are some belief states that do have practically privileged status: maybe some kinds of belief (e.g. justified beliefs) allow me to act differently. And if this sort of belief looks a bit like knowledge, then maybe we should adopt that label.

Perhaps the best course of action is just to give up the word “knowledge” to the epistemologists and to focus on the practically useful propositional attitudes like belief. Obviously, truth is still important. Not only is having true beliefs often pragmatically the best way to go, but having true beliefs may well have some kind of “epistemic value” in and of itself. But to make the truth of some belief part of characterising what kind of belief it is seems wrong. Maybe the misunderstanding I have of epistemology (or at least of analyses of the concept of knowledge) is that I want to focus on those aspects of my propositional attitudes that can influence my behaviour, that I can be aware of.

This post grew out of something I posted on Twitter, and thus thanks are due to all the people who argued epistemology with me over there. I’m beginning to think that Twitter is uniquely unsuited to philosophical discussions, but I’ve had some interesting conversations on there nonetheless. Thanks to:

This also marks my third blog post of the day. The others being here and here. I must have a deadline or something. (In my defense, the other two were already substantially written before today). I will be at EPSA so I will continue to not post here.

Written by Seamus

October 3, 2011 at 4:23 pm

Is there a version of the equal weight view of disagreement that is reasonable and non-trivial?


So, I haven’t read up on the “epistemic significance of disagreement” literature (as may become obvious below). I do intend to, but I currently have several other things on the go. I’ve seen a couple of talks/blog posts that seem to add to this sort of discussion, so I have a rough idea of what it’s about.

The idea is that if you and someone you take to be an epistemic peer disagree, then you should give their opinion equal weight. What “equal weight” means is something I’m not going to explore. But I’m worried that the notion of an “epistemic peer” makes the equal weight view (EW) a trivial claim. How do you decide whether someone is an epistemic peer?


Written by Seamus

June 20, 2011 at 3:21 pm

PhilTeX


Just a quick update to say that I am a contributor to the PhilTeX group blog for philosophers who use LaTeX. If you fit into that (rather niche) category, chances are you’ve already heard of PhilTeX, so this update is almost certainly completely superfluous.

That is all.

Written by Seamus

July 22, 2010 at 11:52 am

Posted in internet, LaTeX, philosophy


White’s coin puzzle for imprecise probabilities


[Caveat lector: I use a whole bunch of different labels for people who prefer sharp credences versus people who prefer imprecise credences. I hope the context makes it obvious which group I’m referring to in each instance. Also, this was all written rather quickly as a way for me to get my ideas straight. So I might well have overlooked something that defuses the problems I discuss. Please do tell me if this is the case.]

On two occasions now people have told me that there’s this paper by Roger White that gives a pretty strong argument against having imprecise degrees of belief. Now, I like imprecise credence, so I felt I needed to read and refute this paper. So I sat out on my tiny balcony in a rare spell of London sunshine and I read the paper. I feel slightly uneasy about it for two different reasons. Reasons that seem to pull in different directions. First, I do think the argument is pretty good, but I don’t like the conclusion. So that’s one reason to be uneasy. The other reason is that it feels like this argument can be turned against sharp probabilists as well…

The puzzle goes like this. You don’t know whether the proposition “P” is true or false. Indeed, you don’t even know what proposition “P” is, but you know that either P is true or ¬P is true. I write whichever of those propositions is true on the “Heads” side of a coin, after having painted over the coin such that you can’t tell which side is heads. I write the false proposition on the tails side. I am going to flip the coin, and show you whichever side lands upwards. You know the coin is fair. Now we want to know what sort of degrees of belief it is reasonable to have in various propositions.

It seems clear that your degree of belief in the proposition “The coin will land heads” should be a half. I’m not in the business of arguing why this is so. If you disagree with that, I take that to be a reductio of your view of probability. Whatever else your degrees of belief ought to do, they ought (ceteris paribus) to make your credence in a fair coin’s landing heads 1/2.

What ought you believe about P? Well, the set up is such that you have no idea whether P. So your belief regarding P should be maximally non-committal. That is, your representor should be such that C(P)=[0,1], the whole interval. This is, I think, the strength of imprecise probabilities over point probabilities: they do better at representing total ignorance. Your information regarding P and regarding ¬P is identical, symmetric. So, if you wanted sharp probabilities, the Principle of Indifference (PI, sometimes called the Principle of Insufficient Reason) suggests that you ought to consider those propositions equally likely. That is, if you have no more reason to favour one outcome over any other, all the outcomes ought to be considered equally likely. In this case C(P)=1/2=C(¬P). In sharp probabilities, you can’t distinguish total ignorance from strong statistical evidence that the two propositions are equally likely. Consider proposition M: “the 1000th child born in the UK since 2000 is male”. We have strong statistical evidence that supports assigning this proposition equal weight to the proposition F (that that child is female). I’ll come back to that later.

So what’s the problem with imprecise probabilities according to White? Imagine that I flip the coin and the “P” side is facing upward. What degrees of belief ought you have now in the coin’s being heads up? You can’t tell whether the heads or tails face is face up, so it seems like your degree of belief should remain unchanged: 1/2. Given that you can’t see whether it’s heads or tails, you’ve learned nothing that bears on whether P is the true proposition. So it seems that your degree of belief in P should remain the same full unit interval: [0,1].

But: you know that the coin landed heads IF AND ONLY IF P is true. This suggests that your degree of belief in heads should be the same as your belief in P. But they are thoroughly different: 1/2 and [0,1]. So what should you do? Dilate your degree of belief in heads to [0,1]? Squish your degree of belief in P to 1/2? Neither proposal seems particularly appetising. So this is a major problem, right?*

What I want to do now is modify the problem, and try and explore intuitions about what sharp credencers should do in similar situations. First I should note that the original problem is no problem for them, since PI tells them to have C(P)=1/2 anyway, so the credences match up. But I worry about this escape clause for sharp people, since it is still the case that the reasons for their having 1/2 in each case are quite different, and it seems almost an accident or a coincidence that they escape…

Consider a more general game. I have N cards, each with a number from 1 to N on one side. The reverse sides are identical. Now, on those reverse sides I write a proposition as before. On card number 1 I write the true proposition out of P and ¬P. On the remaining cards, 2 to N, I write the false one. Again you don’t know which is which etc. Now I shuffle the cards well, pick one and place it “proposition side up” on the table. For simplicity, let’s say it says “P”. I take it as obvious that your credence in the proposition “This is card number 1” should be 1/N. What credence should you have in P? Well, PI says it should be 1/2. But it should also be 1/N, since we know that P is true IF AND ONLY IF this is card number 1.

Imagine the case where N is large, 1 million, say. I get the feeling that in this case, you would want to say that it is overwhelmingly likely that P is false: 999,999 cards have the false proposition on, so it’s really likely that one of them has been picked. So my credence in P being true should be something like 1/1,000,000. Put it the other way round. Say there’s only 1 card. Then if you see the card says “P”, that as good as tells you that P is true, so your credence should move to 1.
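For what it’s worth, if (contrary to the spirit of White’s original setup) you treat seeing the proposition side as ordinary evidence and conditionalise with a sharp prior, this intuition can be checked directly with Bayes’ theorem. A minimal sketch; the function name and the choice of a sharp prior p are my framing, not part of the puzzle:

```python
def posterior(p, n):
    """P(P is true | the drawn card shows "P"), by Bayes' theorem.

    Prior C(P) = p. Card 1 bears the true proposition, so it shows "P"
    only if P is true; each of the other n - 1 cards shows "P" only if
    P is false. Each card is drawn with probability 1/n.
    """
    return (p / n) / (p / n + (1 - p) * (n - 1) / n)

# With the indifference prior p = 1/2, the posterior in P collapses to 1/n,
# which is also the posterior that the drawn card is card number 1.
print(posterior(0.5, 2))          # 0.5: White's coin, nothing moves
print(posterior(0.5, 1_000_000))  # ≈ 1e-06: P is almost certainly false
print(posterior(0.5, 1))          # 1.0: with one card, seeing "P" settles it
```

On this reading the apparent coincidence at N = 2 is just the special case where 1/N happens to equal the indifference prior.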

On the other hand, suppose we’re thinking about a proposition we are very confident in, say A: “Audrey Hepburn was born in Belgium” (it’s true, look it up). Let’s say C(A)=0.999 (not 1, because of residual uncertainty regarding Wikipedia’s accuracy). Now, if we have 2 cards and the one drawn says A, that’s good reason to believe that that card is the number 1 card. So in this case, it’s the belief regarding the card that moves, not the belief regarding the proposition.

What about the same game but with a million cards? Despite my strong conviction that Audrey Hepburn was, if briefly, an Ixelloise (?), the chance of the card drawn being number 1 is so small that maybe that should trump my original confidence and cause me to revise down my belief in A.

Here’s another trickier case. Now imagine playing the same game with some proposition I have strong reason to believe should have credence 1/2, like M defined above. And let’s say we’re playing with only 3 cards. For simplicity, let’s imagine that M is shown. How should your credences change in this situation? Again, it seems that C(M) = C(card 1) is required by the set up of the game. But I’m less sure which credence should move.
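Again assuming, for the sake of argument, that a sharp prior and straightforward conditionalisation are appropriate (the function and the specific numbers below are my illustration), Bayes’ theorem gives a uniform answer to which credence moves: the posterior in the proposition and the posterior in “this is card number 1” always come out equal, as the biconditional demands, and the prior and N between them determine where that common value lies.

```python
def posteriors(p, n):
    """Return (P(prop is true | prop shown), P(card is #1 | prop shown)).

    Prior in the proposition is p; there are n cards, card 1 bearing the
    true proposition and the rest the false one; one card is drawn at
    random. The two posteriors coincide, because the proposition is true
    if and only if card 1 was drawn: they share the same joint event.
    """
    shown = p * (1 / n) + (1 - p) * (n - 1) / n  # P(this proposition is shown)
    post_prop = (p / n) / shown
    post_card = (p / n) / shown
    return post_prop, post_card

# Hepburn case, C(A) = 0.999: with 2 cards the belief about the card moves
# up to ≈ 0.999 while C(A) stays put; with a million cards C(A) is dragged
# down to ≈ 0.001, as the large-N intuition suggests.
print(posteriors(0.999, 2))          # ≈ (0.999, 0.999)
print(posteriors(0.999, 1_000_000))  # ≈ (0.000998, 0.000998)
# The three-card game with M: both credences settle at 1/3.
print(posteriors(0.5, 3))            # ≈ (0.3333, 0.3333)
```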

In any case, is there a principled way to decide whether it’s your belief about the card or your belief about the proposition that should change? And if there isn’t, doesn’t this tell against the sharp credentist as much as against the imprecise one?

One objection you might have is that this is all going to be cleared up by applying some Bayes’ theorem in these circumstances, since in these cases (as opposed to White’s original one) seeing which proposition is drawn really does count as learning something. I don’t buy this, since the set up requires that your degrees of belief in the two propositions be identical. Updating on one given the other is going to shift the two closer together, but I don’t think that’s going to solve the problem.

________________

* The third option, a little squish and a little dilate to make them match seems unappealing, and I ignore it for now, since it seems to have BOTH problems that the above approaches do…

Written by Seamus

July 19, 2010 at 1:04 pm

Graduate Conference on Philosophy of Probability


I am organising this year’s LSE Philosophy of Probability Graduate Conference. This has been at least one of the reasons I have failed to blog for ages. But fear not! I am pondering some things that could well become blog posts. Here is a list of them:

  • More stuff about a logic of majority
  • Dutch book arguments (what they actually prove, and under what conditions)
  • Pluralism (species concept pluralism, logical pluralism, pluralism about interpretations of probability)
  • More fallibilist realism (following on from reading Kyle Stanford’s book and discussions with some LSE chaps)
  • A couple more awesome headlines
  • Perhaps some stuff about learning emacs, auctex and reftex. (And biblatex, tikz, beamer…)

As and when these posts achieve maturity, I’ll link to them here.

Written by Seamus

February 28, 2010 at 3:52 pm

The pessimistic induction and Descartes’ evil demon


The pessimistic induction (PI) says something like this: Previous scientific theories have been wrong, so we shouldn’t believe what current science tells us. But let’s modify that with an “optimistic induction” that there is continuity through theory change (the wave nature of light survived the abandonment of the ether theory…) and our methods are improving. The PI then seems to be saying something like this: It might be the case that this or that particular theoretical entity will be discarded some time in the future. Well, this “might” claim looks a lot like Descartes’ evil demon argument for radical scepticism.

Descartes’ argument says that you might be being tricked by a powerful evil demon. The upshot is supposed to be a radical scepticism about the reality of the things we think we see. So I see a table, but I might be being tricked, so I should not believe the table exists. But obviously this brand of radical scepticism is not the orthodoxy. Why? Because another way to approach the evil demon is a kind of “fallibilism” that holds that I should believe that what I see exists, while accepting that I might be wrong, I might be being tricked by this demon.

In much the same way, I think the right response to the PI (as moderated by the optimistic comments made above) is a “fallibilist realism”: while I can be confident that some element of current science will be discarded, on the whole I should believe in theoretical entities.

I think this picture fits nicely with scientific practice as well. Doing science whilst not endorsing the reality of the entities one deals with seems difficult. I mean, if I were a scientist and I didn’t believe in electrons, I’d find it difficult to theorise about them… Or to put it another way, if I were a young-Earth creationist, I would not become a paleontologist. (OK, cheap shot. Sorry). The point is that on the whole, scientists will believe in what they study, but will of course accept they might be wrong.

So this point seems obvious enough that I’m surprised I haven’t read about it before. I’m interested in hearing about any precedents of this position in the literature.

Written by Seamus

January 3, 2010 at 9:37 pm

What something could not be


Following Benacerraf’s rightly famous paper “What numbers could not be”, a variety of authors have written papers entitled “What * could not be”. Here is a list of the ones I’ve come across:

  • What numbers could not be, P. Benacerraf
  • What conditional probabilities could not be, A. Hajek
  • What structures could not be, J. Busch
  • What possible worlds could not be, R. Stalnaker
  • What chances could not be, J. Ismael
  • What justification could not be, M.T. Nelson
  • What mathematical truth could not be, P. Benacerraf
  • What unarticulated constituents could not be, L. Clapp
  • What equality of opportunity could not be, M. Risse

OK, I haven’t read most of these – I just did a Google Scholar search for “What * could not be” – but it’s interesting to see how a good title is “remixed”… (I’ve only read the top three or four.) Also worthy of mention is “Numbers can be just what they have to” by Colin McLarty, another way to refer obliquely to Benacerraf. Another good title is Nagel’s “What is it like to be a bat?” My favourite title to play on that classic paper is “What is it like to be boring and myopic?”

Written by Seamus

December 3, 2008 at 7:00 pm

Posted in philosophy


I am a frog-pasta-tube


I have been reading some craaa-aaazy stuff today. So this one paper suggested that the fundamental metaphysical nature of the world is that of a graph. An unlabelled asymmetric graph-theoretic entity. OK. It kind of fits in with a lot of stuff about ontic structural realism. Kind of.

Now I’m reading something about the bird-eye view versus the frog-eye view of space and time. The “bird” sees the whole spacetime structure from the outside. What looks like a particle moving with constant speed to the frog looks like a strand of spaghetti to the bird. Two particles orbiting each other look, to the bird, like two strands of pasta entwined in a double helix. Like the weird blob thing in Donnie Darko. So the bird sees the frog as an ensemble of worldlines for the frog’s particles. The frog looks like a tube of pasta strands to the bird. I did not make any of this up. (Except the Donnie Darko reference.) It’s all there in Max Tegmark’s The Mathematical Universe. The weak point in his argument is that he claims that any “theory of everything” will be entirely mathematical. This simply cannot be the case. We have plenty of theories that are entirely mathematical; go ask your local maths professor. If it is to be a theory of the physical world, the theory is going to have to involve some kind of pointers as to how to apply its results to the world. So we had a theory of Riemannian manifolds before Einstein came along, but that didn’t make the (mathematical) theory a scientific theory. Not until Einstein started showing how the manifold could relate to our conception of space. Tegmark pretty much agrees with this point, but then says that the interpretation isn’t fundamental to the theory: we have a mathematical theory, and the interpretation is done afterwards and isn’t necessarily part of that theory. This is both methodologically backwards and, I think, just plain wrong. The interpretation is central to that mathematical theory qua scientific theory.

I am sympathetic to the (ontic) structural realist flavour of what Tegmark is getting at, but I don’t think his “derivation” of his “MUH: Mathematical Universe Hypothesis” really works. I have to say I gave up after the first 10 or so pages because it was getting near to dinner time and the two column layout is a pain in the arse to read on the computer.

Tegmark also has that annoying scientist’s habit of not putting the names of the articles in his bibliography. So in the text he will cite “[14]”, which isn’t helpful. Then if I scroll to the bibliography I will see that [14] is “J. Ladyman Studies in History and Philosophy of Science 29 409-424 (1998)”. God dammit. If he just wrote Ladyman (1998) in the text instead of [14], I’d know immediately that he meant “What is structural realism?” It would mean much less hopping back and forth. And what if the author and journal details weren’t enough for me to identify the paper? I’d have to bloody well look it up on the internet. I appreciate the practice makes sense in science, where knowing the actual paper under discussion isn’t important to the argumentation, and where titles of scientific papers are long and would lead to bloated bibliographies, but come on! It isn’t even as if it would be a lot of work to change it. How hard is it to add the line “\usepackage{natbib}” to the preamble of your LaTeX document and change your bibliography style to chicago, or similar? I bet the names of the articles are already in the BibTeX file…

I did promise to post something that wasn’t a rant. And this started out as a light-hearted look at some of the dangerously bonkers stuff I take seriously every day. But it turned into a rant about bibliography formatting, of all things. I appear to be incapable of not ending up complaining about something. I guess that means I am just a mean spirited cynical rantophile.

Written by Seamus

August 4, 2008 at 5:54 pm

Experimental Philosophy


I didn’t know Google had a “blog search” thing. So I tried it out. I searched for “philosophy of science” (obviously). One of the results was about experimental philosophy. I read it and decided I don’t like “X-Phi” (which is the annoying name that these people have chosen to give to their approach). Bristol University has its own x-phi page here. I mention this out of some kind of misplaced loyalty, not out of any approval for such an endeavour.

Here is some experimental philosophy in action. And here is an NYTimes article about it. It contains the following passage which I think is marvelous:

Colleagues in biology have P.C.R. machines to run and microscope slides to dye; political scientists have demographic trends to crunch; psychologists have their rats and mazes. We philosophers wave them on with kindly looks. We know the experimental sciences are terribly important, but the role we prefer is that of the Catholic priest presiding at a wedding, confident that his support for the practice carries all the more weight for being entirely theoretical.

I don’t take issue with what these people are doing. I just wonder whether the label philosophy really applies… I mean, what makes this new experimental philosophy philosophy, rather than, say, anthropology or psychology or political science? Obviously philosophers should take empirical findings into consideration. Kant’s view that Euclidean geometry is necessarily the only possible geometry, and therefore necessarily the geometry of the world, is clearly untenable in our post-elliptic geometry, post-Minkowski spacetime world. Any number of examples could be given here. But philosophy is still somehow separated off from empirical science. Of course you could just criticise me for having an old-fashioned view of the nature of philosophy. Armchair theorising is out of favour.

OK, here is the problem: what can the results of these polls tell us? If they tell us that some notion, X, is actually pretty much universal, then philosophers are allowed to say things like “it is clear that X” or “most people would agree that X.” But if everyone agrees with X, it’s already clear that we’re allowed to assert it. No experiment is needed to demonstrate that everyone thinks X is true. Everybody already knows that: X is pretty damn obvious, right? If the poll gives more ambiguous results, it is unlikely that the notion under scrutiny is the kind of thing philosophers would go about asserting without justification. So this Experimental Philosophy might take its questions from the philosophical literature, but its methods don’t seem to fit with philosophical practice.

I’m stressing the philosophical part. Obviously it is interesting to see what people think about the truth value of sentences like “the king of France is bald” or “the king of France is not on a state visit to Prussia” but is it really philosophy to go around asking people?

Written by Seamus

July 18, 2008 at 3:03 am

Journal availability woes and dissertation meandering worries


Since Bristol University’s philosophy department lists, as one of its principal research areas, the philosophy of science, you would expect the university to have access to the journal entitled “Philosophy of Science.” Not so. I have spent some time trying to get at two articles from that journal with no luck. They are listed on JSTOR, but the actual articles aren’t on there, and it directs me back to the Chicago Journals website, which asks me to log on. All in all rather frustrating. In desperation I just tried searching Google for “Psillos structure” and lo! I was directed to the PhilSci Archive, which contained a version of the paper I wanted. In fact, the other article I wanted was also available through that archive. So God bless you, Pittsburgh! I can’t believe I haven’t come across this resource before. The papers I got were from conference proceedings, but there are also some articles that are forthcoming in various journals. It would be nice to see the PhilSci Archive grow into a repository of preprints, much as arXiv has for physics papers.

Rather than actually working on the fundamentals of my dissertation, I have been borrowing books and downloading papers tangentially related to my topic. I have a core idea for my dissertation, and then many satellite projects. Hopefully I can somehow glue it all into a cohesive whole. More for my benefit than anyone else’s, I shall summarise the myriad directions my current project is taking. The main nexus of the dissertation is geometry and structuralism: a detailed look at Shapiro’s and Resnik’s accounts of ontology and epistemology in the context of geometry. Often when they are discussing the structuralist’s stock example, “the natural number structure”, they say things which are supposed to be true of all mathematical structures. In the case of geometry, it isn’t apparent that this is as straightforward as they imagine. My main aim is to look at whether structural interpretations of geometry will work. Some of the intellectual meanderings that I might also write about are:

  • The history of geometry, particularly 19th century. Interpreted as a move toward structuralism? (Shapiro argues as much in his 1997 book)
  • Bourbaki-type set theoretic structure and Klein’s Erlangen Programme as kinds of structuralism
  • Genetic Epistemology. Can how we learn geometry be interpreted structurally? (Piaget wrote a book on the child’s conception of geometry which would be my main source for this.)
  • Structural realism about spacetime. How this squares with structuralism about mathematical/axiomatic geometry.
  • Shapiro talks of “linguistic resources” limiting what we can do. In the case of geometry, it might be better to discuss “conceptual or imaginative resources” instead.
  • Our limits as a benefit. Why it is good for structuralism that we aren’t aware that any circle we draw is imperfect, or that any line we draw actually has a thickness. These defects in our perception make pattern recognition and abstraction easier.

So that should keep me going. All I need to do now is read all this stuff and then write loads. Pff. Easy.

Written by Seamus

July 8, 2008 at 12:59 pm