## Nonprobabilistic Cognitive Decision Theory

I’ve just got back from a philosophy of probability conference in Oxford. It was very interesting.

Here are some thoughts I had about Hilary Greaves’ talk. The project is to flesh out an idea of epistemic rationality by analogy to practical rationality and practical decision theory. The idea is that in the “Cognitive Decision Theory” the sort of acts you are interested in are various beliefs you could adopt. Or various belief functions you could adopt. There is an idea of cognitive utility which is supposed to be a measure of how epistemically good you think a certain belief state is. Greaves and Wallace 2006 show that it is rational to update by conditionalisation when some conditions are satisfied.

The thing is, their theory starts by assuming that belief is represented by a probability function. This is a fairly standard, but perhaps too restrictive, assumption. To paraphrase from Greaves’ talk yesterday: there are various intuitions we have about epistemic rationality. If we can derive lots of these from the theory, without building them into the theory, then that’s a good thing. The less we have to start with while still getting back all (or most) of our intuitions, the better.

So let’s apply that to the issue of presupposing that belief is probabilistically coherent. Why not start with some more general belief-function setup and see under what circumstances probability measures are the uniquely rational way to structure your beliefs? The answer to this question is, I imagine, “because it’s really hard to prove anything about any more general framework”. True enough. But what about the case where belief is represented by a set of probability measures? How much of the argument of Greaves and Wallace goes through if you just replace every instance of “probability measure” with “set of probability measures”, and every probability function p with a set P of such functions?

A large spread of probability measures in an agent’s representor can be seen as indicating that agent’s desire to withhold judgment pending more conclusive evidence. I think that’s a valuable thing to want to model formally. The probability measure representation of belief requires that the agent be maximally probabilistically opinionated, which I don’t think is epistemically rational…

Presumably in this broader context it is no longer possible to prove that conditionalisation is the best updating rule, but perhaps some analogue of conditionalisation, one which conditionalises on sets of probability measures, still works.
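One natural candidate for such an analogue is simply pointwise conditionalisation: update every member of the representor on the evidence. Here is a toy sketch of that idea (my own illustration, not anything from Greaves and Wallace; the function names and the convention of dropping measures that assigned the evidence probability zero are assumptions):

```python
# A credal state is a set (here, a list) of probability measures, each a
# dict mapping worlds to probabilities. Updating conditionalises each
# member of the set on the evidence, a set of worlds.

def conditionalise(p, evidence):
    """Conditionalise a single measure p on the evidence."""
    total = sum(p[w] for w in evidence)
    if total == 0:
        return None  # this member ruled the evidence out entirely
    return {w: (p[w] / total if w in evidence else 0.0) for w in p}

def update_representor(representor, evidence):
    """Conditionalise every member; drop members that gave the evidence probability zero."""
    updated = [conditionalise(p, evidence) for p in representor]
    return [p for p in updated if p is not None]

# Two measures over worlds {a, b, c}: the agent withholds judgment between them.
representor = [
    {"a": 0.5, "b": 0.3, "c": 0.2},
    {"a": 0.1, "b": 0.6, "c": 0.3},
]

# Learn that world c does not obtain.
posterior = update_representor(representor, {"a", "b"})
```

Note that the spread of the updated set can itself narrow or widen as evidence comes in, which is part of what makes the imprecise framework interesting here.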

Here’s a suggestion as to when it would be rational to have probabilistic credences: if your epistemic utility function were sufficiently close to the accuracy-based scoring rules of Joyce 1998, then perhaps probabilism would be uniquely rational.
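For concreteness, the Brier score is one accuracy measure in the vicinity of the ones Joyce appeals to: it penalises the squared distance between your credences and the actual truth values. A minimal sketch (proposition names are my own illustration):

```python
# The Brier score: sum of squared distances between credences and truth
# values (1 for true, 0 for false). Lower scores mean greater accuracy.

def brier_score(credences, truth):
    """credences, truth: dicts mapping propositions to values."""
    return sum((credences[p] - truth[p]) ** 2 for p in truth)

truth = {"rain": 1, "wind": 0}
credences = {"rain": 0.8, "wind": 0.3}
score = brier_score(credences, truth)  # 0.04 + 0.09 ≈ 0.13
```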

In general, however, I think that as well as accuracy (which pushes you towards probabilism), there is another source of epistemic utility. Or rather, a source of epistemic disutility which you want to avoid. Accuracy means you derive utility from being close to the truth, and disutility from being far from it.

Here’s a kind of geometrical analogy to illustrate the sources of utility in contention. Alice’s credence in an event is represented by some subinterval [a, b] of the unit interval, while Bob’s credence is some one point x in the unit interval. The event actually has some value v. While the vagueness of Alice’s credence is itself a source of epistemic disutility, the fact that v lies in [a, b] is a good thing and gives Alice some positive utility. Bob, on the other hand, gets no “vagueness penalty”, but he does get some disutility proportional to how far wrong he was, i.e. proportional to |x − v|. So, if an agent does not have enough information to pin down the actual value, it might be epistemically rational to take the hit from the “vagueness penalty” in order to avoid Bob’s “wrongness penalty”. In the same way, paucity of information might make vague interval valued credences preferable to probabilistic point values.
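The trade-off can be made concrete with toy disutility functions. The particular weights and functional forms below are illustrative assumptions only, not a worked-out proposal:

```python
# Toy epistemic disutility functions for the Alice/Bob analogy.
# Lower disutility is better. Weights and forms are illustrative only.

def interval_disutility(lo, hi, v, vagueness_weight=1.0):
    """Alice: a vagueness penalty proportional to the width of [lo, hi],
    plus a wrongness penalty if the true value v falls outside it."""
    vagueness = vagueness_weight * (hi - lo)
    wrongness = 0.0 if lo <= v <= hi else min(abs(v - lo), abs(v - hi))
    return vagueness + wrongness

def point_disutility(x, v):
    """Bob: no vagueness penalty, but disutility proportional to |x - v|."""
    return abs(x - v)

v = 0.7                                   # the actual value
alice = interval_disutility(0.5, 0.9, v)  # width 0.4, truth inside: 0.4
bob = point_disutility(0.2, v)            # |0.2 - 0.7| = 0.5
```

On these numbers Alice’s vagueness penalty (0.4) is smaller than Bob’s wrongness penalty (0.5), so withholding judgment comes out ahead; with a narrower interval or a better-placed point the comparison would flip, which is exactly the trade-off at issue.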
