Three Updates

I’m in St Andrews now, visiting Arche, and while it’s been a lot of fun, and very rewarding intellectually, it’s been hard work! I’d hoped it would be a relaxing break with lots of blogging, but that hasn’t quite worked out. Anyway, here are three things I’ve been working on.

  • I’ve updated Deontology and Descartes’ Demon (Warning: Word Doc) to (a) take account of some objections that were made, and (b) get it into the right form for the Journal of Philosophy. The latter was hard work: they won’t let you use contractions and I can’t write without them! The former was more fun.
  • I’ve written five lectures on probability in philosophy – 1, 2-3, 4-5 – and given four of them. I don’t have all my books/papers here, so some of the references to what other people say were from memory. So if I’ve misrepresented you, my apologies in advance. (I mostly got around this shortcoming by not talking about particular people much at all, just making sweeping generalisations about what lots of people think. So there are a few things that could be given better citations.)

  • Various people (most notably Crispin Wright and David Chalmers) have been pressing me on one of the core assumptions in Moderate Rationalism and Bayesian Scepticism, namely that you can’t learn p by getting evidence that decreases its probability. I’d like to have a good response to their worries, and if I had one I’d post it here. The worries are specifically about cases where p is a disjunction, and the evidence raises the probability of one disjunct but decreases the probability of the other (there’s a toy numerical sketch of such a case just after this list). Hopefully I’ll soon have thought of something clever to say, but for now I don’t have much of any use.
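
To make their worry concrete, here is a toy version of the disjunction case (my numbers, purely illustrative): A and B are incompatible, the prior probability of A ∨ B is high but evidentially idle, and the very evidence that puts you in a position to know A drags the disjunction’s probability down.

```python
# Toy disjunction case (illustrative numbers only). A and B are
# mutually exclusive, so P(A or B) = P(A) + P(B).
p_a, p_b = 0.10, 0.85              # before: high P(A or B), no real support
print(round(p_a + p_b, 2))         # 0.95

p_a, p_b = 0.90, 0.02              # after: evidence strongly favours A
print(round(p_a + p_b, 2))         # 0.92 -- lower than before, yet now
                                   # you arguably know A, and so A or B
```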

This week I’ll be at the Language and Law workshop at the University of Oslo, and I’m hoping to have lots of interesting things to report back from that.

5 Replies to “Three Updates”

  1. Hi Brian,

    Thanks for putting up the notes on probability; interesting stuff. I was wondering: does the scoring argument you mention in favour of probabilism about credences presuppose bivalence (from the way you describe it, it sounded like the function T mapped all propositions to one or other of 1 or 0)? That seems kinda significant, especially in connection with what sort of attitude is appropriate for indeterminate propositions.
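
    For concreteness, here’s the shape of the dominance argument as I understood it, with toy numbers; note that T is bivalent by construction:

    ```python
    # Toy version of the accuracy-dominance argument (my numbers).
    # The truth assignments give every proposition exactly 0 or 1 in
    # each world, which is the bivalence presupposition in question.

    def brier(credence, truth):
        # sum of squared distances between credences and truth values
        return sum((truth[p] - credence[p]) ** 2 for p in credence)

    worlds = [{"p": 1, "not-p": 0}, {"p": 0, "not-p": 1}]
    incoherent = {"p": 0.3, "not-p": 0.3}  # credences sum to 0.6, not 1
    coherent = {"p": 0.5, "not-p": 0.5}    # a probabilistic repair

    for w in worlds:
        print(round(brier(incoherent, w), 2), round(brier(coherent, w), 2))
    # 0.58 0.5 in each world: the coherent credences are strictly more
    # accurate however things turn out (accuracy dominance)
    ```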

    Robbie

  2. I’ve had those same worries in response to a paper I was discussing with one of your students at Berkeley. I wasn’t thinking of specific examples like disjunctions, but rather had some more general grounds. In particular, if you take a subjective Bayesian line, then one ought to have some distribution of credences even with very little evidence around. Because there are infinitely many propositions, some of these are going to get extremely high credence even though you don’t have any good evidence for them. Once you gather some good evidence, the credence might decrease very slightly, but the evidence might be good enough for you now to count as knowing, or being justified in believing, the proposition. If this sort of case is possible, then you can learn p by means of evidence that decreases your credence in p.

    I suppose the disjunction case is one particular example of this, as are lots of cases where you come to learn p because it is logically entailed by something else whose credence the evidence increased. But it’s not obvious to me that there even has to be a relevant proposition that entails the one you learn for this to be possible.
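
    A quick Bayes-theorem illustration of the general case (the numbers are made up): the posterior is a shade lower than the prior, but unlike the prior it actually rests on evidence.

    ```python
    # Made-up numbers: a high but evidentially idle prior, updated on
    # evidence E that is slightly more expected if p is false.
    prior = 0.95
    p_e_given_p, p_e_given_not_p = 0.90, 0.999

    posterior = (p_e_given_p * prior) / (
        p_e_given_p * prior + p_e_given_not_p * (1 - prior))
    print(round(posterior, 3))   # 0.945: lower than the prior, but now
                                 # the credence is backed by evidence
    ```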

  3. Disjunction is very interesting in probabilistic logics. Strange things happen when you swap ‘XOR’ for real Boolean disjunction in this setting. One thing that happens is that aggregation of cases can produce Simpson-like reversals. This is along the same lines as Salmon’s observation, but it is done in the context of I. J. Good’s sufficient conditions for avoiding reversals. It is clear from the probabilistic side why these reversals occur (Boolean ‘or’ doesn’t respect the partitioning of cases), but it is somewhat surprising to applied logicians who imagine adding probability to logics popular for agents and knowledge representation.
    See: http://centria.di.fct.unl.pt/~greg/papers/EPIA07-whe-3.pdf
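
    For anyone who hasn’t seen the pattern, here is a plain Simpson reversal in its standard form (the familiar kidney-stone numbers, not the XOR construction from the linked paper):

    ```python
    # Standard kidney-stone data: treatment A beats B within each
    # group, but B beats A once the groups are pooled.
    data = {  # (recoveries, total) per treatment per group
        "small stones": {"A": (81, 87), "B": (234, 270)},
        "large stones": {"A": (192, 263), "B": (55, 80)},
    }
    pooled = {"A": [0, 0], "B": [0, 0]}
    for group, arms in data.items():
        for arm, (rec, tot) in arms.items():
            pooled[arm][0] += rec
            pooled[arm][1] += tot
            print(group, arm, round(rec / tot, 2))
    for arm, (rec, tot) in pooled.items():
        print("pooled", arm, round(rec / tot, 2))
    # A wins in both groups (0.93 > 0.87 and 0.73 > 0.69) yet loses
    # overall (0.78 < 0.83): the reversal comes from aggregation.
    ```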

    W.r.t. the contraction property, the issue traces back to various notions of independence that arise when working with sets of distributions. Conditional strict and strong independence satisfy the graphoid properties, hence satisfy contraction. Levi’s conditional confirmational irrelevance (epistemic independence) and conditional Kuznetsov independence fail contraction. (See Cozman and Walley, Ann. of Math and AI, 2005). The problem, or the shape of the problem, is that epistemic independence is much easier to justify, since it admits a direct behavioral justification. Strict independence is easy to state mathematically, but it violates convexity, so it does not admit a behavioral cover story. The action here has been to work out a plausible story for strong independence.
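
    For reference, contraction is the following graphoid closure condition on conditional independence statements (standard formulation, not specific to the Cozman and Walley paper):

    ```latex
    % Contraction: if X is independent of Y given Z, and independent
    % of W given (Y, Z), then X is independent of (Y, W) given Z.
    (X \perp Y \mid Z) \;\wedge\; (X \perp W \mid Y, Z)
      \;\Longrightarrow\; (X \perp (Y, W) \mid Z)
    ```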
