June 21st, 2008

Three Updates

I’m in St Andrews now, visiting Arché, and while it’s been a lot of fun, and very rewarding intellectually, it’s been hard work! I’d hoped it would be a relaxing break with lots of blogging, but that hasn’t quite worked out. Anyway, here are three things I’ve been working on.

This week I’ll be at the Language and Law workshop at the University of Oslo, and I’m hoping to have lots of interesting things to report back from that.

Posted by Brian Weatherson in Uncategorized


This entry was posted on Saturday, June 21st, 2008 at 9:31 am and is filed under Uncategorized.

5 Responses to “Three Updates”

  1. fitelson says:

    Brian — Re: Wright and Chalmers, there are even more compelling/puzzling cases involving disjunction. As Carnap showed, there are cases in which the evidence raises the probability of EACH disjunct, but lowers the probability of the disjunction. These cases are also discussed by Salmon in his paper “Confirmation and Relevance”. See example 3 in section 3:

    http://socrates.berkeley.edu/~fitelson/148/salmon_car.pdf
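    For concreteness, here is a small made-up probability space (my own illustrative numbers, not the ones in Salmon's Example 3) in which the evidence E raises the probability of each of A and B but lowers the probability of A-or-B — the trick is that E makes the disjuncts co-occur:

    ```python
    # Toy probability space for the Carnap/Salmon point: E confirms each
    # disjunct A and B, yet disconfirms the disjunction A-or-B.
    # (Illustrative numbers only, not Salmon's actual Example 3.)

    # Worlds as (A, B, E) truth-value triples with their probabilities.
    worlds = {
        (True,  True,  True):  0.25,  # under E, A and B happen together
        (False, False, True):  0.25,
        (True,  False, False): 0.20,  # without E, A and B exclude each other
        (False, True,  False): 0.20,
        (False, False, False): 0.10,
    }

    def prob(pred):
        return sum(p for w, p in worlds.items() if pred(w))

    def cond(pred, given):
        return prob(lambda w: pred(w) and given(w)) / prob(given)

    A    = lambda w: w[0]
    B    = lambda w: w[1]
    E    = lambda w: w[2]
    AorB = lambda w: w[0] or w[1]

    print(prob(A), cond(A, E))        # 0.45 rises to 0.5: E confirms A
    print(prob(B), cond(B, E))        # 0.45 rises to 0.5: E confirms B
    print(prob(AorB), cond(AorB, E))  # 0.65 FALLS to 0.5: E disconfirms A-or-B
    ```

    Unconditionally the disjuncts are exclusive, so P(A or B) = P(A) + P(B); given E they coincide, so the disjunction gets no boost beyond a single disjunct.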

  2. jrgwilliams says:

    Hi Brian,

    Thanks for putting up the notes on probability; interesting stuff. I was wondering: does the scoring argument you mention in favour of probabilism about credences presuppose bivalence? (From the way you describe it, it sounded like the function T mapped every proposition to either 1 or 0.) That seems kinda significant, especially in connection with what sort of attitude is appropriate for indeterminate propositions.

    Robbie
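    For readers unfamiliar with the scoring argument Robbie is asking about, here is a minimal numeric sketch of the accuracy-dominance idea under the bivalence assumption he flags (my own toy numbers): with T assigning every proposition 1 or 0, an incoherent credence function gets a strictly worse Brier score than some coherent one in every possible world.

    ```python
    # Accuracy-dominance sketch under bivalence: T maps each proposition
    # to 1 (true) or 0 (false). Credences c(p) = c(not-p) = 0.6 violate
    # probabilism (they sum to 1.2); the coherent credences (0.5, 0.5)
    # get a strictly lower (better) Brier score in BOTH possible worlds.

    def brier(credences, truth_values):
        # Sum of squared distances of the credences from the truth values.
        return sum((c - t) ** 2 for c, t in zip(credences, truth_values))

    incoherent = (0.6, 0.6)   # credences in (p, not-p); not a probability
    coherent   = (0.5, 0.5)   # a probabilistically coherent alternative

    world_p     = (1, 0)  # p true:  T(p) = 1, T(not-p) = 0
    world_not_p = (0, 1)  # p false: T(p) = 0, T(not-p) = 1

    for world in (world_p, world_not_p):
        print(brier(incoherent, world), brier(coherent, world))
    # In each world the scores are 0.52 vs 0.5: the incoherent
    # credences are accuracy-dominated.
    ```

    Robbie's worry is visible in the setup: the argument needs T to take only the values 1 and 0, which is exactly what is in question for indeterminate propositions.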

  3. eschwitz says:

    What could possibly justify the policy of not using contractions? (Right, I know. I mean a good justification!)

  4. easwaran says:

    I’ve had those same worries in response to a paper I was discussing with one of your students at Berkeley. I wasn’t thinking of specific examples like disjunctions, but rather had some more general grounds. In particular, if you take a subjective Bayesian line, then one ought to have some distribution of credences even with very little evidence around. Because there are infinitely many propositions, some of these are going to get extremely high credence even though you don’t have any good evidence for them. Once you gather some good evidence, the credence might decrease very slightly, but the evidence might be good enough for you now to count as knowing, or being justified in believing, the proposition. If this sort of case is possible, then you can learn p by means of evidence that decreases your credence in p.

    I suppose the disjunction case is one particular example of this, as are lots of cases where you come to learn p because it is logically entailed by something else you learned through increasing its credence. But it’s not obvious to me that there even has to be a relevant proposition that entails the one you learn for this to be possible.

  5. Gregory Wheeler says:

    Disjunction is very interesting in probabilistic logics. Strange things happen when you swap ‘XOR’ for genuine Boolean disjunction in this setting. One thing that happens is that you can produce Simpson-like reversals from aggregating cases. This is along the same lines as Salmon’s observation, but it is done in the context of I. J. Good’s sufficient conditions for avoiding reversals. It is clear from the probabilistic side why these reversals occur (Boolean ‘or’ doesn’t respect the partitioning of cases), but it is somewhat surprising to applied logicians who imagine adding probability to logics popular for agents and knowledge representation.
    See: http://centria.di.fct.unl.pt/~greg/papers/EPIA07-whe-3.pdf
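    The aggregation effect behind such Simpson-like reversals can be seen with the classic kidney-stone treatment data (Charig et al. 1986) — this is just the standard Simpson's-paradox illustration, not the XOR construction in the linked paper:

    ```python
    # Simpson-style reversal via aggregation: treatment A beats treatment B
    # within each subgroup, yet B beats A in the pooled data.
    # Counts are the classic kidney-stone data (Charig et al. 1986).

    data = {
        # group: {treatment: (successes, trials)}
        "small stones": {"A": (81, 87),   "B": (234, 270)},
        "large stones": {"A": (192, 263), "B": (55, 80)},
    }

    def rate(successes, trials):
        return successes / trials

    for group, arms in data.items():
        # A's success rate exceeds B's in each subgroup taken separately.
        print(group, rate(*arms["A"]) > rate(*arms["B"]))  # True, True

    # Pool the counts across groups: the comparison flips direction.
    pooled = {
        t: tuple(sum(arms[t][i] for arms in data.values()) for i in (0, 1))
        for t in ("A", "B")
    }
    print(pooled)  # {'A': (273, 350), 'B': (289, 350)}
    print(rate(*pooled["A"]) < rate(*pooled["B"]))  # True: now B looks better
    ```

    The reversal arises because the treatments were applied unevenly across the subgroups, so pooling does not respect the partitioning of cases — the same structural point Gregory makes about Boolean ‘or’.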

    W.r.t. the contraction property, the issue traces back to the various notions of independence that arise when working with sets of distributions. Conditional strict and strong independence satisfy the graphoid properties, hence satisfy contraction. Levi’s conditional confirmational independence (epistemic independence) and conditional Kuznetsov independence fail contraction (see Cozman and Walley, Annals of Mathematics and AI, 2005). The problem, or the shape of the problem, is that epistemic independence is much easier to justify, since it admits a direct behavioral justification. Strict independence is easy to state mathematically, but it violates convexity, so it does not admit a behavioral cover story. The action here has been to work out a plausible story for strong independence.
