June 4th, 2013

Survival and Decision Making

First, an apology. I messed up the system that notifies me when there are comments awaiting moderation, so several comments sat in the queue for several days. That shouldn’t have happened, and I’m sorry it did.

I’ve written up a short note on Robbie Williams’s great paper Decision Making Under Indeterminacy. It was a bit long, and a bit symbol-heavy, for a blog post.

The paper concerns cases where the agent is going to split into two, in some sense, and there’s no fact of the matter about which of the two will really be them. I think in those cases it can be rational to act as if it is 50/50 which of them will be you. Robbie, in effect, disagrees. (Or at least, if I’ve read him aright, he disagrees.) I present a couple of cases designed to strengthen the intuition that I’m right. Here’s the paper.

Posted by Brian Weatherson in Uncategorized



5 Responses to “Survival and Decision Making”

  1. jrgwilliams says:

    Hi Brian, thanks for the very interesting thoughts on the paper!

    I just wanted to say a bit more about why I ended up without any of the various closure constraints you mention (and so ended up committed to the result you mention at the end). I also want to point out one thing (independent of that motivation) that makes me resist even the weak closure conditions on the representor you mention.

    The view in the background that I’m interested in is one where we have a fully classical semantics/logic, and where we think that (determinately) the truth values have their standard normative significance, so that e.g. accurate partial beliefs will match the classical truth values. We then run through a Joyce-style argument for probabilism, relative to each precisification. Assume that whether or not Alpha is Omega in situation S (or Beta, in your variant) is non-contingent. Then the argument for probabilism tells us that, on pain of accuracy domination, we should either have probability 1 in Alpha being Omega in situation S, or probability 0 in it; but it’s indeterminate which.
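
    To make the domination point concrete, here is a minimal sketch, using the Brier score for concreteness (the score, the function names, and the numbers are illustrative; any strictly proper accuracy measure would do):

        # Brier inaccuracy of a credence in a proposition with truth value 0 or 1.
        def brier_inaccuracy(credence, truth_value):
            return (credence - truth_value) ** 2

        # "Alpha is Omega in S" is non-contingent, so relative to a given
        # precisification it has the same truth value at every world. Relative
        # to a precisification on which it is true everywhere, credence 1 is
        # strictly more accurate than any alternative at every world:
        for c in [0.1, 0.5, 0.9]:
            assert brier_inaccuracy(1.0, 1) < brier_inaccuracy(c, 1)
            # ... and on a precisification where it is false everywhere,
            # credence 0 wins instead:
            assert brier_inaccuracy(0.0, 0) < brier_inaccuracy(c, 0)
        # So only credences 1 and 0 escape determinate accuracy domination,
        # but it is indeterminate which of the two is mandated.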

    Now, that gives you an indeterminate norm concerning what your credences should be (though determinately they are represented by a single credence function). The backstory behind my use of representors that I like (discussed a bit in the “mindmaking” section of the paper) is that their elements are the various not-determinately-wrong mental states to have.

    From this perspective, the issue I’d see with an option like B in your example is that it determinately fails to maximize expected utility relative to any mental state that’s not determinately accuracy-dominated.
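
    To fix the shape of the disagreement, here is a toy payoff table (the numbers are invented, and “B” merely stands in for the hedged option in Brian’s note):

        # Toy decision: payoffs depend on whether Alpha is Omega in S.
        # Invented numbers; option B stands in for the hedged option at issue.
        payoffs = {
            "A": {"omega": 10, "not_omega": 0},   # bet that Alpha is Omega
            "B": {"omega": 6, "not_omega": 6},    # constant, hedged option
            "C": {"omega": 0, "not_omega": 10},   # bet that Alpha is not Omega
        }

        def expected_utility(option, prob_omega):
            row = payoffs[option]
            return prob_omega * row["omega"] + (1 - prob_omega) * row["not_omega"]

        # Relative to each sharpening (probability 1 or probability 0),
        # B determinately fails to maximize expected utility:
        for prob in [1.0, 0.0]:
            best = max(expected_utility(o, prob) for o in payoffs)
            assert expected_utility("B", prob) < best

        # But relative to the 50/50 credence Brian defends, B is uniquely best:
        assert all(expected_utility("B", 0.5) > expected_utility(o, 0.5)
                   for o in ("A", "C"))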

    Putting into the representor probabilities that aren’t “induced” by a sharpening would also change the predictions for the sorites I discuss in the paper, since those depend on extending a measure of determinacy over precisifications to a measure defined over the elements in the representor (which then gives you the chances of acting this way or that when a decision situation under indeterminacy comes up).

    That’s what allows me to say things I very much like. Suppose I’m forced to take a view on whether x is red and y is not, for x and y indiscriminable patches in the borderline area. Suppose all but one of the sharpenings say this is false, and the remaining one says it’s true. Then on my model, all but one of the members of your mental committee are certain that it’s false, and the last one is fully confident that it’s true. In that situation (with reasonable assumptions about the determinacy-measure over sharpenings) we get the prediction that with overwhelming probability we’ll judge it flat-out false. But if we now introduce extra committee members to satisfy something like contiguity, it’s really not clear what we’ll say, since there are certainly now a lot of committee members who recommend views other than flat-out rejection of the cut-off claim.
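
    A toy numerical version of the committee model (the committee size and the equal weighting of sharpenings are simplifying assumptions of mine):

        import random

        # Toy committee: 99 sharpenings reject the cut-off claim ("x is red
        # and y is not") and one accepts it; weight them equally under the
        # determinacy measure (a simplifying assumption).
        committee = [0.0] * 99 + [1.0]  # each member's credence in the claim

        def sampled_verdict():
            # Which member acts is chancy, with chances given by the
            # determinacy measure extended over the representor:
            member = random.choice(committee)
            return "accept" if member == 1.0 else "reject"

        # With overwhelming probability (0.99 here) the verdict is "reject".
        # Interpolating extra members with middling credences, to close the
        # representor, would dilute exactly this prediction.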

    So I think some surprising results in areas like the one you mention should be considered in the round, and in particular the knock-on costs of making changes should be weighed.

    I should say: I see the lack of closure as specific to the case of using imprecise credences to model indeterminacy-induced uncertainty. I think closure principles are pretty plausible for the standard use of imprecise credences in the context of Knightian uncertainty. (Essentially, hedging behaviour is just obviously permissible for ordinary uncertainty, whether of Ramsey’s kind or Knight’s, and closure conditions are needed to secure it in the latter case.) I think of this as a positive benefit of the view: it allows us to explain what the difference is between claiming that p is indeterminate and simply saying that we’re uncertain whether p in an ordinary sense. This helps give us a response to Williamson’s challenge to explain why supervaluation-style views don’t collapse into epistemicism.
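
    A small sketch of why closure matters for securing hedging, on one natural permissibility rule (the rule and the numbers are my illustration, not anything from the paper): an act is permissible iff it maximizes expected utility relative to some member of the representor.

        # Acts as maps from a probability for p to expected utility.
        acts = {
            "bet_on_p":      lambda p: p * 1 + (1 - p) * (-1),
            "bet_against_p": lambda p: p * (-1) + (1 - p) * 1,
            "abstain":       lambda p: 0.0,  # the hedge
        }

        def permissible(act, representor):
            # Permissible iff the act maximizes expected utility relative to
            # at least one member of the representor (one natural rule).
            return any(
                acts[act](p) >= max(f(p) for f in acts.values())
                for p in representor)

        unclosed = [0.1, 0.9]        # only the sharpening-induced members
        closed = [0.1, 0.5, 0.9]     # convex closure adds middling members

        print(permissible("abstain", unclosed))  # False: hedging not secured
        print(permissible("abstain", closed))    # True: the member 0.5 vindicates it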

  2. Brian Weatherson says:

    I worry about the accuracy argument overgenerating. Here’s one way it could.

    S is a scientist working on the molecular structure of water. She has some evidence that it is H2O, and some that it is H3O, but not enough to be sure either way. Of course, whatever it is, it is so necessarily. So the only two credence functions that aren’t (determinately!) accuracy-dominated are the one that has credence 1 that water is H2O, and the one that has credence 1 that water is H3O. But those aren’t the only rational credences.

    More generally, there’s a big issue (as you surely know!) about what exactly the worlds are in Joyce-style arguments. Should we include worlds where p v ~p fails to be settled? Should we include worlds where water is H3O? Should we include worlds where Alpha is Beta, even if she is necessarily Gamma? I think the answers are often yes.
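
    One way to make vivid how much turns on the choice of worlds, using the water case and the Brier score again (an illustration of mine):

        # Which credences in "water is H2O" are undominated depends on which
        # worlds the accuracy argument ranges over (1 = H2O true at the world).
        metaphysical_worlds = [1, 1, 1]  # H2O at every metaphysically possible world
        epistemic_worlds = [1, 0]        # an H2O-world plus an "impossible" H3O-world

        def dominated(credence, worlds):
            # Dominated iff some rival credence is strictly closer to the
            # truth value at every world in the set.
            rivals = [i / 100 for i in range(101)]
            return any(
                all((r - t) ** 2 < (credence - t) ** 2 for t in worlds)
                for r in rivals if r != credence)

        print(dominated(0.5, metaphysical_worlds))  # True: credence 1 dominates
        print(dominated(0.5, epistemic_worlds))     # False: middling credence survives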

    It’s true that this messes up any argument that requires counting committee members. That’s a cost, and one I’m not sure how to count.

  3. jrgwilliams says:

    I think the worlds should be something like ‘a priori possibilities compatible with the agent’s evidence’. And so the relevant claim would be something like: whether Alpha is Omega in S supervenes a priori on the B-facts, and your evidence tells you what the B-facts are (a limiting case: Alpha being Omega supervenes a priori on whether S is the case, i.e. the B-facts drop out of the picture altogether). That would ensure that whether Alpha is Omega doesn’t vary across the worlds relevant to the accuracy argument, which is what was needed.

    I agree that lots and lots of what we get out of Joyce’s argument depends on what the worlds are. To fix ideas, suppose we’ve got Lewisian space-times, plus some kind of abstract “sharpenings”. The argument that I gave turns on supposing that accuracy measures the distance from credal states to the classical truth-value assignments induced by the Lewisian worlds. And so it’ll be indeterminate, in many cases, how accurate a given credence in a sentence/proposition is.

    (The a priori supervenience claim I wanted looks very natural in this setting: the idea is that e.g. the sharpenings tell you whether the criterion of identity is physical or psychological, or how much of either kind of continuity is enough; and if S then tells you about the degree of psych/physical continuity, it looks like, holding fixed a sharpening, we get the same answer at each world. But it’s not enforced by the formalism, of course; we don’t have to think of sharpenings as working in that sort of principled way.)

    An alternative way of applying the accuracy arguments in a “classical” supervaluational setting (one that I know Andrew Bacon was arguing for at one point when I discussed this with him) is to say that the “worlds” relative to which accuracy is measured are Lewis-world-sharpening pairs. And then you don’t get my argument. (My main reason for not liking that is the collapse-into-epistemicism worries. But Andrew, for one, has a really interesting response to that in terms of the interaction of indeterminacy and desire.)

    So I do have lots of sympathy for the idea that sometimes we should include impossible worlds in the set we run accuracy arguments over (basically, I think we should when your evidence doesn’t rule them out a priori). For example, I’d allow logically impossible worlds within the set, in order to let you represent uncertainty about logic. That means we don’t get (classical) probabilism out of the Joyce-style argument, though we do get the result that, conditional on this or that assumption T about the way truth behaves, one’s credences should be T-probabilistic, for the generalized notion of probability given by T. But when the evidence we’ve got is extensive enough, or the a priori connections are helpful, we can get lots of detailed information out of the arguments.

  4. Brian Weatherson says:

    I’m not sure how the start and end of the comment are meant to fit together. If logical truths aren’t a priori, I think truths about identity wouldn’t be a priori either.

    That is, I can think of a strong notion of the a priori where the ideal a priori reasoner can deduce all of logic, and all of the fundamental metaphysical facts. I can think of a weak notion of the a priori where both logic and metaphysics are up for grabs.

    What I don’t have a very good grip on is a notion whereby facts about identity are a priori, but facts about logic are not.

  5. jrgwilliams says:

    Sorry, yes: I was a bit cryptic. The first and last paragraphs are really two independent thoughts. The strong notion is definitely an option, so far as the first paragraph goes. After all, part of the accuracy argument I was using relied on classical semantics, so if you were working with an agent “open to” nonclassical worlds, you couldn’t run the argument anyway. I was thinking that the relevant space of worlds (for an agent) was highly evidence-dependent, so that some agents might be open to nonclassical possibilities, while others would have the logical and philosophical evidence required to run the argument as originally stated.

    But thinking about it, I believe there’s a way to adapt the argument to agents whose evidence leaves them open to worlds where logic and metaphysics work differently. A shot at this follows.

    Suppose we start with an utterly arbitrary set X of worlds, scattering truth values however you like. Of course, most people are not going to devote much credence to crazy possibilities. Let’s suppose, in particular, that Alpha devotes the majority of his credence to a set of worlds Y such that (a) throughout Y classical logic holds; and (b) restricting quantifiers to worlds in Y, the supervenience claim is right.

    The accuracy argument gets you that undominated credences must be convex combinations of the worlds in X. Suppose Alpha’s credences meet that condition. Now consider Alpha’s conditional credences, given Y. These will be a convex combination of the worlds in Y. (Of course, this assumes that conditionalization works as expected in the generalized setting, but I’ve argued elsewhere that it does.) But now we can run the original argument with respect to Alpha’s credences conditional on Y, and get the result that those should be as I describe, i.e. describable via an unclosed representor (and that “closing” the resulting representor typically leads to determinately dominated credences). The elements of the representor won’t be classical possibilities, but what I’ve called elsewhere “generalized probabilities” (like your intuitionistic ones). And the story I give in the paper will largely go through, except that the sharpening-relative expected utility calculations will have to be done with a generalized expected utility theory (again, something I’ve discussed elsewhere).
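
    For what it’s worth, the conditionalization step can be written out explicitly (a reconstruction of mine, assuming worlds outside Y determinately assign Y, and hence A ∧ Y, the value 0). If the undominated credence function is a convex combination of world-valuations,

        Cr(\cdot) = \sum_{w \in X} \lambda_w v_w(\cdot), \quad \lambda_w \ge 0, \quad \sum_{w \in X} \lambda_w = 1,

    then, provided Cr(Y) > 0, and since classical logic holds throughout Y (so v_w(A \wedge Y) = v_w(A) for w \in Y),

        Cr(A \mid Y) = \frac{Cr(A \wedge Y)}{Cr(Y)} = \sum_{w \in Y} \frac{\lambda_w}{\sum_{w' \in Y} \lambda_{w'}} \, v_w(A),

    which is again a convex combination, now of the valuations induced by the worlds in Y.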
