Epistemic Conservatism

Daniel and I have been talking a lot about conservatism lately (Daniel’s been writing a book chapter on it), and we’re considering writing a joint paper on the topic. Here’s one of the things we’ve noticed that we’d like to write about.

A few importantly different kinds of epistemic conservatism seem to be floating around in the literature, with the distinctions between them neither remarked upon nor clearly drawn, although it is far from obvious how they are related.

Some versions are about how to update your beliefs (e.g. Quineans, Bayesians), others about how to evaluate beliefs at a time. Let’s call these ‘update-evaluating conservatism’ and ‘state-evaluating conservatism’ respectively. In the latter category, there are some versions which say that what matters is your belief state at an earlier time than the time which is being evaluated (e.g. Sklar), others which say that what matters is your belief state at that very time (e.g. Chisholm). Let’s call these ‘diachronic state-evaluating’ and ‘synchronic state-evaluating’ conservatism respectively. Here are some examples from each category:

Update-evaluating (always diachronic): The best updating strategy involves minimal change to your belief and credence structure.

Synchronic and state-evaluating: The fact that you believe p at t1 gives a positive boost to the epistemic valuation of your belief in p at t1.

Diachronic and state-evaluating: The fact that you believe p at t1 gives a positive boost to the epistemic valuation of your belief in p at t2.

Now, the interesting question: does believing one of these principles commit you to any or all of the others? In this paper by McGrath – one of the few I know of that talks about this stuff – it is assumed that the core of conservatism is an update-evaluating kind, but that this is equivalent in truth-value to a corresponding synchronic state-evaluating kind of conservatism.

But here’s one reason to doubt things are that simple. Suppose I have a belief at t1 that is so epistemically bad that there is nothing to be said in its favour. Suppose I retain that belief at t2, with no new evidence, purely through inertia. One might wish to approve of the update qua update-evaluating conservative, but not wish to proffer any corresponding (diachronic or synchronic) state-evaluating approval of the belief at t2 – which, after all, is still held for really bad reasons.

Comments, pointers to good things to read, etc. warmly invited.

5 Replies to “Epistemic Conservatism”

  1. Purely off the cuff: it seems that if you accept DSE, you should also accept SSE as a special case of DSE. It would be odd if the fact that you believe p has nothing to be said for it at t1 but has something to be said for it a microsecond later.

  2. I can think of some defences of DSE that wouldn’t give you SSE. For instance, suppose you want to defend DSE by arguing that something’s having been in your belief box for some time means it’s had a chance to knock around a bit with your other beliefs and bring any potential tensions to light. This could (although I’m not recommending it) be used to support some versions of DSE (though perhaps only versions where there is some constraint on how much later t2 must be than t1) but wouldn’t seem to support any kind of SSE.

  3. Interesting post.

    So, presumably a conservative will accept some principle of the form

    If S believes that p, then ceteris paribus, —-.

    where the blank gets filled in with a claim about the positive epistemic status of S in relation to p. If rationality is the relevant status, we can ask whether our choice between the fillings below could make a difference:

    1) S is rational to continue believing that p.

    2) S is rational to believe that p.

    3) S’s belief that p is rational.

    In my paper, I assumed fillings (1) and (2) would yield equivalent principles. Suppose you believe p. I couldn’t see how you could be rational to continue believing p without also being rational to believe p, and vice versa. I do of course see how you could be rational to continue believing something you don’t know, and ditto for other epistemic statuses less intimately related to your current perspective (I wasn’t sure what “epistemically bad” meant in your last paragraph). But if we are talking about rationality throughout, I don’t really — as of yet — see the difference.

    Whether fillings (1) and (3) yield equivalent principles is a harder question, I think, and depends on what you think about the “proper basing” requirement for doxastic rationality. David Owens, if I remember right, thinks there is a real difference here, and speaks of two dimensions of rationality, one of which is an evaluation of beliefs, which he thinks is a diachronic matter, and the other an evaluation of the subject, which if I recall he thinks is synchronic. He takes himself to be rejecting conservatism, but I’d say he is really rejecting only conservatism with filling (3) – i.e., conservatism about doxastic rationality. All this stuff is in his book Reason Without Freedom. I discuss this stuff briefly on pp. 9-10 of my paper.


  4. About the belief that is epistemically bad: when you say that nothing can be said in favour of it, do you mean that the agent cannot say anything in its favour (i.e., does not think that there are any reasons for the belief at all)?

    I’m just starting to wonder whether it is possible to believe something without believing that there are some reasons for the belief. I’m sceptical that we are free to adopt beliefs by mere acts of will, without our thoughts about evidence giving rise to them. Nor does it seem plausible that beliefs just happen to us.

  5. Jussi

    I didn’t mean anything in particular by ‘epistemically bad’ – this was just a kind of place-holder. Maybe no belief is so bad (in whatever epistemic sense we fix on) that there is nothing to be said (by anyone) in its favour, although I’m not sure. But still, I would make the same kind of point for beliefs which are just really bad even if they have something to be said for them.
