December 16th, 2005

Epistemic Liberalism and Luminosity

In the latest Phil Perspectives, Roger White has a paper, ‘Epistemic Permissiveness’, which argues against what he calls epistemic permissiveness, the view that in some evidential states there are multiple doxastic attitudes that are epistemically justified and rational. I call this epistemic liberalism, because at least in America ‘liberal’ is a nice word. (‘In America’ of course functions something like a negation operator.) I think there are a few things we liberals can say back to Roger’s interesting arguments. In particular, I think a liberalism that allows that among the rational responses some are epistemically better and worse, just as we think that among the morally permissible actions some are morally better than others, has some resources to deploy against his challenges. But for now I want to take a different tack and defend liberalism directly.

Some days I’m inclined to think that liberalism, like anti-scepticism, is so obviously plausible from casual observation of the world that the challenge is to find what is wrong with any anti-liberal argument. But unlike anti-scepticism, I think we can argue for liberalism as follows. The argument assumes that Timothy Williamson’s anti-luminosity view is broadly correct. Although I’ve argued that Williamson doesn’t rule out all possible luminous sources, I think he’s right that there is much less that is luminous than we previously thought.

Consider a series of agents, e0, …, en, such that e0 has what we would normally regard as compelling evidence that ~p, but each ei has ever so slightly more evidence that points in favour of p, so that taken as a whole en has compelling evidence that p, and even we liberals think that her only permissible state is to believe that p.

Now assume, for reductio, that liberalism is false, so each ei either must believe that ~p, must suspend judgment as to whether p, or must believe that p. Since as i increases ei has strictly more evidence that p, in the sequence there is a first person who must suspend judgment as to whether p, and a first person who must believe that p. Consider the latter person; call her ej. (By the way, I’m not assuming epistemicism about vagueness here; if supervaluationism is true everything I’ve said is supertrue.)

Now ej must, if she is rational, believe that p. But by standard safety considerations, she cannot know this. The reason is that her evidence is practically indistinguishable from the evidence that ej-1 has, and ej-1 cannot rationally believe that p. If ej believes that she must believe that p, her belief will be unsafe and hence not knowledge. Assuming knowledge is the norm of belief, at least when it comes to propositions about epistemic justification, it follows that ej must rationally suspend judgment about whether she must (indeed may) believe that p.

This last step has to be taken carefully. Why say that it follows from ej not knowing something that she can’t rationally believe it? I think for the following reason. It’s a platitude that belief aims at knowledge. The best thing to do is to know. And, quite plausibly, it is better to suspend judgment than to believe without knowing. Now anti-liberalism says that only the best will do. Anything else is irrational, by the definition of anti-liberalism. So the anti-liberal says that it is only rationally permissible to believe what you know. Perhaps we can imagine some variations on this in cases of deception (if it mistakenly seems that p, perhaps it is better to believe p than suspend judgment) but this is not a case of deception; ej does not get any misleading evidence at all about the support her evidence offers for p. In these cases at least (and perhaps in all cases) the best thing to do is to believe iff you know.

So rationality demands of poor ej that she believe that p, and suspend judgment about whether it is rationally permissible in her situation to believe that p.
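
Schematically, writing R for ‘it is rationally required that’, P for ‘it is rationally permitted that’, B for ‘ej believes that’, and S for ‘ej suspends judgment as to whether’ (so S q amounts to not B q and not B not q), the upshot for the anti-liberal is that both of these hold:

R Bp
R (not B (P Bp))

The second follows from R S (P Bp), since suspending judgment on a question involves believing neither answer. The notation is only shorthand for the prose above, not an extra premise.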

This seems to me like a very bad result. Indeed, it is not at all clear that it could be rational to have those attitudes. It seems to me that the rational agent should believe that their beliefs are rationally permissible. But the anti-liberal says that ej must not believe this if she is to be rational. That seems like a reductio of anti-liberalism to me.

Note this is a result that the liberal avoids. The liberal thinks there are more boundaries than the anti-liberal sees.

First, there is the boundary between those who must believe p and those who need not. Let ek be the first agent on the continuum who must believe that p. If she is rational she believes that p, of course, but since the agents near her all can believe that p, she can know that it is rational to believe that p.

Second, there is the boundary between those who may believe that p and those who may not. Let em be the first agent on the continuum who may believe that p. Now she can believe p, but she can’t know that she can believe that p. Is this a problem in itself? Well no, it’s just a consequence of anti-luminosity that such agents exist. What would be a problem would be if she could rationally believe p, but couldn’t believe such a belief is rational. But if liberalism is true this isn’t the position em is in. Perhaps in her case the best attitude to take towards the rational permissibility of believing p is to suspend judgment. (At least that’s the best thing ceteris paribus; since she believes p perhaps ceteris isn’t paribus here. I’m not sure how to resolve this issue.) But the liberal also thinks that sometimes doing less than the best can be rational, just like doing less than the best can be moral. So it’s consistent with liberalism that em can rationally believe that it is rational to believe that p.
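
In the same shorthand as before (again, just a gloss on the prose, not an extra premise), the liberal’s picture of the two boundary agents is roughly:

ek: R Bp, and she can know that P Bp, since her neighbours on the continuum also have P Bp
em: P Bp, but not K (P Bp); still, liberalism leaves P B (P Bp) open

So neither agent is forced into the combination that embarrassed the anti-liberal: being required to believe p while being required to suspend judgment about whether that belief is permissible.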

These borderline cases are hard. But the liberal can say things about them that sound coherent, and even plausible. The anti-liberal is forced to say it is rationally mandatory to have a belief and suspend judgment about the rationality of your own belief. That seems like an implausible, even incoherent, description. So these cases offer strong support for liberalism.

Posted by Brian Weatherson in Workbench


12 Responses to “Epistemic Liberalism and Luminosity”

  1. Clayton says:

    Brian,

    I like the argument but I’m not convinced that it is an argument for liberalism. It might just as easily be an argument for thinking that if you assume that the norm for belief is knowledge, then to account for ordinary judgments about epistemic rationality you will have to allow for the possibility of rationally compelled wrongdoing.

    Take your standard Gettier-type case. Our subject may be conscientious and quite concerned to determine whether p is true. Owing to considerations he’ll never uncover, we’re not prepared to say he knows p when he comes to believe p. If after searching and acquiring mountains of evidence, he harbors serious doubts as to whether p and knows that there is no particular reason to doubt p, we will think he’s being unreasonable. I’m tempted to say that if S is unreasonable in refraining from believing, S is rationally compelled to believe. Still, S doesn’t know.

    One might recoil at this point and deny that K is the norm for belief, but better to tough it out and ask why we should assume that being rationally compelled to believe should entail that the belief in question is permissible. (One reason not to turn against the knowledge norm is that as Williamson has shown (said for rhetorical effect) you’ll be hard pressed to find a different norm for which this type of problem simply doesn’t arise).

    I think you can describe the kinds of conflicts that worry you in a way that makes them seem incoherent, but here is a description I think is perfectly coherent. We can say of the poor lost soul who is rationally compelled to believe p but doesn’t know p that unlike the subject who knows p, she is excused in believing p whereas the subject in the know is justified. On at least some views of the excuse/justification distinction, both excused and justified beliefs will be beliefs held by rationally competent and capable subjects, the difference between justification and excuse will reside in the fact that whereas the one subject satisfies the norm, the other merely has good evidence that she does. It is recognition of this that makes us want to say she’s rationally compelled to believe. It is in virtue of her violating the norm that we have to say she shouldn’t believe. (I know there is a tendency amongst epistemologists to reject this description because they think excuse is not a positive status whereas rational is, but if we didn’t see anything positive in the agent’s coming to believe, we’d not be prepared to offer the excuse but would have to defend the agent from criticism by trying to show the agent didn’t satisfy the conditions for being responsible at all, and that’s an exemption rather than an excuse in Strawson’s terminology).

    _________________________
    There is a complication having to do with the fact that there are times when we think that the fact that you are rationally compelled to think A-ing is required is itself a reason to A and can on balance make A-ing required. This kind of problem I think goes away if you deny that when it comes to belief there can be positive duties to believe, allowing only permissions to believe and prohibitions against believing.

  2. Thomas Kelly says:

    One note about this (actually, a plug): there will be a symposium on this topic—consisting of Roger presenting his arguments, and me attempting to respond to them on behalf of the epistemic liberal/permissivist—at the upcoming Eastern meetings. Unfortunately, the time slot (9a.m.-11 on the last day) is suboptimal, but, given the excellence of Roger’s paper, I’m sure that attendance is at least rationally permissible…

  3. mark says:

    Isn’t it dubious that a “belief” that one regards as held arbitrarily really is a belief? Similarly for suspensions of belief. In examples of your kind surely the doxastic states of a rational person will in fact vary vaguely (from not believing that p to believing that p, and from not believing that believing that p would be mandatory to believing that believing that p would be mandatory). Thinking in supervaluational terms seems particularly unhelpful here (“poor ej” is a semantic construct who doesn’t exist). To use the metaphor of degrees of truth, perhaps the degree of truth of “ej believes that p” somewhat exceeds that of “ej believes that believing that p is mandatory”. Does that seem so bad?

  4. Brian Weatherson says:

    Mark, where did arbitrariness come into it? I assume you’re thinking that if you regard alternatives to your actual belief as permissible you regard your own belief as held arbitrarily. But this is nonsense. Clearly in the history of science there are plenty of occasions where scientists with competing theories needed to regard neither their opponents’ views as irrationally held nor their own views as arbitrary. They just have to regard their opponents as being wrong, not as believing irrationally. Or maybe there is some other way that arbitrariness comes into it, but I have no idea what it might be. I certainly didn’t use the word ‘arbitrary’ and I don’t have any idea what could be a sound argument from the things I did say to the conclusion that some of the salient beliefs are held arbitrarily.

    Clayton, I think I should have included more than two options here. I was focussing on the debate between those who think there is one rational response to the evidence and those who think there is more than one. Perhaps the Williamsonian position is that in some circumstances, those involving deception, there is no rational response. That wouldn’t be a form of liberalism to be sure, so I should have more careful arguments against it.

    I might just be running together things that should be kept apart here, but I think the thing for the liberal to say is that knowledge really is the aim of belief (even in cases like the ones you describe) but that beliefs that don’t satisfy the aim can be rational. All of us who believe in the ethical supererogatory say the same kind of thing about moral action, decision, virtue etc – there’s a distinction between what we aim at and what we’re required to achieve.

  5. Roger White says:

    Brian, your argument is ingenious as usual (thanks Tom K for pointing me to it). I’m buried in a search cmte, but I have a brief reaction if I’ve understood it right. Not surprisingly I’m worried about the unsafety to irrationality move. Perhaps knowledge is the aim of belief in that the best state to be in qua believer is to know. But if unfortunately I miss out on knowledge, does it follow that no belief is better than belief? The reaction most of us have to Gettier examples is that the poor Gettiered victim is doing better as a believer by believing given his excellent reasons than if he didn’t believe, even if knowledge has eluded him. So it would be irrational for him to suspend belief when belief is the better option. We should aim for the Gold, but if we miss it doesn’t follow that we should settle for the Bronze. Perhaps ej’s reasons that determine that she must believe that p determine that she must believe that she must believe it too, even though unbeknown to her she doesn’t know it. “It’s better to suspend judgment than to believe without knowing”. This can seem plausible since of course if I realise I don’t know then believing is foolish (well that seems often the case). So ‘don’t believe if you don’t know’ seems like good advice to follow. But I’m not persuaded that someone who can’t tell he doesn’t know can’t rationally believe.

  6. Dilip says:

    I agree with Roger that anti-liberalism isn’t really the culprit here. Here is one way of reconstructing Brian’s argument:

    B: “ej believes that”
    K: “ej knows that”
    R: “it is rationally required that”
    P: “it is rationally permitted that”
    S: “ej suspends judgment as to whether”

    Sq iff (not Bq and not B not q). R distributes over conjunction and the conditional.

    1. R Bq
    2. not K R Bq
    3. not K R Bq —> R S P Bq
    4. Thus, R S P Bq
    5. Thus, R (not B P Bq and not B not P Bq)
    6. Thus, R (not B P Bq)

    Thus, from (1) and (6), ej is rationally required to believe that q and ej is rationally required not to believe that he is rationally permitted to believe q.

    Brian writes: “the rational agent should believe that their beliefs are rationally permissible”, which might be cashed out as:

    7. R B (Bq —> P Bq)

    or perhaps as:

    8. R (Bq —> B P Bq)

    in which case, we have from (1) and (8):

    9. R (B P Bq)

    By (6) and (9), rationality seems to place conflicting demands on ej.

    The awkward result seems to stem not from anti-liberalism, but from combining Williamson’s epistemology with the claim that if one doesn’t know that q, then rationality requires one to suspend judgment as to whether q. For example, in a Williamson-style anti-luminosity series for feeling cold, there will be a point in the series at which the agent knows that he feels cold, but doesn’t know that he knows that he feels cold. This leads to the following argument which closely parallels Brian’s:

    10. Kq
    11. not K Kq
    12. not K Kq —> R S Kq
    13. Thus, R S Kq
    14. Thus, R (not B Kq and not B not Kq)
    15. Thus, R (not B Kq)

    (The principle that underlies (12) is not precisely the one that underlies (3); but that should be okay, since the former should be weaker than the latter. The principle underlying (12) is that if one doesn’t know that q, then one is rationally required to suspend judgment as to whether q. The principle underlying (3) seems to be that if one doesn’t know that one is rationally required to believe that q, then one is rationally required to suspend judgment as to whether one is permitted to believe that q. But if one is rationally required to suspend judgment as to whether one is permitted to believe that q, then one is rationally required to suspend judgment as to whether one is required to believe that q.)

    Presumably, if one knows that q, then one is rationally required to believe that q:

    16. Kq —> R Bq

    in which case, from (10) and (16), we get:

    17. R Bq

    The rational agent should believe that her beliefs constitute knowledge, which we could cash out (in parallel to (8)) as:

    18. R (Bq —> B Kq)

    So, by (17) and (18), we have:

    19. R B Kq

    But (15) and (19) seem to place incompatible demands on the agent.

    One might question (16), but we could still get to (19) from (10), if we assumed (a) that knowledge entails belief and (b) that if one believes that q, then one is rationally required to believe that one knows that q, i.e. Bq —> R B Kq. (But perhaps some will reject that principle as well.)

  7. Brian Weatherson says:

    Roger,

    I agree that Gettier cases seem problematic for the thesis that ideally belief tracks knowledge. But these aren’t Gettier cases – they are just cases of imprecise perception. And in those cases I think the thesis is very plausible. Think of Williamson’s standard cases: estimating a tree’s height or a crowd size, in each case with a clear view of the target. In those cases I think it is very plausible that ideal belief tracks knowledge.

    Dilip,

    I think a lot of the principles you use require anti-liberalism. I definitely think that about 12. The liberal shouldn’t accept that. (Your argument for it is based on my argument for 3, but I only argue for 3 in the scope of a reductio of anti-liberalism.)

    The liberal thinks that in borderline cases the agent is permitted to go either way, either believe or suspend belief. It might be best to track knowledge, but it isn’t irrational to just plump for one or the other. And I certainly think that Bq -> RBKq is much stronger than anything a liberal need accept.

  8. Dilip says:

    I agree that the liberal needn’t accept (12)/(3). My point wasn’t to show that everyone is committed to the awkward result, but that (12)/(3) is what leads to the awkward result, and that it isn’t clear why the anti-liberal is committed to (12)/(3).

    You might be right that some of the other principles I appealed to are ones that only an anti-liberal would accept, in which case my argument doesn’t have its intended effect. But I take it that the important question — the one Roger raises — is why the anti-liberal must accept the principle that if one does not know that q, then one is rationally required to suspend judgment as to whether q.

  9. Matt Weiner says:

    Isn’t it plausible that sometimes the ideal is to believe that p, but not to believe that you know that p? For instance, I believe that the Steelers will make the playoffs (since they have the tiebreak over the Chargers and an easier schedule, and the Jaguars have such an easy schedule that it seems unlikely that there will be a three-way tie), but I don’t believe that I know this. But this would be ruled out utterly by your claim that, “quite plausibly, it is better to suspend judgement than to believe without knowing,” and I don’t see how epistemic liberalism lets it back in—unless epistemic liberalism permits a belief state that I know to be suboptimal (in this case, belief without knowing). And that seems implausible. If I know that it would be better to suspend judgment (since I know that I don’t know that the Steelers will win), I ought to.

    My belief state with respect to the Steelers is exactly what I ought to have if: (1) knowledge is the norm of belief and (2) I know that the Steelers will win but don’t know that I know. [Or, since we want to rule in Gettier cases and justified false beliefs, if I have enough justification for K but not KK.]

    This may sit uncomfortably with the idea that you should believe that your beliefs are rationally permissible, but that seems like the sort of thing that you’ll wind up with all the time if you combine rejection of the KK principle with the idea that belief aims at knowledge. The solution, I think, is to drop the idea that belief aims at knowledge (maybe it aims at truth, and the standard on whether it is rational is whether you have enough justification for it given relevant pragmatic encroachments). But then does the argument against anti-liberalism go through?

  10. Roger White says:

    The general thesis:

    Ideally, a subject who doesn’t know suspends judgment (regardless of whether she knows she doesn’t know)

    (leaving open whether such suspension is ‘supererogatory’ or rationally required)

    is clearly false given the case of deception with strong evidence, and is very hard to maintain in the light of Gettier examples. So to be plausible it must be restricted. What would the restriction look like? I haven’t thought much about this but I’d be interested to know what might be special about Williamson-style cases that makes something like this principle seem plausible (at least to Brian). A couple of salient features of these cases are a) the knowledge at issue is knowledge of knowledge (or in Brian’s variation, knowledge of rational requirement) and b) the lack of knowledge is due to a safety violation. But I’m not seeing how either of these features is relevant.

    btw, I chose “permissive” over “liberal” (apart from its having the right force to chastise people for their loose living) because many have followed Jim Pryor in ‘What’s wrong with Moore’s Argument’ in using ‘liberal’ for a very different view.

  11. Dsosa says:

    There’s evidence and then there’s the use of evidence. Some people are better at making use of evidence (drawing inferences from it, for example) than others. Not excelling in drawing inferences is a flaw; but perhaps it does not make you irrational. So agents with the same evidence may, rationally, differ in their beliefs: one, rationally, draws an inference from that evidence while the other, not irrationally, doesn’t.

    Also, Twins might—plausibly if controversially—have the same evidence for the belief that water is potable. Perhaps their evidence is no more (and no less) evidence for the belief that water is potable than it is for the belief that twater is potable. But one of them, because of their causal circumstance, forms (inductively) the belief that water is potable, the other does not. Nobody’s irrational.

  12. Roger White says:

    David, good point, I take it a more concrete case of your first point might be:

    Shared evidence E: there are 2046968 people here
    Conclusion P: the number of people here is a multiple of 77

    I’m rationally permitted to have low credence in P (until I find a pencil and envelope) but a mathematician may have high credence.
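
    (For what it’s worth, the arithmetic in the example does come out in P’s favour: 2046968 = 77 × 26584, so the mathematician’s high credence would in fact be accurate.)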

    This indeed raises difficult questions about what counts as evidence and just how to formulate theses of uniqueness/permissivism/liberalism. We might wonder whether there is really shared evidence in this case (the mathematician establishes lemmas in his head on the way to P which give him more evidence); the possible case of someone who can just ‘see’ that 2046968 is a multiple of 77 is trickier.

    In your second case, even if Twoger and I have the same evidence, I take it he can’t even entertain the thought that water is potable. So it’s not that he rationally takes a different doxastic attitude to the same proposition: he takes none. We should all agree that one can rationally take no attitude at all to a matter that one has never even thought about. Of course if Twoger comes to visit for a while things get complicated. But I think it takes some work to turn this into a permissive case.