This is a short post where I jot down my initial impressions of Jake Ross and Mark Schroeder’s interesting paper Belief, Credence, and Pragmatic Encroachment, and in particular how it compares to my views on belief and credence. I’m not going to summarise the paper, so this post won’t make a lot of sense unless you’ve read their paper too.
Following up a little from yesterday’s post, I think that many people on both sides of the IRI debate have misconstrued the force of IRI examples. For instance, here’s what Jason Stanley says is the take-home message of the examples.
The advocate of IRI simply proposes that, in addition to whatever one’s favored theory of knowledge says about when x knows at time t that p, there is a further condition on knowledge that has to do with practical facts about the subject’s environment. (Knowledge and Practical Interests, pg. 85)
I think that’s wrong, or at least misleading, in a couple of respects. (And I think it’s trivial to say “when x knows at time t that p“; it’s at t, isn’t it?!)
- It suggests that interests are a “further condition” on knowledge, rather than integrated into the other conditions.
- It suggests, perhaps indirectly, that the ‘low stakes’ condition is the analytically basic condition, and there are extra conditions in ‘high stakes’ cases.
I’m not sure Jason commits to either of these claims, but I think critics of IRI have often taken those claims as being part of the theory, and I don’t think those critics are being entirely uncharitable when they do that. Be that as it may, I think both claims are false, and certainly neither claim is supported by examples motivating IRI. (Or, if you’re like me, by theoretical arguments motivating IRI; I don’t think the examples show a great deal.)
Here’s an alternative way to capture the motivations behind IRI that doesn’t endorse a “further conditions” view, and takes the ‘high stakes’ case to be analytically basic.
There are coherence constraints on knowledge. Violations of them amount to doxastic defeaters. Some of these constraints are simple. I think, for instance, the following constraint is plausibly a universal truth.
- If x believes ¬p, then x does not know that p.
It doesn’t matter whether x‘s belief that p is true, justified, safe, sensitive, not derived from falsehoods, caused by the truth of p, robust with respect to the addition of further true beliefs, or whatever you like. If x believes both p and ¬p, there is too much incoherence in that part of her belief space for there to be knowledge. The belief that ¬p is a doxastic defeater of the (putative) knowledge that p.
There’s a motivation for this. Knowledge that p should mean that adding p to the cognitive state, and making the subsequent alterations that a good theory of belief revision suggests, would make no changes whatsoever. If x knows p, then p has, in effect, already been added. This suggests a further constraint.
- If x prefers φ to ψ, but prefers ψ ∧ p to φ ∧ p, she doesn’t know p.
In this case, however, the principle needs to be qualified. This seems to basically rule out anyone (rational) knowing p unless p is absolutely certain. (Proof sketch: Let ψ be the act of taking a bet on p at crazy long odds, and φ be the act of declining that bet.) So we qualify the principle. How? IRI, or at least one version of it, says that the qualification is interest relative. So the real rule is something like this:
- If x prefers φ to ψ, but prefers ψ ∧ p to φ ∧ p, she doesn’t know p, unless one of φ and ψ is too irrelevant to x‘s current interests.
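The proof sketch behind the need for qualification can be made concrete with a toy expected-value calculation. This is just an illustration, and all the numbers are hypothetical: the point is only that for any credence in p short of 1, a bet at sufficiently long odds makes x prefer declining (φ) to taking the bet (ψ), even though conditional on p the bet simply pays out.

```python
# Toy illustration of the proof sketch: unless x is absolutely certain
# of p, some long-odds bet on p violates the unqualified preference
# condition, so the unqualified principle would rule out nearly all
# knowledge. All figures here are hypothetical.

def expected_value(credence, win, lose):
    """Expected value of a bet paying `win` if p, costing `lose` if not-p."""
    return credence * win + (1 - credence) * lose

credence = 0.999          # x is very confident of p, but not certain
stake = 1_000_000         # what x loses if p turns out false
payout = 1                # tiny gain if p is true: crazy long odds

take_bet = expected_value(credence, payout, -stake)   # the act ψ
decline = 0.0                                         # the act φ: no bet

# x prefers φ (declining) to ψ (taking the bet)...
assert decline > take_bet

# ...but conditional on p the bet just pays out, so x prefers ψ ∧ p
# (win the payout) to φ ∧ p (win nothing).
assert payout > 0
```

So on the unqualified principle, x’s mere willingness to decline the bet would already show she doesn’t know p, however confident she is.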
The general picture is this. Knowledge that p requires that one’s belief that p sufficiently cohere with the rest of one’s cognitive state. Ideally, we’d like it to perfectly cohere. But requiring perfect coherence seems to lead to scepticism. So there is an exception clause. And it says that when you’re in a ‘low stakes’ context, certain restrictions on knowledge don’t apply. So a belief that doesn’t fully cohere can still be knowledge.
I think that makes better sense of how interests fit into knowledge. It isn’t that knowledge merely requires a belief with high utility. Or that changing one’s interests can be the basis for knowledge. It’s rather that certain barriers to knowledge get lowered in certain (relatively familiar) situations.
I’m in general sympathetic to approaches to knowledge that say that the sceptical scenario is the basic one, and the reason we have a lot of knowledge is because ‘exceptions’ to the rather tight restrictions on knowledge are frequently triggered. That is one way to explain the appeal of scepticism; abstract away too much from real-life situations and you lose the triggers for these exceptions, and then scepticism is true. The kind of lottery situations that IRI people worry about aren’t cases where strange new conditions on knowledge are triggered. Rather, they are cases where abstract Cartesian standards for knowledge are restored, just like they are in the simplest models for knowledge.
Jon Kvanvig has a very puzzling objection to interest-relative invariantism (IRI). He claims, I think, that IRI gets the wrong results in cases where there is a lot at stake, but the agent in question gains a lot.
But the objection is puzzling because I can’t even figure out why he thinks IRI has the consequences he says it has. Here’s what I take to be the distinctive claim of IRI.
Consider cases where the following is all true:
- The right thing to do given p is X.
- The right thing to do given Probably p is Y.
- The agent has a lot of evidence for p; sufficient evidence to know p ceteris paribus.
- The agent faces a live choice between X and Y, and the right thing to do in the agent’s situation is Y.
In those cases, we say that the agent doesn’t know p. If they did know p, it would be right to do X. But it isn’t right to do X, so they don’t know p. And this is a form of interest-relativity, since if they were faced with different choices, if in particular the X/Y choice wasn’t live, they may well know p.
As Kvanvig notes, the usual way this is illustrated is with cases where the agent stands to lose a lot if they do X and ¬p is true. But that’s not necessary; here’s a similar case.
S heard on the news that GlaxoSmithKline has developed a new cancer drug that will make billions of dollars in revenue, and that its share price has skyrocketed on the news. Intuitively, S knows that GSK’s share price is very high. Later that day, S is rummaging through his portfolio, and notices that he bought some call options on GSK at prices well below what he heard the current share price is. S is obviously extremely happy, and sets about exercising the options. But as he is in the process of doing this, he recalls that he occasionally gets drug companies confused. He wonders whether he should double check that it is really GSK whose price has skyrocketed, or whether he should just exercise the option now.
Here are the relevant X, Y and p.
X = Exercise the option.
Y = Spend 10 seconds checking a stock ticker to see whether it is worth exercising the option, then do so if it is, and don’t if it isn’t.
p = GSK share price is very high.
Given p, X is better than Y, since it involves 10 seconds less inconvenience. Given Probably p, Y is better than X, since the only downside to Y is the 10 seconds spent checking the stock ticker. The downside of X isn’t great. If S buys shares that aren’t that valuable, he can always sell them again for roughly the same price, and just lose a few hundred dollars in fees. But since any reasonable doubt will make it worth spending 10 seconds to save a risk of losing a few hundred dollars, Y is really better than X.
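The ordering described above can be checked with some back-of-the-envelope figures. The specific numbers below are my own hypothetical choices; nothing turns on them, only on the ordering they generate between X and Y given p versus given merely Probably p.

```python
# Hypothetical figures for the GSK case; only the orderings matter.

gain_if_p = 10_000        # profit from exercising if GSK really is up
fee_loss = -300           # fees lost if S mixed up the drug companies
check_cost = -0.10        # disutility of 10 seconds at a stock ticker

def ev_exercise(credence):
    # X: exercise the option immediately
    return credence * gain_if_p + (1 - credence) * fee_loss

def ev_check(credence):
    # Y: spend 10 seconds checking, then exercise only if it's worthwhile,
    # so the fee loss is never incurred
    return check_cost + credence * gain_if_p

# Given p (credence 1), X beats Y by exactly the 10-second checking cost.
assert ev_exercise(1.0) > ev_check(1.0)

# Given merely Probably p, any reasonable doubt makes Y the better act.
assert ev_check(0.98) > ev_exercise(0.98)
```

With these figures, a 2% chance of having confused the companies is already enough to make the 10-second check the rational choice.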
So, I think, S doesn’t know that p. Once he knows that p, it makes sense to exercise the option. And he’s very close to knowing that p; a quick check of any stock site will do it. But given the fallibility of his memory, and the small cost of double-checking, he doesn’t really know.
So IRI works in cases where the agent stands to gain a lot, and not just where the agent stands to lose a lot. I haven’t seen any cases conforming to the template I listed above where IRI is clearly counter-intuitive. In some cases (perhaps like this one) some people’s intuitions are justly silent. But I don’t think there are any where intuition clearly objects to IRI.
As I’m sure everyone knows, it hasn’t been a great year for trying to get academic jobs in philosophy. And there are several very good students (including some from Rutgers) who would normally have gotten good jobs with ease, but haven’t got anything this year. But in the midst of all that, a number of Rutgers students have done very well on the job market, and I wanted to congratulate them on their success.
So far, the following students at Rutgers have received tenure-track positions:
- Meghan Sullivan – Notre Dame
- Karen Lewis – USC
- Gabe Greenberg – UCLA
- Allison Hepola – Samford
- Jennifer Nado – Lingnan
The following students have post-docs:
- Carrie Swanson – Indiana
- Evan Williams – Purdue
- Luvell Anderson – Penn State
And these recent graduates have new tenure-track positions:
- Christy Mag Uidhir – University of Houston
- Julie Yoo – CSU, Northridge
Well done to all of them, and to Jeff McMahan for a great job as placement director.
I believe that of the 7 tenure-track jobs at top 20 departments listed on the Leiter Reports hiring thread, 3 went to Rutgers students. So well done all. And hopefully there’s more good news to report before the market winds up for the year.
It’s been a while since I posted here, largely because the young one on the right has been taking up a fair amount of time. So here are a few bits of news.
- Andy Egan and I have (very slowly) put together a collection of papers on epistemic modals and epistemic modality, and it is coming out with OUP this spring. The collection isn’t perfect; it should have come out ages ago, and the contributor list is missing a certain something, but we hope it’s a valuable addition to the literature. I’ll hopefully write more about this closer to publication, especially about what I wish I’d done differently along the way to publication.
- The Philosophical Quarterly essay competition this year is on the topic of Hume after 300 years. There is a £1500 prize, so get your Hume papers ready.
- In Defence of a Kripkean Dogma is now available in preprint on the PPR website. Hopefully it will be in paper format soon!
- There is a workshop on Formal Epistemology and Experimental Philosophy at Tilburg University this Fall.
- The March 2011 issue of Philosophy Compass is out, with papers on truth in fiction, truthmaking, Leibniz’s Law, and many other fun topics.