This is a short post where I jot down my initial impressions of Jake Ross and Mark Schroeder’s interesting paper “Belief, Credence, and Pragmatic Encroachment”:http://www-bcf.usc.edu/~jacobmro/ppr/Belief_Credence_and_Pragmatic_Encroachment.pdf, and in particular how it compares to my views on belief and credence. I’m not going to summarise the paper, so this post won’t make a lot of sense unless you’ve read their paper too.
What is IRI?
Following up a little from “yesterday’s post”:, I think that many people on both sides of the IRI debate have misconstrued the force of IRI examples. For instance, here’s what Jason Stanley says is the take-home message of the examples.
bq. The advocate of IRI simply proposes that, in addition to whatever one’s favored theory of knowledge says about when _x_ knows at time _t_ that _p_, there is a further condition on knowledge that has to do with practical facts about the subject’s environment. (Knowledge and Practical Interests, pg. 85)
I think that’s wrong, or at least misleading, in a couple of respects. (And I think it’s trivial to say “when _x_ knows at time _t_ that _p_”; it’s at _t_ isn’t it?!)
- It suggests that interests are a “further condition” on knowledge, rather than integrated into the other conditions.
- It suggests, perhaps indirectly, that the ‘low stakes’ condition is the analytically basic condition, and there are extra conditions in ‘high stakes’ cases.
I’m not sure Jason commits to either of these claims, but I think critics of IRI have often taken those claims as being part of the theory, and I don’t think those critics are being entirely uncharitable when they do that. Be that as it may, I think both claims are false, and certainly neither claim is supported by examples motivating IRI. (Or, if you’re like me, by theoretical arguments motivating IRI; I don’t think the examples show a great deal.)
Here’s an alternative way to capture the motivations behind IRI that doesn’t endorse a “further conditions” view, and takes the ‘high stakes’ case to be analytically basic.
There are coherence constraints on knowledge. Violations of them amount to doxastic defeaters. Some of these constraints are simple. I think, for instance, the following constraint is plausibly a universal truth.
- If _x_ believes ¬_p_, then _x_ does not know that _p_.
It doesn’t matter whether _x_’s belief that _p_ is true, justified, safe, sensitive, not derived from falsehoods, caused by the truth of _p_, robust with respect to the addition of further true beliefs, or whatever you like. If _x_ believes both _p_ and ¬_p_, there is too much incoherence in that part of her belief space for there to be knowledge. The belief that ¬_p_ is a doxastic defeater of the (putative) knowledge that _p_.
There’s a motivation for this. Knowledge that _p_ should mean that adding _p_ to the cognitive state, and making the subsequent alterations that a good theory of belief revision suggests, would make no changes whatsoever. If _x_ knows _p_, then _p_ is already part of her cognitive state, so adding it changes nothing. This suggests a further constraint.
- If _x_ prefers φ to ψ, but prefers ψ ∧ _p_ to φ ∧ _p_, she doesn’t know _p_.
As it stands, however, the principle needs to be qualified, for it seems to rule out anyone (rational) knowing _p_ unless _p_ is absolutely certain. (Proof sketch: Let ψ be the act of taking a bet on _p_ at crazy long odds, and φ be the act of declining that bet.) So we qualify the principle. How? IRI, or at least one version of it, says that the qualification is interest relative. So the real rule is something like this:
- If _x_ prefers φ to ψ, but prefers ψ ∧ _p_ to φ ∧ _p_, she doesn’t know _p_, unless one of φ and ψ is too irrelevant to _x_’s current interests.
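The proof sketch behind the need for qualification can be made concrete with a toy expected-utility calculation. All the numbers below (the credence, the stakes of the bet) are made-up for illustration, not from the post:

```python
# With credence in p even slightly below 1, there is always a bet on p
# at long enough odds that a rational agent prefers to decline it --
# even though, conditional on p, taking the bet is strictly better.

def expected_utility(credence_p, payoff_if_p, payoff_if_not_p):
    return credence_p * payoff_if_p + (1 - credence_p) * payoff_if_not_p

credence = 0.999          # very confident, but short of certainty
win, lose = 1, -10_000    # "crazy long odds": tiny gain, huge loss

eu_take = expected_utility(credence, win, lose)  # 0.999 - 10 = -9.001
eu_decline = 0.0                                 # declining changes nothing

# The agent prefers declining (φ) to taking the bet (ψ)...
assert eu_decline > eu_take
# ...but conditional on p, taking is better: ψ ∧ p beats φ ∧ p.
assert win > eu_decline
```

So by the unqualified principle the agent doesn’t know _p_, no matter how close to 1 her credence is; hence the need for the interest-relative exception clause.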
The general picture is this. Knowledge that _p_ requires that one’s belief that _p_ sufficiently cohere with the rest of one’s cognitive state. Ideally, we’d like it to perfectly cohere. But requiring perfect coherence seems to lead to scepticism. So there is an exception clause. And it says that when you’re in a ‘low stakes’ context, certain restrictions on knowledge don’t apply. So a belief that doesn’t fully cohere can still be knowledge.
I think that makes better sense of how interests fit into knowledge. It isn’t that knowledge merely requires a belief with high utility. Or that changing one’s interests can be the basis for knowledge. It’s rather that certain barriers to knowledge get lowered in certain (relatively familiar) situations.
I’m in general sympathetic to approaches to knowledge that say that the sceptical scenario is the basic one, and the reason we have a lot of knowledge is because ‘exceptions’ to the rather tight restrictions on knowledge are frequently triggered. That is one way to explain the appeal of scepticism; abstract away too much from real-life situations and you lose the triggers for these exceptions, and then scepticism is true. The kind of lottery situations that IRI people worry about aren’t cases where strange new conditions on knowledge are triggered. Rather, they are cases where abstract Cartesian standards for knowledge are restored, just like they are in the simplest models for knowledge.
Interest Relativity in Good Cases
Jon Kvanvig has a “very puzzling objection”:http://el-prod.baylor.edu/certain_doubts/?p=2520 to interest-relative invariantism (IRI). He claims, I think, that IRI gets the wrong results in cases where there is a lot at stake, but the agent in question gains a lot.
But the objection is puzzling because I can’t even figure out why he thinks IRI has the consequences he says it has. Here’s what I take to be the distinctive claim of IRI.
Consider cases where the following is all true:
- The right thing to do given _p_ is X.
- The right thing to do given _Probably p_ is Y.
- The agent has a lot of evidence for _p_; sufficient evidence to know _p_ ceteris paribus.
- The agent faces a live choice between X and Y, and the right thing to do in the agent’s situation is Y.
In those cases, we say that the agent doesn’t know _p_. If they did know _p_, it would be right to do X. But it isn’t right to do X, so they don’t know _p_. And this is a form of interest-relativity, since if they were faced with different choices, if in particular the X/Y choice wasn’t live, they may well know _p_.
As Kvanvig notes, the usual way this is illustrated is with cases where the agent stands to lose a lot if they do X and ¬p is true. But that’s not necessary; here’s a similar case.
bq. S heard on the news that GlaxoSmithKline has developed a new cancer drug that will make billions of dollars in revenue, and that its share price has skyrocketed on the news. Intuitively, S knows that GSK’s share price is very high. Later that day, S is rummaging through his portfolio, and notices that he bought some call options on GSK at prices well below what he heard the current share price is. S is obviously extremely happy, and sets about exercising the options. But as he is in the process of doing this, he recalls that he occasionally gets drug companies confused. He wonders whether he should double check that it is really GSK whose price has skyrocketed, or whether he should just exercise the option now.
Here are the relevant X, Y and _p_.
X = Exercise the option.
Y = Spend 10 seconds checking a stock ticker to see whether it is worth exercising the option, then do so if it is, and don’t if it isn’t.
_p_ = GSK share price is very high.
Given _p_, X is better than Y, since it involves 10 seconds less inconvenience. Given _Probably p_, Y is better than X, since the only downside to Y is the 10 seconds spent checking the stock ticker. The downside of X isn’t great. If S buys shares that aren’t that valuable, he can always sell them again for roughly the same price, and just lose a few hundred dollars in fees. But since any reasonable doubt will make it worth spending 10 seconds to save a risk of losing a few hundred dollars, Y is really better than X.
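That comparison can be put in rough numbers. All the figures here (the dollar value of ten seconds, the size of the fees, S’s credence) are illustrative assumptions, not from the post:

```python
# Toy model of the GSK case: X = exercise the option now,
# Y = spend 10 seconds checking a stock ticker, then exercise only if
# it really was GSK. q is S's credence that it was GSK.

check_cost = 0.01    # disutility of 10 seconds at the stock ticker
mistake_cost = 300   # fees lost if S exercises on the wrong company

def eu_exercise_now(q):   # X: skip the check, risk the fees
    return (1 - q) * -mistake_cost

def eu_check_first(q):    # Y: pay the checking cost, avoid the risk
    return -check_cost

# Given p (q = 1), X beats Y by the cost of checking...
assert eu_exercise_now(1.0) > eu_check_first(1.0)
# ...but with even 1% doubt, Y beats X by a wide margin.
assert eu_check_first(0.99) > eu_exercise_now(0.99)
```

With any reasonable doubt, the tiny checking cost is swamped by the risk of a few hundred dollars in fees, which is why Y is really the better choice in S’s situation.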
So, I think, S doesn’t know that _p_. Once he knows that _p_, it makes sense to exercise the option. And he’s very close to knowing that _p_; a quick check of any stock site will do it. But given the fallibility of his memory, and the small cost of double-checking, he doesn’t really know.
So IRI works in cases where the agent stands to gain a lot, and not just where the agent stands to lose a lot. I haven’t seen any cases conforming to the template I listed above where IRI is clearly counter-intuitive. In some cases (perhaps like this one) some people’s intuitions are justly silent. But I don’t think there are any where intuition clearly objects to IRI.
Rutgers’ Placement Successes
As I’m sure everyone knows, it hasn’t been a great year for trying to get academic jobs in philosophy. And there are several very good students (including some from Rutgers) who would normally have gotten good jobs with ease, but haven’t got anything this year. But in the midst of all that, a number of Rutgers students have done very well on the job market, and I wanted to congratulate them on their success.
So far, the following students at Rutgers have received tenure-track positions:
- Meghan Sullivan – Notre Dame
- Karen Lewis – USC
- Gabe Greenberg – UCLA
- Allison Hepola – Samford
- Jennifer Nado – Lingnan
The following students have post-docs:
- Carrie Swanson – Indiana
- Evan Williams – Purdue
- Luvell Anderson – Penn State
And these recent graduates have new tenure-track positions:
- Christy Mag Uidhir – University of Houston
- Julie Yoo – CSU, Northridge
Well done to all of them, and to Jeff McMahan for a great job as placement director.
I believe that of the 7 tenure-track jobs at top 20 departments listed on the “Leiter Reports hiring thread”:http://leiterreports.typepad.com/blog/2011/03/tenure-track-and-postdoc-hiring-by-philosophy-departments-2010-11.html, 3 went to Rutgers students. So well done all. And hopefully there’s more good news to report before the market winds up for the year.
Updates
It’s been a while since I posted here, largely because the young one on the right has been taking up a fair amount of time. So here are a few bits of news.
- Andy Egan and I have (very slowly) put together a collection of papers on epistemic modals and epistemic modality, and it is “coming out with OUP this spring”:http://www.us.oup.com/us/catalog/general/subject/Philosophy/Epistemology/?view=usa&view=usa&sf=toc&ci=9780199591589. The collection isn’t perfect; it should have come out ages ago, and the contributor list is missing “a certain something”:http://www.newappsblog.com/2011/01/epistemic-modality-is-a-male-thing.html, but we hope it’s a valuable addition to the literature. I’ll hopefully write more about this closer to publication, especially about what I wish I’d done differently along the way to publication.
- The “Philosophical Quarterly”:http://www.st-andrews.ac.uk/~pq/ essay competition this year is on the topic of Hume after 300 years. There is a 1500 pound prize, so get your Hume papers ready.
- “In Defence of a Kripkean Dogma”:http://brian.weatherson.org/IDKD.pdf is now available in “preprint on the PPR website”:http://onlinelibrary.wiley.com/doi/10.1111/j.1933-1592.2010.00478.x/abstract. Hopefully it will be in paper format soon!
- There is a workshop on “Formal Epistemology and Experimental Philosophy”:http://www.tilburguniversity.edu/research/institutes-and-research-groups/tilps/FEMEP2011/ at Tilburg University this Fall.
- The “March 2011”:http://onlinelibrary.wiley.com/doi/10.1111/phco.2011.6.issue-3/issuetoc issue of “Philosophy Compass”:http://onlinelibrary.wiley.com/doi/10.1111/phco.2011.6.issue-3/issuetoc is out, with papers on truth in fiction, truthmaking, Leibniz’s Law, and many other fun topics.