March 22nd, 2011

Interest Relativity in Good Cases

Jon Kvanvig has a very puzzling objection to interest-relative invariantism (IRI). He claims, I think, that IRI gets the wrong results in cases where there is a lot at stake, but the agent in question gains a lot.

But the objection is puzzling because I can’t even figure out why he thinks IRI has the consequences he says it has. Here’s what I take to be the distinctive claim of IRI.

Consider cases where the following is all true:

1. The agent faces a live choice between doing X and doing Y.
2. Given p, X is better than Y.
3. Given merely Probably p, Y is better than X.

In those cases, we say that the agent doesn’t know p. If they did know p, it would be right to do X. But it isn’t right to do X, so they don’t know p. And this is a form of interest-relativity, since if the agent were faced with different choices (in particular, if the X/Y choice weren’t live), they might well know p.

As Kvanvig notes, the usual way this is illustrated is with cases where the agent stands to lose a lot if they do X and ¬p is true. But that’s not necessary; here’s a similar case.

S heard on the news that GlaxoSmithKline has developed a new cancer drug that will make billions of dollars in revenue, and that its share price has skyrocketed on the news. Intuitively, S knows that GSK’s share price is very high. Later that day, S is rummaging through his portfolio, and notices that he bought some call options on GSK with strike prices well below what he heard the current share price is. S is obviously extremely happy, and sets about exercising the options. But as he is in the process of doing this, he recalls that he occasionally gets drug companies confused. He wonders whether he should double-check that it is really GSK whose price has skyrocketed, or whether he should just exercise the options now.

Here are the relevant X, Y and p.

X = Exercise the option.
Y = Spend 10 seconds checking a stock ticker to see whether it is worth exercising the option, then do so if it is, and don’t if it isn’t.
p = GSK share price is very high.

Given p, X is better than Y, since it involves 10 seconds less inconvenience. Given merely Probably p, Y is better than X. The only downside to Y is the 10 seconds spent checking the stock ticker, while the downside of X, though not huge, is real: if S exercises and the shares aren’t that valuable, he can always sell them again for roughly the same price, and just lose a few hundred dollars in fees. Since any reasonable doubt makes it worth spending 10 seconds to avoid a risk of losing a few hundred dollars, Y really is better than X.
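To make the comparison concrete, here is a minimal expected-utility sketch in Python. The particular numbers (how much 10 seconds of inconvenience is worth, how big the fee loss is) are illustrative assumptions rather than part of the case:

    # Minimal expected-utility sketch of the X vs Y choice.
    # All numbers are illustrative assumptions, not part of the case itself.
    CHECK_COST = 0.01    # disutility of 10 seconds spent checking, in dollars
    ERROR_LOSS = 300.0   # fees lost if S exercises on the wrong company

    def eu_exercise_now(prob_p):
        # X: exercise immediately; risks the fee loss if p is false.
        return (1 - prob_p) * -ERROR_LOSS

    def eu_check_first(prob_p):
        # Y: spend 10 seconds checking, then act; screens off the fee loss.
        return -CHECK_COST

    for prob_p in (1.0, 0.999, 0.99):
        better = "X" if eu_exercise_now(prob_p) > eu_check_first(prob_p) else "Y"
        print(f"Pr(p) = {prob_p}: {better} is the better option")
    # Only at Pr(p) = 1 does X beat Y; any reasonable doubt favours Y.

On any assignment in this neighbourhood, Y dominates X as soon as Pr(p) dips below 1, which is just the point made above.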

So, I think, S doesn’t know that p. Once he knows that p, it makes sense to exercise the option. And he’s very close to knowing that p; a quick check of any stock site will do it. But given the fallibility of his memory, and the small cost of double-checking, he doesn’t really know.

So IRI works in cases where the agent stands to gain a lot, and not just where the agent stands to lose a lot. I haven’t seen any cases conforming to the template I listed above where IRI is clearly counter-intuitive. In some cases (perhaps like this one) some people’s intuitions are justly silent. But I don’t think there are any where intuition clearly objects to IRI.

Posted by Brian Weatherson in Uncategorized


5 Responses to “Interest Relativity in Good Cases”

  1. kvanvig says:

    Hi Brian, that wasn’t quite the lesson. It wasn’t meant to show that any particular version of pragmatic encroachment is mistaken. It was intended, instead, to question the theoretical motivations for the view.

    Think of it like this. If some competent deduction transmission principle is correct, and there is an underlying pure epistemic notion (call it epistemic probability), then if we deduce q from p, where p has probability n, we should be able to end up with q having a probability (very close to) n. If so, then, given certain symmetries, we should expect to have a case where we come to know q by competently deducing it from something we don’t know.

    I don’t know of a view that has this implication, so I wasn’t accusing any of them of making a mistake. Since the symmetry is pretty neat, I wondered why pragmatic encroachers don’t end up with such a view, and speculated about what might motivate rejecting the symmetry that generates the result.

  2. Brian Weatherson says:

    Now I’m confused about which point was meant to be made by which post.

    I thought there was one post on high-stakes-in-a-good-way cases, and another post on inferences from things that aren’t known. I was just responding to the high-stakes-in-a-good-way case.

    I suspect inference from the unknown is a perfectly common way to get knowledge. Lots of people come to know things about the physical world by making deductions that include the not exactly true, and hence not exactly known, Newtonian laws.

    Were you thinking that the two criticisms should be treated as a single unit? I’d probably have to modify the case to deal with that.

  3. Brian Weatherson says:

    Actually, I think the argument here is going to overgenerate by a lot.

    “If some competent deduction transmission principle is correct, and there is an underlying pure epistemic notion (call it epistemic probability), then if we deduce q from p, where p has probability n, we should be able to end up with q having a probability (very close to) n. If so, then, given certain symmetries, we should expect to have a case where we come to know q by competently deducing it from something we don’t know.”
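    To fix ideas, here is one way to make the quoted principle precise (my gloss, in ordinary probability notation, not your formalism): if q is competently deduced from p alone, then p entails q, so by monotonicity

        Pr(q) ≥ Pr(p) = n.

    So if knowing required only that epistemic probability clear some threshold t < 1, then whenever n ≥ t but p fails to be known for some non-probabilistic reason, q could clear the bar for knowledge despite being deduced from the unknown p.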

    Let’s say there is a ‘pure’ notion, say epistemic probability. And say the right way to capture fallibilism is to say that knowledge is compatible with epistemic probability being less than 1. Then we’ve got two choices.

    (A) Knowledge supervenes on epistemic probability. That rules out there being anything like Gettier conditions on knowledge.

    (B) Knowledge doesn’t supervene on epistemic probability. That means we won’t be able to infer from the high probability of q that q is known. Perhaps the fact that q was solely deduced from something unknown is a defeater.

    Either way, I don’t see an objection to IRI here.

  4. kvanvig says:

    Brian, again, there’s not supposed to be an objection to IRI here. On the last point, though, hold fixed everything but epistemic probability (it’s obvious on any fallibilist theory that knowledge doesn’t supervene on it). One can add the defeater claim if one is motivated to avoid the worry, but that defeater claim is unmotivated, I would think, apart from the desire to avoid the issue raised. A better-motivated move would be to affirm the usual inferentialist restriction: you can’t know p inferentially unless you infer it from something you know.

    Of course, we now know that knowledge from falsehood is possible. But this isn’t such a case. This is a case where a change in stakes, from a risk-of-error situation to a benefit-from-correctness one, alone explains how one can come to know on the basis of something not known. Now, one might motivate this idea by claiming it’s just another way in which we can gain knowledge from non-knowledge. But no one has defended this idea. I wanted to know why.

    One special way to get such cases is with the rather particular competent deduction principle cited above, but presumably they can arise in other ways as well. That’s why there was a symmetry post as well as a closure post. But, again, neither one was an objection to extant views.

  5. Brian Weatherson says:

    I’m not sure why the various restrictions that you think are unmotivated are actually unmotivated. Any time there are defeaters around, there will be all sorts of restrictions on “knowledge can only be inferred from knowledge”. We IRI-ers think interests determine what kind of defeaters are around. So we think interests put restrictions on the principle “knowledge can only be inferred from knowledge”. I’m not sure what’s ad hoc about that.
