Do Justified Beliefs Justify Action?

In “Can We Do Without Pragmatic Encroachment?”:http://brian.weatherson.org/cwdwpe.pdf, I argued that the various phenomena that pragmatic epistemological theories were trying to explain were primarily due to the pragmatic nature of belief, not the pragmatic nature of justification. A large part of Fantl & McGrath’s response to this is to argue that a pragmatic theory of belief isn’t sufficient to derive principles like this one, which they take to be central to a pragmatic epistemology.

bq. (JJ) If you are justified in believing that p, then p is warranted enough to justify you in φ-ing, for any φ.

This isn’t actually one of the principles they say I can’t derive, but it’s in the ballpark. And it’s relevant because (a) the principles they think I should be able to derive are stronger than (JJ), and (b), (JJ) is false. I think the argument against (JJ) in “Can We Do Without Pragmatic Encroachment?”:http://brian.weatherson.org/cwdwpe.pdf is pretty good, but it can be simplified. Here’s a much simpler version. The following is all true of an agent _S_.

  • She knows that _p_ and _q_ are independent, so her credences in any conjunction formed out of p, ¬p and q, ¬q are products of the credences in the conjuncts.
  • Her credence in _p_ is 0.99, just as the evidence supports.
  • Her credence in _q_ is also 0.99. This is unfortunate, since the rational credence in q given her evidence is 0.01.
  • She has a choice between taking and declining a bet with the following payoff structure.
    • If p ∧ q, she wins $100.
    • If p ∧ ¬q, she wins $1.
    • If ¬p, she loses $1000.
  • The marginal utility of money is close enough to constant that expected dollar returns correlate more or less precisely with expected utility returns.

As can easily be computed, the expected utility of taking the bet given her credences is positive; it is just over $88. Our agent _S_ takes the bet. She doesn’t compute the expected utility, but she is sensitive to it. That is, had the expected utility given her credences been close to 0, she would not have acted until she made a computation. But from her perspective this looks like basically a free $100, so she takes it. Happily, this all turns out well, since _p_ is true. But it was a dumb thing to do. The expected utility of taking the bet given her evidence is negative; it is a little under -$8. So she isn’t warranted, given her evidence, in taking the bet.
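The two expected utilities can be computed directly. Here is a minimal sketch of the calculation; the function name and setup are my own framing of the bet described above.

```python
def expected_value(cr_p, cr_q):
    """Expected dollar return of taking the bet, given credences in p and q.

    p and q are treated as independent, so joint credences are products.
    Payoffs: $100 if p and q, $1 if p and not-q, -$1000 if not-p.
    Declining pays $0, so a positive value favours taking the bet.
    """
    return (cr_p * cr_q * 100
            + cr_p * (1 - cr_q) * 1
            + (1 - cr_p) * (-1000))

# Given her credences (0.99 in each): about 88.02, comfortably positive.
print(expected_value(0.99, 0.99))

# Given her evidence (0.99 in p, 0.01 in q): about -8.03, negative.
print(expected_value(0.99, 0.01))
```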

I also claim the following three things are true of her.

  1. p is not warranted enough to justify her in taking the bet.
  2. She believes p.
  3. This belief is rational.

The argument for 1 is straightforward. She isn’t warranted in taking the bet, so p isn’t sufficiently warranted to justify it. This is despite the fact that p is obviously relevant. Indeed, given p, taking the bet strictly dominates declining it. But still, p doesn’t warrant taking this bet.

The argument for 2 is that she has a very high credence in p, this credence is grounded in the evidence in the right way, and it leads her to act as if p is true, e.g. by taking the bet. It’s true that her credence in p is not 1, and if you think credence 1 is needed for belief, then you won’t like this example. But if you think that, you won’t think there’s much connection between (JJ) and pragmatic conditions in epistemology either. So that’s hardly a view a defender of Fantl and McGrath’s position can hold.

The argument for 3 is that her attitude towards p tracks the evidence perfectly. She is making no mistakes with respect to p. She is making a mistake with respect to q, but not with respect to p. So her attitude towards p, i.e. belief, is rational.

These three points entail that (JJ) is false, since _S_ provides a counterexample. So I don’t think it’s a bad thing that you can’t derive principles like (JJ), or stronger principles, from my theory of belief. The derivation doesn’t work because my theory of belief is true, and those principles are false!

Shorter “Can We Do Without Pragmatic Encroachment?”

I’ve noticed, both in reading Fantl and McGrath’s book, and in talking to various people at Rutgers, that the position I took on pragmatic encroachment in “Can We Do Without Pragmatic Encroachment?”:http://brian.weatherson.org/cwdwpe.pdf has often been misinterpreted. This has happened so often that I assume it is my fault. So here’s a nickel summary of the views of that paper. This isn’t quite what I currently believe, but it’s close. (Below I say a bit about how I’ve changed my views.)

  1. Functionalism is correct. That is, mental states are individuated functionally, and typically have three kinds of proprietary functional roles: relationships to inputs, relationships to other states, and relationships to outputs. The third of these is most important to the story here.
  2. Ramsey’s functional characterisation of credences is more or less right, at least as regards relationships to outputs. So, assuming the input and internal connections are in order, to have credence _x_ in _p_ is more or less to be willing to bet on _p_ at odds 1 – x:x.
  3. Rational credences track evidential probabilities, in much the way Keynes suggested. So to have a rational credence in _p_ just is to have one’s credence be (close enough to) the epistemic probability of _p_ given _E_, where _E_ is your actual evidence. (Note that there’s nothing pragmatic around yet, at least as long as evidence is not pragmatic.)
  4. The output condition for belief is that an agent (typically) believes that _p_ iff for any _A_, _B_, the agent prefers _A_ to _B_ iff they prefer _A ∧ p_ to _B ∧ p_. There are other conditions on belief (i.e. input conditions and internal connections), but this condition explains the relationship between stake variation and variation in justified belief.
  5. The previous point uses a tacit quantifier over actions. Actions _A_ and _B_ are in the relevant quantifier domain iff they are practically relevant to the agent. This is where stake variation impacts belief, and it is the only place that it does. So right now I believe that I’m listening to a Beatles song, but I wouldn’t continue to believe that if I had to bet my life on it. (It could be a very carefully done re-recording after all.) That’s because betting my life on this song being by the Beatles is not currently in the relevant quantifier domain, but could move into the quantifier domain if my practical situation changes.
  6. An agent has a rational belief in _p_ iff they believe that _p_, and their credence in _p_ is rational in the sense of point 3.
  7. As a consequence of all that, changing the stakes cannot change an agent from having a rational to having an irrational belief in _p_. But it can change them from having a rational belief in _p_ to neither believing _p_ nor being in a position to rationally believe that _p_.
  8. Some similar story holds for knowledge, though that part of the story is explicitly put off until a later paper. (And if you’d asked me at the time I’d have said that paper would have taken less than 5 years to write.)
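Points 4 and 5 can be given a toy decision-theoretic formalization, using the bet from the first post above. Everything in this sketch is my own modelling assumption, not the paper’s: preferences are generated by expected utility, preferring _A ∧ p_ to _B ∧ p_ is modelled as preferring _A_ to _B_ under the credence conditioned on _p_, and the quantifier domain of point 5 is an explicit list of salient actions.

```python
from itertools import product

# Four worlds, keyed by the truth values of (p, q).
worlds = list(product([True, False], repeat=2))

def eu(action, credence):
    """Expected utility of an action (a world -> payoff map) under a credence."""
    return sum(credence[w] * action[w] for w in worlds)

def believes_p(credence, actions):
    """Toy version of point 4: the agent believes p, relative to these salient
    actions, iff conditioning the credence on p flips no (weak) preference."""
    cr_p = sum(cr for (p, _), cr in credence.items() if p)
    cond = {w: (credence[w] / cr_p if w[0] else 0.0) for w in worlds}
    for a, b in product(actions, repeat=2):
        if (eu(a, credence) >= eu(b, credence)) != (eu(a, cond) >= eu(b, cond)):
            return False
    return True

def make_credence(cr_p, cr_q):
    """Credences over the four worlds, with p and q independent."""
    return {(p, q): (cr_p if p else 1 - cr_p) * (cr_q if q else 1 - cr_q)
            for p, q in worlds}

# The bet from the first post: take it, or decline (which pays nothing).
take = {(True, True): 100, (True, False): 1,
        (False, True): -1000, (False, False): -1000}
decline = {w: 0 for w in worlds}

# With her actual credences, conditioning on p flips no preference:
print(believes_p(make_credence(0.99, 0.99), [take, decline]))  # True

# With the credence in q her evidence supports, it does flip one:
print(believes_p(make_credence(0.99, 0.01), [take, decline]))  # False
```

On this toy model the agent from the first post counts as believing _p_ relative to this choice, while an agent whose credence in _q_ tracked the evidence would not, which is one way of seeing how stake-and-credence variation enters through the quantifier domain rather than through the credence in _p_ itself.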

Here’s what I no longer think is correct in all that.

  • I think the ‘internal connections’ part of the functional role is more important to interest-relativity than I thought at the time. I did (somewhat opaquely) discuss that role when discussing conjunction-introduction and related issues, but it should have been more upfront, and more detailed.
  • I don’t think point 6 can be right, and in fact I suspect it fails in a way that undermines the larger project. The worry is that rationality doesn’t really require one’s credence _exactly_ tracking the Keynesian epistemic probability. It at most requires that credence be close enough. But how close is close enough might be sensitive to pragmatic factors. I think this is similar to a worry that Fantl and McGrath raise, though their terminology is different enough from mine that it’s a little hard to be sure.
  • Point 8 really isn’t right. The problem is that irrational credences in other propositions seem more likely to defeat _knowledge_ than to defeat _rational belief_.

I’ll write more posts setting out those three bullet points, but for now I really just wanted to lay out for my own satisfaction an executive summary of “Can We Do Without Pragmatic Encroachment”:http://brian.weatherson.org/cwdwpe.pdf.

Fantl and McGrath on Fallibilism

I’ve been reading through Jeremy Fantl and Matthew McGrath’s excellent _Knowledge in an Uncertain World_. So there will be a few posts about it to come. I’ll start with a question about their definition of fallibilism. They offer up three definitions, and endorse the third.

*Logical Fallibilism* – You can know something on the basis of non-entailing evidence.

*Weak Epistemic Fallibilism* (hereafter, WeakEF) – You can know something even though it is not maximally justified for you.

*Strong Epistemic Fallibilism* (hereafter, StrongEF) – You can know that _p_ even though there is a non-zero epistemic chance that not-p.

They frequently restate StrongEF as the doctrine that you can know things with an epistemic chance of less than 1. That’s equivalent only if the following is true. The epistemic chance of _p_ is less than 1 iff the epistemic chance of not-p is non-zero. And that’s true I guess if epistemic chance is a probability function. (That isn’t the only way it could be true, but I can’t see any other good motivation for the equivalence.) And I really don’t see any reason whatsoever to believe that epistemic chance is a probability function.

We never get a full definition of ‘epistemic chance’. It’s partially introduced through its natural language meaning. We talk about there being a chance that Oswald didn’t shoot Kennedy, or that the Red Sox will win the pennant this year. But that intuitive notion clearly isn’t a probability function. After all, in that sense of chance there’s some chance that the twin prime conjecture is true, and some chance that it is false. Yet one of those two propositions has probability zero, since a probability function assigns probability 1 to every necessary truth.

The other way that ‘epistemic chance’ is introduced is in terms of rational gambles. I assume the idea is something like this. The epistemic chance of _p_ is _x_ iff it would be rational to regard as fair a bet that costs _x_ utils and returns 1 util if _p_ (and nothing otherwise). Fantl and McGrath never say anything that precise, but that seems to be the idea.

Now the same objection can be raised. It is rational to regard various bets at non-zero prices both on the truth of the twin prime conjecture and on its falsity as fair. So epistemic chance so defined can’t be a probability function.

More seriously, I don’t think there’s a reason to think it is *anything like* a probability function. It doesn’t, as far as I can tell, have anything like the *topology* of a probability function.

For one thing, I don’t see any reason to think that it’s linear. That is, I don’t see why we should think epistemic chance defined in terms of gambles produces anything more than a very partial order over propositions. If you believe in totally ordered utilities you might think the definition I gave two paragraphs back can produce a total ordering over propositions. But I don’t really believe that utilities are totally ordered.

For another, I don’t see any reason to think that it’s got an upper and lower limit. Maybe “I exist” is at the top. But couldn’t we get even more confident in it, that is, even more willing to accept outrageous bets, by thinking through some philosophy, reading the _Meditations_ etc? I think that’s a reason to think that the chance of “I exist” can go up, at least if ‘chance’ is defined in terms of rational gambles.

Even if I was wrong about both these things, I’d think epistemic chances were more likely to be Dempster-Shafer functions than probability functions, in which case saying that _p_ has a chance of 1 would not be equivalent to saying that not-p has a chance of zero.
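The Dempster-Shafer point can be made concrete with a toy example (the example and names here are my own illustration, not anything in Fantl and McGrath). A belief function built from a mass function can assign not-p a belief of zero while p gets a belief strictly less than 1, so the two formulations of StrongEF come apart.

```python
# A toy Dempster-Shafer belief function over the two-element frame {p, not_p}.
# Mass 0.6 is committed to {p}; mass 0.4 is uncommitted (on the whole frame).
mass = {frozenset({'p'}): 0.6,
        frozenset({'p', 'not_p'}): 0.4}

def bel(event):
    """Belief in an event = total mass on focal sets contained in it."""
    return sum(m for focal, m in mass.items() if focal <= event)

print(bel(frozenset({'p'})))       # 0.6 -- less than 1 ...
print(bel(frozenset({'not_p'})))   # 0   -- ... while not-p gets belief zero
```

Because the uncommitted mass supports neither p nor not-p specifically, bel(p) + bel(not-p) can be less than 1, which is exactly what a probability function forbids.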

I think one of the more pernicious influences of Bayesianism on epistemology is that theorists just assume that various functions are probability functions. This isn’t a mistake Bayesians make; they have long _arguments_ that probability theory is applicable where they apply it. (I don’t think those are typically good arguments, but that’s another story.) But in mainstream epistemology, we see probability theory brought in, either explicitly or tacitly, when it seems far from clear that it is appropriate.

In Defence of a Kripkean Dogma

Over the winter, “Jonathan Ichikawa”:http://jonathanichikawa.net/, “Ishani Maitra”:http://andromeda.rutgers.edu/~ishanim/ and I wrote up a paper on recent experimental work on reference. Here it is.

bq. “In Defense of a Kripkean Dogma”:http://brian.weatherson.org/IDKD.pdf

The paper is primarily a response to “Against Arguments from Reference”:http://www.philosophy.utah.edu/faculty/mallon/Materials/AAFR.pdf, though some of what we have to say is relevant to the arguments in “Semantics, Cross-Cultural Style”:http://www.philosophy.utah.edu/faculty/mallon/Materials/sccs.pdf. Really, we want to make three points.

  1. The experimental data presented to date don’t undermine what Kripke says about the Gödel-Schmidt case;
  2. The Gödel-Schmidt case is only relevant to a very small part of Kripke’s overall theory of reference, so if he’s wrong about it the bulk of the theory is unaffected; and
  3. The main philosophical applications of Kripke’s theory have concerned the bits that are already established in _Naming & Necessity_ before the Gödel-Schmidt case comes up, not the bits that are supported by the Gödel-Schmidt case. So even if the experiments do show that Kripke’s wrong about that case, not a lot follows for the applications of Kripke’s theory in the last four decades.

Online Papers

I spent a bit of time – a bit too much time probably – converting my archived papers into LaTeX form. Now that I’ve done that, it was easy enough to make them into a giant collection. So if anyone wants to download a bunch of my papers, currently at 43, but that number might change, here they are.

bq. “Brian Weatherson’s Online Papers”:http://brian.weatherson.org/WeathersonCW.pdf

I’m going to update that file reasonably often, but keep the same link. So the pagination in that file will change frequently. Of course, if you want to cite things, you should cite the published versions, so that shouldn’t cause any problems.

Updates

I managed to break a lot of the inner workings of TAR last week while tinkering with something that I thought was unrelated. Fortunately, Michael Kremer alerted me that there was a problem, and it should now be under control. (I’d broken TAR’s .htaccess file while tinkering with hotlink protection, in case you’re interested. It turns out to be really easy to do this on a WordPress blog.) Things should be up and running again, and I updated quite a few things while trying to figure out the problem. So let me know if you see any non-content-related bugs, and thanks to Michael for the alert.

Speaking of blogs, the Rutgers grad students have a blog: “Discovering Truths and Announcing Them”:http://dtaatb.weebly.com/index.html. It looks good, and there is already a bit of activity in the comments threads.

And speaking of grad students, I want to thank everyone at MIT for both their hospitality and their insightful comments when I presented _Do Judgments Screen Evidence?_ there last week. I’ll post something soon about the things I learned there.

There’s a new society at the APA.

bq. The inaugural David Kellogg Lewis Society Group Meeting next Thursday! 2010 American Philosophical Association Pacific Division Meeting, Westin St. Francis, San Francisco, Thursday April 1, 8-10pm. Our special guest speaker will be Terry Horgan, “Quantification with Crossed Fingers.” Also appearing: Richard Hanley: “Counterfactuals, Backtracking, and Time Travel.”

I liked this picture of a spot on the river I frequently walk by; I wish I could take pictures like this.

Student Loan Reform

Somewhat overlooked in the drama over health insurance reform last night was that the House also passed a major piece of “student loan reform”:http://www.prospect.org/csnc/blogs/tapped_archive?month=03&year=2010&base_name=another_good_thing_happened_la. Currently the Federal Government offers huge subsidies to the student loan industry. This is basically a good idea, but the effect is that a lot of the subsidies are swept up by the loan providers, i.e. banks. The reforms will allow students to borrow directly from the government. Over the next 10 years, over $60 billion that would have been passed on to the banks in subsidies will be kept in the public purse. Most of that money will be spent on Pell grants, community colleges and historically black colleges and universities. In effect, the legislation is a massive transfer of wealth from the banking industry to higher education, and it will help hundreds of thousands, if not millions, of students attend college who would not otherwise have been able to afford it.

Of course, this assumes the legislation gets through the Senate and the White House. I keep being told that Obama is just interested in doing the bidding of the banking industry. I assume the people who say this will predict that he’ll veto the relevant legislation. Let’s see if that happens.

Workshop Followup

As I advertised here, we ran a one-day methodology workshop at Rutgers last week. I think it was a success, though one of (several) things I didn’t organise was recording devices. (I messed up on this; I should have had at least audio recording.) Thanks to Josh Knobe, Liz Harman, Michael Strevens and Jenny Nado for doing great talks, and an audience who asked engaged and smart questions.

The cost of the conference to my research account was a little over $1000, mostly for lunch. And most people who travelled there would have paid less than $20. A few people came from a little further away, but I think the overall cost of the conference, including costs incurred by the attendees, was under $2000. If you do the same calculation for most 2-3 day conferences, the costs can fly past $100,000, maybe well past it. Most of those conferences are better than the one we ran at Rutgers, but probably not 50-100 times better. In terms of ‘bang-for-the-buck’, I think a one-day workshop with local speakers is a very good model.