Moderate Rationalism and Bayesian Scepticism

I just uploaded a very drafty version of a short paper I’m working on for a workshop in Edinburgh on scepticism.

bq. “Moderate Rationalism and Bayesian Scepticism”:http://brian.weatherson.org/MRaBS.pdf

The paper is an argument against any theorist who holds (a) that we can know substantive facts about the nature of epistemic justification a priori, but (b) that we can’t know deeply contingent truths a priori. The example used in the paper is someone who holds that we can know a priori that process reliabilism is the right theory of epistemic justification, but who also holds that there is no deeply contingent a priori. The argument is that the (by now familiar) Bayesian objection to dogmatism, although not a good objection to dogmatism, is a good objection to such a view.
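For readers who haven’t seen it, here is one standard way of putting the Bayesian point (my formulation, for illustration; the paper’s own statement may differ). Where $E$ is a course of perceptual experience and $p$ is a hypothesis the dogmatist says we can come to know on its basis, conditionalising on $E$ can never raise one’s credence in the material conditional $E \supset p$, since for any prior $\Pr$ with $\Pr(E) > 0$:

$$\Pr(E \supset p \mid E) = \Pr(p \mid E) = \Pr(p \wedge E) + \Pr(p \mid E)\Pr(\neg E) \le \Pr(p \wedge E) + \Pr(\neg E) = \Pr(E \supset p)$$

So on a Bayesian model the experience can’t be a source of new justification for the conditional linking appearance to reality; whether that sinks dogmatism is, as the paper says, a further question.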

The paper is extremely choppy right now, and hopefully I’ll flesh out some of the arguments. But I thought it was worth posting the very drafty version in case it doesn’t get improved before the workshop!

Time Zones

I only recently noticed that the version of WordPress that I’m running doesn’t automatically adjust for daylight saving time. So some posts might have seemed to appear at a time other than when they were written. I’ve adjusted it now, and the time zone on posts should be U.S. Eastern Daylight Time.

As you may have noticed, I’m trying to have my posts appear once a day at midday. Sometimes these are fairly trivial posts (like this one) but hopefully we’ll have some content some of the time. Other bloggers here will keep on posting whenever (and whatever) they like. Thanks to the magic of being able to schedule posts in advance, this will hopefully mean that the blog keeps on ticking along even when other things are taking up lots of time, and I can’t personally be on the blog.

New Blog

Rob Wilson is involved in a new blog. He writes:

bq. The What Sorts of People blog is now up and running: check it out. This is the blog for the What Sorts of People Should There Be? network, a collaborative blog with regular contributions from around 10 team members. Short, recent posts are available on double-amputee Oscar Pistorius’s bid to compete Olympically, and on a so-recent-it’s-still-forthcoming piece by Steve Pinker in The New Republic on the concept of dignity and its use in a recent President’s Council on Bioethics report. Biella Coleman, who was a Killam Postdoc at Alberta last year and now teaches at NYU, has just posted a tempered rant on the blog on medical genetics and eugenics. You can also search for other blog pieces by category and review the archives of the blog from the site. If you like what you see:

  • add it to your blogfeeds, or otherwise check it out regularly
  • tell your friends
  • blog about it and direct folks from your own blog
  • send it on to other folks who might do any of the above

Analyticity and Intuitionism

Here’s a little argument that was inspired by some things Williamson says in chapter 3 of “The Philosophy of Philosophy”. It’s not at all the way Williamson intended his arguments to be used, I guess.

  1. Any logical truth is true in virtue of meaning facts alone.
  2. _Timothy Williamson is a philosopher_ is not true in virtue of meaning facts alone.
  3. Any disjunction with exactly one true disjunct is true in virtue of whatever the true disjunct is true in virtue of.
  4. So, _Timothy Williamson is a philosopher or Timothy Williamson is not a philosopher_ is not a logical truth.

The premises could do with a little tidying up, but I think there’s something close to this in Williamson. Of course, he rejects (4), so he’s more interested in the argument from (2), (3) and the negation of (4) to the negation of (1). (Not that he would be quite as cavalier in the formulation of the argument as I’ve been.) Still, I think it’s a pretty interesting argument this way.

When I first saw this in Williamson, I thought, wow, there’s a nice argument against the law of excluded middle (LEM). But now I’m worried that a structurally similar argument could, in principle, be run against the law of non-contradiction (LNC). I’ll leave it as an exercise for the reader to figure out the best way such an argument would go. I’m leaving it as an exercise in part because I’m not quite happy with any of my attempts, and in part because I’m too lazy. But unless I’m confident that no such argument could be used to reject LNC, I’m not going to be using this argument against LEM. And as of now, I’m certainly not confident of that.
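As a side note on the asymmetry between the two laws: in a constructive setting, LNC is provable outright, while LEM requires a classical axiom. Here is a minimal Lean 4 sketch of that familiar asymmetry (purely illustrative; nothing in the argument above turns on it):

```lean
-- Non-contradiction is provable with no classical axioms:
theorem lnc (p : Prop) : ¬(p ∧ ¬p) :=
  fun ⟨hp, hnp⟩ => hnp hp

-- Excluded middle, by contrast, is not constructively derivable;
-- in Lean it comes from the classical axioms:
theorem lem (p : Prop) : p ∨ ¬p :=
  Classical.em p
```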

Andy Egan to Rutgers

Great news for Rutgers. Andy Egan has accepted a tenured position in the philosophy department, starting in Fall 2009. As well as making Rutgers stronger in metaphysics, philosophy of language, philosophy of mind, aesthetics, ethics, etc., his arrival means it will be a lot of fun to have Andy around the area. Good times for TAR, for Rutgers, for NY area philosophy, and, we hope, for Andy!

The Externalist’s Demon

Congratulations to “Clayton Littlejohn”:http://claytonlittlejohn.blogspot.com/2008/05/good-news.html for getting his paper “The Externalist’s Demon”:http://people.smu.edu/clittle/Clayton%20Littlejohn%27s%20Homepage/Clayton%20Littlejohn%27s%20Homepage/work_files/extdemonweb.pdf accepted for publication. My own view on the new evil demon problem relies fairly heavily on what Clayton says in this paper, perhaps more heavily than I’ve properly acknowledged in the past, so I’m glad it’s coming out and I can give it its proper due.

Attitudes and Relativism

I’ve turned some of my blog posts on propositional attitude reports, and how they bear on issues about relativism/contextualism, into a short paper, called “Attitudes and Relativism”:http://brian.weatherson.org/AaR.pdf. It’s very drafty, and the references, thanks, etc. are barely started, let alone completed. But I hope it has some interesting points in it.

Comments more than welcome.

Perception and Nearby Error

Consider the following case.

bq. S has generally reliable vision, but she is subject to a small but serious deception. When she is on a boat over salt water, she is prone to hallucinate objects in the distance. The hallucinations are quite convincing, and S has often formed false beliefs as a result. She does not know the cause of the hallucinations. In fact, she hasn’t even considered that it may be the salt in the salt water that is responsible for them. That’s too bad, because the salt is indeed their cause; her vision, even at large distances, is well above average when she’s over fresh water.

bq. Today she is sailing on Lake Huron (a fresh water lake). She forms a visual representation of land in the distance, about 20 miles ahead. She checks her map and sees (correctly) that the map does not record any land there. And she knows that this is the kind of thing she’s disposed to hallucinate when over salt water. But she decides to trust her eyes, and forms a firm belief that there is some land ahead of her, and that her map must be mistaken. Both of these beliefs are of course true, since her eyes are reliable in these circumstances, and her eyes are telling her that there is land there.

Let p be the proposition that there is land about 20 miles ahead of S. Consider the following four questions.

(1) Does S see that p?
(2) Does S know that p?
(3) Is p part of S’s evidence?
(4) Can S take p for granted in practical and theoretical deliberation, if the question of whether p is of some importance to her?

My initial reaction is to say “Yes” to (1) and “No” to (4). Both of these seem like fairly secure judgments, actually.

Vision, like most senses, is fairly strongly informationally encapsulated. Even if S has reasons to doubt that p, those don’t affect what she sees. Since she’s formed a visual representation that p, and that representation was caused, in a non-deviant way, by p being true, she sees that p. (Is this the way these cases are standardly classified in the perception literature?)

On the other hand, if anything at all turns on the question of whether p is true, she should get more information before proceeding. She knows that her eyes are unreliable in circumstances like these, and she has direct evidence that her eyes are faulty here, namely the conflict with the map. The situation is one that calls out for further investigation, not simply trusting her eyes.

I don’t have immediate judgments about (2) or (3). But I do sort of think that if the answer to (3) were “Yes”, the answer to (4) would be “Yes”. So the answer to (3) must be “No” as well.

Whatever we say about (2), there’s a problem here for the views on knowledge and evidence that Williamson has put forward in recent work. He says that knowledge is the most general factive mental state. Seeing is a factive mental state. So if the answer to (1) is “Yes”, the answer to (2) is “Yes”. He also says that all knowledge is evidence. So if the answer to (2) is “Yes”, the answer to (3) is “Yes”. But that doesn’t seem to be the correct answer.
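To make the structure of the objection explicit, here is a toy Lean 4 rendering. The predicate names are stand-ins I’m introducing, not Williamson’s own formalism; the point is just that, given the two Williamsonian theses, a “Yes” to (1) forces a “Yes” to (3), and so a “No” to (3) forces a “No” to (1).

```lean
-- Stand-in predicates over propositions; the agent S is left implicit.
axiom Sees     : Prop → Prop  -- S sees that p
axiom Knows    : Prop → Prop  -- S knows that p
axiom Evidence : Prop → Prop  -- p is part of S's evidence

-- "Knowledge is the most general factive mental state", applied to seeing:
axiom sees_knows : ∀ p, Sees p → Knows p
-- "All knowledge is evidence":
axiom knows_evidence : ∀ p, Knows p → Evidence p

-- So seeing that p suffices for p being part of S's evidence...
theorem sees_evidence (p : Prop) (h : Sees p) : Evidence p :=
  knows_evidence p (sees_knows p h)

-- ...and if p is not part of S's evidence, S does not see that p.
theorem no_evidence_no_seeing (p : Prop) (h : ¬ Evidence p) : ¬ Sees p :=
  fun hs => h (sees_evidence p hs)
```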

There’s a further challenge here for a broader Williamsonian view of evidence. Consider a straightforward case where we learn something by visual perception. I just looked out the window and saw clouds. I now know that it’s cloudy outside. Is my evidence (a) that there are clouds, or (b) that I see there are clouds? Or perhaps both?

It’s not too hard to be motivated, on ordinary language grounds if nothing else, to think that the answer is (a). But if we agree that S’s evidence does not include p, there is a hard question that needs to be answered. Under what circumstances does seeing that p make it the case that p is part of your evidence? Williamson suggests the answer “All circumstances”, but I don’t think that can be right, because of S’s case. And I’m not sure there’s another answer around.

There’s a related question about philosophical methodology. T considers a case, and judges that q. That’s the right judgment about the case, and T makes it for the right reason. Is her philosophical evidence that q, or that she’s judged that q? Williamson again wants to argue that it is q, not merely the judgment that q. But again we have to ask: under just what circumstances does a judgment that q get to be part of your evidence? I suspect that thinking about cases like S’s will make us think that the answer is not completely obvious. More on this to follow.

Williamson’s Principle of Charity

In Chapter 8 of “The Philosophy of Philosophy”:http://books.google.com/books?id=HtFQHAAACAAJ&dq=Williamson+%22the+philosophy+of+philosophy%22&ei=TIssSNnbAaDsygSt08jXAw, Timothy Williamson defends a new principle of charity. He says we should interpret people in such a way as to maximise what they know. This principle is intended to be constitutive of mental content, in the way that Davidson, Lewis and others have taken charity to be constitutive. That is, the content of someone’s thought and talk just is that content that would (within some constraints) maximise how much they know.

Williamson argues, persuasively, that this version of charity avoids some of the problems that have plagued prior versions. For instance, if I have many beliefs that are caused by contact with x, but are only true if interpreted as beliefs about y (with whom I have no causal contact), Williamson’s principle does not lead us to interpret those beliefs as beliefs about y. That’s because such an interpretation, even if it would maximise the amount of truth I believed, wouldn’t increase how much knowledge I have, because I couldn’t know those things about y.

But some of the traditional problems attending charity-based theories of content still remain. For instance, Williamson has a problem with horsey looking cows.

Imagine that S is very bad at distinguishing between horses and certain cows, which we’ll call horsey looking. S has a term, t, in his language of thought that he applies indiscriminately to horses and horsey looking cows. When S wants to express thoughts involving t, he uses a word, say “equine”, that (unbeknownst to S) his fellow speakers do not apply to horsey looking cows. In fact S has very few beliefs about how “equine” is used in his community, or general beliefs about the kind picked out by “equine”/t. He doesn’t have a view, for instance, about whether it is possible for equines to produce milk, or whether other people use “equine” with the same meaning he does, or whether an equine would still be an equine if his eyesight were better. S just isn’t that reflective. What he does have views about is whether all the animals in yonder field are equines, and he’s confident that they are. In fact, many of them are horsey looking cows.

What does S’s public term “equine”, and mental term t, denote[1]? It seems to me that it denotes HORSE, not HORSE OR HORSEY LOOKING COW. S is simply mistaken in a lot of his judgments involving “equine”. I’m not going to take a stand here on whether that’s because S’s fellow speakers use “equine” to denote HORSE, or because HORSE is more natural than HORSE OR HORSEY LOOKING COW, or because t stands in certain counterfactual relationships to HORSE that it does not stand in to HORSE OR HORSEY LOOKING COW. I’m not going to take a stand on those because I don’t need to. A very wide range of philosophical theories back up the intuition that in this case, “equine” and t denote HORSE.

The knowledge maximisation view has a different consequence, and hence is mistaken. On that view, “equine” and t both denote HORSE OR HORSEY LOOKING COW. That’s because interpreting S that way maximises his knowledge. It means that all, or at least most, of S’s judgments of the form “That’s an equine” are knowledge. If “equine” denotes HORSE, then practically none of them are knowledge. Since those are the bulk of the judgments that S makes using “equine” and t, the interpretation that maximises knowledge will not be the one that says “equine” denotes HORSE.

It might be objected here that S does not know that the things in the field are horses or horsey looking cows. But I think we can fill out the case so that this is not a problem. We certainly can fill out the case so that S’s beliefs, thus interpreted, are (a) true, (b) sensitive, (c) safe and (d) not based on false assumptions. The first three of those should be clear enough. If the only horsey looking things around, either in the actual case or in similar cases, are horses and cows, then we’ll guarantee the truth, sensitivity and safety of S’s beliefs. And if we don’t interpret *any* of the terms in S’s language of thought as denoting HORSE, it isn’t clear why we’d think that there’s any false belief from which S is inferring that those things are all equines. Certainly he doesn’t, on this interpretation, infer this from the false belief that they are horses.

Recall the three matters about which, as noted above, S has no views. If we interpret “equine” as denoting HORSE OR HORSEY LOOKING COW, then none of the following three claims is true.

(1) If S’s vision were better, equines would still be equines.
(2) Equines can generally breed with other equines of the opposite sex.
(3) Most equines are such that most people agree they satisfy “equine”.

If S believed all those things, then possibly it would maximise S’s knowledge to interpret “equine” as denoting HORSE. But S need not believe any such things, and the proper interpretation of his thought and talk does not depend on whether he does. Whether those things are *true* might matter for the interpretation of S’s thought and talk, but whether they are believed does not.

The problem of horsey looking cows is in one sense worse for Williamson than it is for Davidson. Interpreting “equine” and t as denoting HORSE makes many of S’s statements and thoughts false. (Namely, the ones that are about horsey looking cows.) But it makes many more of them not knowledge. If S really can’t distinguish between horses and horsey looking cows, then even a belief about a horse that it’s an equine might not be knowledge on that interpretation. So knowledge maximisation pushes us strongly towards the disjunctive interpretation of “equine” and t.

It might be objected that if we interpret “equine” as denoting HORSE, then although S knows fewer things, the things he knows are stronger. There are two quite distinct replies we can make to this objection.

First, if we take that line seriously, the knowledge maximisation approach will go from having a problem with disjunctive interpretations to having a problem with conjunctive interpretations. Assume that S is in Australia, but has no term in his language that could plausibly be interpreted as denoting Australia. Now compare an interpretation of S that takes “equine” to denote HORSE, and one that takes it to denote HORSE IN AUSTRALIA. Arguably the latter produces as much knowledge as the former (which might not be a lot), plus the knowledge that the horses are Australian.

I assume here that beliefs that are true, safe, sensitive, and not based on false lemmas constitute knowledge. So S’s belief, if we interpret him as having this belief, that the horses he sees are horses in Australia, would count as knowledge. Perhaps that’s too weak a set of constraints, but the general pattern should be clear: if we take any non-sceptical account of knowledge, there will be ways to increase S’s knowledge by making unreasonably strict interpretations of his predicates.

I also assume in this reply that if S doesn’t have a term that denotes Australia, then he won’t independently know that the horses (or horsey looking cows) that he sees are in Australia. That is, I assume that the interpretation of S’s mental states must be somewhat compositional. That, I think, has got to be one of the constraints that we put on interpretations. Otherwise we could simply interpret every sentence S utters as meaning the conjunction of everything he knows.

The second reply to this objection is that it is very hard even to get a sense of how to weigh up the costs and benefits, from the perspective of knowledge maximisation, of proposals like this one. And that’s because there’s nothing like an algorithm, agreed upon in advance, for determining what would count as more or less knowledge. Without that, it feels like “interpret so as to maximise knowledge” is a theory schema, rather than a theory.

Williamson starts this chapter by noting that most atomist (what he calls molecularist) theories of mental and verbal content are subject to counterexample. That’s true. But it’s true in large part because those theories do fill in a lot more details. I more than strongly suspect that if the knowledge maximisation view were filled out with as much detail as, say, some of Fodor’s attempts at an atomist theory, the counterexamples would fly just as thick and fast.

This isn’t, or at least isn’t merely, a complaint about a theory lacking details. It is far from obvious that there is *any* decent way to compare two bodies of knowledge and say which has more or less, save in the special case where one is a subset of the other. If we (a) thought that knowing that p was equivalent to eliminating not-p worlds, and (b) had some natural measure over the space of possible worlds, then we could compare bodies of knowledge by comparing the measure of the set of worlds compatible with that knowledge. But (a) is doubtful, and (b) is clearly false. And without those assumptions, or something like them, where are we even to start looking for the kind of comparison Williamson needs?
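Just to fix ideas, here is the comparison that assumptions (a) and (b) would license, in notation I’m introducing purely for illustration. Writing $W_K$ for the set of worlds compatible with a body of knowledge $K$, and $\mu$ for the putative natural measure from (b):

$$K_1 \text{ contains more knowledge than } K_2 \quad\text{iff}\quad \mu(W_{K_1}) < \mu(W_{K_2})$$

The complaint above is precisely that no such $\mu$ exists, and that (a) is doubtful anyway, so even this schematic comparison is unavailable.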

fn1. By “denote” I mean whatever relation holds between a predicate (in natural language or LOT) and what it picks out. Perhaps, following some recent work by David Liebesman, I should use “ascribe” here. I think nothing of consequence for this argument turns on the choice of terminology.