In Chapter 8 of The Philosophy of Philosophy, Timothy Williamson defends a new principle of charity. He says we should interpret people in such a way as to maximise what they know. This principle is intended to be constitutive of mental content, in the way that Davidson, Lewis and others have taken charity to be constitutive. That is, the content of someone’s thought and talk just is that content that would (within some constraints) maximise how much they know.
Williamson argues, persuasively, that this version of charity avoids some of the problems that have plagued prior versions. For instance, if I have many beliefs that are caused by contact with x, but are only true if interpreted as beliefs about y (with whom I have no causal contact), Williamson’s principle does not lead us to interpret those beliefs as beliefs about y. That’s because such an interpretation, even if it would maximise the amount of truth I believed, wouldn’t increase how much knowledge I have, because I couldn’t know those things about y.
But some of the traditional problems attending charity-based theories of content remain. For instance, Williamson has a problem with horsey looking cows.
Imagine that S is very bad at distinguishing between horses and certain cows, which we’ll call horsey looking. S has a term, t, in his language of thought that he applies indiscriminately to horses and horsey looking cows. When S wants to express thoughts involving t, he uses a word, say “equine”, that (unbeknownst to S) his fellow speakers do not use when faced with horsey looking cows. In fact S has very few beliefs about how “equine” is used in his community, or general beliefs about the kind picked out by “equine”/t. He doesn’t have a view, for instance, about whether it is possible for equines to produce milk, or whether other people use “equine” with the same meaning he does, or whether an equine would still be an equine if his eyesight were better. S just isn’t that reflective. What he does have views about is whether all the animals in yonder field are equines, and he’s confident that they are. In fact, many of them are horsey looking cows.
What does S’s public term “equine”, and mental term t, denote[1]? It seems to me that both denote HORSE, not HORSE OR HORSEY LOOKING COW. S is simply mistaken in a lot of his judgments involving “equine”. I’m not going to take a stand here on whether that’s because S’s fellow speakers use “equine” to denote HORSE, or because HORSE is more natural than HORSE OR HORSEY LOOKING COW, or because t stands in certain counterfactual relationships to HORSE that it does not stand in to HORSE OR HORSEY LOOKING COW. I’m not going to take a stand on those because I don’t need to. A very wide range of philosophical theories back up the intuition that in this case, “equine” and t denote HORSE.
The knowledge maximisation view has a different consequence, and hence is mistaken. On that view, “equine” and t both denote HORSE OR HORSEY LOOKING COW. That’s because interpreting S that way maximises his knowledge. It means that all, or at least most, of S’s judgments of the form “That’s an equine” are knowledge. If “equine” denotes HORSE, then practically none of them are knowledge. Since those are the bulk of the judgments that S makes using “equine” and t, the interpretation that maximises knowledge will not be the one that says “equine” denotes HORSE.
It might be objected here that S does not know that the things in the field are horses or horsey looking cows. But I think we can fill out the case so that this is not a problem. We certainly can fill out the case so that S’s beliefs, thus interpreted, are (a) true, (b) sensitive, (c) safe and (d) not based on false assumptions. The first three of those should be clear enough. If the only horsey looking things around, either in the actual case or in similar cases, are horses and cows, then we’ll guarantee the truth, sensitivity and safety of S’s belief. And if we don’t interpret any of the terms in S’s language of thought as denoting HORSE, it isn’t clear why we’d think that there’s any false belief from which S is inferring that those things are all equines. Certainly he doesn’t, on this interpretation, infer this from the false belief that they are horses.
As noted above, S takes no stand on matters like the following. And if we interpret “equine” as denoting HORSE OR HORSEY LOOKING COW, then none of these three claims is true.
(1) If S’s vision were better, equines would still be equines.
(2) Equines can generally breed with other equines of the opposite sex.
(3) Most equines are such that most people agree they satisfy “equine”.
If S believed all those things, then possibly it would maximise S’s knowledge to interpret “equine” as denoting HORSE. But S need not believe any such things, and the proper interpretation of his thought and talk does not depend on whether he does. Whether those things are true might matter for the interpretation of S’s thought and talk, but whether they are believed does not.
The problem of horsey looking cows is in one sense worse for Williamson than it is for Davidson. Interpreting “equine” and t as denoting HORSE makes many of S’s statements and thoughts false. (Namely, the ones that are about horsey looking cows.) But it makes many more of them not knowledge. If S really can’t distinguish between horses and horsey looking cows, then even a belief about a horse that it’s an equine might not be knowledge on that interpretation. So knowledge maximisation pushes us strongly towards the disjunctive interpretation of “equine” and t.
It might be objected that if we interpret “equine” as denoting HORSE, then although S knows fewer things, the things he knows are stronger. There are two quite distinct replies we can make to this objection.
First, if we take that line seriously, the knowledge maximisation approach will go from having a problem with disjunctive interpretations to having a problem with conjunctive interpretations. Assume that S is in Australia, but has no term in his language that could plausibly be interpreted as denoting Australia. Now compare an interpretation of S that takes “equine” to denote HORSE, and one that takes it to denote HORSE IN AUSTRALIA. Arguably the latter produces as much knowledge as the former (which might not be a lot) plus the knowledge that the horses are Australian.
I assume here that beliefs that are true, safe, sensitive, and not based on false lemmas constitute knowledge. So S’s belief, if we interpret him as having it, that the horses he sees are horses in Australia would count as knowledge. Perhaps that’s too weak a set of constraints, but the general pattern should be clear: if we take any non-sceptical account of knowledge, there will be ways to increase S’s knowledge by making unreasonably strict interpretations of his predicates.
I also assume in this reply that if S doesn’t have a term that denotes Australia, then he won’t independently know that the horses (or horsey looking cows) that he sees are in Australia. That is, I assume that the interpretation of S’s mental states must be somewhat compositional. That’s, I think, got to be one of the constraints that we put on interpretations. Otherwise we could simply interpret every sentence S utters as meaning the conjunction of everything he knows.
The second reply to this objection is that it is very hard to even get a sense of how to weigh up the costs and benefits, from the perspective of knowledge maximisation, of proposals like this one. And that’s because there’s nothing like an antecedently agreed upon algorithm for determining what would count as more or less knowledge. Without that, the view “Interpret so as to maximise knowledge” feels like a theory schema, rather than a theory.
Williamson starts this chapter by noting that most atomist (what he calls molecularist) theories of mental and verbal content are subject to counterexample. That’s true. But it’s true in large part because those theories do fill in a lot more details. I more than strongly suspect that if the knowledge maximisation view were filled out with as much detail as, say, some of Fodor’s attempts at an atomist theory, the counterexamples would fly just as thick and fast.
This isn’t, or at least isn’t merely, a complaint about a theory lacking details. It is far from obvious that there is any decent way to compare two bodies of knowledge and say which has more or less, save in the special case where one is a subset of the other. If we (a) thought that knowing that p was equivalent to eliminating not-p worlds, and (b) we had some natural measure over the space of possible worlds, then we could compare bodies of knowledge by comparing the measure of the set of worlds compatible with that knowledge. But (a) is doubtful, and (b) is clearly false. And without those assumptions, or something like them, where are we to even start looking for the kind of comparison Williamson needs?
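To see what such a comparison would even require, here is a sketch of the measure-theoretic proposal just described, granting assumptions (a) and (b) purely for illustration; the notation is mine, not Williamson’s. Let W be the space of possible worlds, μ a measure on W, and K_I(S) the set of propositions S knows under interpretation I.

```latex
\[
  C_I(S) \;=\; \{\, w \in W : w \models p \ \text{for every } p \in K_I(S) \,\}
\]
\[
  I_1 \text{ yields more knowledge than } I_2
  \quad\text{iff}\quad
  \mu\bigl(C_{I_1}(S)\bigr) \;<\; \mu\bigl(C_{I_2}(S)\bigr)
\]
```

On this toy ordering, knowing more just is being compatible with a smaller (by measure) set of worlds. Without assumptions (a) and (b), even this crude ordering is unavailable, and only the subset comparison remains.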
[1] By “denote” I mean to pick out whatever relation holds between a predicate (in natural language or LOT) and what it picks out. Perhaps, following some recent work by David Liebesman, I should use “ascribe”. I think nothing of consequence for this argument turns on the choice of terminology here.