Williamson’s Principle of Charity

In Chapter 8 of The Philosophy of Philosophy, Timothy Williamson defends a new principle of charity. He says we should interpret people in such a way as to maximise what they know. This principle is intended to be constitutive of mental content, in the way that Davidson, Lewis and others have taken charity to be constitutive. That is, the content of someone’s thought and talk just is that content that would (within some constraints) maximise how much they know.

Williamson argues, persuasively, that this version of charity avoids some of the problems that have plagued prior versions. For instance, if I have many beliefs that are caused by contact with x, but are only true if interpreted as beliefs about y (with whom I have no causal contact), Williamson’s principle does not lead us to interpret those beliefs as beliefs about y. That’s because such an interpretation, even if it would maximise the amount of truth I believed, wouldn’t increase how much knowledge I have, because I couldn’t know those things about y.

But some of the traditional problems attaching to charity-based theories of content remain. For instance, Williamson has a problem with horsey looking cows.

Imagine that S is very bad at distinguishing between horses and certain cows, which we’ll call horsey looking. S has a term, t, in his language of thought that he applies indiscriminately to horses and horsey looking cows. When S wants to express thoughts involving t, he uses a word, say “equine”, that (unbeknownst to S) his fellow speakers do not use when faced with horsey looking cows. In fact S has very few beliefs about how “equine” is used in his community, or general beliefs about the kind picked out by “equine”/t. He doesn’t have a view, for instance, about whether it is possible for equines to produce milk, or whether other people use “equine” with the same meaning he does, or whether an equine would still be an equine if his eyesight were better. S just isn’t that reflective. What he does have views about is whether all the animals in yonder field are equines, and he’s confident that they are. In fact, many of them are horsey looking cows.

What does S’s public term “equine”, and mental term t, denote?1 It seems to me that it denotes HORSE, not HORSE OR HORSEY LOOKING COW. S is simply mistaken about a lot of his judgments involving “equine”. I’m not going to take a stand here on whether that’s because S’s fellow speakers use “equine” to denote HORSE, or because HORSE is more natural than HORSE OR HORSEY LOOKING COW, or because t stands in certain counterfactual relationships to HORSE that it does not stand in to HORSE OR HORSEY LOOKING COW. I’m not going to take a stand on those because I don’t need to. A very wide range of philosophical theories back up the intuition that in this case, “equine” and t denote HORSE.

The knowledge maximisation view has a different consequence, and hence is mistaken. On that view, “equine” and t both denote HORSE OR HORSEY LOOKING COW. That’s because interpreting S that way maximises his knowledge. It means that all, or at least most, of S’s judgments of the form “That’s an equine” are knowledge. If “equine” denotes HORSE, then practically none of them are knowledge. Since those are the bulk of the judgments that S makes using “equine” and t, the interpretation that maximises knowledge will not be the one that says “equine” denotes HORSE.
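
To see the shape of the counting argument, here is a toy tally. The numbers are invented purely for illustration, and I assume (as in the next paragraph) that the disjunctively interpreted beliefs meet whatever further conditions knowledge requires. Suppose S has made 100 judgments of the form “That’s an equine”, 60 directed at horses and 40 at horsey looking cows, and let K(I) be how many of those judgments are knowledge under interpretation I:

\[
K(\textsc{horse or horsey looking cow}) = 60 + 40 = 100, \qquad
K(\textsc{horse}) \le 60, \text{ and plausibly } K(\textsc{horse}) = 0.
\]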

It might be objected here that S does not know that the things in the field are horses or horsey looking cows. But I think we can fill out the case so that this is not a problem. We certainly can fill out the case so that S’s beliefs, thus interpreted, are (a) true, (b) sensitive, (c) safe and (d) not based on false assumptions. The first three of those should be clear enough. If the only horsey looking things around, either in the actual case or in similar cases, are horses and cows, then the truth, sensitivity and safety of S’s beliefs are guaranteed. And if we don’t interpret any of the terms in S’s language of thought as denoting HORSE, it isn’t clear why we’d think that there’s any false belief from which S is inferring that those things are all equines. Certainly he doesn’t, on this interpretation, infer this from the false belief that they are horses.

As noted above, if we interpret “equine” as denoting HORSE OR HORSEY LOOKING COW, then none of the following three claims is true:

(1) If S’s vision were better, equines would still be equines.
(2) Equines can generally breed with other equines of the opposite sex.
(3) Most equines are such that most people agree they satisfy “equine”.

If S believed all those things, then possibly it would maximise S’s knowledge to interpret “equine” as denoting HORSE. But S need not believe any such things, and the proper interpretation of his thought and talk does not depend on whether he does. Whether those things are true might matter for the interpretation of S’s thought and talk, but whether they are believed does not.

The problem of horsey looking cows is in one sense worse for Williamson than it is for Davidson. Interpreting “equine” and t as denoting HORSE makes many of S’s statements and thoughts false. (Namely the ones that are about horsey looking cows.) But it makes many more of them not knowledge. If S really can’t distinguish between horses and horsey looking cows, then even a belief about a horse that it’s an equine might not be knowledge on that interpretation. So knowledge maximisation pushes us strongly towards the disjunctive interpretation of “equine” and t.

It might be objected that if we interpret “equine” as denoting HORSE, then although S knows fewer things, the things he knows are stronger. There are two quite distinct replies we can make to this objection.

First, if we take that line seriously, the knowledge maximisation approach will go from having a problem with disjunctive interpretations to having a problem with conjunctive interpretations. Assume that S is in Australia, but has no term in his language that could plausibly be interpreted as denoting Australia. Now compare an interpretation of S that takes “equine” to denote HORSE, and one that takes it to denote HORSE IN AUSTRALIA. Arguably the latter produces as much knowledge as the former (which might not be a lot) plus the knowledge that the horses are Australian.

I assume here that beliefs that are true, safe, sensitive, and not based on false lemmas constitute knowledge. So S’s belief, if we interpret him as having this belief, that the horses he sees are horses in Australia, would count as knowledge. Perhaps that’s too weak a set of constraints, but the general pattern should be clear: if we take any non-sceptical account of knowledge, there will be ways to increase S’s knowledge by making unreasonably strict interpretations of his predicates.

I also assume in this reply that if S doesn’t have a term that denotes Australia, then he won’t independently know that the horses (or horsey looking cows) that he sees are in Australia. That is, I assume that the interpretation of S’s mental states must be somewhat compositional. That’s, I think, got to be one of the constraints that we put on interpretations. Otherwise we could simply interpret every sentence S utters as meaning the conjunction of everything he knows.

The second reply to this objection is that it is very hard even to get a sense of how to weigh up the costs and benefits, from the perspective of knowledge maximisation, of proposals like this one. And that’s because there’s nothing like an algorithm, agreed upon in advance, for determining what would count as more or less knowledge. Without that, the view “Interpret so as to maximise knowledge” feels like a theory schema rather than a theory.

Williamson starts this chapter by noting that most atomist (what he calls molecularist) theories of mental and verbal content are subject to counterexample. That’s true. But it’s true in large part because those theories do fill in a lot more details. I strongly suspect that if the knowledge maximisation view were filled out with as much detail as, say, some of Fodor’s attempts at an atomist theory, the counterexamples would fly just as thick and fast.

This isn’t, or at least isn’t merely, a complaint about a theory lacking details. It is far from obvious that there is any decent way to compare two bodies of knowledge and say which has more or less, save in the special case where one is a subset of the other. If we (a) thought that knowing that p was equivalent to eliminating not-p worlds, and (b) we had some natural measure over the space of possible worlds, then we could compare bodies of knowledge by comparing the measure of the set of worlds compatible with that knowledge. But (a) is doubtful, and (b) is clearly false. And without those assumptions, or something like them, where are we to even start looking for the kind of comparison Williamson needs?
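
Just to make explicit the comparison that would be available if (a) and (b) both held (which, to repeat, they do not): write W(K) for the set of worlds compatible with a body of knowledge K, and let μ be the supposed natural measure over worlds. Then K₁ would count as more knowledge than K₂ just in case

\[
\mu\bigl(W(K_1)\bigr) < \mu\bigl(W(K_2)\bigr),
\]

that is, just in case K₁ leaves less of the space of worlds uneliminated.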

1 By “denote” I mean whatever relation holds between a predicate (in natural language or LOT) and what it picks out. Perhaps, following some recent work by David Liebesman, I should use “ascribe”. Nothing of consequence for this argument turns on the choice of terminology.

4 Replies to “Williamson’s Principle of Charity”

  1. This is interesting, Brian. I think that this project of Williamson’s is one of the most intriguing parts of his recent work, but this post is the first somewhat serious commentary I’ve read on it. (It’s been floating around for a little while now — the second half of “Philosophical ‘intuitions’ and scepticism about judgement” is pretty close to identical to the new chapter 8.)

    I’m not sure if I have a solid grip on just what your subject S is like with respect to his ‘equine’. I wonder whether the agnosticism restrictions you put on him force him to be more alien than he might at first seem.

    I take it that you mean to deny that S believes any of the following:

    ‘equine’ picks out a natural kind.
    ‘equine’ is the name of a species.
    ‘equine’ means the same thing that the person who taught me the word ‘equine’ meant by ‘equine’.
    sometimes I might mistake a non-equine for an equine, because my eyesight is fallible.

    If he believes any of these things, then Williamson can point to a respect in which knowledge maximization favors the HORSE interpretation. (I agree that there is a worry about how to manage trade-offs.)

    So I guess S has to fail to believe any of these things, in order for your counterexample to work. He must not believe any of these things even tacitly. (I am assuming that many of our beliefs, and much of our knowledge, is at most times tacit.) It’s one thing to be unreflective — I’m having a harder time getting my head around someone who doesn’t even have such tacit beliefs.

    Once we’ve stipulated so many natural beliefs away, I’m not sure it’s obvious that ‘equine’ in S’s idiolect means HORSE. Can you say a little more about why we should favor that interpretation?

  2. These are very natural beliefs for an adult, but I don’t think that they are particularly natural for a younger child. It is pretty easy to imagine a child that has several beliefs about which things in its vicinity are horses, and doesn’t even have the concept of a natural kind, a species, or fallible eyesight, etc. And still I think, if the people around the child mean HORSE by ‘horse’, that the child does too.

    In general, I think most of the arguments for social externalism will be problems for Williamson, because he makes meaning depend on the speaker’s mental states. Wide states, to be sure, but still just the speaker’s.

    And even if I let S have some of those beliefs, we need to show two things before Williamson is off the hook.

    1) The beliefs are knowledge on the natural interpretation.
    2) Interpreting S that way produces more knowledge, by the relevant weighting.

    As you say, (2) really isn’t obvious, and Williamson doesn’t say much to make us believe it. (There’s really a lot of reliance on what a weighting of knowledge might show, without anything looking like a proof.) But note that (1) isn’t trivial either. It’s true that the natural interpretation makes these beliefs true. But I don’t think it is clear it makes them knowledge. If the ‘weighting’ referred to in (2) is close, then the metalinguistic belief you mentioned may not be knowledge.

  3. Oh, ok. I didn’t realize the central role that social externalism was playing. I’m inclined to re-cast my worry with different examples, then.

    The key to your argument is that S is, even if a young child, a member of a community with a shared language in which ‘horse’ means horse, instead of horsey-thing.

    Of course, it’s POSSIBLE for a person in a community like that to use the word ‘horse’ to mean horsey-thing. I can use words to mean whatever I want. I could stipulate right now, for instance, that my word ‘horse’ means horsey-thing, explicitly saying that I don’t care whether that’s what other members of my community mean by ‘horse’. Obviously, your subject isn’t like that.

    What I’m wondering is whether certain (maybe tacit) beliefs are necessary for inheriting the social meaning of the shared word. I guess I think it’s plausible that some might be. These, for instance:

    mommy means horse by ‘horse’
    daddy means horse by ‘horse’
    I mean the same thing mommy and daddy mean by ‘horse’

    Both of the following seem plausible to me: First, ordinary children, even unreflective ones, believe and even know many propositions like these. Second, such beliefs/knowledge may be constitutive of the sorts of social practices that give rise to social externalism.

    I think the first claim is pretty hard to deny. The second strikes me as less obvious, mostly because people who lack these beliefs are pretty weird. But my thinking is that to deny, or even withhold judgment on, the claim that the members of my community mean what I mean with some word is just what it takes to divorce oneself from the community in the way gestured at by my story about stipulation above.

    If both of these claims are true, then Williamson is back in the game.

  4. S could conceivably have forgotten which of his parents taught him the word ‘horse’. So he may not have those beliefs.

    But more plausibly, he may not have the concept of a meaning. I don’t think it’s necessary to have this concept in order to mean things by words. If he doesn’t, I think it’s hard to say these are even tacit beliefs. And if he doesn’t have them because he doesn’t have the concept, that isn’t withholding in any interesting sense.

    Having said that, I agree that there must be some way to “opt out” of a social practice, and maybe something like consciously withholding these beliefs is a way of opting out. I guess I just think that opting out requires more conceptual sophistication than it takes to have the concept HORSE.

    And I think even if I grant you everything you say here, it only barely lets Williamson back in the game. He’d still have to show (a) that these metalinguistic beliefs of S’s are knowledge, no easy task I’d think, and (b) that this gives S more knowledge in the salient sense.

    One last point. The principle of charity starts to behave very oddly, I think, when we change what S knows by changing the interpretation of words that aren’t used in the sentence, as in “I mean the same thing mommy and daddy mean by ‘horse’”. This isn’t an objection; it’s just not the way I think the principle of charity is usually used. There are probably some tricky issues here to be worked through.
