November 29th, 2012

Knowledge, Decisions and Games

I was a little puzzled by Stephen Hetherington’s comments about my paper Knowledge, Belief and Interests in his review of Knowledge Ascriptions. Here’s the main thing he says about the paper.

Weatherson’s argument is centred upon the thesis that “knowledge plays an important role in decision theory” (p. 77). His central conditions are that “ (a) it is legitimate to write something onto a decision table iff the decision maker knows it to be true, and (b) it is legitimate to leave a possible state of affairs off a decision table iff the decision maker knows it not to obtain” (p. 77). (But does this entail that, when one does not know that p and one also does not know that not-p, one cannot legitimately write p onto a decision table yet one also cannot legitimately leave p off one’s decision table?)

Maybe that wasn’t the clearest way of putting the point I was trying to get at, but I had hoped it would come through clearly in the paper. Here’s another go.

In a decision table, there are rows for the decisions the agent can make, and columns for the possible states of the world, and values in the cells for what will happen if the relevant world-choice pair obtains. Now there are a lot of questions about how to interpret what is, and what is not, on these tables.
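The structure described here can be sketched as a small data structure. This is only an illustration of the shape of a table, not anything from the paper; the acts, states, and payoffs are invented:

```python
# A decision table: rows are acts the agent can perform, columns are
# possible states of the world, and cells give the payoff of performing
# that act in that state. All names and numbers here are made up.

acts = ["take umbrella", "leave umbrella"]
states = ["rain", "no rain"]

# table[act][state] = payoff (relative to some choice of baseline)
table = {
    "take umbrella": {"rain": 1, "no rain": 0},
    "leave umbrella": {"rain": -5, "no rain": 2},
}

for act in acts:
    print(act, [table[act][s] for s in states])
```

The interpretive questions below are then: which states deserve columns, and what may legitimately be written into the cells.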

One set of questions I don’t take a stand on in this paper concerns what should be on the rows. There are two big questions here. When should we leave a row off, and when should we ‘collapse’ a class of possible agent actions into a single row? Brian Hedden had an interesting paper at Bellingham on some of these issues a couple of years back, and Heather Logue and Matthew Noah Smith had excellent comments on it, and I came away thinking that these were much harder questions than I’d realised. But they aren’t the questions KBI addresses.

I’m more interested in the columns, and to some extent the cells. Here are the (closely related!) questions I’m interested in.

First, when do we need to include a column in which p is true? Answer, I say, when the agent making the decision doesn’t know that p is not true.

Second, when is it legitimate to have a column for the possibilities in which p obtains? The answer here is less clear than to the previous question. Roughly, it’s when there’s no q such that the agent doesn’t know whether q obtains, and the relative success of different actions the agent might undertake is different depending on whether p and q are both true, or whether p is true and q is false.
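The first criterion can be read as a filter on candidate columns. Here is a toy sketch of that reading, assuming (purely for illustration, and as an idealisation the paper does not make) that the agent’s knowledge is just a set of propositions:

```python
# Toy version of the first column criterion: the table needs a column in
# which p is true iff the agent does not know that p fails to obtain.
# "Knowledge" is modelled as a bare set of known propositions -- an
# idealisation for illustration only.

def needs_column(p, known):
    """Must the table include a column where proposition p is true?"""
    return ("not " + p) not in known

known = {"not snow"}  # the agent knows it won't snow, and nothing else

candidates = ["rain", "snow"]
columns = [p for p in candidates if needs_column(p, known)]
print(columns)  # "rain" stays; "snow" is known not to obtain, so it can go
```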

Finally, there are some questions about what goes into the cells. These aren’t directly the focus of KBI either, but I have some views on them. I’m tempted by the view that one can write v into a cell as its value iff the agent knows that the relative, relevant payout of that cell is v. Why relative? Because all utilities are relative to some choice of baseline. Why relevant? Because how well one’s life goes after choosing an action is obviously unknowable in many important ways. Still, one can know how well things will go in a localised region around the decision, and if we’ve set the table up correctly, other outcomes will be independent enough of what we’ve done.

(Why can’t we just put expected values in the cells? Given an expected utility maximising decision theory, all that matters is that we put the right expected values in. The problem is that thinking about decision tables that way begs the question against those heterodox decision theorists, like say Lara Buchak, who reject expected utility maximisation. I’m a (reluctant) advocate of orthodox decision theory, but I don’t think we should conceptualise decision tables in a way that begs the question against our heterodox friends.)
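To see why this is not merely a notational point, here is a minimal sketch of orthodox expected utility maximisation alongside a Buchak-style risk-weighted alternative. The risk-weighted formula follows the standard presentation of risk-weighted expected utility (order outcomes from worst to best, then weight each improvement by the risk function applied to the probability of doing at least that well); the numbers and the risk function are invented:

```python
# Orthodox expected utility vs. a risk-weighted alternative in the style
# of Buchak's risk-weighted expected utility. Gamble and risk function
# are invented for illustration.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs."""
    return sum(p * u for p, u in outcomes)

def risk_weighted_eu(outcomes, r):
    """Risk-weighted EU: order outcomes worst-first, then add each
    utility increment weighted by r(probability of doing at least
    that well)."""
    ordered = sorted(outcomes, key=lambda pu: pu[1])  # worst utility first
    reu = ordered[0][1]
    for i in range(1, len(ordered)):
        tail = sum(p for p, _ in ordered[i:])  # prob of getting >= u_i
        reu += r(tail) * (ordered[i][1] - ordered[i - 1][1])
    return reu

gamble = [(0.5, 0.0), (0.5, 100.0)]
print(expected_utility(gamble))                    # 50.0
print(risk_weighted_eu(gamble, lambda x: x ** 2))  # 25.0 for a risk-averse r
```

With the identity risk function, the two agree; with a convex r, the risk-averse agent discounts the gamble. If the cells held pre-packaged expected values, this disagreement could not even be stated.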

So knowledge matters for decision theory. It also matters for game theory, though the relationship there might be a little less clear. (When we’re thinking about states of the world that are individuated by other actions another player might make, should we use our criteria for row addition/division, or our criteria for column addition/division? I think this question is close to the heart of the debate about the relationship between game theory and decision theory.)

As I said, I had hoped this was clear in the original paper. But maybe it wasn’t, so I’ve tried a different way of stating it here.

There’s another thing though which Hetherington says which I found more perplexing.

Brown’s “Words, Concepts, and Epistemology” confronts a concern many of us have felt. Is there a danger of some recent epistemology’s not really being epistemological? For instance, might even a book called Knowledge Ascriptions not really be so much about knowledge? The worry is whether we can understand epistemology as not being first and foremost about linguistic phenomena and “thought-experiment judgements” (p. 31), even as we encourage reflection upon thought and language — such as knowledge ascriptions — in order to understand whatever epistemology is about first and foremost. Brown’s carefully argued answer is optimistic. And the next three chapters, in effect, seize upon that licence. They defend impurist conceptions of knowledge: pragmatic encroachment (Jeremy Fantl and Matthew McGrath, in “Arguing for Shifty Epistemology”), interest-relative invariantism (often called IRI — Brian Weatherson in “Knowledge, Bets, and Interests”), and contextualism (Michael Blome-Tillmann in “Presuppositional Epistemic Contextualism and the Problem of Known Presuppositions”).

I don’t really know what the general category is supposed to be that sweeps up all the views described at the end of the paragraph. Contextualism is a theory, at least in the first instance, about “knows”. It isn’t really a theory about knowledge, any more than a theory of the context-sensitivity of “heavy” is a theory of mass. But that’s not true of interest-relative invariantism. It is a theory of knowledge. It says that whether a person knows p depends, in part, on whether she is sufficiently confident to take p as given, given her interests. This implies something about “knows”, given the close relationship between “knows” and knowledge, but it isn’t in the first instance a theory of “knows”, any more than Einstein’s theory of relativity is a theory of “heavy”.

I’m even more confused by the idea that linguistic phenomena and thought-experiment judgments are in any way a natural kind when it comes to epistemological evidence. People who approach epistemology by looking at things like Stanley’s binding argument are not, I would say, taking the same approach as people who start with Gettier cases or fake barn cases. And I’m not sure what is to be gained by lumping these methodologies together.

Posted by Brian Weatherson in Uncategorized



One Response to “Knowledge, Decisions and Games”

  1. Grant Reaber says:

    Hi Brian, the second part of this post touches on a number of things I’ve been thinking about a lot lately. I am glad to see that these kinds of questions are getting more and more attention; epistemologists can be proud of the “atmosphere of increased methodological awareness” Hetherington credits them with having created.

    I think what the views described at the end of the paragraph have in common is that they are alternative explanations of a common batch of evidence about (an aspect of) the usage or meaning of the word “know”; it is no accident that books on IRI have long sections on contextualism. The evidence in question includes both the judgments of informants about putative Gettier cases and theoretical considerations such as the binding argument (unless we categorize those considerations otherwise than as evidence). You say that there is nothing to be gained by lumping together the methodology of Stanley and that of people who “start with Gettier cases,” but I would have thought that even Stanley had to pay attention to Gettier intuitions and that people who start with Gettier cases had better not end with them, so that we do not really have to do with two different but equal methodologies for doing this sort of philosophical linguistics or lexicography.

    Now I hear you objecting that the advocates of IRI are not in the business of linguistics or lexicography at all (even of a specially honorable philosophical kind). But I think they are. It is true that IRI can be stated at the object level while contextualism can only be stated as a metalinguistic theory that mentions the word “know.” Doesn’t that make IRI non-linguistic and more like Einstein’s theory of relativity than Webster’s theory of “heavy”? No. Webster’s theory of “heavy” can also be stated at the object level.
    Contextualists are not forced to the metalinguistic level because their interest is in what “know” means but rather because of the kind of account they want to give of what “know” means. Epistemology, qua separate branch of philosophy, has been largely about the actual meaning of certain favored uses of “know” since long before the advent of contextualism. (Though its prehistory is long, I think the contemporary textbook understanding of epistemology was sealed into place by Gettier’s paper and the way people reacted to it. This understanding sometimes includes “justification” as well as knowledge, but it is also Gettier’s way of framing the issue that gives us our grip on justification.) If epistemologists weren’t mainly interested in what “know” means, they wouldn’t be bothered about delineating the exact contours of what counts as knowledge any more than someone who studies, say, bridge design or coups d’état will think it particularly important to have a rock-solid definition of what counts as a bridge or a coup.

    But why is it that we can study bridges and coups in an almost wholly non-linguistic way, but we cannot do the same with knowledge? That is a tricky question. Part of the explanation, I think, is that epistemologists are only, or often, interested in certain totalizing claims about cases of knowledge like “in any possible case where someone knows something, it is OK for them to act on it.” People who study bridges and coups aren’t so interested in generalizations that will hold of every possible bridge or coup, so it doesn’t so much matter to them what the boundaries of those concepts are. Another part of the explanation, I think, is that propositional knowledge is constituted by our practice of attributing it in a way that bridges and coups are not. However, it is very hard (though important) to be precise here, and how substantial knowledge is, is itself a contested issue. For Cook Wilson, or even Williamson, it may be more substantial than it is for most of the rest of us.

    The bigger issue about recent epistemology not being epistemological is that the post-Gettier conception of epistemology is dubiously compatible with both the idea that people like Descartes, Locke, and Kant (not to mention contemporary probabilists) are interested in epistemology and with the idea that most every philosophical investigation has an epistemological side to it (the yin to its metaphysical yang). It also becomes questionable, when we define epistemology in the contemporary way, if the subject is even important. (I think that, fortunately, this last question can be answered yes, for reasons that have to do with the special role that knowledge attribution plays in our lives and especially with the thought that knowledge is the norm of assertion.)
