Gordon Belot, “Bayesian Orgulity”

I’ve been thinking over the last few days about my colleague Gordon Belot’s forthcoming paper Bayesian Orgulity. In it he poses a series of very difficult challenges to Bayesianism. I’ve been trying to think about how the imprecise Bayesian can respond to these challenges. (I’m thinking of what response an imprecise Bayesian who thinks all updating goes by conditionalisation could make to Gordon’s arguments. This isn’t my view about updating.)

Here’s one example that Gordon uses. The agent, call her A, is going to get data in the form of a series of 0s and 1s. She is investigating the hypothesis that the data is periodic. Say that she succeeds iff one of the following two conditions holds.

  • The data is periodic, and eventually her credence that it is periodic goes above 0.5 and stays there.
  • The data is not periodic, and eventually her credence that it is not periodic goes above 0.5 and stays there.

Call the data sequence for which a prior succeeds its success set, and its complement its failure set.
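The success condition can be made concrete in code. This is a toy sketch, not anything from Gordon’s paper: since “eventually goes above 0.5 and stays there” can’t be verified on a finite run of data, the function approximates it by asking whether the credence stays above the threshold from some point to the end of the observed trajectory. The function name and the finite-horizon approximation are my assumptions.

```python
def succeeds(is_periodic, credences_in_periodic, threshold=0.5):
    """Toy, finite-horizon check of the success condition.

    is_periodic: whether the (full, infinite) data sequence is in fact periodic.
    credences_in_periodic: the agent's credence that the data is periodic,
    recorded after each observed bit.
    """
    if is_periodic:
        trajectory = credences_in_periodic
    else:
        # Success here requires credence that the data is NOT periodic
        # to go above the threshold and stay there.
        trajectory = [1 - c for c in credences_in_periodic]
    # Is there a point after which the trajectory stays above the threshold?
    return any(all(c > threshold for c in trajectory[i:])
               for i in range(len(trajectory)))
```

On an infinite sequence the real condition quantifies over all later times, so this finite check can only ever be suggestive, but it fixes the shape of the definition.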

Gordon suggests the following two constraints on a prior:

  1. For any initial data sequence x, there are further data sequences y and z such that (a) the agent will have credence greater than 0.5 that the sequence is periodic after getting x + y, and (b) the agent will have credence less than 0.5 that the sequence is periodic after getting x + z. Call any prior with this property open-minded.
  2. The probability that the agent using this prior will succeed (in the sense described above) is not 1.

Much of the paper is an argument for the second condition. The argument, if I’ve understood it correctly, is that for any open-minded prior, the data sequences for which it succeeds are highly atypical. Its success set is measure 0 and meagre, while its failure set is dense (being, obviously, the complement of a meagre measure 0 set).

And, as you might have guessed by now, it is impossible to meet these two conditions as a Bayesian agent. Any open-minded prior gives probability 0 to its own failure set. Gordon argues this is a very bad result for Bayesians, and I’m inclined to agree.

This post has gone on long enough, so I’ll leave how the imprecise Bayesian could respond to another post. I think this is a real problem, and indicative of deeper problems that Bayesians (especially precise Bayesians) have with countable infinities.

How to Make Your Journal Open Access in One Easy Step

Imagine that a prominent journal, let’s call it The Philosophy Journal, has a well-functioning, but subscription-only, website. As per usual for well-run journals, subscribers can download a PDF of any article they want, and it looks exactly like the printed version that subscribers can read on paper. It also lets non-subscribers see some information, such as the bibliographic information, the abstract, and perhaps the first page.

Now imagine this journal adds another feature to the part of the website available to everyone. It uploads the final version of the paper submitted by the author(s), in whatever form the paper was submitted in. It also puts these papers in a place they can be properly indexed by the appropriate crawlers. In that way, anyone in the world could read the papers in the journal. But to see them in the polished final form, and certainly to track down page numbers for citation purposes, you’d have to be a subscriber.

Two questions about this thought experiment.

  1. Would The Philosophy Journal now be an open access journal?
  2. Would adding these features to its website cause subscribers, especially library subscribers, to unsubscribe from The Philosophy Journal?

I’m tempted to say the answers are ‘yes, more or less’ and ‘no, or at least not many’.

It wouldn’t be the optimal form of open access, à la Philosophers’ Imprint or Semantics and Pragmatics, but it would be much better than nothing. In particular, it would promote what I think are the two big benefits of open access in philosophy: making leading work available to people at universities that do not (or cannot) subscribe to the best journals, and making this work available to journalists, magazine writers and the like who are interested in philosophy, but do not have access to university libraries.

And at least for prominent journals, I don’t think this would be sufficient grounds for unsubscribing. A library would still prefer to have good journals in archival formats (and a repository of self-submitted papers is not such a format), and it is important for researchers to be able to properly cite papers. I’m far from 100% certain of this, but I suspect it wouldn’t cost a lot.

So there you have it – a low cost means of being sorta kinda open access. I’m grateful to Kai von Fintel for suggesting this model. But I’d be interested in hearing views on its prospects and flaws.

Belief and Stability

Robbie Williams has just posted an excellent paper on Accuracy, Logic and Degree of Belief. I wanted to highlight one of the parts with which I strongly agreed.

The overarching idea is that in adopting doxastic attitudes to a proposition, we incur commitment to persist in those attitudes if no new evidence is forthcoming (where persistence is understood as not changing one’s mind, i.e., not adopting a different attitude to the same proposition; I discuss cases of simply ignoring the proposition below). In the limiting case, consider a situation where one simply moves from one moment to the next, with no new input or reflection. It would be bizarre to change one’s (non-indexical) beliefs in such circumstances. Insofar as action, over time, is based on one’s beliefs, it would mean that a course of action started at one time might be abandoned (since it no longer maximizes expected utility) without any prompting from reflection or experience.

Persistence might be construed as a (widescope) diachronic norm on belief. Alternatively, a disposition to retain an attitude to the proposition over time might be constitutive of belief. If what makes something count as a belief is its functional role, then the reflections on extended action above motivate this kind of claim.

I think persistence is, at least for belief, both constitutive and normative. If a kind of state is not disposed to persist, that state is not belief. And if a token of that belief does not persist, in the absence of good reasons for it to be reconsidered, that’s a normative failing.

I ended up with a view like this via Richard Holton’s work. But I hadn’t realised it had an even more notable pedigree. At the Formal Epistemology Workshop, Hannes Leitgeb highlighted the work my colleague Louis Loeb has done in drawing attention to the importance of persistence to Hume’s theory of belief. (For a brief view of this, here is a review of Louis’s 2002 book.)

There has been a lot of work recently on the existence of diachronic norms for belief. I think I’ll start calling the view that there are such norms, and they are primarily norms of persistence, the Humean view. It has a better claim to being genuinely Hume’s view than most views I call Humean!

Survival and Decision Making

First, an apology. I messed up the system that notifies me when there are comments awaiting moderation, so there were several comments sitting in the queue for several days. That shouldn’t have happened, and I’m sorry it did.

I’ve written up a short note on Robbie Williams’s great paper Decision Making Under Indeterminacy. This was a bit long, and a bit symbol heavy, for a blog post.

The paper concerns cases where the agent is going to split into two, in some sense, and there’s no fact of the matter about which of the two will really be them. I think in those cases it can be rational to act as if it is 50/50 which of them will be you. Robbie, in effect, disagrees. (Or at least, if I’ve read him aright, he disagrees.) I present a couple of cases designed to strengthen the intuition that I’m right. Here’s the paper.

Epistemic Teleology

I mentioned in passing last week Selim Berker’s work on epistemic teleology. This post is basically a link dump, to list a few other sources that seem relevant to thinking about epistemic teleology.

What I’m interested in primarily is how these criticisms of teleology affect our assessment of Jim Joyce’s accuracy domination argument for probabilism.

In his Justification and the Truth Condition, Clayton Littlejohn also argues against epistemic teleology, or as he calls it, epistemic consequentialism. Littlejohn and Berker use similar arguments, but different enough that it’s worth considering both. (Also, yay that Clayton’s book is available as a Kindle edition, and boo that it costs $67. This was one of two books that I went looking for Kindle editions of today, and was put off by the insane price tags.)

Branden Fitelson and Kenny Easwaran have an objection to Joyce that you can see, I think, as turning on the separateness of propositions intuition that Berker and Littlejohn appeal to. Their idea is that it is wrong to use holistic considerations (such as accuracy dominance) to move away from the correct attitude towards a particular proposition. So I suspect there are interesting connections to be made between their objection, and the Berker and Littlejohn objections.

I also suspect, though I don’t know how to argue for this right now, that there will be interesting connections between the right response to the anti-teleologists, and the right response to Michael Caie’s very different kind of objection to Joyce. But that’s for another post; for now I just wanted to keep note of some papers and books that seem relevant to thinking about the connections between epistemic teleology in general, and Joyce’s accuracy arguments in particular.

Some Links

There’s a common way that a blog dies. For whatever reason, the author(s) can’t find time to make a post for a little while. Then there’s a feeling that given the time since the last post, any posting has to be a big deal. After all, if it was a little post, it could have been done earlier. But there’s never any time for that post, or never anything to say that’s a big deal, and possible to say in a blog post. So nothing gets written. The end.

That’s a sad way to go, and there’s an easy solution. Just don’t give in to the feeling that the first post after a hiatus must be substantial.

That’s a very long winded way of introducing a links post. Here are a few things I’m reading, along with some comments on why they seem interesting.

  • Selim Berker’s Epistemic Teleology and the Separateness of Propositions and The Rejection of Epistemic Consequentialism. I like Selim’s project here, which is to generalise the kind of “truth fairy” considerations Carrie Jenkins has raised to argue against a whole class of theories. But I suspect he over-reaches. I think Joyce-style accuracy approaches to credal epistemology are both (a) teleological in the sense Selim is interested in, and (b) immune to his objections. I’d like to think more about this over the summer.
  • Richard Pettigrew’s blog posts on accuracy dominance, which are relevant to the previous bullet point.
  • Wolfgang Schwarz’s Against Magnetism is, I just saw, forthcoming in the AJP. This is fantastic; it’s one of the best papers I’ve read in recent years. It’s just about the only paper which both (a) has me as a target, and (b) convinced me to change my mind on substantial questions. My reply/follow-up is now out in the Journal for the History of Analytic Philosophy.
  • There’s a symposium on Timothy Williamson’s recent work on margin of error principles in Inquiry. I’m not particularly fond of the title of the issue; I think it contributes to the confusion about what is a “Gettier case”. But the papers are great.
  • Teddy Seidenfeld’s When Normal and Extensive Form Decisions Differ is relevant to some work about decision making under indeterminacy.
  • Katya Tentori did a fantastic paper at FEW on evidence that subjects are systematically better at making confirmation judgments than probability judgments. Here’s one sample of the experiments she was reporting, though there was a lot more data in the talk than that.
  • And finally two papers that look interesting, but I haven’t read yet so can’t comment on. Brad Armendt’s Pragmatic Interests and Imprecise Belief, and Hannes Leitgeb’s A Lottery Paradox for Counterfactuals Without Agglomeration.


I don’t normally post announcements for conferences, but I’ll be speaking at this one and I got a special request to post a link here on TAR, so, just as an exception:


On September 14-15, 2013 the University of Notre Dame will host the second Midwest Annual Workshop in Metaphysics (MAWM). We invite and encourage all interested parties to attend! MAWMs are targeted workshops for Midwestern faculty and graduate students working in metaphysics. Each MAWM features 5-7 invited speakers, the majority of whom come from Midwestern institutions. They provide a venue for sharing new research and building community among metaphysicians in the region. For more information and to register for the workshop, visit the website: http://mawms.org/Workshops/2013/

History of Philosophy

In a typical philosophy curriculum, there are some history courses, and some courses that are not history courses. A course on Plato’s metaphysics is a history course; a course on recent work on causation is not. Some courses have a history component. When I teach scepticism at upper levels (or graduate levels), I start with Descartes and Hume. I’m teaching history at that point; I’m not doing so when I go over the recent debate between Jim Pryor and Crispin Wright.

In that sense of ‘history’, which parts of the curriculum do you think count as part of history of philosophy? That is, when are you teaching history, and when are you not? To focus attention, consider which of the following works you would count as part of a history course, or part of the historical part of a course:

  • Mill’s On Liberty;
  • Russell’s “On Denoting”;
  • Moore’s Principia Ethica;
  • Wittgenstein’s Tractatus;
  • Ayer’s Language, Truth and Logic;
  • Ryle’s The Concept of Mind;
  • Austin’s Sense and Sensibilia;
  • Quine’s Word and Object;
  • Gettier’s “Is Justified True Belief Knowledge?”;
  • Davidson’s “Actions, Reasons and Causes”;
  • Grice’s William James lectures (as published in Studies in the Way of Words);
  • Davidson’s “Truth and Meaning”;
  • Anscombe’s Intention;
  • Rawls’s A Theory of Justice;
  • Kripke’s Naming and Necessity;
  • Lewis’s Counterfactuals;
  • Putnam’s “The Meaning of Meaning”;
  • Thomson’s “A Defense of Abortion”;
  • Block’s “Troubles with Functionalism”;
  • Perry’s “The Essential Indexical”;
  • Kripke’s Wittgenstein on Rules and Private Language;
  • Lewis’s “New Work for a Theory of Universals”;
  • Lewis’s On the Plurality of Worlds.

That’s probably enough to give you the spirit of the enterprise. My answer is in the comments.

2013 Marshall M. Weinberg Cognitive Science Symposium

The 2013 Marshall M. Weinberg Cognitive Science Symposium will be on Rethinking Rationality and its Bounds, on Friday April 5, from 9-5.

The keynote speakers are:

  • Jonathan Cohen, Princeton University, Department of Psychology
  • David Danks, Carnegie Mellon University, Department of Philosophy
  • Konrad Körding, Northwestern University, Department of Physiology
  • Laura Schulz, Massachusetts Institute of Technology, Department of Brain & Cognitive Sciences

And the discussion panel will be:

  • Susan Gelman, University of Michigan, Department of Psychology
  • Andrew Howes, University of Birmingham, Department of Computer Science
  • Jim Joyce, University of Michigan, Department of Philosophy
  • Stephanie Preston, University of Michigan, Department of Psychology
  • Satinder Singh, University of Michigan, Computer Science

Here’s the abstract for the workshop:

To what extent can human thought, action, and choice be understood as rational? For several decades the dominant view in the social and behavioral sciences has been that people routinely make suboptimal choices—a view based on findings that seem to indicate that people violate normative principles of thought ranging from rational choice theory to propositional logic. Beginning with Herb Simon’s work on bounded rationality, many have assumed that the gaps between observed and normative behavior are due in large part to bounds on information processing: our brains are simply not up to the task. But many recent approaches in cognitive science can be understood as redefining the problem of rational behavior, by incorporating assumptions about experience, local and evolutionary environments of adaptation, properties of the brain’s subsystems for perception and action, and even information processing bounds themselves. Taken together, these new approaches more sharply define the problems of optimal choice and action, and paint a new picture of human cognition that suggests it is often a surprisingly good solution to these problems. This symposium will explore these ideas as they are applied to topics ranging from how infants and children explore their environment to how we make rapid choices and move about in the world. The keynote speakers are leading cognitive scientists who engage these issues from the perspectives of psychology, philosophy, neuroscience and computation. The symposium will conclude with a discussion panel that encourages audience participation.

I learned a ton from last year’s workshop on bilingualism, and I’m really looking forward to this one.

Knowledge, Decisions and Games

I was a little puzzled by Stephen Hetherington’s comments about my paper Knowledge, Belief and Interests in his review of Knowledge Ascriptions. Here’s the main thing he says about the paper.

Weatherson’s argument is centred upon the thesis that “knowledge plays an important role in decision theory” (p. 77). His central conditions are that “(a) it is legitimate to write something onto a decision table iff the decision maker knows it to be true, and (b) it is legitimate to leave a possible state of affairs off a decision table iff the decision maker knows it not to obtain” (p. 77). (But does this entail that, when one does not know that p and one also does not know that not-p, one cannot legitimately write p onto a decision table yet one also cannot legitimately leave p off one’s decision table?)

Maybe that wasn’t the clearest way of putting the point I was trying to get at, but I hoped it would have come through clearly in the paper. Here’s another go.

In a decision table, there are rows for the decisions the agent can make, and columns for the possible states of the world, and values in the cells for what will happen if the relevant world-choice pair obtains. Now there are a lot of questions about how to interpret what is, and what is not, on these tables.

One set of questions I don’t take a stand on in this paper concerns what should be on the rows. There are two big questions here. When should we leave a row off, and when should we ‘collapse’ a class of possible agent actions into a single row? Brian Hedden had an interesting paper at Bellingham on some of these issues a couple of years back, and Heather Logue and Matthew Noah Smith had excellent comments on it, and I came away thinking that these were much harder questions than I’d realised. But they aren’t the questions KBI addresses.

I’m more interested in the columns, and to some extent the cells. Here are the (closely related!) questions I’m interested in.

First, when do we need to include a column in which p is true? Answer, I say, when the agent making the decision doesn’t know that p is not true.

Second, when is it legitimate to have a column for the possibilities in which p obtains? The answer here is less clear than to the previous question. Roughly, it’s when there’s no q such that the agent doesn’t know whether q obtains, and the relative success of different actions the agent might undertake is different depending on whether p and q are both true, or whether p is true and q is false.

Finally, there are some questions about what goes into the cells. These aren’t directly the focus of KBI either, but I have some views on them. I’m tempted by the view that one can write v into a cell as its value iff the agent knows that the relative, relevant payout of that cell is v. Why relative? Because all utilities are relative to some choice of baseline. Why relevant? Because how well one’s life goes after choosing an action is obviously unknowable in many important ways. Still, one can know how well things will go in a localised region around the decision, and if we’ve set the table up correctly, other outcomes will be independent enough of what we’ve done.

(Why can’t we just put expected values in the cells? Given an expected utility maximising decision theory, all that matters is that we put the right expected values in. The problem is that thinking about decision tables that way begs the question against those heterodox decision theorists, like say Lara Buchak, who reject expected utility maximisation. I’m a (reluctant) advocate of orthodox decision theory, but I don’t think we should conceptualise decision tables in a way that begs the question against our heterodox friends.)
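The column rule above can be sketched in code. This is a hypothetical toy of my own devising, not anything from KBI: the umbrella example, the payoff numbers, and the function name `build_table` are all assumptions. The core idea it illustrates is just the one stated above: a candidate state gets a column iff the agent doesn’t know that it fails to obtain, and cell values are payoffs relative to a baseline.

```python
def build_table(actions, candidate_states, knows_not, payoff):
    """Build a decision table as a dict of dicts: table[action][state] = value.

    A candidate state s gets a column iff the agent does NOT know not-s;
    states the agent knows not to obtain are legitimately left off.
    """
    states = [s for s in candidate_states if not knows_not(s)]
    return {a: {s: payoff(a, s) for s in states} for a in actions}

# Hypothetical payoffs, relative to a baseline (staying dry, hands free).
payoffs = {
    ("umbrella", "rain"): 0, ("umbrella", "dry"): -1,
    ("no umbrella", "rain"): -5, ("no umbrella", "dry"): 1,
}

# Suppose the agent knows it won't snow, but doesn't know whether it'll rain.
table = build_table(
    actions=["umbrella", "no umbrella"],
    candidate_states=["rain", "dry", "snow"],
    knows_not=lambda s: s == "snow",
    payoff=lambda a, s: payoffs[(a, s)],
)
# 'snow' gets no column; 'rain' and 'dry' both must appear, since the agent
# doesn't know that either fails to obtain.
```

Note that the cells hold plain payoffs, not expected values, which keeps the representation neutral between orthodox and heterodox decision theories in the way discussed below.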

So knowledge matters for decision theory. It also matters for game theory, though the relationship there might be a little less clear. (When we’re thinking about states of the world that are individuated by other actions another player might make, should we use our criteria for row addition/division, or our criteria for column addition/division? I think this question is close to the heart of the debate about the relationship between game theory and decision theory.)

As I said, I had hoped this was clear in the original paper. But maybe it wasn’t, so I’ve tried a different way of stating it here.

There’s another thing though which Hetherington says which I found more perplexing.

Brown’s “Words, Concepts, and Epistemology” confronts a concern many of us have felt. Is there a danger of some recent epistemology’s not really being epistemological? For instance, might even a book called Knowledge Ascriptions not really be so much about knowledge? The worry is whether we can understand epistemology as not being first and foremost about linguistic phenomena and “thought-experiment judgements” (p. 31), even as we encourage reflection upon thought and language — such as knowledge ascriptions — in order to understand whatever epistemology is about first and foremost. Brown’s carefully argued answer is optimistic. And the next three chapters, in effect, seize upon that licence. They defend impurist conceptions of knowledge: pragmatic encroachment (Jeremy Fantl and Matthew McGrath, in “Arguing for Shifty Epistemology”), interest-relative invariantism (often called IRI — Brian Weatherson in “Knowledge, Bets, and Interests”), and contextualism (Michael Blome-Tillmann in “Presuppositional Epistemic Contextualism and the Problem of Known Presuppositions”).

I don’t really know what the general category is supposed to be which sweeps up all the views described at the end of the paragraph. Contextualism is a theory, at least in the first instance, about ‘knows’. It isn’t really a theory about knowledge, any more than a theory of the context-sensitivity of ‘heavy’ is a theory of mass. But that’s not true of interest-relative invariantism. It is a theory of knowledge. It says that whether a person knows p depends, in part, on whether she is sufficiently confident to take p as given, given her interests. This implies something about ‘knows’, given the close relationship between ‘knows’ and knowledge, but it isn’t in the first instance a theory of ‘knows’, any more than Einstein’s theory of relativity is a theory of ‘heavy’.

I’m even more confused by the idea that linguistic phenomena and thought experiment judgments are in any way a natural kind when it comes to epistemological evidence. People who approach epistemology by looking at things like Stanley’s binding argument are not, I would say, taking the same approach people who start with Gettier cases or fake barn cases. And I’m not sure what is to be gained by lumping these methodologies together.