Physics and Indifference

Over at the Rutgers blog there’s an interesting discussion about the various puzzles that arise from the conjunction of orthodox views on statistical mechanics with some indifference principles.

This seems to me to be another reason not to like indifference principles, to go alongside the various reasons I gave a few years ago. To be sure, I think to get out of this argument you need a slightly stronger hostility to indifference principles than I give in that paper. In the paper I argued that the following argument failed at step 3.

  1. The situation between us and other entities that are phenomenally like us is epistemically symmetric.
  2. Epistemically symmetric situations should be treated in cognitively symmetric ways.
  3. If you should treat some situations as cognitively symmetric, then you should give them equal credences.
  4. So you should give equal credence to situations that are phenomenally alike.

If you like non-numerical credences, then you should think step 3 is clearly false. But I think steps 1 and 2 alone create some sceptical-sounding results in the cases that quantum mechanics brings up.

So we should focus a little on step 1. Is it really true that phenomenal equivalence implies epistemic equivalence? That seems false for three reasons.

One reason concerns speckled hen type cases. Two people who are phenomenally alike might get different amounts of justification if one is better at tracking/observing fine details of their apparent environment, such as the number of speckles on an observable hen.

Another reason concerns history. Even if you’re a phenomenalist about evidence, it’s plausible that our evidence consists of a stream of phenomena, not just our present phenomena. In “Elusive Knowledge”, for instance, David Lewis says he takes our phenomenal history to be given, and not something threatened by sceptical doubts.

The biggest reason is that we may well be externalist about evidence. As Timothy Williamson has argued, when it comes to (apparent) perception, it’s plausible to identify evidence with what we know. Williamson extends this to all evidence, though I don’t think this is particularly plausible. But that doesn’t matter – if we have different evidence from someone in a sceptical scenario, then some arguments for scepticism don’t get going.

The upshot of all this, I think, is that a serious study of the nature of evidence in epistemology seems to be important for, among other things, physics. Sean Carroll and David Albert have been defending theories whose defence relies on principles most epistemologists would reject. If the epistemologists are right, physics may be a little simpler than some physicists think.

Andrew Bacon on Supertasks

I was reading Andrew Bacon’s paper A Paradox for Infinite Decision Makers, and while I agreed with a lot of the conclusions, I didn’t agree with one of the arguments. The argument concerns this case:

For each n ∈ ω, at 1/n hours past 12 pm Alice and Bob will play a round of the game. A round involves two moves: firstly Alice chooses either 1 or 0, and then Bob makes a similar choice. The moves are made in that order, and both players hear each choice. Alice wins the round if Bob’s choice is the same as hers, and Bob wins if his choice is different. The game finishes at 1 pm, Alice wins the game if she wins at least one round, Bob wins the game if he wins every round.

As Andrew notes, it seems that Bob should win. At every stage, he waits for Alice’s move, and then makes a different move. But he claims Alice has a winning strategy, as follows.

There are various ways that Bob could play throughout a whole game, but any way he plays can be encoded as an ω-sequence of 1’s and 0’s, where the nth term in the sequence represents how he responds in the nth round. Before the game starts, Alice chooses her strategy as follows. Alice divides these sequences into equivalence classes according to whether they differ by finitely many moves at most. With the help of the Axiom of Choice, Alice then picks a representative from each equivalence class and memorises it. At any point after the game has started, Alice will know what moves Bob has made at infinitely many of the rounds, and will only be ignorant of the moves Bob is yet to play, of which there are only finitely many. Thus, at any point after 12 pm, Alice will know to which equivalence class the sequence of moves Bob will eventually make belongs. Her strategy at each round, then, is to play how the representative sequence for this equivalence class predicts Bob will play at that round. If the representative sequence is right about Bob’s move at that round, Alice will win that round. However, the representative sequence and the sequence that represents how Bob actually played, must be in the same equivalence class: they must be the same at all but finitely many rounds. If Alice played according to the representative sequence at every round, then she will have won all but finitely many of the rounds, meaning that she has won the game.

I think this can’t be right for two reasons.

First, I think we can make the intuitive argument that Bob will win slightly more rigorous. (This matters because intuitive arguments are more-or-less counter-indicative of truth when it comes to reasoning about infinities.) Assume that Bob plays the strategy “Wait and see what move Alice makes, then make the opposite move.” Let F(n) be the proposition that Bob wins the round after which there are n more rounds to play. Then F(0) is clearly true – Bob will win the last round. And for arbitrary k, we can prove F(k) → F(k+1). That’s because we can prove F(k+1) outright, and then derive F(k) → F(k+1) by ∨-introduction! Then by mathematical induction, we can infer ∀n F(n): Bob wins every round.
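Though no finite check settles anything about the supertask itself, the round-by-round core of the induction can at least be sanity-checked. Here is a minimal Python sketch (the function and the 50-round truncation are my own illustration, not anything from Bacon’s paper): however Alice plays, Bob’s wait-and-see strategy wins every round of any finite truncation of the game.

```python
# Minimal sketch (my illustration): Bob's "wait and see" strategy in a
# finite truncation of the Alice/Bob matching game.
import random

def play_truncated_game(num_rounds, alice_strategy):
    """Return how many rounds Bob wins when he sees Alice's move first."""
    bob_round_wins = 0
    for n in range(num_rounds):
        alice_move = alice_strategy(n)  # Alice moves first: 0 or 1
        bob_move = 1 - alice_move       # Bob then plays the opposite move
        if bob_move != alice_move:      # Bob wins the round iff the moves differ
            bob_round_wins += 1
    return bob_round_wins

# However Alice plays, Bob wins all 50 rounds of a 50-round truncation.
for _ in range(100):
    moves = [random.randint(0, 1) for _ in range(50)]
    assert play_truncated_game(50, lambda n: moves[n]) == 50
```

Naturally the simulation says nothing about the full ω-sequence of rounds; the point is only that F(n) holds at every finite stage, exactly as the induction claims.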

Second, assuming that Alice’s strategy here works seems to lead to a reductio. Assume that Bob doesn’t play the “Wait and see what Alice does” strategy. Instead he plays a mirror image of Alice’s strategy. That is, he picks a representative of each of the equivalence classes as Andrew defines them. He then notes, as he can do at each stage, which equivalence class Alice’s play is in. He then plays the opposite of the move that the representative sequence predicts. If Andrew’s reasoning is correct, then an exactly parallel argument should prove that Bob wins all but finitely many of the rounds. But it’s impossible for both Alice and Bob to win all but finitely many of the rounds.

So something has gone wrong in the reasoning here. I think, though I’m not sure about this, that the strategy Andrew suggests for Alice will not have any distinctive advantages. Here’s the way I picture the situation. At any move, Alice will already have lost infinitely many rounds, or she will not have. If she has, the strategy can’t stop her losing infinitely many rounds, which is its main supposed advantage. If she has not, then it doesn’t matter what strategy she plays, she still won’t end up losing infinitely many rounds. So playing this strategy doesn’t help either way.

But I’m not at all sure about this diagnosis – it’s definitely a good case to think about, as are all the other points that are raised in the paper.

Some Links

  • I liked Bob Beddor’s post about when it can be epistemically rational to believe something because it will be useful to believe it. It’s always important to remember self-referential cases when making universal generalisations about propositions!
  • It’s nice that Philosophers’ Imprint has instructions for how to make your paper in LaTeX, and even nicer that the relevant macros are on CTAN and included in the standard TeXLive distribution. I wish more journals would simply let me typeset my own papers, rather than rely on rounds of proofreading.
  • I’m also excited about LaTeX for Google Docs, though it will be more useful when we can use our own packages, like the Imprint package, with ease.
  • This iPad review makes the iPad sound like a very pretty, but ultimately expensive and fairly impractical, netbook. Apple-exclusive users might have missed the fact that lightweight computers are no longer more expensive than heavier computers – in fact the MacBook Air might be the last computer that sold at a substantial premium simply for being light.

Philosophy Compass, Volume 5, Issue 4

Philosophy Compass
Volume 5, Issue 4, 2010.
Early View (Articles Available Online in Advance of Print)
Journal Compilation © 2010 Blackwell Publishing Ltd

Chinese Comparative Philosophy
No (More) Philosophy Without Cross-Cultural Philosophy
Karsten J. Struhl
Published Online: 7 Apr 2010
DOI 10.1111/j.1747-9991.2010.00291.x

On the Very Idea of Correlative Thinking
Yiu-ming Fung
Published Online: 7 Apr 2010
DOI 10.1111/j.1747-9991.2010.00294.x

Confucianism and Ethics in the Western Philosophical Tradition I: Foundational Concepts
Mary I. Bockover
Published Online: 7 Apr 2010
DOI 10.1111/j.1747-9991.2010.00295.x

Confucianism and Ethics in the Western Philosophical Tradition II: A Comparative Analysis of Personhood
Mary I. Bockover
Published Online: 7 Apr 2010
DOI 10.1111/j.1747-9991.2010.00297.x

Continental Philosophy
Problems of Other Minds: Solutions and Dissolutions in Analytic and Continental Philosophy
Jack Reynolds
Published Online: 7 Apr 2010
DOI 10.1111/j.1747-9991.2010.00293.x

Logic & Language
Proof Theory in Philosophy of Mathematics
Andrew Arana
Published Online: 7 Apr 2010
DOI 10.1111/j.1747-9991.2010.00282.x

The Grounds of Necessity
Ross P. Cameron
Published Online: 7 Apr 2010
DOI 10.1111/j.1747-9991.2010.00296.x

Teaching & Learning Guide
Teaching & Learning Guide for: Cinema as Philosophy
Paisley Livingston
Published Online: 7 Apr 2010
DOI 10.1111/j.1747-9991.2010.00285.x

Extra-Curricular Activities

One of the complaints about contemporary philosophy that seems to me to have some merit is that we (as a profession) don’t engage sufficiently with other disciplines.[1] I think this has gotten better in some ways, and worse in other ways, over the last generation or so.

It’s gotten better in that there are more philosophers whose work is informed by, and relevant to, another discipline within which they’re deeply entrenched. My paradigm of this is Brennan and Pettit’s The Economy of Esteem, though I’m sure you can think of many more.

It’s gotten worse in that fewer philosophers have a general sense of what’s going on across the university. That’s not surprising – fewer academics in general have a good sense of what’s going on across the university. But it affects philosophy more, since a higher percentage of that work is philosophically relevant.

So here’s a small attempt to do something about this by harnessing the wisdom of (small) crowds.

Which works by non-philosophers do you think it would be good for more philosophers to read?

Leave answers in comments, please!

Here’s my suggestion:

“The General Theory of Second Best”, by R. G. Lipsey and Kelvin Lancaster, in The Review of Economic Studies, Vol. 24, No. 1 (1956 – 1957), pp. 11-32, available at JSTOR

Lipsey and Lancaster discuss what happens when we know that the optimum is reached when a set of parameters takes a particular ideal distribution, and we know that one of the parameters is not going to take its ideal value. The result, in general, is that if we treat that parameter’s value as a fixed constraint, we’re not best off setting all the other parameters to their ideal values.

This is relevant to a whole host of issues in philosophy where we discuss the nature of ideals. Let’s say a philosopher has an argument that, say, the ideal agent’s credence distribution is a probability function. Does that mean that we should try to make our credences into probability functions, or that there’s something wrong with an agent whose credences are not probability functions? Not on its own – it might be that given physical constraints, the best attainable credence distribution (by the very same measures that say the probability functions are absolutely best) is not a probability function. Or say that a philosopher shows that ideal agents only assert what they know. Does it follow that the fewer things one asserts but does not know the better? Obviously not – the second-best solution (which might be all that’s attainable) might involve quite a bit of assertion without knowledge. In general, the inference from “The ideal has feature F” to “What you do should have feature F” is invalid, and in some circumstances the premise isn’t even a particularly strong reason to believe the conclusion. I’ve seen several philosophers miss this point, and others who appreciate it often ignore the fact that they’re working over material that was well worked out by economists several decades ago.
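The second-best point is easy to make vivid with a toy optimisation problem. Everything below is my own illustrative construction, not from Lipsey and Lancaster: two parameters interact, the unconstrained optimum sets both to 0, but once x is stuck at 1, holding y at its “ideal” value of 0 is no longer best.

```python
# Toy second-best example (my construction, not from Lipsey and Lancaster).
def loss(x, y):
    # The cross-term x*y makes the parameters interact; the unconstrained
    # optimum (minimum loss) is at x = y = 0.
    return x**2 + y**2 + x*y

# Suppose the constraint is that x is stuck at 1.
naive = loss(1, 0)           # keep y at its unconstrained-ideal value
second_best = loss(1, -0.5)  # re-optimise y given the constraint on x

assert naive == 1.0
assert second_best == 0.75   # moving y *away* from its ideal value does better
```

Given x = 1, the loss in y is 1 + y² + y, which is minimised at y = −0.5 rather than at the unconstrained ideal y = 0 – precisely the structure of Lipsey and Lancaster’s result.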

So that’s my suggestion, but I’m sure you can come up with better.

UPDATE: I’ll keep a list of the suggestions here, with the suggestor in parentheses after the suggestion.


What is the Equal Weight View of Disagreement?

Here are three quotes from Adam Elga’s paper Reflection and Disagreement, which I think are broadly indicative of how Adam intends to understand the Equal Weight View of disagreement.

When you count an advisor as an epistemic peer, you should give her conclusions the same weight as your own.

[T]he equal-weight view entails that one should weigh equally the opinions of those one counts as peers, even if there are many such people.

It [i.e., the Equal Weight View] says that one should defer to an advisor in proportion to one’s prior conditional probability that the advisor would be correct.

Let’s focus on the last of these, though I think you can make the same point about all of the quotes. Consider the following situation.

Prior to thinking about a question, S thinks that she and T, her peer, are equally likely to come to the right answer. S gets evidence E, and considers whether p. She concludes that p is indeed true. Her friend T reaches the same conclusion, on the same evidence. This is a horrible mistake on both their parts. The evidence in fact strongly supports ¬p, and p is indeed false. Given the Equal Weight View, what should S do?

A literal reading of the last quote says that she should believe p. After all, there are two people, S and T, and her prior judgment was that each of them was equally likely to be right. So she should ‘defer’ to the average position between the two of them. But since they agree, that means she should do what they both say, i.e. believe p.

But this seems crazy. It was, by hypothesis, irrational for S to believe p on the basis of E in the first place. A literal-minded reading of the Equal Weight View suggests that she can ‘launder’ her irrational beliefs, and have them come out as something she should believe, by simply considering herself an advisor.

Let’s note an even stranger consequence of this way of taking the Equal Weight View. Assume S finds out that T did not in fact make this judgment. That’s because T simply hasn’t considered the question of whether p is true. The only one of her ‘peers’ who has considered that question, on the basis of E, is S herself. Again, a literal-minded reading of the Equal Weight View suggests that she should now believe what she actually believes. But that’s wrong; her belief is both false and irrational, and she shouldn’t hold it.

I actually don’t think this is a deep problem for the Equal Weight View. As my repeated references to ‘a literal-minded reading’ of the view have suggested, it seems that the objection here is based on a misinterpretation of what was intended. But I think it’s interesting to note for two reasons. One is that the misinterpretation isn’t so bizarre that it shouldn’t be expressly addressed by proponents of the Equal Weight View. The other is that it isn’t obvious what the right interpretation is. I can think of two very different ways out of the problem here.

One way out, the one I suggest for proponents of the Equal Weight View in Do Judgments Screen Evidence, is to restrict the principle to agents who are making rational judgments. The Equal Weight View then doesn’t have anything to say about agents who start by making an irrational judgment themselves.

The other way out is to stress an analogy with other modals in consequents of conditionals. So Humeans sometimes say things like “If you desire an end, you should desire the means to it.” That sounds false in some cases. If I desire to rob a bank, I shouldn’t desire the means to rob a bank – I should change my desires. But there presumably is a true reading of the means-end conditional.

One way to make that conditional true is to take the ‘should’ to have wide scope, and read the conditional as “You should make this conditional true: if you desire the end, you desire the means.” Perhaps the Equal Weight View is best framed the following way. You should make this conditional true: “If the average of your peers’ judgment is J, your judgment is J.” If you don’t have any peers, this conditional is trivial, so the Equal Weight View doesn’t rule anything out, or ratify any choice.

Another way to make the means-end conditional true is to take the modal in the consequent to be somehow or other restricted by the antecedent. (Similar moves are suggested by Thony Gillies in papers like these two.) I don’t quite know how to fill out the details of this, so I’ll leave it for another day.

So I think there are three things that Equal Weight View theorists could do to avoid the problem I started with. I don’t know which of them is best though.

Pragmatics and Justification

In Can We Do Without Pragmatic Encroachment?, I argued that the pragmatic aspects of epistemic justification are explained in terms of the pragmatic aspects of belief. As I’ve mentioned before here, I no longer think that is entirely accurate. Here is one small respect in which it isn’t true.

The picture in that paper was that having a justified belief is simply a matter of two things.

  1. Having a credence that is high enough to count as a belief in the situation the agent is in.
  2. That credence being justified.

But that now seems to me to be too demanding a standard for epistemic justification. Let’s say that S’s evidence justifies a credence of 0.941 in p. But S’s actual credence in p is 0.943. And let’s say that as long as S’s credence in p is higher than 0.9, she wouldn’t make any different decisions in virtue of what her credence in p is. Then it seems to me that (a) she believes that p, and (b) her belief in p is very reasonable. After all, her credence is only off by a very small amount.

The numbers in the previous paragraph are ludicrously precise, but I think the basic idea is clear and correct. If S’s credence in p is close to ideal, and she believes p, then it seems that her belief is highly justified. Perhaps it is better justified the closer her credence is to the correct credence (at least ceteris paribus) but near to correct suffices for high justification. And since ‘justified’ is a comparative adjective, it seems plausible to say that it is context-sensitive, and in most contexts, high justification is justification enough. So the simple two step account of justification can’t be right.

The problem is that it isn’t quite as easy to fix the account as it might look. We could say that S has a justified belief that p iff the following conditions are met:

  1. S has a high enough credence in p to count as believing it.
  2. S’s credence in p is close to the correct credence to have in p given her evidence.

But I think that won’t quite work, for the following reason. Go back to the case where S’s evidence justifies a credence of 0.941 in p, but S’s actual credence in p is 0.943. And assume all of S’s other credences (that are relevant to current decisions) are perfectly in order. Now S has to make a decision where the right thing to do if p’s probability is greater than 0.942 is to do A, but otherwise the right thing to do is B. In this case, it seems that both the conditions are met, so S has a justified belief that p. But this is wrong. She shouldn’t believe p. Indeed, she shouldn’t do the thing that’s best to do given p, namely A. And this isn’t because she has any other irrational credences; ex hypothesi she doesn’t.
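The swing is easy to make concrete with a toy decision problem. The payoff table below is my own construction, chosen so that the threshold falls at exactly 0.942: act A pays 1 iff p, while act B pays a safe 0.942 either way.

```python
# Toy decision problem (my construction) with the threshold at 0.942.
def expected_utility(credence_in_p, utility_if_p, utility_if_not_p):
    return credence_in_p * utility_if_p + (1 - credence_in_p) * utility_if_not_p

def best_act(credence_in_p):
    eu_a = expected_utility(credence_in_p, 1.0, 0.0)      # A: pays 1 if p, else 0
    eu_b = expected_utility(credence_in_p, 0.942, 0.942)  # B: pays 0.942 either way
    return 'A' if eu_a > eu_b else 'B'

# S's actual credence recommends A; the evidentially ideal credence
# recommends B, even though the two differ by only 0.002.
assert best_act(0.943) == 'A'
assert best_act(0.941) == 'B'
```

So both conditions of the revised account can be met while S’s slightly-off credence still issues in the wrong choice, which is why ‘close enough’ has to be sensitive to the decisions at hand.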

The conclusion seems clear enough. We want to say that S has a justified belief in p only if her credence in p is close enough to the ideal credence. But ‘close enough’ is itself sensitive to what kind of choices S has to make. If the difference between her actual credence in p and the ideal credence in p is enough to swing a decision that she has to make, then the credence isn’t close enough. So the account of justification has to include an extra, pragmatically sensitive component.

Put another way, the plan behind the earlier paper was to isolate the pragmatic component of justification to point 1 of the two point account of justified belief. It now seems to me that that won’t work. The ‘close’ in point 2 is interest-relative in a way that undermines the big idea of the project.