Martians and the Gruesome

One of my quirkier philosophical views is that the most pressing question in metaphysics, and perhaps all of philosophy, is how to distinguish between disjunctive and non-disjunctive predicates in the special sciences. This might look like a relatively technical problem of no interest to anyone. But I suspect the question matters to all sorts of issues, as well as being one of those unhappy problems that no one seems even to have the beginnings of a solution to. One of the issues it matters to was raised by "Brad DeLong":http://delong.typepad.com/sdj/2007/01/the_meddling_id.html yesterday. He was wondering why John Campbell might accept the following two claims.

* There is an important and unbridgeable gulf between our notions of physical causation and our notions of psychological causation.
* Martian physicists–intelligences vast, cool, and unsympathetic with no notions of human psychology or psychological causation–could not understand why, could not put their finger on physical variables and factors explaining why, the fifty or so of us assemble in the Seaborg Room Monday at lunch time during the spring semester.

I don’t know why Campbell accepts these claims. And I certainly don’t want to accept them. But I do know of one good reason to accept them, one that worries me no end some days. The short version involves the conjunction of the following two claims.

* Understanding a phenomenon involves being able to explain it in relatively broad, but non-disjunctive, terms.
* Just what terms are non-disjunctive might not be knowable to someone who only knows what the Martian physicists know, namely the microphysics of the universe.

The long version is below the fold. (This is cross-posted to CT, so I’ve filled in more of the background than I usually would here.)
Continue reading

Belief and Probability

In “this paper”:http://brian.weatherson.org/cwdwpe.pdf, I offered the following analysis of belief.

bq. S believes that p iff for any* A, B, S prefers A to B simpliciter iff S prefers A to B conditional on p.

The * on any is to note that the quantifier is restricted in all sorts of ways. One of the restrictions is sensitive to S’s interests, so this becomes a version of interest-relative invariantism about belief. And if we assume that belief is required for knowledge, we get (with some other not too controversial premises) interest-relative invariantism about knowledge.

I now think this wasn’t quite the right analysis. But I don’t (yet!) want to take back any of the claims about the restrictions on any. Rather, I think I made a mistake in forcing everything into the mold of preference. What I should have said was something like the following.

bq. S believes that p iff for any* issue, S’s attitudes simpliciter and her attitudes conditional on p match.

Here are some issues, in the relevant sense of issue. (They may be the only kind, though I’m not quite ready to commit to that.)

* Whether to prefer A to B
* Whether to believe q
* What the probability of q is

Previously I’d tried to force the second issue into a question about preferences. But I couldn’t find a way to force in the third issue as well, so I decided to retreat and try framing everything in terms of issues.
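For concreteness, here is a toy sketch in Python of the issue-matching test. It is only an illustration: the Agent class, the believes function, and the way issues and verdicts are represented are all invented for the example, not taken from the paper.

bc. # Toy model of the issue-matching test for belief (invented names, not the paper's formalism).
from typing import Callable, Hashable, Iterable
class Agent:
    def __init__(self, answer: Callable, answer_given: Callable):
        self.answer = answer              # answer(issue) -> the agent's unconditional verdict
        self.answer_given = answer_given  # answer_given(issue, p) -> her verdict conditional on p
def believes(agent: Agent, p: Hashable, live_issues: Iterable[Hashable]) -> bool:
    # S believes p iff her unconditional and her conditional-on-p verdicts match
    # on every issue in the (interest-relative, suitably restricted) set of live issues.
    return all(agent.answer(i) == agent.answer_given(i, p) for i in live_issues)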

Adding questions about probability to the list of issues allows me to solve a bunch of tricky problems. It is a widely acknowledged point that if we have purely probabilistic grounds for being confident that p, we do not take ourselves to (unconditionally) believe that p, or know that p. On the other hand, it hardly seems plausible that we have to assign p probability 1 before we can believe or know it. Here is how I’d slide between the issues.

If I come to be confident in p for purely probabilistic reasons (e.g. p is the proposition that a particular lottery ticket will lose, and I know the low probability that that ticket will win) then the issue of p’s probability is live. Since the probability of p conditional on p is 1, but the probability of p is not 1, I don’t believe that p. More generally, when the probability of p is a salient issue to me, I only believe p if I assign p probability 1.

However, when p’s probability is not a live issue, I can believe that p is true even though I (tacitly) know that its probability is less than 1. That’s how I can know where my car is, even though there is some non-zero probability that it has been stolen/turned into a statue of Pegasus by weird quantum effects. Similarly, I can know that the addicted gambler will end up impoverished, though if pushed I would also confess to knowing there is some (vanishingly small) chance of his winning it big.
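Continuing the toy sketch above (it reuses the Agent and believes helpers just defined, and the particular numbers are invented for illustration), here is how the lottery case and the car case come apart.

bc. # The lottery case and the car case, reusing Agent and believes from the sketch above.
PR = {("probability", "my ticket loses"): 0.9999}    # unconditional credences (invented numbers)
def answer(issue):
    return PR.get(issue, "same verdict")             # dummy verdict on non-probability issues
def answer_given(issue, p):
    kind, content = issue
    if kind == "probability" and content == p:
        return 1.0                                   # the probability of p conditional on p is 1
    return PR.get(issue, "same verdict")
me = Agent(answer, answer_given)
# Lottery: buying the ticket makes the probability of p a live issue, so the test fails.
print(believes(me, "my ticket loses", [("probability", "my ticket loses")]))         # False
# Car: the probability of p isn't live, only (say) some preference question is, so the test passes.
print(believes(me, "my car is in the driveway", [("preference", "walk or drive")]))  # True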

Relativism and Meta-Semantics

I’m going to be commenting on Michael Glanzberg’s "Context, Content and Relativism":http://philosophy.ucdavis.edu/glanzberg/relativismrev.pdf (PDF) at "Bellingham":http://myweb.facstaff.wwu.edu/nmarkos/BSPC/BSPC7/BSPC7.htm. The paper is very good, as you’d expect, but I think one of the arguments he is responding to is interestingly different to the argument that I, and some others, have made. (This isn’t to say that no one has made the argument Michael is responding to, of course. There are lots of relativists out there!)
Continue reading

Counterexamples to Lewis on Value

In “Dispositional Theories of Value”, Lewis endorses the following two claims.

* Something is valuable iff we value it under circumstances of ideal imaginative acquaintance.
* We value something iff we desire to desire it.

Here are a couple of counterexamples to this pair of theses. I don’t know whether these are at all original; I’m not very familiar with this literature.

Some people have many thwarted desires; others don’t. I value being one of the ones who don’t. Or at least I think it is valuable to not have many thwarted desires, so if Lewis’s first thesis is right, then I would value this under ideal circumstances.

But I don’t desire to desire this. To be sure, I *do* desire to not have thwarted desires. But I don’t regard this status of mine, desiring to not have thwarted desires, as something I have pro-attitudes towards. It seems to me constitutive of having desires that one desires to not have many of them thwarted, and I’m essentially a thing that has desires. So necessarily I desire to not have thwarted desires, and hence if I desired that I desire to not have thwarted desires, I’d be desiring something that I recognise as a necessary truth. That seems like a very odd attitude to have. At any rate, I don’t have it.

So this is a value, or at least something valuable, that I don’t desire to desire, and that I wouldn’t desire to desire if my circumstances were more ideal.

Perhaps there is a gap in that argument. I said it is essential to me that I desire not to have many thwarted desires. But I only have that property if I _exist_, and I might not exist. (Indeed, barring a dramatic medical revolution I won’t exist one of these centuries.) Maybe my desire to exist is a desire to desire that I not have many thwarted desires. I don’t really think it is. When I introspect I don’t see any second-order desire to desire to not have thwarted desires, but maybe I’m just not looking closely enough.

Still, considerations of existence and non-existence suggest a second counterexample to Lewis’s theory. Poor Billy is slowly and painfully dying. He believes (rightly or wrongly) that this protracted death is an affront to his dignity, and because he so values his dignity he wishes he were already dead.

Does Billy desire to desire dignity? No. He does desire dignity, but he wishes that he didn’t desire it, because he wishes that he had no desires at all. So Billy values something he doesn’t desire to desire.

Note that I’m not saying that anyone who desires not to exist thereby cannot reasonably desire anything that entails existence. That would be a most implausible claim about desire. (Or so I say; there are some who deny this, or something slightly weaker than it.) Rather, I’m just making it a condition of the case that Billy’s state is so deplorable by his own lights that as a matter of fact he does not desire anything that entails living, such as the state of desiring dignity. That seems to me compatible with valuing dignity, so the second-order desire analysis of valuing fails.

Epistemic Liberalism and Luminosity

In the latest Phil Perspectives, "Roger White":http://philosophy.fas.nyu.edu/object/rogerwhite has a paper, "Epistemic Permissiveness":http://philosophy.fas.nyu.edu/docs/IO/1180/EP.pdf, arguing against what he calls epistemic permissiveness, the view that in some evidential states there are multiple doxastic attitudes that are epistemically justified and rational. I call this epistemic liberalism, because at least in America liberal is a nice word. (‘In America’ of course functions something like a negation operator.) I think there are a few things we liberals can say back to Roger’s interesting arguments. In particular, I think a liberalism that allows that there are epistemically better and worse responses among the rational responses, just as we think that among the morally permissible actions some are morally better and worse, has some resources to deploy against his challenges. But for now I want to take a different tack and defend liberalism directly.
Continue reading

A Puzzle About Defining Theoretical Terms

There seems to be a fairly serious problem with the theory set out in Lewis’s “How to Define Theoretical Terms”, which I’ll lay out here. I might be misinterpreting Lewis, and if so I’d like to know. And I’m probably just repeating something that has been said elsewhere already, and if that’s so I’d really like to know. So comments from Lewis(iana) experts would be much appreciated.
Continue reading

A Puzzle for Subject-Sensitive Invariantism

When I was at Rutgers the weekend before last I was talking to Sam Cumming about, among other things, subject-sensitive invariantism. Sam mentioned that there seem to be some interesting difficulties in generalising SSI so it is a theory of group knowledge as well as individual knowledge. These all seemed like excellent concerns, and I didn’t have much to say about them. (On my theory the problem of explaining what group knowledge is ‘reduces’ to the problem of explaining what group preferences are, which may not be progress.) I’ll leave Sam to say what the problems he’s noticed are, but I thought I’d note here that one of them seems to be a complication even for people who merely care about individual knowledge. Here’s the problem.

S has a lot of evidence that p, and p is in fact true. She doesn’t think much turns on p, so she accepts p. We might imagine that were p of little importance to her, she’d actually know that p. But it turns out that p is really important to her, so by SSI standards she doesn’t know that p.

S knows that p entails q, and she infers q from p. All her evidence for p is evidence that q. And q really isn’t important to her, or at least not that important. (Presumably q is evidence that p, so q is of some importance, but not that important.) Could she thereby come to know that q?
Continue reading

A Puzzle about Moral Uncertainty, and its solution

Here’s an interesting asymmetry between reasoning under moral uncertainty and reasoning under factual uncertainty. Or, at least, an interesting prima facie asymmetry, since there might be a simple explanation once we set everything out clearly.

The following situation is reasonably common in reasoning under uncertainty. We have three choices, A, B and C. Which of these is best to do depends on whether p or q is true, and we’re certain that exactly one of them is true. If p is true, the best outcome will arise from doing A. If q is true, the best outcome will arise from doing C. Yet despite this, the thing to do is B.

Here’s an example. I’m in Vegas, thinking about betting on a (playoff) football game. The teams seem fairly even, and there is no points spread. As usual, to bet on a team I have to risk $55 to win $50. Fortunately, I have $55 in my pocket. Let A = I bet on the home team, C = I bet on the away team, and B = I keep my money in my pocket. Let p = the home team wins, and q = the away team wins. (Given it’s a playoff game, we can be practically certain that one of these is true.) So if p, I’ll be best off if A, and if q, I’ll be best off if C. Still, the thing to do is B, since both A and C have negative expected value.
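For concreteness, here is the expected value arithmetic behind that verdict, as a small Python sketch. It assumes the pricing just described (risk $55 to win $50) and a 50/50 game, which are just stipulations of the example.

bc. # Expected value of the three options in the Vegas example (a sketch with the example's own numbers).
p_home_wins = 0.5                # the teams are even and there is no points spread
stake, net_win = 55, 50          # risk $55 to win $50
ev_A = p_home_wins * net_win - (1 - p_home_wins) * stake   # A: bet on the home team
ev_C = (1 - p_home_wins) * net_win - p_home_wins * stake   # C: bet on the away team
ev_B = 0.0                                                 # B: keep the $55 in my pocket
print(ev_A, ev_C, ev_B)          # -2.5 -2.5 0.0
# A is best if p (home wins) and C is best if q (away wins),
# but B maximises expected value, so B is the thing to do.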

Now the puzzle is that this kind of situation doesn’t seem to arise for moral uncertainty.
Continue reading

Intuitions

This is a very badly worked out laundry list of ideas for my paper on intuitions. Most of it falls under the category of responses to the "Weinberg, Nichols, and Stich":http://ruccs.rutgers.edu/ArchiveFolder/Research%20Group/Publications/NEI/NEIPT.html experiments, but some of it is probably just repetition.
Continue reading
