Impossible Stories

Wo makes several good points about my imaginative resistance paper. It will take me a while to respond to all of them, but I just want to respond to one point for now. Wo suggests that my impossible time travel stories are not really impossible, they are just taking place in branching time. This is a good objection. I have to say more than I’ve said to show these really are impossible stories that don’t generate imaginative resistance.

One point is that the Restaurant at the End of the Universe wasn't just supposed to be an impossible time travel story. It was supposed to be a story that was internally incoherent. I have my doubts that one could watch the end of the universe even once. Wouldn't you be seeing it after it happened, which is after the universe ended?

I don’t have a full story here, but I think that even without the time travel component (you know, the going back and seeing it again from the same spot without running into yourself) there’s an impossibility here. And I think the impossibility arises from combinatorialism run amok. We can imagine a certain event, say the end of the universe. We can imagine ourselves watching a different event, say a lunar eclipse. So we can imagine watching the end of the universe, by substituting the first event in place of the lunar eclipse. And voila, impossibility in imagination!

Here’s another try at an impossible story that doesn’t generate imaginative resistance. At least, there’s no alethic puzzle. It’s pretty clearly true in the story that quadragons exist. You’ll have to read it to find out what a quadragon is, but suffice to say, it’s impossible.

The story is long, so I put it in the expanded section. I also don’t want to claim any virtues for the quality of the writing. If I ever use it I’ll try hamming it up a bit more because it’s meant to be a parody of cartoon superhero stories. (Whether this kind of parody is cheating, a point that Wo alludes to at the end of his post, is hard to say. I should try writing the story straight.)

Modal Parts

I don’t think I have a lot of way-outside-the-mainstream philosophical beliefs; in fact I think I have considerably fewer of them than I’d like. Probably the most extreme of my positions is that I believe in modal parts.

The idea behind the doctrine of modal parts is that for any object o and class of worlds W such that o exists in every member of W, o has a part o’ that exists in every world in W and in no world outside of it. (The obvious analogy is to temporal parts. This analogy will get pressed a lot as we continue.) This isn’t quite a strong enough position, because this makes it sound as if actualism implies modal parts, and modal parts is meant to be a much more outrageous position. The trick, as with getting temporal parts and presentism to sing in harmony, is to restate the doctrine using operators, or something like them. Here’s what I take the position to be.

For any class of worlds W, let pW be a proposition that is true at all and only the worlds in W. The doctrine is that for any object o and class of worlds W such that Necessarily, if pW then o exists, o has a part o’ such that Necessarily, o’ exists iff pW.
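For what it's worth, here is a rough symbolisation of that formulation. The notation (E for existence, P for parthood) is mine, not part of the statement above:

```latex
% Rough symbolisation of the operator version of modal parts.
% E(x): x exists; P(x,y): x is a part of y; p_W as defined above.
% The notation is mine, not part of the original statement.
\forall W\,\forall o\,\Big[\, \Box\big(p_W \rightarrow E(o)\big)
   \;\rightarrow\; \exists o'\,\big(\, P(o',o) \,\wedge\,
   \Box\big(E(o') \leftrightarrow p_W\big) \,\big) \Big]
```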

If modal realism is true, this is equivalent to the earlier statement of the doctrine of modal parts. However, even if modal realism is not true, this doctrine makes a striking claim about the existence of objects that are ‘world-bound’ relative to whatever our actualist takes worlds to be.

Given that I hold such a view, one might wonder what my arguments for it are. I was wondering just that this morning, and it seemed the arguments for it probably aren’t as bad as orthodoxy would have you think, but probably aren’t as good as I’d like.

The motivation for believing in modal parts is a generalised suspicion of extended simples. But suspicion is not an argument.

One real argument would be the problem of contingent intrinsics. In Plurality that’s Lewis’s main argument for (something like) the doctrine of modal parts. Stephen Yablo has argued that this argument won’t extend to objects that don’t vary in intrinsic property between worlds in W. In general I’ve never been strongly moved by arguments for parthood from intrinsicness, so I don’t want to rest too much weight on this.

Other arguments come from analogy with arguments for temporal parts. Since Ted Sider has collected so many of those in Four-Dimensionalism, I should just try stealing the best. (Ah, the advantages of theft over honest toil.)

One nice use for modal parts, I think probably the best use, is in resolving some of the paradoxes about coincidence and constitution. I think the modal partser has by far the best story to tell here. On the one hand, she can say that the statue and the lump are distinct fusions of modal parts, and hence respect the argument from Leibniz’s Law that they are not identical. On the other, she can say that there’s a good sense in which there is only one object here, because they both exist in virtue of having a common modal part. I’ve never seen another story about the paradoxes that gets nearly as close to capturing ordinary intuitions as the modal parts story.

Ted’s main argument for temporal parts, the argument from vagueness, also extends across. (The following sketch is incomplete at every step. I’ll try one day to write it up properly and see how the arguments carry across.) Assume that the doctrine of modal parts is not generally true. Still, it seems that for some o, W there will be an o’ such that Necessarily, o’ exists iff pW. Any principle about when such an o’ exists (other than the no parts claim that o’ exists iff o’=o and W is the class of worlds at which o exists) will be vague. But that will imply that it is vague how many things there are, which is intolerable. So we should be ‘universalists’ about modal parts – for any o and W in which o exists, o has a part o’ in W only.

One class of arguments for temporal parts, however, does not carry across: the arguments from time travel. I assume that genuine travel between worlds is a conceptual impossibility, even for a modal realist. So we can't run the argument that modal travel would require modal parts, and that modal travel is possible, so modal parts exist: premise 2 is false. This is a disanalogy with the argument for temporal parts, and perhaps a fatal one.

I suspect the vagueness argument will turn out to have holes in it when the details are spelled out. I worry that the nihilist position may turn out to be quite plausible. And I worry that there will be no way to argue from the vagueness of an intermediate view to any vagueness in how many things there are. So the arguments from constitution may have to do all the work. I think they probably can, but it’s not the strongest foundation for a metaphysical theory.

Evidence and Knowledge

I’ve been thinking again about Timothy Williamson’s idea that our evidence is what we know, or as he puts it, E=K. This never struck me as particularly persuasive. At first I thought it was wrong in every possible way. E=K implies that evidence is always propositional, that it is always true, and that it is usually massively redundant. The last point refers to the fact that when I know p I usually also know lots of things that imply p, such as ((p or q) and ~q) for any q whose negation I know, as well as many things that inductively support p. All these things will count as evidence by Williamson’s lights, while I think evidence is much sparser. I’ve mellowed a bit in my old age – I now think that the first of these claims is right, evidence is propositional. But I’m sceptical (at best) that evidence has to be true, and rather confident that evidence (at least in the most natural sense) is not so redundant.

Here is what persuaded me that evidence should be propositional. Evidence is the input to thought. Thought is content manipulation. Hence evidence must be contentful, else it couldn’t be manipulated by a content manipulating machine. But to say it is contentful just is to say it is propositional. I’m not sure how close this is to the arguments Williamson provides. It is similar in spirit to the arguments from the role of evidence in inference to the best explanation, but very different from the syntactically motivated arguments concerning ‘because’ clauses.

Here’s what I think evidence is, at least for creatures like us. (I think E=K is intended to be a conceptual and/or analytic truth. What I say about evidence is contingently true if I’m lucky.) It’s the output of our reliable modules. By ‘module’ here I mean just what Fodor means by ‘module’ – it’s an informationally encapsulated processor that has a proprietary input (which could in principle be anything but is usually tightly constrained for particular modules) and delivers propositions as output. Ideally I’d like to think modules are neurologically local – I’d be a touch suspicious of any claim that there is a module whose processing work was distributed over most of the brain, some of the spinal cord and the sensory receptors in my left leg. I think locality is an important part of ‘module realism’, but that’s a story for another day.

There’s only one other thing about Fodor’s conception of modules that will be important here. Fodor, unlike some contemporary modularity theorists, doesn’t think that there is a ‘thought module’. What modules do is deliver propositions to a central processor, but that processor is not informationally encapsulated and hence is not a module. (I haven’t gone back to check the details, but my impression was that Fodor thought information only got processed once before hitting the central processor. Whether that’s true or not isn’t I think central to the picture I’m going to sketch, though it would make it cleaner if it were true.)

A proposition is part of our evidence iff it is an output of a reliable module. The notion of reliability here is purely frequentist – a module is reliable iff it usually outputs truths. This theory allows that we have mistaken evidence. I could change that – I could say that evidence is the true output of reliable modules – but I’m inclined to think that’s mistaken.
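To illustrate the frequentist reading, here's a minimal toy sketch. Everything in it – the encoding of modules as track records, the 0.5 threshold for 'usually' – is a stipulation of mine for illustration, not part of the theory:

```python
# Toy sketch of the frequentist notion of module reliability.
# A module's track record is a list of (proposition, was_true) pairs.
# The 0.5 threshold for "usually outputs truths" is my stipulation.

def is_reliable(track_record, threshold=0.5):
    """A module is reliable iff most of its past outputs were true."""
    if not track_record:
        return False
    truths = sum(1 for _, was_true in track_record if was_true)
    return truths / len(track_record) > threshold

def evidence(modules):
    """A proposition is evidence iff it is output by a reliable module.
    Outputs of reliable modules count even when false, so this
    deliberately allows mistaken evidence."""
    props = []
    for outputs, track_record in modules:
        if is_reliable(track_record):
            props.extend(outputs)
    return props

# Example: a mostly-accurate visual module whose current output is false
# (as in the Checkershadow illusion) still contributes evidence.
vision = (["square A is darker than square B"],
          [("p1", True), ("p2", True), ("p3", True), ("p4", False)])
print(evidence([vision]))  # the illusory proposition counts as evidence
```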

The reason for that is a reason that Williamson discusses (on pages 198-200), though not I think satisfactorily. Consider a regular case of illusion, such as the Checkershadow illusion. In the Checkershadow, my intuition is that it is part of my evidence that square A is darker than square B. (This sentence is ambiguous. If we take the picture to represent a real situation, then it’s true that square A is represented as being darker than square B. What I mean is that my evidence includes the false claim that the representation of square A is darker than the representation of square B.) If evidence is always true, this cannot be the case.

Williamson’s way of saving the intuition here is to say that our evidence is really that square A looks darker than square B. There are three problems with that.

One, that Williamson acknowledges, is that it implies that a primitive creature that lacks the concept LOOKS doesn’t get any evidence at all to the effect that square A is darker than square B. It cannot get the evidence that A is darker, because there is no such evidence, nor that square A looks darker, because it cannot process LOOKS. But surely it gets something like evidence to this effect.

A second is that this gets the phenomenology all wrong. We don’t get some evidence about visual appearance and then do something with it, like use it to conclude that A is darker than B. Rather, our evidence just is that A is darker than B.

A third problem, one that connects to the other odd feature of Williamson’s account of evidence, is that it seems ad hoc to say that in the checkershadow illusion our evidence is just about looks, but in a normal case, like looking at a book and coming to know there’s a book there, our evidence is about the external world. One way out of this would be to say that our evidence is always about looks. This potentially leads to scepticism, or at least to blocking the way out of scepticism that externalists about perception (like Williamson) want to endorse. Williamson doesn’t say the ad hoc thing, but nor does he fall back into scepticism. Rather, he thinks that in both cases we get evidence about looks, but in the good case we also get evidence that there is a book in front of us. This looks like too much evidence. If our evidence is that it looks like there is a book in front of me, then presumably the belief that there is a book in front of me is one of my conclusions, not part of my evidence.

(Is the dichotomy here justified? Some might think that in chain arguments a proposition can be first conclusion second premise, so a belief might be both a conclusion and a piece of evidence. I think matters are tricky here. On the one hand we do talk as if intermediate conclusions are part of our evidence for later inferences. On the other, it’s possible to construe all references to intermediate conclusions in arguments as shorthand for references to the real grounds on which they rest. I’m not sure that reflection at this level of generality can tell us much, so I’d rather move on to particular examples. Just how many roles a proposition can play in inference, whether it can be something we conclude as well as part of our evidence for example, is a hard question, one that in my view probably can’t be answered by purely a priori reflection.)

This connects to the concern about evidence being massively redundant on Williamson’s view. Williamson argues against the view that all justified true beliefs are part of our evidence using the following example.

Suppose that balls are drawn from a bag, with replacement. In order to avoid issues about the present truth-values of statements about the future, assume that someone else has already made the draws; I watch them on film. For a suitable number n, the following situation can arise. I have seen draws 1 to n; each was red (produced a red ball). I have not yet seen draw n+1. I reason probabilistically, and form a justified belief that draw n+1 was red too. My belief is in fact true. But I do not know that draw n+1 was red. Consider two false hypotheses:

h: Draws 1 to n were red; draw n+1 was black.
h': Draw 1 was black; draws 2 to n+1 were red.

It is natural to say that h is consistent with my evidence and that h' is not. (Pp 200-1, my emphasis, notation slightly altered.)

This is indeed natural, but look how much work is done by the line I have highlighted. If Williamson conceded that he knew that draw n+1 was red, then he couldn’t say the ‘natural’ thing he says at the end. But surely in some very similar cases we do know the equivalent proposition. Take any case where, on the basis of evidence p (where p is something that all parties would consider evidence), we inductively infer q and thereby come to know q. Unless inductive scepticism is true, such a situation is surely possible. But in such a situation it would be very strange to say that ~q is inconsistent with my evidence.

For a concrete case, consider my favourite piece of inductive reasoning: inferring how a movie will end on its Friday night screening from how it ended on its Thursday night screening. (It’s my favourite inductive inference because it’s such a nice case of inductive reasoning from a single data point. And I do so wish it were more often true that inductive reasoning from a single data point were epistemically sound.) I watch the Thursday night showing of the movie, and see that the hero dies in the final scene. I conclude, fallibly but pretty reliably, that when my friend watches the Friday night showing of the movie, the hero will die in the final scene. This is fallible, since some movies have multiple endings. (For example, 28 Days Later.) But it’s very reasonable, and I think constitutes knowledge.

Just as we get a disanalogy between h and h' in Williamson’s example, we get a similar disanalogy here.

h: The hero died in the final scene on Thursday night but not Friday night.
h': The hero died in the final scene on Friday night but not Thursday night.

I know that both h and h' are false, but it’s very natural to say that h is consistent with my evidence but h' is not. I conclude that Williamson is (in one sense) too generous in what he counts as evidence.
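To make the contrast mechanical, here's a toy sketch in which evidence is a set of signed atomic claims and consistency is just the absence of direct contradiction. The encoding is entirely my own stipulation for illustration:

```python
# Toy sketch: consistency of a hypothesis with an evidence set,
# where both are sets of (atomic claim, truth value) pairs.
# The encoding is my stipulation for illustration.

def consistent(hypothesis, evidence_set):
    """A hypothesis is consistent with the evidence iff it never
    asserts the negation of something in the evidence."""
    return not any((atom, not val) in evidence_set
                   for atom, val in hypothesis)

# My evidence, on the sparse view: just what I saw on Thursday.
my_evidence = {("hero died Thursday", True)}

h       = {("hero died Thursday", True),  ("hero died Friday", False)}
h_prime = {("hero died Thursday", False), ("hero died Friday", True)}

print(consistent(h, my_evidence))        # True: h is consistent
print(consistent(h_prime, my_evidence))  # False: h' contradicts it

# On E=K, my inductive knowledge that the hero dies Friday is also
# evidence, and then h comes out inconsistent too, erasing the
# natural disanalogy between h and h'.
ek_evidence = my_evidence | {("hero died Friday", True)}
print(consistent(h, ek_evidence))        # False
```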

(Rest of post eaten by WordPress. See this paper for more details.)

Multiple Causation

I was reading Karen Bennett’s paper on the exclusion argument and I realised halfway through that I didn’t really understand some of the concepts that are commonly used in this debate. Here’s the distinction I realised I don’t think I understand.

There’s meant to be an important difference between joint causation and overdetermination. Here’s a couple of simple cases to bring out the difference.

A and B shoot at V, each hitting him in the heart at the same time, and each in a way that would be sufficient to kill him instantly. This is overdetermination (I take it!).

A and B throw rocks at V, each of which hits V at the same time and punctures one of V’s lungs. V dies of asphyxiation. I take it this is a case of joint causation – the two throws kill V, though neither would be sufficient to kill him separately.

(Digression. The intuitions about this case differ a bit when we make the times of the throws different. If A’s throw happens in the morning and B’s in the afternoon, then I think B’s throw is the sole cause. End of Digression)
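One way to make the intended contrast explicit is in terms of individual sufficiency: in overdetermination each cause would have sufficed on its own, in joint causation only the causes together suffice. Here's a toy sketch along those lines (the encoding is a stipulation of mine for illustration):

```python
# Toy sketch: overdetermination vs joint causation via individual
# sufficiency. outcome(causes) says whether the effect occurs given
# a set of actual causes; the encoding is my stipulation.

def classify(causes, outcome):
    if not outcome(causes):
        return "no effect"
    if all(outcome({c}) for c in causes):
        return "overdetermination"   # each cause sufficient alone
    return "joint causation"         # only jointly sufficient

# Shooting case: either bullet to the heart kills on its own.
print(classify({"A shoots", "B shoots"},
               lambda cs: len(cs) >= 1))   # overdetermination

# Rock case: V dies only if both lungs are punctured.
print(classify({"A throws", "B throws"},
               lambda cs: len(cs) >= 2))   # joint causation
```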

OK, so we’ve got the distinction; now let’s get to applying it.

Two rockets are fired at planet V. Planet V has a missile defence system that has one virtue and one vice. The virtue is that whenever a solo rocket comes in, it will intercept the rocket and destroy the threat. The vice is that whenever two rockets come in, the defence system gets confused and fires an interceptor totally the wrong way. So both rockets hit the planet, explode as intended, and destroy the planet. (They are VERY BIG ROCKETS.)

Let F1 be the firing of one of the rockets, and F2 the firing of the other. Let E1 be the explosion of the first rocket’s payload and E2 the explosion of the second rocket’s payload. The payload explosions happen after the rockets are past the point where the interception system would have done its work.

I think that F1 and F2 are joint causes of the destruction of the planet, since neither alone is sufficient to destroy the planet. But E1 and E2 are each causes, perhaps overdetermining causes, of the destruction. This is odd, I think, but perhaps not the worst result ever.

Change the case a little to allow for a third rocket. Call its firing F3. Now are the firings joint causes, or are they each overdetermining causes? Here’s where things get tough.

Karen Bennett’s paper suggests that the following two conditions are necessary for us to have a real case of overdetermination.

(O1) If c1 had occurred and c2 had not, e would (still) have occurred.
(O2) If c2 had occurred and c1 had not, e would (still) have occurred.

How do we extend this to cases where we have three putative causes? Here’s one triple of counterfactuals that we might think indicates overdetermination.

(O1a) If c1 and c2 had occurred and c3 had not, e would (still) have occurred.
(O2a) If c1 and c3 had occurred and c2 had not, e would (still) have occurred.
(O3a) If c2 and c3 had occurred and c1 had not, e would (still) have occurred.

These are all true. But maybe we should generalise (O1) and (O2) in this direction.

(O1b) If c1 had occurred and c2 and c3 had not, e would (still) have occurred.
(O2b) If c2 had occurred and c1 and c3 had not, e would (still) have occurred.
(O3b) If c3 had occurred and c1 and c2 had not, e would (still) have occurred.

These are all false. So overdetermination or joint causation? I have no idea really, and that makes me wonder whether I really understood the two concepts.
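To make the puzzle concrete, here's a toy sketch of the three-rocket case, evaluating the counterfactuals by intervening directly on the set of firings. That interventionist shortcut, and the code, are simplifications of mine:

```python
# Toy model of planet V's defence: a single incoming rocket is
# intercepted; two or more confuse the system and all get through.
# Evaluating counterfactuals by intervening on the set of firings
# is my own simplification.

def planet_destroyed(firings):
    """The planet is destroyed iff at least two rockets get through."""
    return len(firings) >= 2

firings = {"F1", "F2", "F3"}

# (O1a)-(O3a): omit one firing, keep the other two. All true.
print(all(planet_destroyed(firings - {f}) for f in firings))  # True

# (O1b)-(O3b): keep one firing, omit the other two. All false.
print(any(planet_destroyed({f}) for f in firings))            # False
```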

By the way, if (O1) and (O2) are necessary for overdetermination, then we can argue quite easily for compatibilism between causation by parts and causation by wholes. Here’s a homely example to end with.

Invasions cause deaths. In particular they often cause deaths of the invaders. As an example, the Achaean invasion of Troy caused Hector’s death. (I’ll just take for granted that Homer’s tale is true, though of course this is doubtful.) It also seems to be the case that Achilles’s charge caused Hector’s death. Now the charge is not identical to the invasion, though it is a part of it. Let c1 be the invasion, and c2 be the charge. Then (O1) is clearly false. Had the invasion occurred without this action of Achilles, then Hector wouldn’t have died, for none of the other Achaeans could have killed Hector. So here we have a case of two non-identical synchronous causes not amounting to overdetermination. (Does this mean that (O1) is not necessary for overdetermination? Not sure. It might mean it isn’t necessary for bad overdetermination.)

Stars Stars and Stars

It was a comfortable enough flight over that I spent more time sleeping than doing things worthy of note. Surprisingly enough, it was The Iliad that kept making me drowsy. The various battle scenes were fine to stay awake through – though I hadn’t realised just how horribly detailed they could be. The problem was old King Nestor. Nestor’s role, for those who aren’t familiar, is largely to try and calm the tensions in the Achaean camp, and his main weapon is the long-winded speech. It didn’t seem to help much with Agamemnon and Achilles, but it inevitably worked with me. By the middle of the story, all I had to hear was, “Then good King Nestor rose” and I was sound asleep.

Maybe if I hadn’t slept so much I would have figured out more about stars. But maybe not, for I think I was a little stuck just where I was. Here’s the basics. (For background on stars, see Ted Sider’s papers here and here. Be warned though, this is possibly the most esoteric philosophical question I’ve ever thought about, and that’s not a trivial comparison class.)

Ideally, we’d like to define F* as being F minus maximality. But that won’t do for two reasons.

First, it suggests that when F is not maximal, then F* = F. And that isn’t always right. Let F be the property of being human or weighing more than sixteen stone. This is not maximal – it’s not always the case that a large part of something that weighs more than sixteen stone fails to weigh more than sixteen stone. But nor is it the case that F* = F. A large part of me is F*, but it is not F.

Second, this kind of conceptual subtraction is not in general well defined. (I think Lloyd Humberstone has a paper on this somewhere, but I don’t quite know where. Wiggins makes quite a bit of this point in his response to Parfit in the 3rd edition of Sameness and Substance. That was the best part of the new edition I thought.) If F can be analysed as G and H, then F minus G is just H. But where F cannot be so analysed, F minus G is not clearly defined. The problem is that there’s nothing remotely like the unique factorisation theorem for concepts or for properties. What we’d like is that F minus G is the property H such that H and G is equivalent to F. But there are too many such properties H. There are a few ways we might try to discriminate amongst them, mostly using strong appeals to naturalness at crucial points, but as far as I can tell the general problem is hopeless. And I have a suspicion this territory has been worked over in the literature, so I won’t go through it all here.
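The non-uniqueness is easy to exhibit with a worked example. Suppose F entails G and we want an H such that H and G is equivalent to F; here are two distinct candidates (the particular choices are mine):

```latex
% Non-uniqueness of conceptual subtraction, assuming F entails G.
% We want an H with H \wedge G \equiv F; two distinct candidates:
H_1 = F:\qquad F \wedge G \;\equiv\; F
   \quad\text{(since } F \models G\text{)}
\\[4pt]
H_2 = F \vee \neg G:\qquad (F \vee \neg G) \wedge G
   \;\equiv\; (F \wedge G) \vee (\neg G \wedge G)
   \;\equiv\; F \wedge G \;\equiv\; F
% H_1 and H_2 disagree at any world where G fails (and hence F fails),
% so the condition fixes no unique "F minus G".
```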

Let’s try getting to starring more directly. First hypothesis: An F* is something that massively overlaps an F. This gets the right result in most cases, but it doesn’t work in general. In fact, massively overlapping an F is neither necessary nor sufficient for being an F*.

Against necessity: imagine a ball with a small lump on one side. The lump is not massive, but it is big enough to make the ball something other than a sphere. Consider the part of the ball apart from the lump. It is a sphere*, for it has everything necessary for being a sphere other than being maximal, but it does not massively overlap a sphere.

Against sufficiency: Cusack is the heaviest man in Ireland. But not by much. He is only a few ounces heavier than Lenehan. If Cusack’s right hand were suddenly to fall off, Lenehan would be heavier. Let F = being the heaviest man in Ireland, and let a be the mereological difference between Cusack and his right hand. Is a an F*? It seems not. It does not have what it takes to be the heaviest man in Ireland, for it is less heavy than Lenehan. But it does massively overlap an F.

An F* is not just a duplicate of an (actual or possible) F. This is I think a necessary condition for being an F*, but it is not sufficient. The counterexamples to sufficiency are easy. I’m a duplicate of a possible uncle, but I am not an uncle*. Still, we do seem to have a necessary condition here, and that may be worth something.

What we intuitively want for a definition of star is something like the following. A thing a is F* iff the following holds: if a were the right kind of thing to have maximal properties, it would be F. That conditional is not a material conditional, so we can’t easily use it in an analysis. But we can do something.

The kind of thing that’s apt to have maximal properties is just a thing that does have some or other natural maximal property. (I’ll come back to why there has to be a restriction to natural maximal properties here in a bit.) Roughly, then, an F* is something that if it has any natural maximal properties, it is F. Say an object is pretty iff it has any natural maximal properties. Here’s a first pass at trying to define F*, at least for cases where F is reasonably natural.

Another little definition will be helpful. Say F is intrinsic to the Gs iff being F entails being G and the following holds: any bijection between the Gs in w1 and the Gs in w2 that maps objects onto duplicates always maps Fs onto Fs and non-Fs onto non-Fs. (That’s actually a little rough. For some purposes we need to also say that for any collection of objects, the fusion of their images under the bijection is a duplicate of their fusion. I’ll assume that where necessary.) A lot of extrinsic properties are nonetheless intrinsic to the Gs for suitable G. (Every property, I think, is intrinsic to the things – that’s sort of a weak version of the truthmaker principle.) For instance, the property of being the heaviest man in Ireland is intrinsic to the men in Ireland.
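In symbols, the rough version comes to something like this (the notation – G(w) for the set of Gs at w, ≈ for duplication – is mine):

```latex
% Rough symbolisation of "F is intrinsic to the Gs"; notation mine.
% G(w): the set of Gs at world w; x \approx y: x and y are duplicates.
F \text{ is intrinsic to the } G\text{s} \;\iff\;
  (F \models G) \;\wedge\;
  \forall w_1, w_2\;
  \forall \varphi : G(w_1) \to G(w_2) \text{ bijective},\;
  \big(\forall x\, (x \approx \varphi(x))\big) \rightarrow
  \forall x\, \big(Fx \leftrightarrow F\varphi(x)\big)
```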

Here’s my attempt, then, at getting F*. Let G be any natural maximal property such that F is intrinsic to the Gs. Let a be some object in a world w that massively overlaps a pretty object. If a is pretty, then a is F* iff a is F. If not, let b be the pretty object. Let P be the set of pretty objects apart from b in w. Let w’ be a world in which a duplicate of a, call it a’, is pretty. Consider any bijection from the Gs plus a in w onto the Gs in w’. If a is F*, then a’, the image of a under the bijection, should be F. The reason is that a’ is just like a in all respects necessary for being F: it is an intrinsic duplicate, the world is just the right way for a’ to be F, and since a’ is G, and G is a natural maximal property, a’ is pretty, so it is apt to have maximal properties. That much all seems relatively uncontroversial, I think.

Let me now make a bold conjecture. If for all such G all such bijections map a onto an F, then a is an F*. The little argument above was that this is a necessary condition for being F*. The hypothesis is that it’s sufficient. I don’t really have an argument that this is sufficient, which is why it is a particularly bold hypothesis. I do, however, have something that may be a counterexample. In fact I may have two. (An extremely bold hypothesis in that case.)
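Before getting to the possible counterexamples, here is the shape of the conjectured biconditional with the quantifiers made explicit. The symbolisation is mine, and it suppresses the fine print about w and w’ from the previous paragraph:

```latex
% The bold conjecture, roughly symbolised; the notation is mine.
% G ranges over natural maximal properties with F intrinsic to the Gs;
% \varphi ranges over the duplicate-preserving bijections from the
% Gs plus a in w onto the Gs in w', as described in the text.
a \text{ is } F^{*} \;\iff\;
  \forall G\; \forall \varphi\; F\big(\varphi(a)\big)
```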

Let F be the property of being the best hitter in baseball. Right now, I presume, Barry Bonds has that property. Let a be a large part of Barry Bonds, say all of him less one hair. I take it that a is F*, and as far as I can tell, my theory delivers that result. But what of poor c, which is the mereological difference between Barry and both of his hands? I think c is not F* – it is not at all the right kind of thing to be the best hitter in baseball, for it has no hands. But I can’t immediately see a G such that being the best hitter in baseball is intrinsic to the Gs, and any suitable bijection does not map c onto the best hitter in baseball. The worry is that being the best hitter in baseball might not be intrinsic to any group more coarse-grained than the things, so there’ll be no bijections of the type I described, and hence it will be vacuously true that all such bijections map c onto the best hitter in baseball. Maybe I’m wrong about that, so the bold conjecture might be right. And maybe c really is F*; the intuitions here are not particularly clear.

A different kind of problem arises with properties like being the mereological difference between a human and its longest hair. Note this is maximal, but we don’t want to say an object with this property is pretty. The difference between me and my longest hair, call it d, has this property, call it F, but it is not pretty. That’s why I restricted the definition of prettiness to those things with natural maximal properties. But now consider d minus its longest hair – call that e. Surely e is F*. But there’s no way at all for my definition of starring to work in that case, for it is only defined for cases where the things that are F are pretty, or at least where they could be pretty. I’m actually not too worried about that. Maybe I don’t have a definition of starring, but necessary and sufficient conditions for being an F* in cases where F is reasonably natural. That would still be progress, I think, though maybe not much progress.

Radical Beliefs

Reading Neil Levy’s very good paper on responsibility for belief reminded me that I’ve probably never posted here my view about the connection between voluntarism about belief and deontological conceptions of justification. I keep forgetting this, but I do have one extreme philosophical view. (Most of my views are just mundane common sense, which I regret a little, but sometimes the truth is like that.) I’m a fairly extreme voluntarist about belief. I think there are some propositions that you can come to believe more or less at will, at least with a little practice. I don’t think this is always easy. Moving your beliefs around at will is like moving your arms around at will when there are heavy weights attached to the ends of them. It can be done, but practice helps.

Anyway, I think that the kind of voluntarism we need to defend a deontological conception of justification is actually quite weak, and almost plausible. (It’s certainly true, since stronger versions of voluntarism that are definitely not plausible by current standards are also true.) Let’s start by noting some fairly obvious truths about the connection between voluntary action and moral responsibility. Today was graduation at Brown, and I had an obligation, of a sort, to attend the departmental graduation ceremony. Despite the torrential rain, I did so. Now I could well have stayed at home, and had the game I’d been watching (Wolverhampton-Sheffield playoff for the last premiership position, if you’re keeping score) been any closer or the rain been any heavier, I may well have. Had I done so, I would have been morally culpable. And in part this would have been because it was within my voluntary control to get myself to the graduation ceremony.

Now, I couldn’t have reached the graduation ceremony by just clicking my heels and wishing myself there. I would have been a little drier had I been able to do just that, but sadly it was impossible. But there were a series of actions that were within my direct voluntary control (one foot in front of the other, keep the umbrella pointed towards the wind so it doesn’t invert, etc.) that resulted in my being at the graduation ceremony. It might not be easy to carry out this series of actions, especially in the rain, but as long as the series exists then my presence or otherwise at the graduation is sufficiently under my voluntary control that I’m responsible for whether or not it happens.

How does this relate to belief? The most direct way it does is if for some beliefs, the ones for which you are responsible, there is a series of voluntary actions you can take such that you’ll end up having that belief. I think that’s sometimes possible, but I don’t want to try convincing you of that here. And the reason for that is that for present purposes I don’t need to. If I could have failed to have a certain belief by performing a series of actions that are under my voluntary control, yet I still have the belief, then that seems like enough for responsibility. And actually it’s rather easy to remove beliefs, at least non-perceptual beliefs, by voluntary actions. The good kind of scepticism, the kind that teaches you to doubt charlatans, fraudsters, used car salesmen, magicians, Republican politicians, spammers with Nigerian millions, news that’s too good to be true, stories that are too incredible to be fiction, anything said by philosophers and so on, basically consists in an exhortation to doubt everything doubtable. And that kind of exhortation can work, especially when presented the right way. If we do our job in teaching entry level philosophy courses, one of the skills we generate is the ability to doubt at will, and this kind of doubt defeats belief.

Let’s try a little thought experiment. Take any claim that you believed at first but later regretted believing. In America this should be easy – unless you disbelieved every factual claim made by the administration in the lead-up to the Iraq war, there’s probably something you believed and regretted. (I’m cheating a little here. The administration did say things like that Saddam is evil and the Iraqi people would be better off with him removed, which are both true, and even factual on a cognitivist theory of morality. Ignore these claims. I’m sure most readers believed them then, and don’t regret believing them now. The claims about the military capacities and threats of the Iraqis are what we care about here. The basic administration line, recall, was that Iraq posed a clear and present danger to the U.S. and that they were so weak militarily that a few thousand soldiers and some smart bombs should see them out. It’s the parts of that line that I’m focussing on.) Many people, for example, believed what Colin Powell said at his presentation at the U.N. about Iraq’s chemical and biological weapons capacity, and I’m sure some of them regret so doing. I think many of these people could have, if they had tried hard enough, remained sceptical about these claims. They could have retained a sceptical doubt even in the face of apparently sincere assertion by Sec. Powell. If they couldn’t have done just this, their regret would be at least a little misplaced. Not entirely, since we can regret things that are outside our voluntary control, but a little I suspect. And I think this kind of situation is one in which we often find ourselves. It’s natural to take things at face value, to believe what people say, but we don’t have to do this, and we often shouldn’t.

That’s all we need I think to salvage a deontological conception of justification. We don’t need that people can believe at will. We don’t even need that people can doubt at will. We just need that there are procedures we can use, the kinds of procedures we teach students in critical reasoning courses, that if properly carried out will lead to doubt and hence not to belief. If the agent could have carried out these procedures, but believes anyway, then s/he is culpable, because her/his belief is in the relevant sense under her/his voluntary control – it was within her/his power to not have that belief.

That much I think is fairly moderate. The radical bit is where I try and turn this into an argument that one can generate beliefs just as easily as one can destroy them. But I might leave that for a different late night blog.

1976 – Some History of the Problem of the Many

This is a history post. So those of you with no interest in history of philosophy, or with no confidence in my abilities as a historian might want to skip to the next post.

In my Problem of the Many article in the Stanford Encyclopaedia I said that the problem could be traced to two sources: the third edition of Geach’s Reference and Generality and Unger’s article The Problem of the Many, both from 1980. I was somewhat surprised to learn when doing the research for this that the problem was not in earlier versions of Reference and Generality, so Geach doesn’t get a clear claim to priority over Unger. At the time I was fairly confident that these were the earliest versions of the problem. All the contemporary articles seemed to trace the problem back to Geach and/or Unger, and no one cited anything earlier than that. And I certainly hadn’t found anything earlier than 1980, though one wouldn’t want to rest too much weight on my historical acumen.

I think, though I haven’t checked this with the principals, that the problem was independently discovered by Unger and by Geach. In any case, I have no reason to suspect otherwise, and since both versions came out roundabout the same time and neither cites the other it seems reasonable to conclude that this was a process of simultaneous independent discovery.

I now think that there’s an earlier statement of the problem, in more or less its modern form. And I also think, contra what I said in the Stanford article, that the over-population solution to the Problem of the Many has been seriously defended. (Hud Hudson attributes this solution to David Lewis, but I think he’s being too charitable there.) Both conclusions derive from this passage from an article by Jaegwon Kim. The context is that Kim is trying to deflect the objection that his theory of events leads to too many events. His response is, roughly, that all sorts of plausible philosophical theories lead to implausible counting results.

The analogy with tables and other sundry physical objects may still help us here. We normally count this as one table; and there are just so many (a fixed number of) tables in this room. However, if you believe in the calculus of individuals, you will see that included in this table is another table – in fact, there are indefinitely many tables each of which is a proper part of this table. For consider the table with one micrometer of its top removed; that is a table different from this table; and so on.

It would be absurd to say that for this reason we must say that there are in fact indefinitely many tables in this room. What I am suggesting is merely that the sense in which, under the structured complex view of events, there are indefinitely many strolls strolled by Sebastian may be just as harmless as the sense in which there are indefinitely many tables in this room.

I think that’s pretty much exactly the problem of the many. Note that despite the talk of ‘removing’ one micrometer of the top of the table, the reference to the calculus of individuals makes it clear that Kim just cares about what objects are here now, not what objects could be here. What he’s assuming, falsely I now think, is that table is an intrinsic property, so the fact that if we did shave off a micrometer we’d clearly still have a table means that the mereological difference between the table now and the bits of wood that would, in that case, be so shaved is also a table. And he’s inferring, I think, that since it would be absurd to give up our ordinary practice of talking as if there’s exactly one table here because of these metaphysical speculations, there must be some pragmatic mechanism that makes this talk acceptable. Note in this context the exact wording of the first sentence of the second quoted paragraph. He doesn’t say that this is an absurd reason to think there are indefinitely many tables here. (It really is, but he thinks it’s actually quite a good reason.) He thinks it is an absurd reason to say that there are indefinitely many tables here. Presumably pragmatics must be doing a fair bit of work to bridge the gap between truth and assertion.

Kim’s paper Events as Property Exemplifications was first published in Action Theory, edited by Myles Brand and Douglas Walton, Reidel 1976, pp 159-77. That volume was a collection of papers presented at the Winnipeg Conference on Human Action, held at Winnipeg, Manitoba, Canada, 9-11 May 1975. The quote is from page 172. (I think – I’m writing this from notes which are a little hazy.) So I think Kim has a pretty clear claim to priority. I still think Geach and Unger independently discovered the problem, but I now think they independently rediscovered it, rather than being simultaneous initial discoverers.

Unless I find good reason to change my mind on that, I’ll alter the Stanford entry to credit Kim with the initial discovery.
