Modal Parts

I don’t think I have a lot of way-outside-the-mainstream philosophical beliefs; in fact I think I have considerably fewer of them than I’d like. Probably the most extreme of my positions is that I believe in modal parts.

The idea behind the doctrine of modal parts is that for any object o and class of worlds W such that o exists in every member of W, o has a part o’ that exists in every world in W and in no world outside of it. (The obvious analogy is to temporal parts. This analogy will get pressed a lot as we continue.) This isn’t quite a strong enough position, because this makes it sound as if actualism implies modal parts, and modal parts is meant to be a much more outrageous position. The trick, as with getting temporal parts and presentism to sing in harmony, is to restate the doctrine using operators, or something like them. Here’s what I take the position to be.

For any class of worlds W, let pW be a proposition that is true at all and only the worlds in W. The doctrine is that for any object o and class of worlds W such that Necessarily, if pW then o exists, o has a part o’ such that Necessarily, o’ exists iff pW.
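Stated symbolically, the operator version might be rendered as follows (this is my own symbolisation, not anything official, with E for existence and P for parthood):

```latex
% Operator statement of the doctrine of modal parts (my symbolisation).
% p_W is true at all and only the worlds in W; P(x,y): x is a part of y.
\forall o\, \forall W \Bigl[ \Box\bigl(p_W \rightarrow E(o)\bigr)
  \rightarrow \exists o' \bigl( P(o', o) \wedge
  \Box\bigl(E(o') \leftrightarrow p_W\bigr) \bigr) \Bigr]
```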

If modal realism is true, this is equivalent to the earlier statement of the doctrine of modal parts. However, even if modal realism is not true, this doctrine makes a striking claim about the existence of objects that are ‘world-bound’ relative to whatever our actualist takes worlds to be.

Given that I hold such a view, one might wonder what my arguments for it are. I was wondering just that this morning, and it seemed the arguments for it probably aren’t as bad as orthodoxy would have you think, but probably aren’t as good as I’d like.

The motivation for believing in modal parts is a generalised suspicion of extended simples. But suspicion is not an argument.

One real argument would be the problem of contingent intrinsics. In Plurality that’s Lewis’s main argument for (something like) the doctrine of modal parts. Stephen Yablo has argued that this argument won’t extend to objects that don’t vary in intrinsic property between worlds in W. In general I’ve never been strongly moved by arguments for parthood from intrinsicness, so I don’t want to rest too much weight on this.

Other arguments come from analogy with arguments for temporal parts. Since Ted Sider has collected so many of those in Four-Dimensionalism, I should just try stealing the best. (Ah, the advantages of theft over honest toil.)

One nice use for modal parts, I think probably the best use, is in resolving some of the paradoxes about coincidence and constitution. I think the modal partser has by far the best story to tell here. On the one hand, she can say that the statue and the lump are distinct fusions of modal parts, and hence respect the argument from Leibniz’s Law that they are not identical. On the other, she can say that there’s a good sense in which there is only one object here, because they both exist in virtue of having a common modal part. I’ve never seen another story about the paradoxes that gets nearly as close to capturing ordinary intuitions as the modal parts story.

Ted’s main argument for temporal parts, the argument from vagueness, also extends across. (The following sketch is incomplete at every step. I’ll try one day to write it up properly and see how the arguments carry across.) Assume that the doctrine of modal parts is not generally true. Still, it seems that for some o, W there will be an o’ such that Necessarily, o’ exists iff pW. Any principle about when such an o’ exists (other than the no-parts claim that o’ exists iff o’=o and W is the class of worlds at which o exists) will be vague. But that will imply that it is vague how many things there are, which is intolerable. So we should be ‘universalists’ about modal parts – for any o and any W such that o exists in every world in W, o has a part o’ that exists in the worlds in W only.

One class of arguments for temporal parts, however, does not carry across: the arguments from time travel. I assume that genuine travel between worlds is a conceptual impossibility, even for a modal realist. So we can’t argue that (1) the possibility of modal travel requires modal parts, and (2) modal travel is possible, so (3) modal parts exist, because premise (2) is false. This is a disanalogy with the argument for temporal parts, and perhaps a fatal one.

I suspect the vagueness argument will turn out to have holes in it when the details are spelled out. I worry that the nihilist position may turn out to be quite plausible. And I worry that there will be no way to argue from the vagueness of an intermediate view to any vagueness in how many things there are. So the arguments from constitution may have to do all the work. I think they probably can, but it’s not the strongest foundation for a metaphysical theory.

Illusions

Yesterday I linked to Edward Adelson’s checkershadow illusion. I was moseying around his papers page looking for a paper with a description of (or, better, a picture of) that illusion, largely because in some circumstances it’s better to refer to papers than websites. (Not online, obviously!)

I didn’t find what I was looking for, but I did find a bunch of other interesting papers on illusions. And most of them have lots of pretty pictures. (Though honestly I only read the articles for the, er, articles.) I was particularly impressed by the snake illusion in his Lightness Perception and Lightness Illusions. It’s a nice example of an illusion that doesn’t turn on immediate contrasts, since the illusory diamonds have the same immediate contrast. As Adelson puts it, it’s as if we can turn contrast effects on and off by ‘remote control’.

Every time I look at perception I’m struck by just how big a mystery it is.

Conferences

It seems to be conference season around a few disciplines, so Kieran Healy and Daniel Drezner have a few words of advice for attendees. Most of the advice carries across well to philosophy, I think, though perhaps some of it could use qualification. I don’t really agree with Dan’s suggestion that one minimise consumption of caffeine and/or alcohol during these conferences – but I can see why people might think that. Turning up with little sleep and a raging headache to a talk, or better still a job interview, is really unpleasant. But it’s hard to party or network without booze and coffee, and they are the main reasons one conferences.

At a big conference, especially an APA, one of the hard things to do is work out which papers to attend. If you’re thinking about this, the first thing you should do is decide not to go to the APA, and instead go to a conference that is likely to have good papers, like the Bellingham Conference, or the Australasian Association for Philosophy conference. (Did I mention it is on the Great Barrier Reef next year?!)

If you don’t follow that advice, the best I can do is offer some empirical data. The papers I go to tend to fall into three categories.

1. Papers by my friends.
2. Papers by bigshots.
3. Papers on topics I’m interested in.

Obviously there’s some overlap between the categories, but it’s usually possible to say which of the three is the primary reason for attending. And as a rule the ordering given there (friends, bigshots, on topic) is also the ordering of the quality of the sessions. So probably in the future I should only go to papers by my friends, unless there’s a bigshot who is at least an acquaintance also speaking on a topic I care about.

Just how to generalise this result is hard. If you want to follow my lead, should you go to papers by your friends or by my friends?

In general it’s hard to say whether papers by unknowns or bigshots at these conferences will be better. Some bigshots are just recycling the same ideas (or the same papers) that made them famous in the first place. But not all. And some unknowns are not bigshots (or even intermediateshots) because, well, because they aren’t that good. But not all.

On the recycling old ideas theme, I just noticed that the deadline for next spring’s APAs is the end of the week, and that the blog post yesterday on evidence and knowledge is just about the right length for an APA submission. So I might try and polish that a bit, tighten up the jokes and loosen up the argument, and send it in.

While on the conference theme, did anyone go last year to The Hawaii International Conference on Arts and Humanities? I can’t tell whether going is an inspired idea or just plain crazy. It might be fun in a post-APA way, and it might really be nice to get away from the snow for a few days in Hawaii. On the other hand, the conference looks like a bit of a joke, and I fear attending could make one part of the joke rather than in on the joke.

Evidence and Knowledge

I’ve been thinking again about Timothy Williamson’s idea that our evidence is what we know, or as he puts it, E=K. This never struck me as particularly persuasive. At first I thought it was wrong in every possible way. E=K implies that evidence is always propositional, that it is always true, and that it is usually massively redundant. The last point refers to the fact that when I know p I usually also know lots of things that imply p, such as ((p or q) and ~q) for any q such that I know ~q, as well as many things that inductively support p. All these things will count as evidence by Williamson’s lights, while I think evidence is much sparser. I’ve mellowed a bit in my old age – I now think that the first of these claims is right, evidence is propositional. But I’m sceptical (at best) that evidence has to be true, and rather confident that evidence (at least in the most natural sense) is not so redundant.
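The redundancy worry turns on trivial entailments like that one. Just to be safe, here’s a throwaway truth-table check (nothing in Williamson, obviously) that ((p or q) and ~q) really does entail p:

```python
# Truth-table check that ((p or q) and ~q) entails p: the trivial
# entailment behind the redundancy worry about E=K.
from itertools import product

def implies(antecedent, consequent):
    """Material implication."""
    return (not antecedent) or consequent

# Check every assignment of truth values to p and q.
entails = all(implies((p or q) and not q, p)
              for p, q in product([True, False], repeat=2))
```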

Here is what persuaded me that evidence should be propositional. Evidence is the input to thought. Thought is content manipulation. Hence evidence must be contentful, else it couldn’t be manipulated by a content manipulating machine. But to say it is contentful just is to say it is propositional. I’m not sure how close this is to the arguments Williamson provides. It is similar in spirit to the arguments from the role of evidence in inference to the best explanation, but very different from the syntactically motivated arguments concerning ‘because’ clauses.

Here’s what I think evidence is, at least for creatures like us. (I think E=K is intended to be a conceptual and/or analytic truth. What I say about evidence is contingently true if I’m lucky.) It’s the output of our reliable modules. By ‘module’ here I mean just what Fodor means by ‘module’ – it’s an informationally encapsulated processor that has a proprietary input (which could in principle be anything but is usually tightly constrained for particular modules) and delivers propositions as output. Ideally I’d like to think modules are neurologically local – I’d be a touch suspicious of any claim that there is a module whose processing work was distributed over most of the brain, some of the spinal cord and the sensory receptors in my left leg. I think locality is an important part of ‘module realism’, but that’s a story for another day.

There’s only one other thing about Fodor’s conception of modules that will be important here. Fodor, unlike some contemporary modularity theorists, doesn’t think that there is a ‘thought module’. What modules do is deliver propositions to a central processor, but that processor is not informationally encapsulated and hence is not a module. (I haven’t gone back to check the details, but my impression was that Fodor thought information only got processed once before hitting the central processor. Whether that’s true or not isn’t I think central to the picture I’m going to sketch, though it would make it cleaner if it were true.)

A proposition is part of our evidence iff it is an output of a reliable module. The notion of reliability here is purely frequentist – a module is reliable iff it usually outputs truths. This theory allows that we have mistaken evidence. I could change that by saying that evidence is the true output of reliable modules, but I’m inclined to think that would be a mistake.
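For concreteness, here’s a toy model of that frequentist notion (the module, its outputs and the threshold are all invented for illustration): reliability is just the frequency of true outputs, and every output of a reliable module, true or false, counts as evidence:

```python
# Toy model of purely frequentist reliability: a module is reliable iff
# it usually (here: more than half the time) outputs truths. The sample
# outputs and the threshold are invented for illustration.

def is_reliable(outputs, threshold=0.5):
    """outputs: list of (proposition, was_true) pairs from one module."""
    if not outputs:
        return False
    true_count = sum(1 for _, was_true in outputs if was_true)
    return true_count / len(outputs) > threshold

# A mostly-accurate visual module.
visual = [("A is darker than B", False),
          ("there is a book here", True),
          ("the light is on", True),
          ("the wall is white", True)]

# All outputs of a reliable module count as evidence, including the
# occasional falsehood (as in the checkershadow case).
evidence = [p for p, _ in visual] if is_reliable(visual) else []
```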

The reason for that is a reason that Williamson discusses (on pages 198-200), though not I think satisfactorily. Consider a regular case of illusion, such as the checkershadow illusion. In the checkershadow, my intuition is that it is part of my evidence that square A is darker than square B. (This sentence is ambiguous. If we take the picture to represent a real situation, then it’s true that square A is represented as being darker than square B. What I mean is that my evidence includes the false claim that the representation of square A is darker than the representation of square B.) If evidence is always true, this cannot be the case.

Williamson’s way of saving the intuition here is to say that our evidence is really that square A looks darker than square B. There are three problems with that.

One, that Williamson acknowledges, is that it implies that a primitive creature that lacks the concept LOOKS doesn’t get any evidence at all to the effect that square A is darker than square B. It cannot get the evidence that A is darker, because there is no such evidence, nor that square A looks darker, because it cannot process LOOKS. But surely it gets something like evidence to this effect.

A second is that this gets the phenomenology all wrong. We don’t get some evidence about visual appearance and then do something with it, like use it to conclude that A is darker than B. Rather, our evidence just is that A is darker than B.

A third problem, one that connects to the other odd feature of Williamson’s account of evidence, is that it seems ad hoc to say that in the checkershadow illusion our evidence is just about looks, but in a normal case, like looking at a book and coming to know there’s a book there, our evidence is about the external world. One way out of this would be to say that our evidence is always about looks. This potentially leads to scepticism, or at least to blocking the way out of scepticism that externalists about perception (like Williamson) want to endorse. Williamson doesn’t say the ad hoc thing, but nor does he fall back into scepticism. Rather, he thinks that in both cases we get evidence about looks, but in the good case we also get evidence that there is a book in front of us. This looks like too much evidence. If our evidence is that it looks like there is a book in front of me, then presumably the belief that there is a book in front of me is one of my conclusions, not part of my evidence.

(Is the dichotomy here justified? Some might think that in chain arguments a proposition can be first conclusion second premise, so a belief might be both a conclusion and a piece of evidence. I think matters are tricky here. On the one hand we do talk as if intermediate conclusions are part of our evidence for later inferences. On the other, it’s possible to construe all references to intermediate conclusions in arguments as shorthand for references to the real grounds on which they rest. I’m not sure that reflection at this level of generality can tell us much, so I’d rather move on to particular examples. Just how many roles a proposition can play in inference, whether it can be something we conclude as well as part of our evidence for example, is a hard question, one that in my view probably can’t be answered by purely a priori reflection.)

This connects to the concern about evidence being massively redundant on Williamson’s view. Williamson argues against the view that all justified true beliefs are part of our evidence using the following example.

Suppose that balls are drawn from a bag, with replacement. In order to avoid issues about the present truth-values of statements about the future, assume that someone else has already made the draws; I watch them on film. For a suitable number n, the following situation can arise. I have seen draws 1 to n; each was red (produced a red ball). I have not yet seen draw n+1. I reason probabilistically, and form a justified belief that draw n+1 was red too. My belief is in fact true. But I do not know that draw n+1 was red. Consider two false hypotheses:

h: Draws 1 to n were red; draw n+1 was black.
h′: Draw 1 was black; draws 2 to n+1 were red.

It is natural to say that h is consistent with my evidence and that h′ is not. (Pp. 200-1, my emphasis, notation slightly altered.)

This is indeed natural, but look how much work is done by the line I have highlighted. If Williamson conceded that he knew that draw n+1 was red, then he couldn’t say the ‘natural’ thing he says at the end. But surely in some very similar cases we do know the equivalent proposition. Take any case where, on the basis of evidence p (where p is something that all parties would consider evidence), we inductively infer q and thereby come to know q. Unless inductive scepticism is true, such a situation is surely possible. But in such a situation it would be very strange to say ~q is inconsistent with my evidence.

For a concrete case, consider my favourite piece of inductive reasoning: inferring how a movie will end on its Friday night screening from how it ended on its Thursday night screening. (It’s my favourite inductive inference because it’s such a nice case of inductive reasoning from a single data point. And I do so wish it were more often true that inductive reasoning from a single data point were epistemically sound.) I watch the Thursday night showing of the movie, and see that the hero dies in the final scene. I conclude, fallibly but pretty reliably, that when my friend watches the Friday night showing of the movie, the hero will die in the final scene. This is fallible, since some movies have multiple endings. (For example, 28 Days Later.) But it’s very reasonable, and I think constitutes knowledge.

Just like we get a disanalogy between h and h′ in Williamson’s example, we get a similar disanalogy here.

h: The hero died in the final scene on Thursday night but not Friday night.
h′: The hero died in the final scene on Friday night but not Thursday night.

I know that both h and h′ are false, but it’s very natural to say that h is consistent with my evidence but h′ is not. I conclude that Williamson is (in one sense) too generous in what he counts as evidence.

(Rest of post eaten by WordPress. See this paper for more details.)

Time Travel

Here’s a (wildly optimistic) draft of the syllabus for my freshman seminar on time travel. I suspect we won’t cover much of the material I’ve listed, but I always prefer to have too much planned than too little. Now I just need to write a syllabus for the math logic course, and then get to writing the first couple of classes in each subject, and then I’ll be ready for start of semester!

Causation by Omissions

At the recent Syracuse Metaphysics conference the following rather surprising little paradox arose. I think I know which way I want to get out of it, but I was rather surprised to see that a problem as simple and as pressing as this exists but hasn’t received much airplay.

I’ll set up the puzzle with an example. Chauncey the gardener, like every other gardener in the city, is on strike today. He hasn’t had a day off in a long time, so he can’t really decide what to do with it. After deciding that television is too boring and it’s too far to walk to the beach, he decides to head to the pub. And, like most people who get to the pub before lunchtime, he ends up good and drunk. He was getting drunk at just the time he would normally have been watering the flowers. But water the flowers he did not, and the flowers died. In the circumstances, (1) seems true and (2) false.

(1) Chauncey’s not watering the flowers caused them to die.
(2) Chauncey’s not watering the flowers caused him to get drunk.

Jonathan Schaffer said, in response to a question on this topic, that he would accept (2). The reasoning is fairly simple. He doesn’t believe that absences are distinct events from the commissions underlying them. So Chauncey’s not watering the flowers just is his going to the pub. And his going to the pub does cause him to get drunk. So by Leibniz’s Law, his not watering the flowers causes him to get drunk.
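As I understand it, Schaffer’s reasoning has the shape of a simple identity argument (my reconstruction, writing C(x, y) for ‘x causes y’):

```latex
% n: Chauncey's not watering the flowers; g: his going to the pub;
% d: his getting drunk; C(x,y): x causes y.
\begin{align*}
  &\text{1. } n = g && \text{(absences are not distinct from their underlying commissions)}\\
  &\text{2. } C(g, d) && \text{(going to the pub caused the drunkenness)}\\
  &\text{3. } C(n, d) && \text{(from 1, 2 by Leibniz's Law)}
\end{align*}
```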

This is a pretty bad position to get stuck in, but Jonathan argued that there were only two ways out, and both of them are worse. The first way out is to deny absence causation and hence deny that (1) is true. I don’t think this position is awful, but let’s grant for now that it’s bad enough to want to avoid. The second way out is to deny that causes have to be immanent. If Chauncey’s not watering the flowers is some kind of abstraction, then it need not be his going to the pub, and hence our little Leibniz’s Law argument need not get off the ground. But causation is the cement of the universe, it has to relate immanent entities.

There’s a third option, I guess, and maybe one I want to accept. Absences are immanent events that are distinct from any commissions. This kind of position is on a par with those views that think there are a lot more events in the world than ordinary metaphysics allows. Now it’s a little strange to say that not only do absences exist but they are distinct immanent events, but maybe that’s better than the alternatives.

So here’s the puzzle. The following options look to be exhaustive.

1. Deny (1) because absences are never causes.
2. Deny (1) because there’s something wrong with this absence that makes it unfit to be a cause.
3. Deny that (1) entails (2) because absences are abstract while commissions like Chauncey’s going to the pub are concrete, and accept that abstracta can be causes.
4. Deny that (1) entails (2) because absences are concrete events distinct from any kind of commission.
5. Accept (2).

None of these options is particularly happy. I think my preference ordering is 4, 1, 3, 5, 2. But I don’t really have an argument for that – here we get down to comparing strengths of intuitions.

Perdurantism

Achille Varzi responds here to my rather intemperate criticisms of his Perdurantism, Universalism and Quantifiers. It looks like I misinterpreted his target, which makes it somewhat embarrassing that I was so impolite.

(First rule of blogging: it’s OK to be rude, it’s OK to be wrong, it’s not OK to be rude and wrong. Second rule of blogging: don’t be so rude that you’d regret it if the target read the post. Of course, this means that it’s perfectly OK to say that Descartes was an over-rated pompous self-righteous grovelling little Frenchman, since it’s unlikely that old René will be reading TAR.)

Rather than dig myself into a deeper hole here, let me note that Josh Parsons has a paper taking a much more sympathetic line on Varzi’s criticisms. Josh thinks that perdurantists have a response to Varzi’s argument, but it is a rather complicated response, and I’m not sure many perdurantists would be happy with it.

I don’t really understand Josh’s response, so I won’t try launching into too many criticisms. But I will make two quick points.

First, I don’t really understand what Josh’s translation scheme would do with Some girls are older than others. I think it ends up as (Some x)(Some y)(x is a girly temporal part of a person and y is a girly temporal part of a person and x is older than y). But that’s false, since all temporal parts are ageless. I must be missing something here.

Second, I think Josh is conflating two possible positions here. (There’s a complicated backstory here, but I won’t go into it. If you think I’m quoting Josh out of context, well there’s a link to his paper above.)

It makes sense to say Some child will be a tenor, referring to those persons who are now children. So the noun child that figures in that statement is in some sense implicitly present-tensed. You might insist that child functions like tenor, in that it applies to whole persons, not just their child-like stages. But if you did, then you should think that it applies to whole persons in virtue of their being now a child, not in virtue of merely tenselessly having a child-like part. If it were the latter, then child would be equivalent, for all practical purposes, to person, which it plainly isn’t.

The final line doesn’t follow from what came before, as can be seen by noting that the following position is consistent.

(a) The kinds of things that are in the extension of child are whole persons, i.e. fusions of past, present and future temporal parts.
(b) The semantic value of child is a function from worlds and times to sets of persons, and the members of that set at w,t are the persons that are children in w at t, which is (usually) a proper subset of the class of persons at w at t.
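Since (a) and (b) can sound like they pull apart, here’s a toy model (the people and ages are invented) showing how the extension of child can contain whole persons while still varying across times:

```python
# Toy model of the (a)/(b) view: the semantic value of 'child' maps a
# world-time pair to a set of whole persons (fusions of all their
# temporal parts), namely those who are children at that time.
# The people, ages and the under-18 cutoff are invented.

PERSONS = {"alice": {2000: 5, 2010: 15, 2020: 25},   # age at each time
           "bob":   {2000: 30, 2010: 40, 2020: 50}}

def child_sv(world, time):
    """Semantic value of 'child' at (world, time): the whole persons who
    are children (here: under 18) in that world at that time. The world
    argument is idle in this single-world toy."""
    return {name for name, ages in PERSONS.items()
            if ages.get(time, 99) < 18}

# The extension contains persons, not person-stages (view (i)),
# yet it changes over time, contra view (ii).
now_children = child_sv("w", 2000)     # whole persons who are children in 2000
later_children = child_sv("w", 2020)   # nobody is a child by 2020
```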

What’s been run together in this paragraph (unless I’ve misinterpreted something else) are the views (i) that the extension of child includes persons not person-stages and (ii) that the extension of child is unchanging over times.

To be sure, the kind of view I’ve been defending may be thought to have a problem or two with the problem of temporary intrinsics. If you thought the problem of temporary intrinsics was a problem whose solution involved modifying the intuitive semantics for tensed utterances, then you probably won’t like the theory I’ve sketched here. I don’t, but that’s another story.
