Limbs and Limps and Persons and Bodies

I was thinking again about Ryle’s response to Descartes’ argument that minds are immaterial. Ryle, I think, takes Descartes to be making the following inference.

(1a) My mind is not identical to any physical part of me.
(2a) My mind is a part of me.
(3a) So, my mind is a non-physical part of me.
(4a) So, there are non-physical things.

And Ryle thinks that’s a bad argument, one whose badness can be seen by considering the following argument. (Imagine, for the sake of this argument, that I have a limp.)

(1b) My limp is not identical to any physical part of me.
(2b) My limp is a part of me.
(3b) So, my limp is a non-physical part of me.
(4b) So, there are non-physical things.

Clearly the latter argument breaks down. (4b) is pretty clearly not supported by this inference, and Ryle thinks (3b) isn’t either. The problem is that Descartes thinks that _I have a mind_ is like _I have a limb_, whereas it is really like _I have a limp_. We possess things other than parts; in particular we possess attributes. These include limps and minds.

This is a way we can reject Descartes without worrying about the modal argument Descartes offers for (1a). Indeed, we can accept (1a). And, in a sense, Ryle does. He certainly thinks it is not true that my mind is a physical part of me, though perhaps, since the claim is a category mistake, its negation isn’t literally true either. No matter; (1a) at least gets a truth across whether it is literally true or not.

I tend to think Ryle is basically correct about all of this. The Australian tradition (represented by Smart, Lewis and Jackson) says, I think, that Ryle concedes too much to Descartes, and that he shouldn’t concede (1a). Determining whether that’s right would require a large cost-benefit analysis of the different ways Smart and Ryle think about minds, which is obviously too big for a blog post. But here might be another reason to think Ryle was on the right track.

It seems to me that even if (5) were true, (6) would be in some way defective.

(5) I have a limp.
(6) My body has a limp.

(6) sounds like a category mistake to me. That is, it sounds like it isn’t true. If that’s true, it follows immediately by Leibniz’s law that I’m not identical to my body.

The kind of move I’m making here is similar to the move Kit Fine has been making over recent years, arguing for the non-identity of (what’s usually thought of as) a thing and what it is composed of without looking at modal or temporal properties of the thing. But what’s nice, I think, about this example is that it makes the non-identity point without suggesting any particularly odd metaphysics. We don’t get a sense that there are spooks of any kind by thinking about limps.

I’m inclined to take the difference between (5) and (6) as more evidence for the view I independently favour, namely that people are really *events*. I’m the event of this body having certain dynamic features. When the event that is me ends, my body will (in all probability) survive, but it will lose those dynamic features. My body isn’t an event, so I can’t be identical to it. Unless we regard events as too metaphysically spooky to contemplate, having people be distinct from bodies isn’t any kind of spooky view.

Perception and Philosophy

I’ve been thinking a fair bit about the epistemology of philosophy recently. And I find it helpful to think out the possible positions by comparison to possible positions on the epistemology of perception. Here, crudely, are four positions one might take about perception.

  1. Scepticism – Our perceptual evidence consists of non-factive mental states. (Perhaps it is the existence of those states, perhaps it is our beliefs about those states; the categories I’m interested in don’t discriminate along those lines.) And those states don’t give us sufficient evidence to warrant belief in propositions about the non-phenomenal world.
  2. Idealism – Same as scepticism, except that the propositions we pre-theoretically thought were warranted by perceptual evidence are, in some sense, about the phenomenal world.
  3. Indirect Realism – Our perceptual evidence consists of non-factive mental states and those states do give us sufficient evidence to warrant belief in the non-phenomenal world.
  4. Direct Realism – Our perceptual evidence consists in factive mental states. (This includes views on which the evidence is the existence of the state, and views on which the evidence is the content of the state.) Since our evidence consists in facts about the non-phenomenal world, we have non-inferential warrant for propositions about the external world.

There are lots of different ways in which people draw the direct/indirect realism distinction. See, for instance, “this paper”:http://consc.net/neh/papers/copenhaver.htm by Rebecca Copenhaver for a number of different ways the distinction is drawn. Note in particular that my indirect realist doesn’t require that we represent the existence of mental states in perception, let alone the existence of non-factive mental states, let alone believe that we have non-factive mental states, let alone have those beliefs be the basis for beliefs about the non-phenomenal world. If I token _that’s a table_ in my “visual representation box”, and that tokening justifies my belief that that’s a table, that counts as indirect realism for me. (Assuming that the visual representation box does, or at least could, include falsehoods.)

I make no claim that my usage here is in keeping with traditional usage, only that it’s one helpful way to divide up the categories.

Note that one can have different views about different sense modalities. One could be a sceptic about colour, an idealist about smell, and a direct realist about spatial vision, for instance. More interestingly, I think there’s a view worth taking seriously that is indirectly realist about most perception, but directly realist about touch. The motivation for this comes from (of all places) Descartes. Here’s something Descartes writes towards the end of the “sixth meditation”:http://oregonstate.edu/instruct/phl302/texts/descartes/meditations/Meditation6.html.

bq. I remark, besides, that the nature of body is such that none of its parts can be moved by another part a little removed from the other, which cannot likewise be moved in the same way by any one of the parts that lie between those two, although the most remote part does not act at all. As, for example, in the cord A, B, C, D, [which is in tension], if its last part D, be pulled, the first part A, will not be moved in a different way than it would be were one of the intermediate parts B or C to be pulled, and the last part D meanwhile to remain fixed. And in the same way, when I feel pain in the foot, the science of physics teaches me that this sensation is experienced by means of the nerves dispersed over the foot, which, extending like cords from it to the brain, when they are contracted in the foot, contract at the same time the inmost parts of the brain in which they have their origin, and excite in these parts a certain motion appointed by nature to cause in the mind a sensation of pain, as if existing in the foot; but as these nerves must pass through the tibia, the leg, the loins, the back, and neck, in order to reach the brain, it may happen that although their extremities in the foot are not affected, but only certain of their parts that pass through the loins or neck, the same movements, nevertheless, are excited in the brain by this motion as would have been caused there by a hurt received in the foot, and hence the mind will necessarily feel pain in the foot, just as if it had been hurt; and the same is true of all the other perceptions of our senses.

There’s an interesting principle here. The principle is that if we get evidence through a chain, then what evidence we get supervenes on the qualities of the last link in the chain. I’m not sure that’s right, but for the sake of developing a position, let’s say that it is. If that’s right, it follows that vision, for instance, can’t be understood the way the direct realist wants to understand it. For if a malicious demon altered the light between me and my computer (the chain through which I get visual knowledge of the computer’s properties), so that the computer changed but the light immediately around me stayed the same, I would get the same evidence. But that evidence would not consist in facts about the computer, for I would represent just what I actually represent, and that content would now be false. So we are very quickly led to indirect realism about any form of evidence that arrives through a chain.

Now for Descartes, all our evidence arrives through chains or cords of one kind or another. That’s because all our evidence has to get to the brain, and thence to the pineal gland. But perhaps that’s false. If we’re good Ryleans, and think that we (as opposed to our brains) gather evidence, then perhaps tactile evidence is not really mediated. If my hand touches my desk, there is nothing between the hand and the desk that mediates the connection. So perhaps we can be direct realists about tactile perception. Note that Descartes’ dualism isn’t doing much work in this argument; what is doing the work is that thinking takes place in (or through) the brain. And that’s a much more widely held view. But arguably it isn’t right; arguably thought, or at least representation/evidence collection, takes place at least throughout the nervous system, and perhaps throughout the whole body. Or so we’ll assume.

The theory we end up with is largely indirect. We have direct evidence that the external world exists. We can touch it. (Note the prevalence throughout history of thinking that touch gives us especially direct evidence of the external world. We refute idealism by _kicking_ the stone, not looking at it.) But we don’t get a whole lot else. Most of the details require filling in by evidence that is indirectly related to its subject.

Coming back to philosophy, all four of our positions are well represented in the contemporary debate.

The sceptic is the person who thinks that we have at best indirect evidence, i.e. intuitions, for philosophical theses, and these are not good evidence for the claims we want.

The idealist thinks that we only have intuitions, but that’s good because the desired conclusions were largely conceptual in nature. The idealist, that is, thinks philosophy is largely an investigation into the nature of concepts, so facts about mental states, i.e. intuitions, are a perfectly good guide.

The indirect realist thinks that philosophical questions are not, or at least not usually, about concepts. And she thinks that our evidence is largely intuitive, and hence indirect. But she thinks that, when we’re doing philosophy well, these can provide warrant for our desired conclusions.

And the direct realist thinks that all three are wrong about the nature of evidence. We start with evidence that bears directly on the questions we’re interested in. We simply know, and hence have as part of our evidence, facts like the fact that a Gettiered subject doesn’t know, and that torturing cats for fun is wrong. There isn’t any need to worry about the link between evidence and conclusion since the evidence often entails the desired conclusion.

Williamson’s “The Philosophy of Philosophy” is largely an argument for direct realism in philosophy, an argument that often proceeds by attacking the other views. So chapter 2 is a direct attack on idealism. Chapter 7 is an attack on indirect realism, with some attacks on scepticism thrown in. And the sceptic is the subject of criticism throughout the book, especially in chapters 5 and 7.

I think the position I want to end up holding is something like the position on perception I outlined above. Direct realism is partially true. Some of our philosophical evidence consists of knowledge, not of non-factive states. (For instance, our knowledge that a vegetarian diet is healthy is philosophical evidence.) But this won’t get us very far, any more than touch alone gives us much perceptual insight into the world. Most of our evidence is indirect; it is intuitive. So I’m largely an indirect realist about philosophy.

Holding indirect realism leads to two challenges. First, we must respond to arguments against indirect realism. The rough response I’ve been running in recent posts has been that the arguments against indirect realism are generally arguments against a very strong form of indirect realism, and we can hold on to a modified form without any cost. Second, we must explain how indirect evidence can bear on philosophical questions. That’s obviously the harder challenge, and one I wish I had more to say about.

Refereeing Journals and Rants

Over at “Brian Leiter’s blog”:http://leiterreports.typepad.com/blog/2008/07/a-proposal-abou.html there was a long thread recently about journal refereeing and reviewing practices. I thought I’d make a few points here that are getting lost in the crush.

1) In my experience, most absolute disasters with refereeing delays concern (a) potential referees who simply don’t answer requests to referee, and (b) cases where the editors run out of people they know/trust on the relevant topic. If everyone who received a request to referee a paper answered it, even in the negative, that day, and if those answering negatively suggested 1-3 names of people with some expertise in the field, that would make things flow much more smoothly.

2) Relatedly, I think a lot of people, when refereeing, don’t take into account how time-sensitive it is. Imagine you’ve got a paper that you’ve promised to referee within the month. And you’ve got a project of your own that is due at the end of that month. And you’ve got enough time in the month to do both. What should you do? I think the answer is that you should referee the paper straight away. Usually getting your paper done earlier won’t make a difference to anyone. Getting the report done earlier will make a difference. I think the system would work a lot more smoothly if every referee, upon getting a paper, seriously considered the question “Can I do this today?” Obviously if you have to present a lecture that day, or the next day, and it isn’t done, then the answer is no. But often the answer is yes. It’s not as if you’ll often spend more than a few hours on the paper, or that writing the report that day will take more time than writing it later, but doing it straight away will make a difference to editors and writers.

3) If we want to keep the model of some journals being run through departments, rather than through publishers, then some amount of delay is going to be inevitable. If nothing else, most journals run by departments have a support staff of 1. If that one person is sick, or on annual leave for a time, the whole system basically creaks to a halt. If that person is spending literally all their time for a two or three week period getting an issue ready to print, nothing happens with submissions. I’ve never had to deal with this, but I imagine if you don’t have good staff (or, more likely, don’t have good staff management) things are worse.

Probably the single biggest thing that could be done to improve journal response times would be to find a way to keep the system running when less than fully staffed. But it’s hard to do that in a small operation, when you can’t simply move staff from elsewhere onto the project.

4) The journal management software systems that are currently being rolled out make a huge difference. There’s nothing as good for keeping a paper from dropping off the face of the earth as reminders every few days that your report on it is overdue. (Since I sign off on every paper on Compass, I get a lot of these, but I’m not that late on too many.) Potentially these systems can, by automating processes now done by staff, help a lot with point (1). And that’s important, because otherwise point (1) seems to me to be intractable short of handing over all the journals to commercial presses.

Having said that, everyone hates the software when it is being rolled out. But it really makes all the difference in the world.

5) There’s been some discussion of cutting back on referee reports. I think this is basically a good idea. It’s true that referees need to say something to editors about what’s good or bad about a paper. But from experience I’ve learned that it’s *much* easier to find something informative to say about a paper to an editor than it is to say something informative and polite to an author. And anything that speeds up the process is probably good.

6) But I really don’t think the comments thread at Leiter is taking seriously how much of the problem is caused by there being too many papers being submitted. If every paper being submitted was a real philosophical advance, that wouldn’t be a problem – it would be paradise. But I don’t really think this is so.

Lots of papers I’m sent to referee are basically glorified blog posts that don’t attempt to make more than a very small point. Some of them would be quite good blog posts. But most journals aim a little higher than that. (Note this is different to the length point. Lots of good papers, even papers in top anthologies, are short. But they are all ambitious.)

Disturbingly, many papers seem to be largely unaware of the relevant literature, especially the most recent developments. I see too many papers that simply don’t pay attention to relevant work from the last 10 years.

Now I don’t want to pretend that I’ve never written (or published) papers that fall in one or other of these categories. But I do think that many papers get sent out when the author could profitably have either rolled the paper into a larger paper, or spent time talking to colleagues/friends/blog readers about relevant literature that should be consulted.

I used to think this was a tragedy of the commons problem. (Mark van Roojen makes this suggestion in the Leiter thread.) The pressure to publish meant that not-quite-cooked papers were frequently being sent out. And that’s too bad, but an inevitable consequence of everyone acting in enlightened self-interest. But really I don’t think that’s true.

That’s because I don’t think most people appreciate how important very very good papers are to one’s philosophical career. If you’re Tim Williamson or David Lewis you can write several papers a year that are important and groundbreaking. But most of us aren’t like that. Most of us will be such that most papers we write will sink without much trace. The vast bulk of attention will be paid to just a few papers. This can be seen in public through looking at citation rates. (Here are “mine”:http://scholar.google.com/scholar?q=weatherson&hl=en&lr=&btnG=Search on Google Scholar for example.) The most cited papers have an order of magnitude more citations than the bulk of papers, especially when self-citations are removed.

And if we care about professional advancement as much as contribution to philosophical thought, the same story really holds. People tend to get hired based on their best papers. (And they tend to get passed over based on their worst papers.) This shouldn’t be too surprising. People are busy. They don’t have time to read a job candidate’s full dissertation, let alone their full output if they’re more senior. They read what is (reputed to be) the best work. And that’s what goes into hiring decisions. As we see every year when looking at junior hires, it doesn’t really matter if that best paper was published in _Philosophical Review_, the Proceedings of the Philosistan grad conference, or (more likely) the candidate’s own website. What matters is how good it is, or appears. As a rule, spending more time improving your best paper will do more for your professional prospects than sending it off and moving on to another paper.

Indeed, even if one just cares about publication, I imagine a lot of people (probably me included) could do with being slower on the “submit” button. Most, though not all, bad papers get rejected. And that takes time. Spending time making a good paper very good, rather than submitting the (seemingly) good paper may well mean one fewer rejection, and hence quicker publication.

So, simple solution to the problem of journals being so slow – don’t submit so much!

Barstool Philosophy

One of the things that’s been a running thread through my recent thoughts about the epistemology of philosophy is that it is importantly a group activity. This is largely for prudential reasons. For those of us who aren’t Aristotle or Kant, by far the best way to regiment our philosophical thinking is subjecting it to the criticisms of others. That’s a substantial constraint; it means giving up points that can’t convince our peers. And sometimes that will have costs; we’ll be right and our peers wrong. Sometimes we might even know we’re right and they’re wrong. But as a rule one does better philosophy if one subjects oneself to this kind of constraint from the group.

Or so it seems to me. A thorough empirical investigation would be useful here, especially in terms of trying to figure out just what exceptions, if any, exist to this general principle. But given the relatively low quality of philosophy produced by most people who don’t regard themselves as being regulated by criticisms of their peers, I think it’s pretty clear the rule as a whole is a good one.

That all suggests that the metaphor of “armchair theorising” or “armchair philosophy” is very much mistaken. For armchairs are really places where one engages in solitary activities. And contemporary philosophy is a group activity par excellence.

So we need a new metaphor. “Conference room philosophy” sounds dreary even to me. “Coffeeshop philosophy” is better. But it might be better still to keep the idea of a seat. After all, most philosophy is done sitting down. I suggest “barstool philosophy”. I’m not convinced the best philosophy is done during/after drinking, but the image is pleasingly social at least!

Plurals and Deferred Ostension

I was trying to use some other examples of deferred ostension in order to put some constraints on what might be happening with the ‘we’ in “We won 4-2 last night”. The canonical example is (1)

(1) The ham sandwich is getting impatient.

This manages to communicate that the person who ordered the ham sandwich is getting impatient. That is, “the ham sandwich” somehow manages to pick out the person who ordered the ham sandwich.

Both the explicit term “the ham sandwich” and the intended referent, its orderer, are singular. I was wondering what happened when we made either plural. First, imagine that the person ordering hadn’t ordered a ham sandwich, but had instead ordered the olives. Then I think (2a) would be more or less appropriate, but (2b) would be infelicitous.

(2a) ?The olives are getting impatient.
(2b) #The olives is getting impatient.

Second, imagine that the intended referent is plural, but the phrase used is singular. So a table of people ordered the paella, and they are getting impatient. I think (3a) is a little better than (3b).

(3a) ?The paella are getting impatient.
(3b) ??The paella is getting impatient.

Do others agree with those judgments? If they’re right, they suggest that plurality ‘trumps’. That is, if either the noun phrase used, or the intended referent, is plural, then the verb should be plural as well.

A Puzzle about Plural Pronouns

Ishani and I have been talking about an odd usage of “we” that seems to raise interesting philosophical issues. I’ll just set up the puzzle today, and hopefully over the week there will be some attempts to solve the issue.

It’s common to say that “we” is a first-person plural pronoun. It’s also common to use “we” when referring to the activities of a group that, strictly speaking, you’re not part of. So, when asked about Geelong’s latest game, I might say something like “We were three goals down at half time, but we played well in the second half and won by ten points.” Now there’s a group of 22 guys who, in the example, played well in the second half. But I’m not one of them. I’m too old, too unfit, too useless and, crucially, not a registered player for the club. What’s going on in cases like this?

The easiest thing to say is that this is simply a mistaken use of language. But I don’t think that will do. For one thing, it’s simply too widespread a mistake to be written off so easily. In some sense, a usage that widespread can’t be simply mistaken. For another, the usage shows some degree of systematicity, the kind of systematicity that we as philosophers/semanticists should be in the business of explaining. We’ll see some of the respects of systematicity as we go along, but for now let me note just two of them. The first is that it’s very hard to have this kind of usage with the first-person singular pronoun. (There are exceptions, but this is the rule.) So (1) is fine, but (2) is marked.

(1) We played well in the second half.
(2) *I played well in the second half.

The other is that there aren’t that many cases where we can say _We did X_ to mean that some group of which you’re particularly fond did X. So it is possible to say it about (most) liked sporting teams, but not about, say, your favourite restaurant. No matter how much you like _Le Rat_, if you’re simply a fan (rather than an employee) you can’t say

(3) *We got three stars from Bruni in the _Times_.

Similarly, it is possible to say _We did X_ to mean that a political group you affiliate with did X, but not a rock band you are a fan of. So if you’re a fan and supporter of Peter Garrett both as a rock star and a politician, and Garrett has a number 1 single and an 8 point lead in the polls, then (4) could be permissible, but (5) seems considerably more marked.

(4) We have an 8 point lead in the polls.
(5) *We have a number 1 single.

So it looks like there is something interesting to explain about the pattern of usage here. In fact, there seem to be two distinct questions to ask.

The first of these we might call the *truthmaker* question. That is, what relation must hold between the speaker and the group whose actions constituted X happening for _We did X_ to be true? (Or, if you don’t think these utterances are generally true, for it to be appropriate.)

The second of these we might call the *semantic* question. Say that we settle the truthmaker question by saying that the speaker S has to stand in some distinctive relation R to the group G that did X for _We did X_ to be true. There remains a question about how _We did X_ comes to have those truth conditions.

It could be that _we_ picks out the group G. That would be an odd way for _we_ to behave, since the speaker isn’t among the members of G. Call this result a kind of _deferred ostension_.

Or it could be that _did X_ picks out a property that can be applied to a larger group than those that directly did X. So even if 22 guys on a field in Geelong won the game, _won_ in _We won_ could pick out a property that’s instantiated by a larger group, perhaps the group of all Geelong’s supporters. Call this result a kind of _deferred predication_.

The semantic question then is whether examples like (1) and (4) involve deferred predication or deferred ostension.

The truthmaker and semantic questions are related, we think, and hopefully by the end of the week we’ll have answers to them.

Conditionalising on Rationality

Assume we have a radioactive particle with a half-life of 1. Then there is a countably additive probability function, whose domain includes all open intervals (x, y) and is closed under union and complementation, such that Pr(S) is the probability that the particle’s decay time is in S.
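
To make the setup concrete, here is one way such a function can be written down; it is just the standard exponential decay-time distribution determined by a half-life of 1 (the post doesn’t display a formula, so treat this as an illustration). For 0 ≤ x < y,

\[
\Pr\bigl((x, y)\bigr) \;=\; 2^{-x} - 2^{-y},
\]

and this extends, by the usual measure-theoretic machinery, to the rest of the domain.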

In cases where Pr(T) is non-zero, we can define Pr(S|T) in the usual way as Pr(S&T)/Pr(T). But even in cases where Pr(T) is zero, we might like Pr(S|T) to be defined.

Let T then be the set of rational numbers. (Note that if the domain of Pr is closed under countable union and complementation, then T will be in the domain.) Now we might wonder what Pr(·|T) looks like. That is, we might wonder what Pr looks like when we conditionalise on T.

I think, and if I’m wrong here I’d welcome having this pointed out, that these conditional probabilities are not defined. And not because Pr(T)=0. In lots of cases probability conditional on a zero-probability event can be sensibly defined. But in this case, if there were such a thing as Pr(·|T), then for any rational number _x_, Pr({x}|T) would be 0. And that would lead to a failure of countable additivity.
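
To spell the point out (this just makes the reasoning above explicit, on the assumption that each rational singleton gets conditional probability 0): since T is the countable union of its singletons, a countably additive Pr(·|T) would have to satisfy

\[
1 \;=\; \Pr(T \mid T) \;=\; \Pr\Bigl(\bigcup_{x \in T} \{x\} \;\Bigm|\; T\Bigr) \;=\; \sum_{x \in T} \Pr(\{x\} \mid T) \;=\; \sum_{x \in T} 0 \;=\; 0,
\]

which is impossible.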

I imagine all of this is well known, but I hadn’t realised the consequences of this. Let D be the smallest set of sets of positive reals that includes all open intervals (x, y) and is closed under countable union and complementation with respect to the reals. Then there is no _conditional_ probability function from D x D\{∅} to [0, 1] such that for any open interval (x, y), Pr((x, y)|R) is the chance that the particle will decay in (x, y). (By R here I mean the set of all reals.) If there is any function that has this last property, it must be defined over a narrower domain than D x D\{∅}.

Irrational Credences

An interesting technical question came up in my probability lectures at St Andrews the other day, and it took me until now to realise the correct answer to it.

The question was whether there’s any good reason to think that credences can be irrational numbers. Why, went the question, couldn’t we hold the structure of credences to have the topology of the rationals rather than the reals?

Now one possible answer is that we want to preserve the Principal Principle and since physical theory gives us irrational chances, we might allow irrational credences. But I think this puts the cart before the horse. If we didn’t think that credences and chances had the right kind of topology to support the Principal Principle, I don’t think the Principal Principle would look that plausible.

A better answer involves countable additivity. The rationals are closed under finite addition, multiplication and non-zero division. But they’re not closed under countable addition. (For example, think of the expansions of _e_ or _pi_ as infinite series of rationals.) Since, I hold, we should think countable additivity is a coherence constraint on credences, we should think that credences have a structure that is closed under countable addition. And that means they should be (or at least include) the reals, not that they should be confined to the rationals.
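
To display the parenthetical example: every partial sum below is rational, but the limit is not, which is the sense in which the rationals fail to be closed under countable addition.

\[
e \;=\; \sum_{n=0}^{\infty} \frac{1}{n!} \;=\; 1 + 1 + \tfrac{1}{2} + \tfrac{1}{6} + \tfrac{1}{24} + \cdots
\]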

Philosophy Bleg: One

For a long time I thought it was established that (given a standard axiomatisation of the probability calculus) countable additivity and countable conglomerability were equivalent. But I’ve lost confidence in my belief. So I’m wondering if anyone can tell me exactly what the answers are to a few questions below.

Just to make sure we’re clear, I’m taking countable additivity to be the principle that if the Ei in {E1, …, En, …} are pairwise disjoint, then Pr(E1 v … v En v …) = Pr(E1) + … + Pr(En) + ….

And I’m taking countable conglomerability to be the following. Again, if the Ei in {E1, …, En, …} are pairwise disjoint, then for any proposition E there is some Ei such that Pr(E | Ei) <= Pr(E).

_Question One_: Does a failure of countable additivity entail a failure of countable conglomerability?

I'm pretty sure that, as stated, the answer to that is *no*. Consider a standard finitely additive probability function. So there's some random variable X, and for all natural x, Pr(X=x)=0, while Pr(X is a natural number)=1. Now insist that Pr is only defined over propositions of the form _X is in S_, where S is a finite or cofinite set of natural numbers. (By a cofinite set, I mean a set whose complement, relative to the naturals, is finite.) I'm reasonably sure that there's no way to generate a failure of countable conglomerability.
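
To have the example fully on the table, here is the restricted probability function just described, written out explicitly:

\[
\Pr(X \in S) \;=\;
\begin{cases}
0 & \text{if } S \text{ is finite}, \\
1 & \text{if } S \text{ is cofinite},
\end{cases}
\]

which is finitely additive on this restricted domain, though of course not countably additive.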

_Question Two_: Assume there is a random variable X such that Pr(X is in S1 | X is in S2) is defined for every S1, S2 that are non-empty subsets of the naturals. And assume that whenever S2 is infinite, and the intersection of S1 with S2 is finite, then Pr(X is in S1 | X is in S2) is 0. (So Pr violates countable additivity.) Does Pr fail to respect countable conglomerability?

I'm even more confident that the answer to this is *yes*. Here's the proof. Any positive integer can be uniquely represented in the form 2^n(2m+1), with _n_ and _m_ non-negative integers. For short, let a statement of the form _n=x_ mean that when X is represented this way its _n_ value is _x_, and similarly for _m_. Then for any non-negative integer _x_, Pr(X is odd | _m=x_) = 0, since for any given _m_ there is one way to be odd, and infinitely many ways to be even. By conglomerability, that implies Pr(X is odd) = 0. But an exactly parallel argument can be used to argue that Pr(X+1 is odd) = 0. And this leads to a contradiction.
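
Making the final step explicit (using only finite additivity and the background assumption that Pr(X is a positive integer) = 1): the two conglomerability arguments give Pr(X is odd) = 0 and Pr(X is even) = Pr(X+1 is odd) = 0, so

\[
1 \;=\; \Pr(X \text{ is odd}) + \Pr(X \text{ is even}) \;=\; 0 + 0 \;=\; 0,
\]

which is the contradiction.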

_Question Three_: Assume there is a random variable X such that for any x, Pr(X=x)=0, while Pr(X is a natural number)=1, and that Pr(X is in S1 | X is in S2) is defined for every S1, S2 that are non-empty subsets of the naturals. Does Pr fail to respect countable conglomerability?

This is what I don’t know the answer to. I think the answer is *yes*, but I can’t see any obvious proof. Nor can I come up with a counterexample. Does anyone know (a) what the answer to this question is, and (b) where I might find a nice proof of the answer?

Much thanks in advance for helpful replies!

Evidence Neutrality as Regulative Ideal

There is one other argument that Williamson deploys against Evidence Neutrality: it is unattainable. EN requires that the community be able to decide what its evidence is. But an individual can’t, in all cases, even decide what her own evidence is. In hard cases, EN doesn’t just fail as a theory of group evidence, it fails as a theory of individual evidence.

This isn’t something special about evidence. Williamson thinks there is almost nothing such that we can, in all cases, tell whether it obtains. Evidence is undecidable because, he argues, practically everything is undecidable in hard cases. The latter conclusion has consequences for norms. If there are norms, then they can’t be things that we can always know to obtain. Williamson gives a nice example. When one is speaking to a group, the rule _Adjust the volume of your voice to the size of the room_ is a good rule, an ideal to aim for, even if we don’t know, and can’t in principle know, the exact size of the room. Such a norm is a regulative ideal; we aim for it, even if we can’t always tell how close we are to hitting it.

So there can be norms that we can’t always knowingly satisfy, or perhaps can at best satisfy by luck. EN might, for all Williamson has said, have such a status. We should use evidence that all the members of our community recognise as evidence. The benefits of such a rule can be seen by looking at the relative success, over the course of human history, of individual and group research projects. The great majority of our knowledge of the world is the outcome of research by large, and often widely dispersed, communities of researchers. Even in cases where a great individual advances knowledge, such as Darwin in his theorising about evolution, the individual’s work is typically improved by their holding themselves to EN as a norm. In Darwin’s case, the reason for this is relatively clear, and I think instructive. Darwin collected so much evidence over such a long period of time that the only way his younger self could convince his later self that it was all part of his evidence was by the same methods by which his younger self could convince the community of biologists that it was part of his evidence. It was holding to EN that allowed him to engage in a fruitful long-term research project.

In many ways, EN is quite a weak norm. In earlier posts I discussed what amount to two major exceptions to it. First, EN doesn’t require rule neutrality. So the maverick scientist can hold EN while coming to quite bizarre conclusions by adopting various odd rules. As we saw above, we can put some constraints on what makes a good rule, but those constraints won’t individuate the good rules. Second, EN, as I’m interpreting it, allows one to choose one’s own community. One of the ways we uphold EN in science is by excluding from the community those who doubt the relevant evidence collecting methods. That means we exclude the odd crank and sceptic, but it also means we exclude, from this particular community for the time being, those scientists who carefully study the evidence collection methods that we use. In the latter case at least, there is a very real risk that our community’s work will be wasted because we are using bad methods. But the alternative, waiting until there is a rigorous defence of a method before we start using it, threatens a collapse into Cartesian scepticism.

Even if EN is a norm of evidence, a regulative ideal, rather than a constitutive principle of evidence, we might still be pushed hard towards taking intuitions to be evidence. Or at least we might be so pushed some of the time. It doesn’t violate EN to take what nutritionists tell us about a healthy diet at face value; the reports of nutrition science are common ground among the community of ethicists. But we can hardly take facts about disputed examples, for instance, as given, even if they are quite intuitive to some of us. And even if, as it turns out, we know the answer. If there are people who are, by any decent standard, part of our community of philosophers, who disagree about the cases, we should be able to give our grounds for disagreement. Not because this is necessary for knowledge, but because the policy of subjecting our evidence to the community’s judgment is a better policy than any known alternative.

To be sure, some work needs to be done to show that taking intuitions as basic does conform to this ideal. As Williamson notes, one thing that might (even in somewhat realistic cases) be in dispute is the strength of an intuition. So taking EN as normative might require some modification to intuition-driven philosophical practice. But I don’t think it will require as big a diversion as Williamson’s preferred anti-psychologistic approach.