It turns out one of the Williamson papers I linked to the other day contains an argument that I had been (for somewhat independent reasons) running in Tamar’s seminar on Tuesday night. Here’s the position Williamson is arguing against, and that I was also opposing last Tuesday.
bq. “Philosophical ‘Intuitions’ and Scepticism about Judgement”:http://users.ox.ac.uk/~sfop0009/files/intuit3.pdf
bq. The result is the uneasy conception which many contemporary analytic philosophers have of their own methodology. They think that, in philosophy, our ultimate evidence consists only of intuitions. Under pressure, they take that not to mean that our ultimate evidence consists of the mainly non-psychological putative truths that are the contents of those intuitions. Rather, they take it to mean that our ultimate evidence consists of the psychological truths that we have intuitions with those contents, whether true or false. That is, our ultimate evidence in philosophy amounts only to psychological facts about ourselves.
Williamson goes on to run through some of the reasons this line is wrong, and some responses to defences of it. Again, much the same thing happened in the seminar, with me somewhat inexpertly playing the Williamson role, perhaps without the required conviction to be fully convincing. So I was a little surprised, though I shouldn’t have been, to find that the person cited as being most guilty of this kind of approach was _me_, particularly me qua author of “this paper”:http://brian.weatherson.org/counterexamples.pdf. It’s not an unfair reading of the paper on Tim’s part, quite the opposite, so this isn’t a complaint about Tim’s citation. In fact, being unfavourably cited by the great and the good beats being ignored any day, so I’m not complaining a bit. But it seemed like an apt opportunity to explore the issue a little.
I’m not sure what I think about ‘ultimate evidence’ in these cases, so I want to work around a related question. How best should we understand the argument being presented in passages like the following?
bq. According to consequentialism, it is always best to do that which has the best consequences. In a case where it is possible to kill someone and use their organs to save five people who will otherwise die, what produces the best consequences is killing the person and harvesting their organs. But when we reflect on the case, we see quite clearly that this is not the right thing to do. So consequentialism is not correct.
I’ll make one simplifying terminological move, and one simplifying assumption about the case. First, I’ll use the following terminology.
bq. p = Consequentialism is true.
q = The best thing to do in the circumstance described is to kill the person and harvest their organs.
I’ll assume for simplicity that we’re working with a version of consequentialism on which p -> q is clearly true. Now how should we formalise the argument against consequentialism that is being put forward? I think there are two importantly different options.
bq. *Demonstrative*
p -> q
~q
Therefore, ~p
bq. *Non-Demonstrative*
p -> q
Intuitively, ~q
Therefore, ~p
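Just to make the contrast explicit, here is a minimal way of recording it in standard notation. This is my gloss, not anything in the quoted passage, and the I(~q) notation (read: “it is intuitive that ~q”) is mine. The point is only that *Demonstrative* is ordinary modus tollens, and so deductively valid, while *Non-Demonstrative* is not deductively valid as it stands; whatever force it has comes from the (defeasible) reliability of intuition, not from logic.

bc. \documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% Demonstrative: modus tollens, deductively valid.
\[ p \to q,\ \neg q \ \vDash\ \neg p \]
% Non-Demonstrative: writing I(\neg q) for "it is intuitive that not-q",
% the premises do not by themselves entail the conclusion.
\[ p \to q,\ I(\neg q) \ \nvDash\ \neg p \]
\end{document}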
I’m inclined to think the difference between these is very important, but that’s for another post. For now I just want to note some difficulties in identifying either form as *the* form of the argument being presented. (I don’t really have a firm conclusion here, but I think there’s something to be said for the idea that _both_ arguments are being presented. How this relates to Williamson’s claims about ultimate evidence, I don’t know.)
Consider the following argument.
bq. Bayesian decision theory says that you should never violate Independence. But in the “Allais paradox”:http://mathworld.wolfram.com/AllaisParadox.html and the “Ellsberg paradox”:http://thefilter.blogs.com/spring_anthology/2004/04/ellsberg_parado.html many people do violate Independence, and they are, it seems, perfectly rational in doing so. So Bayesian decision theory is wrong.
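Since the next few paragraphs turn on how good this argument is, it may help to have the standard Allais set-up on the table. What follows is just the textbook presentation with the conventional payoffs (none of these numbers come from the passage above): the point is that the common pattern of choices cannot be represented as maximising expected utility, which is one way of putting the Independence violation.

bc. \documentclass{article}
\usepackage{amsmath}
\begin{document}
% Standard Allais choices (conventional textbook payoffs).
% Choice 1: A = $1M for sure;           B = $1M (0.89), $5M (0.10), $0 (0.01)
% Choice 2: C = $1M (0.11), $0 (0.89);  D = $5M (0.10), $0 (0.90)
% The common pattern is A over B and D over C. For any utility function u:
\begin{align*}
A \succ B &\iff u(1M) > 0.89\,u(1M) + 0.10\,u(5M) + 0.01\,u(0)\\
          &\iff 0.11\,u(1M) > 0.10\,u(5M) + 0.01\,u(0),\\
D \succ C &\iff 0.10\,u(5M) + 0.90\,u(0) > 0.11\,u(1M) + 0.89\,u(0)\\
          &\iff 0.10\,u(5M) + 0.01\,u(0) > 0.11\,u(1M).
\end{align*}
% No u satisfies both final inequalities, so the common A/D pattern violates
% expected utility maximisation, and with it Independence, whatever u is.
\end{document}

The Ellsberg case makes a structurally similar point using ambiguity rather than known probabilities, but the details won’t matter for what follows.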
It’s most instructive to consider this argument from the perspective of someone who finds the Allais and Ellsberg cases quite powerful, but ultimately unsuccessful, arguments against Bayesianism. (E.g. me.) If we take the tacit argument to be like *Demonstrative*, then what is wrong with the argument is that it simply has a false premise. Now for some arguments that _is_ exactly what is wrong with them, so this isn’t immediately a problem, but in this case it seems to miss an important fact. Compare that argument to the following.
bq. Bayesian decision theory says that you should never violate Independence. But when playing roulette, it seems frequently to be rational to violate Independence. So Bayesian decision theory is wrong.
If we also formulate this argument like *Demonstrative*, then it too has a false premise. Indeed, our evaluation of this argument is just like our evaluation of the previous argument. Each of them looks like this:
bq. p -> q
~q
Therefore, ~p
In each case, p is Bayesian decision theory, while in the first instance q is that it is irrational to violate Independence in the Allais or Ellsberg cases, and in the second that it is irrational to violate Independence when playing roulette. Either way, we have a valid argument with a true premise and a false premise. But that seems to understate the importance, and quality, of the first argument. There is _some_ crucial disanalogy here that is being missed. If we formulate both arguments as being like *Non-Demonstrative*, the analogy goes away. For now the first argument has two true premises, and a reliable (though on this occasion faulty) form. But the second argument has a true premise and a false premise.
Two responses to that spring to mind. First, there might be other ways in which the arguments differ, even though they have a common form and a common error. Maybe we can know the second premise in the second argument to be false, whereas we can’t know the second premise in the first argument to be false. Or maybe we can know it to be false by inference, but not by direct judgment. It’s a little hard to spell out what this would amount to without making psychological states important to philosophical argument in a way that Williamson doesn’t want (with reason).
So second, we might reply by stressing the disanalogies between the Allais-Ellsberg argument and arguments from possible cases that actually work. Consider, for instance, the following.
bq. If sensitivity is required for knowledge, then in circumstances C (which I won’t spell out here, but I hope you’re familiar with) I’m in a position to know there’s a red barn on the hill but not in a position to know there’s a barn on the hill. But that’s absurd. In those circumstances I am in a position to know there’s a barn on the hill by deductive inference. So sensitivity is not required for knowledge.
It’s not too hard to feel there are important disanalogies between this argument and the Allais-Ellsberg argument, not least that this argument is _sound_ and the Allais-Ellsberg argument is not. But if all the arguments are presented using *Non-Demonstrative*, those disanalogies are minimised.
So I don’t really have a conclusion here. There are two more posts (at least) to be written on this: a mildly critical post on Williamson’s arguments against taking intuitions to be the primitive source of evidence here, and a post on why the distinction between argument forms I’ve been chattering about here matters.