One of the striking things about “Peacocke’s book”:http://www.amazon.com/exec/obidos/ASIN/0199270724/caoineorg-20?creative=125581&camp=2321&link_code=as1 is how attached he is to factorisation theories. Empirical knowledge can be factored into its empirical foundations and the rationally justified transitions from those foundations. Any necessary a posteriori truth can be factored into a necessary a priori bit and a contingent a posteriori bit. (Exactly as Sidelle says, though in this book at least Sidelle gets no citation. Perhaps there’s more on this in _Being Known_.) And moral knowledge can be factored into an a priori knowable moral bit and an a posteriori knowable non-moral bit. At this rate of factoring Peacocke could almost be Australian.
When I first read the section on moral knowledge I thought he was simply repeating a theory that had been clearly refuted, and that he just wasn’t keeping up with the literature. I was at best half-right. What I remembered as the refutation of this view wasn’t a paper I read, but a paper I heard “Sarah McGrath”:http://www.holycross.edu/departments/philosophy/website/mcgrath.html deliver. So Peacocke can’t be blamed for not having heard of this paper when he wrote his book. Sarah isn’t responding to Peacocke but to an earlier statement of a similar view by Judith Jarvis Thomson in “Moral Knowledge and Moral Objectivity”:http://www.amazon.com/exec/obidos/ASIN/0631192115/ref=nosim/caoineorg-20, and it is arguably poor form that Peacocke doesn’t cite Thomson here. (When you try to write a 270-page book on _absolutely everything_, some things are bound to get left out, I guess.)
As I mentioned, I thought Sarah’s response to Thomson was pretty persuasive. I’ll first set out the kind of “principle-based” approach Peacocke wants to defend, then explain why Sarah’s criticisms of it look persuasive, and finally investigate some moves Peacocke could make in response.
bq. (If you want the take-home version, it’s that Peacocke’s theory as it stands can’t really deal with the fact that moral claims stated in non-moral language tend to have exceptions. Maybe there’s an analogy with Gödel’s incompleteness results that helps him out here, though it’s not one he actually develops. The analogy is well explored in Richard Holton’s “Principles and Particularisms”:http://homepages.ed.ac.uk/rholton/princpass.pdf, which I highly recommend.)
So let’s use a famous example of Gilbert Harman’s. Britney Spears walks around a corner and sees Michael Moore setting fire to a cat. Britney comes to believe that what Michael is doing is wrong, as indeed it is, and it is plausible that this belief of Britney’s is knowledge. When she thinks about what makes it knowledge, she, having read Peacocke closely, reasons as follows.
bq. Well, I know a priori that “prima facie the infliction of avoidable pain is wrong”, as Peacocke says on page 214 and as reflection on what it takes to possess the concepts involved verifies. And I know by regular means the non-moral proposition that Michael is inflicting avoidable pain on the cat. So I can conclude that what Michael is doing is wrong.
At least that’s what she first thinks, but Britney is smarter than we give her credit for, and she is worried twice over about her Peacockean reasoning.
bq. Oops! It’s not a non-moral proposition that Michael is inflicting _avoidable_ pain on the cat. Whether a pain infliction is avoidable is itself a _moral_ judgment. For setting fire to the cat would be _unavoidable_ in the salient sense if it was the only way to stop the cat killing a person, but _avoidable_ if it was the only way to stop the cat killing a mouse. And that’s just because of the moral differences between persons and mice. So we haven’t got the factorisation into moral and non-moral components down.
bq. And even if we bracket that concern (which Uncle Jerry tells me just means pretend a real problem isn’t a problem) I was using invalid arguments again! All I’d be entitled to conclude is that prima facie what Michael is doing is wrong. But I know something stronger, that what he’s doing is wrong _simpliciter_. What could ground that?
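Just to make the second worry vivid, here is the intended inference laid out schematically. (The regimentation is mine, not Peacocke’s: read A(x) as ‘x is an infliction of avoidable pain’, W(x) as ‘x is wrong’, PF as the prima facie operator, and m as Michael’s act.)

bc. \begin{align*}
&(\mathrm{P1})\quad \forall x\,\big(A(x) \to \mathit{PF}\,W(x)\big) && \text{a priori principle (p. 214)}\\
&(\mathrm{P2})\quad A(m) && \text{supposedly non-moral, known a posteriori}\\
&(\mathrm{C1})\quad \mathit{PF}\,W(m) && \text{all that validly follows}\\
&(\mathrm{C2})\quad W(m) && \text{what Britney actually knows}
\end{align*}

The gap between (C1) and (C2) is the second worry; the first worry is that (P2) is not really a non-moral premise at all.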
Peacocke has half a response to the second concern. He notes on page 223 that on this view some actions will be both prima facie right and prima facie wrong, but not right or wrong simpliciter. This seems right, but I think he underestimates the problem. If _every_ moral principle is qualified by a prima facie operator, then Britney’s second concern won’t go away. We will only _ever_ know that actions are prima facie right or wrong, which is false to the datum that we know some things are right and others wrong.
Sarah’s main argument against Thomson, which I think is also persuasive against Peacocke, is that Britney’s first worry here generalises. Peacocke doesn’t, I think, really address this point. The objection is that there’s no way to lay out a priori moral principles (which for Peacocke at least means principles true in every world considered as actual) without using moral vocabulary to specify the situations in which they apply. Every moral generalisation over a descriptively specified domain has exceptions, which is to say it isn’t necessary, which is to say (since moral terms are not Twin-Earthable) that it isn’t a priori.
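The shape of the argument, as I understand it, is this (again the regimentation is mine, with P any moral principle whose conditions of application are specified in purely descriptive vocabulary):

bc. \begin{align*}
&(1)\quad \exists w:\ P \text{ is false at } w && \text{descriptively specified principles have exceptions}\\
&(2)\quad \neg\Box P && \text{from (1)}\\
&(3)\quad \text{a priori } P \leftrightarrow \Box P && \text{moral terms are not Twin-Earthable}\\
&(4)\quad \neg\,\text{a priori } P && \text{from (2) and (3)}
\end{align*}

Step (3) is where the non-Twin-Earthability does its work: for these terms, truth in every world considered as actual and truth in every world considered as counterfactual stand or fall together.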
Maybe, just maybe, we can avoid this by leaning heavily on the ‘prima facie’ operator. That is, maybe we can get universal necessary/a priori moral truths that are prefixed by ‘prima facie’. But then Britney’s other worry will just become more pressing. So there’s no way out of Sarah’s criticism here.
Perhaps there is a way out of this if we let the domain specification be infinitely long. Arguably the moral supervenes on the descriptive, and it’s a priori what moral facts hold given the complete descriptive facts. But does Britney really know these infinitely long quantified claims? (Maybe she does, as I’ll get to below.)
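To give the suggestion a shape (the notation is mine, and it’s only a sketch): let D be a descriptive condition rich enough to settle everything on which wrongness supervenes. The candidate principle would then be something like this:

bc. \forall x\,\big(D(x) \to W(x)\big), \qquad \text{where } D \text{ is an infinitely long descriptive specification.}

The antecedent has to be infinitary, because by the argument above any finite descriptive condition would admit exceptions.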
Sarah’s position, I take it, is that we can’t factor out the moral component from the non-moral component of perceptual/empirical knowledge. (Compare, e.g. McDowell on perception, Williamson on knowledge, Yablo on modality, etc., etc.) And considering how hard it is to find _true_ a priori moral principles that really support our actual moral knowledge, such a position looks plausible. I want to close with a consideration that points the other way, but that’s really just to explore the terrain, not because I’m persuaded that a factorisation story will work.
Peacocke’s best example in this area, I think, is his discussion of our capacity to tell standard from non-standard models for arithmetic. (This area? Isn’t that mathematics, not morality? Well, if Peacocke is on the right track, then moral epistemology should resemble mathematical epistemology. Peacocke actually stresses the connections with modal epistemology, but here the mathematical case is more interesting. The connection between mathematics and morality that I’ll be looking at was, to my knowledge, first suggested in print by “Richard Holton”:http://homepages.ed.ac.uk/rholton/princpass.pdf, in a very good paper on moral particularism.)
Here’s the thing about standard and non-standard models. When we acquire the number concepts, and enough concepts to do with infinity to understand the question, we acquire the capacity to tell standard from non-standard models. The terminology ‘standard’ suggests something normatively loaded, and it is. What we acquire is the capacity to tell that _those ones_ are the models we intended to be using when we started talking about the numbers, and not _those ones_.
Two things to note about this ability. First, it is pretty clearly a priori. Wherever else we can’t factor out the a priori bits, we can do so here. Second, it _arguably_ can’t be reduced to knowledge of principles about numbers. At least, it can’t be reduced to knowledge of _finitely_ many principles stated in the language of first-order logic with identity.
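For what it’s worth, the reason behind the second point is textbook model theory rather than anything I’m taking from Peacocke. Take any set T of first-order sentences true in the standard model (an infinite set will do), add a fresh constant c, and add, for each numeral, an axiom saying c is distinct from it:

bc. T' \;=\; T \cup \{\, c \neq \underline{n} \;:\; n \in \mathbb{N} \,\}

Every finite subset of the expanded theory is true in the standard model (just read c as a large enough number), so by compactness the whole expanded theory has a model, and that model contains an element not named by any numeral. So no first-order theory true of the numbers, finite or infinite, pins down the standard model; the restriction to _finitely_ many principles isn’t even where the real limit lies.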
Setting aside this linguistic restriction (and I don’t know why we should be allowed to do this), we might now have a response to Sarah’s argument. Even though Britney couldn’t state the infinitely long moral principle that is relevant to the question of whether what Michael is doing is wrong, she might know it, and know it a priori. For the arithmetic case shows that she can know, and know a priori, what makes for a standard model even though she could not finitely state what it is.
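And here, for comparison, is what dropping the restriction buys you in the mathematical case (again, this is standard material, not something I’m taking from Peacocke): with second-order quantification allowed, a single finitely stated induction axiom, together with the usual successor axioms, does pin down the standard model up to isomorphism. That’s Dedekind’s old categoricity result.

bc. \forall X\,\Big( \big(X(0) \land \forall n\,(X(n) \to X(n{+}1))\big) \to \forall n\, X(n) \Big)

So once the restriction is dropped, the arithmetical knowledge arguably can be captured by a finitely stated principle after all.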
As I said, this is all meant to be fairly exploratory. I’m not sure that the moral/mathematical analogy is precise here, especially because of worries about what happens to the mathematical case when we drop the restriction to sentences in first-order logic. (And I worry that I’m revealing some deep technical incompetence here; I’m at the limits of my technical abilities, to say the least.) But maybe Peacocke can rescue his principle-based moral theory by appeal to a mathematical case he already discusses at some length.