Ethics and Neurology

In the long Philosophical Perspectives thread there was very little discussion of the actual papers in the volume, so I thought it might be time for an ethics post around here to move the discussions back to philosophy. In particular, I wanted to note one possible complication arising out of the paper Andy and I contributed.

A lot of people think that the way to do ethical epistemology is to systematise intuitions about a range of cases. One of the points Andy and I were making was that if you’re playing this game, it really might matter just which cases you focus on. Focus on life-and-death cases and you might get a different theory than if you focus on cases of everyday morality. This probably isn’t too surprising – I imagine a lot of people think consequentialism is at least extensionally correct for everyday matters, while some kind of deontological theory is needed for life-and-death cases. That is, I imagine there are lots of people who are happy with consequentialism plus rights as trumps, and the rights in question are only in danger of being violated in life-and-death cases. This is hardly a majority view, but it’s not a surprising view. What was odd about our position was that we went the other way, arguing that a form of consequentialism (and maybe even of Consequentialism) was extensionally adequate in life-and-death cases, but failed to give the right answers when thinking about some everyday pranksters. (Actually we were neutral on whether this consequentialist theory was extensionally adequate, since the counterexamples we had in mind might not be actual. But it failed to be extensionally adequate in a nearby world.)

Bracketing the details of the cases for now, it’s worthwhile to step back and reflect on what this should tell us about methodology. In particular, I want to think about what would happen if we found out the following things were true.

  • Systematising intuitions about life-and-death cases supported moral theory X
  • Systematising intuitions about everyday cases supported moral theory Y, which is inconsistent with X
  • The reason for the divergence is that different parts of the brain are involved with forming moral intuitions about everyday cases as compared to life-and-death cases; everyday cases are handled by a part of the brain generally associated with cognition, life-and-death cases by a part of the brain generally associated with emotional response

The third point is an enormous oversimplification of the neurology – it’s not really true there’s a part of the brain for cognition I guess, and the divide between emotionally loaded cases and non-loaded cases doesn’t exactly track the everyday/life-and-death distinction – but from what I’m told it’s not entirely off base. There are different parts of the brain that are at work in different moral cases. (Thanks to Tamar for pointing me to the studies showing this.) And the different parts are differentially correlated with emotional response. So figuring out what to do in such a case might be of some practical import.

As I see it there are four possible responses.

First, we might take this kind of result to be evidence that we were wrong all along in thinking moral epistemology should be based around intuitions in this way. There’s something to be said for that view, though I won’t have anything useful to say about it here.

Second, we could adopt a relatively weak form of particularism, one which said not that there are necessarily no general moral principles, but that there are no general principles that you can support from one kind of case that have application in a different kind of case. The idea would be that whatever we learn about life-and-death cases tells us about life-and-death cases and nothing more, so the possibility that theories X and Y above could be genuinely inconsistent vanishes. I think this is a reasonable view to take, I guess.

Third and fourth, we could come up with arguments for one or other of X and Y being more firmly supported by the intuitions. Which way we go here will depend, I think, on how great a role we think emotional response should play in moral epistemology. On the one hand, it is odd to think our sober reasoned judgments could have to be corrected by the judgments we make under emotional duress. (And I take it part of the point of the neurological studies is that even considering some of the cases ethicists work with does constitute at least a mild form of emotional duress.) On the other hand, it seems a moral theory coldly detached from our emotional bond with the world is somehow deficient, that moral judgments at some level are meant to carry emotional commitment with them.

I don’t have any ideas for how we should proceed at this point; I think it is just a hard question. But if the neurological data suggests that moral intuitions are radically diverse in their origins, it is a question that we intuition-synthesisers will have to address sooner rather than later.

8 Replies to “Ethics and Neurology”

  1. Pedantry: Neurology is a branch of medicine. The general term for the science of the brain is just ‘neuroscience’. ‘Neurobiology’ and ‘neurophysiology’ are usually acceptable in these contexts, too.

  2. This “two parts of the brain” idea is similar to the explanations of the way the brain processes linguistic irregularities differently from linguistic regularities (as discussed, for example, in Steven Pinker’s The Language Instinct and Words and Rules), the distinction being that irregularities are processed by look-up tables while regularities are processed by algorithms. I’m suggesting that the difference between life-and-death moral judgments and every-day moral judgments also has this look-up table vs. algorithm distinction.

    It might be that in either the linguistic case or the moral-judgment case the look-up table system is simpler (neurologically) and evolved first, but since the capacity of a look-up table is clearly limited, there was an advantage to eventually overlay that system with a more complex system of unlimited capacity, namely algorithmic computations. In both the linguistic and moral-judgment cases, though, the old system remained, either because it still has some advantage (such as speed of response), or simply because it just hasn’t withered away yet.

    If all this is true, it’s easy to see how a really bad problem can occur when my moral look-up table has been programmed (in childhood or genetically) differently than yours. What seems to me beneficial (or even necessary), to you can be “just plain wrong,” and no amount of reasoning (i.e., algorithmic neural computation) can resolve the dispute.

  3. If you’re going to argue that ethicists need to take empirical data seriously, it’s kinda odd to present possible worlds cases (suppose that there is a world in which…) – though Stich has arguments against the reliability of epistemic intuitions along these lines. In any case, it seems to me more interesting to deal with what the neuroscience is actually telling us, rather than what it might have. And the studies do not show that life-and-death cases activate regions of the brain associated with emotion. Instead, the claim is that whether such regions are activated does not depend upon whether life is at stake, but on the kinds of means used to bring about a death. In forthcoming work, Greene writes that if the means could be grasped by our common ancestor with chimps, we get the emotional response (and deontological intuitions); if not, not. This, it seems to me, is an easier bullet to bite than the case you sketch for someone who wants to build morality upon intuitions.

    In any case, we oughtn’t to take fMRI results to tell us anything more than how the brains of the individuals tested actually respond. Hypothesis: the American undergrads tested are confused about morality. Certainly, we shouldn’t think that their intuitions are in reflective equilibrium or anything approaching it. Consider an analogy. Michael Persinger claims that stimulating the temporal lobe (with high powered magnets) gives rise to experiences variously described as spiritual or religious. When he tried this on Richard Dawkins, he got no such response. What should we conclude? One possibility is that RD is deficient in his TLs; hence his atheism. Another is that most individuals who believe in God (or perhaps are from believing cultures and therefore were exposed to god beliefs at developmentally significant times) are disposed to have these experiences under TMS. Even if this disposition is innate, in some sense, this would not show that a culture in which no one had it was not accessible (perhaps if everyone had a Dawkins-style upbringing, no one would have it). Given that these are live possibilities, we oughtn’t to think, without actually testing, that Jack Smart, or Christine Korsgaard, has the same patterns of brain activation as American undergrads.

  4. One observation on the paper. J.S. Mill’s On Liberty, chap. II (“Of the Liberty of Thought and Discussion”), covers (really, in spades) the points made in pp. 4-10 (roughly, by my reckoning, 40%) of the paper (i.e. Egan/Weatherson). We find in II inter alia good utilitarian reasons (i.e., value production) for permitting people to hold, defend, and express offensive beliefs, unconventional beliefs (for instance, immoral beliefs by received standards), as well as well-established falsehoods. Why no footnote to Mill? That’s very surprising. Maybe I missed it, but I didn’t spot a footnote until p. 5 or so.

  5. Neil, you’re right that there needs to be much more data in – not just tests on undergraduates – before we form any empirical judgments. And you’re right that I’m playing fast and loose with the science here. But I’m not sure what the conclusion you’re trying to draw is – that it’s OK to be a deontologist about chimp-activities and a consequentialist about other-activities?

    Mike, we didn’t talk about Mill because we’re not talking about the same thing he’s talking about. He’s talking about what should and shouldn’t be regulated; we’re talking about what should and shouldn’t be. I take it as more or less common ground that we shouldn’t regulate/legislate against bad character. From that nothing follows about whether or not the world is better (ceteris paribus) if more people have good character or fewer people have bad character; the questions are more or less orthogonal. But maybe I’m interpreting Mill differently than you here.

  6. In response to your question – yes. I suggested that we have no reason, as yet, to think that the patterns of brain activation so far revealed tell us anything about how moral dilemmas would get processed with our intuitions in reflective equilibrium. But it might turn out that Greene is right, and that if the means is personal, deontology is triggered (perhaps in all practicably accessible environments this will turn out to be the case for most well-functioning people). Why shouldn’t we simply incorporate this into our morality? If we’re not Platonists, that is, if we don’t take our moral intuitions to tell us something about a non-natural moral reality independent of our moral responses, then why not just think that the contours of our morality follow the contours of well-functioning individuals with their intuitions in reflective equilibrium? This is what Goldman calls a mentalist view about intuitions; it seems pretty plausible to me here. We need to distinguish here between the use of intuitions in, say, metaphysics, and in morality. We want our metaphysics to tell us about the structure of mind-independent reality, and it may be that our intuitions are not good guides to this reality. If we base metaphysics on intuitions, we may end up studying the wrong thing; perhaps a branch of psychology rather than reality. But in my view folk-morality (suitably constrained) just is morality; we can’t drive a wedge between them.

  7. I take it as more or less common ground that we shouldn’t regulate/legislate against bad character. From that nothing follows about whether or not the world is better (ceteris paribus) if more people have good character or fewer people have bad character; the questions are more or less orthogonal.

    That’s interesting. But there is nothing in Mill to suggest that it is the cost of regulating that makes permissible the expression of falsehoods, immoral conduct, etc. Mill rather refers (as you do) to the benefits forthcoming from permitting such expressions. Indeed the benefits of allowing people to express and defend what most perceive as falsehoods include things like preventing complacency about the received truth. This is all very close to the sort of argument from benefits you offer.

  8. I see Neil’s position now, and I think it’s fairly interesting. The classification scheme I was adopting (though not carefully expressing) made it a response of the second kind. If I’ve (this time around) properly understood Neil, bringing intuitions into reflective equilibrium is compatible with having what look like different rules for chimp-cases and non-chimp-cases. We shouldn’t expect that what we think about pushing people off bridges will change which intuitions we’ll have in equilibrium about cases that don’t involve things chimps could do. That could well be true, and I think Neil’s right that it’s more likely to be true if we don’t take ourselves to be chasing the Platonic form of the good, which we often assume to be not so disjunctive. (I’m of course in constant pursuit of the Platonic form of the good, but that’s possibly a story for a different post.)

    And thanks to Mike for that point. I see I’d misread/misremembered Mill on these points, and we probably should have had a link. Our views on these points are directly contrary to Mill’s. We want to show that bad acts, not just apparently suboptimal thoughts or words, can have good consequences through (among other things) their educative effects. (This was never meant to be the primary good consequence; well-aimed pies are a good thing because they are funny, not because they are educational, though education is a bonus.) But this is bad news for the consequentialist, because it shows there’s a gap between what’s good and bad and what has better and worse consequences. Though now (or at least when I get back to work) I’ll have to go and see what Mill says about that.
