I had to write up a short summary of my current research and plans for future research, so I thought I’d share a draft of it with you. It’s reasonably long (and, as these things usually are, somewhat narcissistic) so it’s all in the extended entry.
Vagueness
I’m interested in defending theories of vagueness that allow for the existence of some semantic vagueness, but which are as conservative as possible with respect to logic. To that end I’ve developed three ideas.
The first is that we can adopt the core idea behind the many-valued logic approach to vagueness (that there are some sentences whose truth value is strictly between perfect truth and perfect falsity) without abandoning classical logic. As long as we say these extra truth values are not linearly ordered, but are arranged in a Boolean lattice, there is a very natural sense in which all and only classically acceptable inference rules will be acceptable in our theory. We can do even better by dropping the truth values themselves, and just keeping the ordering relation between them. So instead of saying that sentences S1 and S2 have truth values 0.6 and 0.4, we just say that S1 is truer than S2. I argue that all the philosophical work we want the extra truth values to do can be done by the ‘truer than’ relation. And the values themselves are pernicious, because they lead to problems of false precision. I develop all this in my paper "True, Truer, Truest", forthcoming in Philosophical Studies.
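To see in miniature how classical laws can coexist with intermediate truth values, here is a small sketch. The four-element lattice, and the encoding of values as pairs of classical bits, are my expository inventions for this post, not the machinery of the paper:

```python
# Truth values as points in the four-element Boolean lattice: bottom is
# (False, False), top is (True, True), and there are two INCOMPARABLE
# intermediate values, (True, False) and (False, True).

from itertools import product

def neg(v):       # Boolean complement, componentwise
    return tuple(not x for x in v)

def conj(v, w):   # lattice meet
    return tuple(x and y for x, y in zip(v, w))

def disj(v, w):   # lattice join
    return tuple(x or y for x, y in zip(v, w))

def truer(v, w):  # the partial order: v is at least as true as w
    return all(x >= y for x, y in zip(v, w))

TOP = (True, True)
values = list(product([False, True], repeat=2))

# Classical laws hold at every value, including the intermediate ones:
assert all(disj(v, neg(v)) == TOP for v in values)       # excluded middle
assert all(conj(v, neg(v)) == neg(TOP) for v in values)  # non-contradiction

# And unlike fuzzy degrees, the intermediate values are incomparable:
a, b = (True, False), (False, True)
assert not truer(a, b) and not truer(b, a)
```

Contrast a linearly ordered degree like 0.5, where max(0.5, 1 − 0.5) = 0.5 and excluded middle fails; it is the Boolean arrangement, not the number of values, that preserves the classical inference rules. And once the values are traded for the ordering, all the sketch really needs is the `truer` relation.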
The second is a pragmatic solution to the Sorites paradox. Existing semantic theories of vagueness do very little, I think, to explain why the premises in a Sorites argument should look compelling. The best idea is one Kit Fine mentions, almost in passing, in "Vagueness, Truth and Logic". The hypothesis that speakers systematically confuse ‘a is F’ with ‘a is determinately F’, both when ‘a is F’ is a complete sentence and when it is a constituent of a longer sentence, makes a number of true predictions about which compound sentences involving vague terms people will accept. Given some plausible background assumptions, it also predicts people will accept the premises in a Sorites argument. The challenge then is to explain why people make this confusion. I want to argue that it is just another instance where people confuse speaker meaning for sentence meaning. If this is right, it might support the arguments of linguists such as Gennaro Chierchia and Stephen Levinson who argue that implicatures are computed locally rather than globally, and in future work I hope to investigate this connection further.
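A toy model brings out why the confusion hypothesis makes this prediction. The numbers below are invented purely for illustration; nothing turns on them:

```python
# Toy Sorites series for "tall". Stipulate (for illustration only) that
# "tall" is determinately true at heights >= 190cm, determinately false
# at heights <= 170cm, and indeterminate in between.

heights = [200 - n for n in range(51)]   # 200cm down to 150cm in 1cm steps

def det_tall(h):
    return h >= 190

def det_not_tall(h):
    return h <= 170

# Read the negated Sorites premise "a_n is tall and a_(n+1) is not tall"
# the confused way: "a_n is DETERMINATELY tall and a_(n+1) is
# DETERMINATELY not tall". No step in the series satisfies it:
counterexamples = [n for n in range(len(heights) - 1)
                   if det_tall(heights[n]) and det_not_tall(heights[n + 1])]
assert counterexamples == []   # so every conditional premise looks acceptable

# Yet the endpoints differ determinately, which is what generates the paradox:
assert det_tall(heights[0]) and det_not_tall(heights[-1])
```

The gap between the determinate cases does the work: no single one-centimetre step crosses it, even though the series as a whole does, so a speaker making the confusion will accept every premise while rejecting the conclusion.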
The third is a rather neat solution to a puzzle for the supervaluationist theory of vague names that Stephen Schiffer raised. Assume ‘Barry’ is a name with many precisifications. Then ‘Barry lives in London’ will have many precisifications. So if ‘Sally believes that Barry lives in London’ is to be super-true, it would seem that Sally would have to believe each and every one of these precisifications. But this sentence can, intuitively, be perfectly true without Sally even thinking about these precise versions of Barry. The solution is to note that when Sally has a thought about Barry, this thought too will be vague, and will have a number of precisifications. Provided there is a penumbral connection between Sally’s thoughts about Barry and the name ‘Barry’, the one vague thought of Sally’s can be the truthmaker for ‘Sally believes that Barry lives in London’ on every precisification. In "Many Many Problems" (Philosophical Quarterly, October 2003) I argue that it is plausible such a penumbral connection should exist, and then use this idea to solve a few apparently unrelated puzzles surrounding the problem of the many. The reason for this work is not that I’m a supervaluationist, but rather that I think something like Schiffer’s puzzle can be raised for any semantic theory of vagueness, including my own, and I believe something like the solution I offer can work for all such puzzles.
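The shape of the solution shows up even in a crude toy model. The two candidate Barrys and the pairing of sharpenings below are invented for this post, and are far simpler than any real supervaluationist semantics:

```python
# Toy model of the Schiffer puzzle. All names and details are invented.

candidates = ["barry-candidate-1", "barry-candidate-2"]  # precisifications of "Barry"
lives_in_london = {c: True for c in candidates}

# Sally has ONE vague thought. Model its precisifications as a map from each
# precisification of the name to the candidate her sharpened thought is about.

# Without a penumbral connection the sharpenings float free of the name:
unlinked = {"barry-candidate-1": "barry-candidate-2",
            "barry-candidate-2": "barry-candidate-1"}

# With the penumbral connection, name and thought are sharpened together:
linked = {c: c for c in candidates}

def report_true_at(sharpening, p):
    # "Sally believes that Barry lives in London" is true at precisification p
    # iff her thought, sharpened at p, is about the very candidate the name
    # picks out at p, and that candidate lives in London.
    return sharpening[p] == p and lives_in_london[p]

# Super-truth = truth on every precisification. One vague thought suffices,
# provided the sharpenings are penumbrally linked:
assert all(report_true_at(linked, p) for p in candidates)
assert not all(report_true_at(unlinked, p) for p in candidates)
```

The point of the model is just that super-truth never requires Sally to have a separate belief about each candidate; one vague thought, sharpened in step with the name, is truthmaker enough on every precisification.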
My future work will involve figuring out how to tie these three ideas together. While I think all three ideas are basically correct, and important components of the correct theory of vagueness, I’m not entirely sure how to articulate a single theory that encompasses them all. The problem that’s driving my current work is that it’s quite important to the first idea that precisifications are not a primitive concept in the theory of vagueness – this does some work in showing how the theory can generalise to cover higher-order vagueness – but it’s quite important to the third idea that precisifications are primitive. My goal for future research is to show how to solve puzzles like Schiffer’s without relying on the concept of acceptable precisifications. When I do that I think I’ll have a rather attractive theory of vagueness, and I hope to write a short book outlining its virtues.
Epistemology
My work in epistemology is based around
three interests: responses to scepticism, the nature of evidence, and the use
of probabilistic reasoning in epistemology.
I’ve never been particularly impressed by sceptical arguments that make a direct appeal to intuitions of the form ‘You don’t know that p’, where p is some anti-sceptical principle such as ‘The external world exists’ or ‘The future will resemble the past’. I simply don’t have such intuitions, so I don’t feel particularly moved by these arguments. What I am impressed by are sceptical arguments that push us to reveal how we know these principles. Following some ideas of James Pryor, I take the best sceptical arguments to consist of arguments that we can’t know these principles a posteriori, plus arguments that we can’t know them a priori. Unlike the arguments based on direct appeal to intuition, I think we need to make some substantive commitments to block these arguments, at least when they are developed carefully. To block the argument that they cannot be known a posteriori, I think we need to accept a kind of externalism about justification. To block the argument that they cannot be known a priori, we need to accept a very strong rationalist claim – that there are deeply contingent facts we can know a priori. So reflection on scepticism leads to a trilemma: accept scepticism, accept externalism, or accept rationalism. I don’t think this is an unbearable trilemma; indeed, I’m happy ultimately to accept both the externalism and the rationalism. But I think it is one that not all epistemologists have faced up to, because some accept none of these three options. I develop all this in "Scepticism, Rationalism and Externalism".
I’m impressed by Timothy Williamson’s arguments in Knowledge and its Limits against traditional conceptions of evidence, but less than convinced by his analysis of evidence as being just what we know. There are a few reasons for my wariness here, but the simplest and strongest is that I think we can know things about the future, but our evidence cannot include propositions about the future. But I do not have a rival analysis of evidence to offer. Indeed, I doubt any enlightening conceptual analysis is possible. I do think it is possible to develop a plausible a posteriori theory of evidence. Currently I am working on the idea that our evidence consists of all and only the outputs of our reliable perceptual modules, and I hope to develop this in future publications.
My earliest work in epistemology was on the philosophical applications of the old economists’ distinction between risk and uncertainty. One of Keynes’s initial motivations for accepting that distinction was the repeated failure of anyone to find a plausible principle of indifference, but the distinction can be independently motivated, as it was in Frank Knight’s work and in Keynes’s later work. Recently Nick Bostrom and Adam Elga have used new principles of indifference to argue for sceptical conclusions. In "Are You a Sim?" (Philosophical Quarterly, July 2003) I show that Bostrom’s principle, like most old principles of indifference, is inconsistent. In "Should We Respond to Evil with Indifference?" (Philosophy and Phenomenological Research, forthcoming) I show that Elga’s principle, while in a sense consistent, does lead to nasty paradoxes and hence should be avoided. I also show that the motivation for his principle relies on blurring the distinction between risk and uncertainty. If we keep the distinction in mind we can motivate, at best, a much weaker principle of indifference. What isn’t clear, and what I hope to return to in future work, is whether the weaker principle can still generate perniciously sceptical conclusions. I suspect it cannot, but I don’t think the arguments I have so far developed conclusively show that.
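The standard recipe for showing such principles inconsistent is reparametrisation, as in van Fraassen’s cube-factory case. The sketch below works through that textbook example; it is illustrative only, and is not the argument of either paper:

```python
# Van Fraassen's cube factory: cubes are produced with side length somewhere
# in (0, 1]. Indifference over side length and indifference over volume
# assign different probabilities to the SAME event.

import random

# Analytically: "side <= 1/2" is the very same event as "volume <= 1/8".
p_uniform_over_side = 1 / 2     # P(side <= 1/2) if side is uniform
p_uniform_over_volume = 1 / 8   # P(volume <= 1/8) if volume is uniform
assert p_uniform_over_side != p_uniform_over_volume

# Monte Carlo sanity check of the same point:
random.seed(0)
N = 100_000
sides = [random.random() for _ in range(N)]    # side sampled uniformly
volumes = [random.random() for _ in range(N)]  # volume sampled uniformly
print(sum(s <= 0.5 for s in sides) / N)               # ~ 0.500
print(sum(v ** (1 / 3) <= 0.5 for v in volumes) / N)  # ~ 0.125
```

Nothing in a bare principle of indifference privileges one parametrisation over the other, which is why different choices deliver inconsistent priors.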
Imagination
I’ve become interested in a puzzle of Hume’s which has recently been revived by Kendall Walton and Tamar Szabó
Gendler. We can imagine that the world is different in ever so many different
ways. We can imagine things that are metaphysically impossible, such as singing
snowmen. We can even imagine the logically impossible, such as a person both
dying and not dying in 1985. (We accept invitations to imagine such things all
the time in cheap time-travel stories.) But in certain respects we cannot
imagine the world is morally different. We cannot imagine that gratuitous acts
of torture are morally praiseworthy. And what goes for imagination goes for
truth in fiction. We cannot have a story in which it is true that gratuitous
acts of torture are morally praiseworthy, although we can of course have
stories in which such acts are widely praised. What accounts for the
distinction, and what does the distinction tell us about the nature of
imagination?
Gendler argued that the distinction is due to particular facts about the role of moral propositions in our cognitive lives. I disagree, because I think we can get similar barriers to imagination in stories where there are no moral facts in dispute. In "My Favourite Puzzle" I develop this at some length, using primarily examples involving furniture. What I think drives the puzzle is the way we imagine higher-order facts: facts that hold, when they do, in virtue of more primitive facts obtaining. If the economic facts, say, hold in virtue of the psychological facts, where this is not just a mere supervenience of the economic on the psychological but a much stronger relation of metaphysical dependence, then we cannot imagine the economic facts being different while the psychological facts remain the same.
While I disagree with Gendler’s diagnosis of the puzzle, I agree with one very important conclusion she draws from it. There is an important distinction between supposing and imagining. We can suppose, for the sake of argument, that ethical egoism is true, even if we cannot imagine it being true. In future work I will be investigating whether some arguments for dualism from the nature of imagination (e.g. Jackson’s black-and-white Mary argument, Chalmers’s zombie argument) rely on blurring the supposition/imagination distinction, and if so what forms of physicalism (if any) are shown by the theories of the previous paragraph to be invulnerable to those arguments. I’ve been invited to present papers on this work at the APA Pacific next March, and at the American Society for Aesthetics conference next October.
Other Work
With Andy Egan and John Hawthorne I’ve been working on a long paper on epistemic modals called "Epistemic Modals in Context". We tentatively argue for a quite radical position, one that takes its cue from John MacFarlane’s recent theory of future contingents. The truth of a sentence involving an epistemic modal may be sensitive not just to the context of utterance and the nature of the world, but also to the context of evaluation. It would be very surprising if something like this were true, but the theory does systematise quite a bit of otherwise awkward data about epistemic modals. The paper is forthcoming in a volume on contextualism from OUP.
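To give the flavour of evaluation-sensitivity in toy form (the contexts, worlds, and truth clause below are my own simplification, not the semantics of the paper):

```python
# Toy sketch of an evaluation-sensitive "might". A context is modelled as a
# set of epistemically possible worlds; a proposition as a set of worlds.

knowledge = {
    "speaker_monday": {"w1", "w2"},  # for all the speaker knows, w1 or w2
    "assessor_tuesday": {"w1"},      # the assessor has since ruled out w2
}

def might(proposition, evaluation_context):
    # "Might p" is true as evaluated from a context iff p holds at some world
    # left open by what is known in THAT context, not (only) in the context
    # of utterance.
    return bool(proposition & knowledge[evaluation_context])

barry_in_london = {"w2"}   # true only at w2

# One utterance, two verdicts, depending on the context of evaluation:
assert might(barry_in_london, "speaker_monday") is True
assert might(barry_in_london, "assessor_tuesday") is False
```

That a single utterance can be evaluated as true from one context and false from another is exactly the surprising feature, and also what lets the theory systematise the awkward data.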
I’ve co-drafted a couple of short papers on ethics. In "Prankster’s Ethics" Andy Egan and I argue that an otherwise quite attractive form of consequentialism cannot account for the immorality of certain amusing but not-quite-harmless pranks. In "Cloning and Harm" Sarah McGrath and I argue for a libertarian position on cloning. We argue (very briefly) that the only good ground for a universal ban on reproductive cloning would be that it impermissibly harmed the child created. We then argue (drawing heavily on Elizabeth Harman’s recent work on the non-identity puzzle) that while it is prima facie plausible that cloning does in fact harm the child created, this is not obviously impermissible if there is no other way for the parents to have a child, and that in these cases there are certainly no good grounds for the state to intervene.
In the longer term I hope to do some work on the arguments for and against the existence of modal parts. I’m interested in seeing how the arguments Ted Sider marshals for and against the existence of temporal parts in his book Four-Dimensionalism look when they are translated into arguments concerning modal parts. My initial impression is that the arguments still look fairly good, so we should, contrary to the impression of practically every metaphysician except David Lewis, accept modal parts. I suspect this will be the right conclusion even if, unlike Lewis, we reject modal realism.