Edinburgh

Among the many ways in which the world
could change for the better, the following two seem sort of salient to this
site. First, it could be easier to link to papers available in online journals.
I suppose if we all collectively decided to send papers only to freely
available journals like the Philosophers’ Imprint,
things might be a little easier in this respect.
Anyway, without this I can only tell you that among the many recent papers I’d
recommend are Hartry Field’s “Saving the Truth Schema From Paradox” in the
February Journal of Philosophical Logic,
and Graham Priest’s critical notice of the entire Lewis oeuvre (you think I’m
kidding, don’t you) in the June Noûs.

Secondly,
it would be nice if I were competent in putting up links. I was meant to deliver
this talk in Edinburgh last week. I managed to not pack a
copy of the talk, which is a rather impressive feat of poor packing I must say.
Anyway, I wasn’t particularly worried about this, because I thought I had
linked to it in the previous post. Sad to say, the link was broken, though I do
hope it’s now fixed. I did get a copy of the paper back from a different source
about 2 hours before I was meant to give it, but by that stage I’d quickly
rewritten the whole thing (sometimes it pays to be a quantity-based rather than
quality-based philosopher) and gave the rewritten version instead. I don’t
think it was a complete disaster, but it would have been nice to have been a
bit more competent with postings and avoided the whole thing. I did discover
that it’s more fun writing in Scottish bars than in American bars, but I think
I could have done with merely having a
priori
knowledge of that particular proposition.

Justification and Innateness

It’s been a long time between posts here, which
is not good. I just did a paper at the Central APA, a copy of which is here. And I just sent the following abstract to the 2002
AAP. It’s common practice to send papers that are not yet written to the AAP,
which makes the conference a little more cutting edge, and the outcomes a
little more variable.


Justification and Innateness

Our concept of epistemic justification is a
somewhat awkward amalgam of two related concepts: a reliabilist concept that is
appropriate for evaluating believers without the capacity for critical
reflection, and a coherentist concept that is appropriate for evaluating those
with this capacity. The application of this concept gets complicated when
dealing with believers who have this capacity at some stages of their
existence, and lack it at other crucial times. To take one interesting example,
we don’t acquire the capacity for critical reflection until well after we start
acquiring beliefs, so these difficulties matter to us. I propose that the
reliabilist concept is suitable for evaluating beliefs acquired before the
onset of critical reflection, and the coherentist concept is suitable for
evaluating beliefs acquired after this time. This proposal deals with some cases,
largely inspired by BonJour’s clairvoyant, that defeat simpler versions of
reliabilism, while retaining a sizeable role for accuracy in our theory of
justification.


If you want a copy of the paper when it’s
done, let me know and I’ll email you a copy. Of course, you could probably
figure out what I’m going to write from the posts below, but that would spoil the
fun of having a good paper.

By
the way, Neil McKinnon (another great Monash product) has a number of really
interesting papers up on his website. If
you’re interested in issues about time, persistence and vagueness (and really,
who isn’t?), you should look at it.

Reliabilism

Time for some random thoughts on
epistemology. I have been playing around with a two-tiered theory of
justification over recent months, which recognises a concept of ‘machine
justification’ which is more or less reliabilist, and a concept of ‘agent
justification’ which is more or less coherentist. Roughly, X is justified in
believing p iff X is an agent and X is agent-justified in believing p,
or X is not an agent and X is machine-justified in believing p.
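
Putting that schematically (the predicate labels here are just shorthand I’m using for this post, not anything official):

Justified(X, p) ↔ (Agent(X) ∧ AgentJustified(X, p)) ∨ (¬Agent(X) ∧ MachineJustified(X, p))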

This
puts a lot of stress on the concept of agency, and I don’t have a lot to say
about this, but roughly the idea is that X is an agent iff X has the capacity
for both inductive reasoning and critical reflection on her own beliefs. So we
humans become agents sometime after infancy, but presumably not too long into
childhood. Induction is important here because, as Fodor shows in his recent book,
it isn’t a modular process. And non-modularity is important because, well, this
is going to sound like a cheat, but it is impossible to solve the
generality problem for non-modular processes, so a reliabilist concept like
machine justification can’t apply to them.

Anyway,
there are two important caveats to the above analysis of justification, both to
do with entities that start off as machines, but become agents. First, if X
acquires a justified belief in p while still a machine, she is still
justified in believing p once she becomes an agent, even if she wouldn’t
be agent-justified in believing p on the basis of the evidence she now
has. Secondly, and this is the crucial one I think, if X acquires (or, more
likely, activates) a reliable modular belief-forming mechanism while still a
machine, beliefs acquired through that mechanism are justified even after X
becomes an agent. So assuming that we are not being massively deceived and our
faculties are more or less reliable, our perceptual beliefs are justified. But
this turns crucially on the fact that we became perceivers before we became
agents. If we acquired a perceptual faculty late in life (i.e. after becoming
agents, and hence after we were capable of reflecting on the reliability of this
faculty), beliefs acquired through it would not be justified until we had a reason
for thinking the faculty was reliable. This, I take it, is the lesson of
BonJour’s Clairvoyant Claire example, and my Blind
Belinda
example. Further, if we acquired all of our perceptual
faculties late in life, we wouldn’t have any justified perceptual
beliefs. This captures what is right, I think, about Cartesian scepticism about
justification. If we were born agents, we wouldn’t be justified in believing
anything. (So my theory is just false if the ‘theory theory’ is true – that’s a
risk I’m willing to take!) There are a few further wrinkles in the theory about
how the concept of coherence works (basically it’s still a little externalist
for agents who used to be machines), and a few things to say about why this is
a much better theory than various internalist and externalist theories
of justification, and a little more useful than Ernie
Sosa’s
distinction between animal knowledge and human knowledge. But for
now I want to spend a little time on the sceptical conclusion I just stated.

Let’s
pretend that it’s possible for an entity without perceptual faculties to have
beliefs, at least about mathematics. I think this is probably possible, but if
you don’t, please just pretend. Imagine that such a thing acquires doxastic
agency in the sense described above. It believes, on inductive grounds, that
all even numbers are the sum of two primes, and on reflection it realises that
this belief is less secure than its belief that 3+3=6. It then acquires a
single perceptual faculty, say sight. I think it would have no reason
whatsoever to trust any of these inputs. It’s a little hard to imagine the
case, but if the thing didn’t even have a kinaesthetic sense, I think it would
be very hard for it to know just what sense to make of these visual images
flooding in. So far, at least, I think my sceptical conclusion is right: even
if the thing’s visual beliefs are forced and reliable, they aren’t
justified. (Remember I don’t apply these sceptical conclusions to us – we
acquired our justification for perceptual beliefs while still machines.)

Anyway,
that’s not the problem I want to raise. Imagine such a thing gets a whole host
of new, and clearly distinct, kinds of perceptual input. Just to make things
concrete, imagine that all of a sudden it has visual, auditory, tactile and
kinaesthetic senses. And it notices, very quickly, that the inputs it gets from
these senses all cohere very nicely. Would it then be justified in
believing all of the inputs? This is a bootstrapping problem, but it isn’t an
‘easy knowledge’ problem, as Stewart Cohen puts it. Each of the faculties is tested against the
others, and it could in principle fail this test. Does this mean that they
start delivering justified beliefs? I’m still inclined to think not, but maybe
I’m wrong. Any thoughts?

Vagueness Test Again

It is no longer true that everyone who has
taken the vagueness test has got
the results Kamp and Raffman predict! Does one counterexample refute the
theory, even if it’s in an uncontrolled experiment? I doubt it, but it’s not
great news for the theory.

There
hasn’t been much updating recently because of either extreme busyness in my
life or extreme laziness in my work habits. I’ll leave it to you to decide
which.

I’m
currently rewriting the pragmatics of vagueness paper to make it about the
Sorites. This doesn’t change the underlying thesis that much, but hopefully it
will be a good marketing angle. If anyone reads this, I’d be interested in
hearing if you’ve ever seen a Sorites argument of the following form:


A person with a billion dollars is rich.

For all n, either a person with n
dollars is not rich or a person with n-1 dollars is rich.

Therefore, a person with 2 dollars is rich.


This is clearly valid (at least outside
Australia), and in theory its premises seem at least as plausible as the
premises in a normal Sorites argument. By that I mean that, in theory, it seems
that if ‘If A then B’ is true then ‘Not A or B’ should be true, so
the second premise here should, in theory, be entailed by the premise in a
normal Sorites. But (a) I’ve never seen an argument of this form in the
literature and (b) it seems rather painless in this case to simply deny the
second premise. One of the aims of the paper, as currently constituted, is to
explain why this argument does not seem sound, and hence cannot be the
basis of any paradox, so I do hope it doesn’t seem sound.
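
Just to make the entailment claim explicit, writing Rich(n) as shorthand for ‘a person with n dollars is rich’ (the notation is mine, purely for this post):

Normal Sorites premise: ∀n (Rich(n) → Rich(n-1))
Classically, (A → B) entails (¬A ∨ B)
So: ∀n (¬Rich(n) ∨ Rich(n-1)), which is just the second premise above.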

Vagueness and Voluntarism

Everyone who has taken the vagueness test so far has got the
results Kamp and Raffman predict. I would be very pleased to hear
counterevidence, but I doubt there’s going to be much of that. It would be nice
to have a test of this that didn’t involve phenomenal properties, but I can’t
see how to do it in this framework. Even if I could come up with a Sorites
series that went from ‘Cars are vehicles’ to ‘Skateboards are vehicles’ to
‘Sheep are vehicles’ to ‘Chairs are vehicles’, we couldn’t run this test,
because the subjects would remember the cases as they were going back down the
scale. Not that inevitable experimental design flaws have stopped me before!

Nick
Zangwill suggested a nice variation on the vague picture case below. Instead of having a malicious vandal
change the picture, as I was suggesting doing, just imagine a normal painting
that fades. This will eventually not represent anything at all, but it does not
seem there is a first time when it stops being representational. And this in
turn does not seem to be because of vagueness in the word ‘representational’,
though I admit I don’t have much of an argument for that last claim, and indeed
am prepared to believe that it is due to such vagueness if I’m left with no other choice.

Some
would bridle at this talk of being prepared to believe things. It sounds like I
can just choose what I believe. Well, contrary to what you might have heard, it
is possible to choose what you believe at least some of the time. The other
day, for instance, I decided to believe that voluntarism about belief is true.
I was worried that this was irrational, but it can hardly be irrational to have
self-verifying beliefs.

There
is a more serious argument for this kind of voluntarism. Sometimes I slip into
believing that p on the basis of manifestly insufficient evidence. For
example, I was tricked into believing that the departing Clintonistas really
did steal all the W keys off White House keyboards. (I actually thought this
was mildly amusing in the circumstances.) As we all know, this didn’t happen,
and I would have been better served to have not believed it. More often, when I
hear stories like this about the greatest president since Truman, I am tempted
to believe them, especially if they are in the New York Times, but I have a
technique for guarding against such belief. I decide to believe that I don’t
have sufficient evidence to believe the anti-Clinton story. It really isn’t too
hard to make such decisions; the practice of becoming a sceptic, in the good
sense of that term, is mostly a matter of remembering to make this decision,
since implementing it is really very easy.

Vagueness Test

I was trying desperately to write something
about the Kamp/Raffman/Soames/Graff theory of vagueness, and I noticed that
both Kamp and Raffman note that their theory makes an empirical claim, one that
they are apparently sure is true, but which they have never tested. Well, I
haven’t tested it either, because testing theories costs real money, and my
research fund is lucky to run to a couple of conferences and a few books a
year, let alone real experiments. But I did come up with a way to do an
uncontrolled experiment on the hypothesis in question.

If
you want to take the test yourself, open this file and unzip it. Then
open the Word document in it and answer the onscreen questions until you get a
summary sheet of the results. Take note of the two numbers you are given; they
will be important. I won’t yet tell you what Kamp and Raffman’s prediction is
concerning those two numbers; it might be better (well, less appallingly awful
from the point of view of experimental design) if you don’t know that yet. Note that the
test works best if you have your computer set to run in True Colour, and
probably doesn’t work at all if you aren’t running Word 98 or later. (If I
could only learn to program, the latter two problems could be fixed – my skill
set is still, sadly, set-sized.)


Vagueness Test

If you have advanced virus protection you
may have to be quite insistent with your computer or it won’t let you run the
attached macros. Trust me, you won’t catch a virus this way. (Not that I
guarantee anything in case you do :))


Have you taken the test yet? Good, keep
reading. If not, go back and take it, you slacker!


The result Kamp and Raffman want is that
the first number is higher than the second number. Essentially, they claim that
among the many technical flaws in our perceptual system is a hysteresis in our
colour perception. If you slowly change a colour from red to purple (they both
use orange, but I find the experiments are easier with purple), then the change
in apparent colour will lag the change in actual colour. So the effect will be
that we judge some colours as red if we have previously been looking at reds,
but we will judge the very same colours as non-reds if we have previously been
looking at non-reds. If this is true, then when you run the experiment, the
second of the two numbers you get at the end should be lower.
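
Just to make the prediction concrete, here is a very rough terminal-script version of the same idea. This is not the actual test: the colour series, step count, prompts, and the way the two crossover numbers are computed are all just my guesses, and it assumes a terminal that understands 24-bit ANSI colour codes. With hysteresis, the ascending crossover should land at a higher index than the descending one.

import sys

def show_patch(r, g, b):
    # Draw a coloured block using a 24-bit ANSI escape sequence.
    sys.stdout.write(f"\x1b[48;2;{r};{g};{b}m          \x1b[0m ")
    sys.stdout.flush()

def colour_at(i, steps):
    # Interpolate from pure red (i = 0) towards purple (i = steps - 1).
    t = i / (steps - 1)
    return 255, 0, int(255 * t)

def first_non_red_going_up(steps):
    # Ascending series: index of the first patch the subject refuses to call red.
    for i in range(steps):
        show_patch(*colour_at(i, steps))
        if input("Red? (y/n) ").strip().lower() == "n":
            return i
    return steps

def first_red_going_down(steps):
    # Descending series: index (on the same 0..steps-1 scale) of the first
    # patch the subject is willing to call red.
    for i in range(steps - 1, -1, -1):
        show_patch(*colour_at(i, steps))
        if input("Red? (y/n) ").strip().lower() == "y":
            return i
    return -1

steps = 20
up = first_non_red_going_up(steps)
down = first_red_going_down(steps)
print(f"Crossover going up: {up}; crossover coming down: {down}")
print("Hysteresis in the predicted direction" if up > down else "No hysteresis here")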

For
what it’s worth, I do get this result when I run the test. I was rather hoping
I would not, so I could quickly refute this theory and go back to working out
the true ‘truer’ theory. If you run the test, let me know the results, and I’ll
keep a very unscientific running tally of the totals. If the test doesn’t work
also let me know. If running the test causes grave computer malfunctions, call
an expert; I’m going to be of no help whatsoever.

Counterexamples

My counterexamples
paper was just conditionally accepted at Philosophical Studies. Woo hoo!
The bad news is that the condition is that some fairly extensive changes are
made. The good news is that the suggested changes will make it a much
better paper. Right now parts of it read as being slightly less formal than
this weblog. That’s probably a bad thing. It’s also a sign that my writing was
too chatty even before I started reading internet sites where everyone writes
that way.

All Vagueness is Linguistic?

So I was trying to write something on metaphysical
vagueness, when I came across the following little puzzle. The aim was to turn
the few comments on Trenton
Merricks’s
PPR paper in section 8 of my problem of the many paper into a full-fledged discussion note. So I
started off by noting that the issue isn’t really whether all vagueness is
linguistic, because any representation, including pictures and (as Merricks
notes) thoughts, can be vague. I then went on to say that this doesn’t matter, and
that Merricks was right to focus on the linguistic case, when I suddenly had a
rather large fear that it does matter. Here’s why. Merricks spends a lot of
time fretting about the fact that (1) is indeterminate when Harry is a
borderline case of baldness.


(1) ‘Bald’
describes Harry.


Merricks claims that this is an instance of
metaphysical vagueness, because it is indeterminate whether a particular
object, the word ‘bald’, has a particular property, describing Harry. Set aside
concerns about whether describing Harry is a real property. There is a
huge issue remaining about just which object indeterminately has this
‘property’. It can’t be the word itself. It is not words themselves, but words
in languages, that describe (or don’t describe) people. So (1) should be
“‘Bald’ in X describes Harry”. But it is rather plausible that for every legitimate
substitution instance of X, we get a
sentence that is either determinately true or determinately false. There’s more
of a story to tell about how this avoids sliding into epistemicism, which is
Merricks’s response to a similar move he considers in the paper, but that story
can wait until the paper gets written.

The
real issue is that we can’t make the same move with pictures, because pictures
don’t represent with respect to a language. So imagine we start with a picture
of George Washington. Let’s start with this one:

[Image: a picture of George Washington]

This picture represents George Washington.
I could change it into a picture that didn’t represent Washington. The most
dramatic way to do this would be to replace every non-black pixel with a black
one. Let’s assume I did this slowly. (If I get some time this weekend I might
do just this, just to see the results in practice.) So we’d end up with
pictures that had their causal origin in Washington, but whether they really were
pictures of Washington, well that would be hard to say. Indeed, whether they
were pictures of anything would be hard to say. Let a be the name for
one of these pictures. My claim is that it might be indeterminate whether ∃x(Represents(a, x)) is
true. I would have hoped that this wasn’t because of vagueness in ‘Represents’,
but I don’t really see any way out other than that. Any suggestions?
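
In case anyone wants to try the blackening experiment before I get around to it, here is roughly what I have in mind. It is only a sketch: the filename, the number of steps, and the policy of blackening a random two per cent of the remaining non-black pixels at each step are my own stipulations, and it assumes the Pillow library is installed.

import random
from PIL import Image

def blacken_gradually(path="washington.png", steps=50, fraction=0.02, seed=0):
    # Slowly replace non-black pixels with black ones, saving each stage.
    random.seed(seed)
    img = Image.open(path).convert("RGB")
    pixels = img.load()
    width, height = img.size
    # Every pixel that is not already pure black is a candidate for blackening.
    candidates = [(x, y) for x in range(width) for y in range(height)
                  if pixels[x, y] != (0, 0, 0)]
    random.shuffle(candidates)
    per_step = max(1, int(len(candidates) * fraction))
    for step in range(steps):
        batch, candidates = candidates[:per_step], candidates[per_step:]
        for x, y in batch:
            pixels[x, y] = (0, 0, 0)
        img.save(f"washington_step_{step:02d}.png")
        if not candidates:
            break

blacken_gradually()

Somewhere in the resulting sequence of files, presumably, the pictures stop being pictures of Washington, and somewhat later they stop being pictures of anything, without there being any obvious first such file.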

Vagueness Paper

I posted a new
vagueness paper to the vagueness page, and to the new papers page. It is a
short note showing that John Burgess’s recent AJP article arguing against epistemicism has a small bug in it, but
that the bug can be fixed without too much damage to the structure of the
argument.

I haven’t been adding much
philosophical content to this site for a bit, but hopefully that will change
soon. I should have some things on disjunctive theories of perception up soon,
and maybe something on voluntarism about belief. In the meantime, I’ve been
thinking of trying to Google Bomb
my own pages up the rankings, but I doubt this will work. (For more info on what a Google Bomb
is, see the link. The best such bomb is the one attacking the “Church” of Scientology.) Right now my vagueness page is #12 on a Google search for
‘vagueness’ – I’m sure it can go higher than that!

Foreign Aid

This isn’t particularly
philosophical, but I thought it was fairly interesting. This is from a story in
today’s New
York Times
about globalization. “Mr. Chirac and Lionel Jospin, the prime minister,
who is his rival in forthcoming presidential elections, have detailed competing
proposals to tax the profits of globalization to provide aid funds.” So if I
read that right, both major
candidates in France are proposing tax increases
whose point will be to spend more on foreign aid. The contrast with some other countries (e.g. any
English-speaking country not called ‘New Zealand’) could hardly be starker. I
read somewhere recently a claim that Chirac was to the left of Bill Clinton. At
the time I thought it was ridiculously simplistic, but now I’m not so sure.