Reliabilism

Time for some random thoughts on
epistemology. I have been playing around with a two-tiered theory of
justification over recent months, which recognises a concept of ‘machine
justification’ which is more or less reliabilist, and a concept of ‘agent
justification’ which is more or less coherentist. Roughly, X is justified in
believing p iff X is an agent and X is agent-justified in believing p,
or X is not an agent and X is machine-justified in believing p.
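Semi-formally, in my own shorthand (with $A(X)$ for ‘$X$ is an agent’, $J_a$ for agent-justification and $J_m$ for machine-justification):

$$J(X,p) \leftrightarrow \big((A(X) \wedge J_a(X,p)) \vee (\neg A(X) \wedge J_m(X,p))\big)$$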

This
puts a lot of stress on the concept of agency, and I don’t have a lot to say
about this, but roughly the idea is that X is an agent iff X has the capacity
for both inductive reasoning and critical reflection on her own beliefs. So we
humans become agents sometime after infancy, but presumably not too long into
childhood. Induction is important here because as Fodor shows in his recent book,
it isn’t a modular process, and non-modularity is important because (this is going to sound like a cheat) it is impossible to solve the generality problem for non-modular processes, so a reliabilist concept like machine justification can’t apply to them.

Anyway,
there are two important caveats to the above analysis of justification, both to
do with entities that start off as machines, but become agents. First, if X
acquires a justified belief in p while still a machine, she is still
justified in believing p once she becomes an agent, even if she wouldn’t
be agent-justified in believing p on the basis of the evidence she now
has. Secondly, and this is the crucial one I think, if X acquires (or, more
likely, activates) a reliable modular belief-forming mechanism while still a
machine, beliefs acquired through that mechanism are justified even after X
becomes an agent. So assuming that we are not being massively deceived and our
faculties are more or less reliable, our perceptual beliefs are justified. But
this turns crucially on the fact that we became perceivers before we became
agents. If we acquired a perceptual faculty late in life (i.e. after becoming
agents and hence after we are capable of reflecting on the reliability of this
faculty), beliefs acquired through it are not justified until we have a reason
for thinking the faculty is reliable. This, I take it, is the lesson of
BonJour’s Clairvoyant Claire example, and my Blind Belinda example. Further, if we acquired all of our perceptual
faculties late in life, we wouldn’t have any justified perceptual
beliefs. This captures what is right, I think, about Cartesian scepticism about
justification. If we were born agents, we wouldn’t be justified in believing
anything. (So my theory is just false if the ‘theory theory’ is true – that’s a
risk I’m willing to take!) There are a few further wrinkles in the theory about
how the concept of coherence works (basically it’s still a little externalist
for agents who used to be machines), and a few things to say about why this is
a much better theory than various internalist and externalist theories
of justification, and a little more useful than Ernie Sosa’s distinction between animal knowledge and human knowledge. But for
now I want to spend a little time on the sceptical conclusion I just stated.

Let’s
pretend that it’s possible for an entity without perceptual faculties to have
beliefs, at least about mathematics. I think this is probably possible, but if
you don’t, please just pretend. Imagine that such a thing acquires doxastic
agency in the sense described above. It believes, on inductive grounds, that
every even number greater than two is the sum of two primes (Goldbach’s conjecture), and on reflection it realises that
this belief is less secure than its belief that 3+3=6. It then acquires a
single perceptual faculty, say sight. I think it would have no reason
whatsoever to trust any of these inputs. It’s a little hard to imagine the
case, but if the thing didn’t even have a kinaesthetic sense, I think it would
be very hard for it to know just what sense to make of these visual images
flooding in. So far, at least, I think my sceptical conclusion is right: even if the visual beliefs of the thing are forced and reliable, they aren’t justified. (Remember I don’t apply these sceptical conclusions to us – we
acquired our justification for perceptual beliefs while still machines.)
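For what it’s worth, here is a toy illustration of the kind of inductive grounds I have in mind – checking case after case without anything like a proof. (The code is my own gloss, in Python, and the bound of 1000 is arbitrary.)

```python
# Toy illustration: 'inductive grounds' for Goldbach's conjecture,
# i.e. checking that each small even number is a sum of two primes.

def is_prime(n):
    """Trial-division primality test; fine for small n."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def goldbach_holds(n):
    """Return True if the even number n is the sum of two primes."""
    return any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))

# Every even number from 4 up to an arbitrary bound passes the check.
# This is evidence, not proof -- which is the point.
assert all(goldbach_holds(n) for n in range(4, 1001, 2))
```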

Anyway,
that’s not the problem I want to raise. Imagine such a thing gets a whole host
of new, and clearly distinct, kinds of perceptual input. Just to make things
concrete, imagine that all of a sudden it has visual, auditory, tactile and
kinaesthetic senses. And it notices, very quickly, that the inputs it gets from these senses all cohere very nicely. Would it then be justified in
believing all of the inputs? This is a bootstrapping problem, but it isn’t an
‘easy knowledge’ problem, as Stewart Cohen puts it. Each of the faculties is tested against the
others, and it could in principle fail this test. Does this mean that they
start delivering justified beliefs? I’m still inclined to think not, but maybe
I’m wrong. Any thoughts?

Vagueness Test Again

It is no longer true that everyone who has
taken the vagueness test has got
the results Kamp and Raffman predict! Does one counterexample refute the
theory, even if it’s in an uncontrolled experiment? I doubt it, but it’s not
great news for the theory.

There
hasn’t been much updating recently because of either extreme busyness in my
life or extreme laziness in my work habits. I’ll leave it to you to decide
which.

I’m
currently rewriting the pragmatics of vagueness paper to make it about the
Sorites. This doesn’t change the underlying thesis that much, but hopefully it
will be a good marketing angle. If anyone reads this, I’d be interested in
hearing if you’ve ever seen a Sorites argument of the following form:


A person with a billion dollars is rich.

For all n, either a person with n dollars is not rich or a person with n-1 dollars is rich.

Therefore, a person with 2 dollars is rich.
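In symbols, with $R(n)$ for ‘a person with $n$ dollars is rich’, the argument is:

$$R(10^9); \qquad \forall n\,(\neg R(n) \vee R(n-1)); \qquad \therefore\ R(2)$$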


This is clearly valid (at least outside
Australia), and in theory its premises seem at least as plausible as the
premises in a normal Sorites argument. By that I mean that in theory it seems that if ‘If A then B’ is true then ‘Not A or B’ should be true, so the second premise here should, in theory, be entailed by the premise in a normal Sorites. But (a) I’ve never seen an argument of this form in the
literature and (b) it seems rather painless in this case to simply deny the
second premise. One of the aims of the paper, as currently constituted, is to
explain why this argument does not seem sound, and hence cannot be the
basis of any paradox, so I do hope it doesn’t seem sound.

Vagueness and Voluntarism

Everyone who has taken the vagueness test so far has got the
results Kamp and Raffman predict. I would be very pleased to hear
counterevidence, but I doubt there’s going to be much of that. It would be nice
to have a test of this that didn’t involve phenomenal properties, but I can’t
see how to do it in this framework. Even if I could come up with a Sorites series that went from ‘Cars are vehicles’ to ‘Skateboards are vehicles’ to ‘Sheep are vehicles’ to ‘Chairs are vehicles’, we couldn’t run this test,
because the subjects would remember the cases as they were going back down the
scale. Not that inevitable experimental design flaws have stopped me before!

Nick
Zangwill suggested a nice variation on the vague picture case below. Instead of having a malicious vandal
change the picture, as I was suggesting doing, just imagine a normal painting
that fades. This will eventually not represent anything at all, but it does not
seem there is a first time when it stops being representational. And this in
turn does not seem to be because of vagueness in the word ‘representational’,
though I admit I don’t have much of an argument for that last claim, and indeed am prepared to believe that it is due to vagueness in the word if I don’t have any other choices.

Some
would bridle at this talk of being prepared to believe things. It sounds like I
can just choose what I believe. Well, contrary to what you might have heard, it
is possible to choose what you believe at least some of the time. The other
day, for instance, I decided to believe that voluntarism about belief is true.
I was worried that this was irrational, but it can hardly be irrational to have
self-verifying beliefs.

There
is a more serious argument for this kind of voluntarism. Sometimes I slip into
believing that p on the basis of manifestly insufficient evidence. For
example, I was tricked into believing that the departing Clintonistas really
did steal all the W keys off White House keyboards. (I actually thought this
was mildly amusing in the circumstances.) As we all know, this didn’t happen,
and I would have been better served to have not believed it. More often, when I
hear stories like this about the greatest president since Truman, I am tempted
to believe them, especially if they are in the New York Times, but I have a
technique for guarding against such belief. I decide to believe that I don’t
have sufficient evidence to believe the anti-Clinton story. It really isn’t too hard to make such decisions; the hard part of becoming a sceptic, in the good sense of that term, is remembering to make this decision, not implementing it, which is really very easy.

Vagueness Test

I was trying desperately to write something
about the Kamp/Raffman/Soames/Graff theory of vagueness, and I noticed that
both Kamp and Raffman note that their theory makes an empirical claim, one that
they are apparently sure is true, but which they have never tested. Well, I
haven’t tested it either, because testing theories costs real money, and my
research fund is lucky to run to a couple of conferences and a few books a
year, let alone real experiments. But I did come up with a way to do an
uncontrolled experiment on the hypothesis in question.

If
you want to take the test yourself, open this file and unzip it. Then
open the Word document in it and answer the onscreen questions until you get a
summary sheet of the results. Take note of the two numbers you are given; they will be important. I won’t yet tell you what Kamp and Raffman’s prediction is concerning those two numbers; it might be better (well, less appallingly awful from the pov of experimental design) if you don’t know that yet. Note that the
test works best if you have your computer set to run in True Colour, and
probably doesn’t work at all if you aren’t running Word 98 or later. (If I could only learn to program, the latter two problems could be fixed – my skill set is still, sadly, set-sized.)

Vagueness Test

If you have advanced virus protection you
may have to be quite insistent with your computer or it won’t let you run the
attached macros. Trust me, you won’t catch a virus this way. (Not that I
guarantee anything in case you do :))


Have you taken the test yet? Good, keep
reading. If not, go back and take it you slacker!


The result Kamp and Raffman want is that
the first number is higher than the second number. Essentially, they claim that
among the many technical flaws in our perceptual system is a hysteresis in our
colour perception. If you slowly change a colour from red to purple (they both use orange, but I find the experiments are easier with purple) then the change
in apparent colour will lag the change in actual colour. So the effect will be
that we judge some colours as red if we have previously been looking at reds,
but we will judge the very same colours as non-reds if we have previously been
looking at non-reds. If this is true, then when you run the experiment, the
second of the two numbers you get at the end should be lower.

For
what it’s worth, I do get this result when I run the test. I was rather hoping
I would not, so I could quickly refute this theory and go back to working out
the true ‘truer’ theory. If you run the test, let me know the results, and I’ll
keep a very unscientific running tally of the totals. If the test doesn’t work, also let me know. If running the test causes grave computer malfunctions, call an expert; I’m going to be of no help whatsoever.

Counterexamples

My counterexamples
paper was just conditionally accepted at Philosophical Studies. Woo hoo!
The bad news is that the condition is that some fairly extensive changes are
made. The good news is that the suggested changes will make it a much
better paper. Right now parts of it read as being slightly less formal than
this weblog. That’s probably a bad thing. It’s also a sign that my writing was
too chatty even before I started reading internet sites where everyone writes
that way.

All Vagueness is Linguistic?

So I was trying to write something on metaphysical
vagueness, when I came across the following little puzzle. The aim was to turn
the few comments on Trenton Merricks’s PPR paper in section 8 of my problem of the many paper into a full-fledged discussion note. So I
started off by noting that the issue isn’t really whether all vagueness is
linguistic, because any representation, including pictures and (as Merricks
notes) thoughts can be vague. I then went on to say that this doesn’t matter, and
that Merricks was right to focus on the linguistic case, when I suddenly had a
rather large fear that it does matter. Here’s why. Merricks spends a lot of time fretting about the fact that (1) is indeterminate when Harry is a borderline case of baldness.


(1) ‘Bald’
describes Harry.


Merricks claims that this is an instance of
metaphysical vagueness, because it is indeterminate whether a particular
object, the word ‘bald’, has a particular property, describing Harry. Set aside
concerns about whether describing Harry is a real property. There is a
huge issue remaining about just which object indeterminately has this
‘property’. It can’t be the word itself. It is not words themselves, but words
in languages, that describe (or don’t describe) people. So (1) should be ‘Bald’
in X describes Harry. But it is rather plausible that for every legitimate
substitution instance of X, we get a
sentence that is either determinately true or determinately false. There’s more
of a story to tell about how this avoids sliding into epistemicism, which is
Merricks’s response to a similar move he considers in the paper, but that story
can wait until the paper gets written.
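Semi-formally, again in my own shorthand (with $D(w, L, x)$ for ‘word $w$ in language $L$ describes $x$’ and $\mathrm{Det}$ for determinate truth): for every legitimate substitution instance $L$,

$$\mathrm{Det}\,D(\text{‘bald’}, L, \text{Harry}) \vee \mathrm{Det}\,\neg D(\text{‘bald’}, L, \text{Harry})$$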

The
real issue is that we can’t make the same move with pictures, because pictures
don’t represent with respect to a language. So imagine we start with a picture
of George Washington. Let’s start with this one:

[Image: a portrait of George Washington]

This picture represents George Washington.
I could change it into a picture that didn’t represent Washington. The most
dramatic way to do this would be to replace every non-black pixel with a black
one. Let’s assume I did this slowly. (If I get some time this weekend I might
do just this, just to see the results in practice.) So we’d end up with
pictures that had causal origin in Washington, but whether they really were
pictures of Washington, well that would be hard to say. Indeed, whether they
were pictures of anything would be hard to say. Let a be the name for one of these pictures. My claim is that it might be indeterminate whether ∃x Represents(a, x) is
true. I would have hoped that this wasn’t because of vagueness in ‘Represents’,
but I don’t really see any way out other than that. Any suggestions?
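In case I do find the weekend time, here is roughly what I have in mind, as a sketch only. It assumes Python with the Pillow imaging library, and the filename, step count and random seed are all placeholders.

```python
# Sketch: slowly replace every non-black pixel of a picture with a
# black one, saving the intermediate stages.
# Assumes Pillow (pip install Pillow) and a local file 'washington.jpg';
# both are placeholders.
import random
from PIL import Image

def blackening_series(path, steps=10, seed=0):
    """Yield copies of the image with progressively more pixels blacked out.

    At step k, a random k/steps fraction of the pixels has been replaced
    with black; step 0 is the original and the final step is all black.
    """
    rng = random.Random(seed)
    original = Image.open(path).convert("RGB")
    pixels = [(x, y) for x in range(original.width) for y in range(original.height)]
    rng.shuffle(pixels)  # fix a random order in which pixels go black
    for k in range(steps + 1):
        img = original.copy()
        cutoff = (k * len(pixels)) // steps
        for x, y in pixels[:cutoff]:
            img.putpixel((x, y), (0, 0, 0))
        yield img

if __name__ == "__main__":
    for k, img in enumerate(blackening_series("washington.jpg")):
        img.save(f"washington_{k:02d}.png")
```

Somewhere in the middle of that series, presumably, are the images for which it is hard to say whether they represent anything, which is exactly the puzzle.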

Vagueness Paper

I posted a new
vagueness paper to the vagueness page, and to the new papers page. It is a
short note showing that John Burgess’s recent AJP article arguing against epistemicism has a small bug in it, but
that the bug can be fixed without too much damage to the structure of the
argument.

I haven’t been adding much
philosophical content to this site for a bit, but hopefully that will change
soon. I should have some things on disjunctive theories of perception up soon,
and maybe something on voluntarism about belief. In the meantime, I’ve been
thinking of trying to Google Bomb
my own pages up, but I doubt this works. (For more info on what a Google Bomb
is, see the link. The best such bomb is the one attacking the “Church” of Scientology.) Right now my vagueness page is #12 on a Google search for
‘vagueness’ – I’m sure it can go higher than that!

Foreign Aid

This isn’t particularly
philosophical, but I thought it was fairly interesting. This is from a story in
today’s New York Times about globalization. “Mr. Chirac and Lionel Jospin, the prime minister,
who is his rival in forthcoming presidential elections, have detailed competing
proposals to tax the profits of globalization to provide aid funds.” So if I
read that right, both major candidates in France are proposing tax increases, the point of which will be to spend more on foreign aid. The contrast with some other countries (e.g. any
English-speaking country not called ‘New Zealand’) could hardly be starker. I
read somewhere recently a claim that Chirac was to the left of Bill Clinton. At
the time I thought it was ridiculously simplistic, but now I’m not so sure.

Backwards Causation

In the course of a
rather uncollegial attack on Paul Krugman, Ben Stein
makes the following rather striking claim: the Great Depression was caused by the New Deal. Now the Depression did not start with the stock market crash of 1929. But it was well underway by 1932. And the New Deal did not start getting implemented until 1933. Maybe the Depression was exacerbated by the New Deal, but by
most measures the worst of the Depression was before the New Deal came in. What
can we conclude from all this? That Ben Stein has less grasp of history than
we’d expect of the average high-school sophomore. Well, that wouldn’t be a
philosophically interesting conclusion now, would it? Better to conclude that
Ben Stein believes in backwards causation. And since Ben Stein is clearly one of the folk, this means that
the folk believe in backwards causation. (What, you think that a game show host
who dabbles in economics is not part of the folk? If he isn’t, who is?) This is
a philosophical bombshell!

The Ethics of Choosing a Team

In the latest issue of the Journal of Applied Philosophy, Nicholas Dixon discusses the ethics of
supporting various sporting teams. The main thesis was that it is (a) morally
acceptable to pick a team for arbitrary reasons and stick to them through at
least some turmoil (that is, to be a partisan
fan), and that (b) this is morally preferable to picking a property F of teams and supporting whatever team is F (that is, being a purist fan). The qualification to (a)
was that if the team starts to engage in indefensible practices, then you
should stop supporting the team. Well, I suppose this is right: you shouldn’t
defend the indefensible!

The
main argument for both (a) and (b) was an analogy between supporting a sporting
team and being in love. I guess there are some
analogies here; though as a Red Sox fan I’m not too sure I want to stress them.
The point is meant to be that (a) it is morally permissible to love a
particular person for somewhat arbitrary reasons and (b) this is preferable to
picking some property F and loving
whatever person you know best instantiates that property. So the analogy is
meant to ground both the permissibility of arbitrariness in team selection, and
the impermissibility of being principled in a certain way about changing teams.
It is also meant to ground the kinds of considerations that lead to justifiably
abandoning (dumping?) teams, though here things get a little murky.

The
analogy is a little strained in one pretty important dimension. If supporting a
team is like being in love, it is like being in love with someone who doesn’t
love you in return, and who indeed does not know of your existence, and whom
you know does not love you in return, or even know of your existence. The team
does know about, and even care about, a class of people to which you belong in
virtue of supporting the team, but if the analogy here is romantic love, then I
don’t think that’s much of a consolation. Given that important disanalogy, we
might wonder how much of the argument falls apart.

The
main argument that disappears is the argument for (b). It is not, I think,
impermissible to abandon a love that is so dramatically unrequited if the initial basis for the love disappears. Granted, anyone who organised their human relationships this way would lack a crucial virtue. But the important point
here is that there is no relationship with
the team, since the team does not even know of your existence, or care about
you de re. (Since most teams care about their fans de dicto, this qualification is needed.)
So why keep loving them once they cease to be lovable?

I
have a little interest in this because in some sports, particularly American
football, I am somewhat of a purist fan. I don’t really understand American
football, so when I watch it I really want to see two things: trick plays and
long passes. And I will quite happily support a team that promises lots of
those plays and then cease supporting them when they cease providing. I’m
probably missing out on something here, the special kind of qualia one gets
from genuinely being committed to a team, but I get quite enough of that as a
Red Sox fan to go on with. So I think purism is morally defensible, and feel
perfectly happy being a purist about the strange game they call ‘football’ in
this country.

At
another stage, Dixon compares supporting a team to supporting certain artistic
performers. But here any kind of partisan behaviour is absurd. I mean, I liked
Kevin Spacey’s mid-90s work about as much as I liked any artistic work in that
time period. But that doesn’t mean I’m going to even pretend to like his more
recent work. The purist can jump to the next good actor to come along, the
partisan is stuck watching K-PAX all the way to the hidden scenes.

It’s
a different point, but I also didn’t like the grounds Dixon endorsed for
abandoning teams as a partisan. Without going into too much detail, he
basically held that the grounds for this should be the on-field behaviour of a
team. He even said that you should dump a team that engages in ‘verbal
intimidation’. I don’t buy this. I wouldn’t like the Australian cricket team nearly as much as I do if they didn’t engage in a little verbal intimidation from
time to time. I think a much better ground for abandoning a team is the off-field behaviour of their players. Or, a little more generally, it is the kind of off-field behaviour that they endorse in virtue of who they sign. So I think
the otherwise superlative season the Seattle Mariners had last year was
tarnished by the fact that they signed Al Martin just after he’d been indicted
on charges of assaulting one of his wives. And whatever the Cubbies do this
year will be tarnished by their using their first draft pick on alleged human Ben Christensen. (See link
for the gruesome details.) Aside from that, the odd gratuitous foul or questioning
of one’s opponent’s parentage is entirely acceptable.

There
was one amusing error in the article: Michael Jordan did play for the Chicago
White Sox (and I guess the Bulls too), but not the Cubbies!

Oh,
and the Red Sox just beat the Rangers despite an A-Rod home run.