Justification and Innateness

It’s been a long time between posts here, which
is not good. I just gave a paper at the Central APA, a copy of which is here. And I just sent the following abstract to the 2002
AAP. It’s common practice to send papers that are not yet written to the AAP,
which makes the conference a little more cutting edge, and the outcomes a
little more variable.

Justification and Innateness

Our concept of epistemic justification is a
somewhat awkward amalgam of two related concepts: a reliabilist concept that is
appropriate for evaluating believers without the capacity for critical
reflection, and a coherentist concept that is appropriate for evaluating those
with this capacity. The application of this concept gets complicated when
dealing with believers who have this capacity at some stages of their
existence, and lack it at other crucial times. To take one interesting example,
we don’t acquire the capacity for critical reflection until well after we start
acquiring beliefs, so these difficulties matter to us. I propose that the
reliabilist concept is suitable for evaluating beliefs acquired before the
onset of critical reflection, and the coherentist concept is suitable for
evaluating beliefs acquired after this time. This proposal deals with some cases,
largely inspired by BonJour’s clairvoyant, that defeat simpler versions of
reliabilism, while retaining a sizeable role for accuracy in our theory of
justification.

If you want a copy of the paper when it’s
done, let me know and I’ll email it to you. Of course, you could probably
figure out what I’m going to write from the posts below, but that would spoil
the fun of reading the finished paper.

By
the way, Neil McKinnon (another great Monash product) has a number of really
interesting papers up on his website. If
you’re interested in issues about time, persistence and vagueness (and really,
who isn’t?), you should look at it.

Reliabilism

Time for some random thoughts on
epistemology. I have been playing around with a two-tiered theory of
justification over recent months, one which recognises a concept of ‘machine
justification’ that is more or less reliabilist, and a concept of ‘agent
justification’ that is more or less coherentist. Roughly, X is justified in
believing p iff either (X is an agent and X is agent-justified in believing p)
or (X is not an agent and X is machine-justified in believing p).
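
Putting that a little more formally (this is just my shorthand: A(x) for ‘x is
an agent’, J_a(x, p) for ‘x is agent-justified in believing p’, and J_m(x, p)
for ‘x is machine-justified in believing p’):

$$J(x, p) \leftrightarrow \bigl(A(x) \wedge J_a(x, p)\bigr) \vee \bigl(\neg A(x) \wedge J_m(x, p)\bigr)$$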

This
puts a lot of stress on the concept of agency, and I don’t have a lot to say
about this, but roughly the idea is that X is an agent iff X has the capacity
for both inductive reasoning and critical reflection on her own beliefs. So we
humans become agents sometime after infancy, but presumably not too long into
childhood. Induction is important here because, as Fodor shows in his recent
book, it isn’t a modular process, and non-modularity is important because
(well, this is going to sound like a cheat) it is impossible to solve the
generality problem for non-modular processes, so a reliabilist concept like
machine justification can’t apply to them.

Anyway,
there are two important caveats to the above analysis of justification, both to
do with entities that start off as machines, but become agents. First, if X
acquires a justified belief in p while still a machine, she is still
justified in believing p once she becomes an agent, even if she wouldn’t
be agent-justified in believing p on the basis of the evidence she now
has. Second, and this is the crucial one, I think, if X acquires (or, more
likely, activates) a reliable modular belief-forming mechanism while still a
machine, beliefs acquired through that mechanism are justified even after X
becomes an agent. So assuming that we are not being massively deceived and our
faculties are more or less reliable, our perceptual beliefs are justified. But
this turns crucially on the fact that we became perceivers before we became
agents. If we acquired a perceptual faculty late in life (i.e. after becoming
agents and hence after we are capable of reflecting on the reliability of this
faculty), beliefs acquired through it are not justified until we have a reason
for thinking the faculty is reliable. This, I take it, is the lesson of
BonJour’s Clairvoyant Claire example, and my Blind Belinda example.
Further, if we acquired all of our perceptual
faculties late in life, we wouldn’t have any justified perceptual
beliefs. This captures what is right, I think, about Cartesian scepticism about
justification. If we were born agents, we wouldn’t be justified in believing
anything. (So my theory is just false if the ‘theory theory’ is true – that’s a
risk I’m willing to take!) There are a few further wrinkles in the theory about
how the concept of coherence works (basically it’s still a little externalist
for agents who used to be machines), and a few things to say about why this is
a much better theory than various internalist and externalist theories
of justification, and a little more useful than Ernie Sosa’s distinction
between animal knowledge and human knowledge. But for
now I want to spend a little time on the sceptical conclusion I just stated.
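
Before getting to that, for the record, here is roughly how I’d state the two
caveats with explicit time indices (again just my shorthand, with t_1 earlier
than t_2, B(x, p, t) for ‘x believes p at t’ and J(x, p, t) for ‘x is
justified in believing p at t’):

$$\text{(C1)}\quad \bigl(\neg A(x, t_1) \wedge J(x, p, t_1) \wedge B(x, p, t_2)\bigr) \rightarrow J(x, p, t_2)$$

$$\text{(C2)}\quad \bigl(\neg A(x, t_1) \wedge Acq(x, M, t_1) \wedge Rel(M) \wedge Via(x, p, M, t_2)\bigr) \rightarrow J(x, p, t_2)$$

where Acq(x, M, t) says x acquires (or activates) modular mechanism M at t,
Rel(M) says M is reliable, and Via(x, p, M, t) says x’s belief in p at t was
formed through M.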

Let’s
pretend that it’s possible for an entity without perceptual faculties to have
beliefs, at least about mathematics. I think this is probably possible, but if
you don’t, please just pretend. Imagine that such a thing acquires doxastic
agency in the sense described above. It believes, on inductive grounds, that
every even number greater than two is the sum of two primes, and on
reflection it realises that
this belief is less secure than its belief that 3+3=6. It then acquires a
single perceptual faculty, say sight. I think it would have no reason
whatsoever to trust any of these inputs. It’s a little hard to imagine the
case, but if the thing didn’t even have a kinaesthetic sense, I think it would
be very hard for it to know just what sense to make of these visual images
flooding in. So far, at least, I think my sceptical conclusion is right: even
if the visual beliefs of the thing are forced and reliable, they aren’t
justified. (Remember I don’t apply these sceptical conclusions to us – we
acquired our justification for perceptual beliefs while still machines.)

Anyway,
that’s not the problem I want to raise. Imagine such a thing gets a whole host
of new, and clearly distinct, kinds of perceptual input. Just to make things
concrete, imagine that all of a sudden it has visual, auditory, tactile and
kinaesthetic senses. And it notices, very quickly, that the inputs it gets from
these senses all cohere very nicely. Would it then be justified in
believing all of the inputs? This is a bootstrapping problem, but it isn’t an
‘easy knowledge’ problem, as Stewart Cohen puts it. Each of the faculties is tested against the
others, and it could in principle fail this test. Does this mean that they
start delivering justified beliefs? I’m still inclined to think not, but maybe
I’m wrong. Any thoughts?

Vagueness Test Again

It is no longer true that everyone who has
taken the vagueness test has got
the results Kamp and Raffman predict! Does one counterexample refute the
theory, even if it’s in an uncontrolled experiment? I doubt it, but it’s not
great news for the theory.

There
hasn’t been much updating recently because of either extreme busyness in my
life or extreme laziness in my work habits. I’ll leave it to you to decide
which.

I’m
currently rewriting the pragmatics of vagueness paper to make it about the
Sorites. This doesn’t change the underlying thesis that much, but hopefully it
will be a good marketing angle. If anyone reads this, I’d be interested in
hearing if you’ve ever seen a Sorites argument of the following form:

A person with a billion dollars is rich.

For all n, either a person with n
dollars is not rich or a person with n-1 dollars is rich.

Therefore, a person with 2 dollars is rich.

This is clearly valid (at least outside
Australia), and in theory its premises seem at least as plausible as the
premises in a normal Sorites argument. By that I mean that in theory it seems
that if ‘If A then B’ is true then ‘Not A or B’ should be true, so
the second premise here should, in theory, be entailed by the premise in a
normal Sorites. But (a) I’ve never seen an argument of this form in the
literature and (b) it seems rather painless in this case to simply deny the
second premise. One of the aims of the paper, as currently constituted, is to
explain why this argument does not seem sound, and hence cannot be the
basis of any paradox, so I do hope it doesn’t seem sound.
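
For concreteness, here is the argument in symbols (my formalisation, writing
R(n) for ‘a person with n dollars is rich’):

$$R(10^9)$$
$$\forall n\, \bigl(\neg R(n) \vee R(n-1)\bigr)$$
$$\therefore\ R(2)$$

The conclusion follows from the premises by a billion or so applications of
disjunctive syllogism, which is presumably why the validity claim might be
contested in Australia. And the entailment claim is just that the usual
Sorites premise $\forall n\, (R(n) \rightarrow R(n-1))$ classically entails
the second premise here, since $A \rightarrow B$ classically entails
$\neg A \vee B$.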

Vagueness and Voluntarism

Everyone who has taken the vagueness test so far has got the
results Kamp and Raffman predict. I would be very pleased to hear
counterevidence, but I doubt there’s going to be much of that. It would be nice
to have a test of this that didn’t involve phenomenal properties, but I can’t
see how to do it in this framework. Even if I could come up with a Sorites
series that went from ‘Cars are vehicles’ to ‘Skateboards are vehicles’ to
‘Sheep are vehicles’ to ‘Chairs are vehicles’, we couldn’t run this test,
because the subjects would remember the cases as they were going back down the
scale. Not that inevitable experimental design flaws have stopped me before!

Nick
Zangwill suggested a nice variation on the vague picture case below. Instead of having a malicious vandal
change the picture, as I had suggested, just imagine a normal painting
that fades. This will eventually not represent anything at all, but it does not
seem there is a first time when it stops being representational. And this in
turn does not seem to be because of vagueness in the word ‘representational’,
though I admit I don’t have much of an argument for that last claim, and indeed
am prepared to believe it if I don’t have any other choices.

Some
would bridle at this talk of being prepared to believe things. It sounds like I
can just choose what I believe. Well, contrary to what you might have heard, it
is possible to choose what you believe at least some of the time. The other
day, for instance, I decided to believe that voluntarism about belief is true.
I was worried that this was irrational, but it can hardly be irrational to have
self-verifying beliefs.

There
is a more serious argument for this kind of voluntarism. Sometimes I slip into
believing that p on the basis of manifestly insufficient evidence. For
example, I was tricked into believing that the departing Clintonistas really
did steal all the W keys off White House keyboards. (I actually thought this
was mildly amusing in the circumstances.) As we all know, this didn’t happen,
and I would have been better served to have not believed it. More often, when I
hear stories like this about the greatest president since Truman, I am tempted
to believe them, especially if they are in the New York Times, but I have a
technique for guarding against such belief. I decide to believe that I don’t
have sufficient evidence to believe the anti-Clinton story. It really isn’t too
hard to make such decisions; the practice of becoming a sceptic, in the good
sense of that term, involves remembering to make this decision, not
implementing it, which is really very easy.
