Time for some random thoughts on
epistemology. I have been playing around with a two-tiered theory of
justification over recent months, which recognises a concept of machine justification that is more or less reliabilist and a concept of agent justification that is more or less coherentist. Roughly, X is justified in
believing p iff X is an agent and X is agent-justified in believing p,
or X is not an agent and X is machine-justified in believing p.
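To put that a bit more schematically (the letters here are just my own shorthand: J for justification simpliciter, A for agency, J_A for agent-justification and J_M for machine-justification):

\[
J(x,p) \;\leftrightarrow\; \bigl(A(x) \wedge J_A(x,p)\bigr) \vee \bigl(\neg A(x) \wedge J_M(x,p)\bigr)
\]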
This
puts a lot of stress on the concept of agency, and I dont have a lot to say
about this, but roughly the idea is that X is an agent iff X has the capacity
for both inductive reasoning and critical reflection on her own beliefs. So we
humans become agents sometime after infancy, but presumably not too long into
childhood. Induction is important here because, as Fodor shows in his recent book, it isn't a modular process, and non-modularity is important because (well, this is going to sound like a cheat) it is impossible to solve the generality problem for non-modular processes, so a reliabilist concept like machine justification can't apply to them.
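In the same shorthand, the rough account of agency above comes to this, with Ind and Refl standing for the capacities for inductive reasoning and for critical reflection on one's own beliefs (again, these labels are just mine):

\[
A(x) \;\leftrightarrow\; \mathrm{Ind}(x) \wedge \mathrm{Refl}(x)
\]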
Anyway,
there are two important caveats to the above analysis of justification, both to
do with entities that start off as machines, but become agents. First, if X
acquires a justified belief in p while still a machine, she is still
justified in believing p once she becomes an agent, even if she wouldn't
be agent-justified in believing p on the basis of the evidence she now
has. Secondly, and this is the crucial one, I think, if X acquires (or, more
likely, activates) a reliable modular belief-forming mechanism while still a
machine, beliefs acquired through that mechanism are justified even after X
becomes an agent. So assuming that we are not being massively deceived and our
faculties are more or less reliable, our perceptual beliefs are justified. But
this turns crucially on the fact that we became perceivers before we became
agents. If we were to acquire a perceptual faculty late in life (i.e., after becoming agents, and hence after becoming capable of reflecting on the reliability of that faculty), beliefs acquired through it would not be justified until we had a reason for thinking the faculty is reliable. This, I take it, is the lesson of BonJour's Clairvoyant Claire example, and my Blind
Belinda example. Further, if we acquired all of our perceptual
faculties late in life, we wouldn't have any justified perceptual
beliefs. This captures what is right, I think, about Cartesian scepticism about
justification. If we were born agents, we wouldn't be justified in believing anything. (So my theory is just false if the theory theory is true – that's a risk I'm willing to take!) There are a few further wrinkles in the theory about how the concept of coherence works (basically it's still a little externalist
for agents who used to be machines), and a few things to say about why this is
a much better theory than various internalist and externalist theories
of justification, and a little more useful than Ernie
Sosa's distinction between animal knowledge and human knowledge. But for
now I want to spend a little time on the sceptical conclusion I just stated.
Let's pretend that it's possible for an entity without perceptual faculties to have
beliefs, at least about mathematics. I think this is probably possible, but if
you don't, please just pretend. Imagine that such a thing acquires doxastic
agency in the sense described above. It believes, on inductive grounds, that
all even numbers greater than two are the sum of two primes, and on reflection it realises that
this belief is less secure than its belief that 3+3=6. It then acquires a
single perceptual faculty, say sight. I think it would have no reason
whatsoever to trust any of these inputs. It's a little hard to imagine the case, but if the thing didn't even have a kinaesthetic sense, I think it would
be very hard for it to know just what sense to make of these visual images
flooding in. So far, at least, I think my sceptical conclusion is right: even if the visual beliefs of the thing are forced and reliable, they aren't
justified. (Remember, I don't apply these sceptical conclusions to us – we
acquired our justification for perceptual beliefs while still machines.)
Anyway,
that's not the problem I want to raise. Imagine such a thing gets a whole host
of new, and clearly distinct, kinds of perceptual input. Just to make things
concrete, imagine that all of a sudden it has visual, auditory, tactile and
kinaesthetic senses. And it notices, very quickly, that the inputs it gets from
these senses all cohere very nicely. Would it then be justified in
believing all of the inputs? This is a bootstrapping problem, but it isn't an
easy knowledge problem, as Stewart Cohen puts it. Each of the faculties is tested against the
others, and it could in principle fail this test. Does this mean that they
start delivering justified beliefs? I'm still inclined to think not, but maybe I'm wrong. Any thoughts?