Kieran Healy writes on the (slow-)growing controversy over the role of
intuitions in philosophy. For background, see the papers by Jonathan Weinberg et
al. here, here and here. (If you haven’t seen the survey results about
intuitions on Gettier cases across cultural and social groups in these papers
yet, you should. And prepare to be a little surprised.)
Kieran has a rather funny caricature of the way philosophers (or at least
metaphysicians) generally argue, but then goes off on a riff about why we
should care more about where intuitions come from.

In the meantime, you might be interested in looking at other writers, who
have explored the idea that our intuitions might have institutional roots; that
culture might mold conceptions of rationality and thus deeply affect how you
think; that classification is a social process which might have its origins in
material life; and that although individual and social cognition interact in
complex ways, getting socialized into a culture often implies subscribing to
its point of view.

I’m
not sure how any of this undercuts the use philosophers make of intuitions. It
seems to me that even if we acknowledge all of this, there are still epistemological
and metaphysical reasons to use intuitions in philosophy. (You
mean you’ll be defending philosophy by using more philosophy?
Yeah,
well what did you expect me to use, chemistry or something?)

The
epistemological reason is that for each of these facts about intuition, we
could (I think) find an equally
disturbing fact about perception. How we see the world around us is affected by
the kind of culture we’re in, what we expect to find and so forth. But none of
that implies that we should stop trusting perceptions as a source of evidence,
provided we’re suitably careful about how we employ them. Of course, practically
nothing should stop us trusting perception as a source of
evidence; that way lies madness, if not philosophical
immortality
.

The
metaphysical reason is that intuitions are sometimes constitutive of the
concepts we’re aiming to analyse. Want to know what’s a house? Well, presumably
houses are things that satisfy the predicate “house”, or fall under the concept
HOUSE. And presumably the facts about what makes an object satisfy the
predicate “house” include facts about how the term “house” gets the meaning it
gets in the language we speak. And presumably those facts include facts about
the intuitions people have about houses. A similar story is probably true for
the concept HOUSE, though here there are some more prominent dissenters. Now
it’s rather controversial whether a similar story could be
true if we replaced “house” with “item of knowledge”, or “rational belief”, or “mind”,
or “person”, or “just act”, or (I guess most controversially) “object”, but at
least for terms towards the left of that list, it seems plausible enough.

Brad DeLong writes that he only just realised that there could be non-spectral
colours.

Until yesterday, it had never occurred to me that I could see colors that
weren’t in the spectrum–I had thought that all colors were somewhere in the
rainbow (or could be made from rainbow colors by darkening or lightening them).

But that is clearly false. Consider magenta. A magenta light plus a
green light equals a white light–all colors. But green is in the middle of the
spectrum. So where in the spectrum is magenta? Magenta is red and blue–the
complement of green. And nowhere in the spectrum is there a wavelength of light
that excites both the red-cones and the blue-cones but does not excite the
green-cones.

I was
going to write a comment saying just how magenta was possible, then I realised
I wasn’t exactly sure. Then I was going to link to a website that explained it
all clearly, until I realised I couldn’t find one. So if anyone could enlighten
me, or Brad, please write in!

Here’s
what I think happens, though I’m not entirely sure. The spectral colours are
colours produced by light of a single constant wavelength. But we know there
are lots of waves that do not have a single constant wavelength. This is
obvious for sound: you never hear the sound of a trumpet, even a trumpet
playing a ‘constant’ note, from a wave of a single constant wavelength.
Magenta, I think, is one of the things that happens when the light in question
is not a wave of constant frequency.
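
For what it’s worth, here’s a toy sketch of the additive-mixing story, with a light modelled as the triple of cone responses it produces. The representation and the numbers are my own illustration, not anything from Brad’s post:

```python
# Model a light as the triple of cone responses (red, green, blue) it
# produces. Spectral colours are response patterns a single wavelength
# can produce; magenta is a pattern (red- and blue-cones firing,
# green-cones quiet) that no single wavelength produces.

def mix(*lights):
    """Additive mixing: cone responses from superposed lights add."""
    return tuple(sum(component) for component in zip(*lights))

red = (1, 0, 0)
green = (0, 1, 0)
blue = (0, 0, 1)

magenta = mix(red, blue)     # (1, 0, 1): both 'end' cones, no green
white = mix(magenta, green)  # (1, 1, 1): all three cone types firing

print(magenta)  # (1, 0, 1)
print(white)    # (1, 1, 1)
```

The point is just that (1, 0, 1) is a cone-response pattern no single wavelength produces, which is why magenta is off the spectrum, and why magenta plus green gives white.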

But, that
doesn’t really say enough about what happens. I don’t know how the waves ‘mix’.
Is it that magenta light contains only photons that each have a constant
frequency, but some of them are around the typical frequency of red light and
some of them around the typical frequency of blue light? Or is it that individual photons
‘vibrate’ in some non-sinusoidal pattern, as the air does when two or more
notes are played? Or does this distinction not really make sense when we’re
dealing with light?

And
I’m not even sure this is the right story about magenta. I think it is, but for
all I’m certain of, magenta could be a contrast colour, like brown, that is
only apparent when there are other visible colours with which it contrasts.

Some
might think that it’s embarrassing how little I know about colours, but (a) if
I was going to be embarrassed by my ignorance there are many other things I’d
be embarrassed about first, and (b) since my department already has an expert
on colour, the marginal value of my learning more is not very high.


The most fun seminar I’ve been attending
this semester has been Jeff King’s seminar at Harvard on the
semantics/pragmatics distinction. (Hang on, isn’t that the only seminar you’ve
been attending? – ed.
Not at all, I’ve also been attending my own seminar,
and normally I’d think that would be the most fun seminar, because I get to
talk.) The main theme of the seminar has been a sustained
attack on theories that provide too small a role for semantics in a theory of
communication. (Some of the attack is presented in this paper co-written with
Jason Stanley.) These theories usually say, in one
way or another, that the explanation for the success of certain kinds of
communication is pragmatic not semantic. (They often go on to say other things
too, but that’s the part that I’m most interested in.) So, to provide a
representative sample, consider two stories about how (1) gets the intuitive
truth conditions that it has.

(1)      If
Charlie drank ten beers and drove home, she broke the law.

Intuitively, (1) is true, because (1) is
true iff it is the case that if Charlie drank ten beers and drove home shortly
afterwards, she broke the law, and that’s clearly true. How could (1) have those
truth conditions? Some theorists (including some time-slices of me) say that
the semantic content of (1) is just that if the conjunction (Charlie drank ten
beers ∧ Charlie drove home) is true then it is true that Charlie broke the law.
The
intuition is explained by the truth of some more or less complicated pragmatic
theory, that somehow predicts that if “Charlie drank ten beers and drove home”
is normally only said if the events happened in that order, then (1) is
normally only said if Charlie’s drinking and driving in that order implies
that she broke the law. And of course there’s a story in Grice about why “Charlie
drank ten beers and drove home” is normally only uttered if the events occurred
in that order, even if the ordering is not part of the truth conditions.

Jeff doesn’t want to accept any of that. He
argues that the most plausible story about the semantics of (1) has the
intuitive truth conditions fall out as being the truth conditions. The first
point to note is that every sentence in English (and every other natural
language) is tensed, and the tenses are presumably part of the semantic
content. So “Charlie drank ten beers” has as its semantic content ∃t (t is in
the past)(Charlie drinks ten beers at t). Importantly, the quantifier here is
restricted. Whether Charlie drank ten beers at Bill Clinton’s second inaugural doesn’t
really matter to the truth of an ordinary utterance of “Charlie drank ten beers”
unless for some reason we are talking about Clinton’s second inaugural.

Arguably (and
better philosophers than I have persuasively argued for this at length) every
sentence that isn’t in the present tense literally expresses a proposition that
contains a quantifier over time. And this quantifier isn’t present because of
some mysterious pragmatic process, it’s encoded in the verbs of the sentence,
just like most semantic content is encoded somewhere in surface structure. And
what goes for whole sentences goes for constituent sentences too, so to a first
approximation, the semantic content of (1) is (2).

(2)      If ∃t1 (Past t1)(Charlie drinks ten beers at t1) and ∃t2 (Past
t2)(Charlie drives home at t2) then ∃t3 (Past t3)(Charlie breaks the law at t3).

This isn’t much
help yet, but if we also hold (a) all three quantifiers here are restricted,
and (b) the restrictions are somehow co-ordinated, then we can have the
semantic content of (1) really be something like (3).

(3)      If ∃t1 (Past t1)(Salient t1)(Charlie drinks ten beers at t1) and
∃t2 (Past t2)(t2 is shortly after t1)(Charlie drives home at t2) then
∃t3 (Past t3)(t3 = t2)(Charlie breaks the law at t3).

This is obviously
very rough, because as it stands we’ve got variables appearing outside the
scope of the quantifiers that bind them, but at least this is a workable
suggestion for how (1)’s truth conditions might match its intuitive truth
conditions. And to the extent that the argument for radical pragmatic theories was
premised on the assumption that there isn’t even a workable suggestion for how (1)’s
truth conditions might match its intuitive truth conditions, well those
arguments are looking fairly weak. (That would include some arguments I’d
previously adopted. Oh well – you can’t be right all the time.)
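
To make the coordinated-restriction idea concrete, here’s a toy evaluator for reading (3). The model, the particular times, and the three-unit stand-in for ‘shortly after’ are all my own inventions:

```python
# A toy model of reading (3): the quantifiers over past times are
# restricted, and the restrictions are coordinated, so "and" itself
# stays truth-functional.

drinks = {1}        # times at which Charlie drinks ten beers
drives = {2}        # times at which Charlie drives home
breaks_law = {2}    # times at which Charlie breaks the law
NOW = 10

def shortly_after(t2, t1):
    """An arbitrary stand-in for the contextual 'shortly after' relation."""
    return 0 < t2 - t1 <= 3

def sentence_1():
    """(1), with coordinated restrictions as in (3): if Charlie drank ten
    beers and drove home shortly afterwards, she broke the law (then)."""
    antecedent_times = [
        (t1, t2)
        for t1 in drinks if t1 < NOW                           # Past, salient t1
        for t2 in drives if t2 < NOW and shortly_after(t2, t1) # Past t2, shortly after t1
    ]
    # Every way of making the antecedent true makes the consequent true
    # at t3 = t2.
    return all(t2 in breaks_law for (_, t2) in antecedent_times)

print(sentence_1())  # True on this model
```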

But not all the
examples of alleged separation between truth conditions and intuitive truth
conditions are handled with quite such ease.

(4)      If Hannah insulted Joe and Joe resigned,
then Hannah is in trouble.

As Jeff and Jason
note, (4) “seems to express the proposition that if Hannah insulted Joe and Joe
resigned as a result of Hannah’s insult, then Hannah is in trouble.” The
suggestions above about using restricted quantifiers over times won’t help
here, because they won’t get the causal link between Hannah’s (possible) insult
and Joe’s (possible) resignation into the proposition. So what can our heroes
do? They start by taking a rather sensible approach: when in trouble, ask What
Would Bob Stalnaker Do?

As Robert Stalnaker has argued, indicative conditionals normally
exploit a similarity relation that counts only worlds compatible with the
mutually accepted background assumptions as the most similar worlds for
purposes of semantic evaluation. … An indicative conditional is true if and
only if the consequent is true in every one of the most relevantly similar
worlds in which the antecedent is true. (King and Stanley, 48)

Well, I’m not
sure that’s exactly what Stalnaker said, for reasons that shall become apparent
presently. Anyway, applying this theory to (4) we get the following
conclusions.

Fortunately, however, there is no reason to give a non-semantic account
of the intuitive readings of (4). The relevant reading of (4) is simply
predicted by the semantics for indicative conditionals that we have endorsed.
In a context in which the speaker has in mind a causal relationship between
Hannah’s insulting of Joe and Joe’s resignation, all relevantly similar worlds
in the speaker’s context set in which Hannah insulted Joe and Joe resigned,
will ipso facto be ones in which Joe’s resignation is due to Hannah’s insult.
The speaker’s context set is what is epistemically open to her. This may include
worlds in which the conjunction holds, and there is no causal relationship between
the conjuncts. But given that she has a causal relationship saliently in mind,
such worlds will not be the most relevantly similar worlds in the context set.
So, if she has a causal relation in mind between the two events, that is just
to say that the similarity relation for indicative conditionals will select
those worlds in which there is a causal relationship between the conjuncts of
the antecedent as the most similar worlds to the world of utterance in which
the antecedent is true. So, the causal reading of (4) is predicted by the
simple semantics for the indicative conditional that we have adopted above.
(King and Stanley, 53, numbering adjusted.)

Imagine
that all the following circumstances obtain:

(5)      Jeff and Jason are right about the
semantics of indicative conditionals;
(6)      Hannah recently insulted Joe;
(7)      Shortly after that, Joe resigned;
(8)      Joe’s resignation was not due to
Hannah’s insult
          (in fact it was because he just realised
he always wanted to be a lumberjack)
(9)      Hannah is not in trouble.
(10)    Someone uttered (4) knowing (6)
and (7), but not (8).

In
those circumstances, I think the utterance of (4) may well be true. All the
epistemically open scenarios in which (6) and (7) are true are ones in which
Hannah is in trouble. And according to Jeff and Jason, the antecedent of (4)
is true iff (6) and (7) are true. So all (epistemically) nearby worlds in which
the antecedent is true are worlds in which the consequent is true, so the
utterance of (4) is true.

But,
per hypothesis, the actual world is also a world in which (6) and (7) are true,
and hence the antecedent of (4) is true. And the actual world is a world where
the consequent of (4) is false. So the actual world is a world where the
premises of the following argument are true and the conclusion false.

If
Hannah insulted Joe and Joe resigned, then Hannah is in trouble.
Hannah insulted Joe and Joe resigned.
So, Hannah is in trouble.

So modus
ponens is not a valid argument form. Something may have
gone awry. There’s two problems here, both of them potentially serious. First, on
the formal semantics Stalnaker adopts for the indicative conditional, modus
ponens is valid, yet Jeff and Jason claim to just be implementing Stalnaker,
and they’ve ended up rejecting modus ponens. Either Stalnaker’s got his own
theory wrong, or Jeff and Jason have got him wrong.

Secondly,
THEY’RE REJECTING MODUS PONENS. Isn’t this something that should be a serious
issue? I mean, it’s at least somewhat surprising. Not as surprising as, say, the
fact that Rocky
VI is going to get made
. But surprising. Reading through Jeff and Jason’s
papers, and certainly listening to Jeff, one gets the impression that the
forces they’ve lined up against present views that are seriously flawed in some
way or other. I do hope that rejecting modus ponens is not the only
alternative to these positions.
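
For the sceptical, the counterexample can be checked mechanically. Here’s a toy possible-worlds model (my own sketch of the semantics as Jeff and Jason state it, with worlds as simple tuples, not anything from their paper):

```python
# The speaker knows (6) and (7) but not (8), so her context set contains
# both a world where the resignation was caused by the insult and the
# actual world, where it wasn't. With the causal relation salient, only
# the caused-worlds count as most similar. The conditional then comes
# out true while its antecedent is true and consequent false at
# actuality: modus ponens fails.

from collections import namedtuple

World = namedtuple("World", "insulted resigned caused in_trouble")

actual = World(True, True, False, False)   # (6), (7), (8), (9) all hold
context_set = [
    World(True, True, True, True),         # resignation caused; trouble
    actual,                                # no causal link; no trouble
]

def antecedent(w):
    return w.insulted and w.resigned

def consequent(w):
    return w.in_trouble

# Salient causal relation: most similar antecedent-worlds are the
# caused-worlds.
most_similar = [w for w in context_set if antecedent(w) and w.caused]

conditional_true = all(consequent(w) for w in most_similar)

print(conditional_true)    # True: the conditional is true
print(antecedent(actual))  # True: its antecedent is true at actuality
print(consequent(actual))  # False: yet its consequent is false there
```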

Sometimes I think it would be fun to run a critical
thinking course focussing on how to spot fallacious reasoning that only ever used examples drawn from the contemporary media.
Depending on how sensitive Brown students are, I could end up getting accused
of every sort of bias imaginable. (And the evidence is that some of them are much
too sensitive.) But I don’t have such a course yet, so I’ll have to stick
to the blog. This is from the Washington Post.

"This Lott
story has continued primarily because of criticism from conservatives,"
said Whit Ayres, a Republican pollster based in Atlanta. “If the only people
raising doubts were Jesse Jackson and Al Sharpton, this story would have died
of its own weight several days ago. It’s the anguish from conservatives that
has kept the story going.”

Um, yeah. The hidden premise here is that the only
people who ‘raised doubts’ were Jesse Jackson, Al Sharpton and conservatives.
Given that extra premise, the conclusion that “it’s the anguish from
conservatives that has kept the story going” I guess would follow. And you
know, if you’re prepared to count Josh Marshall, Paul Krugman and Al Gore
as conservatives, well the hidden premise still wouldn’t be true, but
at least there wouldn’t be a refutation I could find within five seconds of
scanning the NY Times.


A Problem for Process Reliabilism

The following strikes me as a pretty
persuasive argument against a thorough-going process reliabilism. Since I’m no
expert on the field, I don’t know how similar it is to existing arguments
against process reliabilism, which is to say that if this turns out to be a
boring repetition of familiar points, well at least it wasn’t intentional
plagiarism.

Process reliabilism says that the
justification of a belief is proportional to the reliability of the process
that generated the belief. This raises the generality problem, as stressed in
Conee and Feldman’s 1998 paper – what is the process by which the belief
is generated? Or, to put the point more obscurely, what are the individuation
conditions for process types being used in this formulation. At one level the
generality problem is the problem of making the basic claim of process
reliabilism contentful – if we are prepared to count gruesome enough types,
then every belief is the product of some very reliable processes, and some very
unreliable processes. But let’s assume that problem has been handled.
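
The point about gruesome types can be made concrete. Here’s a toy illustration; the process types and their track records are entirely made up:

```python
# One and the same token belief falls under many process types, and the
# reliability score swings wildly depending on which type we pick. The
# track records (true beliefs, total beliefs) below are invented.

def reliability(true_beliefs, total_beliefs):
    """Fraction of beliefs formed by this process type that were true."""
    return true_beliefs / total_beliefs

track_records = {
    "visual perception":                    (9500, 10000),
    "perception while tired":               (600, 1000),
    "belief formed on a Tuesday at 3:14pm": (1, 1),   # gruesome type
}

for process_type, (true_n, total_n) in track_records.items():
    print(process_type, reliability(true_n, total_n))
```

Unless something privileges one of these types, ‘the’ reliability of the process that generated the belief isn’t well defined, and the gruesome one-member type shows how a lucky guess can score a perfect record.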

At another level, the generality problem
raises a tension that I think can’t be resolved for a full-blown process
reliabilist. On the one hand, we want processes to be instantiated more than
one time, or else we’ll be led to the crazy view that a belief is justified iff
it is true. So we don’t want the individuation to be too fine-grained.
On the other hand, the definition of justification entails rather immediately
(so immediately that it might surprise you to learn how long it took me to
realise this) that every belief generated by the same process is equally justified. To
the extent that justificatory status can be very sensitive to the particular
ways a belief is formed, that implies we want processes to be individuated
quite finely. I think, and I think I have an example that supports this, that
these two constraints can’t be satisfied at once. Onto the example…

DIAGNOSIS

Morgan is
displaying symptoms S. Dr Watson knows that symptoms S normally
imply that the patient has a liver disease. But he also knows that in some
cases, happily enough in all and only cases where the patient has genetic condition
C, a patient with symptoms S doesn’t have a liver disease, but in
fact has a kidney disease. Dr. Watson also knows that genetic condition C
is rare, only 1% of males and 7% of females have C. And he knows that
there’s no easy way to test for whether a patient has condition C, for
usually it has no readily observable effects. And he knows he has no other
relevant information about whether Morgan has condition C. So Watson
concludes that Morgan has a liver disease.

How justified is Dr. Watson’s belief?

I think you don’t know enough to say yet,
because you don’t know whether Morgan is male or female. If Morgan is male,
then Watson’s belief is very well justified. If Morgan is female, then Watson’s
belief isn’t particularly well justified, for he should be taking more
seriously the possibility that Morgan has condition C. Even in that case, it isn’t a disastrous
belief, but not as well justified as in the case where Morgan is male. Since
the two possible beliefs are not equally well justified, we need to say that
they are the results of different processes.

That alone might not be a problem. Perhaps
we can find a different way of categorising beliefs such that the belief that a
male patient displaying S has a liver disease falls into a different
category than the belief that a female patient displaying S has a liver
disease, though I’m not entirely convinced that existing (pure) reliabilist
theories have the resources to do this.

The problem is that the example generalises.
If x and y are both relatively small numbers, and Watson knows
that x% of males have condition C and y% of females do,
then his conclusion that Morgan has a liver disease is more justified if Morgan
is male rather than female for any such x and y with x &lt; y, even if they are very
close, say x = 4.5 and y = 5, or even, I’d guess, if x =
4.5 and y = 4.51.
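
Here’s the arithmetic, on a simplifying assumption that I take to be implicit in the case: a patient with symptoms S has a liver disease exactly when they lack condition C.

```python
# Watson's diagnosis "liver disease" is right exactly when the patient
# lacks condition C (simplifying assumption), so the reliability of the
# diagnosis is just one minus the base rate of C.

def p_liver_given_S(p_condition_C):
    """Chance the liver-disease diagnosis is correct, given C's base rate."""
    return 1 - p_condition_C

print(round(p_liver_given_S(0.01), 2))  # male patient (1% have C): 0.99
print(round(p_liver_given_S(0.07), 2))  # female patient (7% have C): 0.93

# The generalisation: any gap in base rates, however small, makes the
# male-patient diagnosis more reliable than the female-patient one.
print(p_liver_given_S(0.045) > p_liver_given_S(0.0451))  # True
```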

That means that we’re going to have to posit
infinitely many different categories of belief-forming processes, just to
account for all the different possible processes via which Watson could form
the belief that Morgan has a liver disease. The problem is that when categories
of belief-forming processes get so fine-grained, we will start to get some
lucky guesses counting as justified beliefs, because they are the only beliefs
ever formed by that process, and some unlucky reasoned judgments counting as unjustified
beliefs, again because of the small sample size. This I take it should be
intolerable.

One response to related problems raised in
the 1980s was to modalise the notion of reliability. Maybe I’ll come back to
that in later posts, but I think it should be pretty clear that won’t help. The
problem is that there’s too many darn worlds to possibly count successes and
failures of a process, and no other approach to summarising the data from
nearby possible worlds seems to be much use.

This is not a problem for theories of
justification that incorporate some aspects of process reliabilism, but also
build in some more traditional internalist evaluations of modes of reasoning.
Ernie Sosa’s virtue reliabilism is like this, and my theory, which is
reliabilist about observational beliefs and (sorta kinda) foundationalist about
non-observational beliefs, doesn’t face it either. But a theory that is all process
reliabilism all the time really looks like it has problems with DIAGNOSIS.


There’s been a rush on the vagueness
experiment in the last few hours, from where I have no idea. Anyway, as best I
can tell from looking through the counters (and taking into account comments
like Ehud’s that they hit some of the counter pages
because they were just looking around) the score is now Consistency 53 – Contextualism
12. I’m going to be away from the computer for a few hours – at this rate the
over/under for the combined score when I get back is about 100.

UPDATE: It turns out that the flood of
responses to the vagueness experiment is because of this rather kind link
by Matthew Yglesias, who runs one
of the best combined academic/political blogs around. Go read it, and if you
agree you can even vote for him in Dwight Meredith’s Koufax Awards. The Koufax
Awards are for the best lefty blogs around, and are allegedly named after the
best lefty pitcher ever. Though in that case why they aren’t named the Grove awards
is a bit of a mystery. Perhaps it’s because if the award were really for best
lefty pitcher they’d have to change their name to the Johnson
awards sometime between when Randy starts next year’s All-Star game and when he
wins next year’s Cy Young
award. Oh, in the experiment the score is now 71-18, so everyone who took the under on the bet I mentioned above wins.


Free

Via Martijn
Blaauw
I got a notice of this graduate
student conference in epistemology
to be held in Amsterdam next May. It
looks like fun, and not just because it’s in Amsterdam. Anyone who can get
funding for going to Amsterdam and participating in a fun philosophy conference
should pause and reflect on just how much good fortune they possess. Sometimes grad students have all the luck!

UPDATE: I didn’t read the fine print very closely. It seems the deadline for submitting papers to this conference has passed. I don’t know how strict they will be about enforcing things like deadline rules. (It’s at the University of Amsterdam, you’d think there wouldn’t be things like rules anyway.) But if they are strict this isn’t as appealing as it first looked. Thanks to Alyssa Ney for picking up this little detail that I missed.