Kripkenstein’s Mules

I’ve been thinking recently about the possible fruitfulness of comparing cleverly-disguised-mule-worries (CDMW) in epistemology with Kripkensteinian-meaning-underdetermination-worries (KMUW).

I think it is helpful, in understanding CDMW, to think about two kinds of questions:

(1) Why does S believe those animals are zebras rather than lions?
(2) Why does S believe those animals are zebras rather than cleverly disguised mules?

‘Because they are zebras’ looks like a good answer to questions like (1) and a bad answer to questions like (2).

For a contextualist explanationist about knowledge like myself, this suggests that in contexts where the question ‘Is S’s belief explained by the fact believed?’ amounts to something like (1), ‘S knows they are zebras’ looks good, and in contexts where that question amounts to something like (2), ‘S knows they are zebras’ looks bad.

What I find suggestive, in trying to understand KMUW, is an analogy with the questions:

(1′) Why does S use ‘plus’ for addition rather than subtraction?
(2′) Why does S use ‘plus’ for addition rather than quaddition?

‘Because of the dubbing, or otherwise word-defining, activities of S’s linguistic predecessors’ looks like a good answer to questions like (1′) and a bad answer to questions like (2′).
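For concreteness, here is a minimal sketch in Python of the standard plus/quus contrast from Kripke’s discussion (the code is just my illustration, using Kripke’s usual threshold of 57): quaddition agrees with addition on every computation S has actually performed, and comes apart only on arguments she has never considered.

```python
def plus(x, y):
    """Ordinary addition."""
    return x + y

def quus(x, y):
    """Kripke's quaddition: agrees with addition when both arguments
    are below 57, and returns 5 otherwise."""
    return x + y if x < 57 and y < 57 else 5

# All of S's (finite, small-number) past usage fits both interpretations:
assert all(plus(x, y) == quus(x, y) for x in range(57) for y in range(57))

# The interpretations come apart only on cases S has never computed:
print(plus(68, 57), quus(68, 57))  # 125 vs. 5
```

The sketch is only meant to make vivid why citing the word-defining activity of S’s predecessors looks like a better answer to (1′) than to (2′): that activity fits both functions equally well.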

More on this topic will follow …

Knowledge, Justified Belief and Practical Interests

I’ve been thinking again about the issues about knowledge, justified belief and practical interests that I explored a bit in “this old paper”:http://brian.weatherson.org/cwdwpe.pdf. In that paper I have a rather complicated example that’s meant to show that a principle Jeremy Fantl and Matthew McGrath endorse, namely (PC), is false. Here is the principle.

(PC) S is justified in believing that _p_ only if S is rational to prefer as if _p_.

The rough outline of why (PC) is wrong is that whether one is rational to prefer as if _p_ might depend not only on whether one has justified attitudes towards _p_, but also on whether one’s other attitudes are justified. Here is one example in which that distinction matters.

S justifiably has credence 0.99 in _p_. She unjustifiably has credence 0.9999 in _q_. (She properly regards _p_ and _q_ are probabilistically independent.) In fact, given her evidence, her credence in _q_ should be 0.5.

S is offered a bet that pays $1 if _p_ ∨ _q_ is true, and loses $1000 otherwise. Assume S has a constant marginal utility for money. It is irrational for S to prefer to take the bet. Given her evidence, it has a negative expected value. Given her (irrational) beliefs, it has a positive expected value, but if she properly judged the evidence for _q_, then she would not take the bet.
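To make the arithmetic explicit, here is a minimal sketch in Python (the helper function and its name are mine, purely for illustration) of the bet’s expected value under S’s actual credences, under the credences her evidence supports, and conditional on _p_:

```python
# Expected value of a bet that pays $1 if (p or q) is true and loses $1000 otherwise,
# treating p and q as probabilistically independent, as in the example.

def ev_of_bet(cr_p, cr_q, win=1.0, loss=-1000.0):
    """Expected value of the bet, given credences in p and q."""
    cr_p_or_q = 1 - (1 - cr_p) * (1 - cr_q)
    return cr_p_or_q * win + (1 - cr_p_or_q) * loss

print(ev_of_bet(0.99, 0.9999))  # ~ +0.999: by her actual (partly unjustified) lights, take the bet
print(ev_of_bet(0.99, 0.5))     # ~ -4.005: by the credences her evidence supports, decline it
print(ev_of_bet(1.0, 0.5))      # = 1.0: conditional on p, the bet is a free dollar
```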

Of course, given _p_ the bet is just a free grant of $1, so she should take it.

So this is a case where it is not rational to prefer as if _p_. She should prefer to decline the bet, but, conditional on _p_, she should prefer to accept it.

If we accept (PC), it follows that S is not justified in believing _p_. But this conclusion seems wrong. S’s credence in _p_ is perfectly justified. And on any theory of belief that seems viable around here, S’s credence in _p_ counts as a belief. (On my preferred view, S believes _p_ iff she prefers as if _p_. And she does. The main rival to this view is the “threshold view”, where belief requires a credence above the threshold. And the usual values for the threshold are lower than 0.99.)

So this is a counterexample to (PC). In a recent paper, Fantl and McGrath defend a weaker principle, namely (KA).

(KA) S knows that _p_ only if S is rational to act as if _p_.

Is this case a counterexample to (KA) as well? (Assume that _p_ is true, so the agent could possibly know it.) I don’t believe that it is. I think the things that an agent knows are the things she can use to frame a decision problem. If the agent knows _p_, then the choice between taking and declining the bet just is the choice between taking a dollar and refusing it, and so she should take the bet. But taking the bet would be irrational, so that must be the wrong way to frame the decision. Hence she doesn’t know that _p_.

The upshot of this is that these practical cases give us a new kind of counterexample to K = JTB. In the case I’ve described, the agent has a justified true belief that _p_, but does not know _p_.

Animal Communication

The other day I was reading about the amazing "waggle dance":http://en.wikipedia.org/wiki/Waggle_dance that honeybees perform to tell their hivemates of the location of food, and then found the wikipedia page on "animal communication":http://en.wikipedia.org/wiki/Zoosemiotics. From reading this article, it seems that some very useful work could be done if philosophers of language collaborated with ethologists (or whatever scientists work in this field) and cleared up some of the fundamental issues. Now, I don’t know how representative of the field the wikipedia article is (the article references many studies and papers, though it’s hard to tell whether experts agree with the overall organization of the article), but it suggests some fundamental confusions.

The wikipedia article states that “Animal communication is any behavior on the part of one animal that has an effect on the current or future behavior of another animal.”  This is a nice operational definition for scientists to use, but it obviously has some flaws.  This is admitted further along in the article:<blockquote>If a prey animal moves or makes a noise in such a way that a predator can detect and capture it, that fits the definition of “communication” given above. Nonetheless, we do not feel comfortable talking about it as communication. Our discomfort suggests that we should modify the definition of communication in some way, either by saying that communication should generally be to the adaptive advantage of the communicator, or by saying that it involves something more than the inevitable consequence of the animal going about its ordinary life.</blockquote> It seems to me that just thinking about things in Gricean terms would help clear things up.

Some interesting examples that are discussed include warning coloration (many poisonous animals have very bright coloration, which has co-evolved with the perceptual systems of potential predators, saving both species much grief in the long run), pursuit-deterrence (some antelopes engage in “stotting” (high jumping while starting to run) when escaping predators, to indicate that they have the energy to far outrun the predator), and warning signals (many monkeys make certain vocalizations to indicate to their group the presence of predators).

It seems that these particular examples rely on different aspects of Grice’s account of speaker meaning. Warning coloration doesn’t seem to rely on any particular intention of the “speaker” – in fact, the animal with the coloration generally has no intentional control at all. Stotting is also similar – a predator that sees the antelope stotting can quickly realize that it can’t catch the potential prey, and will give up. A difference between these two, however, is that warning coloration is purely conventional (a predator may know that bright orange frogs are poisonous, but if it ends up in a different environment with bright blue snakes, it might not recognize the signal) while stotting is somehow more natural (which is not to say that every potential predator will recognize the speed advantage the stotting indicates – this shows that there is still a difference between stotting and the rustle animals generally make in the bushes, which is an unmistakable sign of prey).

Warning calls that monkeys make seem to involve more of the Gricean mechanism – they may or may not be intentional in the sense we are familiar with for human behavior (perhaps they’re more akin to humans saying “ouch!” when hurt), but the recognition of the quasi-intention is essential for the targets of the signal. Unlike stotting, this is a signal that can be faked (stotting is presumably so hard to do that it would be impossible to fake if an animal wasn’t actually capable of outrunning the predator). Thus, the listener needs to understand the intention of the “speaker” in order to properly respond to the signal.

This last point about the potential for faking a signal has apparently been a focus of discussion – most evolutionarily stable animal communication is honest, though there are some instances of dishonesty.  (For instance, many harmless animals that live in the same environment as poisonous ones end up evolving the same coloration, to protect themselves from predators.  Human communication is another notable instance of animal communication that often involves dishonesty.)  But according to this article on animal communication, Amotz Zahavi has argued that evolutionarily stable dishonest communication is impossible – I don’t know exactly what the bounds of this claim are, but it sounds reminiscent of the Kantian argument for why lying is wrong.

Of course, even if some of this communication reaches the level of Gricean speaker-meaning, none of it seems to constitute full-fledged language. The wikipedia article on "animal language":http://en.wikipedia.org/wiki/Animal_language seems to make this clear, though again the categories that are studied seem like they might be slightly puzzling to philosophers of language. But I would guess there is good potential for interdisciplinary work in this area.

Philosophy Videos

Fresh off winning the Elite Research Prize, Vincent Hendricks has “a TV show”:http://www.dk4.dk/?p=plug-side-item;id=2619. Here’s a rough-and-ready translation of the text at that link.

bq. The Power of Mind is a TV-series on philosophy which attempts to show how fundamental philosophical questions and issues show themselves everywhere – in science as well as everyday life.

bq. The show is hosted by Professor Vincent F. Hendricks, who in each program will have a new guest in the studio to discuss ethics, religion, science, aesthetics, politics, mathematics, logic, knowledge and other themes making up the fundamental disciplines of philosophy.

And Joshua Knobe is featured on “Bloggingheads”:http://www.bloggingheads.tv/diavlogs/8796 discussing experimental philosophy.

Personally I much prefer getting philosophy in text form rather than over video or audio. But it’s very exciting to see philosophy being presented to a broader audience, particularly on prime time national TV as Vincent is doing!

On Sleep

I think it’s pretty common to think of how asleep someone is as something that comes in degrees, by which I mean that someone can be a little bit asleep (in which case their eyes will be closed, but they might remember overhearing a conversation nearby, and be wake-able with very little stimulus, such as someone whispering their name, or opening the door of the room they are in), or very very deeply asleep, in which case they might sleep through a loud storm/band playing next door/someone poking them or even moving them, or in all kinds of states in between. But Dement and Vaughan’s _The Promise of Sleep_ argues that this is wrong: though there are indeed different kinds of sleep (i.e. stages 1-4 and REM sleep), sleep itself is a discrete, on/off thing.

The main experiment Dement cites in support of this goes more or less like this: you keep a subject awake for 3 or 4 days, so that they build up a large sleep debt, making them liable to fall asleep quickly. Then you clip their eyelids open (yes, it does sound torturous) and sit them in front of a bright flash, like that of a camera, which goes off randomly, but on average every 8 seconds or so. Then you ask them to push a button every time the flash goes off. Here’s what happens. For the first couple of minutes they push the button diligently every time the flash goes off. But after a couple of minutes, there is a flash and they fail to push the button. The experimenters ask them why they didn’t push the button, and the subject replies that there was no flash. But of course there was a flash: the experimenters all saw it, and the subject is sitting there with their eyes pinned open in front of the flash bulb. The electrodes attached to the subject’s scalp (which you can use to measure electrical activity in the brain) show that the subject actually fell asleep for 2 seconds.


Eidos Metaphysics Conference CFP

The folks at Eidos are organizing a metaphysics conference which will take place in July 2008, and are currently calling for papers on topics in metaphysics suitable for presentation in 40 minutes (leaving 20 minutes for discussion). Speakers currently lined up include Kit Fine and Robin Le Poidevin. If past experience is anything to go by, it should be fun! The deadline for submission of one-page abstracts is 30th March.