Much friendlier than the Monaco doggies.
There was no way I was going to fly all the way to the south of France and not make (several versions of) that joke.
(Post edited to increase comedic clarity.)
And another paper, about disputes about taste:
Here’s a paper, largely about “you”, in which I say stuff that sounds quite a bit like stuff Brian and Kenny say below. It’s a little different, though, and I take longer to say it. Plus I also talk about answering machines and oatmeal.
For the reasons offered in Finlay (comment on previous post), please don’t quote or cite without (ridiculously easy to obtain) permission.
And here’s a very cool paper by Josh Parsons with another take on the same sort of phenomenon, though less about “you” and more about “now”:
Josh doesn’t seem to have any “don’t quote or cite” warning on his page, but it’d probably be nice to ask him anyway if you’re going to.
(Revised to get the links right…)
I’ve seen the raw, street version of this before at grad student parties, and at conferences after the drinking moves from the hotel bar to somebody’s room, but only recently discovered that there’s actually a genre here. And apparently a documentary in progress. This guy‘s rhymes (check out, in particular, “Message No. 419”) are a lot better than ours were, too.
So I’ve been spending time that I should be spending writing about disagreements about taste, fragmented belief, perception, and funny kinds of context dependence, thinking instead about a lyric from a Black Eyed Peas song (“Latin Girls”) that I was listening to while surfing the web writing about all of the very important topics that I ought to be writing about. Anyway, here’s the lyric:
“Girl, you know I know you know what I mean.”
And I started wondering (as one does):
(a) What’s the difference between the overall communicative effects of asserting,
(i) You know what I mean
(ii) You know I know you know what I mean
(b) Whatever the differences are, can you get an adding-to-common-knowledge view of assertion to predict them?
A paper that Tyler Doggett and I have been working on for a while, which is now (we hope) more or less ready for prime time:
Wanting Things You Don’t Want
We argue (with folks like Kendall Walton, Gregory Currie, Ian Ravenscroft, and David Velleman, and against folks like Stephen Stich, Shaun Nichols, Jonathan Weinberg, and Aaron Meskin) that in order to give a happy account of our engagement with and responses to fictions and games of make-believe, we need to postulate not just an imaginative analogue of belief (that is, imagination), but also an imaginative analogue of desire (which we call i-desire – other people call it other things).
Some potentially helpful info, from Mark Moyer (cut and pasted from his email):
Several of us who were at the APA have asked for, and have been given, a refund for our hotel bills for the final night of the conference (due to the fire). Those who have been given refunds include a few people who were on the 7th floor (where the fire was) as well as one person on the 3rd floor who incurred water damage. I don’t know if they are giving a refund to everyone who asks, or just to those on the 7th floor, or … But presumably many people would like to know this so they too can try to get a refund, whether their school was footing the bill or, as for many graduate students, they were footing it themselves. Hence, I thought this might deserve its own post on TAR.
The website for the 2007 edition of the Bellingham Summer Philosophy Conference is up now. This is a great conference – if you’re a philosopher in the market for summer conference-attending, you should probably send them a paper! The site is here.
One of the nice features of being in Canberra is that I get to go for runs around Lake Burley Griffin with Nic Southwood and talk (or wheeze) about philosophy. A nice feature of talking about philosophy while running is that, when I’m actually just out of breath and can’t talk, I can pretend that the reason I’m not talking is that I’m being terribly deep, and having a good hard think about what the best thing to say next is. The last time we went for a run, we got to talking (wheezing) about The Moral Problem. All of the confused parts of what follows are due to me. All of the lucid parts are due to Nic. (Well, except for the ones that are due to Michael Smith or to Brian.)
Here’s a way of stating Michael Smith’s view in TMP that people often get away with:
Aing in C is (morally) right iff our ideally rational selves would advise us to A in C.
(I’ve said this in philosophical company and not had anyone complain about it, and I’ve been in conversations where somebody else said it and all of the rest of us let it pass without complaint.)
Here are two concerns about that view:
1) It doesn’t distinguish the advice that our idealized selves would give on moral grounds from the advice that our idealized selves would give on any other kind of grounds – comic, aesthetic, prudential, or whatever. And so it’s not going to succeed in picking out the morally right. Instead, it’ll pick out something like the advisable all-things-considered.
2) Suppose you think that moral reasons don’t always trump other sorts of reasons. Then you’ll think that there are cases where the (morally) right thing to do is to B, but your ideally rational self would, on account of the stronger countervailing nonmoral reasons, advise you to A. In cases like this (if there are any), the Smithian view above will misclassify Aing as morally right, since it’s the action that our ideally rational selves would, all things considered, advise us to perform.
(There’s a lot of room for filling in cases, and, obviously, a lot of room for disputing about whether particular cases really are examples of the relevant phenomenon. A contentious, but not obviously crazy, example that Brian and I use elsewhere, for other purposes, is a situation where it’d be a little bit morally bad, but really funny, to throw a pie in the face of some undeserving victim.)
Now, the view that people (including me) often get away with attributing to MS isn’t, as far as I can tell from a quick scan, actually endorsed by him anywhere in The Moral Problem. What he actually says is, “our A-ing in circumstances C is right if and only if we would desire that we A in C, if we were fully rational, where A-ing in C is an act of the appropriate substantive kind: that is, it is an act of the kind picked out in the platitudes about substance” (p. 184, his italics). (I’ve replaced phis with ‘A’s, since I’m a blogging amateur and don’t know how to get the Greek letters from my word file into the post.)
It’s not clear, though, how much help this is in handling the two concerns above.
The first objection is clearly what the italicized bit of Smith’s official view is designed to avoid. The move is to distinguish the moral oughts from the nonmoral ones by carving off a domain of actions, such that our idealized selves’ advice about which of those actions to perform is moral advice. (Presumably the same will happen for other sorts of oughts – other domains of behavior will be carved off as the domains of prudential, comic, aesthetic, etc. advice.) (I’m going to be a little bit sloppy about the distinction between what our idealized selves would desire and what they would advise in what follows. I don’t think anything bad will come of it, and I’m too much in the habit of thinking about MS’s view in terms of advice to be able to self-edit reliably…)
The problem is that distinguishing moral and non-moral oughts by appeal to the kinds of actions to which they apply just doesn’t seem like the right way to go. For (pretty much) any kind of action, there can be both moral and non-moral (and various different kinds of non-moral) reasons for performing that kind of action. Sometimes the (predominant) reasons why we ought to cut our hair, sell our shares in Exxon, throw a pie at Brian, eat our vegetables, etc. (or to refrain from doing these things) are prudential. Sometimes the (predominant) reasons why we ought to do (or refrain from doing) these things are moral. If my idealized self would advise me to eat my vegetables for exclusively prudential reasons, then I ought, prudentially, to eat my vegetables, but it’s not the case that I ought, morally, to eat my vegetables. If my idealized self would advise me to eat my vegetables for exclusively moral reasons, then I ought, morally, to eat my vegetables, but it’s not the case that I ought, prudentially, to eat my vegetables. At least, that seems like the natural thing to say. But we can’t say it on Smith’s account. Whether my idealized self’s advising me to A means that I morally ought to A or not depends, on Smith’s account, only on what kind of action Aing is, and not at all on the considerations on the basis of which my idealized self would advise me to perform that sort of action.
It’s also pretty clear that the official view isn’t going to help with (2) – what we need there is, again, a way of identifying a distinctively moral class of reasons for advising one action over another, rather than a way of identifying a distinctively moral realm of behavior, about which all advice is moral advice. Even on Smith’s official view, so long as Aing is an action of the right substantive type, our ideally rational selves advising it – for whatever reason – will be enough to guarantee its rightness. That’ll be enough to render impossible the sort of situation described in (2), where the moral reasons just barely favor Bing, but since they’re outweighed by stronger nonmoral reasons to A, our ideally rational selves would, all things considered, prefer that we A. To the extent that we think this sort of situation is possible, we should be as suspicious of the official view as we were of the not-quite-official one.
So here’s the summary of what Nic and I are worried about, I think (at least, here’s what I’m worried about as a result of talking (wheezing) to Nic about this stuff): Smith’s view looks like it’s trying to draw the distinction between the moral and the prudential, aesthetic, etc. in the wrong place: between sorts of actions, rather than between sorts of reasons for action. And that looks like it’s going to deliver some bad results. If we think that there can be both moral and nonmoral reasons for or against doing more or less anything, and that moral reasons don’t always win when the two conflict, we should expect a lot of misclassification. In cases where the action’s of the relevant substantive type, but the reasons for or against performing it are, in this particular case, exclusively prudential, the view is going to misclassify actions as right (or wrong) when in fact they’re just prudent or imprudent. When the action’s of the right substantive type, and weak moral reasons against are outweighed by stronger nonmoral reasons in favor, the action will be misclassified as morally right. And when the action’s of the wrong substantive type, but there are strong reasons for doing it that are exclusively or primarily moral, the action will fail to be classified as morally right, even though it seems like it ought to be.
(One way to resist this is to say something fancy about what the substantive types are, such that there can’t be both moral and nonmoral reasons for and against performing actions of the relevant types. Maybe that’ll work. I can’t see real clearly how it would go, and I’m concerned that there’ll always be counterexamples, but I don’t have a good argument in hand that it can’t be done. But my guess is that, even if it works, the fancy things one will have to say will include building stuff about reasons into the action types. And if you’re going to do that, why not just start by talking about reasons?)
We’ve knocked around a couple of ideas about how to futz with MS’s view in response to this, but maybe I’ll leave that until after people have had a chance to say why we’ve got it all wrong about the trouble for the official view and there’s no need for any futzing…