26 September, 2007

More Links

I’m mostly pottering around here trying to figure out how iChat, AIM and Skype all work. (Answer: Not as well as I’d like them to.) In the meantime, here are a few links.

Successful navigation through a series of decisions (say, the decision to graduate from high school, the decision to pursue study at a postsecondary institution, the decision to take a course on critical thinking, the decision to take intro-level survey courses in philosophy, the decision to take upper-level courses in philosophy, the decision to major in philosophy, the decision to pursue philosophy at the graduate level, admission to a graduate program, successful advancement through a graduate program, graduation from a graduate program, entrance into the job market, progress through a tenure-track position, etc.) culminates in a student’s entry into a particular area of professional study. By empirically measuring participation rates at various levels of study, we can find out when participation by members of certain groups “drops off” (e.g., after intro-level courses but before the choice to major in a particular area, or after admission into a graduate program but before finishing coursework, etc.). This information can help us pinpoint the level of educational study at which members of underrepresented groups find themselves alienated or disengaged.
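A toy sketch of the kind of measurement this suggests, with all counts invented for illustration: compute stage-to-stage retention for each group and flag the transition where the gap between groups is largest.

```python
# Toy sketch (all counts invented): compare stage-to-stage retention
# between two groups and flag the transition with the largest gap,
# a rough way to locate the "drop off" point described above.
stages = ["intro courses", "upper-level courses", "major",
          "grad admission", "PhD completion"]
counts = {
    "group A": [1000, 450, 180, 40, 18],
    "group B": [1000, 420, 110, 12, 4],
}

def retention(ns):
    """Fraction retained at each transition between consecutive stages."""
    return [later / earlier for earlier, later in zip(ns, ns[1:])]

gaps = [ra - rb for ra, rb in zip(retention(counts["group A"]),
                                  retention(counts["group B"]))]
worst = max(range(len(gaps)), key=gaps.__getitem__)
print(f"Largest retention gap: {stages[worst]} -> {stages[worst + 1]} "
      f"({gaps[worst]:.0%})")
```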

Posted by Brian Weatherson at 3:06 pm

2 Comments »

23 September, 2007

Quick Links

I’m mostly just worrying about (a) the Grand Final and (b) the pile of editing on my desk. But here are some other quick points.

Posted by Brian Weatherson at 3:09 pm

No Comments »

18 September, 2007

Infinite Probabilities

There is an odd paper by Jeanne Peijnenburg in the latest Mind. (It’s subscription only, so no link.) There’s a formal point and a philosophical point.

The formal point concerns the following question. Are there values of a_1, b_1, a_2, b_2, … such that, given that P(E_i | E_{i+1}) = a_i and P(E_i | ~E_{i+1}) = b_i for all i, we can compute the value of P(E_1)? This is answered in the affirmative, in some complicated cases where we have to compute some tricky infinite sequences.
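The engine of the computation, by the law of total probability, is the recurrence P(E_i) = a_i·P(E_{i+1}) + b_i·(1 − P(E_{i+1})). Here is a minimal sketch (my own illustration, not Peijnenburg’s construction) of why truncating the infinite chain works: the influence of the arbitrary seed shrinks by a factor of |a_i − b_i| at each step, so the value converges as the truncation depth grows.

```python
# Sketch of the formal point (an illustration, not Peijnenburg's own
# construction). By total probability:
#     P(E_i) = a_i * P(E_{i+1}) + b_i * (1 - P(E_{i+1}))
# Truncate the chain at some depth, seed the tail with an arbitrary
# guess, and iterate backwards; the seed's influence shrinks by a
# factor of |a_i - b_i| per step, so the value converges.

def p_e1(a, b, depth, seed=0.5):
    """Approximate P(E_1) from a[i] = P(E_i|E_{i+1}), b[i] = P(E_i|~E_{i+1})."""
    p = seed  # arbitrary stand-in for the far end of the chain
    for i in reversed(range(depth)):
        p = a[i] * p + b[i] * (1 - p)
    return p

# Constant chain a_i = 0.9, b_i = 0.2: the fixed point is b/(1-a+b) = 2/3.
a, b = [0.9] * 100, [0.2] * 100
for depth in (5, 20, 100):
    print(depth, p_e1(a, b, depth))   # approaches 0.6666...
```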

The philosophical point is that this is meant to be a defence of infinitism, à la Peter Klein. The idea, if I’ve understood it, is that (contra Klein’s critics) we can deduce unconditional probabilities from an infinite string of conditional probabilities. So probabilities don’t have to be ‘grounded’ in unconditional probabilities, as Klein suggests.

But there’s a much simpler way to prove the formal point. If a_1 = b_1 = x, then P(E_1) = x, whatever the other values are, since by total probability P(E_1) = x·P(E_2) + x·(1 − P(E_2)) = x. Here is a way to get from conditional probabilities to unconditional probabilities, and we don’t even need an infinite chain. So I don’t see how this is meant to give any support to infinitism. Maybe I’m just missing something here. At the very least, I’m certainly missing how these computations of particular probabilities support the idea that infinite chains can justify old-fashioned, non-probabilistic belief.
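A quick numerical check of that shortcut, with illustrative values only:

```python
# If a_1 = b_1 = x, then P(E_1) = x * P(E_2) + x * (1 - P(E_2)) = x,
# whatever P(E_2) turns out to be. The values here are arbitrary.
x = 0.4
for p_e2 in (0.0, 0.25, 0.99):
    print(round(x * p_e2 + x * (1 - p_e2), 10))   # 0.4 every time
```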

Posted by Brian Weatherson at 11:14 pm

2 Comments »

Tuesday Morning Links

The first two are things I possibly should have added to the link to Robbie’s post below.

Posted by Brian Weatherson at 10:37 am

No Comments »

Get a Job (in Britain)

Robbie Williams has a good guide to the British job market.

Posted by Brian Weatherson at 8:01 am

No Comments »

17 September, 2007

Who Knew?

From the NYT.

Where do moral rules come from? From reason, some philosophers say. From God, say believers. Seldom considered is a source now being advocated by some biologists, that of evolution.

Someone should tell Brian Skyrms. I bet he’d have something interesting to say about this newly considered source.

Posted by Brian Weatherson at 10:39 pm

3 Comments »

Representation Theorems

This may all be old news to philosophers who work on decision theory and related things, but I think it bears repeating.

There’s an interesting post up at Cosmic Variance by the physicist Sean Carroll wondering idly about some issues that come up in the foundations of economics. One paragraph in particular caught my eye:

But I’d like to argue something a bit different – not simply that people don’t behave rationally, but that “rational” and “irrational” aren’t necessarily useful terms in which to think about behavior. After all, any kind of deterministic behavior – faced with equivalent circumstances, a certain person will always act the same way – can be modeled as the maximization of some function. But it might not be helpful to think of that function as utility, or [of] the act of maximizing it as the manifestation of rationality. If the job of science is to describe what happens in the world, then there is an empirical question about what function people go around maximizing, and figuring out that function is the beginning and end of our job. Slipping words like “rational” in there creates an impression, intentional or not, that maximizing utility is what we should be doing – a prescriptive claim rather than a descriptive one. It may, as a conceptually distinct issue, be a good thing to act in this particular way; but that’s a question of moral philosophy, not of economics.

There’s a lot of stuff in here. Part of this is a claim that science only addresses descriptive issues, not normative ones (or “prescriptive” in his words – I’m not sure what distinction there is between those two words, except that “prescriptive” sounds more like you’re meddling in other people’s activities). To a physicist, I think, this claim sounds natural, but I’m not sure that it’s true. I think it’s perhaps clearest in linguistics that scientific claims are sometimes about normative principles rather than merely descriptive facts. As discussed in this recent post by Geoffrey Pullum on Language Log, syntax is essentially an empirical study of linguistic norms – it’s not just a catalog of what sequences of words people actually utter and interpret, but includes their judgments of which sequences are right and wrong. Linguists may call themselves “descriptivists” to contrast with the “prescriptivists” who don’t use empirical evidence in their discussions of grammaticality, but they still deal with a notion of grammaticality that is essentially normative.

I think the same is true of economics, though the sort of normativity is quite different from the norms of grammaticality (and the other norms studied in semantics and pragmatics). There is some sort of norm of rationality, but of course it’s (probably) different from the sort of norm discussed in “moral philosophy”. Whether or not it’s a good thing to maximize one’s own utility, there’s a sense in which it’s constitutive of being a good decision maker that one does. Of course, using the loaded term “rationality” for this might be putting more force on this norm than we ought to (linguists don’t call grammaticality a form of rationality, for instance) but I think it’s actually a reasonable name for it. The bigger problem with the term “rationality” is that it can be used both to discuss good decision making and also good reasoning, thus confusing “practical rationality” and “epistemic rationality”.

And that brings me to the biggest point I think there is in this paragraph. While there might be good arguments that maximizing utility is the expression of rationality, and there might be some function that people descriptively go around maximizing, it’s not clear that this function will actually be utility. One prominent type of argument in favor of the claim that degrees of belief must obey the axioms of probability theory is a representation theorem. One gives a series of conditions that it seems any rational agent’s preferences should obey, and then shows that for any preference ordering satisfying those conditions there is a pair of a “utility function” and a “probability function” (unique, up to positive affine transformation of the utility) such that the agent’s preferences always maximize expected utility. However, for each of these representation theorems, at least some of the conditions on the preference function seem overly strong to require of rational agents, and then even given the representation, Sean Carroll’s point still applies – what makes us sure that this “utility function” represents the agent’s actual utilities, or that this “probability function” represents the agent’s actual degrees of belief? Of course, the results are very suggestive – the “utility function” is in fact a function from determinate outcomes to real numbers, and the “probability function” is a function from propositions to values in the interval [0,1], so they’re functions of the right sort to do the job we claim they do. But it’s certainly not clear that there’s any psychological reality to them, the way it seems there should be (even if subconscious) for an agent’s actual utility and degree-of-belief functions.
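As a small illustration of that last worry (a toy example with invented numbers, not any particular representation theorem): an agent’s choices are recovered equally well after any positive affine rescaling of the “utility function”, so behavior alone never pins down the utility numbers themselves.

```python
# Toy illustration (invented numbers): preferences that maximize expected
# utility are unchanged by a positive affine rescaling of the utility
# function, so nothing in the agent's choices singles out one set of
# utility numbers as the psychologically real one.
probability = {"rain": 0.3, "shine": 0.7}   # the "probability function"

def expected_utility(act, utility):
    """act maps states to outcomes; utility maps outcomes to reals."""
    return sum(p * utility[act[state]] for state, p in probability.items())

acts = {
    "take umbrella": {"rain": "dry, encumbered", "shine": "encumbered"},
    "go without":    {"rain": "soaked",          "shine": "unencumbered"},
}
u = {"dry, encumbered": 0.6, "encumbered": 0.5,
     "soaked": 0.0, "unencumbered": 1.0}
u_rescaled = {o: 100 * v - 7 for o, v in u.items()}  # positive affine transform

for utility in (u, u_rescaled):
    ranking = sorted(acts, reverse=True,
                     key=lambda a: expected_utility(acts[a], utility))
    print(ranking)   # same ordering under both utility functions
```

The same ordering of acts comes out under both utility functions, which is why the representation, by itself, settles nothing about psychological reality.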

However, if this sort of argument can be made to work, then we do get a connection between an agent’s observed behavior and her utility function. We shouldn’t assume her decisions are always made in conformity with her rational preferences (since real agents are rarely fully rational), but if these conditions of rationality are correct, then there’s a sense in which we should interpret her as trying to maximize some sort of expected utility, and just failing in certain instances. This sense is related to Donald Davidson’s argument that we should interpret someone’s language as having meanings in such a way that most of their assertions come out as true. In fact, in “The Emergence of Thought”, he argues that these representation theorems should be united with his ideas about “radical translation” and the “principle of charity” so that belief, desire, and meaning all fall out together. That is, the normativity of rationality in the economic sense (as maximizing expected utility) just is part of the sort of behavior agents have to approximate in order to be said to have beliefs, desires, or meaning in their thoughts or assertions – that is, in order to be an agent.

So I think Sean Carroll’s found an important point to worry about, but there’s already been a lot of discussion on both sides of this, and he’s gone a bit too fast in assuming that economic science should avoid any talk of normativity.

Posted by Kenny Easwaran at 7:01 pm

No Comments »

16 September, 2007

Not Quite so Rigid

According to CNN, the official kilogram is lighter than it used to be. The consequences for semantic theory are not remarked upon in the article.

Posted by Brian Weatherson at 11:53 pm

3 Comments »

6 September, 2007

Thursday Links

Quick hits while feeling happy that iTunes has finally added album ratings.

Posted by Brian Weatherson at 12:02 pm

1 Comment »

5 September, 2007

Another Link

To the philosophy bites blog, which is mostly a collection of podcast interviews with (mostly) British philosophers. I haven’t listened to any of them, but hopefully will soon. It’s a great idea, and apparently is doing well on the iTunes podcast charts.

I keep meaning to try out podcasting, but first I guess I better figure out how to record things, and how to speak in a radio voice.

Posted by Brian Weatherson at 10:24 pm

No Comments »
