First of all, I’d like to thank Brian for inviting me to post here (like Gillian and Carrie, and some others who haven’t decided to say anything yet).

The topic I’m interested in is a parallel between the sorites arguments typical in discussions of vagueness and certain arguments for the adoption of strong new axioms in set theory. (I discussed some of those arguments in this post on my other blog.)

Basically, the idea is based on Gödel’s second incompleteness theorem. For every nice enough theory T (basically, T needs to be strong enough to represent basic arithmetic, and orderly enough that you can effectively tell whether or not a given statement is an axiom), there is an arithmetical statement called Con(T) that says that T is consistent. Gödel showed that if T is consistent, then T cannot prove Con(T); and if T proves only true statements, it cannot disprove Con(T) either. But if T has only true statements as axioms, then it must clearly be consistent, so Con(T) is a true statement that T leaves undecided, and T is incomplete. In particular, the theory T’ = T + Con(T) is true, proves all the consequences of T, but also proves Con(T), and it seems we should adopt T’ instead of T.
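Schematically, the step from T to the stronger theory can be written out like this (a sketch in standard notation, where Con(T) abbreviates the arithmetical consistency sentence for T):

```latex
% Goedel's second incompleteness theorem, for suitable T:
T \nvdash \mathrm{Con}(T) \quad \text{(provided $T$ is consistent)}
% If every axiom of T is true, then T is consistent, so Con(T) is true
% but undecided by T. Hence the strengthened theory
T' \;=\; T + \mathrm{Con}(T)
% is also true, extends T, and settles the sentence T left open.
```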

Because this argument then applies to T’ as well, we seem either to have to withhold judgement on our initial theory, or to adopt theories far stronger than the one we started with. (A recent post of mine discusses a parallel argument by Roger Penrose that claims to show that our mathematical knowledge is given instead by a non-computable theory.) Set theorists often use this argument to show that mathematicians who accept the axioms of ZFC (the standard framework most mathematicians tacitly accept as the foundations of what they do) must therefore accept much stronger principles as well, despite the fact that they can’t be proven.

I’d like to agree with the set theorists, but this argument reminds me of some fallacious reasoning in cases of vagueness. The idea is that for each theory T, if T is true, then T+Con(T) must be as well. This is similar to the claim that for every n, if n grains of sand don’t make a heap, then n+1 don’t either. The argument that shows mathematicians must accept every large cardinal claim that set theorists come up with is parallel to the argument that there are no heaps of sand. It seems plausible to me that whatever account of vagueness one has to block the heap argument could be adapted to block the set-theoretic argument. On the other hand, if a solution to the problem of vagueness doesn’t apply to the mathematical case (perhaps because it seems implausible to assign intermediate truth-values to mathematical claims or something), then one might see this solution as somehow lacking.
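The parallel between the two inductive principles can be displayed side by side (a sketch; the iterated sequence of theories is my notation, not anything official):

```latex
% Sorites premise: for every n,
\neg\,\mathrm{Heap}(n) \;\rightarrow\; \neg\,\mathrm{Heap}(n+1)
% Set-theoretic analogue: for every suitable theory T,
T \text{ is true} \;\rightarrow\; T + \mathrm{Con}(T) \text{ is true}
% Iterating from a true base theory drives one ever upward:
T_0 = \mathrm{ZFC}, \qquad T_{n+1} = T_n + \mathrm{Con}(T_n)
```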

Some people might also run the sorites argument as a modus tollens instead of a modus ponens, concluding that any number of grains of sand forms a heap, and similarly that ZFC is not true (because adopting it as true forces them towards further claims about large sets that they have trouble believing).

A position like this is adopted by finitists, who accept claims about various finitary mathematical objects (like natural numbers, rational numbers, and the like) but only accept “potential infinities” (like a list that one can keep adding to) rather than “completed infinities” (like the actual precise value of some irrational number, or some non-computable set of natural numbers). However, most finitists accept Peano Arithmetic as a set of axioms, and a similar argument works starting with PA to drive one seemingly inexorably towards ZFC, and thus to the higher infinite.

The even more drastic solution accepted by some is known as ultrafinitism, on which one doubts even some “finite” numbers. In practice, these doubts arise about extremely large numbers, like a googolplex, which are far larger than the number of subatomic particles in the entire observable universe. However, a similar sorites argument is going to cause trouble for the ultrafinitist – if the ultrafinitist accepts that the natural number n makes sense mathematically, then it would also seem that she should accept that the natural number n+1 does as well. (After all, one can just take the set of n things that one already has, and add this set itself as an element to create a new set!) If this is right, then once one accepts that even a single natural number makes sense, this successor principle pushes one inexorably towards at least full finitism, if not towards the infinite. The ultrafinitist has to reject the claim that every natural number has a successor, but it seems that she can’t point to any particular number as “the last one”.
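The successor step in the parenthetical remark can be made explicit in the standard von Neumann coding of the natural numbers (a sketch):

```latex
% Each natural number is identified with the set of its predecessors:
0 = \varnothing, \qquad n+1 = n \cup \{n\}
% so, e.g., 1 = \{0\}, \; 2 = \{0,1\}, \; 3 = \{0,1,2\}.
% From any n one accepts, the set n \cup \{n\} is immediately available,
% and this is exactly the successor principle the ultrafinitist must resist.
```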

I’d like to use these arguments to show that once one adopts any amount of mathematics, one basically has to go “all the way to the top” in terms of the scales of infinity. However, these arguments seem to share troubling features with sorites arguments that we do want to block, so I’m interested in seeing what accounts of vagueness might do to them.

(The idea for this post originated in a class I co-taught last week with Mike Shulman to mathematically talented high school students at the Canada/USA Mathcamp.)

Posted by Kenny Easwaran in *Uncategorized*