First of all, I’d like to thank Brian for inviting me to post here (like Gillian and Carrie, and some others that haven’t decided to say anything yet).

The topic I’m interested in is a parallel between the sorites arguments typical in discussions of vagueness and certain arguments for the adoption of strong new axioms in set theory. (I discussed some of those arguments in this post on my other blog.)

Basically, the idea is based on Gödel’s second incompleteness theorem. For every nice enough theory T (basically, T needs to be strong enough to represent basic arithmetic, and orderly enough that you can effectively tell whether or not a given statement is an axiom), there is an arithmetical statement called Con(T) that expresses the consistency of T. Gödel’s theorem says that if T is consistent, then Con(T) is not provable from T itself (and if T’s axioms are all true, it isn’t disprovable either). But if T has only true statements as axioms, then it must clearly be consistent, so Con(T) is true and T is incomplete. In particular, there is a true theory T’ that proves all the consequences of T, but also proves Con(T), and it seems we should adopt T’ instead of T.
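Schematically, the argument generates a tower of ever-stronger theories (the indexed notation here is mine, just to make the iteration explicit):

```latex
\begin{align*}
T_0 &= T \\
T_{n+1} &= T_n + \mathrm{Con}(T_n)
\end{align*}
% If T_0 has only true axioms, then each T_n is true, hence consistent;
% but by Gödel's second incompleteness theorem no consistent T_n proves
% Con(T_n), so each step is a genuine strengthening.
```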

Because this argument then applies to T’ as well, we seem to either have to withhold judgement on our initial theory, or adopt a theory far stronger than what we started out with. (A recent post of mine discusses a parallel argument by Roger Penrose that claims to show that our mathematical knowledge is given instead by a non-computable theory.) Set theorists often use this argument to show that mathematicians who accept the axioms of ZFC (the standard framework most mathematicians tacitly accept as the foundations of what they do) must therefore accept much stronger principles as well, despite the fact that they can’t be proven.

I’d like to agree with the set theorists, but this argument reminds me of some fallacious reasoning in cases of vagueness. The idea is that for each theory T, if T is true, then T+Con(T) must be as well. This is similar to the claim that for every n, if n grains of sand don’t make a heap, then n+1 don’t either. The argument that shows mathematicians must accept every large cardinal claim that set theorists come up with is parallel to the argument that there are no heaps of sand. It seems plausible to me that whatever account of vagueness one has to block the heap argument could be adapted to block the set-theoretic argument. On the other hand, if a solution to the problem of vagueness doesn’t apply to the mathematical case (perhaps because it seems implausible to assign intermediate truth-values to mathematical claims or something), then one might see this solution as somehow lacking.
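To make the structural parallel explicit (the predicate letters are just my shorthand, not anything from the vagueness literature): write H(n) for “n grains of sand make a heap” and True(S) for “all the axioms of S are true”. Both arguments run an induction on a tolerance principle:

```latex
% Sorites:
\neg H(1), \qquad \forall n\,\bigl(\neg H(n) \rightarrow \neg H(n+1)\bigr)
\;\;\vdash\;\; \forall n\,\neg H(n)

% Consistency tower:
\mathrm{True}(T), \qquad \forall S\,\bigl(\mathrm{True}(S) \rightarrow \mathrm{True}(S + \mathrm{Con}(S))\bigr)
\;\;\vdash\;\; \mathrm{True}(S) \text{ for every theory } S \text{ in the chain starting at } T
```

In both cases each conditional step looks irresistible on its own; it is the global conclusion that is in dispute.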

Some people might also run the sorites argument as a modus tollens instead of a modus ponens, saying that any number of grains of sand forms a heap, and similarly that ZFC is not true (because adopting it as true forces them towards further claims about large sets that they have trouble believing).

A position like this is adopted by finitists, who accept claims about various finitary mathematical objects (like natural numbers, rational numbers, and the like) but only accept “potential infinities” (like a list that one can keep adding to) rather than “completed infinities” (like the actual precise value of some irrational number, or some non-computable set of natural numbers). However, most finitists accept Peano Arithmetic as a set of axioms, and a similar argument works starting with PA to drive one seemingly inexorably towards ZFC, and thus to the higher infinite.

The even more drastic solution accepted by some is known as ultrafinitism, on which one doubts even some “finite” numbers. In practice, these doubts arise about extremely large numbers, like a googolplex, which are believed to be larger than the number of subatomic particles in the entire universe. However, a similar sorites argument is going to cause trouble for the ultrafinitist – if the ultrafinitist accepts that the natural number n makes sense mathematically, then it would also seem that she should accept that the natural number n+1 does as well. (After all, one can just take the set of n things that one already has, and add this set itself as an element to create a new set!) If this is right, then once one accepts that even a single natural number makes sense, this successor principle pushes one inexorably towards at least full finitism, if not towards the infinite. The ultrafinitist has to reject the claim that every natural number has a successor, yet it seems she can’t plausibly point to any particular number as “the last one”.

I’d like to use these arguments to show that once one adopts any amount of mathematics, one basically has to go “all the way to the top” in terms of the scales of infinity. However, these arguments seem to share troubling features with sorites arguments that we do want to block, so I’m interested in seeing what accounts of vagueness might do to them.

(The idea for this post originated in a class I co-taught last week with Mike Shulman to mathematically talented high school students at the Canada/USA Mathcamp.)

Hey Kenny,

Some similar issues are discussed in Dummett’s ‘Wang’s Paradox’ in the classic ’75 Synthese volume on the logic and semantics of vagueness, reprinted in Truth and Other Enigmas and in the Keefe and Smith Vagueness reader. Dummett defends his intuitionism against the charge that a strict finitist is in a position to rerun Dummett’s critique of the classicist against the intuitionist. His tactic is to show that the coherence of strict finitism depends on the coherence of particular kinds of sets; it has to be coherent for there to be sets of natural numbers which are closed under successor, but which nonetheless have an upper bound. It’s claimed that the extensions of vague predicates make it plausible that there are such sets, for example, the set of beats of my heart in my childhood. Dummett argues that the kind of sets the strict finitist needs aren’t in fact coherent; intuitionism is thus shown not to be resting on unstable middle ground between strict finitism and classicism.

Anyways, since I know the vagueness stuff much better than I know the math, could you say a little more about why you think there’s something like a Sorites here?

In the heap case, the paradox is generated because the conclusion seems clearly unacceptable, but each of the steps to it seems irresistible. What I’d like to get clearer on is why you think the situation here is similar in the relevant respects. What unacceptable conclusion do we reach by this reasoning from acceptance of ZFC? What’s wrong with going “all the way to the top”? (I remember Josh Dever talking about some result by Ken Kunen that might be relevant here, but I don’t remember the details).

I’m also not up to speed on the technicalities enough to know what the connection is between accepting principles of the form T -> T’ (where T’ proves all the consequences of T plus Con(T)), and accepting increasingly large large cardinal assumptions. Is the connection here formal or does it rest on an analogy? (Again, sorry Josh, I’m sure I should know the answer to this by now).

Ok, found the answer to my last question in the antimeta post you linked.

Kenny, I think you are wrong about ultrafinitism. I think ultrafinitism (or ‘strict finitism’, as the position is sometimes called) can accept that every number has a successor without going ‘all the way to the top’. The reason is that for ultrafinitists, not only must numbers be small, but proofs must be short as well. So although for any given number n, one can prove that n+1 exists, one cannot combine too many of these steps to get a proof of the existence of a very large finite number.
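A toy way to picture this proof-length move (purely illustrative; the bound, the cost model, and the function names are my inventions, not anything from the ultrafinitist literature):

```python
# Toy model of the ultrafinitist's move: proofs have a maximum
# length, so each single successor step is provable, but the steps
# can't be chained far enough to reach a very large number.
MAX_PROOF_LENGTH = 1000  # hypothetical bound on how long a proof may be

def provable_exists(n):
    """In this toy, proving that n exists takes about n successor
    steps from 0, so n's existence is provable iff n fits the budget."""
    return n <= MAX_PROOF_LENGTH

# Each individual successor inference is available...
assert provable_exists(0) and provable_exists(1)
# ...but a googolplex-sized number is out of reach:
assert not provable_exists(10**100)
```

Of course, this toy has a sharp cutoff at MAX_PROOF_LENGTH, which is exactly the kind of “last number” the vagueness-friendly ultrafinitist wants to avoid positing; it illustrates the proof-length idea, not a full answer to the sorites worry.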

As Aidan points out above, the coherence of ultrafinitism has been challenged by Dummett in ‘Wang’s Paradox’ for reasons related to the ones you discuss, but in a paper I am currently writing, I argue that Dummett’s argument does not work and that ultrafinitists can accept the existence of sets of natural numbers that are closed under successor but have an upper bound (I can send you a draft if you’re interested).

One small thing: Aidan, I am a little surprised at your interpretation of Dummett. I didn’t understand Dummett’s paper as defending intuitionism, but rather as presenting a challenge or problem for intuitionism. For example, on p. 302 (of the Synthese version) Dummett says: “If strict finitism were to prove to be internally incoherent, then either such a disanalogy exists [a disanalogy between the intuitionistic arguments against platonism and the strict finitistic arguments against intuitionism], or the argument for traditional constructivism is unsound, even in the absence of any parallel incoherence in the constructivist position”. (And if I am not mistaken, Dummett’s suggestion that if strict finitism is incoherent then this is also a problem for intuitionism is defended in Crispin Wright’s ‘Strict Finitism’, Synthese 1982.)

Well, I don’t have access to Dummett’s paper at the moment, and it’s certainly possible I’ve misremembered the emphasis. But that said, I don’t see how the passage you quote supports the alternative reading; it looks like a disjunction, with the first disjunct being what I take to be Dummett’s conclusion and the second being your alternative suggestion. The context of the quote might settle the matter, but I don’t remember what the context is. But the wider context of Dummett’s work from the late ’50s onwards suggests to me that there would be something odd about Dummett arguing that strict finitism is incoherent, and that this incoherence infects the position he has been at pains to defend. I’ll need to look at the text when I’m back in the States next week.

I haven’t read Crispin’s paper yet – reading ‘Realism, Meaning, and Truth’ is one of my big projects for the next couple of months.

Yes, I agree with you that there is a disjunction here, but Dummett says nothing in the paper to support the first disjunct (namely, that there is a disanalogy between the finitistic argument and the intuitionistic one). I therefore take it that the paper at least poses a challenge to intuitionism: to show that there exists such a disanalogy. (Rather than your reading, which sees the paper as essentially providing a response to this challenge). I also had a look at the Wright paper again. Here is what he says in the first paragraph, which I take it supports my reading of Dummett:

“This paper is primarily concerned with the Modus Tollens inference adverted to by Dummett in ‘Wang’s Paradox’, namely: arguments essentially analogous to those which the mathematical intuitionists…use to support their revisions of classical logic and mathematics lead to a yet more radical strict finitist outlook; this outlook, however, is incapable of issuing in a coherent philosophy of mathematics; therefore there must be something amiss with the arguments which lead to it, and by analogy, with the original intuitionistic argument also.”

Finally, I am not an expert on Dummett’s work, but I have the feeling that the commonly shared perception according to which Dummett is a major defender of intuitionism is not quite accurate. I think Dummett sought to show that intuitionism was a very interesting position that has a lot going for it, but never quite explicitly endorsed it.

Well, again I’m going to say the passage from Wright needs to be taken in context. It is true that Dummett ‘adverted’ to that line of reasoning; the passage you quoted in your first post is evidence enough of that. That doesn’t yet show even that Crispin took that to be the intended conclusion of Dummett’s paper. But look, I’m aware I’m currently doing scholarship based on memories and a priori reflection, which is never a good plan. I’m certainly convinced that I need to read the paper again with fresh eyes, and I can’t do that just now.

The issue of Dummett’s own conviction in intuitionism is tricky. In ‘Realism and Anti-Realism’ in The Seas of Language, he presents things much as you suggest. But it’s hard to square that with many of his other writings, in which he spends a tremendous amount of ink and energy espousing and developing arguments which reach the conclusion that we should adopt intuitionistic logic (not just the manifestation and acquisition arguments, but also his argument from indefinite extensibility), and with his criticisms of neo-Fregeanism’s reliance on impredicative definitions. By most of our normal criteria for attributing philosophical positions, we’d place Dummett in the intuitionist camp. So I agree there are complications here, but I still don’t find it a stretch to think that in general Dummett was more prone to defend than to attack intuitionism’s critique of classical mathematics in the period in which ‘Wang’s Paradox’ was written. But again, textual evidence could prove me wrong, and I don’t have the text in front of me.

Hiya,

There is some really interesting discussion of the “vagueness” maneuver in this sort of context in a couple of papers by Hartry Field (collected in Truth and the Absence of Fact). The context there is looking for what could count as “ideal” arithmetic, where one constraint on an ideal theory is that it should be axiomatizable (I’ve forgotten exactly why; maybe to make the consequence relation tractable). Repeat the addition of consistency statements too often and you’ll get an unaxiomatizable theory.

So (as I remember it) Field suggests that adding consistency principles gives you an “improved theory” the first few times you do it, but that somewhere (and it’s vague exactly where) you don’t get a “better” theory by adding the consistency statement.

Field is interested in views according to which (speaking loosely) arithmetical truth doesn’t outrun the consequences of ideal theory (fictionalism being one member of this family). So you now get the following surprising consequence. Suppose that ideal arithmetic is somewhere in the chain that begins with PA and proceeds by adding consistency principles, but that it’s vague where this is. Then no matter how you precisify the vagueness, the consistency of ideal arithmetic won’t follow from ideal arithmetical theory (of course, the inconsistency doesn’t follow either). But since ideal arithmetical theory is supposed to be the sole determinant of truth in this area, there is no fact of the matter about whether or not ideal arithmetic is consistent.

Anyway, I may have misreported Field here (if so, apologies), but in any case: I recommend the original article!

Sounds like I’ll definitely have to check out the stuff surrounding Dummett’s positions in this debate! It’s interesting to hear that someone might consider the possibility of a bounded set that is nevertheless closed under successor. I think I once briefly had the idea that maybe all the different ordinals and cardinals could be identified with really really big finite numbers, but I think you’d have to embrace a lot of the ultrafinitist ideas to get this to make any sense.

Robbie – I’ll have to look at this Field stuff to see what he’s already said about the interaction between vagueness and these issues! Especially since I’m interested in fictionalism – though it seems Field thinks truth in the fiction is much more bounded than I would have thought.