December 6th, 2009

Autonomy and Logicism

Stewart Shapiro’s 1998 paper “Induction and indefinite extensibility: The Gödel sentence is true, but did someone change the subject?” makes a really nice point that I hadn’t thought about before. Let’s say that you’re attracted to the following two kinds of claims.

1) Autonomy in logic – A logical truth should be provable using just the logical rules for the logical constants occurring in it.
2) Logicism in mathematics – Mathematical truths are logically true, in a properly extended sense of ‘logically’.

Several writers, tracing back at least to Dummett, have used (1) to argue against classical logic. The problem is that Peirce’s Law, i.e. ((p → q) → p) → p, is a classical theorem. But, they say, it can’t be proven given just the appropriate logical rules for →. The latter claim is controversial, but plausible.
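To see the shape of the complaint, it helps to have a derivation in front of us. Here is one familiar classical route (only an illustration, not the only possible proof):

1. Assume (p → q) → p.
2. Assume ¬p, for reductio.
3. Assume p.
4. From 2 and 3, ⊥.
5. From 4, q, by ex falso.
6. Discharging 3, p → q.
7. From 1 and 6, p.
8. From 2 and 7, ⊥.
9. Discharging 2, ¬¬p.
10. From 9, p, by double negation elimination.
11. Discharging 1, ((p → q) → p) → p.

This particular derivation leans on the rules for ¬ and ⊥, and on double negation elimination; the contested question is whether any derivation could get by on the → rules alone.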

The problem is that if you don’t like Peirce’s Law for this reason, you should think that ∃x ∃y (x ≠ y) should be provable using just the logical rules for ∃ and ≠, at least if it is a logical truth. Now on the one hand, it doesn’t seem plausible that the rules for ∃ and ≠ suffice to prove ∃x ∃y (x ≠ y). Certainly the intuitionist rules for ∃ and ≠ don’t suffice.

On the other hand, 0 ≠ 1 is a mathematical truth. So ∃x ∃y (x ≠ y) is a mathematical truth. So, given logicism, it is a logical truth. So, given autonomy, it should be provable by considerations purely about ∃ and ≠.
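Spelled out, the logical part of that argument is trivial:

1. 0 ≠ 1 (the mathematical premise).
2. ∃y (0 ≠ y), from 1 by ∃-introduction.
3. ∃x ∃y (x ≠ y), from 2 by ∃-introduction.

The only logical rule used is ∃-introduction; all the real work is done by the mathematical premise.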

Of course, there’s a lot that can be said about various ways out of this puzzle you might try while holding on to something like (1) and (2). But ∃x ∃y (x ≠ y) isn’t the only case where there’s a conflict between (1) and (2), so the ways out have to do a lot of work. Shapiro’s paper is a very nice study of what the options are here, and why they all look fairly problematic. I recommend the paper to anyone who is interested in these issues. (Though, to be fair, there probably aren’t that many people like me who both are interested in these issues and haven’t read the paper already!)

Posted by Brian Weatherson in Uncategorized

7 Responses to “Autonomy and Logicism”

  1. crispin says:

    Brian, I am not sure what you have in mind as proof theory for ≠, but if we add the N-in and N-out transitions characteristic of moves across Hume’s principle as supplementary rules for =, and set them in a normal (intuitionist, impredicative) second-order logic, it is straightforward to establish ∃x ∃y (x ≠ y) by considerations purely about ∃, =, and negation.

  2. Brian Weatherson says:

    Hi Crispin!

    I was thinking that the N-in and N-out rules were rules for N, not =. So the rules for = would just be reflexivity of identity, and indiscernibility of identicals. It’s true that if N-in and N-out are rules for =, then we have a proof of ∃x ∃y (x ≠ y). But I’m unsure why we should think Hume’s principle provides rules for identity, rather than rules for numbers.
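    (For concreteness, here is roughly the proof I take Crispin to have in mind, using the usual Fregean definitions 0 = Nx:(x ≠ x) and 1 = Nx:(x = 0). Suppose 0 = 1. By N-out there is a 1-1 correlation between the concepts x ≠ x and x = 0. But x = 0 has an instance, namely 0 itself, since 0 = 0 by reflexivity, while x ≠ x has none; so there is no such correlation. Hence 0 ≠ 1, and two applications of ∃-introduction give ∃x ∃y (x ≠ y).)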

  3. Greg Restall says:

    Hi Crispin, Hi Brian,

    Another way to go, in line with Crispin’s general approach, is to take the N rules to somehow play a role in a larger family of rules for singular terms, and hence, for the existential quantifier, since those rules depend on the range of singular terms for their own field of applicability.

    Then the N rules tell us something of how numerical terms work, and since they’re singular terms, they feed into the existential quantifier rules.

    Then since identity rules are difficult — I think the best way to understand rules for identity use predicate ‘variables’ (following Stephen Read’s nice paper “Identity and Harmony” Analysis 2004) — I don’t find it surprising that you get the failure of the subformula property and a proof of something beyond what is given in first order logic with identity alone.

    What this means for logicism is another matter.

  4. crispin says:

    Greg’s proposal is congenial to me. What I had in mind, though, was more narrowly conceived, viz. to view the N-in and -out rules as characterising simultaneously the inferential role of the N-operator and (as further characterising) ‘=’. As Brian in effect notes, it is usual to give the identity in-rule schematically as
    ⊢ x = x
    which obviously says nothing about cases where disparate terms flank ‘=’. So there is prima facie space for a further determination of the content of ‘=’ as it features in such cases.

    Is it legitimate for putative inferential role characterisations to work on more than one concept at a time? Well, why not – provided they are at least consistent with prior partial characterisations?

    What does any of this have to do with logicism? Not much, I think, unless we insist that all logical truths, properly so regarded, are autonomous truths. But logicism is an epistemological thesis, and I am not sure that much of epistemological interest hangs on the distinction between those truths that can be deduced using just the proof theory for the constants featuring explicitly in them and those that rest in addition on truths established via the proof theory for additional notions that ‘disappear’ under e.g. an application of the in-rule for their main operator. Presumably no existential statement can count as a logical truth except under the more relaxed rubric.

  5. Brian Weatherson says:

    That clarifies a lot Crispin – thanks! I just want to try to clarify what’s going on (largely for my own benefit) and see if I can state where the challenges might be.

    We can define a narrow notion of ‘logical’ on which all logical truths are autonomous. But it’s no part of logicism that mathematical truths are logical in this narrow sense. In fact perhaps no existential claims are logical in this narrow sense, or perhaps the bare claim that something exists is the only one that is.

    What is distinctive of logicism is that we have a special reason to believe in the truths of mathematics. And this reason is analogous in certain respects to the reasons we have to believe in logical truths in the narrower sense. In particular, it might be that mathematical truths can be properly derived from the inferential rules constitutive of mathematical concepts.

    This does leave open a kind of possibility that a classical logician who is also a logicist might want to exploit. Perhaps it is partially constitutive of → that from (p → ⊥) → ⊥ we can infer p. Or perhaps it is partially constitutive of ∨ that from zero premises we can infer p ∨ ¬p. Perhaps that’s a second intro rule for ∨ that we can only understand once we comprehend ¬.

    The classical logician who says this won’t say that Peirce’s Law is logical in the narrow sense, any more than ∃x ∃y (x ≠ y) is. But Peirce’s Law does have, they say, the same kind of justification as mathematics does, because it can be derived from inferential roles.
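    (For instance, with p ∨ ¬p available as a zero-premise ∨ rule, Peirce’s Law falls out by reasoning by cases: assume (p → q) → p; if p, we are done; if ¬p, then p → q holds, since a supposition of p together with ¬p yields ⊥ and hence q by ex falso, so the main assumption delivers p after all. Either way we get p, and discharging the assumption gives ((p → q) → p) → p. That sketch is only meant as an illustration of the shape such a justification would take.)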

    My guess is that you’d say that such a position is coherent, but as a matter of fact, we have reason to think that these rules (either double negation elimination or law of excluded middle) are not part of the inferential role of → or ∨. Is that right?

  6. Greg Restall says:

    I think I agree with everything everyone’s said here, at least as indicating options on the broadly inferentialist map. I’d just like to add that when it comes to identity, and reasons for concluding ∃x ∃y (x ≠ y) and like things, =-out is the rule that’s going to be doing the work for us, not =-in.

    For that, something like Fa, a = b ⊢ Fb is what is usually asked for (with real questions concerning what’s appropriate to substitute in the F slot). Then it depends on whether one’s logic is going to give you Fa and ¬Fb, but if it does (if, say, it can tell us that 0 is even and that 1 is not, for example), we’re done: we get a ≠ b.
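    Spelled out: suppose a = b; from Fa and =-out we get Fb, contradicting ¬Fb; so, discharging the supposition, ¬(a = b), which is just a ≠ b. Two applications of ∃-introduction then take us the rest of the way to ∃x ∃y (x ≠ y).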

    Independently of your thoughts on autonomy as a principle, it’s an interesting question to ask what conceptual resources suffice to give a proof of ∃x ∃y (x ≠ y), and what different minimal bases there might be.

  7. crispin says:

    Your guess is correct, Brian. In fact, I once defended such a ‘justification’ of classical logic (in a critical study of Truth and Other Enigmas.) I later became dissatisfied with it, though, on the strength of the — murky but tantalising — point that not every coherent pattern of use allows of coherent understanding. Think of ‘tunk’, whose in-rule is that of conjunction and out-rule that of disjunction. The pattern of use so determined is coherent, but allows of coherent understanding only on the assumption that one or the other of the rules demands too much. That there should not be this kind of misfit is, of course, what is being gestured at by those who bang on about Harmony as a constraint on well-founded inferential role. But I see the point of this notion not as a safeguard against paradox — it isn’t that (see Curry, and Law V) — but as a condition on making sense.
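    (Schematically, and just to fix ideas: the tunk in-rule licenses the move from p and q to p tunk q, while the tunk out-rule licenses reasoning by cases from p tunk q, i.e. the move to r given a derivation of r from p and a derivation of r from q.)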

    In the case of counting LEM as an in-rule for disjunction, what happens — I suggest — is that we get this sort of lapse of intelligibility unless we can back it up with the kind of semantic/metaphysical considerations characteristic of realism. Regular Or-in and Or-out underwrite reasoning by cases. But if we reason by cases from LEM counted simply as a further primitive Or-in rule, we can wind up— modulo certain collateral hypotheses which may impress as locally plausible — making disjunction-free claims that impress as unjustified (for instance, that there is a sharp cut-off in a soritical series for “bald”.)

    Whether the N-rules, conceived as determinant of the inferential roles both of the cardinality operator and =, fall out on the right side of this distinction is another matter. I don’t know of any strong prima facie reason to doubt it. The symmetry of the two rules seems enough to ensure their harmony wrt “N”. So the issue is plausibly taken as whether the identity contexts that N-in lets us prove justify the claims that they ought to justify in the light of identity-out. That’s a question of detail (essentially it is the question whether every open sentence “A…” defined for terms of the form “N(F)” corresponds to a condition, X…, on concepts for which 1-1 correspondence is a congruence.)

    I guess we won’t really understand these matters until, if ever, we have a better handle on the notion of harmony and its connections with ‘making sense’.
