Stewart Shapiro’s 1998 paper “Induction and indefinite extensibility: The Gödel sentence is true, but did someone change the subject?” makes a really nice point that I hadn’t thought about before. Let’s say that you’re attracted to the following two kinds of claims.

1) Autonomy in logic – A logical truth should be provable using the logical rules for just the connectives in that truth.

2) Logicism in mathematics – Mathematical truths are logically true, in a properly extended sense of ‘logically’.

Several writers, tracing back at least to Dummett, have used (1) to argue against classical logic. The problem is that Peirce's Law, i.e. ((p → q) → p) → p, is a classical theorem. But, they say, it can't be proven given just the appropriate logical rules for →. The latter claim is controversial, but plausible.
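To make the classical assumption visible, here is a minimal sketch in Lean 4 (the formulation is mine, not Shapiro's): the proof of Peirce's Law below goes through the axiom of excluded middle, whereas the pure intro/elim rules for → – in Lean, just function abstraction and application – do not suffice on their own.

```lean
-- Peirce's Law: provable, but only by invoking the classical axiom of
-- excluded middle; the →-rules alone (abstraction, application) won't do.
theorem peirce (p q : Prop) : ((p → q) → p) → p :=
  fun h => (Classical.em p).elim
    id                                       -- case p: done
    (fun hnp => h (fun hp => absurd hp hnp)) -- case ¬p: p → q holds vacuously
```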

The problem is that if you don't like Peirce's Law for this reason, you should think that ∃x ∃y (x ≠ y) should be provable using just the logical rules for ∃ and ≠, at least if it is a logical truth. Now on the one hand, it doesn't seem plausible that the rules for ∃ and ≠ suffice to prove ∃x ∃y (x ≠ y). Certainly the intuitionist rules for ∃ and ≠ don't suffice.

On the other hand, 0 ≠ 1 is a mathematical truth. So ∃x ∃y (x ≠ y) is a mathematical truth. So, given logicism, it is a logical truth. So given autonomy, it should be provable by considerations purely about ∃ and ≠.
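The asymmetry can be made vivid in Lean 4 (a sketch of my own, not from the post): the ∃-introduction step is trivial, but it requires witnesses and the arithmetical fact 0 ≠ 1, which the rules for ∃ and ≠ by themselves do not supply.

```lean
-- ∃-intro takes us from the arithmetical fact 0 ≠ 1 to the bare existential;
-- the real work is done by the fact about Nat, not by the rules for ∃ and ≠.
example : ∃ x : Nat, ∃ y : Nat, x ≠ y :=
  ⟨0, 1, by decide⟩  -- `decide` discharges the decidable fact 0 ≠ 1
```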

Of course, there’s a lot that can be said about various ways out of this puzzle you might try while holding on to something like (1) and (2). But ∃x ∃y (x ≠ y) isn’t the only case where there’s a conflict between (1) and (2), so the ways out have to do a lot of work. Shapiro’s paper is a very nice study of what the options are here, and why they all look fairly problematic. I recommend the paper to anyone who is interested in these issues. (Though, to be fair, there probably aren’t that many people like me who both are interested in these issues and haven’t read the paper already!)

Brian, I am not sure what you have in mind as proof theory for ≠, but if we add the N-in and N-out transitions characteristic of moves across Hume's principle as supplementary rules for =, and set them in a normal (intuitionist, impredicative) second-order logic, it is straightforward to establish ∃x ∃y (x ≠ y) by considerations purely about ∃, =, and negation.

Hi Crispin!

I was thinking that the N-in and N-out rules were rules for N, not =. So the rules for = would just be reflexivity of identity, and indiscernibility of identicals. It's true that if N-in and N-out are rules for =, then we have a proof of ∃x ∃y (x ≠ y). But I'm unsure why we should think Hume's principle provides rules for identity, rather than rules for numbers.

Hi Crispin, Hi Brian,

Another way to go, in line with Crispin’s general approach, is to take the N rules to somehow play a role in a larger family of rules for singular terms, and hence, for the existential quantifier, since those rules depend on the range of singular terms for their own field of applicability.

Then the N rules tell us something of how numerical terms work, and since they're singular terms, they feed into the existential quantifier rules.

Then since identity rules are difficult – I think the best way to understand rules for identity use predicate 'variables' (following Stephen Read's nice paper "Identity and Harmony", Analysis 2004) – I don't find it surprising that you get the failure of the subformula property and a proof of something beyond what is given in first-order logic with identity alone. What this means for logicism is another matter.

Greg's proposal is congenial to me. What I had in mind, though, was more narrowly conceived, viz. to view the N-in and -out rules as characterising simultaneously the inferential role of the N-operator and (as further characterising) '='. As Brian in effect notes, it is usual to give the identity in-rule schematically as

|- x=x

which obviously says nothing about cases where disparate terms flank '='. So there is prima facie space for a further determination of the content of '=' as it features in such cases.

Is it legitimate for putative inferential role characterisations to work on more than one concept at a time? Well, why not – provided they are at least consistent with prior partial characterisations?

What does any of this have to do with logicism? Not much, I think, unless we insist that all logical truths, properly so regarded, are autonomous truths. But logicism is an epistemological thesis, and I am not sure that much of epistemological interest hangs on the distinction between those truths that can be deduced using just the proof theory for the constants featuring explicitly in them and those that rest in addition on truths established via the proof theory for additional notions that 'disappear' under e.g. an application of the in-rule for their main operator. Presumably no existential statement can count as a logical truth except under the more relaxed rubric.

That clarifies a lot Crispin – thanks! I just want to try to clarify what’s going on (largely for my own benefit) and see if I can state where the challenges might be.

We can define a narrow notion of 'logical' where all logical truths are autonomous. But it's no part of logicism that mathematical truths are logical in this narrow sense. In fact perhaps no existential claims are logical in this narrow sense; or perhaps the only one is the bare claim that something exists.

What is distinctive of logicism is that we have a special reason to believe in the truths of mathematics. And this reason is analogous in certain respects to the reasons we have to believe in logical truths in the narrower sense. In particular, it might be that mathematical truths can be properly derived from the inferential rules constitutive of mathematical concepts.

This does leave open a kind of possibility that a classical logician who is also a logicist might want to exploit. Perhaps it is partially constitutive of → that from (p → ⊥) → ⊥ we can infer p. Or perhaps it is partially constitutive of ∨ that from zero premises we can infer p ∨ ¬p. Perhaps that's a second intro rule for ∨ that we can only understand once we comprehend ¬.
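Both candidate 'extra' rules can be written down directly; in Lean 4 each is exactly an appeal to the classical axioms (my illustration, not part of the discussion above):

```lean
-- Double negation elimination as an extra rule for → and ⊥:
example (p : Prop) (h : (p → False) → False) : p :=
  Classical.byContradiction h

-- LEM as a zero-premise introduction rule for ∨:
example (p : Prop) : p ∨ ¬p := Classical.em p
```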

The classical logician who says this won't say that Peirce's Law is logical in the narrow sense, any more than ∃x ∃y (x ≠ y) is. But Peirce's Law does have, they say, the same kind of justification as mathematics does, because it can be derived from inferential roles.

My guess is that you'd say that such a position is coherent, but, as a matter of fact, we have reason to think that these rules (either double negation elimination or the law of excluded middle) are not part of the inferential role of → or ∨. Is that right?

I think I agree with everything everyone's said here, at least as indicating options on the broadly inferentialist map. I'd just like to add that when it comes to identity, and reasons for concluding ∃x ∃y (x ≠ y) and like things, =-out is the rule that's going to be doing the work for us, not =-in.

For that, something like Fa, a=b |- Fb is what is usually asked for (with real questions concerning what's appropriate to substitute in the F slot). Then it depends on whether one's logic is going to give you Fa and ~Fb, but if it does (if, say, it can tell us that 0 is even and that 1 is not, for example), we're done: we get a ≠ b.
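That derivation is short enough to check mechanically. Here is a sketch in Lean 4 (the lemma name is mine), where =-out is the substitution (▸) step; instantiating the F slot with a parity predicate, in the spirit of the even/odd example, yields 0 ≠ 1.

```lean
-- =-out (indiscernibility of identicals): from Fa and ¬Fb conclude a ≠ b,
-- since a = b would let us substitute and obtain Fb.
theorem neq_of_discern {α : Type} (F : α → Prop) (a b : α)
    (hFa : F a) (hnFb : ¬ F b) : a ≠ b :=
  fun hab => hnFb (hab ▸ hFa)

-- Instantiating F with "n % 2 = 0" (true of 0, false of 1) gives 0 ≠ 1:
example : (0 : Nat) ≠ 1 :=
  neq_of_discern (fun n => n % 2 = 0) 0 1 (by decide) (by decide)
```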

Independently of your thoughts on autonomy as a principle, it's an interesting question to ask what conceptual resources suffice to give a proof of ∃x ∃y (x ≠ y), and what different minimal bases there might be.

Your guess is correct, Brian. In fact, I once defended such a 'justification' of classical logic (in a critical study of Truth and Other Enigmas). I later became dissatisfied with it, though, on the strength of the – murky but tantalising – point that not every coherent pattern of use allows of coherent understanding. Think of 'tunk', whose in-rule is that of conjunction and out-rule that of disjunction. The pattern of use so determined is coherent, but allows of coherent understanding only on the assumption that one or the other of the rules demands too much. That there should not be this kind of misfit is, of course, what is being gestured at by those who bang on about Harmony as a constraint on well-founded inferential role. But I see the point of this notion not as a safeguard against paradox – it isn't that (see Curry, and Law V) – but as a condition on making sense.
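The 'tunk' pattern can be modelled directly. Here is a hedged Lean 4 sketch (my own reading, under the assumption that tunk is interpreted as conjunction): both rules come out valid, which exhibits the coherence of the pattern, while the comments flag where one rule "demands too much".

```lean
-- 'tunk': in-rule of ∧, out-rule of ∨. Reading tunk as ∧ validates both
-- rules, but then the out-rule demands two case-proofs where one would do,
-- and the in-rule demands both conjuncts where disjunction needs only one.
def Tunk (A B : Prop) : Prop := A ∧ B

-- in-rule (conjunction-style introduction):
example (A B : Prop) (ha : A) (hb : B) : Tunk A B := ⟨ha, hb⟩

-- out-rule (disjunction-style elimination), validated trivially:
example (A B C : Prop) (h : Tunk A B) (f : A → C) (_g : B → C) : C := f h.1
```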

In the case of counting LEM as an in-rule for disjunction, what happens – I suggest – is that we get this sort of lapse of intelligibility unless we can back it up with the kind of semantic/metaphysical considerations characteristic of realism. Regular Or-in and Or-out underwrite reasoning by cases. But if we reason by cases from LEM counted simply as a further primitive Or-in rule, we can wind up – modulo certain collateral hypotheses which may impress as locally plausible – making disjunction-free claims that impress as unjustified (for instance, that there is a sharp cut-off in a soritical series for "bald").

Whether the N-rules, conceived as determinant of the inferential roles both of the cardinality operator and =, fall out on the right side of this distinction is another matter. I don't know of any strong prima facie reason to doubt it. The symmetry of the two rules seems enough to ensure their harmony wrt "N". So the issue is plausibly taken as whether the identity contexts that N-in lets us prove justify the claims that they ought to justify in the light of identity-out. That's a question of detail (essentially it is the question whether every open sentence "A…" defined for terms of the form "N(F)" corresponds to a condition, X…, on concepts for which 1-1 correspondence is a congruence).

I guess we won't really understand these matters until, if ever, we have a better handle on the notion of harmony and its connections with 'making sense'.