June 17th, 2009

Induction and Supposition

Inspired by some things Stewart Cohen and Jonathan Vogel said at the weekend’s scepticism conference, I’ve written a short note on the intersection of inductive reasoning and suppositional reasoning.

Induction and Supposition (PDF)

Here’s the first paragraph, which gives you a flavour of what I’m arguing against.

Here’s a fairly quick argument that there is contingent a priori knowledge. Assume there are some ampliative inference rules. Since the alternative appears to be inductive scepticism, this seems like a safe enough assumption. Such a rule will, since it is ampliative, licence some particular inference “From A, infer B” where A does not entail B. That’s just what it is for the rule to be ampliative. Now run that rule inside suppositional reasoning. In particular, first assume A, then via this rule infer B. Now do a step of →-introduction, inferring A → B and discharging the assumption A. Since A does not entail B, this will be contingent, and since it rests on a sound inference with no (undischarged) assumptions, it is a priori knowledge.
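
Schematically, the reasoning being targeted runs as follows (my rendering of the paragraph above, not a quotation from the paper):

(1) A (supposition)
(2) B (from 1, by the ampliative rule)
(3) A → B (1–2, →-introduction, discharging 1)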

Posted by Brian Weatherson in Uncategorized



16 Responses to “Induction and Supposition”

  1. Stewart Cohen says:

    I think there are some misunderstandings about defeasible reasoning in the argument of your paper. First some minor points and then a major one.

    R99 can’t be correct as stated because it doesn’t take into account rebutting defeaters. What you’ve included in the rule is an undercutting defeater. But there could be a rebutting defeater–a reason to believe a is not Y. Suppose Y is a perceptual predicate and I’m looking directly at a. I could have a reason to believe a is not Y even if 99% of Xs are Ys and there is no defeater of the sort you specify. It’s more perspicuous to state R99 as a defeasible inference rule:

    99% of Xs are Ys
    a is X
    ————
    a is Y

    The line means only that the premises provide a defeasible reason for the conclusion, not that the conclusion follows from the premises. Such a rule is non-monotonic. Adding information to the premises can make it the case that they no longer provide a reason for the conclusion. This information can take the form of undercutting defeaters or rebutting defeaters. When there are no defeaters, there is a reason simpliciter to believe the conclusion.
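
    For instance (an illustration of the non-monotonicity, not an example from the paper): the premises

    99% of Xs are Ys
    a is X

    provide a defeasible reason for “a is Y”. Add the further premise that a is Z and only 1% of things that are both X and Z are Ys, and the reason is undercut; add instead a reason to believe a is not Y (say, from looking at a), and it is rebutted.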

    Moreover, a defeater needn’t be provable (from undischarged assumptions or otherwise). It’s sufficient that there be an (undefeated) reason to believe there is a defeater. Defeasible inference rules do not yield proofs. This last point is crucial to why I think the reductio fails. It’s not because there is a mistake in the reasoning. Rather it’s because the alleged absurdity is not absurd.

    Consider (18) which is said to be a bad result. It’s important to remember that the argument that supports (18) uses R99, which is a defeasible inference rule. That means that the reasoning at best gives us only a defeasible reason to believe (18), not a proof. Now consider R99 as I’ve represented it above. It’s clear that if the premises provide a defeasible reason for the conclusion, then R99’ is also a defeasible inference rule:

    99% of Xs are Ys
    ————
    All Xs are Ys

    It would be inexplicable if R99 were a defeasible inference rule but R99’ were not. By suppositional reasoning using R99’ we can derive a defeasible reason for:

    99% of Xs are Ys → All Xs are Ys

    And because the consequent of (18), with the negation removed, entails Some F is -G, it follows that there is a defeasible reason for

    99% of Xs are Ys → ~99(FH,-G)

    So the defeasible reason for (18) turns out to be a trivial consequence of supposing you can do suppositional reasoning with defeasible inference rules. In other words, (18) is simply too close to (5), what you get from doing suppositional reasoning on R99, to be considered a reductio of that suppositional reasoning.

    Similar considerations apply to (20). The antecedent is a defeasible reason to believe Everything is Y. The consequent, without the negation, entails There is a -Y. Thus Everything is Y entails the consequent (with the negation put back in). This does not mean that there is an a priori reason to believe that there are fewer than 101 individuals. The truth of (20) does not require this. If everything is Y, then (20) is true, regardless of the number of individuals.

  2. Brian Weatherson says:

    Maybe R99 is too strong, but it doesn’t seem to be too strong for just that reason. After all, it is part of R99 that there is nothing else in the context that implies a is a member of some subgroup of Fs that is mostly not-G. And if we know a is not-G, then the subgroup can be {a}.

    On the bigger point, I don’t see why it is obvious that if R99 is an acceptable rule, then R99’ is as well. After all, you only get from R99 to R99’ by using the rule inside the scope of a supposition. And that, I think, is a bad move, just like using a rule like necessitation inside the scope of a supposition would be a bad move.

    And the real reductio is (20). I don’t quite understand the problem here. I was meaning Y to be a second-order variable. It’s true that one of its values will be the plurality of all individuals, but that won’t be its only value. Put another way, the truth of any instance of (20) doesn’t put any limits on the number of individuals there are, but the truth of the universally quantified claim does.

  3. Stewart Cohen says:

    Okay—strike one. Let me try again. (I’ve been talking to Paul Oppenheimer who helped me to come up with this).

    Brian’s derivation involves suppositional reasoning with a defeasible inference rule. This means that when you discharge the assumption, you get only a defeasible justification for that line in the derivation. Subsequent lines are derived using deductive logic. This means that the derivation presupposes that defeasible justification is closed under deductive consequence. But this closure principle is dubious—pretheoretically anyway. Suppose R is a reason to believe P. If the closure principle is correct, then one can reason from R to P to -(R & -P), e.g. (on the supposition that a’s looking red is a defeasible reason to believe a is red), from [a looks red] to [a is red] to [-(a is non-red but illuminated by red lights)]. This closure principle is at the heart of the dogmatism debate and the Moorean response to skepticism. The dogmatist claims that one can move from [It looks like I have hands] to [I have hands] to [I’m not a handless BIV]. But many question this reasoning. As several people (Roger White, John Hawthorne, me) have noted, this rule does not comport with standard Bayesian updating by conditionalization. Of course, it is sometimes okay to reason in this way. If dogmatist reasoning is a counterexample to the closure principle, then it suggests the following restriction.

    If e is a defeasible reason for p, and p entails q, then one may reason from e to p to q only if e is a defeasible reason for q

    So the following reasoning is prohibited:

    (1) a looks red.
    (2) a is red.
    (3) a is not (non-red but illuminated by red lights).

    because (1) is not a defeasible reason for (3).
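
    To put the Bayesian worry in a rough form (a sketch, not a precise statement of anyone’s argument): writing Pr for credence, Pr(-(R & -P) | R) = Pr(P | R), while Pr(-(R & -P)) = Pr(-R) + Pr(R & P); a little algebra shows the latter is never smaller than the former, and is strictly greater except when Pr(R) = 1 or Pr(P | R) = 1. So conditionalizing on R can never raise, and will typically lower, one’s credence in -(R & -P).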

    Brian’s derivation violates this restriction.

    One way to look at Brian’s derivation is that it shows that one of the following has to go:

    (4) There are defeasible reasons.
    (5) One can use a defeasible (ampliative) inference rule within the scope of a supposition.
    (6) Defeasible justification is closed under deductive consequence.

    Of course Brian discusses other ways out, but let’s assume he’s correct in rejecting them.

    I take it that we don’t want to reject (4) on pain of skepticism. Brian’s derivation uses (6) to reject (5). But it seems to me that it should go the other way. We should use (5) to reject (6). In fact we often use defeasible inference rules within the scope of a supposition. We can illustrate this using statistical syllogism. Suppose I know that Fido is a pitbull. You ask me if it’s true that pitbulls tend to be dangerous (Most pitbulls are dangerous). I reply (noticing Fido headed our way), “I don’t know, but if they do, we’d better run.” Surely this reasoning is unobjectionable. But of course this reasoning is just an instance of using statistical syllogism within the scope of a supposition (plus a bit of practical reasoning given our desire not to be attacked).

    (7) Fido is a pitbull. (assumption)
    (8) Most pitbulls are dangerous. (assumption)
    (9) Fido is dangerous. (7, 8, & Stat Syll)
    (10) If most pitbulls are dangerous, then Fido is dangerous. (discharging 8)

    Surely we reason this way all the time. Of course, this doesn’t show it’s always okay to reason this way. But if we can’t always reason this way, I’d like to know the restriction. Of course it’s controversial to cite dogmatist reasoning as a counterexample to (6). Dogmatism is accepted by some smart people. But my suspicion is that the acceptance of dogmatism is driven, at least in part, by the thought that there is no other way to avoid skepticism. But if we accept (5) (along with (4)), then we can avoid skepticism. For then we can derive a defeasible a priori justification to believe a contingent proposition like [If it looks like I have a hand, then I have a hand]. Admittedly, it’s very puzzling to suppose that we can have a priori justification for a contingent proposition. But accepting (4) and (5) also allows us to avoid the bootstrapping version of the easy knowledge problem (or so I’ve been arguing). And by my lights, bootstrapping reasoning is patently absurd. Is the claim that there are defeasible a priori reasons for contingent propositions absurd? Well, as Brian notes, others have argued that there can be such reasons. I don’t know of anyone who bites the bullet on bootstrapping reasoning (except maybe Michael Bergmann). Perhaps, as is often the case, it’s a matter of choosing your poison.

  4. Brian Weatherson says:

    It seems to me that (6) is ambiguous between a principle that seems wrong, and which I don’t use, and a principle that seems true, and which is all I need.

    The wrong principle is that if e is a reason to believe p, and p entails q, then e is a reason to believe q. That’s fairly clearly bad, as can be seen when q is something like 0=0, and e is that the weather forecast calls for rain, and p is that it will rain.

    The true principle is that if e is a reason to believe p, and p entails q, then e could be a reason to believe q, assuming that a person who has e realises that p entails q.

    I don’t think that the ‘easy knowledge’ examples undermine this. If we have a reason to believe that something is a red wall, we have reason to believe that it’s either a red wall or an X, for arbitrary X. Our reason for believing that disjunction might not be the same as our reason for believing it is a red wall. (If X is ‘not a red wall’, then our reason may be pure logic, for instance, rather than perception.) But there’s no reason to think that we lack reasons. That doesn’t mean we have to ‘pull ourselves up by our bootstraps’; it might just mean that if we’re reasoning properly, we must already be airborne.

    So I think the relevant version of (6) is true, and can’t be what’s going wrong in my argument.

    If the problem is (5), then I’d better say something about what is going on in the Fido/pitbull case. I reckon we can get from premise to conclusion without doing any defeasible reasoning inside the scope of a supposition. Here is how I would go about it.

    Assume Fido is a pitbull, and most pitbulls are dangerous. The premises, plus background knowledge that we don’t have any defeaters about Fido etc., entail that Fido is probably dangerous. Given some other background facts about our utility function, that entails that expected utility will be maximised if we run. And assuming that what maximises expected utility is what we should do, that entails we should run. So discharging the original assumption, we get that if most pitbulls are dangerous, we’d better run.
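
    Schematically, and only as a sketch of the reasoning just described (the primed numbering parallels Stewart’s):

    (7') Fido is a pitbull. (assumption)
    (8') Most pitbulls are dangerous. (supposition)
    (9') Probably, Fido is dangerous. (entailed by 7', 8', and the background no-defeater facts)
    (10') Expected utility is maximised by running. (entailed by 9' and background facts about our utilities)
    (11') We should run. (10', plus the principle that we should do what maximises expected utility)
    (12') If most pitbulls are dangerous, we’d better run. (discharging 8')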

    That is slightly convoluted, and somewhat over-intellectualised, but it seems to capture what needed to be captured in the Fido example, without violating (5).

  5. Stewart Cohen says:

    I don’t see how what you’re calling “the true principle” escapes the counterexample to the wrong principle. I agree with you that the wrong principle is false.
    But the following principle seems true to me:

    “If e is a defeasible reason for p, and p entails q, then one can reason defeasibly from e to p to q.”

    And that’s all you need to do your reductio.

    On the other hand, the Fido example shows that it is sometimes permissible to reason defeasibly within the scope of a supposition. I don’t understand your response to this example. Just because you can reach the conclusion in a different way does not show that there is anything wrong with the reasoning as I displayed it, where there is defeasible reasoning within the scope of a supposition. And I can think of no reason why the reasoning as I displayed it is fallacious, unless you think that we can only infer from the premises that probably Fido is dangerous, which, as you note, is entailed by the premises. But that would be to deny that statistical syllogism, or for that matter any defeasible inference rule, is correct (as you yourself note in the paper). I thought we were assuming that you can do defeasible reasoning, and arguing about whether you can do it inside the scope of a supposition. So if there are defeasible inference rules, I don’t see how you can deny that the reasoning in the Fido example is correct. Admittedly, appearances can be deceiving. Perhaps all this time when we reasoned as in the Fido example, we were reasoning fallaciously. But I think there’s a way to allow that the Fido reasoning is okay while still responding to your reductio.

    It turns out that there is a much simpler reductio that doesn’t involve second-order logic or universal generalization.

    (1) 99(F,G) & Fa (supposition)
    (2) Ga (rule 99, 1)
    (3) (99(F,G) & Fa) → Ga (discharging 1)
    (4) 99(F,G) & Fa & 99(FH,-G) & FHa (supposition)
    (5) 99(F,G) & Fa (4, conjunction elimination, skipping a few steps)
    (6) (99(F,G) & Fa & 99(FH,-G) & FHa) → (99(F,G) & Fa) (discharging 4)
    (7) (99(F,G) & Fa & 99(FH,-G) & FHa) → Ga (3, 6, transitivity of ‘→’)

    Presumably, we don’t have an a priori defeasible reason to believe (7).

    Now you may say that this just strengthens the case for prohibiting defeasible reasoning within the scope of a supposition. But in fact my reductio and yours have a common feature. Each has a defeasible inference and then subsequently a supposition that is a defeater of that inference. In my reductio, the supposition at line 4 defeats the inference from 1 to 2. And in your reductio, the supposition at line 6 defeats the inference from 2&3 to 4. So I propose the following restriction on defeasible reasoning:

    RES: If a “derivation” D has a line that derives from a defeasible inference I, then no subsequent line in D can contain a supposition S, where S is a defeater of I.

    Perhaps this rule will have to be refined, but it has going for it that it can block both your reductio and mine, while vindicating our intuition that the Fido reasoning as I displayed it is correct.

  6. Branden Fitelson says:

    Brian — I’m probably missing something subtle here about 2nd-order universal natural deduction rules (which aren’t really my thing), but isn’t there something a little funny about the applications of 2nd-order UI/UE in steps 19/20? If this really is a “2nd-order-generalizable” inference (up through 18), shouldn’t we be able to run it (from 1-18) using — specifically — the choice of X,Y,Z you use in your second-order UE at the end (i.e., at 20)? Namely, shouldn’t we be able to run the argument so that the conclusion at (18) follows, specifically, for X = I, Y = G, and Z = ~G? If so, then I don’t see how it’s supposed to work. When I run through 1-18, with this choice of X,Y,Z in mind (from the top), I can’t see how to get to anything stronger than 99(I,G) -> Gc (for arbitrary constant c). That is, I don’t see how we’re going to get the salient instance of (18), which would be 99(I,G) -> ~99(~G,~G). Maybe that’s not necessary for “2nd-order-generalizability” (or maybe I’m missing how that argument should go)? [Sorry if I’m being stupid here. Maybe this just means I need to study more second-order logic…:)]

  7. Branden Fitelson says:

    Clarification. If we look at the X = I, Y = G, and Z = ~G “instance” of the proof steps 1-18, then, at step (5) we get (something equivalent to):

    (5) 99(I,G) -> Ga (for arbitrary a)

    And, at line (10), we get (something equivalent to):

    (10) (99(~G,~G) & ~Ga) -> ~Ga (for arbitrary a)

    But, (10) is a logical truth for this choice of X,Y,Z. Thus, I don’t see how any subsequent (valid) reasoning can get us anything stronger than:

    (5) 99(I,G) -> Ga (for arbitrary a)

    That is, I don’t see how the salient instance of (18) could possibly be deduced [99(I,G) -> ~99(~G,~G)]. So, I don’t see how the 2nd-order UG step at (19) is valid, since it seems to fail for this instance of X,Y,Z.

  8. Brian Weatherson says:

    I agree that the proof is rather odd at this point. It isn’t obvious, in retrospect, that F, G and H are really arbitrary in the sense that is needed for this step. But I’m not sure this gets at the heart of it.

    The problem, as you point out, is that I seem to have done little more than derive 99(I,G) → Gc. Now, by (first-order) universal introduction, we can get from that to 99(I, G) → ∀x: Gx.

    The odd thing about the way I’ve defined ‘99’ is that this entails 99(I,G) → ~99(~G,~G). After all, I said 99(X, Y) is false if there are no Xs, so ∀x: Gx entails ~99(~G, ~G). So actually it is not too surprising if we can derive 99(I,G) → ~99(~G,~G).

    But the bigger question of whether these predicates are really arbitrary in the relevant sense is harder, and I’m not sure that I know quite what to say about it.

  9. Branden Fitelson says:

    Sorry — here’s a version of my last post that should be formatted better now:

    Thanks, Brian — I knew I was missing something (it was the full content of your definition of “99”, in addition to the 2nd-order UI subtlety). So, what we have now (correct me if I’m wrong) is the following simplification of your argument from 20 steps, several 1st-order UIs and 3 second-order UIs, down to 7 steps, one 1st-order UI and 1 2nd-order UI. That is, we now have the following simpler argument for your conclusion (right?):

    (1) 99(I, G) Assumption
    (2) Ga R99, (1), Ia is a logical truth
    (3) 99(I, G) → Ga (1)-(2), discharging (1)
    (4) 99(I, G) → (Ax)Gx (3), UI (1st-order)
    (5) (Ax)Gx → ~99(~G,~G) definition of ‘99’
    (6) 99(I,G) → ~99(~G,~G) (4),(5), transitivity of →
    (7) (AY)(99(I,Y) → ~99(~Y,~Y)) (6), UI (2nd-order)

    So, we can get your conclusion much more simply now (and in a way that Stew’s restriction “RES” will not block — right, Stew?).

    But, I am still worried about the conclusion (7). Does it really follow from (6)? That is, is it guaranteed to hold for all predicates “G”? As I said, this second-order stuff is above my pay-rate — so I’m not sure. But, in any event, at least we now have a simpler argument for the conclusion you wanted, which is nice. Yes?

  10. Branden Fitelson says:

    OK — here’s a link to a PDF file with a proper (and readable!) 9-step simplification of Brian’s argument for his conclusion (20). Sorry for the multiple sloppy posts (been driving all day, and so I’m a bit fuzzy). Brian — you can go ahead and delete the three posts above. Sorry!

    http://fitelson.org/weatherson_proof.pdf

  11. Brian Weatherson says:

    That is simpler than what I had in mind. And perhaps I shouldn’t have gone through this fuss with ‘H’. Though I think line 13 in my original proof is kinda crazy anyway, and that’s before we get to any of the generalisation steps that Sinan Dogramaci was worrying about. So perhaps I should shorten the proof to that.

  12. Branden Fitelson says:

    Thanks, Brian. I agree that (13) is weird. And, in my rendition of your argument for (20), we don’t get an analogous claim until after we do the first-order UI. That is, once we have (8) in my argument, we can, of course, get the following instance of (13), with F = I and H = ~G:

    (8*) [99(~G,~G) & 99(I,G)] → ~Ga

    But, this only works in the simpler argument after we’ve done the UI step at (6). Maybe this is a reason for fussing around with “H” after all.

  13. Richard Zach says:

    R99 is not a valid inference schema (obviously), so why is it surprising that you can use it to derive something false?

    But more to the point of your argument: If you assume that you can reason with generic instances of R99 (the way you do in your derivation), the defeater condition won’t ever play a role (unless, of course, you add as an undischarged assumption to your argument that there is a defeater). That’s what lets you derive (5) — since F, G, and a are variables, of course you can’t prove from the other undischarged assumptions (there are none) that, for some Z, F(a) & Z(a) and less than 99% etc.

    Here’s a much shorter derivation of something false using R99:

    1-5 get you (99(F, G) & Fa) → G(a)
    ∀-Intro: ∀F ∀G ∀a ((99(F, G) & Fa) → G(a))
    ∀-Elim: (99(λx . x > 0, λx . x > 1) & 1 > 0) → 1 > 1

    I don’t think there’s a way of fixing this short of revising R99 to include as a premise “a is not an exception to the rule”, which would make the rule valid (and so not ampliative).

  14. Brian Weatherson says:

    Hi Richard,

    The problem isn’t that R99 leads to something false. The problem is that it leads to something that isn’t justified.

    My target here is the person who thinks that we can get a very quick defeasible a priori justification for various propositions of the form E → H, where E is non-demonstrative evidence for H, by assuming E and using the rule that lets us inductively derive H. That would be a very quick argument for contingent a priori justification, and perhaps for contingent a priori knowledge. And I don’t think it is a good argument; not only might the output of such reasoning be false, in some cases it won’t even be justified.

  15. Richard Zach says:

    Is the reason you say that the conclusion of your argument is not justified that we have independent reasons to think it’s false (viz., we know that there are more than 101 things)? The conclusion of my argument would then also not only be false but be unjustified (it is an elementary mathematical falsehood).

    I still think the problem is that you assume you can apply R99 in its open (i.e., implicitly universally quantified) form, plus perhaps that the defeater condition on the rule is not an extra premise that has to be present, but a “non-presence” condition. The first reason you get into trouble is that you are allowed to derive Ga from 99(F, G) and Fa. And why can you do that? Because as long as you don’t know anything about F, G, a (i.e., no other open assumptions involving F, G, a), nothing will be provable, so the “no defeater” condition will automatically be satisfied. If, on the other hand, you had another premise, say of the form “¬∃Z(…)”, you wouldn’t be able to apply rule 99 in this case.

  16. Brian Weatherson says:

    I think the reason the conclusion isn’t justified is that it is crazy to have a priori grounds to believe there are fewer than 101 things in the world! That doesn’t look like an a priori justifiable belief.

    It’s true that you could avoid the problem by weakening R99 a lot. I’m worried that the weakening that you propose would in effect make it a non-ampliative rule. After all, if ~Ga, then there is some Z, namely x=a, that would violate the extra condition. In any case, I think most people who want to use rules like R99 in regular reasoning think we simply need an absence of defeaters, not a known absence of defeaters. And I wanted to mirror that in the proof.
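
    (To spell that out: if ~Ga, then let Z be the property of being identical to a. We have Fa & Za, and 0%, so certainly fewer than 99%, of the things that are both F and Z are G. So a premise to the effect that there is no such Z rules out ~Ga, and the strengthened rule’s premises would simply entail its conclusion.)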
