Knowing How, Regresses and Frames

I’m just back from my annual trip to St Andrews to work at Arché. It was lots of fun, as always. The highlight of the trip was taking the baby overseas for the first time, and letting her meet so many great people, especially the other babies. And there was lots of other fun besides. I taught a 9-seminar class on game theory. I have to revise my notes a bit to correct some of the mistakes that became clear in discussion there, but hopefully soon I’ll post them.

Over the last two weekends I was there, there were two very interesting conferences. The first was on the interface between the study of language and the study of philosophy. The second was on knowing how. I didn’t get to attend all of it, so it’s possible that the things I’ll be saying here were addressed in talks I couldn’t make. And this isn’t really my field, so I suspect much of what I’m saying here will be old news to cognoscenti. But I thought that at times some of the anti-Ryleans understated, or at least misstated, the force of Ryle’s arguments.

Regress Arguments

Jason Stanley briefly touched on the regress argument Ryle gives in favour of a distinction between knowing how and knowing that. Or, at least, he briefly touched on a regress argument that Ryle gives, though I think this isn’t Ryle’s only regress argument. Here’s a rough version of the argument Jason attributes to Ryle.

  • Knowing that is a static state.
  • No matter how many static states a creature is in, there is no guarantee that anything dynamic will happen, e.g., that the creature will move, or change.
  • But our knowledge does sometimes lead to dynamic effects.
  • So there is more to knowledge than knowing that.

This is a pretty terrible argument, I think, and Jason did a fine job demolishing it. For one thing, whatever it means to say that knowing that is static, knowing how might be just as static. And given a functionalist/dispositionalist account of content, it just won’t be true that knowing that is static in the relevant sense. If an agent never has the disposition to go to the fridge even though they have a strong desire for beer, and no conflicting dispositions or impediments, then they don’t really believe there is beer in the fridge, so they don’t know that there is beer in the fridge.

This way of presenting Ryle makes it sound like knowing how is some kind of ‘vital force’, and that Ryle himself is a vitalist, looking for the magical force behind self-locomotion. I don’t think that’s a particularly fair way of looking at Ryle, though. A better approach, I think, starts with consideration of the following kind of creature.

The creature is very good, indeed effectively perfect, at drawing conclusions of the form I should φ. But they do not always follow this up by doing φ. If you think it is possible to form beliefs of the form I should φ without ever going on to φ, or even forming a disposition to φ, imagine the creature is like that. If you think that’s impossible, perhaps on functionalist grounds, imagine that the creature moves from knowledge she expresses with I should φ to actually doing φ as rarely as is conceptually possible. (I set aside as absurd the idea that the functionalist characterisation of mental content rules out there being large differences in how frequently creatures move from I should φ to actually doing φ.)

I think such a creature is missing something. If they frequently don’t do φ in cases where it would be particularly hard, what they might be missing is willpower. But let’s not assume that. They frequently just don’t do what they think they should do, given their interests, and often instead do something harder, or less enjoyable. But what they are missing doesn’t seem to be propositional knowledge, since by hypothesis they are very good at figuring out what they should do, and if they were missing propositional knowledge, that’s what they would be missing.

What they might be missing is a skill, such as the skill of acting on one’s normative judgments. But I think Ryle has a useful objection to that characterisation. It is natural to characterise the person’s actions as stupid, or more generally unintelligent, when they don’t do what they can quite plainly see they should do. A person who lacks a skill at digesting hot dogs quickly, or playing the saxophone, or sleeping on an airplane, isn’t thereby stupid or even unintelligent. (Though they might be stupid if they know they lack these skills and nevertheless try to do things that call for such a skill.) Indeed, it is typically cognitive failings that we criticise as unintelligent. So our imagined creature must have a cognitive failing. And that failing must not be an absence of knowledge that, since by hypothesis that isn’t lacking. So we call what is lacking knowledge how.

Note that I really haven’t given an argument that this is the kind of thing that natural language calls knowing how. It’s consistent with this argument that everything that is described as knowing how in English is in fact a kind of knowing that. But it is an argument that there is some cognitive skill that plays one of the key roles in regress-stopping that Ryle attributed to knowing how.

Ryle on the Frame Problem

There’s another problem for a traditional theory that identifies knowledge with knowing that: the frame problem. Make the following assumptions about a creature.

  • It knows most of the relevant true propositions of the form That p is true is relevant to my decision about whether to do ψ.
  • It knows an enormous number of the relevant true propositions of the form That q is true is irrelevant to my decision about whether to do ψ, though of course there are infinitely many it does not know.
  • If it consciously draws on a piece of knowledge that in figuring out whether to do ψ, that has large computational costs.
  • If it subconsciously draws on a piece of knowledge that in figuring out whether to do ψ, that has small but not zero computational costs.
  • If it simply ignores q in figuring out whether to do ψ, rather than first considering whether to ignore q and then ignoring it, that has zero computational costs.

It seems to me that such a creature has to work out a way to use its knowledge that of propositions like That q is true is irrelevant to my decision about whether to do ψ in making practical deliberations without actually drawing on that knowledge. If it does draw on all of it, the computational costs will be prohibitive, and nothing will get done. In short, it has to be able to ignore propositions like q, and it has to ignore them without thinking about whether to ignore them.

It seems that a skill like this is not something one gets by simply having a lot of knowledge. You can know all you like about how propositions like q should be ignored in practical deliberation. But it won’t help a bit if you have to go through the propositions one by one and conclude that they should be ignored, even if you can do all this subconsciously.
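The cost asymmetry behind this point can be made vivid with a toy sketch. Everything here is an illustrative invention (the functions, the cost figures, the example propositions are not from Ryle or from the argument above): the point is just that even a cheap per-proposition relevance check makes deliberation cost scale with everything the creature knows, while simply never touching irrelevant propositions keeps the cost fixed by what actually matters.

```python
# Toy model of the frame problem's cost structure. All numbers and
# propositions are made up for illustration.

def deliberate_by_checking(knowledge_base, is_relevant):
    """Consider every known proposition, paying a small cost to decide
    whether each is relevant before either using it or ignoring it."""
    cost = 0
    used = []
    for p in knowledge_base:
        cost += 1                 # small "subconscious" relevance check
        if is_relevant(p):
            cost += 10            # actually drawing on the knowledge
            used.append(p)
    return used, cost

def deliberate_by_ignoring(relevant_only):
    """Start from an already-framed set of relevant propositions;
    the ignored propositions incur no cost at all."""
    cost = 10 * len(relevant_only)
    return list(relevant_only), cost

# One relevant fact, plus a pile of French medieval history.
kb = [("the express is faster today", True)]
kb += [(f"medieval fact {i}", False) for i in range(10_000)]
is_relevant = lambda p: p[1]

used_checking, cost_checking = deliberate_by_checking(kb, is_relevant)
used_ignoring, cost_ignoring = deliberate_by_ignoring(
    [p for p in kb if p[1]]
)

# Both strategies draw on the same knowledge, but the checking
# strategy's cost grows with the whole knowledge base, not just
# with the relevant part.
```

On this (crude) picture, having more knowledge that about irrelevance makes the checking strategy strictly worse, which is the sense in which no amount of such knowledge can substitute for the skill of ignoring.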

Moreover, it is a sign of intelligence to have such a skill. Someone whose mind drifts onto thoughts about the finer details of French medieval history when trying to decide whether to catch this local train or wait for the express is displaying a kind of unintelligence. As above, Ryle concludes from this that the skill is a distinctively cognitive skill, and worthy of being called a kind of knowledge. Since it isn’t knowledge that – our creature has all the salient knowledge that – it is a kind of knowing how.

Now I assume that the five assumptions I made above are actually true of creatures like us. Perhaps they are not; perhaps we have a way of drawing on knowledge that which doesn’t involve any computational costs. But I rather doubt that’s true. I think that we draw on knowledge by using it computationally, and computational usage is by definition costly. Not nearly as costly as conscious thought, but costly. Many of us are sensitive to our knowledge of unimportance without drawing on it; we make decisions about whether to catch the local or the express without first considering whether French medieval history is relevant, and deciding that it isn’t. But this is because we know how to ignore irrelevant information, not merely because we know that the irrelevant information is irrelevant. Knowing that it is irrelevant is no use if you don’t know how to adjust your decision making process in light of that knowledge.

4 Replies to “Knowing How, Regresses and Frames”

  1. This is regarding argument 1.

    Take your creature who has lots of normative knowledge-that, but no good motivation (or not enough). It’s lacking something. By hypothesis, it doesn’t lack knowledge-that. Here’s an argument — not the one you attribute to Ryle — that it’s not skill that’s lacking: imagine a variant on your creature who has all the knowledge-that, and also all the skill, but who still doesn’t do what it knows it ought to do. I don’t see why this should be impossible; someone could be very skilled at phi-ing without often or ever phi-ing. So it looks like skill isn’t enough either, for exactly the same reason that knowing-that wasn’t.

    But exactly the same, I should think, will go for knowing-how. There’s knowing how to do something, and there’s doing it. And knowing how to do it doesn’t guarantee that you’ll do it, any more than knowing that you should do it does. So if the problem was that the things we’ve stipulated don’t guarantee good action, I just don’t see how talking about knowing-how is supposed to help.

  2. I think it’s always a mistake here to talk about guarantees. Everything is going to be cet par.

    Having said that, I do think there’s something odd, perhaps not coherent, about the case you describe. If someone routinely fails to phi when phi-ing is called for, then they aren’t very skilled at phi-ing when phi-ing is called for!

  3. But isn’t it pretty plausible that ceteris paribus, people who know they ought to phi, phi?

    Your last sentence indicates a worthwhile distinction about skill: maybe being skilled at phi-ing isn’t the same thing as being skilled at phi-ing when phi-ing is called for. I think one might be skilled at finding flaws in arguments, for example, but very bad at recognizing the conditions under which it’s a good idea to do so. But is there a corresponding distinction for knowing how to?

  4. Right, that’s true cet par. But there’s a big difference between someone who does what they think they should 85% of the time, and someone who does what they think they should 99.999% of the time. True, in neither case does the knowledge that phi should be done guarantee phi-ing. True, in both cases they will cet par do what they know they should. But the rates are very very different, and they deserve explaining. And I think non-propositional knowing how is a good candidate for explaining them.
