I’m teaching an upper-level logic course from the fourth edition of *Computability and Logic*, which, as anyone who has used it knows, has an impressive number of errors. These are almost all relatively trivial typing and typesetting errors, but the problem with a book this error-riddled is that when something happens that you can’t quite follow, there’s a temptation to blame the book rather than your tiny addled brain. This is almost always a mistake when working in this area, but sometimes it becomes irresistible.

For instance, on page 194 (of the first printing) there’s a definition of what it is for x to be the code number of a simple atomic sentence, where a simple atomic sentence is one that doesn’t contain identity or function symbols. The account given there is that x is the code number for a sequence such that:

- The first member of that sequence is the code number of an n-place predicate, where 2n+2 is the length of the sequence
- The second member of that sequence is 1 (the code number of ‘(’)
- The next n odd-numbered places in the sequence are filled by the code numbers for atomic terms, which under our current hypothesis are constants or variables
- The intermediate even-numbered places in the sequence are filled by the number 5 (the code number of ‘,’)
- The last member of the sequence is 3 (the code number of ‘)’)
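The five conditions can be turned into a mechanical check on a sequence of symbol codes. Here is a minimal Python sketch, assuming the bracket and comma codes from the text (‘(’ = 1, ‘)’ = 3, ‘,’ = 5); the predicate-arity map and the term codes are made up for illustration, not taken from the book:

```python
# Hypothetical checker for the five conditions above, assuming the symbol
# codes given in the text ('(' = 1, ')' = 3, ',' = 5). The predicate-arity
# map and the set of term codes are made up for illustration.

ARITY = {9: 2}        # pretend code 9 is a two-place predicate
TERM_CODES = {7, 11}  # pretend codes for the constant '0' and a variable

def is_simple_atomic(seq):
    """Check the five conditions on a sequence of symbol codes."""
    if not seq or seq[0] not in ARITY:
        return False
    n = ARITY[seq[0]]
    if len(seq) != 2 * n + 2:   # condition 1: length is 2n + 2
        return False
    if seq[1] != 1:             # condition 2: second symbol is '('
        return False
    if seq[-1] != 3:            # condition 5: last symbol is ')'
        return False
    terms = seq[2:-1:2]         # condition 3: the n odd-numbered places
    commas = seq[3:-1:2]        # condition 4: the even-numbered places between
    return all(t in TERM_CODES for t in terms) and all(c == 5 for c in commas)

# 'P(0,0)', with P coded as 9 and 0 as 7:
print(is_simple_atomic([9, 1, 7, 5, 7, 3]))  # → True
```

Run on the flat character sequence for a sentence with a complex term, the check fails on the length condition, which is exactly the failure described below once function symbols are reinstated.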

Now when we reinstate function symbols, we’re just told that the definition of an atomic term has to change to allow for the possibility that, for example, (0, 0) could be an atomic term. Apart from that, the book suggests, the definition of an atomic sentence can be preserved. But of course it can’t, because atomic terms no longer take only one character. So if R is a two-place predicate, R((0,0),0) is an atomic sentence despite satisfying only two of the five requirements above.

I don’t think this is too hard a problem to patch, but given the level of detail that’s gone into for the function-free language, it’s surprising that this is not only elided, but suggested to be a non-problem.

Having said all that, I should say that there is a lot to like about the fourth edition of C&L, especially for those of us who want to teach a course primarily on the incompleteness theorems and their implications for the logic of provability, and don’t want to take a detour through Turing machines to get there. And the problem sets are a great addition. So I think Burgess did a great job with the book. But sometimes I’m left scratching my head trying to follow what’s going on, and I’m not sure it’s always my fault.

Fourth edition? That’s inexcusable. You’d think they could pay some poor grad student to proofread it. I’ve noticed the same problem with some other texts, such as Hughes & Cresswell’s *A New Introduction to Modal Logic*. I thought logicians were supposed to be meticulous…

To be fair, I think a part of the problem is that when mistakes are made in a logic book, it’s a little harder to cover them up, claim it’s what you meant all along, etc. Also, the fourth edition was a substantial revision, so it’s not as if these errors survived through four printings before only now coming to light. And as I said, the overwhelming majority are irrelevant, like using > for O in a running head. But I agree that in a logic book, especially one dedicated to the proof of such a surprising result, you’d expect a higher degree of attention paid to these details.

When I was a grad student, I took modal logic with a then-brand-new Hughes & Cresswell *New Intro to Modal Logic*. A number of the exercises asked us to prove unprovable results, and there were some new methods of proof advocated (details escape me now) that weren’t as powerful as they were cracked up to be. But of course you never knew which problems would wind up being insoluble, so any time you couldn’t prove something, you were never sure whether it was the book’s fault or yours. It was a good educational experience; it was much more like the fooling around with logic problems that I do now, when I don’t know whether there is an answer or not, than working through a standard logic book. On the other hand, it was maddeningly frustrating. I would like to say that logic authors should ask “some poor grad student” to do all their exercises and check the proofs, but maybe there just isn’t that much grad-student-cum-book funding around.

I’m not convinced that there’s an error. Now, I don’t have the fourth edition, so I have to guess a bit about context.

The definition of x being the code number of an atomic sentence is in terms of x being the code number of a *sequence* such that yada yada. This makes me think that the translation process is like this: formulas get mapped to sequences of numbers, and sequences get mapped to individual numbers. For example, ‘P(0,0)’ would get mapped to <9, 1, 7, 5, 7, 3> or something. The sequence could then be mapped to a number straightforwardly by relying on unique prime factorization: (2^9)(3^1)(5^7)(7^5)(11^7)(13^3).

If that’s the way it works, then the definition looks fine to me. The code number of a formula such as ‘P((0,0),0)’ would be the code number of the sequence <9, 1, G, 5, 7, 3>, where G is the code number of ‘(0,0)’.

All five conditions hold.

Jake
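The sequence-to-number coding Jake describes can be sketched like this; the helper names are mine, and the particular symbol codes (‘P’ = 9, ‘(’ = 1, ‘0’ = 7, ‘,’ = 5, ‘)’ = 3) are assumptions consistent with the thread, not the book’s:

```python
# Sketch of the coding Jake describes: a sequence <a1, ..., ak> goes to the
# number 2^a1 * 3^a2 * ... * p_k^ak, recoverable by unique prime
# factorization. Assumes every code in the sequence is positive.

def primes():
    """Generate 2, 3, 5, ... by trial division (fine for short sequences)."""
    candidate, found = 2, []
    while True:
        if all(candidate % p for p in found):
            found.append(candidate)
            yield candidate
        candidate += 1

def encode(seq):
    """Map a sequence of positive codes to a single number."""
    n, gen = 1, primes()
    for a in seq:
        n *= next(gen) ** a
    return n

def decode(n):
    """Recover the sequence from the number (needs all codes positive)."""
    seq, gen = [], primes()
    while n > 1:
        p, e = next(gen), 0
        while n % p == 0:
            n, e = n // p, e + 1
        seq.append(e)
    return seq

# 'P(0,0)' as a sequence of symbol codes, round-tripped through the coding:
print(decode(encode([9, 1, 7, 5, 7, 3])))  # → [9, 1, 7, 5, 7, 3]
```

The restriction to positive codes matters: a zero exponent in the middle of the factorization would be indistinguishable from a trailing one, so the coding only works if no symbol gets code 0.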

I thought about something like that, though not in as much detail, but I wasn’t convinced it would work. The problem is how you deal with substitution for a variable inside G, which now becomes much more complicated than they suggested.

The worry is how you provide a decent systematic treatment of substituting some value for x wherever it occurs, one that covers both of the following cases:

`>(x,0)`

`>(+(x,0),0)`

On the proposal you put forward, the code number for x (let’s say it’s 2) appears in the sequence for >(x,0) but not the one for >(+(x,0),0), so you can’t get the replacement by just looking at where 2 appears in the sequence. I guess that isn’t the biggest problem in the world, but it’s a complication that should have been addressed if that’s how they’re handling functions.

Ah, I see. I guess you’d need a recursive definition.

I agree with you that that’s not a trivial change. They could have at least left it as an exercise for the reader.
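The recursive definition gestured at above can be sketched on term trees rather than on flat code sequences. The representation and the particular codes (x = 2, ‘0’ = 7, ‘+’ = 11) are purely illustrative assumptions, not the book’s:

```python
# Sketch of the recursive substitution the thread is pointing at, working on
# term trees rather than flat code sequences. A term is either a symbol code
# (an int) or a tuple (function_code, arg1, ..., argn). The codes x = 2,
# '0' = 7, '+' = 11 are made up for illustration.

X, ZERO, PLUS = 2, 7, 11

def substitute(term, var, value):
    """Replace every occurrence of the variable `var` in `term` by `value`,
    recursing into function applications so nested occurrences are found."""
    if isinstance(term, int):
        return value if term == var else term
    return (term[0],) + tuple(substitute(t, var, value) for t in term[1:])

# The argument lists of >(x,0) and >(+(x,0),0):
args_flat = [X, ZERO]
args_nested = [(PLUS, X, ZERO), ZERO]

print([substitute(t, X, ZERO) for t in args_flat])    # → [7, 7]
print([substitute(t, X, ZERO) for t in args_nested])  # → [(11, 7, 7), 7]
```

The point of the recursion is exactly the worry raised above: in the nested case the occurrence of x sits inside a coded term, so no single pass over a flat sequence of codes will find it.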

Perhaps you can use the errata to your advantage: Encourage close, engaged reading by offering extra credit for the identification and correction of mistakes.

A few years ago I took a course in Modal Logic from Ed Gettier. It was a terrific course, but Ed made frequent mistakes on the chalk board (probably because he was writing so damn fast). I think the mistakes were effective, pedagogically. We students had to proofread the proofs as they went up and so we really had to follow the reasoning. (I just told this story to my students this morning as I was screwing up a derivation. They kept correcting me and I told them I was making mistakes on purpose for their own good.)