Logic Problems: Paradox Lost

Careful analysis can untangle some logical conundrums


Scientific American, June 2000

Some of the most provocative problems in mathematics concern logical paradoxes. The deepest paradoxes are self-contradictory statements; the best known of these is "This sentence is a lie." To analyze such statements, mathematical logicians have to be very careful in defining their terms. Some paradoxes stand up to strong scrutiny, and when they do, they illuminate the limitations of logical thinking. Other paradoxes don't fare so well under close examination. Here are my views on a few of them, which you may feel free to challenge.

Protagoras was a Greek philosopher who taught law in the 5th century B.C. He had a student who agreed to pay for his law lessons after winning his first case. But the student didn't get any clients, and eventually Protagoras threatened to sue him. Protagoras reckoned that he would win either way: if the court upheld his case, then the student would be required to pay up, but if Protagoras lost, then by their agreement the student would have to pay anyway because he would have just won his first case. The student argued exactly the other way around: if Protagoras won the suit, then the student would still not have won his first case, so by their agreement he would not have to pay, but if Protagoras lost, the court would have ruled that the student did not have to pay.

All great fun, but I don't think this one stands up to scrutiny. Both litigants are doing a pick-and-mix—at one moment they assume that their agreement is valid, but then they assume that the court's decision can override it. But why would you take an issue like this to court in the first place? Because the court's job is to resolve any ambiguities in the contract and override it if need be. So if the court rules for Protagoras, the student has to pay up, but if the court sides with the student, he doesn't have to. Under the harsh glare of logic, this paradox seems to melt away, so I call it an example of Paradox Lost.

Let's turn to a much more interesting paradox devised by Jules Antoine Richard, a French logician. In the English language, some phrases define positive integers, and others do not. For example, "The year of the Declaration of Independence" defines the number 1,776, whereas "The historical significance of the Declaration of Independence" does not define a number. Now consider this phrase: "The smallest number that cannot be defined by a phrase in the English language containing fewer than 20 words." Observe that whatever this number may be, we have just defined it using an English phrase containing only 19 words. Oops.
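The self-undermining word count is easy to verify. As a quick sanity check (my own addition; the article contains no code), a couple of lines of Python confirm that Richard's phrase comes in under its own limit:

```python
# Count the words in Richard's self-referential phrase.
phrase = ("The smallest number that cannot be defined by a phrase "
          "in the English language containing fewer than 20 words")

word_count = len(phrase.split())
print(word_count)  # 19 -- one word short of the 20-word limit it names
```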

What's going on here? The only obvious way out of the conundrum is if the proposed phrase does not actually define a number. If that were the case, the paradox would disappear because the statement would not contradict itself. So we must determine if this hypothetical number—the smallest that cannot be defined in a short phrase—really exists.

If we accept that the English language contains a finite number of words, then the number of phrases with fewer than 20 words is itself finite. For instance, if we allow 99,999 words, then there are at most 100,000^19 phrases of 19 words or fewer. (To include the shorter phrases in the total, we also allow blank words, which explains why the base is 100,000 instead of 99,999.) Of course, many of these phrases make no sense, and many of those that do make sense don't define a positive integer, but that just means we have fewer phrases to consider. Between them, they define a finite set of positive integers, and it is a standard theorem of mathematics that in such circumstances there is a unique smallest positive integer that is not in the set. So on the face of it, Richard's 19-word phrase must define a positive integer.
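The blank-word trick can be checked directly: padding every phrase of at most 19 words out to exactly 19 slots with a blank symbol turns it into a sequence over an alphabet of one more symbol than the vocabulary. A short Python sketch (mine, with vocabulary sizes scaled down alongside the article's actual figures) verifies that the exact count never exceeds this padded upper bound:

```python
# Number of nonempty phrases of at most n words drawn from a
# vocabulary of v words, counted exactly and via the padding bound.
def exact_count(v, n):
    # sum of v**k over phrase lengths k = 1 .. n
    return sum(v**k for k in range(1, n + 1))

def padded_bound(v, n):
    # pad each phrase to n slots with a blank: alphabet grows to v + 1
    return (v + 1)**n

# Check small cases as well as the article's case (v = 99,999, n = 19).
for v in (2, 10, 99_999):
    for n in (2, 5, 19):
        assert exact_count(v, n) <= padded_bound(v, n)

print(padded_bound(99_999, 19))  # 100,000^19, the article's upper bound
```

Either way the count is finite, which is all the argument needs.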

Yet it can't. One could argue that another phrase, "A number that when multiplied by zero gives zero," would let us wriggle off the logical hook because it defines all positive integers, leaving nothing for Richard's phrase to define. But if a phrase is ambiguous, we must rule it out as a definition, because a definition surely requires unambiguity. Is Richard's phrase ambiguous, then? Not really. It defines a number uniquely—there cannot be two distinct smallest numbers that satisfy its conditions. Note that if instead we had considered the phrase "The smallest number that cannot be defined by a phrase in the English language containing fewer than 19 words," we wouldn't have had a problem. So Richard's paradox tells us something quite deep about the limitations of language as a description of arithmetic. Because the paradox survives even the closest examination, I call it an example of Paradox Regained.

In a more recreational vein, there is the Surprise Test paradox. A teacher tells her class that there will be a test one day next week (Monday through Friday) and that it will be a surprise. The teacher can choose any day, and there is no way that the students can predict that day in advance. The students, however, reason like this: if the surprise test were scheduled for Friday, by the end of school hours on Thursday we'd know the test must take place the next day, so it would not be a surprise. Therefore, we can rule out Friday as the day of the surprise test. Now we're down to the same problem with a four-day week (Monday through Thursday). If the surprise test doesn't take place by the end of Wednesday, we'd know it must be scheduled for Thursday, so it would not be a surprise. So we can rule out Thursday as well. And using the same argument, we can rule out Wednesday, Tuesday and Monday. We conclude that no surprise test is possible.
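The students' elimination argument is a backward induction, and it can be restated mechanically. Here is a short Python sketch of my own that formalizes their reasoning, working from the last day of the week back to the first:

```python
# Backward induction over the school week: repeatedly discard the last
# remaining day, since a test left until then would be predictable.
days = ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday"]

possible = list(days)
while possible:
    last = possible.pop()  # a test on the last surviving day is no surprise
    print(f"Rule out {last}; remaining: {possible}")

print(possible)  # [] -- the students conclude no surprise test is possible
```

The code only formalizes the students' chain of deductions; as the discussion below argues, that chain smuggles in a cheat.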

On the other hand, if the teacher sets the test on Wednesday, there seems to be no way the students could actually know this ahead of time. So something is screwy about the logic. Is this a case of Paradox Lost or Paradox Regained?

I think it's an example of something that looks like a paradox but isn't. Let's consider a restatement of the problem that is logically equivalent. Suppose that each morning the students announce confidently, "The test will be today." Eventually they will do so on the day of the test, at which point they will be able to claim that the test was not a surprise. This statement is a cheat—it's true but trivial. If every day you expect the surprise to happen, then naturally you won't be surprised. My view—and I've argued with many mathematicians on this subject—is that the Surprise Test paradox involves the same cheat, but it's dressed up to look mysterious. The cheat is simply less obvious because everything is intuited instead of acted on.

I'm suggesting two things here. The less interesting one is that this paradox hinges on what we mean by "surprise." The more interesting claim is that whatever reasonable thing we mean by "surprise," there are two logically equivalent ways to state the students' prediction strategy. One is the usual way to present the puzzle, which seems to indicate a genuine paradox. The other, which presents the problem in terms of real actions instead of hypothetical ones, turns it into something correct but unremarkable, destroying the element of paradox.

To illustrate my point, I'll add another condition to the Surprise Test paradox. Suppose that the students have poor memories, so that any work they do on a given evening to prepare for the test is forgotten by the next evening. If, as the students claim, the test is not going to be a surprise, they should wait until the evening before the test to do their studying. But if they don't study on Sunday evening and the test is on Monday, they'll fail. The same is true for Monday through Thursday evenings. So despite never being surprised by the test, the students have to study five evenings in a row.

Paradox Lost, I'd say.