Computable Calculus, Oliver Aberth, Academic Press, 2001, ISBN 0-12-041752-9, xiv+192 pp., with compact disc (ISBN 0-12-141753-7)

Constructive mathematics is a marriage of computation and pure mathematics. The basic idea is that all mathematics should be grounded in computation. That may seem pretty uncontroversial, especially to the general public, but mathematicians do lots of things that have no apparent computational basis. The subject here, however, is calculus, which, one would think, is firmly based on computation. Not so. As the theory of calculus is ordinarily taught, the highly noncomputational least upper bound principle is freely invoked and the question of the equality of two given real numbers, which cannot be settled by a computation, is taken as unproblematic. Aberth's book attempts to remedy that.

Heyting [4] contrasted two approaches to constructive mathematics:

It is necessary to distinguish between theories of the constructible and constructive theories; the main difference being that in a constructive theory an object exists only after it has been constructed while in theories of the constructible one defines a certain class of mathematical objects to be constructible objects.
Heyting was an intuitionist, a student of L.E.J. Brouwer. The intuitionists, and those of us who practice constructive mathematics after the manner of Errett Bishop [2], develop constructive theories without defining a class of constructible objects. The rules of constructive proof were spelled out, stripped of their philosophical context, by Heyting's formulation of intuitionistic logic. By restricting ourselves to intuitionistic logic, we can be assured that our theories will be constructive.

On the other hand, those who study recursive functions or Turing machines, and related concepts such as computable real numbers, deal with constructible objects, but not usually from a constructive point of view (see, for example, [8]). The Russian school of constructivism, which inspired Aberth, incorporates both of these approaches---it is a constructive theory of constructible objects. Boris Kushner [5] stated the three principles of this school (see also Sanin [9]):

• The objects of study are constructive objects.

• The intuitive concept of computability is replaced by the precise concept of algorithm.

• A special interpretation of mathematical judgments is used which takes into account the specific nature of constructive objects.
The "constructive objects" studied by this school are finite strings of symbols taken from a finite alphabet. Strings of digits can be interpreted directly as natural numbers; other strings can be interpreted as programs in a programming language---in particular, a real number a can be identified with a program that computes arbitrarily close rational approximations to a. The "precise concept of algorithm" was introduced by Markov [7]. Various other models of computation, such as Turing machines and recursive functions, have been shown to be translatable into Markov algorithms, and vice versa [6]. The "special interpretation of mathematical judgments" is intuitionistic logic, a logic that does not use the law of excluded middle.
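The identification of a real number with an approximation program can be made concrete. Here is a minimal Python sketch (my own illustration, not from the book): a computable real a is represented by a function that, given a precision request n, returns a rational within 2^-n of a.

```python
from fractions import Fraction

def approx_third(n):
    """A 'program for 1/3': return a rational within 2**-n of 1/3
    by truncating the binary expansion after n bits."""
    return Fraction((2 ** n) // 3, 2 ** n)

# Each call yields a rational approximation of the requested accuracy:
for n in (1, 4, 10):
    q = approx_third(n)
    assert abs(q - Fraction(1, 3)) <= Fraction(1, 2 ** n)
```

In this representation, a real number simply *is* such a program (a finite string of symbols), which is what makes it a constructive object in the Russian school's sense.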

Why would anyone reject the law of excluded middle? One consequence of that law is that, for each real number a, either a=0 or a≠0. From a thoroughgoing computational point of view, that should mean that we can calculate whether or not a equals 0. But this is exactly Aberth's Nonsolvable Problem 3.1: "For any real number a, decide whether or not a equals 0." Intuitively, Problem 3.1 is unsolvable because there is clearly no way to figure out from a rational approximation to a, no matter how close, whether or not a is 0. Technically, the unsolvability of the halting problem for Turing machines says that you can't figure it out even if you look at a program that computes arbitrarily close rational approximations to a. Thus, grounding all of our mathematics in computation leads to rejecting the law of excluded middle.
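The asymmetry behind Problem 3.1 can be seen in a small Python sketch (an illustration of mine, assuming the approximation-program representation of reals): from approximations alone one can *confirm* that a number is nonzero, by finding an approximation that is too large to be explained by approximation error, but one can never confirm that it is exactly 0.

```python
from fractions import Fraction

def apart_from_zero(approx, max_n=1000):
    """Search for a witness that the real described by `approx` is nonzero.

    `approx(n)` must return a rational within 2**-n of the real a.
    If |approx(n)| > 2 * 2**-n, then |a| >= |approx(n)| - 2**-n > 2**-n > 0,
    so n witnesses a != 0.  If a = 0 the search would run forever;
    `max_n` cuts it off for the demo."""
    for n in range(1, max_n):
        if abs(approx(n)) > Fraction(2, 2 ** n):
            return n          # witness found: a is provably nonzero
    return None               # no verdict: a may be 0, or merely very small

third = lambda n: Fraction((2 ** n) // 3, 2 ** n)   # approximates 1/3
zero = lambda n: Fraction(0)                        # approximates 0

assert apart_from_zero(third) is not None
assert apart_from_zero(zero) is None
```

The search halts for 1/3 but, absent the cutoff, would never halt for 0: being apart from 0 is semi-decidable, while equality to 0 is not, which is exactly why a=0 or a≠0 fails computationally.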

Aberth draws a distinction between "there exists a number such that ... ," and "there cannot fail to exist a number such that ...." His statement of the intermediate value theorem is, "If f(0)<0<f(1) then a number r between 0 and 1 such that f(r)=0 cannot fail to exist." (Kushner [5] also employs the phrase "cannot fail to exist", but phrases this particular theorem in terms of the impossibility that f(r)≠0 for all r.) Thus Aberth appears to reject classical logic, and so to subscribe to the third principle in Kushner's list. However, instead of adopting a constructive logic, Aberth simply says,

Rather than say "a number x0 exists" or "a number x0 can be found," when x0 sometimes cannot be determined accurately by finite means, it is less confusing to say "the number x0 cannot fail to exist."
Nevertheless, Aberth often uses the word "exists" in a noncomputational sense, as when he says, "A computable way of obtaining K0 as a function of E is not required, only the certainty that for every E an integer K0 exists." Similarly, despite Nonsolvable Problem 3.1, he claims that "for any two real numbers a and b, we have exactly one of the three possibilities: a=b, a<b, or a>b." This suggests that his point of view is more that of a classical recursive function theorist than a Russian constructivist.

Another aspect of the Russian school is that it has a philosophical commitment to constructive objects---those are the only objects on the table. A constructive real number is not thought of as a special type of real number. As Sanin [9, page 11] put it, "Concepts of this kind will not be considered here. They lie outside the scope of the constructive approach to mathematics since they are based upon the nonconstructive concept of real number used in classical mathematics." Aberth does not take this position. He defines computable real numbers as "real numbers for which arbitrarily precise approximations can be computed by a machine".

In spite of these differences, I found that Aberth's previous book, Computable Analysis [1], provides a much better way for the average reader to get acquainted with the ideas of Russian constructivism than do the translations of the formidable Russian sources, and is much more in the spirit of Russian constructivism than is the recursive analysis developed by recursive function theorists. The same can be said of the present book, which is addressed to a less sophisticated audience and is even more user friendly.

The commitment to machine computability has consequences that seem bizarre to the uninitiated. An example is the Specker sequence---an ascending sequence of rational numbers in (0,1) that is eventually bounded away from any (computable) real number. Because of this sequence, the least upper bound principle is not only unprovable, it is demonstrably false, as is the theorem that a continuous function on a compact interval is uniformly continuous. This gives the subject quite a different flavor from bare-bones intuitionism, or Bishop-style constructive mathematics, in which no classical theorem is refutable and every proof can be read as a classical proof. Another surprise is Ceitin's theorem that every function that is defined for all real numbers is continuous, which is also a theorem in full-blown intuitionism for different reasons.
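The shape of Specker's construction can be sketched in code (my own illustration, hedged as follows). Fix a computable enumeration e(0), e(1), ... of a set A of natural numbers and take partial sums of 2^-(n+1) over the enumerated elements: each partial sum is rational and the sequence ascends within [0,1). The theorem requires A to be computably enumerable but *not* computable (e.g. the halting set), in which case the limit is not a computable real; the demo below necessarily substitutes a computable stand-in enumeration, so its limit is computable and only the form of the construction is illustrated.

```python
from fractions import Fraction

def specker_prefix(enumerate_A, k):
    """Partial sum s_k = sum of 2**-(n+1) over the first k distinct
    elements n enumerated by `enumerate_A`.  The s_k are rational and
    ascending, and they are bounded above by 1."""
    s = Fraction(0)
    seen = set()
    for i in range(k):
        n = enumerate_A(i)
        if n not in seen:          # guard against repeats in the enumeration
            seen.add(n)
            s += Fraction(1, 2 ** (n + 1))
    return s

# Stand-in enumeration (demo only): for Specker's theorem, e must
# enumerate a set that is c.e. but NOT computable.
e = lambda i: 2 * i   # enumerates the even numbers

sums = [specker_prefix(e, k) for k in range(5)]
assert all(sums[k] < sums[k + 1] for k in range(4))
assert all(Fraction(0) <= s < Fraction(1) for s in sums)
```

With a genuinely non-computable A, no computable real can be the least upper bound of the s_k, which is how the least upper bound principle becomes refutable in this setting.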

The standard examples of discontinuous functions turn out not to be defined everywhere from a computational point of view. If the function f that is 0 on (-∞,0] and 1 on (0,∞) were defined everywhere, then we could use it to decide, for any real number a, whether |a|≤0 or |a|>0; that is, we could solve Nonsolvable Problem 3.1. Aberth argues that, in fact, f is not defined at 0. That seems odd because f(0) is defined to be 0. I would be more inclined to say that f is defined on (-∞,0]∪(0,∞), but not necessarily on (-∞,∞). That distinction makes sense in the context of intuitionistic logic because you can't prove that (-∞,0]∪(0,∞)=(-∞,∞) without using some form of the law of excluded middle. Aberth, who subscribes to classical logic, has difficulty distinguishing between (-∞,0]∪(0,∞) and (-∞,∞), so if f is defined on the former set, then he considers f to be defined everywhere.

Bishop, in his development of Lebesgue measure on the line [3], uses functions like f all the time. Why does Bishop believe that f is a legitimate function? Suppose we want to calculate f(a) for a in (-∞,0]∪(0,∞). Because a is in the union, either a is in (-∞,0] or a is in (0,∞). That's what it means to be in the union. In the former case, we set f(a)=0; in the latter, we set f(a)=1.

This very straightforward argument is hard to reject, but it can be rejected from Aberth's point of view. The reason is that in order to calculate f(a), we need more information about a than is provided by a program Pa that calculates rational approximations to a; we need the information as to whether a is in (-∞,0] or a is in (0,∞). For Bishop, that information is available: an element of (-∞,0]∪(0,∞) comes with that information by definition. After all, if we wanted to show that a was in (-∞,0]∪(0,∞), we would have to supply that information. But the programs that Aberth uses to calculate f(a) are only allowed to look at the program Pa; they can accept as input only real numbers, not real numbers that are given as belonging to some subset. That's a restriction that makes for simpler programming but less natural mathematics.
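Bishop's reading of the union can be mirrored in code by making membership carry its witness. In this sketch (my own illustration; in a typed language the pair would be a sum type), an element of (-∞,0]∪(0,∞) is not a bare approximation program but a pair recording which half the number was shown to lie in, and that tag is what makes f computable.

```python
from fractions import Fraction

def f(tagged):
    """Bishop-style step function on (-inf,0] U (0,inf).

    An element of the union is a pair (side, approx): `side` records
    which half the number was proved to lie in.  Reading that tag is
    trivial; recovering it from `approx` alone would require solving
    Nonsolvable Problem 3.1."""
    side, approx = tagged
    return 0 if side == "left" else 1

zero = ("left", lambda n: Fraction(0))    # 0, given as a member of (-inf, 0]
one = ("right", lambda n: Fraction(1))    # 1, given as a member of (0, inf)

assert f(zero) == 0
assert f(one) == 1
```

Aberth's machines, by contrast, receive only the approximation program and never the tag, which is precisely why f fails to be defined everywhere in his setting.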

Chapters 5 and 12 of the book are devoted to a particular model of computation in the form of a simple random access machine similar to the RAM of [6]. Software is included with the book in case you want to simulate this machine on your PC. These chapters illustrate a few rudimentary ideas of computable calculus in a concrete computational setting, but the rest of the text can be read independently of them.

## References

[1]
Aberth, Oliver, Computable analysis, McGraw-Hill, 1980.

[2]
Bishop, Errett, Foundations of constructive analysis, McGraw-Hill, 1967.

[3]
Bishop, Errett and Henry Cheng, Constructive measure theory, AMS Memoirs 116, 1972.

[4]
Heyting, Arend, Some remarks on intuitionism, Constructivity in mathematics: Proceedings of the colloquium held at Amsterdam, 1957 (A. Heyting, ed.), 69--71, North-Holland, 1959.

[5]
Kushner, Boris A., Lectures on constructive mathematical analysis, 1973, AMS Translations 60, 1985.

[6]
Machtey, Michael and Paul Young, An introduction to the general theory of algorithms, North-Holland, 1978.

[7]
Markov, A. A., The theory of algorithms, 1954, Israel Program for Scientific Translations, 1962.

[8]
Pour-El, Marian B. and Jonathan I. Richards, Computability in analysis and physics, Springer, 1989.

[9]
Sanin, N. A., Constructive real numbers and function spaces, 1962, AMS Translations 21, 1968.

[10]
Turing, Alan M., On computable numbers, with an application to the Entscheidungsproblem, Proc. London Math. Soc. 42 (1936/37), 230--265; correction 43 (1937), 544--546.
