Few mathematical structures have undergone as many revisions or have been presented in as many guises as the real numbers. Every generation reexamines the reals in the light of its values and mathematical objectives.

F. Faltin, N. Metropolis, B. Ross and G.-C. Rota, *The
real numbers as a wreath product*

Many believers in the equality think that we may no longer discuss how best to capture the intuitive notion of a real number by formal properties. They dismiss any idea at variance with the currently fashionable views. They claim that skeptics who question whether the real numbers form a complete ordered field are simply ignorant of what the real numbers are, or are talking about a different number system.

One argument for the equality goes like this. Set

*x* = 0.999...,

multiply both sides by 10,

10*x* = 9.999...,

subtract the first equation from the second,

9*x* = 9,

so

*x* = 1.
Essentially you are observing that 9*x* + *x* = 9 + *x*, which is true,
and then concluding that 9*x* = 9. That's a valid inference, if *x* is cancellable.
A skeptic would say that 9*x* = 8.999..., which is different from 9, even
though when we add *x* to each of them we get the same thing. The skeptic's
intuition that 0.999... is not equal to 1 conflicts with the
intuition that we should be able to cancel *x*. Which of those intuitions
should take precedence?

An even simpler argument is

1/3 = 0.333...;

multiplying by 3 gives

1 = 0.999....
The second step is pretty hard to fault, so a skeptic must challenge the first equation. This argument gets its force from the fact that most people have been indoctrinated to accept the first equation without thinking.

Yet a third kind of argument proffered by believers is that

0.999... = 9/10 + 9/100 + 9/1000 + ...,

and the sum of the geometric series on the right is

(9/10)/(1 - 1/10) = 1.
A skeptic who accepts the series interpretation could say that 0.999...
*converges* to 1, or that it is equal to 1 *in the limit*, but
is not *equal* to 1. There is an ambiguity in standard usage as to
whether the expression on the right stands for the series or for its limit.
The fact that we use that notation whether the series converges or not
argues in favor of the series interpretation. Also, we talk about the rate
of convergence of such expressions. So some distinction between convergence
and equality in the present case might well be appropriate.
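
The distinction can be made concrete with exact arithmetic. Here is a quick check (our own sketch, using Python's `fractions` module) that the *n*-th partial sum of the series is exactly 1 - 1/10^*n*, which quantifies the rate of convergence just mentioned:

```python
from fractions import Fraction

# The n-th partial sum of 9/10 + 9/100 + 9/1000 + ... is exactly
# 1 - 10**-n, so the series converges to 1 at one digit per term.
s = Fraction(0)
for n in range(1, 8):
    s += Fraction(9, 10**n)
    assert Fraction(1) - s == Fraction(1, 10**n)
```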

Perhaps the situation is that some real numbers can only be approximated, like the square root of 2, whereas others, like 1, can be written exactly, but can also be approximated. So 0.999... is a series that approximates the exact number 1. Of course this dichotomy depends on what we allow for approximations. For some purposes we might allow any rational number, but for our present discussion the terminating decimals---the decimal fractions---are the natural candidates. These can only approximate 1/3, for example, so we don't have an exact expression for 1/3.
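
The approximation claim can be checked with exact rational arithmetic. In the following sketch (the variable names are ours), Python's `fractions` module confirms that truncating 0.333... after *n* digits misses 1/3 by exactly 1/(3·10^*n*), which is never zero:

```python
from fractions import Fraction

# The decimal fraction 0.33...3 (n threes) approximates 1/3 but never
# equals it: the error after n digits is exactly 1/(3 * 10**n).
third = Fraction(1, 3)
for n in range(1, 6):
    approx = Fraction(int("3" * n), 10**n)   # the n-digit truncation
    error = third - approx
    assert error == Fraction(1, 3 * 10**n)
```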

By a **decimal number** we mean an infinite string of digits with a
decimal point placed somewhere in it. As usual, we don't allow the string
to start with a 0 unless the decimal
point comes immediately after. There are a couple of notational
conventions. An infinite string of 6's (without a decimal point) is
denoted by 6*.
So the number at issue, 0.999..., is denoted by
0.9*. The number 120.450* is said to **terminate**, and is
denoted simply by 120.45, while 120.0* is denoted by 120 (with no
decimal point).

The decimal numbers are ordered in the standard way. Line up the decimal points and compare corresponding digits. At the first place where the digits differ, the number with the bigger digit is the bigger number. So 999.999... is less than 1247.421... because the initial 1 of the latter number is the first place where the digits differ, while 1247.430... is bigger than 1247.421... because the 3 in the former number is bigger than the corresponding digit 2 in the latter, and that is the first place they differ. In particular, 0 = 0.0* is the smallest decimal number, and 0.9* is less than 1.
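
Here is a sketch of that comparison rule. The encoding of an eventually repeating decimal number as a triple `(integer_part, fractional_prefix, repeating_digit)` is our own, chosen only to make the examples finite:

```python
# e.g. 0.9* is (0, "", 9), 1 = 1.0* is (1, "", 0), and
# 1247.421... (with repeating 0 tail) is (1247, "421", 0).

def digit(num, i):
    """The i-th fractional digit (0-based) of num."""
    n, prefix, rep = num
    return int(prefix[i]) if i < len(prefix) else rep

def less_than(a, b, places=1000):
    """First compare integer parts (this amounts to lining up the
    decimal points), then find the first differing fractional digit.
    Assumes any difference shows up within `places` digits."""
    if a[0] != b[0]:
        return a[0] < b[0]
    for i in range(places):
        da, db = digit(a, i), digit(b, i)
        if da != db:
            return da < db
    return False  # no difference found: the numbers are equal

assert less_than((999, "9", 9), (1247, "421", 0))     # 999.999... < 1247.421...
assert less_than((1247, "421", 0), (1247, "430", 0))  # first difference: 2 < 3
assert less_than((0, "", 9), (1, "", 0))              # 0.9* < 1
assert not less_than((1, "", 0), (0, "", 9))
```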

How do you add two decimal numbers? There is a problem because carries can
propagate over arbitrarily long stretches, and we can't start adding at the
far right! But the carry can never be bigger than 1, so if the sum of the
two digits in a given place is less than 9, or if it is greater than 9,
then we can compute the digits in the sum up to, but not necessarily
including, that place. If the sum of the digits is *exactly* 9 from
some place on, then there will be no carry at, or past, that place.
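
The carry rule can be sketched as follows, using our own encoding of an eventually repeating decimal number as a triple `(integer_part, fractional_prefix, repeating_digit)`:

```python
def digit(num, i):
    n, prefix, rep = num
    return int(prefix[i]) if i < len(prefix) else rep

def carry_into(a, b, i):
    """Carry arriving from place i of a + b: scan right for the first
    place whose digit sum differs from 9; a sum > 9 forces a carry, a
    sum < 9 kills it, and digit sums of exactly 9 forever give none."""
    for j in range(i, max(len(a[1]), len(b[1]))):
        s = digit(a, j) + digit(b, j)
        if s != 9:
            return 1 if s > 9 else 0
    s = a[2] + b[2]  # past both prefixes the digit sum is constant
    return 1 if s > 9 else 0

def add(a, b, places):
    """Integer part and first `places` fractional digits of a + b."""
    total = a[0] + b[0] + carry_into(a, b, 0)
    digits = [(digit(a, i) + digit(b, i) + carry_into(a, b, i + 1)) % 10
              for i in range(places)]
    return total, digits

# 0.4* + 0.5*: every digit sum is exactly 9, so no carry ever occurs.
assert add((0, "", 4), (0, "", 5), 5) == (0, [9, 9, 9, 9, 9])
# 0.4* + 0.6*: every digit sum is 10, so a carry comes into every place.
assert add((0, "", 4), (0, "", 6), 3) == (1, [1, 1, 1])
```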

- (**Digression**) The question as to whether there is a carry into a given place cannot be decided by a finite computation. That means that you can't necessarily *compute* the decimal expansion of a sum from the decimal expansions of its addends. For example, suppose we have a number *x* whose decimal expansion starts out 0.05555.... So *x* is close to 0.05*, but we can't be sure it is equal to 0.05* because we only know as many digits in its expansion as we care to compute. It may be that *x* is at most 0.05*, or it may be that *x* > 0.05*. What is the first digit after the decimal point in the expansion of 0.04* + *x*? It is 0 if *x* is at most 0.05*, and it is 1 if *x* > 0.05*. We may not have enough information, even after computing the first million places, to determine which of these alternatives holds.

  The fact that you can't compute the decimal expansion of a sum from the decimal expansions of its addends is a well-known phenomenon that was noticed by Turing. In a fully constructive treatment of the real numbers, this is often stated by saying (informally) that not every positive real number has a decimal expansion. More precisely, there is no constructive proof that every positive real number has a decimal expansion (or at least we don't know of one).
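
The digression's example can be simulated (a sketch in our own encoding of eventually repeating decimal numbers as triples `(integer_part, fractional_prefix, repeating_digit)`): two numbers that agree with 0.0555... for a million places give sums with 0.04* that already differ in the very first fractional place.

```python
def digit(num, i):
    n, prefix, rep = num
    return int(prefix[i]) if i < len(prefix) else rep

def carry_into(a, b, i):
    # First place at or past i whose digit sum is not 9 decides the carry.
    for j in range(i, max(len(a[1]), len(b[1]))):
        s = digit(a, j) + digit(b, j)
        if s != 9:
            return 1 if s > 9 else 0
    return 1 if a[2] + b[2] > 9 else 0

def add(a, b, places):
    total = a[0] + b[0] + carry_into(a, b, 0)
    digits = [(digit(a, i) + digit(b, i) + carry_into(a, b, i + 1)) % 10
              for i in range(places)]
    return total, digits

a = (0, "0", 4)                           # 0.04*
x_lo = (0, "0", 5)                        # x = 0.05* exactly
x_hi = (0, "0" + "5" * 1000000 + "6", 5)  # a 6 a million places out

assert add(a, x_lo, 1) == (0, [0])        # sum begins 0.0...
assert add(a, x_hi, 1) == (0, [1])        # sum begins 0.1...
```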

Is 1/3 = 0.3*? Clearly if a sum is cancellable, then each addend is
cancellable. Now 1 is cancellable, but a nonterminating decimal number
*u* is not, since 0.9* + *u* = 1 + *u* (the carry rule above puts a
carry into every place), while 0.9* is different from 1. And no
terminating decimal *x* satisfies 3*x* = 1, because 3 does not divide
any power of 10. So there is no decimal number *x* such that
*x* + *x* + *x* = 1. That is,
1/3 is not a decimal number. More generally, no nonterminating decimal
number *x* can satisfy an equation of the form *mx* = *n*
with *m* and *n* positive integers.

What about multiplication of decimal numbers? It is convenient to define multiplication in terms of cuts, and we will want to look at cuts in any case.

A **Dedekind cut** in an ordered set is a nonempty subset *S*, bounded
above, that is closed downward: if *x* < *y* and *y* is in *S*, then
*x* is in *S*.

This is essentially Dedekind's definition in [1]. Dedekind then
identified the cut {*x* in *D* : *x* < *r*}
with the cut
{*x* in *D* : *x* < *r*
or *x* = *r*}, for
each *r* in *D*, saying they were "only unessentially different." A
similar move, made for example in [4, Definition 1.4], is to
restrict to Dedekind cuts that do not have a greatest element, so
{*x* in *D* : *x* < *r*
or *x* = *r*}
is not considered to be a cut. Why do that? Precisely to rule
out the existence of distinct numbers 0.9* and 1. Indeed, 0.9*
corresponds to the cut
{*x* in *D* : *x* < 1}
while 1 corresponds to the cut
{*x* in *D* : *x* < 1 or *x* = 1}.
In general, we may identify an element *d* in *D* with
the cut
{*x* in *D* : *x* < *d*
or *x* = *d*} (we call these **principal cuts**).
So we see that in the traditional definition
of the real numbers, the equation 0.9* = 1 is built in at the
beginning. That is why anyone who challenges that equation is, in fact,
challenging the traditional formal view of the real numbers.
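
As a concrete restatement (our own predicate encoding, with the decimal fractions modeled by Python's exact `Fraction` type), the two cuts differ in exactly one element:

```python
from fractions import Fraction

def in_cut_nine_star(x):   # 0.9*  <->  {x in D : x < 1}
    return x < 1

def in_cut_one(x):         # 1     <->  {x in D : x < 1 or x = 1}
    return x <= 1

# Every decimal fraction below 1 lies in both cuts ...
for x in [Fraction(9, 10), Fraction(99, 100), Fraction(999, 1000)]:
    assert in_cut_nine_star(x) and in_cut_one(x)
# ... and the two cuts disagree at exactly one element, 1 itself.
assert in_cut_one(Fraction(1)) and not in_cut_nine_star(Fraction(1))
```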

If *D* is the ring of decimal fractions, then each decimal number
*u* gives rise to a Dedekind cut

{*x* in *D* : *x* is at most some truncation of *u*}

in *D*. Note that this cut contains 0.
Conversely, any Dedekind cut *S* in the ring of decimal fractions, that
contains 0, is associated with a unique decimal number *u* as follows.
For fixed *m*, the largest element of *S* that can be written as
a fraction with denominator the *m*-th power of 10 gives
the digits in *u* up to the *m*-th place to the right of the
decimal point.
The nonterminating decimal number 3.1415926535... corresponds to the
cut consisting of all those decimal fractions *r* such that
*r* < 3, or *r* < 3.1, or *r* < 3.14, or
*r* < 3.141, or ... .
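
This recipe can be sketched in code (the function name, the naive downward search, and the predicate representation of a cut are ours): for each *m*, find the largest fraction with denominator 10^*m* lying in the cut.

```python
from fractions import Fraction

def best_approx(member, m, bound=10):
    """Largest k/10**m in the cut; assumes the cut is bounded above
    by `bound` and contains 0, and searches downward from there."""
    k = bound * 10**m
    while k >= 0 and not member(Fraction(k, 10**m)):
        k -= 1
    return Fraction(k, 10**m)

def third_cut(r):          # the cut of the decimal number 0.3*
    return r < Fraction(1, 3)

assert best_approx(third_cut, 1) == Fraction(3, 10)       # 0.3
assert best_approx(third_cut, 2) == Fraction(33, 100)     # 0.33
assert best_approx(third_cut, 3) == Fraction(333, 1000)   # 0.333
```

The nonterminating digits 0.333... emerge one place at a time, just as the cut for 3.1415926535... above is exhausted by its truncations.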

Let cut *D* denote the set of all Dedekind cuts in *D*.
Define the sum of two cuts in the usual way

*u* + *v* = {*s* + *t* : *s* in *u* and *t* in *v*}.

It is easily shown that the commutative and associative laws hold. The
additive identity is the principal cut
0 = {*x* in *D* : *x* < 0 or *x* = 0}.
The elements of *D*, in the guise of principal cuts, form a subgroup
of cut *D*. In fact *D* consists precisely of the (additively)
cancellable elements of cut *D*. This is because

*d* + *u* = {*d* + *x* : *x* in *u*},

so adding a principal cut can be undone by adding its negative,

while if 1¯ = {*x* in *D* : *x* < 1},
then *u* + 1¯ = *u* + 1 for any cut *u*
that is not principal. However, the nonprincipal cuts are cancellable among
themselves, and are closed under addition, so they also form a subgroup of
the monoid, cut *D*. This group may be identified with the
traditional real numbers, as Rudin does with cuts in the rational numbers.
Recall that any traditional positive real number has a unique
*nonterminating* decimal expansion. Note that
0¯ = {*x* in *D* : *x* < 0} is
the identity element of the group of nonprincipal cuts.
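
The additive structure described here can be sketched for the special case (our simplification) of cuts with a decimal-fraction bound, encoding the principal cut {*x* : *x* < *a* or *x* = *a*} as `("le", a)` and the nonprincipal cut {*x* : *x* < *a*} as `("lt", a)`; cuts such as the one for 3.1415926535... have no such bound and do not fit this toy encoding.

```python
from fractions import Fraction

# The sum {s + t : s in u, t in v} attains its least upper bound only
# when both addends attain theirs, so a sum is principal exactly when
# both addends are.

def add_cuts(u, v):
    kind = "le" if u[0] == "le" and v[0] == "le" else "lt"
    return (kind, u[1] + v[1])

one       = ("le", Fraction(1))   # the principal cut for 1
nine_star = ("lt", Fraction(1))   # 0.9*; note this is the same cut as 1-bar
zero_bar  = ("lt", Fraction(0))   # 0-bar = {x : x < 0}

# 0.9* = 1 + 0-bar: 0-bar acts as a sort of negative infinitesimal.
assert add_cuts(one, zero_bar) == nine_star
# u + 1-bar = u + 1 for any nonprincipal cut u, e.g. u = 0.3*:
u = ("lt", Fraction(1, 3))
assert add_cuts(u, nine_star) == add_cuts(u, one) == ("lt", Fraction(4, 3))
```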

The order on cut *D* is given by inclusion of cuts.
The **weakly positive** cuts are those
that contain the rational number 0. These correspond exactly to the decimal
numbers if *D* is the ring of decimal fractions.
The **product** of two weakly positive cuts *u* and *v*
is defined to be
{*st* : *s* is in *u* and
*t* is in *v*}. This multiplication on weakly positive cuts
shows how to multiply any two decimal numbers. It's straightforward to show
that the associative, commutative and distributive laws hold. So
the decimal numbers form a **positive, totally ordered, commutative
semiring** in the sense of [3].

The picture here is the traditional real numbers, in the form of
nonprincipal cuts, living uneasily together with the ring *D*,
in the form of principal cuts. For each
element *d* of *D*, there is a traditional real number
*d*¯ just below it, and
*u* + *d*¯ = *u* + *d* for each traditional real
number *u*. That, for traditionalists, is a complete description of
the additive structure of cut *D*. Note that
*d*¯ = *d* + 0¯.

Clearly 0.9* = 1 + 0¯, so 0¯ is a sort of negative infinitesimal.
On the other hand, you can't solve the equation 0.9* + *X* = 1 because,
in cut *D*, the sum of a traditional real with any real is a
traditional real.

However, cut *D* can be characterized in the following way (for *D* the
nonnegative decimal fractions). It contains the decimal numbers and all
the decimal fractions, both positive and negative. Each element of cut *D*
can be written as a difference *u* - *d* of a decimal number
*u* and a (nonnegative) decimal fraction *d*. We may think of
cut *D* as being obtained from the decimal numbers by adjoining the
negative decimal fractions, and taking sums. This construction is
legitimate because the decimal fractions are cancellable among the
decimal numbers.

Instead of extending the decimal numbers so as to include additive
inverses of those decimal numbers that are cancellable under addition,
we could extend them so as to include multiplicative inverses of those
decimal numbers that are cancellable under multiplication. These are
exactly the positive decimal fractions, because (0.9*)*x* =
*x* whenever *x* is a nonterminating decimal number.
This construction is an instance of forming a semiring of fractions,
see [3]. It is not hard to verify that the result is
(isomorphic to) the weakly positive elements of cut **Q**, where
**Q** is the ring of rational numbers.

Because of this, multiplication of *arbitrary* real numbers is also a
serious problem, if for no other reason than that we don't know how to multiply
-1 by 3.14159265.... Even in the traditional approach, multiplication is
awkward. The elegant treatment of addition is replaced by
an ugly division into cases: one defines how to multiply positive numbers,
and extends to negative numbers according to the usual rules
[4, pages 7--8].

[1] Dedekind, Richard, *Essays on the theory of numbers*, Dover, 1963.

[2] Faltin, F., N. Metropolis, B. Ross and G.-C. Rota, The
real numbers as a wreath product, *Advances in Math*., **16**(1975),
278--304.

[3] Golan, Jonathan S., *The theory of semirings with applications
in mathematics and theoretical computer science*, Longman Scientific
and Technical, 1992.

[4] Rudin, Walter, *Principles of mathematical analysis*,
McGraw-Hill, 1964.

Last modified June 8, 1999