A division algorithm

Fred Richman
Florida Atlantic University
Boca Raton, FL 33431
richman@fau.edu

Abstract: A divisibility test of Arend Heyting, for polynomials over a field in an intuitionistic setting, may be thought of as a kind of division algorithm. We show that such a division algorithm holds for divisibility by polynomials of content 1 over any commutative ring in which nilpotent elements are zero. In addition, for an arbitrary commutative ring R, we characterize those polynomials g such that the R-module endomorphism of R[X] given by multiplication by g has a left inverse.

1   Introduction

In [1], Arend Heyting talks about polynomials over what has become known as a Heyting field [3]. This is a commutative ring with an apartness relation, a ≠ b, that satisfies the following properties:

  1. ¬(a ≠ a)  (consistency)

  2. if a ≠ b, then b ≠ a  (symmetry)

  3. if a ≠ b, then a ≠ c or c ≠ b  (cotransitivity)

  4. if ¬(a ≠ b), then a = b  (tightness)

  5. if a ≠ b, then a + c ≠ b + c  (shift invariance)

  6. a ≠ 0 if and only if a is invertible  (the field property)

If g = a_0 + a_1X + ⋯ + a_nX^n is a polynomial with coefficients in a Heyting field, then g ≠ 0 means a_i ≠ 0 for some i.

The model that Heyting had in mind was the real numbers, with apartness being a positive form of inequality: two real numbers are apart if we can find a positive rational number that bounds them away from one another. Thus tightness is a form of the archimedean property---no infinitesimals---that is the basis for Archimedes' famous proofs by double contradiction. The first three properties are the duals of the axioms for an equivalence relation. We can use the field property to define the relation a ≠ 0, and then define a ≠ b to be a - b ≠ 0 in accordance with shift invariance. Thus we can eliminate the apartness entirely, and just keep the algebraic versions of the first four properties.

The algebraic interpretation of consistency is that zero is not a unit, that is, the ring is nontrivial. Symmetry is automatic while cotransitivity says that the ring is local: if x + y is invertible, then either x or y is invertible. Tightness says that any nonunit is zero, so it is this property that gives us a field in the traditional sense. In the presence of the law of excluded middle, tightness also implies that any nonzero element is a unit. Heyting does not assume that law, so being a unit may be stronger than being nonzero, which is the whole point of introducing the apartness. I'm not concerned with this issue here as I am going to drop the tightness condition.

Many intuitionistic theorems about Heyting fields require neither tightness nor consistency, so they are really theorems about local rings. However, one consequence of tightness and consistency which will play a role in what follows is that nilpotent elements are zero. Indeed, suppose a^n = 0 and we want to show that a = 0. By tightness, it suffices to derive a contradiction from a ≠ 0. But if a ≠ 0, then a is a unit, so a^n is a unit, so a^n ≠ 0. By consistency, this is a contradiction, so a = 0.

This paper was motivated by the following theorem from [1] which I paraphrase.


Theorem 1  [Heyting]   Let R be a Heyting field. Let f and g be polynomials with coefficients in R such that g ≠ 0. Then we can compute elements e_1, …, e_m of R, polynomials in the coefficients of f and g, so that f = gh for some h if and only if e_i = 0 for all i.


For example, if g is a monic polynomial, then we can use the division algorithm to write f = qg + r where deg r < deg g. The coefficients of r serve as the elements e_1, …, e_m in the theorem. Thus Theorem 1 may be thought of as an extension of the division algorithm.
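
For readers who want to compute, here is a minimal sketch of this monic case in Python (the function name divmod_monic and the list representation are ours, not Heyting's); polynomials are coefficient lists c[0] + c[1]X + ⋯, and since the leading coefficient of g is 1 no coefficient is ever divided, so the same code works over any commutative ring whose elements Python can add and multiply. When g is monic, the coefficients of the returned remainder r are the elements e_1, …, e_m of Theorem 1.

def divmod_monic(f, g):
    """Divide f by a monic polynomial g: return (q, r) with f = q*g + r and deg r < deg g."""
    assert g[-1] == 1, "g must be monic"
    n = len(g) - 1                      # degree of g
    r = list(f)                         # working copy of f
    q = [0] * max(len(f) - n, 1)
    for i in range(len(f) - 1, n - 1, -1):
        c = r[i]                        # coefficient of X^i still to be cleared
        q[i - n] = c
        for j, gj in enumerate(g):      # subtract c * X^(i-n) * g from the remainder
            r[i - n + j] -= c * gj
    return q, r[:n]

# Example: X^3 + 1 = (X - 1)(X^2 + X + 1) + 2
# divmod_monic([1, 0, 0, 1], [1, 1, 1]) == ([-1, 1], [2, 0])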

The division algorithm for a monic polynomial g can be thought of in terms of the map l taking f to q. This map is an R-endomorphism of R[X] that is a left inverse for multiplication by g. Given such a left inverse l, we can get Heyting's elements e_1, …, e_m as the coefficients of f - l(f)g. In general, a left inverse is too much to expect (see Theorem 6) so, with Heyting, we construct local left inverses (Theorem 4). This can be done for an arbitrary commutative ring (if g has content 1). However, for local inverses to suffice, we must be able to bound the degree of q in terms of the degree of f, and this is where the condition that nilpotent elements of R are zero comes into play.

2   Local left inverses

Let R be a commutative ring, R[X] the polynomial ring over R in the indeterminate X, and
R[X]_n = {c_0 + c_1X + ⋯ + c_nX^n : c_i ∈ R for i = 0, …, n}
the (free) R-submodule of R[X] consisting of the polynomials of degree at most n. If g ∈ R[X]_n, then multiplication by g induces an R-homomorphism T_m : R[X]_m → R[X]_{m+n}. When does T_m have a left inverse? For R a Heyting field, Heyting showed that T_m has a left inverse if g ≠ 0. His proof goes through for local rings.

We will show, for an arbitrary commutative ring R, that it is enough for the coefficients of g to generate R as an ideal (g has content 1), which in the local case simply says that some coefficient of g is a unit (that is, g ≠ 0). The key fact is the following lemma.


Lemma 2   Let q be an (m+1)-form in Z[X_0, …, X_n]. Then there exist m-forms p_0, …, p_{m+n} in Z[X_0, …, X_n] such that ∑_{k=0}^{n} p_k X_k = q and ∑_{k=0}^{n} p_{k+j} X_k = 0 for 0 < j ≤ m.


Proof. Consider the map F from (m+n+1)-tuples of m-forms to (m+1)-tuples of (m+1)-forms such that F(p_0, …, p_{m+n}) = (v_0, …, v_m) where v_j = ∑_{k=0}^{n} p_{k+j} X_k. We want to show that (q, 0, …, 0) is in the image of F for any (m+1)-form q. By linearity we may assume that q is a monomial. The proof is by induction on m + n. If either m = 0 or n = 0, choose X_i dividing q, let p_i = q/X_i and let p_k = 0 if k ≠ i. So we may assume that m, n > 0.

First suppose that X_n divides q, so q = X_n q'. By induction, because m > 0, there exist (m-1)-forms p'_0, …, p'_{m+n-1} such that ∑_{k=0}^{n} p'_{k+j} X_k = 0 if 0 < j ≤ m - 1, and ∑_{k=0}^{n} p'_k X_k = q'. Set p_k = X_n p'_k for k < m + n and p_{m+n} = -∑_{k=0}^{n-1} p'_{k+m} X_k. Then


\[
\sum_{k=0}^{n} p_{k+j}X_k =
\begin{cases}
X_n\sum_{k=0}^{n} p'_k X_k = X_n q' = q & \text{if } j = 0,\\
X_n\sum_{k=0}^{n} p'_{k+j} X_k = 0 & \text{if } 0 < j < m,\\
X_n\sum_{k=0}^{n-1} p'_{k+m} X_k + p_{m+n} X_n = 0 & \text{if } j = m.
\end{cases}
\]
Note that p_k is divisible by X_n if k < m+n. So if q is divisible by X_n, then (q, 0, …, 0) = F(p_0, …, p_{m+n}) where p_k is divisible by X_n if k < m+n. It follows that F(0, …, 0, p_0, …, p_{m+n-i}) = (v_0, …, v_m) where v_{i+1}, …, v_m are zero, v_0, …, v_{i-1} are divisible by X_n and v_i = q. By taking linear combinations of these, we see that any vector (v_0, …, v_m) is in the image of F if each v_i is divisible by X_n.

Now suppose that X_n does not divide q, so q ∈ Z[X_0, …, X_{n-1}]. By induction, because n > 0, there exist m-forms p_0, …, p_{m+n-1} in Z[X_0, …, X_{n-1}] such that ∑_{k=0}^{n-1} p_k X_k = q and ∑_{k=0}^{n-1} p_{k+j} X_k = 0 for 0 < j ≤ m. For any value of p_{m+n} we have F(p_0, …, p_{m+n}) = (v_0, …, v_m) where v_0 - q and v_1, …, v_m are divisible by X_n. So adding a vector constructed in the previous paragraph gives us (q, 0, …, 0).
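
To illustrate the lemma (this small instance is ours, not part of the original), take m = n = 1 and the monomial q = X_1^2. The 1-forms p_0 = 0, p_1 = X_1, p_2 = -X_0 work:
\[
\sum_{k=0}^{1} p_k X_k = X_1^2 = q, \qquad \sum_{k=0}^{1} p_{k+1} X_k = X_1 X_0 - X_0 X_1 = 0 .
\]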




Theorem 3   Let a_0, a_1, …, a_n, s_0, s_1, …, s_n be indeterminates and m a nonnegative integer. Then there exist integral forms b_0, b_1, …, b_{m+n}, of degree m in the a's and degree 1 in the s's, such that
\[
\sum_{i=0}^{n} b_{i+j} a_i =
\begin{cases}
\sum_{i=0}^{n} s_i a_i^{m+1} & \text{if } j = 0,\\
0 & \text{if } 0 < j \le m.
\end{cases}
\]


Proof. By linearity it suffices, for each k = 0, …, n, to construct forms b_0, b_1, …, b_{m+n} such that
\[
\sum_{i=0}^{n} b_{i+j} a_i =
\begin{cases}
s_k a_k^{m+1} & \text{if } j = 0,\\
0 & \text{if } 0 < j \le m.
\end{cases}
\]
Lemma 2 gives integral m-forms b'_0, b'_1, …, b'_{m+n} in Z[a_0, …, a_n] such that
\[
\sum_{i=0}^{n} b'_{i+j} a_i =
\begin{cases}
a_k^{m+1} & \text{if } j = 0,\\
0 & \text{if } 0 < j \le m.
\end{cases}
\]
Set b_i = s_k b'_i for i = 0, …, m+n.
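
For instance (a check of ours, not in the original), with n = m = 1 one may take b_0 = s_0 a_0, b_1 = s_1 a_1, b_2 = -s_1 a_0: then
\[
b_0 a_0 + b_1 a_1 = s_0 a_0^2 + s_1 a_1^2, \qquad b_1 a_0 + b_2 a_1 = s_1 a_1 a_0 - s_1 a_0 a_1 = 0,
\]
and each b_i has degree 1 in the a's and degree 1 in the s's.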




Theorem 4   If g ∈ R[X]_n, then the map T_m : R[X]_m → R[X]_{m+n}, induced by multiplication by g, has a left inverse if and only if the coefficients of g generate R as an ideal.


Proof. If l is a left inverse of T_m, then l(g) = l(T_m(1)) = 1 ∈ R[X]_m. So following l by evaluation at 0 gives a linear functional on R[X]_{m+n} whose value on g is equal to 1. Thus some linear combination of the coefficients of g is equal to 1.

Conversely, suppose that the coefficients of g generate R as an ideal. Let g = a_0 + a_1X + ⋯ + a_nX^n. Then we can find s_i ∈ R such that s_0 a_0^{m+1} + ⋯ + s_n a_n^{m+1} = 1. By Theorem 3 there exist b_0, b_1, …, b_{m+n} ∈ R so that
\[
\sum_{i=0}^{n} b_{i+j} a_i =
\begin{cases}
1 & \text{if } j = 0,\\
0 & \text{if } 0 < j \le m.
\end{cases}
\]
The map T_m is described by the (m+n+1) × (m+1) matrix
\[
\begin{pmatrix}
a_0 & 0 & \cdots & 0\\
a_1 & a_0 & \cdots & 0\\
\vdots & a_1 & \ddots & \vdots\\
a_n & \vdots & \ddots & a_0\\
0 & a_n & \cdots & a_1\\
\vdots & \vdots & \ddots & \vdots\\
0 & 0 & \cdots & a_n
\end{pmatrix}
\]
whose columns are the coefficients of g, Xg, …, X^m g. The problem is to find a left inverse for this matrix. If we multiply on the left by the (m+1) × (m+n+1) matrix
\[
\begin{pmatrix}
b_0 & b_1 & b_2 & \cdots & b_m & \cdots & b_{m+n}\\
0 & b_0 & b_1 & \cdots & b_{m-1} & \cdots & b_{m+n-1}\\
0 & 0 & b_0 & \cdots & b_{m-2} & \cdots & b_{m+n-2}\\
\vdots & \vdots & \vdots & \ddots & \vdots & & \vdots\\
0 & 0 & 0 & \cdots & b_0 & \cdots & b_n
\end{pmatrix}
\]
we get a lower triangular (m+1) × (m+1) matrix with 1's down the diagonal. This matrix has a left inverse, hence so does the original matrix.
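
To make the matrices concrete (an illustration of ours, not part of the proof), take n = m = 1, g = a_0 + a_1X, and s_0, s_1 with s_0 a_0^2 + s_1 a_1^2 = 1, so that b_0 = s_0 a_0, b_1 = s_1 a_1, b_2 = -s_1 a_0 as in the example after Theorem 3. Then
\[
\begin{pmatrix} b_0 & b_1 & b_2\\ 0 & b_0 & b_1 \end{pmatrix}
\begin{pmatrix} a_0 & 0\\ a_1 & a_0\\ 0 & a_1 \end{pmatrix}
=
\begin{pmatrix} 1 & 0\\ s_0 a_0 a_1 & 1 \end{pmatrix},
\]
which is lower triangular with 1's on the diagonal, as claimed.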



We obtain the desired generalization of Heyting's theorem as a corollary.


Corollary 5   Suppose that nilpotent elements of R are zero and the coefficients of g ∈ R[X]_n generate R as an ideal. Then there exists an R-homomorphism l : R[X]_{m+2n} → R[X]_{m+n} such that f ∈ R[X]_{m+n} ⊆ R[X]_{m+2n} is a multiple of g if and only if f = g l(f).


Proof. Let l be a left inverse of the map R[X]_{m+n} → R[X]_{m+2n} induced by multiplication by g. The ``if'' part is trivial, so suppose f = gh. It suffices to show that h ∈ R[X]_{m+n}. Suppose h = h_0 + h_1X + ⋯ + h_dX^d with d ≥ m+n. Induct on d - m - n. If d = m+n, then we are done. If d > m+n, let S = {1, h_d, h_d^2, …}. Clearly h is regular of degree d over S^{-1}R. Therefore g = 0 over S^{-1}R, because deg gh ≤ m+n < d. As the coefficients of g generate R, the ring S^{-1}R is trivial. Thus h_d is nilpotent in R, therefore zero, and we are done by induction.



We need some hypothesis in this corollary to eliminate the case g = 1 - aX, where a is nilpotent of large order, in which case f = 1 is a multiple of g but not by an element of R[X]_{m+n}.
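
Concretely (a check of ours), if a^N = 0 and a^{N-1} ≠ 0, then
\[
1 = (1 - aX)\left(1 + aX + a^2X^2 + \cdots + a^{N-1}X^{N-1}\right),
\]
and since 1 - aX is a unit this cofactor is the only one; it has degree N - 1, which exceeds m + n as soon as N > m + n + 1.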

3   Global left inverses

Theorem 4 says that some linear combination of the coefficients of g is equal to 1 if and only if the map T : R[X] → R[X], given by multiplication by g, has local left inverses: that is, the restriction of T to a map R[X]_m → R[X]_{m+n} has a left inverse for each m. When does T itself have a left inverse? Equivalently, when is g regular and R[X]g an R-summand of R[X]? If the leading coefficient of g is invertible, then the division algorithm provides a left inverse. If g itself is invertible, like 1 - aX where a is nilpotent, then multiplication by g^{-1} provides a left inverse. If nilpotent elements of R are zero, then we have the following characterization.


Theorem 6   Suppose that nilpotent elements of R are zero and that g is a polynomial of degree n. If the map T : R[X] → R[X] given by multiplication by g has a left inverse, then R = R_1 × ⋯ × R_k and the leading coefficient of g over R_i is invertible for each i.


Proof. Suppose g ∈ R[X]_n and let a be the coefficient of X^n in g. Note that a might be zero. Let I = {r ∈ R : ra^k = 0 for some k}. We will show that 1 ∈ Ra + I.

By hypothesis, R[X] = R[X]g ⊕ P as an R-module. Let R̄ = R/I, noting that a is cancellable in R̄. Then R̄[X] = R̄[X]g ⊕ P̄. If f ∈ R̄[X], then the extended division algorithm [2, Theorem 2.14] says that a^k f ∈ R̄[X]g + R̄[X]_{n-1} for some k. Let P_0 be the projection of R̄[X]_{n-1} into P̄. Then P_0 is finitely generated, hence contained in R̄[X]_i for some i. As a is cancellable, it follows that P̄ ⊆ R̄[X]_i. But P̄ is a summand of R̄[X], so P̄ itself is finitely generated. Now P̄ ≅ R̄[X]/R̄[X]g, so, as P̄ is finitely generated, R̄[X] ⊆ R̄[X]g + R̄[X]_m for some m. In particular, X^{m+1} = qg + r in R̄[X], where r ∈ R̄[X]_m. So g is a factor of a monic polynomial, whence a is invertible in R̄, that is, 1 ∈ Ra + I.

Write 1 = ra + b where a^k b = 0. Then a^k = ra^{k+1}, so Ra^k is generated by the idempotent r^k a^k (iterating a^k = ra^{k+1} gives a^k = r^k a^{2k}, so r^k a^k is indeed idempotent). It follows that R factors as a product R_1 × R_2 with a^k invertible in R_1 and zero in R_2. As nilpotents in R are zero, a is invertible in R_1 and zero in R_2. Thus g is of the desired form over R_1 and is in R_2[X]_{n-1} over R_2, so we are done by induction on n.



Note that the converse of Theorem 6 follows from the remarks about the division algorithm for polynomials g with invertible leading coefficient. The question remains of what happens if R has nilpotent elements.


Lemma 7   Let I be a nilpotent ideal of a commutative ring R, and f = a_0 + a_1X + ⋯ ∈ R[X]. Suppose a_n is invertible and deg f = n modulo I. Then there is g ∈ R[X] such that g ≡ 1 modulo I and deg fg = n modulo I^2.


Proof. Induct on the degree m ≥ n of f modulo I^2. If m = n, take g = 1. If m > n, set h = 1 - a_n^{-1} a_m X^{m-n} ≡ 1 modulo I. Then (fh)_i = a_i - a_n^{-1} a_m a_{i-m+n}, where a_j = 0 for j < 0. Thus (fh)_n ≡ a_n modulo I is invertible, (fh)_m = 0, and (fh)_i ∈ I^2 if i > m because i - m + n > n. So deg fh < m modulo I^2. By induction there exists g' ≡ 1 modulo I so that deg fhg' = n modulo I^2. The desired g of the lemma is hg'.
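
A small instance (ours, not from the paper): take R = Z/8, I = (2), and f = 1 + 2X, so n = 0 and m = 1. The construction gives h = 1 - 2X and fh = 1 - 4X^2, whose degree is 0 modulo I^2 = (4). Applying the lemma once more, with I^2 in place of I, multiplies by 1 + 4X^2 and gives
\[
(1 + 2X)(1 - 2X)(1 + 4X^2) = (1 - 4X^2)(1 + 4X^2) = 1 - 16X^4 = 1
\]
in (Z/8)[X]; such repeated applications are exactly what the proof of Theorem 8 below uses.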



Theorem 8   Let R be a commutative ring with nilradical N, and f ∈ R[X]. Then the following are equivalent:

  1. f = gh where g ≡ 1 modulo N and h has invertible leading coefficient.

  2. f = gh where g is invertible in R[X] and h is monic.

  3. The R-module endomorphism of R[X] given by multiplication by f has a left inverse.

  4. f has invertible leading coefficient modulo N.



Proof. Assume 1. As g ≡ 1 modulo N, we have g = 1 - ν where ν^k = 0 for some k. So g is invertible with 1 + ν + ν^2 + ⋯ + ν^{k-1} as its inverse. If a is the leading coefficient of h, then ag and h/a are the desired polynomials for 2.

Assume 2. Multiplication by g has multiplication by g^{-1} as its left inverse. The division algorithm gives a left inverse for multiplication by h. So multiplication by f = gh has a left inverse.

To see that 3 implies 4, apply Theorem 6 over the ring R/N.

Finally, assume 4. Let n be the degree of f modulo N. Let I be the ideal generated by the coefficients f_i for i > n. Then I is nilpotent. Repeated application of Lemma 7 gives G ≡ 1 modulo N in R[X] so that deg fG = n. Let g = G^{-1} and h = fG.


4   An example and a comment

For g ∈ R[X]_n, let T : R[X] → R[X] and T_m : R[X]_m → R[X]_{m+n} be multiplication by g as before. One way the coefficients of g ∈ R[X]_n can generate R is for g to be monic; another is for g(0) = 1. In the latter case we can use the division algorithm in reverse to write f = q_m g + r_m X^{m+n+1}, with q_m a unique element of R[X]_{m+n}. If f = gh with h ∈ R[X]_m, then r_m = 0, so sending f ∈ R[X]_{m+n} to q_m gives a left inverse for T_m, which is guaranteed by Theorem 4. On the other hand, if f = 1 and g = 1 - aX, then q_m = 1 + aX + a^2X^2 + ⋯ + a^{m+n}X^{m+n} and r_m = a^{m+n+1}, so this technique will not generally construct a left inverse for T. Indeed, if R is an integral domain and a ≠ 0 is not invertible, then Theorem 6 precludes a left inverse for T.
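
Here is a minimal Python sketch of this reverse division (the name qr_reverse and the list representation are ours); polynomials are coefficient lists c[0] + c[1]X + ⋯, the only assumption on g is g[0] = 1, and no coefficient is ever divided, so the computation makes sense over any commutative ring.

def qr_reverse(f, g, m):
    """Write f = q*g + r*X^(m+n+1) with deg q <= m+n, assuming g[0] == 1 and deg f <= m+n.
    Returns (q, r) as coefficient lists, low degree first; n is the formal degree of g."""
    assert g[0] == 1, "g must have constant term 1"
    n = len(g) - 1
    bound = m + n + 1                          # q has coefficients in degrees 0 .. m+n
    work = list(f) + [0] * (bound + n - len(f))
    q = []
    for i in range(bound):                     # clear the coefficient of X^i
        c = work[i]
        q.append(c)
        for j, gj in enumerate(g):             # subtract c * X^i * g
            work[i + j] -= c * gj
    return q, work[bound:]                     # what is left is r * X^(m+n+1)

# Example: f = 1, g = 1 - 2X, m = 2 (so n = 1):
# qr_reverse([1], [1, -2], 2) == ([1, 2, 4, 8], [16])
# i.e. 1 = (1 - 2X)(1 + 2X + 4X^2 + 8X^3) + 16*X^4, the case a = 2 of the computation above.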

Finally, I would like to point out that all the proofs given here are constructive, as befits a paper on a generalization of a theorem of Heyting.

References

[1]
Heyting, Arend, Untersuchungen über intuitionistische Algebra, Verhandelingen Akad. Amsterdam, eerste sectie 18 (1941), 1–36.

[2]
Jacobson, Nathan, Basic Algebra, W. H. Freeman 1974.

[3]
Mines, Ray, Fred Richman and Wim Ruitenburg, A Course in Constructive Algebra, Springer-Verlag, 1988.
