# A division algorithm

## Abstract

A divisibility test of Arend Heyting, for polynomials over a field in an intuitionistic setting, may be thought of as a kind of division algorithm. We show that such a division algorithm holds for divisibility by polynomials of content 1 over any commutative ring in which nilpotent elements are zero. In addition, for an arbitrary commutative ring $R$, we characterize those polynomials $g$ such that the $R$-module endomorphism of $R[X]$ given by multiplication by $g$ has a left inverse.

## 1  Introduction

In [], Arend Heyting talks about polynomials over what has become known as a Heyting field []. This is a commutative ring with an apartness relation, $a \neq b$, that satisfies the following properties:

• Not $a \neq a$. (consistent)

• If $a \neq b$, then $b \neq a$. (symmetric)

• If $a \neq b$, then either $a \neq c$ or $b \neq c$. (cotransitive)

• If not $a \neq b$, then $a = b$. (tight)

• If $a \neq b$, then $a + c \neq b + c$. (shift invariant)

• $a \neq 0$ if and only if $a$ is invertible. (field)

If $g = a_0 + a_1X + \cdots + a_nX^n$ is a polynomial with coefficients in a Heyting field, then $g \neq 0$ means $a_i \neq 0$ for some $i$.

The model that Heyting had in mind was the real numbers, with apartness being a positive form of inequality: two real numbers are apart if we can find a positive rational number that bounds them away from one another. Thus tightness is a form of the archimedean property (no infinitesimals) that is the basis for Archimedes' famous proofs by double contradiction. The first three properties are the duals of the axioms for an equivalence relation. We can use the field property to define the relation $a \neq 0$, and then define $a \neq b$ to be $a - b \neq 0$ in accordance with shift invariance. Thus we can eliminate the apartness entirely, and just keep the algebraic versions of the first four properties.

The algebraic interpretation of consistency is that zero is not a unit, that is, the ring is nontrivial. Symmetry is automatic while cotransitivity says that the ring is local: if x+y is invertible, then either x or y is invertible. Tightness says that any nonunit is zero, so it is this property that gives us a field in the traditional sense. In the presence of the law of excluded middle, tightness also implies that any nonzero element is a unit. Heyting does not assume that law, so being a unit may be stronger than being nonzero, which is the whole point of introducing the apartness. I'm not concerned with this issue here as I am going to drop the tightness condition.

Many intuitionistic theorems about Heyting fields require neither tightness nor consistency, so they are really theorems about local rings. However, one consequence of tightness and consistency which will play a role in what follows is that nilpotent elements are zero. Indeed, suppose $a^n = 0$ and we want to show that $a = 0$. By tightness, it suffices to derive a contradiction from $a \neq 0$. But if $a \neq 0$, then $a$ is a unit, so $a^n$ is a unit, so $a^n \neq 0$. By consistency, this is a contradiction, so $a = 0$.

This paper was motivated by the following theorem from [], which I paraphrase.

Theorem 1 [Heyting] Let $R$ be a Heyting field. Let $f$ and $g$ be polynomials with coefficients in $R$ such that $g \neq 0$. Then we can compute elements $e_1,\dots,e_m$ of $R$, polynomials in the coefficients of $f$ and $g$, so that $f = gh$ for some $h$ if and only if $e_i = 0$ for all $i$.

For example, if $g$ is a monic polynomial, then we can use the division algorithm to write $f = qg + r$ where $\deg r < \deg g$. The coefficients of $r$ serve as the elements $e_1,\dots,e_m$ in the theorem. Thus Theorem 1 may be thought of as an extension of the division algorithm.
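
To make the monic case concrete, here is a small Python sketch of division by a monic polynomial over $\mathbf{Z}$; the helper `divmod_monic` is our own illustration, not from the paper. Polynomials are coefficient lists, lowest degree first.

```python
def divmod_monic(f, g):
    """Return (q, r) with f = q*g + r and deg r < deg g; g must be monic."""
    assert g[-1] == 1, "g must be monic"
    f = f[:]  # work on a copy
    q = [0] * max(len(f) - len(g) + 1, 1)
    for i in range(len(f) - len(g), -1, -1):
        c = f[i + len(g) - 1]       # current leading coefficient
        q[i] = c
        for j, gj in enumerate(g):  # subtract c * X^i * g
            f[i + j] -= c * gj
    r = f[:len(g) - 1]
    return q, r

# f = X^3 + 2X + 5 divided by g = X^2 + 1 gives q = X, r = X + 5
q, r = divmod_monic([5, 2, 0, 1], [1, 0, 1])
# Heyting's elements e_1, ..., e_m are the coefficients of r:
# f is a multiple of g exactly when they all vanish.
```

Here $e_1 = 5$ and $e_2 = 1$ are nonzero, so $f$ is not a multiple of $g$.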

The division algorithm for a monic polynomial $g$ can be thought of in terms of the map $l$ taking $f$ to $q$. This map is an $R$-endomorphism of $R[X]$ that is a left inverse for multiplication by $g$. Given such a left inverse $l$, we can get Heyting's elements $e_1,\dots,e_m$ as the coefficients of $f - l(f)g$. In general, a left inverse is too much to expect (see Theorem 6) so, with Heyting, we construct local left inverses (Theorem 4). This can be done for an arbitrary commutative ring (if $g$ has content 1). However, for local inverses to suffice, we must be able to bound the degree of $q$ in terms of the degree of $f$, and this is where the condition that nilpotent elements of $R$ are zero comes into play.

## 2  Local left inverses

Let $R$ be a commutative ring, $R[X]$ the polynomial ring over $R$ in the indeterminate $X$, and

$$R[X]_n = \{c_0 + c_1X + \cdots + c_nX^n : c_i \in R \text{ for } i = 0,\dots,n\}$$

the (free) $R$-submodule of $R[X]$ consisting of the polynomials of degree at most $n$. If $g \in R[X]_n$, then multiplication by $g$ induces an $R$-homomorphism $T_m : R[X]_m \to R[X]_{m+n}$. When does $T_m$ have a left inverse? For $R$ a Heyting field, Heyting showed that $T_m$ has a left inverse if $g \neq 0$. His proof goes through for local rings.

We will show, for an arbitrary commutative ring $R$, that it is enough for the coefficients of $g$ to generate $R$ as an ideal ($g$ has content 1), which in the local case simply says that some coefficient of $g$ is a unit (that is, $g \neq 0$). The key fact is the following lemma.

Lemma 2 Let $q$ be an $(m+1)$-form in $\mathbf{Z}[X_0,\dots,X_n]$. Then there exist $m$-forms $p_0,\dots,p_{m+n}$ in $\mathbf{Z}[X_0,\dots,X_n]$ such that $\sum_{k=0}^{n} p_kX_k = q$ and $\sum_{k=0}^{n} p_{k+j}X_k = 0$ for $0 < j \le m$.

Proof. Consider the map $F$ from $(m+n+1)$-tuples of $m$-forms to $(m+1)$-tuples of $(m+1)$-forms such that $F(p_0,\dots,p_{m+n}) = (v_0,\dots,v_m)$ where $v_j = \sum_{k=0}^{n} p_{k+j}X_k$. We want to show that $(q,0,\dots,0)$ is in the image of $F$ for any $(m+1)$-form $q$. By linearity we may assume that $q$ is a monomial. The proof is by induction on $m+n$. If either $m = 0$ or $n = 0$, choose $X_i$ dividing $q$, let $p_i = q/X_i$ and let $p_k = 0$ if $k \neq i$. So we may assume that $m, n > 0$.

First suppose that $X_n$ divides $q$, so $q = X_nq'$. By induction, because $m > 0$, there exist $(m-1)$-forms $p_0',\dots,p_{m+n-1}'$ such that $\sum_{k=0}^{n} p_{k+j}'X_k = 0$ if $0 < j \le m-1$, and $\sum_{k=0}^{n} p_k'X_k = q'$. Set $p_k = X_np_k'$ for $k < m+n$ and $p_{m+n} = -\sum_{k=0}^{n-1} p_{k+m}'X_k$. Then

$$\sum_{k=0}^{n} p_{k+j}X_k = \begin{cases} X_n\sum_{k=0}^{n} p_k'X_k = X_nq' = q & \text{if } j = 0\\[2pt] X_n\sum_{k=0}^{n} p_{k+j}'X_k = 0 & \text{if } 0 < j < m\\[2pt] X_n\sum_{k=0}^{n-1} p_{k+m}'X_k + p_{m+n}X_n = 0 & \text{if } j = m \end{cases}$$

Note that $p_k$ is divisible by $X_n$ if $k < m+n$. So if $q$ is divisible by $X_n$, then $(q,0,\dots,0) = F(p_0,\dots,p_{m+n})$ where $p_k$ is divisible by $X_n$ if $k < m+n$. It follows that $F(0,\dots,0,p_0,\dots,p_{m+n-i}) = (v_0,\dots,v_m)$ where $v_{i+1},\dots,v_m$ are zero, $v_0,\dots,v_{i-1}$ are divisible by $X_n$ and $v_i = q$. By taking linear combinations of these, we see that any vector $(v_0,\dots,v_m)$ is in the image of $F$ if each $v_i$ is divisible by $X_n$.

Now suppose that $X_n$ does not divide $q$, so $q \in \mathbf{Z}[X_0,\dots,X_{n-1}]$. By induction, because $n > 0$, there exist $m$-forms $p_0,\dots,p_{m+n-1}$ in $\mathbf{Z}[X_0,\dots,X_{n-1}]$ such that $\sum_{k=0}^{n-1} p_kX_k = q$ and $\sum_{k=0}^{n-1} p_{k+j}X_k = 0$ for $0 < j \le m$. For any value of $p_{m+n}$ we have $F(p_0,\dots,p_{m+n}) = (v_0,\dots,v_m)$ where $v_0 - q$ and $v_1,\dots,v_m$ are divisible by $X_n$. So adding a vector constructed in the previous paragraph gives us $(q,0,\dots,0)$.

Theorem 3 Let $a_0,a_1,\dots,a_n,s_0,s_1,\dots,s_n$ be indeterminates and $m$ a nonnegative integer. Then there exist integral forms $b_0,b_1,\dots,b_{m+n}$ of degree $m$ in the $a$'s and degree $1$ in the $s$'s such that

$$\sum_{i=0}^{n} b_{i+j}a_i = \begin{cases} \sum_{i=0}^{n} s_ia_i^{m+1} & \text{if } j = 0\\[2pt] 0 & \text{if } 0 < j \le m \end{cases}$$

Proof. By linearity it suffices, for each $k = 0,\dots,n$, to construct forms $b_0,b_1,\dots,b_{m+n}$ such that

$$\sum_{i=0}^{n} b_{i+j}a_i = \begin{cases} s_ka_k^{m+1} & \text{if } j = 0\\[2pt] 0 & \text{if } 0 < j \le m \end{cases}$$

Lemma 2 gives integral $m$-forms $b_0',b_1',\dots,b_{m+n}'$ in $\mathbf{Z}[a_0,\dots,a_n]$ such that

$$\sum_{i=0}^{n} b_{i+j}'a_i = \begin{cases} a_k^{m+1} & \text{if } j = 0\\[2pt] 0 & \text{if } 0 < j \le m \end{cases}$$

Set $b_i = s_kb_i'$ for $i = 0,\dots,m+n$.

Theorem 4 If $g \in R[X]_n$, then the map $T_m : R[X]_m \to R[X]_{m+n}$, induced by multiplication by $g$, has a left inverse if and only if the coefficients of $g$ generate $R$ as an ideal.

Proof. If $l$ is a left inverse of $T_m$, then $l(g) = l(T_m(1)) = 1 \in R[X]_m$. So following $l$ by evaluation at $0$ gives a linear functional on $R[X]_{m+n}$ whose value on $g$ is equal to $1$. Thus some linear combination of the coefficients of $g$ is equal to $1$.

Conversely, suppose that the coefficients of $g$ generate $R$ as an ideal. Let $g = a_0 + a_1X + \cdots + a_nX^n$. Then we can find $s_i \in R$ such that $s_0a_0^{m+1} + \cdots + s_na_n^{m+1} = 1$. By Theorem 3 there exist $b_0,b_1,\dots,b_{m+n} \in R$ so that

$$\sum_{i=0}^{n} b_{i+j}a_i = \begin{cases} 1 & \text{if } j = 0\\[2pt] 0 & \text{if } 0 < j \le m \end{cases}$$
The map $T_m$ is described by the $(m+n+1) \times (m+1)$ matrix

$$\begin{pmatrix}
a_0 & 0 & 0 & \cdots & 0 & 0 & 0 & \cdots & 0\\
a_1 & a_0 & 0 & \cdots & 0 & 0 & 0 & \cdots & 0\\
a_2 & a_1 & a_0 & \cdots & 0 & 0 & 0 & \cdots & 0\\
\vdots & \vdots & \vdots & & \vdots & \vdots & \vdots & & \vdots\\
a_n & a_{n-1} & a_{n-2} & \cdots & a_0 & 0 & 0 & \cdots & 0\\
0 & a_n & a_{n-1} & \cdots & a_1 & a_0 & 0 & \cdots & 0\\
0 & 0 & a_n & \cdots & a_2 & a_1 & a_0 & \cdots & 0\\
\vdots & \vdots & \vdots & & \vdots & \vdots & \vdots & & \vdots\\
0 & 0 & 0 & \cdots & a_n & a_{n-1} & a_{n-2} & \cdots & a_0\\
0 & 0 & 0 & \cdots & 0 & a_n & a_{n-1} & \cdots & a_1\\
0 & 0 & 0 & \cdots & 0 & 0 & a_n & \cdots & a_2\\
\vdots & \vdots & \vdots & & \vdots & \vdots & \vdots & & \vdots\\
0 & 0 & 0 & \cdots & 0 & 0 & 0 & \cdots & a_n
\end{pmatrix}$$
The problem is to find a left inverse for this matrix. If we multiply on the left by the $(m+1) \times (m+n+1)$ matrix

$$\begin{pmatrix}
b_0 & b_1 & b_2 & \cdots & b_m & \cdots & b_{m+n}\\
0 & b_0 & b_1 & \cdots & b_{m-1} & \cdots & b_{m+n-1}\\
0 & 0 & b_0 & \cdots & b_{m-2} & \cdots & b_{m+n-2}\\
\vdots & \vdots & \vdots & & \vdots & & \vdots\\
0 & 0 & 0 & \cdots & b_0 & \cdots & b_n
\end{pmatrix}$$

we get a lower triangular $(m+1) \times (m+1)$ matrix with $1$'s down the diagonal. This matrix has a left inverse, hence so does the original matrix.
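
This construction can be checked on a small instance. The following Python sketch is our own illustration (not from the paper): over $R = \mathbf{Z}$ take $g = 2 + 3X$, which has content 1 but no invertible coefficient, and $m = n = 1$; the vector $b = (-4, 3, -2)$, found by hand, satisfies the displayed relations.

```python
def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# g = 2 + 3X over Z; with m = n = 1 the vector b = (-4, 3, -2) gives
#   b0*a0 + b1*a1 = -8 + 9 = 1   (j = 0)
#   b1*a0 + b2*a1 =  6 - 6 = 0   (j = 1)

# T: the (m+n+1) x (m+1) matrix of multiplication by g on R[X]_1
T = [[2, 0], [3, 2], [0, 3]]
# B: the (m+1) x (m+n+1) matrix of shifted b's
B = [[-4, 3, -2], [0, -4, 3]]

# B*T is lower triangular with 1's on the diagonal: [[1, 0], [-12, 1]].
# Its inverse is [[1, 0], [12, 1]], so a left inverse of T is:
L = matmul([[1, 0], [12, 1]], B)
assert matmul(L, T) == [[1, 0], [0, 1]]
```

Note that $T$ has a left inverse even though the leading coefficient of $g$ is not invertible; Section 3 shows this fails for the unrestricted map $T : R[X] \to R[X]$.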

We obtain the desired generalization of Heyting's theorem as a corollary.

Corollary 5 Suppose that nilpotent elements of $R$ are zero and the coefficients of $g \in R[X]_n$ generate $R$ as an ideal. Then there exists an $R$-homomorphism $l : R[X]_{m+2n} \to R[X]_{m+n}$ such that $f \in R[X]_{m+n} \subset R[X]_{m+2n}$ is a multiple of $g$ if and only if $f = g\,l(f)$.

Proof. Let $l$ be a left inverse of the map $R[X]_{m+n} \to R[X]_{m+2n}$ induced by multiplication by $g$. The ``if'' part is trivial, so suppose $f = gh$. It suffices to show that $h \in R[X]_{m+n}$. Suppose $h = h_0 + h_1X + \cdots + h_dX^d$ with $d \ge m+n$. Induct on $d - m - n$. If $d = m+n$, then we are done. If $d > m+n$, let $S = \{1, h_d, h_d^2, \dots\}$. Clearly $h$ is regular of degree $d$ over $S^{-1}R$. Therefore $g = 0$ over $S^{-1}R$, because $\deg gh \le m+n < d$. As the coefficients of $g$ generate $R$, the ring $S^{-1}R$ is trivial. Thus $h_d$ is nilpotent in $R$, therefore zero, and we are done by induction.

We need some hypothesis in this corollary to eliminate the case $g = 1 - aX$, where $a$ is nilpotent of large order, in which case $f = 1$ is a multiple of $g$ but not by an element of $R[X]_{m+n}$.
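
For a concrete instance of this obstruction (our own example, not in the paper), take $R = \mathbf{Z}/8\mathbf{Z}$ and $a = 2$: then $g = 1 - 2X$ is invertible, and $f = 1$ is a multiple of $g$ only via the degree-2 inverse of $g$.

```python
def polymul_mod(f, g, mod):
    """Multiply polynomials (coefficient lists, low degree first) mod `mod`."""
    out = [0] * (len(f) + len(g) - 1)
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            out[i + j] = (out[i + j] + fi * gj) % mod
    return out

g = [1, -2]          # g = 1 - 2X over Z/8, with a = 2 nilpotent
h = [1, 2, 4]        # h = 1 + 2X + 4X^2 is the inverse of g
assert polymul_mod(g, h, 8) == [1, 0, 0, 0]   # g*h = 1 in (Z/8)[X]
```

So $1 = gh$ with $\deg h = 2$, and no $h$ of smaller degree works.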

## 3  Global left inverses

Theorem 4 says that some linear combination of the coefficients of $g$ is equal to $1$ if and only if the map $T : R[X] \to R[X]$, given by multiplication by $g$, has local left inverses: that is, the restriction of $T$ to a map $R[X]_m \to R[X]_{m+n}$ has a left inverse for each $m$. When does $T$ itself have a left inverse? Equivalently, when is $g$ regular and $R[X]g$ an $R$-summand of $R[X]$? If the leading coefficient of $g$ is invertible, then the division algorithm provides a left inverse. If $g$ itself is invertible, like $1 - aX$ where $a$ is nilpotent, then multiplication by $g^{-1}$ provides a left inverse. If nilpotent elements of $R$ are zero, then we have the following characterization.

Theorem 6 Suppose that nilpotent elements of $R$ are zero and that $g$ is a polynomial of degree $n$. If the map $T : R[X] \to R[X]$ given by multiplication by $g$ has a left inverse, then $R = R_1 \times \cdots \times R_k$ and the leading coefficient of $g$ over $R_i$ is invertible for each $i$.

Proof. Suppose $g \in R[X]_n$ and let $a$ be the coefficient of $X^n$ in $g$. Note that $a$ might be zero. Let $I = \{r \in R : ra^k = 0 \text{ for some } k\}$. We will show that $1 \in Ra + I$.

By hypothesis, $R[X] = R[X]g \oplus P$ as an $R$-module. Let $\bar{R} = R/I$, noting that $\bar{a}$ is cancellable in $\bar{R}$. Then $\bar{R}[X] = \bar{R}[X]\bar{g} \oplus \bar{P}$. If $f \in R[X]$, then the extended division algorithm [] says that $a^kf \in R[X]g + R[X]_{n-1}$ for some $k$. Let $P_0$ be the projection of $R[X]_{n-1}$ into $P$. Then $P_0$ is finitely generated, hence contained in $R[X]_i$ for some $i$. As $\bar{a}$ is cancellable, it follows that $\bar{P} \subset \bar{R}[X]_i$. But $\bar{P}$ is a summand of $\bar{R}[X]$, so $\bar{P}$ itself is finitely generated. Now $\bar{P} \cong \bar{R}[X]/\bar{R}[X]\bar{g}$, so, as $\bar{P}$ is finitely generated, $\bar{R}[X] \subset \bar{R}[X]\bar{g} + \bar{R}[X]_m$ for some $m$. In particular, $X^{m+1} = q\bar{g} + r$ in $\bar{R}[X]$, where $r \in \bar{R}[X]_m$. So $\bar{g}$ is a factor of a monic polynomial, whence $\bar{a}$ is invertible, that is, $1 \in Ra + I$.

Write $1 = ra + b$ where $a^kb = 0$. Then $a^k = ra^{k+1}$, so $Ra^k$ is generated by the idempotent $r^ka^k$. It follows that $R$ factors as a product $R_1 \times R_2$ with $a^k$ invertible in $R_1$ and zero in $R_2$. As nilpotents in $R$ are zero, $a$ is invertible in $R_1$ and zero in $R_2$. Thus $g$ is of the desired form over $R_1$ and is in $R_2[X]_{n-1}$ over $R_2$, so we are done by induction on $n$.

Note that the converse of Theorem 6 follows from the remarks about the division algorithm for polynomials $g$ with invertible leading coefficient. The question remains of what happens if $R$ has nilpotent elements.

Lemma 7 Let $I$ be a nilpotent ideal of a commutative ring $R$, and $f = a_0 + a_1X + \cdots \in R[X]$. Suppose $a_n$ is invertible and $\deg f = n$ modulo $I$. Then there is $g \in R[X]$ such that $g \equiv 1 \pmod{I}$ and $\deg fg = n$ modulo $I^2$.

Proof. Induct on the degree $m \ge n$ of $f$ modulo $I^2$. If $m = n$, take $g = 1$. If $m > n$, set $h = 1 - a_n^{-1}a_mX^{m-n} \equiv 1 \pmod{I}$. Then

$$(fh)_i = a_i - a_n^{-1}a_ma_{i-m+n}$$

where $a_j = 0$ for $j < 0$. Thus $(fh)_n \equiv a_n \pmod{I}$ is invertible, $(fh)_m = 0$, and $(fh)_i \in I^2$ if $i > m$ because $i - m + n > n$. So $\deg fh < m$ modulo $I^2$. By induction there exists $g \equiv 1 \pmod{I}$ so that $\deg fhg = n$ modulo $I^2$. The desired $g$ of the lemma is $hg$.

Theorem 8 Let $R$ be a commutative ring with nilradical $N$, and $f \in R[X]$. Then the following are equivalent:

1. $f = gh$ where $g \equiv 1 \pmod{N}$ and $h$ has invertible leading coefficient.

2. $f = gh$ where $g$ is invertible in $R[X]$ and $h$ is monic.

3. The $R$-module endomorphism of $R[X]$ given by multiplication by $f$ has a left inverse.

4. $f$ has invertible leading coefficient modulo $N$.

Proof. Assume 1. As $g \equiv 1 \pmod{N}$ we have $g = 1 - n$ where $n^k = 0$ for some $k$. So $g$ is invertible with $1 + n + n^2 + \cdots + n^{k-1}$ as its inverse. If $a$ is the leading coefficient of $h$, then $ag$ and $h/a$ are the desired polynomials for 2.

Assume 2. Multiplication by $g$ has multiplication by $g^{-1}$ as its left inverse. The division algorithm gives a left inverse for multiplication by $h$. So multiplication by $f = gh$ has a left inverse.

To see that 3 implies 4, apply Theorem 6 over the ring $R/N$.

Finally, assume 4. Let $n$ be the degree of $f$ modulo $N$. Let $I$ be the ideal generated by the coefficients $f_i$ for $i > n$. Then $I$ is nilpotent. Repeated application of Lemma 7 gives $G \equiv 1 \pmod{N}$ in $R[X]$ so that $\deg fG = n$. Let $g = G^{-1}$ and $h = fG$.
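
A minimal instance of the theorem (our own example, not from the paper): over $R = \mathbf{Z}/4\mathbf{Z}$, whose nilradical is $N = (2)$, the polynomial $f = X + 2X^2$ has invertible leading coefficient modulo $N$ and factors as in conditions 1 and 2 with $g = 1 + 2X$ and $h = X$.

```python
def polymul_mod(f, g, mod):
    """Multiply polynomials (coefficient lists, low degree first) mod `mod`."""
    out = [0] * (len(f) + len(g) - 1)
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            out[i + j] = (out[i + j] + fi * gj) % mod
    return out

g = [1, 2]                                   # g = 1 + 2X = 1 mod N
h = [0, 1]                                   # h = X, monic
assert polymul_mod(g, h, 4) == [0, 1, 2]     # g*h = X + 2X^2 = f
assert polymul_mod(g, g, 4) == [1, 0, 0]     # g is a unit: g*g = 1
```

Here $g$ is its own inverse, since $(1 + 2X)^2 = 1 + 4X + 4X^2 \equiv 1$ in $(\mathbf{Z}/4\mathbf{Z})[X]$.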

## 4  An example and a comment

For $g \in R[X]_n$, let $T : R[X] \to R[X]$ and $T_m : R[X]_m \to R[X]_{m+n}$ be multiplication by $g$ as before. One way the coefficients of $g \in R[X]_n$ can generate $R$ is for $g$ to be monic; another is for $g(0) = 1$. In the latter case we can use the division algorithm in reverse to write $f = q_mg + r_mX^{m+n+1}$, with $q_m$ a unique element of $R[X]_{m+n}$. If $f = gh$ with $h \in R[X]_m$, then $r_m = 0$, so sending $f \in R[X]_{m+n}$ to $q_m$ gives a left inverse for $T_m$, which is guaranteed by Theorem 4. On the other hand, if $f = 1$ and $g = 1 - aX$, then $q_m = 1 + aX + a^2X^2 + \cdots + a^{m+n}X^{m+n}$ and $r_m = a^{m+n+1}$, so this technique will not generally construct a left inverse for $T$. Indeed, if $R$ is an integral domain and $a \neq 0$ is not invertible, then Theorem 6 precludes a left inverse for $T$.
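
The reverse division used here is easy to sketch in Python; the helper `reverse_divmod` is our own, for illustration. Over $R = \mathbf{Z}$ with $f = 1$, $g = 1 - 2X$, and $m + n + 1 = 4$, it reproduces the geometric series above.

```python
def reverse_divmod(f, g, bound):
    """Return (q, r) with f = q*g + r*X^bound and deg q < bound; needs g(0) = 1.

    Clears coefficients from the bottom up, the mirror image of ordinary
    polynomial division.  Polynomials are lists, lowest degree first.
    """
    assert g[0] == 1
    f = f[:] + [0] * max(0, bound + len(g) - 1 - len(f))  # pad with zeros
    q = [0] * bound
    for i in range(bound):                 # clear the coefficient of X^i
        c = f[i]
        q[i] = c
        for j, gj in enumerate(g):
            f[i + j] -= c * gj
    return q, f[bound:]

# f = 1, g = 1 - aX with a = 2, and bound m + n + 1 = 4:
q, r = reverse_divmod([1], [1, -2], 4)
assert q == [1, 2, 4, 8]                   # q_m = 1 + 2X + 4X^2 + 8X^3
assert r == [16]                           # r_m = a^{m+n+1} = 2^4
```

Since $2$ is not invertible in $\mathbf{Z}$, the remainders $r_m$ never vanish, matching the failure of a global left inverse predicted above.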

Finally, I would like to point out that all the proofs given here are constructive, as befits a paper on a generalization of a theorem of Heyting.

## References

[]
HEYTING, AREND, Untersuchungen über intuitionistische Algebra, Verhandelingen Akad. Amsterdam, eerste sectie 18 (1941) 1-36.

[]
JACOBSON, NATHAN, Basic Algebra, W. H. Freeman 1974.

[]
MINES, RAY, FRED RICHMAN AND WIM RUITENBURG, A course in constructive algebra, Springer-Verlag, 1988.
