Newton's identities

In mathematics, Newton's identities, also known as the Girard-Newton formulae, give relations between two types of symmetric polynomials, namely between power sums and elementary symmetric polynomials. Evaluated at the roots of a monic polynomial P in one variable, they allow expressing the sums of the k-th powers of all roots of P (counted with their multiplicity) in terms of the coefficients of P, without actually finding those roots. These identities were found by Isaac Newton around 1666, apparently in ignorance of earlier work (1629) by Albert Girard. They have applications in many areas of mathematics, including Galois theory, invariant theory, group theory and combinatorics, as well as further applications outside mathematics, including general relativity.



Mathematical statement

Formulation in terms of symmetric polynomials

Let x_1, ..., x_n be variables, and for k ≥ 1 denote by p_k(x_1, ..., x_n) the k-th power sum:

    p_k(x_1, \ldots, x_n) = \sum_{i=1}^{n} x_i^k = x_1^k + \cdots + x_n^k,

and for k ≥ 0 denote by e_k(x_1, ..., x_n) the elementary symmetric polynomial (that is, the sum of all distinct products of k distinct variables), so

    e_0(x_1, \ldots, x_n) = 1,
    e_1(x_1, \ldots, x_n) = x_1 + x_2 + \cdots + x_n,
    e_2(x_1, \ldots, x_n) = \sum_{1 \le i < j \le n} x_i x_j,
    \ldots
    e_n(x_1, \ldots, x_n) = x_1 x_2 \cdots x_n,
    e_k(x_1, \ldots, x_n) = 0 \quad \text{for } k > n.

Then Newton's identities can be stated as

    p_k = \sum_{i=1}^{k-1} (-1)^{i-1} e_i p_{k-i} + (-1)^{k-1} k e_k,

valid for all n ≥ k ≥ 1. Also, one has

    p_k = \sum_{i=1}^{n} (-1)^{i-1} e_i p_{k-i}

for all k > n ≥ 1.

Concretely, one gets for the first few values of k:

    p_1 = e_1,
    p_2 = e_1 p_1 - 2 e_2,
    p_3 = e_1 p_2 - e_2 p_1 + 3 e_3,
    p_4 = e_1 p_3 - e_2 p_2 + e_3 p_1 - 4 e_4.

The form and validity of these equations do not depend on the number n of variables (although the point where the left-hand side becomes 0 does, namely after the n-th identity), which makes it possible to state them as identities in the ring of symmetric functions. In that ring one has

    p_1 = e_1,
    p_2 = e_1 p_1 - 2 e_2,
    p_3 = e_1 p_2 - e_2 p_1 + 3 e_3,
    p_4 = e_1 p_3 - e_2 p_2 + e_3 p_1 - 4 e_4,

and so on; here the left-hand sides never become zero. These equations allow one to recursively express the e_i in terms of the p_k; to be able to do the inverse, one may rewrite them as

    e_1 = p_1,
    2 e_2 = e_1 p_1 - p_2,
    3 e_3 = e_2 p_1 - e_1 p_2 + p_3,
    4 e_4 = e_3 p_1 - e_2 p_2 + e_1 p_3 - p_4.

In general, we have

    k e_k = \sum_{i=1}^{k} (-1)^{i-1} e_{k-i} p_i,

valid for all n ≥ 1 and n ≥ k ≥ 1. Also, one has

    0 = \sum_{i=k-n}^{k} (-1)^{i-1} e_{k-i} p_i

for all k > n ≥ 1.
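These recurrences are easy to check numerically. The following Python sketch (an illustration added here, not part of the original article; the sample values are arbitrary) computes p_k and e_k directly from their definitions and verifies the identity k e_k = \sum_{i=1}^{k} (-1)^{i-1} e_{k-i} p_i for every k, including k > n, where e_k = 0.

    from itertools import combinations
    from math import prod

    xs = [2, 3, 5, 7]                       # concrete values of x_1, ..., x_n

    def p(k):                               # k-th power sum p_k
        return sum(x**k for x in xs)

    def e(k):                               # k-th elementary symmetric polynomial e_k
        return sum(prod(c) for c in combinations(xs, k))   # e_0 = 1, e_k = 0 for k > n

    # k-th Newton identity: k*e_k = sum_{i=1}^{k} (-1)**(i-1) * e_{k-i} * p_i;
    # for k > n the left-hand side vanishes, which is the second form of the identity.
    for k in range(1, 2 * len(xs) + 1):
        assert k * e(k) == sum((-1)**(i - 1) * e(k - i) * p(i) for i in range(1, k + 1))
    print("Newton's identities hold for", xs)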



Application to the roots of a polynomial

The polynomial with roots x_i may be expanded as

    \prod_{i=1}^{n} (x - x_i) = \sum_{k=0}^{n} (-1)^k a_k x^{n-k},

where the coefficients a_k = e_k(x_1, \ldots, x_n) are the symmetric polynomials defined above. Given the power sums of the roots

    s_k = p_k(x_1, \ldots, x_n) = \sum_{i=1}^{n} x_i^k,

the coefficients of the polynomial with roots x_i may be expressed recursively in terms of the power sums as

    a_0 = 1,
    a_1 = s_1,
    a_2 = \tfrac{1}{2} (a_1 s_1 - s_2),
    a_3 = \tfrac{1}{3} (a_2 s_1 - a_1 s_2 + s_3),
    a_4 = \tfrac{1}{4} (a_3 s_1 - a_2 s_2 + a_1 s_3 - s_4),
    \ldots

Formulating polynomials in this way is useful in using the method of Delves and Lyness[1] to find the zeros of an analytic function.
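As a sketch of this recursion (added for illustration; the helper names and sample roots are mine, not from the article), the following Python code recovers the coefficients a_k = e_k of a monic polynomial from the power sums s_k of its roots and checks them against a direct expansion.

    from itertools import combinations
    from fractions import Fraction
    from math import prod

    roots = [1, 2, 3, 4]                                   # roots x_i of the monic polynomial
    n = len(roots)
    s = [None] + [sum(x**k for x in roots) for k in range(1, n + 1)]   # power sums s_1..s_n

    # a_0 = 1, a_k = (a_{k-1} s_1 - a_{k-2} s_2 + ... + (-1)^{k-1} a_0 s_k) / k
    a = [Fraction(1)]
    for k in range(1, n + 1):
        a.append(sum((-1)**(i - 1) * a[k - i] * s[i] for i in range(1, k + 1)) / k)

    # check: a_k must equal the elementary symmetric polynomial e_k of the roots,
    # so that prod (x - x_i) = x^n - a_1 x^{n-1} + a_2 x^{n-2} - ... + (-1)^n a_n.
    for k in range(n + 1):
        assert a[k] == sum(prod(c) for c in combinations(roots, k))
    print([int(v) for v in a])                             # [1, 10, 35, 50, 24]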


Application to the characteristic polynomial of a matrix

When the polynomial above is the characteristic polynomial of a matrix A (in particular when A is the companion matrix of the polynomial), the roots x_i are the eigenvalues of the matrix, counted with their algebraic multiplicity. For any positive integer k, the matrix A^k has as eigenvalues the powers x_i^k, and each eigenvalue x_i of A contributes its multiplicity to that of the eigenvalue x_i^k of A^k. Then the coefficients of the characteristic polynomial of A^k are given by the elementary symmetric polynomials in those powers x_i^k. In particular, the sum of the x_i^k, which is the k-th power sum p_k of the roots of the characteristic polynomial of A, is given by its trace:

    p_k = \operatorname{tr}(A^k).

The Newton identities now relate the traces of the powers A^k to the coefficients of the characteristic polynomial of A. Using them in reverse to express the elementary symmetric polynomials in terms of the power sums, they can be used to find the characteristic polynomial by computing only the powers A^k and their traces.



This computation requires computing the traces of matrix powers A^k and solving a triangular system of equations. Both can be done in complexity class NC (solving a triangular system can be done by divide-and-conquer). Therefore, the characteristic polynomial of a matrix can be computed in NC. By the Cayley–Hamilton theorem, every matrix satisfies its characteristic polynomial, and a simple transformation then allows the adjugate matrix to be found in NC as well.

Rearranging the computations into an efficient form leads to the Faddeev–LeVerrier algorithm (1840); a fast parallel implementation of it is due to L. Csanky (1976). Its disadvantage is that it requires division by integers, so in general the field should have characteristic 0.
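A minimal NumPy sketch of the trace-based computation (illustrative only; the matrix is an arbitrary example, and the Faddeev–LeVerrier algorithm organizes the same arithmetic more efficiently): compute the power sums p_k = tr(A^k), convert them to elementary symmetric polynomials with Newton's identities, and compare with the coefficients numpy.poly obtains from the eigenvalues.

    import numpy as np

    A = np.array([[2.0, 1.0, 0.0],
                  [0.0, 3.0, 1.0],
                  [1.0, 0.0, 1.0]])
    n = A.shape[0]

    # power sums of the eigenvalues: p_k = tr(A^k)
    t = [None]
    Ak = np.eye(n)
    for k in range(1, n + 1):
        Ak = Ak @ A
        t.append(np.trace(Ak))

    # Newton's identities in reverse: e_k = (e_{k-1} p_1 - e_{k-2} p_2 + ... ) / k
    e = [1.0]
    for k in range(1, n + 1):
        e.append(sum((-1)**(i - 1) * e[k - i] * t[i] for i in range(1, k + 1)) / k)

    # characteristic polynomial: x^n - e_1 x^{n-1} + e_2 x^{n-2} - ... + (-1)^n e_n
    coeffs = [(-1)**k * e[k] for k in range(n + 1)]
    print(np.round(coeffs, 6))
    print(np.round(np.poly(A), 6))      # numpy computes the same coefficients from the eigenvalues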



Relation with Galois theory

For a given n, the elementary symmetric polynomials e_k(x_1, ..., x_n) for k = 1, ..., n form an algebraic basis for the space of symmetric polynomials in x_1, ..., x_n: every polynomial expression in the x_i that is invariant under all permutations of those variables is given by a polynomial expression in those elementary symmetric polynomials, and this expression is unique up to equivalence of polynomial expressions. This is a general fact known as the fundamental theorem of symmetric polynomials, and Newton's identities provide explicit formulae in the case of power sum symmetric polynomials. Applied to the monic polynomial

    t^n + a_1 t^{n-1} + \cdots + a_n

with all coefficients a_k considered as free parameters, this means that every symmetric polynomial expression S(x_1, ..., x_n) in its roots can be expressed instead as a polynomial expression P(a_1, ..., a_n) in terms of its coefficients only, in other words without requiring knowledge of the roots. This fact also follows from general considerations in Galois theory (one views the a_k as elements of a base field with roots in an extension field whose Galois group permutes them according to the full symmetric group, and the field fixed under all elements of the Galois group is the base field).

The Newton identities also permit expressing the elementary symmetric polynomials in terms of the power sum symmetric polynomials, showing that any symmetric polynomial can also be expressed in the power sums. In fact the first n power sums also form an algebraic basis for the space of symmetric polynomials.


Related identities

There are a number of (families of) identities that, while they should be distinguished from Newton's identities, are very closely related to them.

A variant using complete homogeneous symmetric polynomials

Denoting by h_k the complete homogeneous symmetric polynomial that is the sum of all monomials of degree k, the power sum polynomials also satisfy identities similar to Newton's identities, but not involving any minus signs. Expressed as identities in the ring of symmetric functions, they read

    k h_k = \sum_{i=1}^{k} h_{k-i} p_i,

valid for all n ≥ k ≥ 1. Contrary to Newton's identities, the left-hand sides do not become zero for large k, and the right-hand sides contain ever more non-zero terms. For the first few values of k, one has

    h_1 = p_1,
    2 h_2 = h_1 p_1 + p_2,
    3 h_3 = h_2 p_1 + h_1 p_2 + p_3.

These relations can be justified by an argument analogous to the one by comparing coefficients in power series given above, based in this case on the generating function identity

    \sum_{k=0}^{\infty} h_k(x_1, \ldots, x_n) t^k = \prod_{i=1}^{n} \frac{1}{1 - x_i t}.

Proofs of Newton's identities, like the ones given below, cannot be easily adapted to prove these variants of those identities.
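This variant can be checked symbolically. The sketch below (an added illustration, not part of the article) builds p_k and h_k for three variables by brute force with sympy and verifies k h_k = \sum h_{k-i} p_i for the first few k.

    from itertools import combinations_with_replacement
    import sympy as sp

    xs = sp.symbols('x1 x2 x3')

    def p(k):   # power sum p_k
        return sum(x**k for x in xs)

    def h(k):   # complete homogeneous symmetric polynomial h_k: sum of all degree-k monomials
        return sum(sp.Mul(*c) for c in combinations_with_replacement(xs, k))

    for k in range(1, 5):
        lhs = k * h(k)
        rhs = sum(h(k - i) * p(i) for i in range(1, k + 1))
        assert sp.expand(lhs - rhs) == 0
    print("variant identities verified for k = 1..4")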



Expressing elementary symmetric polynomials in terms of power sums

As mentioned, Newton's identities can be used to recursively express elementary symmetric polynomials in terms of power sums. Doing so requires the introduction of integer denominators, so it can be done in the ring Λ_Q of symmetric functions with rational coefficients:

    e_1 = p_1,
    e_2 = \tfrac{1}{2} (p_1^2 - p_2),
    e_3 = \tfrac{1}{6} (p_1^3 - 3 p_1 p_2 + 2 p_3),
    e_4 = \tfrac{1}{24} (p_1^4 - 6 p_1^2 p_2 + 3 p_2^2 + 8 p_1 p_3 - 6 p_4),

and so forth.[2] The general formula can be conveniently expressed as

    e_n = \frac{(-1)^n}{n!} B_n(-p_1, -1!\, p_2, -2!\, p_3, \ldots, -(n-1)!\, p_n),

where B_n is the complete exponential Bell polynomial. This expression also leads to the following identity for generating functions:

    \sum_{k=0}^{\infty} e_k t^k = \exp\left( \sum_{k=1}^{\infty} \frac{(-1)^{k-1}}{k} p_k t^k \right).

Applied to a monic polynomial, these formulae express the coefficients in terms of the power sums of the roots: replace each e_i by a_i and each p_k by s_k.
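A small sympy sketch of this recursion (illustrative; the symbol names are mine): treating p_1, p_2, ... as indeterminates and applying k e_k = \sum (-1)^{i-1} e_{k-i} p_i reproduces the expressions above, including their rational denominators.

    import sympy as sp

    kmax = 4
    p = [None] + list(sp.symbols(f'p1:{kmax + 1}'))    # p[1]..p[kmax] as indeterminates

    e = [sp.Integer(1)]                                # e_0 = 1
    for k in range(1, kmax + 1):
        e.append(sp.expand(sp.Rational(1, k) *
                           sum((-1)**(i - 1) * e[k - i] * p[i] for i in range(1, k + 1))))

    for k in range(1, kmax + 1):
        print(f'e_{k} =', e[k])
    # e_3 = p1**3/6 - p1*p2/2 + p3/3, i.e. (p_1^3 - 3 p_1 p_2 + 2 p_3)/6 as above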

Expressing complete homogeneous symmetric polynomials in terms of power sums

The analogous relations involving complete homogeneous symmetric polynomials can be similarly developed, giving equations

    h_1 = p_1,
    h_2 = \tfrac{1}{2} (p_1^2 + p_2),
    h_3 = \tfrac{1}{6} (p_1^3 + 3 p_1 p_2 + 2 p_3),
    h_4 = \tfrac{1}{24} (p_1^4 + 6 p_1^2 p_2 + 3 p_2^2 + 8 p_1 p_3 + 6 p_4),

and so forth, in which there are only plus signs. In terms of the complete Bell polynomial,

    h_n = \frac{1}{n!} B_n(p_1, 1!\, p_2, 2!\, p_3, \ldots, (n-1)!\, p_n).

These expressions correspond exactly to the cycle index polynomials of the symmetric groups, if one interprets the power sums p_i as indeterminates: the coefficient in the expression for h_k of any monomial p_1^{m_1} p_2^{m_2} \cdots p_l^{m_l} is equal to the fraction of all permutations of k that have m_1 fixed points, m_2 cycles of length 2, ..., and m_l cycles of length l. Explicitly, this coefficient can be written as 1/N, where

    N = \prod_{i=1}^{l} m_i!\, i^{m_i};

this N is the number of permutations commuting with any given permutation π of the given cycle type. The expressions for the elementary symmetric functions have coefficients with the same absolute value, but a sign equal to the sign of π, namely (-1)^{m_2 + m_4 + \cdots}.

These formulas can be proved by an inductive argument.
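The same kind of sympy sketch (again just an illustration added here) produces these expressions and makes the cycle-index reading visible: for k = 3 the coefficients 1/6, 1/2 and 1/3 of p_1^3, p_1 p_2 and p_3 are exactly the fractions of permutations in S_3 that are the identity, a transposition, or a 3-cycle.

    import sympy as sp

    kmax = 4
    p = [None] + list(sp.symbols(f'p1:{kmax + 1}'))

    h = [sp.Integer(1)]                                # h_0 = 1
    for k in range(1, kmax + 1):                       # k*h_k = sum_{i=1}^{k} h_{k-i} * p_i
        h.append(sp.expand(sp.Rational(1, k) *
                           sum(h[k - i] * p[i] for i in range(1, k + 1))))

    print('h_3 =', h[3])   # p1**3/6 + p1*p2/2 + p3/3  -- the cycle index of S_3
    print('h_4 =', h[4])   # p1**4/24 + p1**2*p2/4 + p2**2/8 + p1*p3/3 + p4/4  -- cycle index of S_4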

Expressing power sums in terms of elementary symmetric polynomials

One may also use Newton's identities to express power sums in terms of elementary symmetric polynomials, which does not introduce denominators:

    p_1 = e_1,
    p_2 = e_1^2 - 2 e_2,
    p_3 = e_1^3 - 3 e_1 e_2 + 3 e_3,
    p_4 = e_1^4 - 4 e_1^2 e_2 + 2 e_2^2 + 4 e_1 e_3 - 4 e_4.

The first four formulas were obtained by Albert Girard in 1629 (thus before Newton).[3]

The general formula (for all positive integers m) is:

    p_m = (-1)^m m \sum_{r_1 + 2 r_2 + \cdots + m r_m = m} \frac{(r_1 + r_2 + \cdots + r_m - 1)!}{r_1!\, r_2! \cdots r_m!} \prod_{i=1}^{m} (-e_i)^{r_i}.

This can be conveniently stated in terms of ordinary Bell polynomials, or equivalently as a generating function,[4] analogous to the Bell polynomial exponential generating function given in the previous subsection. The multiple summation formula above can be proved by an inductive argument.
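A matching sympy sketch (illustrative) generates these denominator-free expressions from the recursion p_k = e_1 p_{k-1} - e_2 p_{k-2} + \cdots + (-1)^{k-1} k e_k.

    import sympy as sp

    kmax = 4
    e = [sp.Integer(1)] + list(sp.symbols(f'e1:{kmax + 1}'))   # e_0 = 1, e_1..e_kmax symbolic

    p = [None]
    for k in range(1, kmax + 1):
        p.append(sp.expand(sum((-1)**(i - 1) * e[i] * p[k - i] for i in range(1, k))
                           + (-1)**(k - 1) * k * e[k]))

    for k in range(1, kmax + 1):
        print(f'p_{k} =', p[k])
    # p_3 = e1**3 - 3*e1*e2 + 3*e3   (Girard's formula)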


Expressing power sums in terms of complete homogeneous symmetric polynomials

Finally one may use the variant identities involving complete homogeneous symmetric polynomials similarly to express power sums in terms of them:

    p_1 = h_1,
    p_2 = 2 h_2 - h_1^2,
    p_3 = 3 h_3 - 3 h_1 h_2 + h_1^3,
    p_4 = 4 h_4 - 4 h_1 h_3 - 2 h_2^2 + 4 h_1^2 h_2 - h_1^4,

and so on. Apart from the replacement of each e_i by the corresponding h_i, the only change with respect to the previous family of identities is in the signs of the terms, which in this case depend just on the number of factors present: the sign of the monomial \prod_i h_i^{m_i} is -(-1)^{m_1 + m_2 + m_3 + \cdots}. In particular the above description of the absolute value of the coefficients applies here as well.

The general formula (for all positive integers m) is:

    p_m = -m \sum_{r_1 + 2 r_2 + \cdots + m r_m = m} \frac{(r_1 + r_2 + \cdots + r_m - 1)!}{r_1!\, r_2! \cdots r_m!} \prod_{i=1}^{m} (-h_i)^{r_i}.
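For completeness, the analogous sketch (again only an illustration) uses p_k = k h_k - h_1 p_{k-1} - h_2 p_{k-2} - \cdots - h_{k-1} p_1, which follows from the variant identities above; the output shows the sign pattern described in this subsection.

    import sympy as sp

    kmax = 4
    h = [sp.Integer(1)] + list(sp.symbols(f'h1:{kmax + 1}'))   # h_0 = 1, h_1..h_kmax symbolic

    p = [None]
    for k in range(1, kmax + 1):
        p.append(sp.expand(k * h[k] - sum(h[i] * p[k - i] for i in range(1, k))))

    print('p_3 =', p[3])   # h1**3 - 3*h1*h2 + 3*h3
    print('p_4 =', p[4])   # -h1**4 + 4*h1**2*h2 - 4*h1*h3 - 2*h2**2 + 4*h4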

Expressions as determinants

One can obtain explicit formulas for the above expressions in the form of determinants, by considering the first n of Newton's identities (or their counterparts for the complete homogeneous polynomials) as linear equations in which the elementary symmetric functions are known and the power sums are unknowns (or vice versa), and applying Cramer's rule to find the solution for the final unknown. For instance, taking Newton's identities in the form

    e_1 = p_1,
    2 e_2 = e_1 p_1 - p_2,
    3 e_3 = e_2 p_1 - e_1 p_2 + p_3,
    \ldots
    n e_n = e_{n-1} p_1 - e_{n-2} p_2 + \cdots + (-1)^{n-1} p_n,

we consider p_1, ..., p_{n-1} and p_n as unknowns, and solve for the final one, giving

    p_n = \det \begin{pmatrix}
      e_1 & 1 & 0 & \cdots & 0 \\
      2 e_2 & e_1 & 1 & \cdots & 0 \\
      3 e_3 & e_2 & e_1 & \cdots & 0 \\
      \vdots & \vdots & \vdots & \ddots & \vdots \\
      n e_n & e_{n-1} & e_{n-2} & \cdots & e_1
    \end{pmatrix}.

Solving for e_n instead of for p_n is similar, as are the analogous computations for the complete homogeneous symmetric polynomials; in each case the details are slightly messier than the final results, which are (Macdonald 1979, p. 20):

    e_n = \frac{1}{n!} \det \begin{pmatrix}
      p_1 & 1 & 0 & \cdots & 0 \\
      p_2 & p_1 & 2 & \cdots & 0 \\
      \vdots & \vdots & \vdots & \ddots & \vdots \\
      p_{n-1} & p_{n-2} & \cdots & p_1 & n-1 \\
      p_n & p_{n-1} & \cdots & p_2 & p_1
    \end{pmatrix},

    h_n = \frac{1}{n!} \det \begin{pmatrix}
      p_1 & -1 & 0 & \cdots & 0 \\
      p_2 & p_1 & -2 & \cdots & 0 \\
      \vdots & \vdots & \vdots & \ddots & \vdots \\
      p_{n-1} & p_{n-2} & \cdots & p_1 & -(n-1) \\
      p_n & p_{n-1} & \cdots & p_2 & p_1
    \end{pmatrix},

    p_n = (-1)^{n-1} \det \begin{pmatrix}
      h_1 & 1 & 0 & \cdots & 0 \\
      2 h_2 & h_1 & 1 & \cdots & 0 \\
      \vdots & \vdots & \vdots & \ddots & \vdots \\
      n h_n & h_{n-1} & h_{n-2} & \cdots & h_1
    \end{pmatrix}.

Note that the use of determinants makes the formula for h_n have additional minus signs compared to the one for e_n, while the situation for the expanded forms given earlier is opposite. As remarked in (Littlewood 1950, p. 84) one can alternatively obtain the formula for h_n by taking the permanent of the matrix for e_n instead of the determinant, and more generally an expression for any Schur polynomial can be obtained by taking the corresponding immanant of this matrix.
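The determinant expressions can be sanity-checked symbolically; the sketch below (mine, for n = 3) compares the determinant formulas with the expanded expressions obtained earlier.

    import sympy as sp

    p1, p2, p3 = sp.symbols('p1 p2 p3')
    e1, e2, e3 = sp.symbols('e1 e2 e3')

    # p_3 as a determinant in the e_k
    Mp = sp.Matrix([[e1,   1,  0],
                    [2*e2, e1, 1],
                    [3*e3, e2, e1]])
    assert sp.expand(Mp.det() - (e1**3 - 3*e1*e2 + 3*e3)) == 0

    # e_3 and h_3 as determinants in the p_k (note the sign change on the superdiagonal)
    Me = sp.Matrix([[p1, 1,  0],
                    [p2, p1, 2],
                    [p3, p2, p1]])
    Mh = sp.Matrix([[p1, -1,  0],
                    [p2, p1, -2],
                    [p3, p2, p1]])
    assert sp.expand(Me.det() - (p1**3 - 3*p1*p2 + 2*p3)) == 0   # 3! * e_3
    assert sp.expand(Mh.det() - (p1**3 + 3*p1*p2 + 2*p3)) == 0   # 3! * h_3
    print("determinant formulas agree for n = 3")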

Derivation of the identities

Each of Newton's identities can easily be checked by elementary algebra; however, their validity in general needs a proof. Here are some possible derivations.

From the special case n = k

One can obtain the k-th Newton identity in k variables by substitution into

    \prod_{i=1}^{k} (t - x_i) = \sum_{i=0}^{k} (-1)^{k-i} e_{k-i}(x_1, \ldots, x_k)\, t^i

as follows. Substituting x_j for t gives

    0 = \sum_{i=0}^{k} (-1)^{k-i} e_{k-i}(x_1, \ldots, x_k)\, x_j^i.

Summing over all j gives

    0 = (-1)^k k\, e_k(x_1, \ldots, x_k) + \sum_{i=1}^{k} (-1)^{k-i} e_{k-i}(x_1, \ldots, x_k)\, p_i(x_1, \ldots, x_k),

where the terms for i = 0 were taken out of the sum because p_0 is (usually) not defined. This equation immediately gives the k-th Newton identity in k variables.

Since this is an identity of symmetric polynomials (homogeneous) of degree k, its validity for any number of variables follows from its validity for k variables. Concretely, the identities in n < k variables can be deduced by setting k − n variables to zero. The k-th Newton identity in n > k variables contains more terms on both sides of the equation than the one in k variables, but its validity will be assured if the coefficients of any monomial match. Because no individual monomial involves more than k of the variables, the monomial will survive the substitution of zero for some set of n − k (other) variables, after which the equality of coefficients is one that arises in the k-th Newton identity in k (suitably chosen) variables.


Comparing coefficients in series

Another derivation can be obtained by computations in the ring of formal power series R[[t]], where R is Z[x_1, ..., x_n], the ring of polynomials in n variables x_1, ..., x_n over the integers.

Starting again from the basic relation

    \prod_{i=1}^{n} (t - x_i) = \sum_{k=0}^{n} (-1)^k a_k t^{n-k}

and "reversing the polynomials" by substituting 1/t for t and then multiplying both sides by t^n to remove negative powers of t, gives

    \prod_{i=1}^{n} (1 - x_i t) = \sum_{k=0}^{n} (-1)^k a_k t^k.

(The above computation should be performed in the field of fractions of R[[t]]; alternatively, the identity can be obtained simply by evaluating the product on the left side.)

Swapping sides and expressing the a_i as the elementary symmetric polynomials they stand for gives the identity

    \sum_{k=0}^{n} (-1)^k e_k t^k = \prod_{i=1}^{n} (1 - x_i t).

One formally differentiates both sides with respect to t, and then (for convenience) multiplies by t, to obtain

    \sum_{k=0}^{n} (-1)^k k e_k t^k = -\left( \sum_{k=0}^{n} (-1)^k e_k t^k \right) \left( \sum_{j=1}^{\infty} p_j t^j \right),

where the polynomial on the right-hand side was first rewritten as a rational function in order to be able to factor a product out of the summation, then the fraction in the summand was developed as a series in t, using the formula

    \frac{x t}{1 - x t} = x t + x^2 t^2 + x^3 t^3 + \cdots,

and finally the coefficient of each t^j was collected, giving a power sum. (The series in t is a formal power series, but may alternatively be thought of as a series expansion for t sufficiently close to 0, for those more comfortable with that; in fact one is not interested in the function here, but only in the coefficients of the series.) Comparing coefficients of t^k on both sides one obtains

    (-1)^k k e_k = -\sum_{j=1}^{k} (-1)^{k-j} e_{k-j} p_j,

which gives the k-th Newton identity.
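The series manipulation can be replayed with sympy for a small number of variables (a sketch added here, not part of the derivation itself): expanding -t E'(t)/E(t) for E(t) = \prod (1 - x_i t) reproduces the power-sum generating series \sum p_k t^k term by term.

    import sympy as sp

    t = sp.symbols('t')
    xs = sp.symbols('x1 x2 x3')

    E = sp.Mul(*[1 - x * t for x in xs])        # equals sum_k (-1)^k e_k t^k
    lhs = sp.series(-t * sp.diff(E, t) / E, t, 0, 6).removeO()

    rhs = sum(sum(x**k for x in xs) * t**k for k in range(1, 6))   # p_1 t + ... + p_5 t^5
    assert sp.expand(lhs - rhs) == 0
    print("generating-series identity checked up to t^5")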

As a telescopic sum of symmetric function identities

The following derivation, given essentially in (Mead, 1992), is formulated in the ring of symmetric functions for clarity (all identities are independent of the number of variables). Fix some k > 0, and define the symmetric function r(i) for 2 ≤ i ≤ k as the sum of all distinct monomials of degree k obtained by multiplying one variable raised to the power i with k − i distinct other variables (this is the monomial symmetric function m_γ where γ is a hook shape (i,1,1,...,1)). In particular r(k) = p_k; for r(1) the description would amount to that of e_k, but this case was excluded since here monomials no longer have any distinguished variable. All products p_i e_{k−i} can be expressed in terms of the r(j), with the first and last case being somewhat special. One has

    p_i e_{k-i} = r(i) + r(i+1)  \qquad \text{for } 1 < i < k,

since each product of terms on the left involving distinct variables contributes to r(i), while those where the variable from p_i already occurs among the variables of the term from e_{k−i} contribute to r(i + 1), and all terms on the right are so obtained exactly once. For i = k one multiplies by e_0 = 1, giving trivially

    p_k e_0 = p_k = r(k).

Finally the product p_1 e_{k−1} for i = 1 gives contributions to r(i + 1) = r(2) like for other values i < k, but the remaining contributions produce k times each monomial of e_k, since any one of the variables may come from the factor p_1; thus

    p_1 e_{k-1} = k e_k + r(2).

The k-th Newton identity is now obtained by taking the alternating sum of these equations, in which all terms of the form r(i) cancel out.

Combinatorial proof

A short combinatorial proof of Newton's identities is given in (Zeilberger, 1984).[5]


See also

Power sum symmetric polynomial
Elementary symmetric polynomial
Symmetric function
Fluid solutions, an article giving an application of Newton's identities to computing the characteristic polynomial of the Einstein tensor in the case of a perfect fluid, and similar articles on other types of exact solutions in general relativity.

References

1. Delves, L. M.; Lyness, J. N. (1967). "A Numerical Method of Locating the Zeros of an Analytic Function". Mathematics of Computation. 21: 543–560. doi:10.2307/2004999.
2. N.b., the coefficients of the weighted product terms in the sum given by the identity above are related to the M2 numbers in Section 26.4 of the DLMF and/or the coefficients involved in the expansions of Faà di Bruno's formula.
3. Tignol, Jean-Pierre (2004). Galois' theory of algebraic equations (Reprinted ed.). River Edge, NJ: World Scientific. pp. 37–38. ISBN 981-02-4541-6.
4. Weisstein, Eric W. "Symmetric Polynomial". MathWorld.
5. Zeilberger, Doron (1984). "A Combinatorial Proof of Newton's Identities". Discrete Mathematics. 49: 319. doi:10.1016/0012-365X(84)90171-7.

Tignol, Jean-Pierre (2001). Galois' theory of algebraic equations. Singapore: World Scientific. ISBN 978-981-02-4541-2.
Bergeron, F.; Labelle, G. & Leroux, P. (1998). Combinatorial species and tree-like structures. Cambridge: Cambridge University Press. ISBN 978-0-521-57323-8.
Cameron, Peter J. (1999). Permutation Groups. Cambridge: Cambridge University Press. ISBN 978-0-521-65378-7.
Cox, David; Little, John & O'Shea, Donal (1992). Ideals, Varieties, and Algorithms. New York: Springer-Verlag. ISBN 978-0-387-97847-5.
Eppstein, D.; Goodrich, M. T. (2007). "Space-efficient straggler identification in round-trip data streams via Newton's identities and invertible Bloom filters". Algorithms and Data Structures, 10th International Workshop, WADS 2007. Springer-Verlag, Lecture Notes in Computer Science 4619. pp. 637–648. arXiv:0704.3313. Bibcode:2007arXiv0704.3313E.
Littlewood, D. E. (1950). The theory of group characters and matrix representations of groups. Oxford: Oxford University Press. viii+310. ISBN 0-8218-4067-3.
Macdonald, I. G. (1979). Symmetric functions and Hall polynomials. Oxford Mathematical Monographs. Oxford: The Clarendon Press, Oxford University Press. viii+180. ISBN 0-19-853530-9. MR 0553598.
Macdonald, I. G. (1995). Symmetric functions and Hall polynomials. Oxford Mathematical Monographs (Second ed.). New York: Oxford Science Publications. The Clarendon Press, Oxford University Press. p. x+475. ISBN 0-19-853489-2. MR 1354144.
Mead, D. G. (1992). "Newton's Identities". The American Mathematical Monthly. Mathematical Association of America. 99 (8): 749–751. doi:10.2307/2324242. JSTOR 2324242.
Stanley, Richard P. (1999). Enumerative Combinatorics, Vol. 2. Cambridge University Press. ISBN 0-521-56069-1 (hardback); ISBN 0-521-78987-7 (paperback).
Sturmfels, Bernd (1992). Algorithms in Invariant Theory. New York: Springer-Verlag. ISBN 978-0-387-82445-1.
Tucker, Alan (1980). Applied Combinatorics (5/e ed.). New York: Wiley. ISBN 978-0-471-73507-6.

External links

Newton–Girard formulas on MathWorld


A Matrix Proof of Newton's Identities in Mathematics Magazine
Application on the number of real roots
A Combinatorial Proof of Newton's Identities by Doron Zeilberger
