4. Spectral analysis of the preconditioned matrix


In this section we investigate a regularization preconditioner for solving (1). The idea of preconditioning is to transform the linear system (1) into another one that is easier to solve. Left preconditioning of (1) gives the following new linear system:

$$\mathcal{P}_{\alpha,\hat{S}}^{-1}\mathcal{A}X = \mathcal{P}_{\alpha,\hat{S}}^{-1}b. \qquad (9)$$

Although right preconditioning can also be used in our context, we focus only on left preconditioning throughout this work. We describe a block factorization and some properties of the preconditioner $\mathcal{P}_{\alpha,\hat{S}}$.
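As a concrete illustration of (9), the following minimal sketch applies $\mathcal{P}_{\alpha,\hat{S}}$ as a left preconditioner inside GMRES (scipy's legacy GMRES preconditions from the left, matching (9)). The test problem is an assumption for illustration only: a small dense saddle-point system with SPD block $A$, full-column-rank $B$, $C = -B^T$, $D = O$, $\hat{S}$ taken as the exact Schur complement, and $\alpha = 2$; the blocks of $\mathcal{P}_{\alpha,\hat{S}}$ are assembled as in (10) below.

```python
# Minimal sketch of left preconditioning (9) with GMRES, under illustrative
# assumptions: A SPD, B full column rank, C = -B^T, D = O, S_hat = B^T A^{-1} B.
import numpy as np
from scipy.sparse.linalg import gmres, LinearOperator

rng = np.random.default_rng(0)
n, m, alpha = 50, 10, 2.0
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))
A = A @ A.T                                   # symmetric positive definite block
B = rng.standard_normal((n, m))
C = -B.T
S_hat = -C @ np.linalg.solve(A, B)            # exact Schur complement B^T A^{-1} B

A_cal = np.block([[A, B], [C, np.zeros((m, m))]])   # coefficient matrix of (1)
P = np.block([[A, B], [C, alpha * S_hat]])          # preconditioner (7)

b = rng.standard_normal(n + m)
M = LinearOperator((n + m, n + m), matvec=lambda v: np.linalg.solve(P, v))
x, info = gmres(A_cal, b, M=M, atol=1e-12)
print("converged:", info == 0, " residual:", np.linalg.norm(A_cal @ x - b))
```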




Proposition 2. The preconditioner (7) has the block-triangular factorization

$$
\mathcal{P}_{\alpha,\hat{S}} =
\begin{bmatrix} A & B \\ C & \alpha\hat{S} \end{bmatrix}
=
\begin{bmatrix} I & O \\ CA^{-1} & I \end{bmatrix}
\begin{bmatrix} A & O \\ O & S_s \end{bmatrix}
\begin{bmatrix} I & A^{-1}B \\ O & I \end{bmatrix}, \qquad (10)
$$

where $S_s = \alpha\hat{S} - CA^{-1}B$.






The inverse of the preconditioner matrix $\mathcal{P}_{\alpha,\hat{S}}$ is given by

$$
\mathcal{P}_{\alpha,\hat{S}}^{-1} =
\begin{bmatrix} A & B \\ C & \alpha\hat{S} \end{bmatrix}^{-1}
=
\begin{bmatrix} I & -A^{-1}B \\ O & I \end{bmatrix}
\begin{bmatrix} A^{-1} & O \\ O & S_s^{-1} \end{bmatrix}
\begin{bmatrix} I & O \\ -CA^{-1} & I \end{bmatrix}. \qquad (11)
$$
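In practice, (11) means that applying $\mathcal{P}_{\alpha,\hat{S}}^{-1}$ to a vector costs two solves with $A$ and one solve with $S_s$, rather than a factorization of $\mathcal{P}_{\alpha,\hat{S}}$ itself. A sketch, with dense solves and the hypothetical helper name apply_P_inv chosen for illustration:

```python
# Sketch of applying P^{-1} via the factorization (11): two solves with A and
# one solve with S_s per application. Dense solves are used for simplicity.
import numpy as np

def apply_P_inv(A, B, C, S_hat, alpha, r):
    n = A.shape[0]
    r1, r2 = r[:n], r[n:]
    # S_s is formed here for clarity; in practice it would be precomputed.
    Ss = alpha * S_hat - C @ np.linalg.solve(A, B)
    t1 = np.linalg.solve(A, r1)                  # A^{-1} r1
    t2 = np.linalg.solve(Ss, r2 - C @ t1)        # S_s^{-1} (r2 - C A^{-1} r1)
    return np.concatenate([t1 - np.linalg.solve(A, B @ t2), t2])

# quick consistency check against a direct solve with P
rng = np.random.default_rng(1)
n, m, alpha = 8, 3, 2.0
A = np.eye(n)
B = rng.standard_normal((n, m))
C = -B.T
S_hat = -C @ B                                   # = B^T B since A = I
P = np.block([[A, B], [C, alpha * S_hat]])
r = rng.standard_normal(n + m)
print(np.allclose(apply_P_inv(A, B, C, S_hat, alpha, r), np.linalg.solve(P, r)))
```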
The eigenvalue and eigenvector distributions of the preconditioned matrix relate closely to the convergence rate of Krylov subspace methods. Therefore, it is meaningful to investigate the spectral properties of the preconditioned matrix $\mathcal{P}_{\alpha,\hat{S}}^{-1}\mathcal{A}$. In the following theorem, we deduce the eigenvalue distribution of the preconditioned matrix

$$
\mathcal{P}_{\alpha,\hat{S}}^{-1}\mathcal{A} =
\begin{bmatrix} I_n & K_1 \\ O & K_2 \end{bmatrix}, \qquad (12)
$$

where $S_s = \alpha\hat{S} - CA^{-1}B$, $K_1 = (I + A^{-1}BS_s^{-1}C)A^{-1}B$ and $K_2 = -S_s^{-1}CA^{-1}B$.
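The block form (12) can be checked numerically; the sketch below, on the same illustrative test problem as before, builds $K_1$ and $K_2$ from their definitions and compares them with $\mathcal{P}_{\alpha,\hat{S}}^{-1}\mathcal{A}$.

```python
# Numerical check of the block-triangular structure (12), illustrative problem.
import numpy as np

rng = np.random.default_rng(2)
n, m, alpha = 6, 2, 2.0
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))
A = A @ A.T
B = rng.standard_normal((n, m))
C = -B.T
S_hat = -C @ np.linalg.solve(A, B)
A_cal = np.block([[A, B], [C, np.zeros((m, m))]])
P = np.block([[A, B], [C, alpha * S_hat]])

AinvB = np.linalg.solve(A, B)
Ss = alpha * S_hat - C @ AinvB
K1 = (np.eye(n) + AinvB @ np.linalg.solve(Ss, C)) @ AinvB
K2 = -np.linalg.solve(Ss, C @ AinvB)
lhs = np.linalg.solve(P, A_cal)                       # P^{-1} A
rhs = np.block([[np.eye(n), K1], [np.zeros((m, n)), K2]])
print(np.allclose(lhs, rhs))                          # expected: True
```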


Theorem 1. Let the preconditioned matrix be defined as in (12) and let B have full column rank. Then $\mathcal{P}_{\alpha,\hat{S}}^{-1}\mathcal{A}$ has $m_1 + 1$ distinct eigenvalues $\{1, \lambda_1, \ldots, \lambda_{m_1}\}$ and $(n + m)$ linearly independent eigenvectors, where $1 \le m_1 \le m$. Moreover:

  • 1 is an eigenvalue with multiplicity n.

  • If $\alpha/2 > \operatorname{Re}(\mu)$, then $|\lambda_i| < 1$ for $i = 1, \ldots, m_1$, where $\mu$ is the corresponding eigenvalue of the matrix $\hat{S}^{-1}CA^{-1}B$.

  • If $\alpha/2 < \operatorname{Re}(\mu)$, then $|\lambda_i| > 1$ for $i = 1, \ldots, m_1$.

  • If $\hat{S}^{-1}CA^{-1}B$ is diagonalizable, then $\mathcal{P}_{\alpha,\hat{S}}^{-1}\mathcal{A}$ is diagonalizable.
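Before the proof, a small numerical sketch (same illustrative problem; $\hat{S}$ is perturbed away from the exact Schur complement so that the $\mu$'s are not all equal) checks the eigenvalue claims: $n$ eigenvalues equal to 1, and the rest given by $\lambda = \mu/(\mu - \alpha)$, which is relation (16) derived in the proof.

```python
# Sketch verifying Theorem 1 numerically on an illustrative problem.
import numpy as np

rng = np.random.default_rng(3)
n, m, alpha = 6, 2, 2.0
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))
A = A @ A.T
B = rng.standard_normal((n, m))
C = -B.T
S_hat = -C @ np.linalg.solve(A, B) + 0.05 * np.eye(m)  # perturbed Schur approximation
A_cal = np.block([[A, B], [C, np.zeros((m, m))]])
P = np.block([[A, B], [C, alpha * S_hat]])

eig = np.sort_complex(np.linalg.eigvals(np.linalg.solve(P, A_cal)))
mu = np.linalg.eigvals(np.linalg.solve(S_hat, C @ np.linalg.solve(A, B)))
lam = mu / (mu - alpha)                                # relation (16)
pred = np.sort_complex(np.concatenate([np.ones(n), lam]))
print("spectrum matches {1} U {mu/(mu-alpha)}:", np.allclose(eig, pred))
mask = alpha / 2 > mu.real
print("|lambda_i| < 1 wherever alpha/2 > Re(mu):", np.all(np.abs(lam[mask]) < 1))
```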


Proof. Let $\lambda$ be an eigenvalue of the preconditioned matrix $\mathcal{P}_{\alpha,\hat{S}}^{-1}\mathcal{A}$ and let $[x^T, y^T]^T$ be the corresponding eigenvector. To derive the eigenvector distribution, we consider the following generalized eigenvalue problem:

$$
\mathcal{P}_{\alpha,\hat{S}}^{-1}\mathcal{A}
\begin{bmatrix} x \\ y \end{bmatrix} =
\begin{bmatrix} I_n & K_1 \\ O & K_2 \end{bmatrix}
\begin{bmatrix} x \\ y \end{bmatrix} =
\lambda \begin{bmatrix} x \\ y \end{bmatrix}. \qquad (13)
$$
Equation (13) can be equivalently rewritten as

$$
\begin{cases}
(1 - \lambda)\,x = -K_1 y, \\[4pt]
CA^{-1}By = \dfrac{\alpha\lambda}{\lambda - 1}\,\hat{S}y.
\end{cases} \qquad (14)
$$

If $\lambda = 1$ holds true, then from the second equation of (14) we easily get $y = 0$. Thus there are $n$ linearly independent eigenvectors $\begin{bmatrix} u^{(i)} \\ 0 \end{bmatrix}$ $(i = 1, \ldots, n)$ corresponding to the eigenvalue 1, where the $u^{(i)}$ are arbitrary linearly independent vectors.

If $\lambda \neq 1$ and $y = 0$, then we obtain from the first equation of (14) that $x = 0$. This contradicts the assumption that $[x^T, y^T]^T$ is an eigenvector of the preconditioned matrix $\mathcal{P}_{\alpha,\hat{S}}^{-1}\mathcal{A}$, and hence $y \neq 0$. Since $y$ satisfies the second equation of (14), we have

$$CA^{-1}By = \mu\hat{S}y, \qquad (15)$$


where

$$\mu = \frac{\alpha\lambda}{\lambda - 1}.$$

We deduce that

$$\lambda = \frac{\mu}{\mu - \alpha}. \qquad (16)$$

Equation (16) can be rewritten as follows:

$$\lambda = \frac{a + ib}{a - \alpha + ib}, \qquad (17)$$

where $i$ is the imaginary unit ($i^2 = -1$), and $a$ and $b$ are the real part and the imaginary part of $\mu$. Hence $|\lambda|^2 = (a^2 + b^2)/\big((a - \alpha)^2 + b^2\big)$, so $|\lambda| < 1$ if and only if $a^2 < (a - \alpha)^2$, that is, $\alpha(\alpha - 2a) > 0$. Therefore, if $\alpha/2 > a$, then $|\lambda| < 1$; if $\alpha/2 < a$, then $|\lambda| > 1$. Equation (15) can be rewritten as follows:

$$\hat{S}^{-1}CA^{-1}By = \mu y. \qquad (18)$$



Since $\hat{S}^{-1}CA^{-1}B$ is diagonalizable, there are $m$ linearly independent eigenvectors $w^{(i)}$ $(i = 1, \ldots, m)$. Let $U$, $V$ and $W$ be the matrices whose columns are $u^{(1)}, \ldots, u^{(n)}$, $v^{(1)}, \ldots, v^{(m)}$ and $w^{(1)}, \ldots, w^{(m)}$, respectively, where the $v^{(i)}$ $(i = 1, \ldots, m)$ are given by the following relation:

$$v^{(i)} = \frac{1}{\lambda_i - 1}\,K_1 w^{(i)}, \qquad (19)$$

with $\lambda_i \neq 1$ the eigenvalue associated with $w^{(i)}$ through (16). Since $U$ and $W$ are nonsingular matrices, we infer that

$$\det\begin{bmatrix} U & V \\ O & W \end{bmatrix} = \det(U)\det(W) \neq 0,$$

and $\mathcal{P}_{\alpha,\hat{S}}^{-1}\mathcal{A}$ is diagonalizable.

When the hypotheses of the preceding theorem are satisfied, the preconditioned Krylov subspace methods converge in at most $m_1 + 1$ iterations; the sketch below illustrates this iteration count. Moreover, from Theorem 1 we can see that the bound depends only on the distribution of the eigenvalues $\{1, \lambda_1, \ldots, \lambda_{m_1}\}$ and on the matrix of eigenvectors.
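A sketch of the iteration count (illustrative problem as before): with $\hat{S}$ equal to the exact Schur complement, $\hat{S}^{-1}CA^{-1}B = -I$ has the single eigenvalue $\mu = -1$, so $m_1 = 1$ and GMRES should stop after at most $m_1 + 1 = 2$ iterations.

```python
# Counting preconditioned GMRES iterations on the illustrative problem; with
# the exact Schur complement, convergence in at most m1 + 1 = 2 iterations.
import numpy as np
from scipy.sparse.linalg import gmres, LinearOperator

rng = np.random.default_rng(4)
n, m, alpha = 40, 8, 2.0
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))
A = A @ A.T
B = rng.standard_normal((n, m))
C = -B.T
S_hat = -C @ np.linalg.solve(A, B)
A_cal = np.block([[A, B], [C, np.zeros((m, m))]])
P = np.block([[A, B], [C, alpha * S_hat]])

residuals = []
M = LinearOperator((n + m, n + m), matvec=lambda v: np.linalg.solve(P, v))
x, info = gmres(A_cal, rng.standard_normal(n + m), M=M, atol=1e-12,
                callback=lambda prnorm: residuals.append(prnorm),
                callback_type='pr_norm')
print("iterations:", len(residuals))          # expected: about 2
```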
The preconditioned matrix can be rewritten as follows:

$$\mathcal{P}_{\alpha,\hat{S}}^{-1}\mathcal{A} = H + S,$$

where $H$ is the Hermitian matrix

$$
H = \begin{bmatrix} I_n & \dfrac{K_1}{2} \\[6pt] \dfrac{K_1^T}{2} & \dfrac{K_2 + K_2^T}{2} \end{bmatrix}, \qquad (20)
$$

and $S$ is the skew-Hermitian matrix

$$
S = \begin{bmatrix} O & \dfrac{K_1}{2} \\[6pt] -\dfrac{K_1^T}{2} & \dfrac{K_2 - K_2^T}{2} \end{bmatrix}. \qquad (21)
$$
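The splitting (20)–(21) can be confirmed numerically; the sketch below (illustrative problem) checks that $H$ and $S$ are the symmetric and skew-symmetric parts of the real matrix (12) and that they sum back to it.

```python
# Check of the splitting P^{-1}A = H + S from (20)-(21), illustrative problem.
import numpy as np

rng = np.random.default_rng(5)
n, m, alpha = 6, 2, 2.0
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))
A = A @ A.T
B = rng.standard_normal((n, m))
C = -B.T
S_hat = -C @ np.linalg.solve(A, B)

AinvB = np.linalg.solve(A, B)
Ss = alpha * S_hat - C @ AinvB
K1 = (np.eye(n) + AinvB @ np.linalg.solve(Ss, C)) @ AinvB
K2 = -np.linalg.solve(Ss, C @ AinvB)
T = np.block([[np.eye(n), K1], [np.zeros((m, n)), K2]])        # matrix (12)
H = np.block([[np.eye(n), K1 / 2], [K1.T / 2, (K2 + K2.T) / 2]])
S = np.block([[np.zeros((n, n)), K1 / 2], [-K1.T / 2, (K2 - K2.T) / 2]])
print(np.allclose(T, H + S), np.allclose(H, H.T), np.allclose(S, -S.T))
```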

Lemma 4. Assume that $B$ has full column rank, let $H$ be defined as in (20), and let $D = O$ and $\hat{S} = S$. Then the matrix $H$ is positive definite for $\alpha = 2$ if and only if

$$\lambda_{\max} < 3, \qquad (22)$$

where $\lambda_{\max}$ is the maximal eigenvalue of the matrix $B^TA^{-T}A^{-1}B$.




Proof. $H$ has the block-triangular factorization

$$
H = \begin{bmatrix} I_n & \dfrac{K_1}{2} \\[6pt] \dfrac{K_1^T}{2} & \dfrac{K_2 + K_2^T}{2} \end{bmatrix}
= \begin{bmatrix} I & O \\[2pt] \dfrac{K_1^T}{2} & I \end{bmatrix}
\begin{bmatrix} I & O \\ O & S_H \end{bmatrix}
\begin{bmatrix} I & \dfrac{K_1}{2} \\[2pt] O & I \end{bmatrix}, \qquad (23)
$$

where $S_H = \dfrac{K_2 + K_2^T}{2} - \dfrac{K_1^T K_1}{4}$. From (23), we infer that $\det(H) = \det(S_H)$ and that $H$ is congruent to $\operatorname{diag}(I, S_H)$; hence the matrix $H$ is positive definite if and only if $S_H$ is a positive definite matrix. We now show that $S_H$ is symmetric positive definite by proving that, for every vector $y \neq 0$, the following inequality holds:

$$y^T S_H y > 0.$$

Under the hypotheses $D = O$ and $\hat{S} = S$, the blocks reduce to $K_2 = \frac{1}{\alpha + 1} I$ and $K_1 = \frac{\alpha}{\alpha + 1} A^{-1}B$, so this inequality can be written as

$$
y^T \left( \frac{1}{\alpha + 1}\, I - \frac{\alpha^2}{4(\alpha + 1)^2}\, B^T A^{-T} A^{-1} B \right) y > 0. \qquad (24)
$$

Dividing (24) by $y^T y$ gives

$$
\frac{1}{\alpha + 1} > \frac{\alpha^2}{4(\alpha + 1)^2}\, \frac{y^T B^T A^{-T} A^{-1} B\, y}{y^T y}. \qquad (25)
$$

Then, after some manipulations, (25) holds for every $y \neq 0$ if and only if $\lambda_{\max} < g(\alpha)$, where $g(\alpha) = \dfrac{4(\alpha + 1)}{\alpha^2}$. Since $g(2) = 3$, the matrix $H$ is positive definite for $\alpha = 2$ if and only if

$$\lambda_{\max} < 3, \qquad (26)$$

where $\lambda_{\max}$ is the maximal eigenvalue of the matrix $B^TA^{-T}A^{-1}B$. Thus, the proof of Lemma 4 is completed.
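The threshold in (26) can be checked numerically; the sketch below (illustrative problem with $C = -B^T$ and $\hat{S} = S$) scales $B$ so that $\lambda_{\max}$ falls on either side of 3 and tests the positive definiteness of $H$.

```python
# Check of Lemma 4 at alpha = 2: H is positive definite iff
# lambda_max(B^T A^{-T} A^{-1} B) < 3. Illustrative problem.
import numpy as np

rng = np.random.default_rng(6)
n, m, alpha = 6, 2, 2.0
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))
A = A @ A.T
for scale in (0.2, 5.0):                       # below / above the threshold
    B = scale * rng.standard_normal((n, m))
    C = -B.T
    S_hat = -C @ np.linalg.solve(A, B)         # S_hat = S = B^T A^{-1} B
    AinvB = np.linalg.solve(A, B)
    Ss = alpha * S_hat - C @ AinvB
    K1 = (np.eye(n) + AinvB @ np.linalg.solve(Ss, C)) @ AinvB
    K2 = -np.linalg.solve(Ss, C @ AinvB)
    H = np.block([[np.eye(n), K1 / 2], [K1.T / 2, (K2 + K2.T) / 2]])
    lam_max = np.linalg.eigvalsh(AinvB.T @ AinvB).max()  # eig of B^T A^{-T} A^{-1} B
    H_pd = np.linalg.eigvalsh(H).min() > 0
    print(f"lambda_max = {lam_max:7.3f}  H positive definite: {H_pd}"
          f"  (predicted: {lam_max < 3})")
```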
Lemma 5. Let $S$ be defined as in (21). Then, for every nonzero vector $X$, the quadratic form $X^*SX$ is purely imaginary.
Proof. Let $X = [x^T, y^T]^T \neq 0$ and consider the quadratic form

$$
X^*SX = \begin{bmatrix} x^* & y^* \end{bmatrix}
\begin{bmatrix} O & \dfrac{K_1}{2} \\[6pt] -\dfrac{K_1^T}{2} & \dfrac{K_2 - K_2^T}{2} \end{bmatrix}
\begin{bmatrix} x \\ y \end{bmatrix}. \qquad (27)
$$

Since $K_1$ and $K_2$ are real, $y^*K_1^T x = (x^*K_1 y)^*$ and $y^*K_2^T y = (y^*K_2 y)^*$. Writing $x^*K_1 y = w_1 + iw_2$ and $y^*K_2 y = z_1 + iz_2$, (27) can be written as

$$
X^*SX = \frac{x^*K_1 y - (x^*K_1 y)^*}{2} + \frac{y^*K_2 y - (y^*K_2 y)^*}{2} = i(w_2 + z_2),
$$

which is purely imaginary. Thus, the proof of Lemma 5 is completed.
The following theorem readily follows from Lemmas 4 and 5.

Theorem 2. Let the preconditioned matrix be defined as in (12) and let $B$ have full column rank. Then the real part of every eigenvalue of $\mathcal{P}_{\alpha,\hat{S}}^{-1}\mathcal{A}$ is strictly positive.
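A final sketch (illustrative problem again) checks Theorem 2 by computing the spectrum of the preconditioned matrix directly; indeed, for an eigenpair $(\lambda, X)$ one has $\operatorname{Re}(\lambda) = X^*HX / X^*X > 0$ by Lemmas 4 and 5.

```python
# Check of Theorem 2: all eigenvalues of P^{-1}A have positive real part.
import numpy as np

rng = np.random.default_rng(7)
n, m, alpha = 6, 2, 2.0
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))
A = A @ A.T
B = rng.standard_normal((n, m))
C = -B.T
S_hat = -C @ np.linalg.solve(A, B)
A_cal = np.block([[A, B], [C, np.zeros((m, m))]])
P = np.block([[A, B], [C, alpha * S_hat]])

eig = np.linalg.eigvals(np.linalg.solve(P, A_cal))
print("min Re(lambda) =", eig.real.min())      # expected: strictly positive
```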



