Preconditioner


Problem formulation


As discussed in [8], the evolution of clusters containing interstitials (I), vacancies (V) and solutes (S), the so-called "IVS model", is described by a system of ordinary differential equations (ODEs).

The evolution equation for the concentration C_{k,p} reads:

\[
\frac{dC_{k,p}}{dt} = \sum_{j=-m_v}^{m_i} \sum_{q=0}^{m_s} \left\{ B_{k-j,p-q,j,q}\, C_{j,q}\, C_{k-j,p-q} - B_{k,p,j,q}\, C_{j,q}\, C_{k,p} \right\} + \sum_{j=-m_v}^{m_i} \sum_{q=0}^{m_s} \left\{ A_{k+j,p+q,j,q}\, C_{k+j,p+q} - A_{k,p,j,q}\, C_{k,p} \right\}. \tag{2}
\]

Each cluster is identified by the couple (k, p), where:

  • |k| is the number of interstitials (if k > 0) or the number of vacancies (if k < 0), and p (p ≥ 0) is the number of solutes in this cluster.
  • m_i denotes the maximum number of interstitials in mobile species.
  • m_v denotes the maximum number of vacancies in mobile species.
  • m_s denotes the maximum number of solutes in mobile species.
  • B_{k,p,j,q} is the absorption rate of a cluster.
  • A_{k,p,j,q} is the emission rate of a cluster.
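To make the structure of Eq. (2) concrete, the sketch below evaluates its right-hand side for a toy configuration. The bounds, the constant rate coefficients, and the convention that concentrations vanish outside the admissible cluster range are all assumptions made for this illustration, not the paper's data.

```python
# Illustrative evaluation of the right-hand side of Eq. (2) for a toy IVS model.
m_i, m_v, m_s = 2, 2, 1      # assumed bounds on mobile species
K, P = 3, 2                  # assumed cluster bounds: |k| <= K, 0 <= p <= P

def B(k, p, j, q):
    return 0.1               # assumed constant absorption rate

def A(k, p, j, q):
    return 0.05              # assumed constant emission rate

def C(conc, k, p):
    """Concentration lookup; clusters outside the admissible range contribute 0."""
    return conc.get((k, p), 0.0)

def rhs(conc, k, p):
    """dC_{k,p}/dt as written in Eq. (2)."""
    total = 0.0
    for j in range(-m_v, m_i + 1):
        for q in range(m_s + 1):
            # absorption: gain by (k-j, p-q) + (j, q) -> (k, p),
            # loss by (k, p) absorbing a mobile (j, q)
            total += (B(k - j, p - q, j, q) * C(conc, j, q) * C(conc, k - j, p - q)
                      - B(k, p, j, q) * C(conc, j, q) * C(conc, k, p))
            # emission: gain by (k+j, p+q) -> (k, p) + (j, q),
            # loss by (k, p) emitting a mobile (j, q)
            total += (A(k + j, p + q, j, q) * C(conc, k + j, p + q)
                      - A(k, p, j, q) * C(conc, k, p))
    return total

conc = {(k, p): 1.0 for k in range(-K, K + 1) for p in range(P + 1)}
print(rhs(conc, 1, 0))
```

Each (j, q) pair contributes one gain and one loss term for both absorption and emission, which is exactly the bracketed structure of Eq. (2).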

By using a backward differentiation formula (BDF) [7] to integrate (2), we need to solve a nonlinear equation of the form:

\[
C^{(i+1)} - h\gamma\, F\big(C^{(i+1)}\big) = F\big(C^{(i)}\big). \tag{4}
\]

where:

  • i is the time-step index.
  • C^{(i)} and C^{(i+1)} represent the vectors of cluster concentrations at discretization times t_i and t_{i+1}, respectively.
  • h denotes the current time step, h = t_{i+1} − t_i.
  • γ denotes a coefficient depending on the discretization method.
  • F(C) denotes the right-hand side of (2).

Finding the root of the nonlinear equation (4) by means of an exact Newton method requires repeatedly solving linear systems whose matrix A is defined as follows:

\[
A = I - \gamma h J, \tag{5}
\]

where J is the Jacobian matrix of F.
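As a concrete illustration of the Newton solve built on Eq. (5), the sketch below performs one implicit time step for a small stand-in nonlinearity F (not the IVS right-hand side), assuming the backward Euler case γ = 1 with C^{(i)} as the right-hand side of the nonlinear system:

```python
import numpy as np

def F(C):
    return -C**2                 # assumed stand-in for the ODE right-hand side

def J(C):
    return np.diag(-2.0 * C)     # Jacobian of F at C

def newton_bdf_step(C_prev, h=0.01, gamma=1.0, tol=1e-12, max_it=20):
    """Solve C - h*gamma*F(C) = C_prev by the exact Newton method."""
    C = C_prev.copy()
    for _ in range(max_it):
        G = C - h * gamma * F(C) - C_prev      # nonlinear residual
        if np.linalg.norm(G) < tol:
            break
        A = np.eye(len(C)) - gamma * h * J(C)  # the matrix of Eq. (5)
        C -= np.linalg.solve(A, G)             # exact Newton update
    return C

C_next = newton_bdf_step(np.array([1.0, 2.0]))
```

Each Newton iteration forms A = I − γhJ at the current iterate and solves one linear system with it; for the large sparse systems of the IVS model this linear solve is the step that the rest of the section addresses.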
  1. Description and analysis of the preconditioner


Iterative methods are ideally suited to solving high-dimensional sparse linear systems of equations of the form (1). Their advantage over direct methods is that they require neither factorizing matrix A nor even evaluating it explicitly. This is the case for Krylov subspace projection methods, among them the GMRES method implemented in the following. The only requirement is the ability to compute the application of the matrix to any vector v. The drawback of iterative methods is that they are inexact and may require a large number of iterations before the iterate satisfies the tolerance condition. The remedy to this drawback is called preconditioning. Here, the preconditioner P applies a linear transformation to system (1) so as to reduce the condition number of the transformed matrix P^{-1}A. The preconditioned linear system to solve reads:
\[
P^{-1} A X = P^{-1} b. \tag{6}
\]
If the condition number of the transformed matrix P^{-1}A is smaller than that of matrix A, then the number of iterations is generally reduced.
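The mechanism of Eq. (6) can be sketched with SciPy's GMRES, used here as a stand-in for the paper's solver stack (an assumption): the preconditioner is applied through a LinearOperator wrapping a factorization. For illustration only, P = A, so that P^{-1}A = I and GMRES converges immediately; a practical P is cheaper but less effective.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import LinearOperator, gmres, splu

n = 200
A = diags([-1.0, 2.5, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

# Preconditioner applied as v -> P^{-1} v; here P = A exactly, purely to
# illustrate Eq. (6) (an ideal, not a practical, choice of P).
M = LinearOperator((n, n), matvec=splu(A).solve)

x, info = gmres(A, b, M=M)   # info == 0 signals convergence
```

Note that GMRES only ever asks for matrix-vector products with A and applications of P^{-1}, which is exactly the "no factorization of A required" property discussed above.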
Proposition 1. Assume that A is invertible. Then the block matrix M is invertible if and only if S = −(D + CA^{-1}B) is invertible. For the proof, see [7, Proposition 2.1].

In this section we consider the following block preconditioner:

\[
P_{\alpha,\hat{S}} = \begin{pmatrix} A & B \\ C & \alpha \hat{S} \end{pmatrix}, \tag{7}
\]

where α is a given nonzero real parameter and Ŝ is an approximation of the Schur complement of A. For the computation of Ŝ we use the parallel multifrontal direct solver MUMPS [2, 3].



    1. Implementation of the Schur approach

The Schur approach is an alternative direct method aiming at solving linear system (1). The goal is to take advantage of the structure of the Newton matrix that P_{α,Ŝ} stands for in this subsection. The preliminary step for implementing the Schur approach consists in computing the Schur complement matrix associated with matrix P_{α,Ŝ}, as illustrated in Fig. 1. The successive steps for computing the Schur complement Ŝ are listed below:

Figure 1: Schematic diagram of Schur complement computation





    1. The block decomposition of M.
    2. The lower-diagonal-upper (LDU) decomposition of M.
    3. The definition of the Schur complement.
    4. Solve AX = B for X ∈ R^{n×m}, that is, X = A^{-1}B; this system with multiple right-hand sides is solved using the Multifrontal Massively Parallel Sparse direct Solver (MUMPS).
    5. Compute the sum D + CX by using the LAPACK library [4].
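Steps 4 and 5 above can be sketched as follows, with SciPy's sparse LU factorization standing in for MUMPS and NumPy's LAPACK-backed dense arithmetic for the final sum (both substitutions are assumptions); the matrices are random toy data, not the paper's Newton matrix.

```python
import numpy as np
from scipy.sparse import identity, random as sparse_random
from scipy.sparse.linalg import splu

rng = np.random.default_rng(0)
n, m = 50, 4
# Sparse, diagonally shifted A (assumed invertible); dense coupling blocks.
A = (sparse_random(n, n, density=0.05, random_state=0) + 10.0 * identity(n)).tocsc()
B = rng.standard_normal((n, m))
C = rng.standard_normal((m, n))
D = rng.standard_normal((m, m))

X = splu(A).solve(B)   # step 4: solve A X = B (all m right-hand sides at once)
S_hat = D + C @ X      # step 5: dense m-by-m sum
```

The key point is that only one sparse factorization of A is needed, after which all m columns of B are solved in a single multiple-right-hand-side pass, and the remaining work is small dense algebra.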


Note that P_{α,Ŝ} is nonsingular under the assumptions of Proposition 1.
In order to apply a block preconditioner of the form (7) within a Krylov subspace method, it is necessary to solve (exactly or inexactly, see below) the following linear system at each step:

\[
\begin{pmatrix} A & B \\ C & \alpha \hat{S} \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} f \\ g \end{pmatrix}. \tag{8}
\]

The steps for solving the linear system (8) using the Schur complement are shown in Fig. 2 and detailed below:

Figure 2: Schematic diagram of Schur approach





  1. The solution of linear system (8) involves a sparse coefficient matrix having a triangular factorization; thus, the main costs at each iteration for solving (8) are the solutions of two sub-linear systems, with coefficient matrices Ŝ and A, respectively.

  2. The system with the coefficient matrix Ŝ is solved by a dense solver from the LAPACK library, because Ŝ is an invertible, dense matrix.

  3. The system with the coefficient matrix A can be efficiently solved either inexactly by preconditioned GMRES (PGMRES) or exactly by MUMPS.

  4. The last step produces the approximate solution of (8) at each step of Newton's iteration.
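One way to realize the solve of Eq. (8) is plain block elimination; the sketch below uses small dense toy data with α = 1 assumed, and NumPy solves standing in for the LAPACK solver (Ŝ block) and for PGMRES/MUMPS (A block), which are assumptions of this illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, alpha = 8, 3, 1.0
A = rng.standard_normal((n, n)) + 10.0 * np.eye(n)      # assumed invertible
B = rng.standard_normal((n, m))
C = rng.standard_normal((m, n))
S_hat = rng.standard_normal((m, m)) + 10.0 * np.eye(m)  # assumed invertible, dense
f, g = rng.standard_normal(n), rng.standard_normal(m)

# Block elimination on [[A, B], [C, alpha*S_hat]] [x; y] = [f; g]:
u = np.linalg.solve(A, f)                             # solve with A
S_full = alpha * S_hat - C @ np.linalg.solve(A, B)    # Schur complement of A
y = np.linalg.solve(S_full, g - C @ u)                # solve with the Schur matrix
x = u - np.linalg.solve(A, B @ y)                     # back-substitution with A

solution = np.concatenate([x, y])                     # approximate solution of (8)
```

The cost structure matches the list above: a small dense m-by-m solve for y, and solves with A (done inexactly by PGMRES or exactly by MUMPS in the paper's setting) for u and for the back-substitution.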



