Identification
[Figure 7.6 here: two plots versus samples (1 to 250) — plant output (range about -10 to 10) and plant input (range -1 to 1)]
Figure 7.6. File T: I/O data set
3 All the examples presented in this chapter have been worked out with WinPim (Adaptech) identification software and the MATLAB/Scilab routines for model order estimation (see Section 6.5.4). Small numerical differences will result when using other identification routines.
4 These files are available from the book web site: http://landau-bookic.lag.ensieg.inpg.fr.

The file T1 has been generated with the same polynomials A(q^-1) and B(q^-1), but adding a stochastic disturbance; the ARMAX model is of the form

A(q^-1) y(t) = q^-1 B(q^-1) u(t) + C(q^-1) e(t)

in which {e(t)} is an almost white noise sequence generated by the computer, and the degree of C(q^-1) is nC = 2.
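A data set of this kind can be generated with a short simulator. The sketch below is illustrative (not the routine used to produce file T): the default coefficients a1 = -1.5, a2 = 0.7, b1 = 1, b2 = 0.5 are the values identified later in this section, and the exact time-shift convention (b1 acting on u(t-d)) is an assumption.

```python
import numpy as np

# Illustrative simulator for A(q^-1) y(t) = q^-d B(q^-1) u(t) + C(q^-1) e(t).
# Default coefficients are the values identified in this section; the
# time-shift convention (b1 acting on u(t-d)) is an assumption.
def simulate_armax(u, e, a=(-1.5, 0.7), b=(1.0, 0.5), c=(0.0, 0.0), d=1):
    y = np.zeros(len(u))
    for t in range(len(u)):
        y[t] = e[t]                      # white-noise source
        for i, ci in enumerate(c, 1):    # C(q^-1) colouring of the noise
            if t - i >= 0:
                y[t] += ci * e[t - i]
        for i, bi in enumerate(b):       # delayed input through B(q^-1)
            if t - d - i >= 0:
                y[t] += bi * u[t - d - i]
        for i, ai in enumerate(a, 1):    # autoregressive part A(q^-1)
            if t - i >= 0:
                y[t] -= ai * y[t - i]
    return y
```

With c = (0, 0) and zero noise this reduces to the noise-free model of file T; a nonzero c produces an ARMAX disturbance as in file T1.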
FILE T

We now come back to the file T. Figure 7.6 shows the input and output sequences collected in this file. For the file T, by using the recursive least squares (RLS) method with decreasing adaptation gain, the following results are obtained:

S = 1 M = 1 (RLS) A = 1 FILE: T0 DELAY D = 1 INSTANT K = 50
FORGETTING FACTOR = 1
TRACE OF ADAPTATION GAIN = 7.106747E-02
PROCESS OUTPUT = -5.030791
MODEL OUTPUT = -5.030789
PROCESS INPUT = -1
ADAPTATION ERROR = -1.430511E-06
A(1) = -1.49999  B(1) = 0.99998
A(2) = 0.69999   B(2) = 0.50000

where S, M and A designate the structure, the method and the type of adaptation gain (see Chapter 5, Sections 5.5 and 5.2.4), respectively. A(1), A(2), B(1), B(2) designate the estimated coefficients of the polynomials A(q^-1) and B(q^-1). It can be observed that, for this noise-free system, the estimated parameters converge very quickly towards the true values (initial values A(1) = A(2) = B(1) = B(2) = 0).
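The recursive least squares algorithm with decreasing adaptation gain can be sketched as follows. This is a minimal illustrative implementation, not the WinPim code; the regressor convention and the initial gain value are assumptions.

```python
import numpy as np

# Minimal recursive least squares with decreasing adaptation gain
# (forgetting factor = 1).  Illustrative sketch, not the WinPim routine;
# regressor convention and initial gain value are assumptions.
def rls_identify(u, y, na=2, nb=2, d=1, f0=1000.0):
    n_par = na + nb
    theta = np.zeros(n_par)        # initial estimates all zero, as in the text
    F = f0 * np.eye(n_par)         # large initial adaptation gain
    for t in range(len(y)):
        phi = np.zeros(n_par)
        for i in range(na):        # phi = [-y(t-1), ..., -y(t-na),
            if t - 1 - i >= 0:
                phi[i] = -y[t - 1 - i]
        for i in range(nb):        #        u(t-d), ..., u(t-d-nb+1)]
            if t - d - i >= 0:
                phi[na + i] = u[t - d - i]
        err = y[t] - theta @ phi   # a priori prediction error
        F = F - np.outer(F @ phi, phi @ F) / (1.0 + phi @ F @ phi)
        theta = theta + (F @ phi) * err
    return theta
```

On noise-free data generated from the model above, the estimates converge essentially exactly to the true parameter values, consistent with the listing.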
Similar results are obtained with the output error method, with fixed compensator, decreasing adaptation gain and compensator degree nD=0 (adaptation error = prediction error), as indicated below.



S = 2 M = 2 (OEFC) A = 1 FILE: T0 DELAY D = 1 INSTANT K = 50
FORGETTING FACTOR = 1
TRACE OF ADAPTATION GAIN = 7.107091E-02
PROCESS OUTPUT = -5.030791
MODEL OUTPUT = -5.031748
PROCESS INPUT = -1
ADAPTATION ERROR = 9.570122E-04
A(1) = -1.50000  B(1) = 1.00002
A(2) = 0.700000  B(2) = 0.49993

5 Before starting the identification, data must be centred.
6 S=1 designates the structure S1, M=1 designates the first method used for the structure S1 (see Section 5.5) and A=1 corresponds to the choice of the adaptation gain A.1 discussed in Section 5.2.4.
The estimation of the time delay for the file T in the absence of a priori information is illustrated next. With the recursive least squares method and a decreasing adaptation gain, for nA = 2, nB = 4 and d = 0, the following parameters are obtained:



S = 1 M = 1 (RLS) A = 1 FILE: T0 DELAY D = 0 INSTANT K = 50
FORGETTING FACTOR = 1
TRACE OF ADAPTATION GAIN = 0.1623274
PROCESS OUTPUT = -5.030791
MODEL OUTPUT = -5.030764
PROCESS INPUT = -1
ADAPTATION ERROR = -2.717972E-05
A(1) = -1.49998  B(1) = 5.39346E-06  B(3) = 0.50001
A(2) = 0.699984  B(2) = 0.99998     B(4) = 2.62828E-05

The results obtained (B(1) and B(4) very small compared to B(2) and B(3)) clearly show that d = 1 and nB = 2. However, it must be stressed that this conclusion is unambiguous in this example only because there is no disturbing noise.
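The delay-detection rule used here (leading B coefficients negligible compared with the largest one) can be sketched as a small helper. The 0.15 ratio threshold is the practical value quoted later in this section; the function name is ours.

```python
# Rule of thumb from the text: estimated leading B coefficients that are
# very small compared with the largest one indicate extra pure time delay.
# The 0.15 ratio is the practical threshold quoted in this section; the
# helper name is ours.
def estimate_extra_delay(b, threshold=0.15):
    b_max = max(abs(x) for x in b)
    extra = 0
    for bi in b:
        if abs(bi) < threshold * b_max:
            extra += 1       # negligible leading coefficient -> one more delay
        else:
            break
    return extra
```

Applied to the estimates above (B(1) ≈ 5.4E-06 negligible, B(2) large), it suggests one extra delay, i.e. d = 1.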


FILE T1
The file T1 is highly contaminated by noise and certain methods (in particular the recursive least squares) will give biased parameter estimates. The quality of the identification will be reflected in the validation results. The estimation has been carried out on 256 samples and the validation has been carried out on the same input/output data set.


The results obtained with the recursive least squares with decreasing adaptation gain are given below:

S = 1 M = 1 (RLS) A = 1 FILE: T1 NS = 256 DELAY D = 1
COEFFICIENTS OF POLYNOMIAL A: A(1) = -1.403533  A(2) = 0.6066453
COEFFICIENTS OF POLYNOMIAL B: B(1) = 0.9831312  B(2) = 0.6512049
VALIDATION TEST: Whiteness of the residual error
System variance: 18.3791  Model variance: 17.9035  Error variance R(0): 0.4749
NORMALIZED AUTOCORRELATION FUNCTIONS
Validation criterion: Theor. val.: |RN(i)| <= 0.136, Pract. val.: |RN(i)| <= 0.15
RN(0) = 1.000000  RN(1) = -0.505234
RN(2) = 0.115732  RN(3) = -0.054398
RN(4) = 0.016311
The appearance of a bias on the estimated parameters is observed, which is also reflected in unsatisfactory validation results (|RN(1)| > 0.15): the residual prediction error is not close to white noise. One should therefore consider another "plant + disturbance" structure such as, for example, the S3 structure, which replaces the disturbance model e(t) in S1 by C(q^-1)e(t).
7 In the tables shown in this section, the system variance corresponds to the variance of the measured output, the model variance to the variance of the predicted output, and the error variance to the variance of the residual prediction error (R(0)).


Among the identification methods applicable to structure S3, we choose the output error method with extended estimation model (M3) and decreasing adaptation gain (A1). The results obtained are given in the following table.

S = 3 M = 3 (OEEPM) A = 1 FILE: T1 NS = 256 DELAY D = 1
COEFFICIENTS OF POLYNOMIAL A: A(1) = -1.50009  A(2) = 0.69614
COEFFICIENTS OF POLYNOMIAL B: B(1) = 0.95782  B(2) = 0.54005
COEFFICIENTS OF POLYNOMIAL C: C(1) = -0.83917  C(2) = 0.05308
VALIDATION TEST: Whiteness of the residual error
System variance: 18.3791  Model variance: 18.1894  Error variance R(0): 0.2571
NORMALIZED AUTOCORRELATION FUNCTIONS
Validation criterion: Theor. val.: |RN(i)| <= 0.136, Pract. val.: |RN(i)| <= 0.15
RN(0) = 1.000000  RN(1) = -0.141702
RN(2) = 0.021206  RN(3) = 0.008497
RN(4) = 0.051014
It can be observed that the estimated values of A(1), A(2) and B(2) are better than those given by the recursive least squares (the sum of the squared biases is lower). Moreover, the validation results are acceptable, since all the normalized autocorrelations RN(1) to RN(4) have a modulus lower than 0.15. The residual prediction error is closer to white noise, and its variance is reduced compared to the estimation with recursive least squares.
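The whiteness validation test used in these tables can be sketched as follows. The theoretical bound 2.17/sqrt(N) reproduces the quoted value 0.136 for N = 256, and the practical bound 0.15 is the one used in the tables; the function itself is an illustrative sketch, not the WinPim routine.

```python
import numpy as np

# Whiteness validation of a residual sequence: the normalized
# autocorrelations RN(i) = R(i)/R(0) must stay below the practical bound
# 0.15 (the theoretical bound 2.17/sqrt(N) gives 0.136 for N = 256).
def whiteness_test(eps, imax=4, pract_bound=0.15):
    N = len(eps)
    e = np.asarray(eps, dtype=float) - np.mean(eps)   # data must be centred
    R0 = np.dot(e, e) / N
    rn = [float(np.dot(e[i:], e[:N - i]) / N / R0) for i in range(1, imax + 1)]
    theor = 2.17 / np.sqrt(N)
    passed = all(abs(r) <= pract_bound for r in rn)
    return rn, theor, passed
```

A white residual sequence passes the test, while a correlated residual (as left by a biased estimator) fails it at lag 1.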
The results obtained can be further improved if the output error with extended estimation model is used together with an adaptation gain with variable forgetting factor (A3, with λ1(0) = 0.97). The following results are obtained:


S = 3 M = 3 (OEEPM) A = 3 FILE: T1 NS = 256 DELAY D = 1
COEFFICIENTS OF POLYNOMIAL A: A(1) = -1.508445  A(2) = 0.70574
COEFFICIENTS OF POLYNOMIAL B: B(1) = 0.95120  B(2) = 0.52940
COEFFICIENTS OF POLYNOMIAL C: C(1) = -0.90610  C(2) = 0.08344
VALIDATION TEST: Whiteness of the residual error
System variance: 18.3791  Model variance: 18.2276  Error variance R(0): 0.2531
NORMALIZED AUTOCORRELATION FUNCTIONS
Validation criterion: Theor. val.: |RN(i)| <= 0.136, Pract. val.: |RN(i)| <= 0.15
RN(0) = 1.0000  RN(1) = -0.091665
RN(2) = 0.032701  RN(3) = 0.025042
RN(4) = 0.064717

which corresponds to an improvement of the results in terms of whiteness and variance of the residual prediction error, on the one hand, and in terms of the sum of the squared biases, on the other hand.
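A recursion commonly used for a variable forgetting factor such as the one in adaptation gain A3 is λ1(t) = λ0·λ1(t-1) + (1 - λ0), which tends towards 1: fast adaptation at the start, decreasing gain asymptotically. Only λ1(0) = 0.97 is fixed by the text; λ0 = 0.95 below is an assumed typical value.

```python
# Variable forgetting factor recursion (sketch for adaptation gain A3):
# lambda1(t) = lam0 * lambda1(t-1) + (1 - lam0), tending towards 1.
# Only lambda1(0) = 0.97 is fixed by the text; lam0 = 0.95 is an assumed
# typical value.
def forgetting_schedule(lam0=0.95, lam_init=0.97, n=20):
    lam = lam_init
    seq = [lam]
    for _ in range(n - 1):
        lam = lam0 * lam + (1.0 - lam0)
        seq.append(lam)
    return seq
```

The sequence increases monotonically from 0.97 towards 1, so early samples are discounted while the gain eventually behaves like the decreasing-gain case.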


Better results than those obtained with the recursive least squares method can also be obtained with structure S2 by using the output error method with fixed compensator, decreasing adaptation gain and compensator degree nD=0 (adaptation error = prediction error). The results obtained are given below:


S = 2 M = 2 (OEFC) A = 1 FILE: T1 NS = 256 DELAY D = 1
COEFFICIENTS OF POLYNOMIAL A: A(1) = -1.52885  A(2) = 0.73410
COEFFICIENTS OF POLYNOMIAL B: B(1) = 0.93228  B(2) = 0.51900
VALIDATION TEST: Error / prediction uncorrelation
System variance: 18.3791  Model variance: 18.5921  Error variance R(0): 0.4860
NORMALIZED CROSS-CORRELATIONS
Validation criterion: Theor. val.: |RN(i)| <= 0.136, Pract. val.: |RN(i)| <= 0.15
RN(0) = -0.116266  RN(1) = -0.028968  RN(2) = 0.103195

One can see that the values obtained for the normalized cross-correlations satisfy the validation condition.


However, it is interesting to compare these results with those provided by the model identified with recursive least squares for the same validation test.


S = 1 M = 1 (RLS) A = 1 FILE: T1 NS = 256 DELAY D = 1
COEFFICIENTS OF POLYNOMIAL A: A(1) = -1.403533  A(2) = 0.6066453
COEFFICIENTS OF POLYNOMIAL B: B(1) = 0.9831312  B(2) = 0.6512049
VALIDATION TEST: Error / prediction uncorrelation
System variance: 18.3791  Model variance: 18.5921  Error variance R(0): 0.76355
NORMALIZED CROSS-CORRELATIONS
Validation criterion: Theor. val.: |RN(i)| <= 0.136, Pract. val.: |RN(i)| <= 0.15
RN(0) = 0.248184  RN(1) = 0.313691  RN(2) = 0.284383

It is observed that the parameters estimated by the output error method with fixed compensator are better than those obtained by the recursive least squares (the latter do not satisfy the validation criterion).


This is also confirmed by the validation results (the model identified with recursive least squares does not pass the uncorrelation test).
Finally it can be shown that, even in the presence of significant noise, the time delay can be determined from the relative values of the coefficients of the polynomial B(q-1). The following results are obtained for d = 0, nB = 3, nA = 2, by using the recursive least squares method:

One observes that |B(1)| < 0.15 |B(2)|, which leads to the choice d = 1 and nB = 2.
Exercise: Compare the model obtained using the output error with fixed compensator (S = 2, M = 2) with the model obtained using output error with extended estimation model (S = 3, M = 3). See Chapter 6, Section 6.4 for the comparison procedure.
If techniques for model complexity estimation are used (the error criterion of Equation 6.5.17 together with the complexity estimation criterion of Equation 6.5.19), a minimum of the criterion is obtained for n = max(nA, nB + d) = 3, which is indeed the correct value (see Figure 7.7). The function estorderiv.m has been used. A detailed complexity estimation leads to nA = 2, nB = 2 and d = 1.
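The underlying idea (minimize a prediction-error criterion penalized by model complexity) can be sketched generically. The BIC-style penalty below is a stand-in assumption, not the exact criterion of Equations 6.5.17 and 6.5.19 implemented by estorderiv.m.

```python
import numpy as np

# Generic sketch of model order selection: fit least-squares ARX models of
# increasing order n and minimize a penalized criterion.  The BIC-style
# penalty is an assumed stand-in, not the criterion of Eq. 6.5.19.
def estimate_order(u, y, n_max=5):
    N = len(y)
    best_n, best_J = 1, np.inf
    for n in range(1, n_max + 1):
        rows, targets = [], []
        for t in range(n, N):
            # regressor [-y(t-1)..-y(t-n), u(t-1)..u(t-n)]
            rows.append(np.r_[-y[t - n:t][::-1], u[t - n:t][::-1]])
            targets.append(y[t])
        Phi, Y = np.array(rows), np.array(targets)
        theta, *_ = np.linalg.lstsq(Phi, Y, rcond=None)
        V = max(np.mean((Y - Phi @ theta) ** 2), 1e-12)  # guard: noise-free data
        J = np.log(V) + 4 * n * np.log(N) / N            # assumed penalty form
        if J < best_J:
            best_n, best_J = n, J
    return best_n
```

On data generated by a second-order model with one extra sample of delay, the criterion is minimized at n = 3, matching the result quoted above.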


