PowerPoint Slides for Undergraduate Econometrics
by Lawrence C. Marsh
Copyright 1996 Lawrence C. Marsh

Partitioned Heteroskedasticity (discrete categories/groups)

10.11


Proportional Heteroskedasticity

y_t = β_1 + β_2 x_t + e_t

where   E(e_t) = 0
        var(e_t) = σ_t^2
        cov(e_t, e_s) = 0   for t ≠ s

σ_t^2 = σ^2 x_t

The variance is assumed to be
proportional to the value of x_t.

10.12



y_t = β_1 + β_2 x_t + e_t        var(e_t) = σ_t^2

variance:             σ_t^2 = σ^2 x_t
standard deviation:   σ_t = σ √x_t

std. dev. proportional to √x_t

To correct for heteroskedasticity, divide the model by √x_t:

y_t/√x_t = β_1 (1/√x_t) + β_2 (x_t/√x_t) + e_t/√x_t

10.13


y_t/√x_t = β_1 (1/√x_t) + β_2 (x_t/√x_t) + e_t/√x_t

y_t* = β_1 x_{t1}* + β_2 x_{t2}* + e_t*

var(e_t*) = var(e_t/√x_t) = (1/x_t) var(e_t) = (1/x_t) σ^2 x_t

var(e_t*) = σ^2

e_t is heteroskedastic, but e_t* is homoskedastic.

10.14

Generalized Least Squares

These steps describe weighted least squares:

1. Decide which variable is proportional to the
   heteroskedasticity (x_t in the previous example).

2. Divide all terms in the original model by the
   square root of that variable (divide by √x_t).

3. Run least squares on the transformed model,
   which has new y_t*, x_{t1}* and x_{t2}* variables
   but no intercept (see the sketch below).

10.15
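
The steps above can be made concrete with a short sketch. This is a minimal NumPy illustration (not from the original slides); the data, the true coefficients, and the assumed error variance var(e_t) = 2 x_t are all invented for the example.

```python
import numpy as np

# Hypothetical data with variance proportional to x_t: var(e_t) = 2 * x_t.
rng = np.random.default_rng(0)
x = rng.uniform(10, 100, size=100)
y = 20 + 0.5 * x + rng.normal(scale=np.sqrt(2.0 * x))

# Step 2: divide every term, including the intercept's "1", by sqrt(x_t).
w = np.sqrt(x)
y_star = y / w
X_star = np.column_stack([1 / w, x / w])   # x_t1* and x_t2*; no intercept column

# Step 3: ordinary least squares on the transformed model = weighted least squares.
beta_hat, *_ = np.linalg.lstsq(X_star, y_star, rcond=None)
print(beta_hat)   # estimates of beta_1 and beta_2
```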


Partitioned Heteroskedasticity

y_t = β_1 + β_2 x_t + e_t        t = 1, . . . , 100

y_t = bushels per acre of corn
x_t = gallons of water per acre (rain or other)

error variance of "field" corn:   var(e_t) = σ_1^2    t = 1, . . . , 80
error variance of "sweet" corn:   var(e_t) = σ_2^2    t = 81, . . . , 100

10.16


Reweighting Each Group's Observations

"field" corn:   y_t = β_1 + β_2 x_t + e_t,   var(e_t) = σ_1^2,   t = 1, . . . , 80

y_t/σ_1 = β_1 (1/σ_1) + β_2 (x_t/σ_1) + e_t/σ_1

"sweet" corn:   y_t = β_1 + β_2 x_t + e_t,   var(e_t) = σ_2^2,   t = 81, . . . , 100

y_t/σ_2 = β_1 (1/σ_2) + β_2 (x_t/σ_2) + e_t/σ_2

10.17


Apply Generalized Least Squares

Run least squares separately on the data for each group.

σ̂_1^2 provides an estimator of σ_1^2 using
the 80 observations on "field" corn.

σ̂_2^2 provides an estimator of σ_2^2 using
the 20 observations on "sweet" corn.

10.18
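
As a minimal sketch of this two-group procedure (not from the original slides), the following NumPy code simulates the corn example, estimates σ̂_1 and σ̂_2 from separate group regressions, and reweights each observation before a pooled regression; the data and parameter values are invented for illustration.

```python
import numpy as np

# Hypothetical corn data: obs 1-80 are "field" corn, obs 81-100 "sweet" corn.
rng = np.random.default_rng(1)
x = rng.uniform(20, 80, size=100)
sig = np.where(np.arange(100) < 80, 3.0, 8.0)   # true sigma_1 = 3, sigma_2 = 8
y = 10 + 0.4 * x + rng.normal(scale=sig)

X = np.column_stack([np.ones(100), x])
field = np.arange(100) < 80                     # True for "field" corn

# Run least squares separately on each group to estimate sigma_1 and sigma_2.
sigma_hat = np.empty(100)
for g in (field, ~field):
    b, *_ = np.linalg.lstsq(X[g], y[g], rcond=None)
    resid = y[g] - X[g] @ b
    T, K = X[g].shape
    sigma_hat[g] = np.sqrt(resid @ resid / (T - K))

# Divide each observation by its group's sigma-hat, then run pooled OLS.
beta_gls, *_ = np.linalg.lstsq(X / sigma_hat[:, None], y / sigma_hat, rcond=None)
print(beta_gls)
```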



Detecting Heteroskedasticity

Determine the existence and nature of heteroskedasticity:

1. Residual Plots provide information on the
   exact nature of heteroskedasticity (partitioned
   or proportional) to aid in correcting for it.

2. Goldfeld-Quandt Test checks for the presence
   of heteroskedasticity.

10.19



Residual Plots

[Figure: scatter of residuals e_t against x_t, with the
spread of points widening as x_t increases.]

Plot residuals against one variable at a time
after sorting the data by that variable to try
to find a heteroskedastic pattern in the data.

10.20
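
A minimal sketch of such a residual plot (not from the original slides; the heteroskedastic data are simulated for illustration):

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical data whose error spread grows with x_t.
rng = np.random.default_rng(6)
x = np.sort(rng.uniform(1, 100, 100))       # sort the data by x_t
y = 20 + 0.5 * x + rng.normal(scale=np.sqrt(2.0 * x))

# Fit OLS, then plot residuals against the sorted variable.
X = np.column_stack([np.ones_like(x), x])
b, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ b

plt.scatter(x, resid, s=10)
plt.axhline(0.0, color="black")
plt.xlabel("x_t")
plt.ylabel("residual")
plt.show()   # a widening band of points suggests var(e_t) increases with x_t
```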

Goldfeld-Quandt Test

The Goldfeld-Quandt test can be used to detect
heteroskedasticity in either the proportional case
or for comparing two groups in the discrete case.

For proportional heteroskedasticity, it is first necessary
to determine which variable, such as x_t, is proportional
to the error variance. Then sort the data from the
largest to smallest values of that variable.

10.21


Goldfeld-Quandt Test Statistic

H_0:  σ_1^2 = σ_2^2
H_1:  σ_1^2 > σ_2^2

GQ = σ̂_1^2 / σ̂_2^2  ~  F[T_1 - K_1, T_2 - K_2]

In the proportional case, drop the middle
r observations where r ≈ T/6, then run
separate least squares regressions on the first
T_1 observations and the last T_2 observations.

Small values of GQ support H_0, while large values
support H_1. Use the F table.

10.22
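
A minimal sketch of the test statistic (not from the original slides), using scipy for the F distribution; the two-group data are simulated, with group 1 deliberately given the larger variance so that H_1: σ_1^2 > σ_2^2 is the natural alternative.

```python
import numpy as np
from scipy import stats

# Hypothetical two-group data; group 1 is suspected of the larger variance.
rng = np.random.default_rng(2)
x1, x2 = rng.uniform(20, 80, 80), rng.uniform(20, 80, 20)
y1 = 10 + 0.4 * x1 + rng.normal(scale=8.0, size=80)   # sigma_1 = 8
y2 = 10 + 0.4 * x2 + rng.normal(scale=3.0, size=20)   # sigma_2 = 3

def sigma2_hat(xg, yg):
    """OLS of y on (1, x) within one group; return SSR/(T-K) and T-K."""
    Xg = np.column_stack([np.ones_like(xg), xg])
    b, *_ = np.linalg.lstsq(Xg, yg, rcond=None)
    resid = yg - Xg @ b
    return resid @ resid / (len(yg) - 2), len(yg) - 2

s1, df1 = sigma2_hat(x1, y1)
s2, df2 = sigma2_hat(x2, y2)

GQ = s1 / s2                         # ~ F[T1-K1, T2-K2] under H0
p_value = stats.f.sf(GQ, df1, df2)   # reject H0 for large GQ
print(GQ, p_value)
```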


More General Model

Structure of heteroskedasticity could be more complicated:

σ_t^2 = σ^2 exp{α_1 z_{t1} + α_2 z_{t2}}

z_{t1} and z_{t2} are any observable variables upon
which we believe the variance could depend.

Note: The function exp{·} ensures that σ_t^2 is positive.

10.23


More General Model

σ_t^2 = σ^2 exp{α_1 z_{t1} + α_2 z_{t2}}

ln(σ_t^2) = ln(σ^2) + α_1 z_{t1} + α_2 z_{t2}

ln(σ_t^2) = α_0 + α_1 z_{t1} + α_2 z_{t2}     where α_0 = ln(σ^2)

H_0:  α_1 = 0,  α_2 = 0
H_1:  α_1 ≠ 0  and/or  α_2 ≠ 0

Using the least squares residuals ê_t, estimate:

ln(ê_t^2) = α_0 + α_1 z_{t1} + α_2 z_{t2} + ν_t

and apply the usual F test.

10.24
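
A minimal sketch of this auxiliary regression and F test (not from the original slides); the data, the z variables, and the α values are simulated for illustration.

```python
import numpy as np
from scipy import stats

# Hypothetical data whose variance depends on two observables z_t1 and z_t2.
rng = np.random.default_rng(3)
T = 200
x = rng.uniform(0, 10, T)
z1, z2 = rng.uniform(0, 2, (2, T))
sigma_t = np.sqrt(1.5 * np.exp(0.8 * z1 + 0.5 * z2))
y = 5 + 2 * x + rng.normal(scale=sigma_t)

# Least squares residuals e-hat from the original model.
X = np.column_stack([np.ones(T), x])
b, *_ = np.linalg.lstsq(X, y, rcond=None)
e_hat = y - X @ b

# Regress ln(e-hat^2) on a constant, z_t1 and z_t2.
u = np.log(e_hat**2)
Z = np.column_stack([np.ones(T), z1, z2])
a, *_ = np.linalg.lstsq(Z, u, rcond=None)

# Usual F test of H0: alpha_1 = alpha_2 = 0 (restricted model: constant only).
ssr_u = np.sum((u - Z @ a) ** 2)
ssr_r = np.sum((u - u.mean()) ** 2)
F = ((ssr_r - ssr_u) / 2) / (ssr_u / (T - 3))
print(F, stats.f.sf(F, 2, T - 3))
```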

Autocorrelation

Chapter 11

Copyright © 1997 John Wiley & Sons, Inc. All rights reserved. Reproduction or translation of this work beyond that permitted in Section 117 of the 1976 United States Copyright Act without the express written permission of the copyright owner is unlawful. Requests for further information should be addressed to the Permissions Department, John Wiley & Sons, Inc. The purchaser may make back-up copies for his/her own use only and not for distribution or resale. The Publisher assumes no responsibility for errors, omissions, or damages caused by the use of these programs or from the use of the information contained herein.

11.1


The Nature of Autocorrelation

For efficiency (accurate estimation/prediction),
all systematic information needs to be
incorporated into the regression model.

Autocorrelation is a systematic pattern in the
errors that can be either attracting (positive)
or repelling (negative) autocorrelation.

11.2

Positive Auto.        No Auto.        Negative Auto.

[Figure: three plots of e_t against t. With positive
autocorrelation the residuals cross the zero line not
enough (attracting); with no autocorrelation they cross
the line randomly; with negative autocorrelation they
cross the line too much (repelling).]

11.3

Regression Model

y_t = β_1 + β_2 x_t + e_t

zero mean:            E(e_t) = 0
homoskedasticity:     var(e_t) = σ^2
nonautocorrelation:   cov(e_t, e_s) = 0   for t ≠ s

autocorrelation:      cov(e_t, e_s) ≠ 0   for t ≠ s

11.4



Order of Autocorrelation

y_t = β_1 + β_2 x_t + e_t

1st Order:  e_t = ρ e_{t-1} + ν_t

2nd Order:  e_t = ρ_1 e_{t-1} + ρ_2 e_{t-2} + ν_t

3rd Order:  e_t = ρ_1 e_{t-1} + ρ_2 e_{t-2} + ρ_3 e_{t-3} + ν_t

We will assume First Order Autocorrelation:

AR(1):  e_t = ρ e_{t-1} + ν_t

11.5


First Order Autocorrelation

y_t = β_1 + β_2 x_t + e_t

e_t = ρ e_{t-1} + ν_t     where  -1 < ρ < 1

E(ν_t) = 0     var(ν_t) = σ_ν^2     cov(ν_t, ν_s) = 0   for t ≠ s

These assumptions about ν_t imply the following about e_t:

E(e_t) = 0

var(e_t) = σ_e^2 = σ_ν^2 / (1 - ρ^2)

cov(e_t, e_{t-k}) = σ_e^2 ρ^k    for k > 0

corr(e_t, e_{t-k}) = ρ^k         for k > 0

11.6
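
These implied moments are easy to check by simulation. A minimal sketch (not from the original slides), with ρ and σ_ν chosen arbitrarily:

```python
import numpy as np

# Simulate a long AR(1) error series and compare sample moments to the formulas.
rng = np.random.default_rng(4)
rho, sigma_nu, T = 0.7, 1.0, 100_000
e = np.zeros(T)
for t in range(1, T):
    e[t] = rho * e[t - 1] + rng.normal(scale=sigma_nu)

print(e.var(), sigma_nu**2 / (1 - rho**2))   # var(e_t) = sigma_nu^2 / (1 - rho^2)
for k in (1, 2, 3):
    corr_k = np.corrcoef(e[k:], e[:-k])[0, 1]
    print(k, corr_k, rho**k)                 # corr(e_t, e_{t-k}) = rho^k
```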


Autocorrelation creates some
Problems for Least Squares:

1. The least squares estimator is still linear
   and unbiased, but it is not efficient.

2. The formulas normally used to compute
   the least squares standard errors are no
   longer correct, and confidence intervals and
   hypothesis tests using them will be wrong.

11.7



Generalized Least Squares

AR(1):   e_t = ρ e_{t-1} + ν_t

y_t = β_1 + β_2 x_t + e_t

Substitute in for e_t:

y_t = β_1 + β_2 x_t + ρ e_{t-1} + ν_t

Now we need to get rid of e_{t-1}.   (continued)

11.8


y_t = β_1 + β_2 x_t + e_t

y_t = β_1 + β_2 x_t + ρ e_{t-1} + ν_t

e_t = y_t - β_1 - β_2 x_t

Lag the errors once:

e_{t-1} = y_{t-1} - β_1 - β_2 x_{t-1}

y_t = β_1 + β_2 x_t + ρ(y_{t-1} - β_1 - β_2 x_{t-1}) + ν_t   (continued)

11.9


y_t = β_1 + β_2 x_t + ρ(y_{t-1} - β_1 - β_2 x_{t-1}) + ν_t

y_t = β_1 + β_2 x_t + ρ y_{t-1} - ρ β_1 - ρ β_2 x_{t-1} + ν_t

y_t - ρ y_{t-1} = β_1(1 - ρ) + β_2(x_t - ρ x_{t-1}) + ν_t

y_t* = β_1* + β_2 x_{t2}* + ν_t

where   y_t*    = y_t - ρ y_{t-1}
        β_1*    = β_1(1 - ρ)
        x_{t2}* = (x_t - ρ x_{t-1})

11.10


y_t* = β_1* + β_2 x_{t2}* + ν_t

y_t*    = y_t - ρ y_{t-1}
β_1*    = β_1(1 - ρ)
x_{t2}* = x_t - ρ x_{t-1}
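
A minimal sketch of this quasi-differencing transformation (not from the original slides). Here ρ is treated as known for clarity; in practice it must be estimated (e.g., from the least squares residuals), and the data below are simulated for illustration.

```python
import numpy as np

# Hypothetical regression with AR(1) errors; rho is taken as known here.
rng = np.random.default_rng(5)
T, rho = 500, 0.7
x = rng.uniform(0, 10, T)
e = np.zeros(T)
for t in range(1, T):
    e[t] = rho * e[t - 1] + rng.normal()
y = 3 + 1.5 * x + e

# Transform: y_t* = y_t - rho*y_{t-1}, x_t2* = x_t - rho*x_{t-1},
# and the intercept column becomes (1 - rho), since beta_1* = beta_1(1 - rho).
y_star = y[1:] - rho * y[:-1]
x_star = x[1:] - rho * x[:-1]
X_star = np.column_stack([np.full(T - 1, 1.0 - rho), x_star])

# OLS on the transformed model has well-behaved errors nu_t.
b, *_ = np.linalg.lstsq(X_star, y_star, rcond=None)
print(b)   # estimates of beta_1 and beta_2
```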

Problems estimating 


