PowerPoint Slides for Undergraduate Econometrics by Lawrence C. Marsh

Copyright 1996 Lawrence C. Marsh




β1 and β2 are unknown population constants.

The formulas that produce the sample estimates b1 and b2 are called the estimators of β1 and β2.

When b1 and b2 are used to represent the formulas rather than specific values, they are called estimators of β1 and β2, which are random variables because they differ from sample to sample.

4.4


Estimators are Random Variables (estimates are not)

• If the least squares estimators b1 and b2 are random variables, then what are their means, variances, covariances and probability distributions?

• Compare the properties of alternative estimators to the properties of the least squares estimators.

4.5


The Expected Values of b1 and b2

The least squares formulas (estimators) in the simple regression case:

b2 = (T Σ xt yt − Σ xt Σ yt) / (T Σ xt² − (Σ xt)²)        (3.3.8a)

b1 = ȳ − b2 x̄                                            (3.3.8b)

where ȳ = Σ yt / T and x̄ = Σ xt / T

4.6
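To make the formulas concrete, here is a minimal Python sketch (not part of the original slides) that applies (3.3.8a) and (3.3.8b) to simulated data; the sample size and parameter values are assumptions chosen for the demo.

import numpy as np

rng = np.random.default_rng(0)
T = 50                                 # sample size (assumed for this demo)
beta1, beta2, sigma = 2.0, 0.5, 1.0    # assumed true parameter values
x = rng.uniform(0.0, 10.0, T)
y = beta1 + beta2 * x + rng.normal(0.0, sigma, T)

# (3.3.8a): b2 = (T Σ xt yt − Σ xt Σ yt) / (T Σ xt² − (Σ xt)²)
b2 = (T * (x * y).sum() - x.sum() * y.sum()) / (T * (x ** 2).sum() - x.sum() ** 2)

# (3.3.8b): b1 = ȳ − b2 x̄
b1 = y.mean() - b2 * x.mean()

print(b1, b2)   # estimates near the assumed β1 = 2.0 and β2 = 0.5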

Substitute yt = β1 + β2 xt + εt into (3.3.8a) to get:

b2 = β2 + (T Σ xt εt − Σ xt Σ εt) / (T Σ xt² − (Σ xt)²)

The mean of b2 is:

E(b2) = β2 + (T Σ xt E(εt) − Σ xt Σ E(εt)) / (T Σ xt² − (Σ xt)²)

Since E(εt) = 0, it follows that E(b2) = β2.

4.7
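A small Monte Carlo sketch (an addition, with assumed parameter values) illustrates the unbiasedness result: averaging b2 across many simulated samples comes out close to β2.

import numpy as np

rng = np.random.default_rng(1)
T, beta1, beta2, sigma = 50, 2.0, 0.5, 1.0   # assumed values
x = rng.uniform(0.0, 10.0, T)                # regressors held fixed across samples

def b2_hat(y):
    # slope estimator, formula (3.3.8a)
    return (T * (x * y).sum() - x.sum() * y.sum()) / (T * (x ** 2).sum() - x.sum() ** 2)

# Each simulated sample gives a different estimate b2.
draws = [b2_hat(beta1 + beta2 * x + rng.normal(0.0, sigma, T)) for _ in range(10_000)]
print(np.mean(draws))   # close to β2 = 0.5, illustrating E(b2) = β2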


An Unbiased Estimator

The result E(b2) = β2 means that the distribution of b2 is centered at β2.

Since the distribution of b2 is centered at β2, we say that b2 is an unbiased estimator of β2.

4.8


Wrong Model Specification

The unbiasedness result on the previous slide assumes that we are using the correct model.

If the model is of the wrong form or is missing important variables, then E(εt) ≠ 0, and consequently E(b2) ≠ β2.

4.9


Unbiased Estimator of the Intercept

In a similar manner, the estimator b1 of the intercept or constant term can be shown to be an unbiased estimator of β1 when the model is correctly specified:

E(b1) = β1

4.10

Equivalent expressions for b2:

b2 = Σ (xt − x̄)(yt − ȳ) / Σ (xt − x̄)²                   (4.2.6)

Expand and multiply top and bottom by T:

b2 = (T Σ xt yt − Σ xt Σ yt) / (T Σ xt² − (Σ xt)²)        (3.3.8a)

4.11
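A quick numerical check (assumed demo data, not from the slides) that the two expressions give the same slope:

import numpy as np

rng = np.random.default_rng(2)
T = 50                                       # assumed demo data
x = rng.uniform(0.0, 10.0, T)
y = 2.0 + 0.5 * x + rng.normal(0.0, 1.0, T)

# (3.3.8a): cross-product form
b2_a = (T * (x * y).sum() - x.sum() * y.sum()) / (T * (x ** 2).sum() - x.sum() ** 2)
# (4.2.6): deviation-from-mean form
b2_b = ((x - x.mean()) * (y - y.mean())).sum() / ((x - x.mean()) ** 2).sum()

print(np.isclose(b2_a, b2_b))   # True: the two expressions are algebraically identical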

Variance of b2

Given that both yt and εt have variance σ², the variance of the estimator b2 is:

var(b2) = σ² / Σ (xt − x̄)²

b2 is a function of the yt values, but var(b2) does not involve yt directly.

4.12

Variance of b1

Given b1 = ȳ − b2 x̄, the variance of the estimator b1 is:

var(b1) = σ² Σ xt² / (T Σ (xt − x̄)²)

4.13

Covariance of b1 and b2

cov(b1, b2) = σ² (−x̄) / Σ (xt − x̄)²

If x̄ = 0, the covariance is zero: the slope estimate can change without affecting the intercept estimate.

4.14
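The formulas on slides 4.12 through 4.14 can be checked by simulation. The sketch below (assumed parameter values, not from the slides) compares the empirical variances and covariance of b1 and b2 against the formulas:

import numpy as np

rng = np.random.default_rng(3)
T, beta1, beta2, sigma = 50, 2.0, 0.5, 1.0   # assumed values
x = rng.uniform(0.0, 10.0, T)
sxx = ((x - x.mean()) ** 2).sum()

# Formulas from slides 4.12 - 4.14:
var_b2 = sigma ** 2 / sxx
var_b1 = sigma ** 2 * (x ** 2).sum() / (T * sxx)
cov_b1_b2 = -sigma ** 2 * x.mean() / sxx

b1s, b2s = [], []
for _ in range(20_000):
    y = beta1 + beta2 * x + rng.normal(0.0, sigma, T)
    b2 = ((x - x.mean()) * (y - y.mean())).sum() / sxx
    b2s.append(b2)
    b1s.append(y.mean() - b2 * x.mean())

print(np.var(b2s), var_b2)                # empirical vs. formula
print(np.var(b1s), var_b1)
print(np.cov(b1s, b2s)[0, 1], cov_b1_b2)  # negative, since x̄ > 0 here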

What factors determine variance and covariance?

1. σ²: more uncertainty about the yt values means more uncertainty about b1, b2 and their relationship.

2. The more spread out the xt values are, the more confidence we have in b1, b2, etc.

3. The larger the sample size, T, the smaller the variances and covariances.

4. The variance of b1 is large when the (squared) xt values are far from zero (in either direction).

5. Changing the slope, b2, has no effect on the intercept, b1, when the sample mean is zero. But if the sample mean is positive, the covariance between b1 and b2 will be negative, and vice versa.

4.15


Gauss-Markov Theorem

Under the first five assumptions of the simple linear regression model, the ordinary least squares estimators b1 and b2 have the smallest variance of all linear and unbiased estimators of β1 and β2. This means that b1 and b2 are the Best Linear Unbiased Estimators (BLUE) of β1 and β2.

4.16
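One way to see the theorem at work is to compare OLS with some other linear unbiased estimator of the slope. The two-group (Wald-type) estimator in the sketch below is a competitor chosen purely for illustration, not from the slides; under the assumed design, OLS shows the smaller variance, as Gauss-Markov predicts.

import numpy as np

rng = np.random.default_rng(4)
T, beta1, beta2, sigma = 50, 2.0, 0.5, 1.0   # assumed values
x = rng.uniform(0.0, 10.0, T)
hi = x > np.median(x)                        # split sample into high-x and low-x halves
lo = ~hi

ols, grouped = [], []
for _ in range(20_000):
    y = beta1 + beta2 * x + rng.normal(0.0, sigma, T)
    # OLS slope (a linear, unbiased estimator)
    ols.append(((x - x.mean()) * y).sum() / ((x - x.mean()) ** 2).sum())
    # Two-group slope: also linear in y and unbiased for fixed x
    grouped.append((y[hi].mean() - y[lo].mean()) / (x[hi].mean() - x[lo].mean()))

print(np.var(ols) < np.var(grouped))   # True: OLS has the smaller variance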


Implications of Gauss-Markov

1. b1 and b2 are "best" within the class of linear and unbiased estimators.

2. "Best" means smallest variance within the class of linear/unbiased estimators.

3. All of the first five assumptions must hold to satisfy Gauss-Markov.

4. Gauss-Markov does not require assumption six: normality.

5. Gauss-Markov is not based on the least squares principle but on the estimators b1 and b2 themselves.

4.17



G-Markov implications (continued)

6. If we are not satisfied with restricting our estimation to the class of linear and unbiased estimators, we should ignore the Gauss-Markov Theorem and use some nonlinear and/or biased estimator instead. (Note: a biased or nonlinear estimator could have smaller variance than those satisfying Gauss-Markov.)

7. Gauss-Markov applies to the b1 and b2 estimators and not to particular sample values (estimates) of b1 and b2.

4.18



Probability Distribution of Least Squares Estimators

b1 ~ N( β1 ,  σ² Σ xt² / (T Σ (xt − x̄)²) )

b2 ~ N( β2 ,  σ² / Σ (xt − x̄)² )

4.19


yt and εt normally distributed

The least squares estimator of β2 can be expressed as a linear combination of yt's:

b2 = Σ wt yt ,  where  wt = (xt − x̄) / Σ (xt − x̄)²

b1 = ȳ − b2 x̄

This means that b1 and b2 are normal, since linear combinations of normals are normal.

4.20
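A sketch (assumed demo data, not from the slides) verifying that the weighted sum Σ wt yt reproduces the usual slope estimate:

import numpy as np

rng = np.random.default_rng(5)
T = 50                                       # assumed demo data
x = rng.uniform(0.0, 10.0, T)
y = 2.0 + 0.5 * x + rng.normal(0.0, 1.0, T)

w = (x - x.mean()) / ((x - x.mean()) ** 2).sum()   # the weights wt from the slide
b2_linear = (w * y).sum()                          # b2 = Σ wt yt: linear in the yt's
b2_direct = ((x - x.mean()) * (y - y.mean())).sum() / ((x - x.mean()) ** 2).sum()

print(np.isclose(b2_linear, b2_direct))            # True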


Normally distributed under the Central Limit Theorem

If the first five Gauss-Markov assumptions hold, and the sample size, T, is sufficiently large, then the least squares estimators, b1 and b2, have a distribution that approximates the normal distribution with greater accuracy the larger the value of the sample size, T.

4.21

Consistency

We would like our estimators, b1 and b2, to collapse onto the true population values, β1 and β2, as the sample size, T, goes to infinity.

One way to achieve this consistency property is for the variances of b1 and b2 to go to zero as T goes to infinity.

Since the formulas for the variances of the least squares estimators b1 and b2 show that their variances do, in fact, go to zero as T goes to infinity, b1 and b2 are consistent estimators of β1 and β2.

4.22
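A short sketch (assumed σ and design, not from the slides) showing var(b2) = σ² / Σ(xt − x̄)² shrinking toward zero as T grows:

import numpy as np

rng = np.random.default_rng(6)
sigma = 1.0                                  # assumed error standard deviation

for T in (10, 100, 1_000, 10_000):
    x = rng.uniform(0.0, 10.0, T)
    # var(b2) = σ² / Σ(xt − x̄)² shrinks as T grows
    print(T, sigma ** 2 / ((x - x.mean()) ** 2).sum())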

Estimating the variance of the error term, σ²

êt = yt − b1 − b2 xt

σ̂² = Σ êt² / (T − 2)    (sum from t = 1 to T)

σ̂² is an unbiased estimator of σ².

4.23
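A sketch (assumed values, not from the slides) computing σ̂² from the residuals; note the divisor T − 2, not T:

import numpy as np

rng = np.random.default_rng(7)
T, beta1, beta2, sigma = 50, 2.0, 0.5, 1.0   # assumed values
x = rng.uniform(0.0, 10.0, T)
y = beta1 + beta2 * x + rng.normal(0.0, sigma, T)

b2 = ((x - x.mean()) * (y - y.mean())).sum() / ((x - x.mean()) ** 2).sum()
b1 = y.mean() - b2 * x.mean()

e_hat = y - b1 - b2 * x                      # residuals êt
sigma2_hat = (e_hat ** 2).sum() / (T - 2)    # unbiased estimator of σ²
print(sigma2_hat)                            # near σ² = 1 on average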

The Least Squares Predictor, ŷo

Given a value of the explanatory variable, xo, we would like to predict a value of the dependent variable, yo. The least squares predictor is:

ŷo = b1 + b2 xo        (4.7.2)

4.24
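A sketch of the predictor in use (assumed demo data; xo = 7.5 is an arbitrary choice for illustration):

import numpy as np

rng = np.random.default_rng(8)
T = 50                                       # assumed demo data
x = rng.uniform(0.0, 10.0, T)
y = 2.0 + 0.5 * x + rng.normal(0.0, 1.0, T)

b2 = ((x - x.mean()) * (y - y.mean())).sum() / ((x - x.mean()) ** 2).sum()
b1 = y.mean() - b2 * x.mean()

x0 = 7.5                    # chosen value of the explanatory variable (assumed)
y0_hat = b1 + b2 * x0       # the least squares predictor (4.7.2)
print(y0_hat)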


Inference in the Simple Regression Model
Chapter 5

Copyright © 1997 John Wiley & Sons, Inc. All rights reserved. Reproduction or translation of this work beyond that permitted in Section 117 of the 1976 United States Copyright Act without the express written permission of the copyright owner is unlawful. Request for further information should be addressed to the Permissions Department, John Wiley & Sons, Inc. The purchaser may make back-up copies for his/her own use only and not for distribution or resale. The Publisher assumes no responsibility for errors, omissions, or damages, caused by the use of these programs or from the use of the information contained herein.

5.1


Assumptions of the Simple Linear Regression Model

1. yt = β1 + β2 xt + εt

2. E(εt) = 0  <=>  E(yt) = β1 + β2 xt

3. var(εt) = σ² = var(yt)

4. cov(εi, εj) = cov(yi, yj) = 0

5. xt ≠ c for every observation

6. εt ~ N(0, σ²)  <=>  yt ~ N(β1 + β2 xt, σ²)

5.2


Probability Distribution of Least Squares Estimators

b1 ~ N( β1 ,  σ² Σ xt² / (T Σ (xt − x̄)²) )

b2 ~ N( β2 ,  σ² / Σ (xt − x̄)² )

5.3


Error Variance Estimation

Unbiased estimator of the error variance:

σ̂² = Σ êt² / (T − 2)

Transform to a chi-square distribution:

(T − 2) σ̂² / σ²  ~  χ²(T − 2)

5.4


We make a correct decision if:

• The null hypothesis is false and we decide to reject it.
• The null hypothesis is true and we decide not to reject it.

Our decision is incorrect if:

• The null hypothesis is true and we decide to reject it. This is a type I error.
• The null hypothesis is false and we decide not to reject it. This is a type II error.

5.5



b2 ~ N( β2 ,  σ² / Σ (xt − x̄)² )

Create a standardized normal random variable, Z, by subtracting the mean of b2 and dividing by its standard deviation:

Z = (b2 − β2) / √var(b2)  ~  N(0, 1)

5.6
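A simulation sketch (assumed values, with σ treated as known, not from the slides) showing that the standardized slope behaves like N(0, 1):

import numpy as np

rng = np.random.default_rng(9)
T, beta1, beta2, sigma = 50, 2.0, 0.5, 1.0   # assumed values
x = rng.uniform(0.0, 10.0, T)
sxx = ((x - x.mean()) ** 2).sum()
sd_b2 = np.sqrt(sigma ** 2 / sxx)            # √var(b2), using the known σ

zs = []
for _ in range(20_000):
    y = beta1 + beta2 * x + rng.normal(0.0, sigma, T)
    b2 = ((x - x.mean()) * (y - y.mean())).sum() / sxx
    zs.append((b2 - beta2) / sd_b2)          # Z = (b2 − β2) / √var(b2)

print(np.mean(zs), np.std(zs))               # ≈ 0 and 1, consistent with N(0, 1)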

Simple Linear Regression

yt = β1 + β2 xt + εt ,  where E(εt) = 0

yt ~ N(β1 + β2 xt , σ²)  since  E(yt) = β1 + β2 xt

εt = yt − β1 − β2 xt

Therefore, εt ~ N(0, σ²).

5.7

Create a Chi-Square

εt ~ N(0, σ²), but we want N(0, 1):

(εt / σ) ~ N(0, 1)     Standard Normal

(εt / σ)² ~ χ²(1)      Chi-Square

5.8


Sum of Chi-Squares

Σ (εt / σ)²  (t = 1, …, T)
  = (ε1 / σ)² + (ε2 / σ)² + … + (εT / σ)²
  ~ χ²(1) + χ²(1) + … + χ²(1)  =  χ²(T)

Therefore,  Σ (εt / σ)² ~ χ²(T)    (sum from t = 1 to T)

5.9
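A simulation sketch (assumed T and σ, not from the slides) checking the χ²(T) result through its mean T and variance 2T:

import numpy as np

rng = np.random.default_rng(10)
T, sigma = 10, 2.0                           # assumed values
reps = 100_000

eps = rng.normal(0.0, sigma, (reps, T))      # T errors per replication
stat = ((eps / sigma) ** 2).sum(axis=1)      # Σ (εt/σ)² for each replication

print(stat.mean(), T)                        # a χ²(T) variable has mean T
print(stat.var(), 2 * T)                     # ... and variance 2T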


Since the errors εt = yt − β1 − β2 xt are not observable, we estimate them with the sample residuals êt = yt − b1 − b2 xt.

Unlike the errors, the …


