PowerPoint Slides for Undergraduate Econometrics
Copyright 1996 Lawrence C. Marsh




Regression Model - I

1.  The average value of y, given x, is given by the linear regression:

        E(y) = β1 + β2 x

2.  For each value of x, the values of y are distributed around their mean with variance:

        var(y) = σ²

3.  The values of y are uncorrelated, having zero covariance and thus no linear relationship:

        cov(y_i, y_j) = 0

4.  The variable x must take at least two different values, so that x ≠ c, where c is a constant.

3.9


One more assumption that is often used in practice but is not required for least squares:

5.  (optional) The values of y are normally distributed about their mean for each value of x:

        y ~ N[(β1 + β2 x), σ²]

3.10


The Error Term

y is a random variable composed of two parts:

I.  Systematic component:

        E(y) = β1 + β2 x

    This is the mean of y.

II. Random component:

        e = y − E(y) = y − β1 − β2 x

    This is called the random error.

Together E(y) and e form the model:

        y = β1 + β2 x + e

3.11
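
The two components can be seen in a short simulation. A minimal Python sketch (not part of the original slides), assuming illustrative values β1 = 4, β2 = 1.5, σ = 1 and a single fixed x:

    import numpy as np

    # Assumed illustrative parameters -- not from the slides' data.
    beta1, beta2, sigma = 4.0, 1.5, 1.0
    rng = np.random.default_rng(0)

    x = 2.0                                   # one fixed value of x
    e = rng.normal(0.0, sigma, size=100_000)  # random component, E(e) = 0
    y = beta1 + beta2 * x + e                 # y = systematic part + random error

    print(y.mean())   # ~ 7.0 = beta1 + beta2*x, the mean of y
    print(y.var())    # ~ 1.0 = sigma**2 = var(e) = var(y)

The empirical mean and variance of y match the moment assumptions stated on slides 3.9 and 3.16.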


Figure 3.5  The relationship among y, e and the true regression line.
[Figure: four data points (x_1, y_1), …, (x_4, y_4) scattered about the line E(y) = β1 + β2 x, with each error e_t drawn as the vertical distance from y_t to the line.]

3.12



Figure 3.7a  The relationship among y, ê and the fitted regression line.
[Figure: the same four data points plotted about the fitted line ŷ = b1 + b2 x, with each residual ê_t drawn as the vertical distance from y_t to ŷ_t.]

3.13



Figure 3.7b  The sum of squared residuals from any other line will be larger.
[Figure: the least squares line ŷ = b1 + b2 x and an arbitrary competing line ŷ* = b1* + b2* x drawn through the same four data points; the competing line's residuals ê_t* are larger overall than the least squares residuals ê_t.]

3.14


Figure 3.4  Probability density functions for e and y.
[Figure: f(e) centered at 0 and f(y) centered at β1 + β2 x; the two densities have the same shape, since var(e) = var(y) = σ².]

3.15


The Error Term Assumptions

1.  The value of y, for each value of x, is:

        y = β1 + β2 x + e

2.  The average value of the random error e is:

        E(e) = 0

3.  The variance of the random error e is:

        var(e) = σ² = var(y)

4.  The covariance between any pair of e's is:

        cov(e_i, e_j) = cov(y_i, y_j) = 0

5.  x must take at least two different values, so that x ≠ c, where c is a constant.

6.  (optional) e is normally distributed with mean 0 and var(e) = σ²:

        e ~ N(0, σ²)

3.16


Unobservable Nature of the Error Term

1.  Unspecified factors / explanatory variables, not in the model, may be in the error term.

2.  Approximation error is in the error term if the relationship between y and x is not exactly linear.

3.  Strictly unpredictable random behavior that may be unique to that observation is in the error term.

3.17

Population regression values:

        y_t = β1 + β2 x_t + e_t

Population regression line:

        E(y_t|x_t) = β1 + β2 x_t

Sample regression values:

        y_t = b1 + b2 x_t + ê_t

Sample regression line:

        ŷ_t = b1 + b2 x_t

3.18
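
The distinction can be made concrete by simulating from known population parameters and fitting the sample line. A sketch, again with assumed illustrative values (np.polyfit is used here only as a convenient least squares routine):

    import numpy as np

    beta1, beta2 = 4.0, 1.5           # population parameters (assumed, for illustration)
    rng = np.random.default_rng(1)

    x_t = rng.uniform(0, 10, size=50)
    e_t = rng.normal(0, 1, size=50)
    y_t = beta1 + beta2 * x_t + e_t   # population regression values

    b2, b1 = np.polyfit(x_t, y_t, 1)  # sample regression line: yhat_t = b1 + b2*x_t
    ehat_t = y_t - (b1 + b2 * x_t)    # residuals, the sample analogue of e_t

    print(b1, b2)          # close to beta1, beta2
    print(ehat_t.mean())   # least squares residuals average ~ 0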

        y_t = β1 + β2 x_t + e_t   ⟹   e_t = y_t − β1 − β2 x_t

Minimize the error sum of squared deviations:

        S(β1, β2) = Σ_{t=1}^{T} (y_t − β1 − β2 x_t)²        (3.3.4)

3.19
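
Equation (3.3.4) can be checked numerically: handing S to a general-purpose minimizer reproduces the least squares coefficients. A sketch with assumed simulated data:

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(2)
    x = rng.uniform(0, 10, size=50)
    y = 4.0 + 1.5 * x + rng.normal(0, 1, size=50)   # assumed illustrative data

    def S(beta):
        # Error sum of squared deviations, equation (3.3.4).
        b1, b2 = beta
        return np.sum((y - b1 - b2 * x) ** 2)

    res = minimize(S, x0=[0.0, 0.0])
    print(res.x)                 # numerical minimizer (b1, b2)
    print(np.polyfit(x, y, 1))   # least squares answer [b2, b1], for comparison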

Minimize w.r.t. β1 and β2:

        S(β1, β2) = Σ_{t=1}^{T} (y_t − β1 − β2 x_t)²        (3.3.4)

        ∂S(·)/∂β1 = −2 Σ (y_t − β1 − β2 x_t)

        ∂S(·)/∂β2 = −2 Σ x_t (y_t − β1 − β2 x_t)

Set each of these two derivatives equal to zero and solve these two equations for the two unknowns: β1 and β2.

3.20
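
The two partial derivatives can be verified symbolically for a small T. A sketch using sympy (the symbols are hypothetical; the point is only to check the algebra):

    import sympy as sp

    b1, b2 = sp.symbols('beta1 beta2')
    xs = sp.symbols('x1:5')   # x_1 .. x_4
    ys = sp.symbols('y1:5')   # y_1 .. y_4

    S = sum((y - b1 - b2 * x) ** 2 for x, y in zip(xs, ys))

    # Both differences simplify to zero, confirming the slide's formulas.
    claim1 = -2 * sum(y - b1 - b2 * x for x, y in zip(xs, ys))
    claim2 = -2 * sum(x * (y - b1 - b2 * x) for x, y in zip(xs, ys))
    print(sp.simplify(sp.diff(S, b1) - claim1))   # 0
    print(sp.simplify(sp.diff(S, b2) - claim2))   # 0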



Minimize w.r.t. β1 and β2:

        S(·) = Σ_{t=1}^{T} (y_t − β1 − β2 x_t)²

[Figure: S(·) plotted against β_i is bowl-shaped with its minimum at β_i = b_i; the slope ∂S(·)/∂β_i is negative to the left of b_i, positive to the right, and zero at b_i.]

3.21


To minimize S(·), set the two derivatives equal to zero to get:

        ∂S(·)/∂β1 = −2 Σ (y_t − b1 − b2 x_t) = 0

        ∂S(·)/∂β2 = −2 Σ x_t (y_t − b1 − b2 x_t) = 0

When these two terms are set to zero, β1 and β2 become b1 and b2, because they no longer represent just any values of β1 and β2 but the special values that correspond to the minimum of S(·).

3.22



        −2 Σ (y_t − b1 − b2 x_t) = 0

        −2 Σ x_t (y_t − b1 − b2 x_t) = 0

Dividing by −2 and distributing the sums gives:

        Σ y_t − T b1 − b2 Σ x_t = 0

        Σ x_t y_t − b1 Σ x_t − b2 Σ x_t² = 0

Rearranging yields the normal equations:

        T b1 + b2 Σ x_t = Σ y_t

        b1 Σ x_t + b2 Σ x_t² = Σ x_t y_t

3.23
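
The normal equations are a 2×2 linear system in b1 and b2, so they can be solved directly. A sketch with numpy and assumed illustrative data:

    import numpy as np

    rng = np.random.default_rng(3)
    x = rng.uniform(0, 10, size=50)
    y = 4.0 + 1.5 * x + rng.normal(0, 1, size=50)   # assumed illustrative data
    T = len(y)

    # T*b1        + b2*Sum(x_t)   = Sum(y_t)
    # b1*Sum(x_t) + b2*Sum(x_t^2) = Sum(x_t*y_t)
    A = np.array([[T,       x.sum()],
                  [x.sum(), (x ** 2).sum()]])
    rhs = np.array([y.sum(), (x * y).sum()])

    b1, b2 = np.linalg.solve(A, rhs)
    print(b1, b2)   # the least squares estimates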


Solve for b1 and b2 using the definitions of x̄ and ȳ:

        T b1 + b2 Σ x_t = Σ y_t

        b1 Σ x_t + b2 Σ x_t² = Σ x_t y_t

        b2 = (T Σ x_t y_t − Σ x_t Σ y_t) / (T Σ x_t² − (Σ x_t)²)

        b1 = ȳ − b2 x̄

3.24
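
These two closed-form expressions translate line for line into code. A sketch (np.polyfit serves only as an independent cross-check):

    import numpy as np

    def ols(x, y):
        # b2 and b1 from the closed-form least squares formulas.
        T = len(y)
        b2 = (T * np.sum(x * y) - np.sum(x) * np.sum(y)) / \
             (T * np.sum(x ** 2) - np.sum(x) ** 2)
        b1 = y.mean() - b2 * x.mean()   # b1 = ybar - b2*xbar
        return b1, b2

    rng = np.random.default_rng(4)
    x = rng.uniform(0, 10, size=50)
    y = 4.0 + 1.5 * x + rng.normal(0, 1, size=50)   # assumed illustrative data

    print(ols(x, y))
    print(np.polyfit(x, y, 1))   # [b2, b1] -- should agree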


elasticities

        η = percentage change in y / percentage change in x

          = (Δy/y) / (Δx/x) = (Δy/Δx) · (x/y)

Using calculus, we can get the elasticity at a point:

        η = lim_{Δx→0} (Δy/Δx) · (x/y) = (∂y/∂x) · (x/y)

3.25
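
The finite-difference version approaches the point elasticity as Δx shrinks. A tiny sketch for an assumed example function y = x² (whose point elasticity is 2 everywhere):

    def elasticity(f, x, dx=1e-6):
        # (Delta-y / Delta-x) * (x / y), with a small Delta-x.
        y = f(x)
        return ((f(x + dx) - y) / dx) * (x / y)

    print(elasticity(lambda x: x ** 2, 3.0))   # ~ 2.0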


applying elasticities

        E(y) = β1 + β2 x

        ∂E(y)/∂x = β2

        η = [∂E(y)/∂x] · [x/E(y)] = β2 · x/E(y)

3.26


estimating elasticities

        η̂ = (∂ŷ/∂x) · (x/y) = b2 · (x/y)

        ŷ_t = b1 + b2 x_t = 4 + 1.5 x_t

        x̄ = 8   = average number of years of experience
        ȳ = $10 = average wage rate

        η̂ = b2 · (x̄/ȳ) = 1.5 · (8/10) = 1.2

3.27
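
The slide's arithmetic in one line, using the values given above:

    b2, xbar, ybar = 1.5, 8.0, 10.0   # from the wage example on this slide
    print(b2 * xbar / ybar)           # 1.2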



Prediction

Estimated regression equation:

        ŷ_t = 4 + 1.5 x_t

        x_t = years of experience
        ŷ_t = predicted wage rate

If x_t = 2 years, then ŷ_t = $7.00 per hour.
If x_t = 3 years, then ŷ_t = $8.50 per hour.

3.28
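
The two predictions follow directly from the estimated equation:

    def predict_wage(years):
        # Predicted wage from the estimated equation yhat = 4 + 1.5*x.
        return 4.0 + 1.5 * years

    print(predict_wage(2))   # 7.0  -> $7.00 per hour
    print(predict_wage(3))   # 8.5  -> $8.50 per hour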

log-log models

        ln(y) = β1 + β2 ln(x)

        ∂ln(y)/∂x = β2 · ∂ln(x)/∂x

        (1/y) · (∂y/∂x) = β2 · (1/x)

3.29


        (1/y) · (∂y/∂x) = β2 · (1/x)

        ⟹  ∂y/∂x = β2 · (y/x)

elasticity of y with respect to x:

        η = (∂y/∂x) · (x/y) = β2

3.30
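
Since the elasticity of a log-log model is the constant β2, regressing ln(y) on ln(x) recovers it directly. A sketch with assumed illustrative parameters:

    import numpy as np

    beta1, beta2 = 1.0, 0.75   # assumed illustrative values
    rng = np.random.default_rng(5)

    x = rng.uniform(1, 10, size=200)
    y = np.exp(beta1 + beta2 * np.log(x) + rng.normal(0, 0.1, size=200))

    b2, b1 = np.polyfit(np.log(x), np.log(y), 1)
    print(b2)   # ~ 0.75: in a log-log model the slope IS the elasticity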


Chapter 4

Properties of Least Squares Estimators

Copyright © 1997 John Wiley & Sons, Inc. All rights reserved. Reproduction or translation of this work beyond that permitted in Section 117 of the 1976 United States Copyright Act without the express written permission of the copyright owner is unlawful. Requests for further information should be addressed to the Permissions Department, John Wiley & Sons, Inc. The purchaser may make back-up copies for his/her own use only and not for distribution or resale. The Publisher assumes no responsibility for errors, omissions, or damages caused by the use of these programs or from the use of the information contained herein.

4.1


Simple Linear Regression Model

        y_t = β1 + β2 x_t + ε_t

        y_t = household weekly food expenditures
        x_t = household weekly income

For a given level of x_t, the expected level of food expenditures will be:

        E(y_t|x_t) = β1 + β2 x_t

4.2


Assumptions of the Simple Linear Regression Model

1.  y_t = β1 + β2 x_t + ε_t

2.  E(ε_t) = 0  <=>  E(y_t) = β1 + β2 x_t

3.  var(ε_t) = σ² = var(y_t)

4.  cov(ε_i, ε_j) = cov(y_i, y_j) = 0

5.  x_t ≠ c for every observation

6.  ε_t ~ N(0, σ²)  <=>  y_t ~ N(β1 + β2 x_t, σ²)

4.3

The population parameters β1 and β2 are …

