PowerPoint Slides for Undergraduate Econometrics
by Lawrence C. Marsh
Copyright 1996 Lawrence C. Marsh


Effects of Scaling the Data

Changing the scale of y:

\[ y_t = \beta_1 + \beta_2 x_t + e_t \]

\[ y_t/c = (\beta_1/c) + (\beta_2/c)\,x_t + e_t/c \]

\[ y_t^* = \beta_1^* + \beta_2^* x_t + e_t^* \]

where $y_t^* = y_t/c$, $\beta_1^* = \beta_1/c$, $\beta_2^* = \beta_2/c$, and $e_t^* = e_t/c$.

All statistics change except for the t-statistics and the $R^2$ value.

6.23

Effects of Scaling the Data

Changing the scale of x and y:

\[ y_t = \beta_1 + \beta_2 x_t + e_t \]

\[ y_t/c = (\beta_1/c) + (c\beta_2/c)(x_t/c) + e_t/c \]

\[ y_t^* = \beta_1^* + \beta_2 x_t^* + e_t^* \]

where $y_t^* = y_t/c$, $\beta_1^* = \beta_1/c$, $x_t^* = x_t/c$, and $e_t^* = e_t/c$.

No change in the $R^2$, the t-statistics, or the regression results for $\beta_2$, but all other statistics change.

6.24
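The invariance claims on these two slides are easy to verify numerically. The sketch below is not part of the original slides; the simulated data, the seed, and the constant c are illustrative. It fits OLS by hand on the original data and on data with both $y_t$ and $x_t$ divided by the same constant, then compares the slope, its t-statistic, and $R^2$.

```python
import numpy as np

def ols(y, x):
    """Simple OLS of y on a constant and x; returns slope, its t-stat, and R^2."""
    X = np.column_stack([np.ones_like(x), x])
    b = np.linalg.solve(X.T @ X, X.T @ y)      # (b1, b2)
    e = y - X @ b                              # residuals
    T, K = X.shape
    sigma2 = e @ e / (T - K)                   # estimated error variance
    cov_b = sigma2 * np.linalg.inv(X.T @ X)    # covariance matrix of estimators
    t_b2 = b[1] / np.sqrt(cov_b[1, 1])         # t-statistic for the slope
    R2 = 1 - (e @ e) / np.sum((y - y.mean()) ** 2)
    return b[1], t_b2, R2

rng = np.random.default_rng(0)
x = rng.uniform(10, 100, size=50)
y = 40.0 + 0.13 * x + rng.normal(0, 5, size=50)

c = 1000.0                 # rescaling constant
print(ols(y, x))           # original data
print(ols(y / c, x / c))   # same slope, t-stat, and R^2 after scaling x and y
```

Dividing only y by c (slide 6.23) instead scales the intercept and slope by 1/c while leaving the t-statistics and $R^2$ alone, which the same helper confirms with `ols(y / c, x)`.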


Functional Forms

The term "linear" in a simple regression model does not mean a linear relationship between the variables, but a model in which the parameters enter the model in a linear way.

6.25


Linear vs. Nonlinear

Linear Statistical Models (linear in the parameters):

\[ y_t = \beta_1 + \beta_2 x_t + e_t \]
\[ \ln(y_t) = \beta_1 + \beta_2 x_t + e_t \]
\[ y_t = \beta_1 + \beta_2 \ln(x_t) + e_t \]
\[ y_t = \beta_1 + \beta_2 x_t^2 + e_t \]

Nonlinear Statistical Models (nonlinear in the parameters):

\[ y_t = \beta_1 + \beta_2 x_t^{\beta_3} + e_t \]
\[ y_t = \beta_1 + \beta_2 x_t + \exp(\beta_3 x_t) + e_t \]

6.27


Linear vs. Nonlinear

[Figure: food expenditure (vertical axis) plotted against income (horizontal axis), showing a nonlinear relationship between food expenditure and income.]

6.27



Useful Functional Forms

1.  Linear
2.  Reciprocal
3.  Log-Log
4.  Log-Linear
5.  Linear-Log
6.  Log-Inverse

Look at each form and its slope and elasticity.

6.28


Useful Functional Forms

Linear

\[ y_t = \beta_1 + \beta_2 x_t + e_t \]

slope: $\beta_2$        elasticity: $\beta_2 \, \dfrac{x_t}{y_t}$

6.29

Useful Functional Forms

Reciprocal

\[ y_t = \beta_1 + \beta_2 \frac{1}{x_t} + e_t \]

slope: $-\beta_2 \, \dfrac{1}{x_t^2}$        elasticity: $-\beta_2 \, \dfrac{1}{x_t y_t}$

6.30


Useful Functional Forms

Log-Log

\[ \ln(y_t) = \beta_1 + \beta_2 \ln(x_t) + e_t \]

slope: $\beta_2 \, \dfrac{y_t}{x_t}$        elasticity: $\beta_2$

6.31


Useful Functional Forms

Log-Linear

\[ \ln(y_t) = \beta_1 + \beta_2 x_t + e_t \]

slope: $\beta_2 \, y_t$        elasticity: $\beta_2 \, x_t$

6.32

Useful Functional Forms

Linear-Log

\[ y_t = \beta_1 + \beta_2 \ln(x_t) + e_t \]

slope: $\beta_2 \, \dfrac{1}{x_t}$        elasticity: $\beta_2 \, \dfrac{1}{y_t}$

6.33

Useful Functional Forms

Log-Inverse

\[ \ln(y_t) = \beta_1 - \beta_2 \frac{1}{x_t} + e_t \]

slope: $\beta_2 \, \dfrac{y_t}{x_t^2}$        elasticity: $\beta_2 \, \dfrac{1}{x_t}$

6.34
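Each slope and elasticity above is just calculus applied to the fitted form: the slope is $dy/dx$ and the elasticity is $(dy/dx)(x/y)$. A quick finite-difference check (illustrative code, not from the slides; the chosen β values and the evaluation point x = 2 are arbitrary) confirms, for example, the log-log and reciprocal formulas:

```python
import numpy as np

beta1, beta2 = 1.0, 0.75
x = 2.0
h = 1e-6  # step for numerical differentiation

# Log-log: ln(y) = beta1 + beta2*ln(x)  =>  y = exp(beta1) * x**beta2
y = np.exp(beta1 + beta2 * np.log(x))
dydx = (np.exp(beta1 + beta2 * np.log(x + h)) - y) / h
print(dydx, beta2 * y / x)              # slope matches beta2 * y/x
print(dydx * x / y, beta2)              # elasticity matches beta2

# Reciprocal: y = beta1 + beta2 * (1/x)
y = beta1 + beta2 / x
dydx = (beta1 + beta2 / (x + h) - y) / h
print(dydx, -beta2 / x**2)              # slope matches -beta2 / x^2
print(dydx * x / y, -beta2 / (x * y))   # elasticity matches -beta2 / (x*y)
```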


Error Term Properties

1.  $E(e_t) = 0$
2.  $\mathrm{var}(e_t) = \sigma^2$
3.  $\mathrm{cov}(e_i, e_j) = 0$ for $i \ne j$
4.  $e_t \sim N(0, \sigma^2)$

6.35

Economic Models

1.  Demand Models
2.  Supply Models
3.  Production Functions
4.  Cost Functions
5.  Phillips Curve

6.36


Economic Models

1.  Demand Models

*  quantity demanded ($y^d$) and price (x)
*  constant elasticity

\[ \ln(y_t^d) = \beta_1 + \beta_2 \ln(x_t) + e_t \]

6.37

Economic Models

2.  Supply Models

*  quantity supplied ($y^s$) and price (x)
*  constant elasticity

\[ \ln(y_t^s) = \beta_1 + \beta_2 \ln(x_t) + e_t \]

6.38

Economic Models

3.  Production Functions

*  output (y) and input (x)
*  constant elasticity

Cobb-Douglas Production Function:

\[ \ln(y_t) = \beta_1 + \beta_2 \ln(x_t) + e_t \]

6.39
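Because the Cobb-Douglas form is linear in the parameters after taking logs, the output elasticity $\beta_2$ can be estimated by ordinary least squares on the logged data. A minimal sketch (not from the slides; the simulated input series and the assumed true elasticity 0.6 are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(1, 50, size=200)                    # input (e.g., labor)
e = rng.normal(0, 0.1, size=200)
y = np.exp(0.5 + 0.6 * np.log(x) + e)               # Cobb-Douglas with beta2 = 0.6

X = np.column_stack([np.ones_like(x), np.log(x)])   # regress ln(y) on 1, ln(x)
b1, b2 = np.linalg.lstsq(X, np.log(y), rcond=None)[0]
print(b2)  # estimated output elasticity, close to 0.6
```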

Economic Models

4a.  Cost Functions

*  total cost (y) and output (x)

\[ y_t = \beta_1 + \beta_2 x_t^2 + e_t \]

6.40


Economic Models

4b.  Cost Functions

*  average cost (y/x) and output (x)

\[ y_t/x_t = \beta_1/x_t + \beta_2 x_t + e_t/x_t \]

6.41

Economic Models

5.  Phillips Curve

*  wage rate ($w_t$) and time (t)
*  unemployment rate, $u_t$

\[ \%\Delta w_t = \frac{w_t - w_{t-1}}{w_{t-1}} \]

\[ \%\Delta w_t = \gamma\alpha + \gamma\eta \, \frac{1}{u_t} \]

nonlinear in both variables and parameters

6.42
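Although the model is nonlinear in γ, α, and η, it is linear in the two products γα and γη, so those composites can be estimated by OLS of the wage growth rate on $1/u_t$ (γ, α, and η are not separately identified). A hedged sketch with made-up series and composite values chosen only for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
u = rng.uniform(3, 12, size=100)                    # unemployment rate (%)
dw = 0.02 + 0.08 / u + rng.normal(0, 0.005, 100)    # wage growth; γα=0.02, γη=0.08

X = np.column_stack([np.ones_like(u), 1.0 / u])     # regress %Δw on 1, 1/u
ga, ge = np.linalg.lstsq(X, dw, rcond=None)[0]
print(ga, ge)  # estimates of the composites γα and γη
```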


The Multiple Regression Model

Chapter 7

Copyright © 1997 John Wiley & Sons, Inc. All rights reserved. Reproduction or translation of this work beyond that permitted in Section 117 of the 1976 United States Copyright Act without the express written permission of the copyright owner is unlawful. Requests for further information should be addressed to the Permissions Department, John Wiley & Sons, Inc. The purchaser may make back-up copies for his/her own use only and not for distribution or resale. The Publisher assumes no responsibility for errors, omissions, or damages caused by the use of these programs or from the use of the information contained herein.

7.1


Two Explanatory Variables

\[ y_t = \beta_1 + \beta_2 x_{t2} + \beta_3 x_{t3} + e_t \]

\[ \frac{\partial y_t}{\partial x_{t2}} = \beta_2 \qquad\qquad \frac{\partial y_t}{\partial x_{t3}} = \beta_3 \]

The $x_t$'s affect $y_t$ separately. But least squares estimation of $\beta_2$ now depends upon both $x_{t2}$ and $x_{t3}$.

7.2



Correlated Variables

\[ y_t = \beta_1 + \beta_2 x_{t2} + \beta_3 x_{t3} + e_t \]

$y_t$ = output,  $x_{t2}$ = capital,  $x_{t3}$ = labor

Always 5 workers per machine. If the number of workers per machine is never varied, it becomes impossible to tell whether the machines or the workers are responsible for changes in output.

7.3
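Numerically, an exact 5-workers-per-machine rule makes $x_{t3} = 5 x_{t2}$, so the columns of the regressor matrix are linearly dependent and the least squares normal equations have no unique solution. A small illustration (simulated data; the variable names and coefficient values are made up):

```python
import numpy as np

rng = np.random.default_rng(3)
capital = rng.uniform(1, 10, size=30)
labor = 5.0 * capital                   # always 5 workers per machine
output = 2 + 3 * capital + 0.5 * labor + rng.normal(0, 1, 30)

X = np.column_stack([np.ones(30), capital, labor])
print(np.linalg.matrix_rank(X))         # 2, not 3: columns are linearly dependent
# With rank-deficient X, X'X cannot be inverted; the separate effects of
# capital and labor are not identified -- only the combination 3 + 0.5*5 is.
```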



The General Model

\[ y_t = \beta_1 + \beta_2 x_{t2} + \beta_3 x_{t3} + \ldots + \beta_K x_{tK} + e_t \]

The parameter $\beta_1$ is the intercept (constant) term. The "variable" attached to $\beta_1$ is $x_{t1} = 1$.

Usually, the number of explanatory variables is said to be K − 1 (ignoring $x_{t1} = 1$), while the number of parameters is K. (Namely: $\beta_1 \ldots \beta_K$.)

7.4


Statistical Properties of e_t

1.  $E(e_t) = 0$
2.  $\mathrm{var}(e_t) = \sigma^2$
3.  $\mathrm{cov}(e_t, e_s) = 0$ for $t \ne s$
4.  $e_t \sim N(0, \sigma^2)$

7.5

Statistical Properties of y_t

1.  $E(y_t) = \beta_1 + \beta_2 x_{t2} + \ldots + \beta_K x_{tK}$
2.  $\mathrm{var}(y_t) = \mathrm{var}(e_t) = \sigma^2$
3.  $\mathrm{cov}(y_t, y_s) = \mathrm{cov}(e_t, e_s) = 0$ for $t \ne s$
4.  $y_t \sim N(\beta_1 + \beta_2 x_{t2} + \ldots + \beta_K x_{tK}, \; \sigma^2)$

7.6



Assumptions

1.  $y_t = \beta_1 + \beta_2 x_{t2} + \ldots + \beta_K x_{tK} + e_t$
2.  $E(y_t) = \beta_1 + \beta_2 x_{t2} + \ldots + \beta_K x_{tK}$
3.  $\mathrm{var}(y_t) = \mathrm{var}(e_t) = \sigma^2$
4.  $\mathrm{cov}(y_t, y_s) = \mathrm{cov}(e_t, e_s) = 0$ for $t \ne s$
5.  The values of $x_{tk}$ are not random.
6.  $y_t \sim N(\beta_1 + \beta_2 x_{t2} + \ldots + \beta_K x_{tK}, \; \sigma^2)$

7.7



Least Squares Estimation

\[ y_t = \beta_1 + \beta_2 x_{t2} + \beta_3 x_{t3} + e_t \]

\[ S(\beta_1, \beta_2, \beta_3) = \sum_t \left( y_t - \beta_1 - \beta_2 x_{t2} - \beta_3 x_{t3} \right)^2 \]

Define the variables in deviation-from-the-mean form:

\[ y_t^* = y_t - \bar{y}, \qquad x_{t2}^* = x_{t2} - \bar{x}_2, \qquad x_{t3}^* = x_{t3} - \bar{x}_3 \]

where $\bar{y} = \frac{1}{T}\sum_{t=1}^{T} y_t$ (and similarly for $\bar{x}_2$ and $\bar{x}_3$).

7.8


Least Squares Estimators

\[ b_1 = \bar{y} - b_2 \bar{x}_2 - b_3 \bar{x}_3 \]

\[ b_2 = \frac{\left(\sum y_t^* x_{t2}^*\right)\left(\sum x_{t3}^{*2}\right) - \left(\sum y_t^* x_{t3}^*\right)\left(\sum x_{t2}^* x_{t3}^*\right)}{\left(\sum x_{t2}^{*2}\right)\left(\sum x_{t3}^{*2}\right) - \left(\sum x_{t2}^* x_{t3}^*\right)^2} \]

\[ b_3 = \frac{\left(\sum y_t^* x_{t3}^*\right)\left(\sum x_{t2}^{*2}\right) - \left(\sum y_t^* x_{t2}^*\right)\left(\sum x_{t2}^* x_{t3}^*\right)}{\left(\sum x_{t2}^{*2}\right)\left(\sum x_{t3}^{*2}\right) - \left(\sum x_{t2}^* x_{t3}^*\right)^2} \]

7.9
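These deviation-form formulas are the two-regressor special case of the matrix solution $b = (X'X)^{-1}X'y$, and the two routes agree numerically. A sketch (simulated data; the names, seed, and coefficient values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
T = 100
x2 = rng.normal(10, 2, T)
x3 = 0.5 * x2 + rng.normal(0, 1, T)      # correlated with x2, but not perfectly
y = 1.0 + 2.0 * x2 - 1.5 * x3 + rng.normal(0, 1, T)

# Deviations from means
ys, x2s, x3s = y - y.mean(), x2 - x2.mean(), x3 - x3.mean()

den = (x2s @ x2s) * (x3s @ x3s) - (x2s @ x3s) ** 2
b2 = ((ys @ x2s) * (x3s @ x3s) - (ys @ x3s) * (x2s @ x3s)) / den
b3 = ((ys @ x3s) * (x2s @ x2s) - (ys @ x2s) * (x2s @ x3s)) / den
b1 = y.mean() - b2 * x2.mean() - b3 * x3.mean()

X = np.column_stack([np.ones(T), x2, x3])
print(np.linalg.lstsq(X, y, rcond=None)[0])  # matrix OLS
print(b1, b2, b3)                            # matches the slide's formulas
```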

Dangers of Extrapolation

Statistical models generally are good only "within the relevant range." This means that extending them to extreme data values outside the range of the original data often leads to poor and sometimes ridiculous results.

If height is normally distributed and the normal ranges from minus infinity to plus infinity, pity the man minus three feet tall.

7.10

Error Variance Estimation

Unbiased estimator of the error variance:

\[ \hat{\sigma}^2 = \frac{\sum \hat{e}_t^2}{T - K} \]

Transform to a chi-square distribution:

\[ \frac{(T - K)\,\hat{\sigma}^2}{\sigma^2} \sim \chi^2_{T-K} \]

7.11
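Unbiasedness means the estimator centers on $\sigma^2$ across repeated samples, which a simulation can illustrate directly (a sketch, not from the slides; the design matrix, coefficients, and σ = 2 are assumptions for the demo, so $\sigma^2 = 4$):

```python
import numpy as np

rng = np.random.default_rng(5)
T, K, sigma = 200, 3, 2.0
X = np.column_stack([np.ones(T), rng.normal(size=T), rng.normal(size=T)])

draws = []
for _ in range(2000):                     # many samples from the same design
    y = X @ np.array([1.0, 2.0, -1.5]) + rng.normal(0, sigma, T)
    b = np.linalg.lstsq(X, y, rcond=None)[0]
    e = y - X @ b                         # residuals e_hat
    draws.append(e @ e / (T - K))         # sigma2_hat for this sample
print(np.mean(draws))                     # close to 4.0, illustrating unbiasedness
```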


Gauss-Markov Theorem

Under the assumptions of the multiple regression model, the ordinary least squares estimators have the smallest variance of all linear and unbiased estimators. This means that the least squares estimators are the Best Linear Unbiased Estimators (BLUE).

7.12


Variances

\[ y_t = \beta_1 + \beta_2 x_{t2} + \beta_3 x_{t3} + e_t \]

\[ \mathrm{var}(b_2) = \frac{\sigma^2}{(1 - r_{23}^2) \sum (x_{t2} - \bar{x}_2)^2} \qquad \mathrm{var}(b_3) = \frac{\sigma^2}{(1 - r_{23}^2) \sum (x_{t3} - \bar{x}_3)^2} \]

where

\[ r_{23} = \frac{\sum (x_{t2} - \bar{x}_2)(x_{t3} - \bar{x}_3)}{\sqrt{\sum (x_{t2} - \bar{x}_2)^2 \, \sum (x_{t3} - \bar{x}_3)^2}} \]

When $r_{23} = 0$ these reduce to the simple regression formulas.

7.13
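The formula for var(b2) agrees with the (2,2) element of $\sigma^2 (X'X)^{-1}$, which is one way to check it numerically (a sketch under the stated assumptions; the simulated regressors and the known $\sigma^2 = 1$ are illustrative):

```python
import numpy as np

rng = np.random.default_rng(6)
T, sigma2 = 500, 1.0
x2 = rng.normal(0, 1, T)
x3 = 0.6 * x2 + rng.normal(0, 1, T)      # correlated regressors
X = np.column_stack([np.ones(T), x2, x3])

# Textbook formula
x2s, x3s = x2 - x2.mean(), x3 - x3.mean()
r23 = (x2s @ x3s) / np.sqrt((x2s @ x2s) * (x3s @ x3s))
var_b2 = sigma2 / ((1 - r23**2) * (x2s @ x2s))

# Matrix formula: the (2,2) element of sigma^2 * (X'X)^{-1}
print(var_b2, (sigma2 * np.linalg.inv(X.T @ X))[1, 1])  # equal
```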


Variance Decomposition

The variance of an estimator is smaller when:

1.  The error variance, $\sigma^2$, is smaller:  $\sigma^2 \rightarrow 0$.
2.  The sample size, T, is larger:  $\sum_{t=1}^{T} (x_{t2} - \bar{x}_2)^2$ grows.
3.  The variable's values are more spread out:  $\sum (x_{t2} - \bar{x}_2)^2$ grows.
4.  The correlation is close to zero:  $r_{23}^2 \rightarrow 0$.

7.14


Covariances

\[ y_t = \beta_1 + \beta_2 x_{t2} + \beta_3 x_{t3} + e_t \]

\[ \mathrm{cov}(b_2, b_3) = \frac{-r_{23}\,\sigma^2}{(1 - r_{23}^2) \sqrt{\sum (x_{t2} - \bar{x}_2)^2} \, \sqrt{\sum (x_{t3} - \bar{x}_3)^2}} \]

where

\[ r_{23} = \frac{\sum (x_{t2} - \bar{x}_2)(x_{t3} - \bar{x}_3)}{\sqrt{\sum (x_{t2} - \bar{x}_2)^2 \, \sum (x_{t3} - \bar{x}_3)^2}} \]

7.15
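As with the variances, this expression is the (2,3) element of $\sigma^2 (X'X)^{-1}$, so it can be checked the same way (sketch; same illustrative simulated setup as before):

```python
import numpy as np

rng = np.random.default_rng(7)
T, sigma2 = 500, 1.0
x2 = rng.normal(0, 1, T)
x3 = 0.6 * x2 + rng.normal(0, 1, T)
X = np.column_stack([np.ones(T), x2, x3])

x2s, x3s = x2 - x2.mean(), x3 - x3.mean()
r23 = (x2s @ x3s) / np.sqrt((x2s @ x2s) * (x3s @ x3s))
cov_b2_b3 = -r23 * sigma2 / ((1 - r23**2)
                             * np.sqrt(x2s @ x2s) * np.sqrt(x3s @ x3s))

# Matrix formula: the (2,3) element of sigma^2 * (X'X)^{-1}
print(cov_b2_b3, (sigma2 * np.linalg.inv(X.T @ X))[1, 2])  # equal
```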



Covariance Decomposition

The covariance between any two estimators is larger in absolute value when:

1.  The error variance, $\sigma^2$, is larger.
2.  The sample size, T, is smaller.
3.  The values of the variables are less spread out.
4.  The correlation, $r_{23}$, is high.

7.16

