PowerPoint Slides for Undergraduate Econometrics
by Lawrence C. Marsh
Copyright 1996 Lawrence C. Marsh




For g(X) = g_1(X) + g_2(X):

E[g(X)] = Σ_{i=1}^{n} g_1(x_i) f(x_i) + Σ_{i=1}^{n} g_2(x_i) f(x_i)

E[g(X)] = E[g_1(X)] + E[g_2(X)]

2.23


Adding and Subtracting Random Variables

E(X+Y) = E(X) + E(Y)

E(X-Y) = E(X) - E(Y)

2.24


Adding a constant to a variable adds that constant to its expected value:

E(X+a) = E(X) + a

Multiplying by a constant multiplies its expected value by that constant:

E(bX) = b E(X)

2.25
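
A quick numeric check of both rules, as a minimal Python sketch (the pmf here is made up for illustration):

    # Check E(X + a) = E(X) + a and E(bX) = b E(X) on a small discrete pmf.
    # This pmf is hypothetical; any probabilities summing to 1 would do.
    pmf = {1: 0.2, 2: 0.5, 3: 0.3}   # x -> f(x)

    EX = sum(x * p for x, p in pmf.items())               # 2.1
    a, b = 10.0, 4.0
    E_X_plus_a = sum((x + a) * p for x, p in pmf.items())
    E_bX = sum(b * x * p for x, p in pmf.items())

    assert abs(E_X_plus_a - (EX + a)) < 1e-9
    assert abs(E_bX - b * EX) < 1e-9
    print(EX, E_X_plus_a, E_bX)   # ~2.1  ~12.1  ~8.4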


Variance

var(X) = average squared deviations around the mean of X.

var(X) = expected value of the squared deviations around the expected value of X.

var(X) = E[(X - EX)^2]

2.26



var(X) = E[(X - EX)^2]
       = E[X^2 - 2X EX + (EX)^2]
       = E(X^2) - 2 EX EX + (EX)^2
       = E(X^2) - 2 (EX)^2 + (EX)^2
       = E(X^2) - (EX)^2

var(X) = E[(X - EX)^2]

var(X) = E(X^2) - (EX)^2

2.27



Variance of a discrete random variable, X:

var(X) = Σ_{i=1}^{n} (x_i - EX)^2 f(x_i)

The standard deviation is the square root of the variance.

2.28



Calculate the variance for a discrete random variable, X:

x_i    f(x_i)    (x_i - EX)        (x_i - EX)^2 f(x_i)
 2      .1       2 - 4.3 = -2.3    5.29 (.1) = .529
 3      .3       3 - 4.3 = -1.3    1.69 (.3) = .507
 4      .1       4 - 4.3 =  -.3     .09 (.1) = .009
 5      .2       5 - 4.3 =   .7     .49 (.2) = .098
 6      .3       6 - 4.3 =  1.7    2.89 (.3) = .867

Σ_{i=1}^{n} x_i f(x_i) = .2 + .9 + .4 + 1.0 + 1.8 = 4.3

Σ_{i=1}^{n} (x_i - EX)^2 f(x_i) = .529 + .507 + .009 + .098 + .867 = 2.01

2.29
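
The same table in a few lines of Python (a sketch using this slide's pmf); it also confirms the shortcut formula from slide 2.27:

    # Reproduce EX = 4.3 and var(X) = 2.01 from the slide's pmf.
    pmf = {2: 0.1, 3: 0.3, 4: 0.1, 5: 0.2, 6: 0.3}   # x -> f(x)

    EX = sum(x * p for x, p in pmf.items())                  # 4.3
    varX = sum((x - EX) ** 2 * p for x, p in pmf.items())    # 2.01

    # Shortcut formula from slide 2.27: var(X) = E(X^2) - (EX)^2.
    EX2 = sum(x ** 2 * p for x, p in pmf.items())            # 20.5
    assert abs(varX - (EX2 - EX ** 2)) < 1e-9
    print(EX, varX)   # ~4.3  ~2.01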


Z = a + cX

var(Z) = var(a + cX)
       = E[ ((a + cX) - E(a + cX))^2 ]
       = c^2 var(X)

var(a + cX) = c^2 var(X)

2.30
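
A sketch checking this rule on the slide-2.29 pmf, with hypothetical constants a = 7 and c = 3:

    # var(a + cX) = c^2 var(X): a shift changes nothing, a scale enters squared.
    pmf = {2: 0.1, 3: 0.3, 4: 0.1, 5: 0.2, 6: 0.3}

    def variance(pmf):
        mean = sum(x * p for x, p in pmf.items())
        return sum((x - mean) ** 2 * p for x, p in pmf.items())

    a, c = 7.0, 3.0
    pmf_Z = {a + c * x: p for x, p in pmf.items()}   # pmf of Z = a + cX
    assert abs(variance(pmf_Z) - c ** 2 * variance(pmf)) < 1e-9
    print(variance(pmf), variance(pmf_Z))   # ~2.01  ~18.09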



Joint pdf

A joint probability density function, f(x,y), provides the probabilities associated with the joint occurrence of all of the possible pairs of X and Y.

2.31

Survey of College City, NY

joint pdf f(x,y)           college grads in household
vacation homes owned       Y = 1            Y = 2
X = 0                      f(0,1) = .45     f(0,2) = .15
X = 1                      f(1,1) = .05     f(1,2) = .35

2.32


Calculating the expected value of functions of two random variables:

E[g(X,Y)] = Σ_i Σ_j g(x_i, y_j) f(x_i, y_j)

E(XY) = Σ_i Σ_j x_i y_j f(x_i, y_j)

E(XY) = (0)(1)(.45) + (0)(2)(.15) + (1)(1)(.05) + (1)(2)(.35) = .75

2.33
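
The double sum is a one-liner over the four cells of the College City table (a minimal sketch):

    # E(XY) = sum over i and j of x_i * y_j * f(x_i, y_j).
    joint = {(0, 1): 0.45, (0, 2): 0.15,   # (x, y) -> f(x, y)
             (1, 1): 0.05, (1, 2): 0.35}

    E_XY = sum(x * y * p for (x, y), p in joint.items())
    print(E_XY)   # ~0.75 (up to float rounding)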


Marginal pdf

The marginal probability density functions, f(x) and f(y), for discrete random variables can be obtained by summing f(x,y) over the values of Y to obtain f(x), and over the values of X to obtain f(y).

f(x_i) = Σ_j f(x_i, y_j)

f(y_j) = Σ_i f(x_i, y_j)

2.34


joint pdf f(x,y)       Y = 1    Y = 2    marginal pdf for X:
X = 0                   .45      .15     f(X = 0) = .60
X = 1                   .05      .35     f(X = 1) = .40
marginal pdf for Y:    f(Y = 1) = .50    f(Y = 2) = .50

2.35
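
A sketch of the same marginalization, summing the joint table over the other variable:

    # f(x) sums f(x,y) over y; f(y) sums it over x.
    joint = {(0, 1): 0.45, (0, 2): 0.15, (1, 1): 0.05, (1, 2): 0.35}

    f_x, f_y = {}, {}
    for (x, y), p in joint.items():
        f_x[x] = f_x.get(x, 0.0) + p
        f_y[y] = f_y.get(y, 0.0) + p

    print(f_x)   # ~{0: 0.60, 1: 0.40}
    print(f_y)   # ~{1: 0.50, 2: 0.50}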

Conditional pdf

The conditional probability density functions of X given Y = y, f(x|y), and of Y given X = x, f(y|x), are obtained by dividing f(x,y) by f(y) to get f(x|y), and by f(x) to get f(y|x).

f(x|y) = f(x,y) / f(y)

f(y|x) = f(x,y) / f(x)

2.36


conditional pdf (from the joint and marginal pdfs above):

f(Y=1|X=0) = .45/.60 = .75      f(Y=2|X=0) = .15/.60 = .25
f(Y=1|X=1) = .05/.40 = .125     f(Y=2|X=1) = .35/.40 = .875
f(X=0|Y=1) = .45/.50 = .90      f(X=1|Y=1) = .05/.50 = .10
f(X=0|Y=2) = .15/.50 = .30      f(X=1|Y=2) = .35/.50 = .70

2.37
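
The same divisions in Python (a sketch reusing the joint table and marginals above):

    # f(y|x) = f(x,y) / f(x)  and  f(x|y) = f(x,y) / f(y).
    joint = {(0, 1): 0.45, (0, 2): 0.15, (1, 1): 0.05, (1, 2): 0.35}
    f_x = {0: 0.60, 1: 0.40}
    f_y = {1: 0.50, 2: 0.50}

    f_y_given_x = {(y, x): p / f_x[x] for (x, y), p in joint.items()}
    f_x_given_y = {(x, y): p / f_y[y] for (x, y), p in joint.items()}

    print(f_y_given_x[(2, 1)])   # f(Y=2|X=1) = .35/.40 = ~0.875
    print(f_x_given_y[(0, 1)])   # f(X=0|Y=1) = .45/.50 = ~0.90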

Independence

X and Y are independent random variables if their joint pdf, f(x,y), is the product of their respective marginal pdfs, f(x) and f(y):

f(x_i, y_j) = f(x_i) f(y_j)

For independence this must hold for all pairs of i and j.

2.38



not independent

joint pdf f(x,y)       Y = 1               Y = 2               marginal pdf for X:
X = 0                   .45 [.50x.60=.30]   .15 [.50x.60=.30]   f(X = 0) = .60
X = 1                   .05 [.50x.40=.20]   .35 [.50x.40=.20]   f(X = 1) = .40
marginal pdf for Y:    f(Y = 1) = .50      f(Y = 2) = .50

The calculations in brackets show the numbers that would be required to have independence; none of the actual cells match them, so X and Y are not independent.

2.39
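
A sketch of the cell-by-cell comparison that the bracketed products illustrate:

    # Independence requires f(x,y) == f(x) f(y) in every cell.
    joint = {(0, 1): 0.45, (0, 2): 0.15, (1, 1): 0.05, (1, 2): 0.35}
    f_x = {0: 0.60, 1: 0.40}
    f_y = {1: 0.50, 2: 0.50}

    for (x, y), p in joint.items():
        print((x, y), p, f_x[x] * f_y[y])
    # Every actual cell differs from the product (e.g. .45 vs .30),
    # so X and Y are not independent.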



Covariance

The covariance between two random variables, X and Y, measures the linear association between them.

cov(X,Y) = E[(X - EX)(Y - EY)]

Note that variance is a special case of covariance:

cov(X,X) = var(X) = E[(X - EX)^2]

2.40

cov(X,Y) = E[(X - EX)(Y - EY)]
         = E[XY - X EY - Y EX + EX EY]
         = E(XY) - EX EY - EY EX + EX EY
         = E(XY) - 2 EX EY + EX EY
         = E(XY) - EX EY

cov(X,Y) = E[(X - EX)(Y - EY)]

cov(X,Y) = E(XY) - EX EY

2.41


covariance

joint pdf f(x,y)       Y = 1    Y = 2    marginal pdf for X:
X = 0                   .45      .15     f(X = 0) = .60
X = 1                   .05      .35     f(X = 1) = .40
marginal pdf for Y:    f(Y = 1) = .50    f(Y = 2) = .50

EX = 0(.60) + 1(.40) = .40
EY = 1(.50) + 2(.50) = 1.50

E(XY) = (0)(1)(.45) + (0)(2)(.15) + (1)(1)(.05) + (1)(2)(.35) = .75

EX EY = (.40)(1.50) = .60

cov(X,Y) = E(XY) - EX EY
         = .75 - (.40)(1.50)
         = .75 - .60
         = .15

2.42

Correlation

The correlation between two random variables X and Y is their covariance divided by the square roots of their respective variances:

ρ(X,Y) = cov(X,Y) / √( var(X) var(Y) )

Correlation is a pure number falling between -1 and 1.

2.43

correlation

joint pdf f(x,y)       Y = 1    Y = 2    marginal pdf for X:
X = 0                   .45      .15     f(X = 0) = .60
X = 1                   .05      .35     f(X = 1) = .40
marginal pdf for Y:    f(Y = 1) = .50    f(Y = 2) = .50

EX = 0(.60) + 1(.40) = .40
EY = 1(.50) + 2(.50) = 1.50
cov(X,Y) = .15

E(X^2) = 0^2(.60) + 1^2(.40) = .40
var(X) = E(X^2) - (EX)^2
       = .40 - (.40)^2
       = .24

E(Y^2) = 1^2(.50) + 2^2(.50) = .50 + 2.0 = 2.50
var(Y) = E(Y^2) - (EY)^2
       = 2.50 - (1.50)^2
       = .25

ρ(X,Y) = cov(X,Y) / √( var(X) var(Y) )

ρ(X,Y) = .15 / √( (.24)(.25) ) = .61

2.44
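
Both numbers fall straight out of the joint table; a minimal Python sketch:

    # cov(X,Y) = E(XY) - EX EY;  rho = cov / sqrt(var(X) var(Y)).
    from math import sqrt

    joint = {(0, 1): 0.45, (0, 2): 0.15, (1, 1): 0.05, (1, 2): 0.35}

    EX  = sum(x * p for (x, y), p in joint.items())       # .40
    EY  = sum(y * p for (x, y), p in joint.items())       # 1.50
    EXY = sum(x * y * p for (x, y), p in joint.items())   # .75
    EX2 = sum(x * x * p for (x, y), p in joint.items())   # .40
    EY2 = sum(y * y * p for (x, y), p in joint.items())   # 2.50

    cov = EXY - EX * EY                                   # .15
    rho = cov / sqrt((EX2 - EX**2) * (EY2 - EY**2))
    print(cov, rho)   # ~0.15  ~0.612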



Zero Covariance & Correlation

Independent random variables have zero covariance and, therefore, zero correlation. The converse is not true.

2.45

The expected value of a weighted sum of random variables is the sum of the expectations of the individual terms. Since expectation is a linear operator, it can be applied term by term.

E[c_1 X + c_2 Y] = c_1 EX + c_2 EY

In general, for random variables X_1, . . . , X_n:

E[c_1 X_1 + . . . + c_n X_n] = c_1 EX_1 + . . . + c_n EX_n

2.46



The variance of a weighted sum of random variables is the sum of the variances, each times the square of the weight, plus twice the covariances of all the random variables times the products of their weights.

Weighted sum of random variables:

var(c_1 X + c_2 Y) = c_1^2 var(X) + c_2^2 var(Y) + 2 c_1 c_2 cov(X,Y)

Weighted difference of random variables:

var(c_1 X - c_2 Y) = c_1^2 var(X) + c_2^2 var(Y) - 2 c_1 c_2 cov(X,Y)

2.47
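
A sketch plugging in the survey example's moments (var(X) = .24, var(Y) = .25, cov = .15) with hypothetical weights c1 = 2 and c2 = 3:

    # var(c1 X +/- c2 Y) = c1^2 var(X) + c2^2 var(Y) +/- 2 c1 c2 cov(X,Y)
    c1, c2 = 2.0, 3.0
    varX, varY, covXY = 0.24, 0.25, 0.15

    var_sum  = c1**2 * varX + c2**2 * varY + 2 * c1 * c2 * covXY
    var_diff = c1**2 * varX + c2**2 * varY - 2 * c1 * c2 * covXY
    print(var_sum, var_diff)   # ~5.01  ~1.41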

The Normal Distribution

Y ~ N(β, σ^2)

f(y) = ( 1 / √(2πσ^2) ) exp( -(y - β)^2 / (2σ^2) )

[Figure: the bell-shaped pdf f(y), centered at β.]

2.48



The Standardized Normal

Z = (y - β) / σ

Z ~ N(0, 1)

f(z) = ( 1 / √(2π) ) exp( -z^2 / 2 )

2.49
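
A sketch of the transformation and the standard normal density (β = 10 and σ = 2 are hypothetical):

    # Z = (y - beta) / sigma maps N(beta, sigma^2) to N(0, 1).
    from math import exp, pi, sqrt

    def std_normal_pdf(z):
        return exp(-z**2 / 2) / sqrt(2 * pi)

    beta, sigma = 10.0, 2.0
    y = 13.0
    z = (y - beta) / sigma
    print(z, std_normal_pdf(z))   # 1.5  ~0.1295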

Y ~ N(β, σ^2)

P[ Y > a ] = P[ (Y - β)/σ > (a - β)/σ ] = P[ Z > (a - β)/σ ]

[Figure: the pdf f(y) with the area to the right of a shaded.]

2.50




Y ~ N(β, σ^2)

P[ a < Y < b ] = P[ (a - β)/σ < (Y - β)/σ < (b - β)/σ ]
              = P[ (a - β)/σ < Z < (b - β)/σ ]

[Figure: the pdf f(y) with the area between a and b shaded.]

2.51
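
In practice these probabilities come from the standard normal cdf Φ. A standard-library sketch (β, σ, a, b are hypothetical):

    # P[a < Y < b] = Phi((b - beta)/sigma) - Phi((a - beta)/sigma),
    # where Phi(z) = 0.5 * (1 + erf(z / sqrt(2))) is the N(0,1) cdf.
    from math import erf, sqrt

    def Phi(z):
        return 0.5 * (1.0 + erf(z / sqrt(2.0)))

    beta, sigma = 10.0, 2.0
    a, b = 9.0, 13.0
    p = Phi((b - beta) / sigma) - Phi((a - beta) / sigma)
    print(p)   # P[-0.5 < Z < 1.5] ~ 0.9332 - 0.3085 = 0.6247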

Linear combinations of jointly normally distributed random variables are themselves normally distributed.

Y_1 ~ N(β_1, σ_1^2), Y_2 ~ N(β_2, σ_2^2), . . . , Y_n ~ N(β_n, σ_n^2)

W = c_1 Y_1 + c_2 Y_2 + . . . + c_n Y_n

W ~ N[ E(W), var(W) ]

2.52


Chi-Square

If Z_1, Z_2, . . . , Z_m denote m independent N(0,1) random variables, and V = Z_1^2 + Z_2^2 + . . . + Z_m^2, then V ~ χ^2(m).

V is chi-square with m degrees of freedom.

mean:     E[V] = E[χ^2(m)] = m

variance: var[V] = var[χ^2(m)] = 2m

2.53
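
A quick simulation check of the mean and variance (a sketch; requires numpy):

    # Build V = Z_1^2 + ... + Z_m^2 from standard normals and check
    # that its mean is near m and its variance near 2m.
    import numpy as np

    rng = np.random.default_rng(0)
    m, reps = 5, 200_000
    Z = rng.standard_normal((reps, m))
    V = (Z ** 2).sum(axis=1)
    print(V.mean(), V.var())   # ~5  ~10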


Student-t

If Z ~ N(0,1) and V ~ χ^2(m), and if Z and V are independent, then

t = Z / √(V/m) ~ t(m)

t is student-t with m degrees of freedom.

mean:     E[t] = E[t(m)] = 0    (symmetric about zero)

variance: var[t] = var[t(m)] = m / (m - 2)    (for m > 2)

2.54
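
The same construction by simulation (a sketch; requires numpy):

    # t = Z / sqrt(V/m) with Z ~ N(0,1) independent of V ~ chi-square(m).
    import numpy as np

    rng = np.random.default_rng(0)
    m, reps = 10, 200_000
    Z = rng.standard_normal(reps)
    V = (rng.standard_normal((reps, m)) ** 2).sum(axis=1)
    t = Z / np.sqrt(V / m)
    print(t.mean(), t.var())   # ~0  ~m/(m-2) = 1.25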


F Statistic

If V_1 ~ χ^2(m_1) and V_2 ~ χ^2(m_2), and if V_1 and V_2 are independent, then

F = (V_1 / m_1) / (V_2 / m_2) ~ F(m_1, m_2)

F is an F statistic with m_1 numerator degrees of freedom and m_2 denominator degrees of freedom.

2.55
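
And the F ratio built the same way (a sketch; requires numpy):

    # F = (V1/m1) / (V2/m2) from two independent chi-squares.
    import numpy as np

    rng = np.random.default_rng(0)
    m1, m2, reps = 4, 20, 200_000
    V1 = (rng.standard_normal((reps, m1)) ** 2).sum(axis=1)
    V2 = (rng.standard_normal((reps, m2)) ** 2).sum(axis=1)
    F = (V1 / m1) / (V2 / m2)
    print(F.mean())   # ~m2/(m2 - 2) = 1.11 for F(4, 20)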

The Simple Linear Regression Model

Chapter 3

Copyright © 1997 John Wiley & Sons, Inc. All rights reserved. Reproduction or translation of this work beyond that permitted in Section 117 of the 1976 United States Copyright Act without the express written permission of the copyright owner is unlawful. Requests for further information should be addressed to the Permissions Department, John Wiley & Sons, Inc. The purchaser may make back-up copies for his/her own use only and not for distribution or resale. The Publisher assumes no responsibility for errors, omissions, or damages caused by the use of these programs or from the use of the information contained herein.

3.1

Purpose of Regression Analysis

1. Estimate a relationship among economic variables, such as y = f(x).

2. Forecast or predict the value of one variable, y, based on the value of another variable, x.

3.2


Weekly Food Expenditures

y = dollars spent each week on food items.
x = consumer's weekly income.

The relationship between x and the expected value of y, given x, might be linear:

E(y|x) = β_1 + β_2 x

3.3


Figure 3.1a  Probability distribution f(y|x=480) of food expenditures, given income x = $480.

3.4

Figure 3.1b  Probability distribution of food expenditures, given income x = $480 and x = $800.

3.5

E(y|x) = β_1 + β_2 x

β_2 = ΔE(y|x) / Δx

[Figure: the line E(y|x) = β_1 + β_2 x plotted against income x, with intercept β_1 and slope β_2; the vertical axis is average expenditure.]

Figure 3.2  The Economic Model: a linear relationship between average expenditure on food and income.

3.6
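
A sketch evaluating the conditional-mean line at the two incomes from Figure 3.1b; the parameter values β1 = 40 and β2 = 0.13 are hypothetical, not estimates from the text:

    # E(y|x) = beta1 + beta2 * x: expected weekly food spending at income x.
    beta1, beta2 = 40.0, 0.13   # hypothetical intercept and slope

    for x in (480.0, 800.0):
        print(x, beta1 + beta2 * x)   # 102.4 at x=480, 144.0 at x=800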


Figure 3.3  The probability density function for y_t (expenditure) at two levels of household income, x_1 = 480 and x_2 = 800. Homoskedastic case: the spread of f(y_t) is the same at each income.

3.7


Figure 3.3+  The variance of y_t (expenditure) increases as household income, x_t, increases. Heteroskedastic case, shown at incomes x_1, x_2, x_3.

3.8

Assumptions of the Simple Linear Regression Model
