
The expression $I(\theta)$ plays a central role in the theory of ML estimation. It has been proved that, under fairly general conditions, the ML estimator is consistent and asymptotically normally distributed with variance $[I(\theta)]^{-1}$. Thus, in the above problem, at the final step of the iteration we get an estimate of the asymptotic variance of the ML estimator as $[I(\hat{\theta})]^{-1}$.

The quantity $[I(\theta)]^{-1}$ is also known as the information limit to the variance, or alternatively as the Cramér-Rao lower bound for the variance of the estimator $\hat{\theta}$.
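As a minimal numerical sketch of this result (not part of the original text), consider ML estimation of the rate $\theta$ of an exponential density $f(y; \theta) = \theta e^{-\theta y}$, for which the Fisher information per observation is $I(\theta) = 1/\theta^2$, so the Cramér-Rao bound for a sample of size $n$ is $\theta^2/n$. The distribution, parameter values, and sample sizes below are assumptions chosen for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# For the exponential model, the ML estimator of the rate is 1 / ybar.
# Its Monte Carlo variance should be close to the Cramer-Rao bound
# theta**2 / n when n is large.
theta, n, reps = 2.0, 500, 20_000
estimates = np.empty(reps)
for r in range(reps):
    y = rng.exponential(scale=1.0 / theta, size=n)  # mean of y is 1/theta
    estimates[r] = 1.0 / y.mean()                   # ML estimator of theta

print("Monte Carlo variance of the MLE:", estimates.var())
print("Cramer-Rao bound theta^2 / n  :", theta**2 / n)
```

The two printed numbers should agree closely, illustrating that the ML estimator attains the information limit asymptotically.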

(9) Stochastic Regressors. Consider the regression model

$$y_i = \beta x_i + u_i, \qquad i = 1, 2, \ldots, n$$

The $u_i$ are independent with mean zero and common variance $\sigma^2$. The $x_i$ are random variables, but $x_i$ and $u_j$ are independent for all $i$ and $j$. We will show that the least squares estimator $\hat{\beta}$ is an unbiased estimator for $\beta$. We have

$$\hat{\beta} = \sum c_i y_i$$

where

$$c_i = \frac{x_i}{\sum x_i^2}$$

Substituting the value of $y_i$ and simplifying, we get

$$\hat{\beta} = \beta + \sum c_i u_i$$

Since $x_i$ and $u_j$ are independent for all $i$ and $j$, we know that $c_i$ and $u_j$ are independent. Thus $E(c_i u_j) = E(c_i)\,E(u_j) = 0$. Since $E(u_j) = 0$, we need not worry about evaluating $E(c_i)$. Hence we get the result that

$$E(\hat{\beta}) = \beta + E\Bigl(\sum c_i u_i\Bigr) = \beta$$

Thus the least squares estimator is unbiased even if the $x_i$ are random variables, provided that they are independent of the error terms.

However, when we consider the variance of $\hat{\beta}$, we have to evaluate $E(c_i^2)\,E(u_i^2)$. The latter term is $\sigma^2$, but the former term is rather cumbersome to compute in general; hence the conclusions stated in Section 3.11.
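A small Monte Carlo check of the unbiasedness argument above (an illustrative sketch, not from the text; the normal distributions, the true $\beta = 1.5$, and the sample sizes are assumptions for the example):

```python
import numpy as np

rng = np.random.default_rng(1)

# x_i are random but drawn independently of u_i, so the average of
# beta_hat over many repeated samples should be close to the true beta.
beta, n, reps = 1.5, 50, 10_000
betahats = np.empty(reps)
for r in range(reps):
    x = rng.normal(size=n)            # stochastic regressor
    u = rng.normal(size=n)            # error term, independent of x
    y = beta * x + u
    betahats[r] = (x @ y) / (x @ x)   # beta_hat = sum(c_i y_i), c_i = x_i / sum(x_i^2)

print("average of beta_hat over samples:", betahats.mean())  # close to 1.5
```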



Multiple Regression

4.1 Introduction

4.2 A Model with Two Explanatory Variables

4.3 Statistical Inference in the Multiple Regression Model

4.4 Interpretation of the Regression Coefficients

4.5 Partial Correlations and Multiple Correlation

4.6 Relationships Among Simple, Partial, and Multiple Correlation Coefficients

4.7 Prediction in the Multiple Regression Model

4.8 Analysis of Variance and Tests of Hypotheses

4.9 Omission of Relevant Variables and Inclusion of Irrelevant Variables

4.10 Degrees of Freedom and $\bar{R}^2$

4.11 Tests for Stability

*4.12 The LR, W, and LM Tests

Summary

Exercises

Appendix to Chapter 4

Data Sets



4.1 Introduction

In simple regression we study the relationship between an explained variable $y$ and an explanatory variable $x$. In multiple regression we study the relationship between $y$ and a number of explanatory variables $x_1, x_2, \ldots, x_k$. For instance, in demand studies we study the relationship between the quantity demanded of a good and the price of the good, the prices of substitute goods, and the consumer's income. The model we assume is

$$y_i = \alpha + \beta_1 x_{1i} + \beta_2 x_{2i} + \cdots + \beta_k x_{ki} + u_i, \qquad i = 1, 2, \ldots, n$$

The errors $u_i$ are again due to measurement errors in $y$ and errors in the specification of the relationship between $y$ and the $x$'s. We make the same assumptions about $u_i$ that we made in Chapter 3. These are:

1. $E(u_i) = 0$.

2. $V(u_i) = \sigma^2$ for all $i$.

3. $u_i$ and $u_j$ are independent for all $i \neq j$.

4. $u_i$ and $x_j$ are independent for all $i$ and $j$.

5. $u_i$ are normally distributed for all $i$.

Under the first four assumptions, we can show that the method of least squares gives estimators $\hat{\alpha}, \hat{\beta}_1, \hat{\beta}_2, \ldots, \hat{\beta}_k$ that are unbiased and have minimum variance among the class of linear unbiased estimators. (The proofs are similar to those in Chapter 3 and are given in the appendix.)

Assumption 5 is needed for tests of significance and confidence intervals. It is not needed to prove the optimal properties of least squares estimators.
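As an illustrative sketch (not part of the original text), the following simulates data from a two-regressor version of this model and recovers the coefficients by ordinary least squares. The true values $(\alpha, \beta_1, \beta_2) = (1, 2, -3)$ and the use of numpy's `lstsq` are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate y = alpha + beta1*x1 + beta2*x2 + u and fit by least squares.
n = 200
x1, x2 = rng.normal(size=n), rng.normal(size=n)
u = rng.normal(size=n)
y = 1.0 + 2.0 * x1 - 3.0 * x2 + u           # illustrative true coefficients

X = np.column_stack([np.ones(n), x1, x2])   # design matrix with intercept
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print("alpha_hat, beta1_hat, beta2_hat:", coef)  # roughly [1, 2, -3]
```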

In addition to these assumptions, which are similar to those we make in the case of simple regression, we will also assume that $x_1, x_2, \ldots, x_k$ are not collinear, that is, that there is no deterministic linear relationship among them. For instance, suppose that we have the regression equation

$$y = \alpha + \beta_1 x_1 + \beta_2 x_2 + u$$

but $x_1$ and $x_2$ are connected by the deterministic linear relationship

$$2x_1 + x_2 = 4$$

Then we can express $x_2$ in terms of $x_1$ to get $x_2 = 4 - 2x_1$, and the regression equation becomes

$$y = \alpha + \beta_1 x_1 + \beta_2(4 - 2x_1) + u = (\alpha + 4\beta_2) + (\beta_1 - 2\beta_2)x_1 + u$$

Thus we can estimate $(\alpha + 4\beta_2)$ and $(\beta_1 - 2\beta_2)$ but not $\alpha$, $\beta_1$, $\beta_2$ separately.
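The identification failure in this example can be seen numerically (an illustrative sketch, not from the text; the simulated $x_1$ values are an assumption):

```python
import numpy as np

rng = np.random.default_rng(3)

# The collinear case from the text: x2 = 4 - 2*x1 exactly, so the design
# matrix has rank 2 instead of 3 and (alpha, beta1, beta2) are not
# separately identified; only (alpha + 4*beta2) and (beta1 - 2*beta2) are.
n = 100
x1 = rng.normal(size=n)
x2 = 4.0 - 2.0 * x1                          # deterministic linear relation
X = np.column_stack([np.ones(n), x1, x2])

print("rank of X:", np.linalg.matrix_rank(X))  # prints 2, not 3
# A least squares routine may still return an answer (e.g. the
# minimum-norm solution), but infinitely many coefficient vectors
# fit the data equally well.
```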

A case where there is an exact linear relationship between the explanatory variables is known as exact or perfect collinearity. In the case of the two variables we considered, the exact relationship implies that the correlation coeffi-


