
$F = 0.9594 / 0.4534 = 2.12$

For the F-distribution with d.f. 12 and 12 the 5% point is 2.69. Thus we do not reject the hypothesis of equality at the 5% significance level. For Equation 2 the corresponding test statistic is

$F = \hat{\sigma}_1^2 / \hat{\sigma}_2^2 = 2.15$ (the denominator error variance is 0.4866)

Again if we use a 5% significance level, we do not reject the hypothesis of equality of the error variances.

Thus, in both cases we might be tempted to conclude that we can apply the tests for stability. There is, however, one problem with such a conclusion: the F-test for equality of variances is a pretest, that is, a test preliminary to the test for stability. There is the question of what significance level we should use for such pretests. The general conclusion is that for pretests one should use a higher significance level than 5%; in fact, 25 to 50% is a good rule. If this is done, we would reject the hypothesis of equality of variances in the case of both equations 1 and 2.
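Here is a minimal sketch of how this pretest might be carried out (assuming Python with scipy is available); the variance estimates and degrees of freedom are the equation 1 figures quoted above, and the variable names are just illustrative.

```python
# Sketch: the variance-equality pretest for the Chow test, using the
# equation 1 figures quoted above (error variances 0.9594 and 0.4534,
# 12 and 12 degrees of freedom).
from scipy import stats

sigma2_large, sigma2_small = 0.9594, 0.4534   # estimated error variances
df1, df2 = 12, 12                             # residual d.f. in the two samples

F = sigma2_large / sigma2_small               # about 2.12
crit_5pct = stats.f.ppf(0.95, df1, df2)       # 5% critical value, about 2.69
p_value = stats.f.sf(F, df1, df2)             # one-sided p-value

print(f"F = {F:.2f}, 5% point = {crit_5pct:.2f}, p = {p_value:.3f}")
# At the 5% level equality is not rejected; at the 25-50% pretest levels
# suggested above it would be rejected.
```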

*4.12 The LR, W, and LM Tests

In the Appendix to Chapter 3 we stated large-sample test statistics to test the hypothesis β = 0. These were

$LR = n \log[1/(1 - r^2)]$

$W = n r^2/(1 - r^2)$

$LM = n r^2$

⁴This was pointed out in T. Toyoda, "Use of the Chow Test Under Heteroscedasticity," Econometrica, 1974, pp. 601-608. The approximations used by Toyoda were found to be inaccurate, but the inaccuracy of the Chow test holds good. See P. Schmidt and R. Sickles, "Some Further Evidence on the Use of the Chow Test Under Heteroscedasticity," Econometrica, Vol. 45, No. 5, July 1977, pp. 1293-1298.

n₂ are greater than (k + 1), the two predictive tests that we have illustrated are tests for stability.

3. Another problem with the application of the tests for stability, which applies to both the analysis-of-variance and the predictive tests, is that the tests are inaccurate if the error variances in the two samples are unequal.⁴ The true size of the test (under the null hypothesis) may not equal the prescribed α level. For this reason it would be desirable to test the equality of the variances.

Consider, for instance, the error variances for equation 1 in Table 4.4. The F-statistic to test equality of error variances is the ratio of the two estimated error variances; for equation 1 this ratio was computed above as 2.12.

Both W and LM have a χ²-distribution with d.f. r. The inequality W ≥ LR ≥ LM again holds, and the proof is the same as that given in the Appendix to Chapter 3.
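As a rough sketch (assuming Python with scipy; the helper name lr_w_lm is ours, not from the text), the three statistics can be computed directly from RRSS and URSS and compared with the χ² critical value. The figures used are those of the illustrative example that follows.

```python
# Sketch: W, LR, and LM computed from the restricted and unrestricted
# residual sums of squares; each is asymptotically chi-squared with r d.f.
import math
from scipy import stats

def lr_w_lm(rrss, urss, n):
    """Return (W, LR, LM) for n observations."""
    w = n * (rrss - urss) / urss
    lr = n * math.log(rrss / urss)
    lm = n * (rrss - urss) / rrss
    return w, lr, lm

# Example 1 of Section 4.11: URSS = 0.1695, RRSS = 0.2866, n = 30, r = 3.
w, lr, lm = lr_w_lm(rrss=0.2866, urss=0.1695, n=30)
crit = stats.chi2.ppf(0.99, df=3)            # 1% point for 3 d.f., about 11.3
print(f"W = {w:.2f}, LR = {lr:.2f}, LM = {lm:.2f}, chi2 1% point = {crit:.1f}")
# The ordering W >= LR >= LM holds here, as it must.
```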

Illustrative Example

Consider example 1 in Section 4.11 (stability of the demand for food function). For equation 1 we have

URSS = 0.1695, RRSS = 0.2866, n = 30

and the number of restrictions r = 3. We get

W = 20.73

LR = 15.76

LM = 12.26

Looking at the χ² tables for 3 d.f., the 0.01 significance point is 11.3. Thus all the test statistics are significant at that level, rejecting the hypothesis of coefficient stability. As we saw earlier, the F-test also rejected the hypothesis at the 1% significance level. Turning to equation 2, we have

URSS = 0.1686, RRSS = 0.2397, n = 30, r = 4

Each has a χ²-distribution with 1 d.f. In the multiple regression model, to test the hypothesis βᵢ = 0 we use these test statistics with the corresponding partial r² substituted in place of the simple r². The test statistics have a χ²-distribution with 1 d.f. To test hypotheses such as

$\beta_1 = \beta_2 = \cdots = \beta_k = 0$

we have to substitute the multiple R² in place of the simple or partial r² in these formulae. The test statistics have a χ²-distribution with d.f. k.
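For concreteness, a small sketch of these r²-based forms follows; the values r² = 0.40 and n = 30 are purely illustrative assumptions, and the function name is ours.

```python
# Sketch: the r-squared forms of the three large-sample test statistics.
# With the multiple R-squared, the statistics are referred to a chi-squared
# distribution with k d.f.; with a simple or partial r-squared, 1 d.f.
import math

def lr_w_lm_from_r2(r2, n):
    """Return (W, LR, LM) computed from a simple, partial, or multiple r^2."""
    w = n * r2 / (1.0 - r2)
    lr = n * math.log(1.0 / (1.0 - r2))
    lm = n * r2
    return w, lr, lm

print(lr_w_lm_from_r2(0.40, 30))   # roughly (20.0, 15.3, 12.0); W >= LR >= LM
```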

To test any linear restrictions, we saw (in the Appendix to Chapter 3) that the likelihood-ratio test statistic was

$LR = n \log(RRSS/URSS)$

where RRSS = restricted residual sum of squares

URSS = unrestricted residual sum of squares

LR has a χ²-distribution with d.f. r, where r is the number of restrictions. The test statistics for the Wald test and the LM test are given by

$W = n(RRSS - URSS)/URSS$

$LM = n(RRSS - URSS)/RRSS$

Summary

This chapter is very long, and hence the summary is presented section by section.

1. Sections 4.2 to 4.5: Model with Two Explanatory Variables

We discuss the model with two explanatory variables in great detail because it clarifies many aspects of multiple regression. Of special interest are the expressions for the variances of the estimates of the regression coefficients given at the beginning of Section 4.3. These expressions are used repeatedly later in the book. Also, it is important to keep in mind the distinction between separate confidence intervals for each individual parameter and joint confidence intervals for sets of parameters (discussed in Section 4.3). Similarly, there can be conflicts between tests for each coefficient separately (t-tests) and tests for a set of coefficients (F-test); this is discussed in greater detail in Section 4.10. Finally, in Section 4.5 it is shown that each coefficient in a multiple regression can be interpreted as the coefficient in a simple regression between two variables, after removing the effect of all the other variables on these two variables. This interpretation is useful in many problems and will be used in other parts of the book.
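A brief numerical sketch of this partialling-out interpretation follows, with made-up data and numpy's least-squares routine (the data and names are illustrative, not from the book's examples).

```python
# Sketch: the coefficient on x1 in the multiple regression of y on x1 and x2
# equals the simple regression slope obtained after removing the effect of x2
# from both y and x1.
import numpy as np

rng = np.random.default_rng(0)
n = 200
x2 = rng.normal(size=n)
x1 = 0.5 * x2 + rng.normal(size=n)
y = 1.0 + 2.0 * x1 - 1.5 * x2 + rng.normal(size=n)

X = np.column_stack([np.ones(n), x1, x2])
beta = np.linalg.lstsq(X, y, rcond=None)[0]          # multiple regression

Z = np.column_stack([np.ones(n), x2])
ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]    # y with x2 removed
r1 = x1 - Z @ np.linalg.lstsq(Z, x1, rcond=None)[0]  # x1 with x2 removed
slope = (r1 @ ry) / (r1 @ r1)                        # simple regression slope

print(beta[1], slope)   # the two estimates of the x1 coefficient coincide
```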

We now get

W = 12.65 LR = 10.56 LM = 8.90

From the χ² tables with 4 d.f., the 5% significance point is 9.49. Thus both the W and LR tests reject the hypothesis of coefficient stability at the 5% significance level, whereas the LM test does not. There is thus a conflict among the three test criteria. We saw earlier that the F-test, like the LM test, did not reject the hypothesis at the 5% significance level.
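A short check of these equation 2 figures (a sketch assuming scipy; only the residual sums of squares, n, and r are taken from the text):

```python
# Equation 2: URSS = 0.1686, RRSS = 0.2397, n = 30, r = 4.
import math
from scipy import stats

n, rrss, urss = 30, 0.2397, 0.1686
W = n * (rrss - urss) / urss            # about 12.65
LR = n * math.log(rrss / urss)          # about 10.56
LM = n * (rrss - urss) / rrss           # about 8.90
print(W, LR, LM, stats.chi2.ppf(0.95, df=4))   # 5% point for 4 d.f. is about 9.49
# W and LR exceed 9.49, so they reject stability at 5%; LM does not.
```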

The conflict between the W, LR, and LM tests has been attributed to the fact that in small samples the actual significance levels may deviate substantially from the nominal significance levels. That is, although we said we were testing the hypothesis of coefficient stability at the 5% significance level, we were in effect testing it at different levels for the different tests. Procedures have been developed to correct this problem, but a discussion of them is beyond the scope of this book: the suggested formulas are too complicated to be presented here. However, the elementary introduction to these tests given here will be useful in understanding some other tests discussed in Chapters 4, 5, and 6.


