




Illustrative Example

For the data in Table 5.1, using the residuals in Table 5.2, the Ramsey, White, and Glejser tests produced the following results. Since there is only one explanatory variable x, we can use x instead of ŷ in the application of the Ramsey test. The test thus involves regressing û on x², x³, and so on. In our example the test was not very useful. The results were (standard errors are not reported because the R² is very low for a sample size of 20):

û = -0.379 + 0.236 × 10⁻³x² - 0.549 × 10⁻⁴x³        R² = 0.034

None of the coefficients had a t-ratio > 1, indicating that we are unable to reject the hypothesis that the errors are homoskedastic.

The test suggested by White involves regressing û² on x, x², and so on. The results were:

û² = -1.370 + 0.116x        R² = 0.7911
       (0.390)  (0.014)

û² = 0.493 - 0.071x + 0.0037x²        R² = 0.878
      (0.620)  (0.055)  (0.0011)

The R²'s are highly significant in both cases. Thus the test rejects the hypothesis of homoskedasticity. A suggested procedure to correct for heteroskedasticity is to estimate the regression model assuming that V(uₜ) = σₜ², where σₜ² = γ₀ + γ₁x + γ₂x². This procedure is discussed in Section 5.4. Glejser's tests gave the following results:

|û| = -0.209 + 0.0512x        R² = 0.927
        (0.094)  (0.0034)

|û| = -1.232 + 0.475√x        R² = 0.902
        (0.186)  (0.037)

|û| = 1.826 - 13.78(1/x)        R² = 0.649
        (0.155)  (2.39)

All the tests reject the hypothesis of homoskedasticity, although on the basis of R², the first model is preferable to the others. The suggested model to estimate is the same as that suggested by White's test.
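The mechanics of these auxiliary regressions are easy to reproduce. Below is a minimal sketch in Python using only the standard library; the data are synthetic and purely illustrative, not the Table 5.1 data. It computes OLS residuals and then runs a Glejser-type regression of |û| on x; White's version would regress û² on x and x² instead.

```python
# Minimal sketch of a Glejser-type auxiliary regression on
# synthetic heteroskedastic data (NOT the Table 5.1 data).
import random

def simple_ols(x, y):
    """OLS of y on a constant and x; returns (a, b, r_squared)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    a = my - b * mx
    tss = sum((yi - my) ** 2 for yi in y)
    return a, b, (b * sxy) / tss   # R^2 = ESS / TSS

random.seed(0)
x = [float(i) for i in range(1, 21)]
# error standard deviation grows with x -> heteroskedastic errors
y = [2.0 + 0.5 * xi + random.gauss(0, 0.3 * xi) for xi in x]

# step 1: residuals from the original regression
a, b, _ = simple_ols(x, y)
resid = [yi - a - b * xi for xi, yi in zip(x, y)]

# step 2 (Glejser): regress |u| on x and inspect R^2; a high R^2
# (equivalently, a significant slope) signals heteroskedasticity
_, slope, r2 = simple_ols(x, [abs(u) for u in resid])
```

The same helper can be reused with √x or 1/x as the regressor to reproduce the other two Glejser variants.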

The results are similar for the log-linear form as well, although the coefficients are not as significant. Using the residuals in Table 5.3 we get, for the White test,

û² = -0.211 + 0.129x        R² = 0.572
       (0.083)  (0.026)

û² = -0.620 + 0.425x - 0.051x²
       (0.385)  (0.273)  (0.047)

Thus there is evidence of heteroskedasticity even in the log-linear form, although on casually looking at the residuals in Table 5.3 we concluded earlier that the errors were homoskedastic. The Goldfeld-Quandt test, to be discussed later in this section, also did not reject the hypothesis of homoskedasticity. The Glejser tests, however, show significant heteroskedasticity in the log-linear form.



Some Other Tests

The Likelihood Ratio Test

If the number of observations is large, one can use a likelihood ratio test. Divide the residuals (estimated from the OLS regression) into k groups with nᵢ observations in the ith group, Σnᵢ = n. Estimate the error variances in each group by σ̂ᵢ². Let the estimate of the error variance from the entire sample be σ̂². Then if we define λ as

λ = [(σ̂₁²)^(n₁/2) (σ̂₂²)^(n₂/2) · · · (σ̂ₖ²)^(nₖ/2)] / (σ̂²)^(n/2)

then -2 log λ has a χ²-distribution with degrees of freedom (k - 1). If there is only one explanatory variable in the equation, the ordering of the residuals can be based on the absolute magnitude of this variable. But if there are two or more variables and no single variable can provide a satisfactory ordering, then ŷ, the predicted value of y, can be used.

Feldstein used this LR test for his hospital cost regressions described in Chapter 4 (Section 4.6, Example 1). He divided the total number of observations (177) into four groups of equal size, the residuals being ordered by the predicted values of the dependent variable. The estimates σ̂ᵢ² were 71.47, 114.82, 102.81, and 239.34. The estimate σ̂² for the whole sample was 138.76. Thus -2 log λ was 18.265. The 1% significance point for the χ²-distribution with 3 d.f. is 11.34. Thus there were significant differences between the error variances. Next Feldstein weighted the observations by weights proportional to 1/σ̂ᵢ. The weights, normalized to make their average equal to 1, were 1.2599, 0.9940, 1.0504, and 0.6885. This would make the error variances approximately equal. The equation was estimated by OLS using the transformed data. This procedure is called weighted least squares and is often denoted by WLS. The new estimates of the variances from this reestimated equation for the four groups were 106.34, 110.71, 114.99, and 117.06, which are almost equal. However, the regression parameters did not change much, as shown in Table 5.4. Although the point estimates did not change much, the standard errors could be different. Since Feldstein does not present these, we have no way of comparing them.
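Feldstein's calculation can be sketched as follows. The group sizes below are an assumption (the text says only that the 177 observations were split into four groups of equal size), and the exact value of -2 log λ also depends on how σ̂² is computed, so this sketch need not reproduce the reported 18.265 exactly; either way it comfortably exceeds the 1% χ²(3) point of 11.34. It also approximately reproduces the normalized WLS weights.

```python
# Sketch of the LR statistic -2 log(lambda) = n*log(s2) - sum(n_i*log(s2_i))
# applied to Feldstein's four group variances.  Group sizes are assumed.
import math

group_var = [71.47, 114.82, 102.81, 239.34]  # sigma_i^2 estimates, by group
group_n = [44, 44, 44, 45]                   # assumed split of n = 177
s2 = 138.76                                  # whole-sample variance estimate
n = sum(group_n)

# -2 log lambda = n log s2 - sum_i n_i log s2_i
lr_stat = n * math.log(s2) - sum(
    ni * math.log(vi) for ni, vi in zip(group_n, group_var))

# WLS weights proportional to 1/sigma_i, normalized to average 1
inv_sd = [1.0 / math.sqrt(v) for v in group_var]
mean_inv = sum(inv_sd) / len(inv_sd)
weights = [w / mean_inv for w in inv_sd]
```

The weights come out near (1.26, 1.00, 1.05, 0.69), matching Feldstein's figures up to small differences in the normalization.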

Goldfeld and Quandt Test

If we do not have large samples, we can use the Goldfeld and Quandt test. In this test we split the observations into two groups, one corresponding to large values of x and the other corresponding to small values of x, fit separate regressions for each, and then apply an F-test to test the equality of the error variances. Goldfeld and Quandt suggest omitting some observations in the middle to increase our ability to discriminate between the two error variances.

S. M. Goldfeld and R. E. Quandt, Nonlinear Methods in Econometrics (Amsterdam: North-Holland, 1972), Chap. 3.




Table 5.4  Comparison of OLS and WLS Estimates for Hospital-Cost Regression (Average Cost per Case)

Case Type                               OLS       WLS
General medicine                      114.48    111.81
Pediatrics                             24.97     28.35
General surgery                        32.70     35.07
                                       15.25     15.58
Traumatic and orthopedic surgery       39.69     36.04
Other surgery                          98.02    101.38
Gynecology                             58.72     58.48
Obstetrics                             34.88     34.50
Others                                 69.51     66.26

Source: M. S. Feldstein, Economic Analysis for Health Service Efficiency (Amsterdam: North-Holland, 1967), p. 54.

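As a concrete sketch, the Goldfeld-Quandt procedure might look as follows in Python. The data are synthetic and illustrative; the 5% critical value F(6, 6) = 4.28 used in the comment is from standard tables.

```python
# Sketch of the Goldfeld-Quandt test on synthetic data.
import random

def ols_rss(x, y):
    """Residual sum of squares from OLS of y on a constant and x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
    a = my - b * mx
    return sum((yi - a - b * xi) ** 2 for xi, yi in zip(x, y))

random.seed(1)
data = [(float(i), 1.0 + 0.8 * i + random.gauss(0, 0.2 * i))
        for i in range(1, 21)]

# order by x and omit the 4 middle observations, as Goldfeld and
# Quandt suggest, leaving two groups of 8
data.sort(key=lambda p: p[0])
low, high = data[:8], data[-8:]

rss_low = ols_rss([p[0] for p in low], [p[1] for p in low])
rss_high = ols_rss([p[0] for p in high], [p[1] for p in high])

# each group regression has 8 - 2 = 6 degrees of freedom; a large
# F ratio (here, F > 4.28 at the 5% level) rejects homoskedasticity
F = (rss_high / 6) / (rss_low / 6)
```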

Breusch and Pagan Test

Suppose that V(uₜ) = σₜ². If there are some variables z₁, z₂, . . . , zᵣ that influence the error variance and if

σₜ² = f(α₀ + α₁z₁ₜ + α₂z₂ₜ + · · · + αᵣzᵣₜ)

then the Breusch and Pagan test* is a test of the hypothesis

H₀: α₁ = α₂ = · · · = αᵣ = 0

The function f(·) can be any function. For instance, f(x) can be x, x², eˣ, and so on. The Breusch and Pagan test does not depend on the functional form. Let

S₀ = regression sum of squares from a regression of ûₜ² on z₁, z₂, . . . , zᵣ

Then λ = S₀/(2σ̂⁴) has a χ²-distribution with degrees of freedom r.

This test is an asymptotic test. An intuitive justification for the test will be given after an illustrative example.
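A minimal sketch of the statistic with a single z (so r = 1) follows. The data are synthetic, and σ̂² is taken to be the maximum-likelihood estimate Σû²/n, which is an assumption of this sketch.

```python
# Sketch of the Breusch-Pagan statistic S0 / (2 * sigma^4), with one z.
import random

def ols_fit(x, y):
    """OLS of y on a constant and x; returns (a, b)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
    return my - b * mx, b

random.seed(2)
z = [float(i) for i in range(1, 31)]
# the error variance rises with z -> heteroskedastic errors
y = [1.0 + 0.5 * zi + random.gauss(0, 0.1 * zi) for zi in z]

a, b = ols_fit(z, y)
u2 = [(yi - a - b * zi) ** 2 for zi, yi in zip(z, y)]
sigma2 = sum(u2) / len(u2)       # assumed MLE of the error variance

# S0 = regression sum of squares from regressing u^2 on z
# (the fitted values have mean equal to the mean of u^2, i.e. sigma2)
a2, b2 = ols_fit(z, u2)
s0 = sum((a2 + b2 * zi - sigma2) ** 2 for zi in z)

# compare bp with chi-square(1); the 5% point is 3.84
bp = s0 / (2 * sigma2 ** 2)
```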

Illustrative Example

Consider the data in Table 5.1. To apply the Goldfeld-Quandt test we consider two groups of 10 observations each, ordered by the values of the variable x. The first group consists of observations 6, 11, 9, 4, 14, 15, 19, 20, 1, and 16. The second group consists of the remaining 10. The estimated equations were:

T. S. Breusch and A. R. Pagan, "A Simple Test for Heteroscedasticity and Random Coefficient Variation," Econometrica, Vol. 47, 1979, pp. 1287-1294.



