




The results for the linear form are:

Group 1: $\hat{y} = 1.0533 + 0.876x$ (s.e. 0.616, 0.038), $R^2 = 0.985$, $\hat{\sigma}_1^2 = 0.475$

Group 2: $\hat{y} = 3.279 + 0.835x$ (s.e. 3.443, 0.096), $R^2 = 0.904$, $\hat{\sigma}_2^2 = 3.154$

The F-ratio for the test is

$$F = \frac{\hat{\sigma}_2^2}{\hat{\sigma}_1^2} = \frac{3.154}{0.475} = 6.64$$

The 1% point for the F-distribution with d.f. 8 and 8 is 6.03. Thus the F-value is significant at the 1% level and we reject the hypothesis of homoskedasticity. Consider now the logarithmic form. The results are:

Group 1: $\log \hat{y} = 0.128 + 0.934x$ (s.e. 0.079, 0.030), $R^2 = 0.992$, $\hat{\sigma}_1^2 = 0.001596$

Group 2: $\log \hat{y} = 0.276 + 0.902x$ (s.e. 0.352, 0.099), $R^2 = 0.912$, $\hat{\sigma}_2^2 = 0.002789$

The F-ratio for the test is

$$F = \frac{\hat{\sigma}_2^2}{\hat{\sigma}_1^2} = \frac{0.002789}{0.001596} = 1.75$$

For d.f. 8 and 8, the 5% point from the F-tables is 3.44. Thus, if we use the 5% significance level, we do not reject the hypothesis of homoskedasticity. Hence by the Goldfeld-Quandt test we reject the hypothesis of homoskedasticity for the linear form but not for the log-linear form. Note that the White test rejected the hypothesis in both forms.
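To make the mechanics concrete, here is a minimal Python sketch of the Goldfeld-Quandt computation just illustrated. It is a sketch under assumptions: the function name, the bivariate setup, and the use of numpy/scipy are illustrative and not part of the original example.

```python
import numpy as np
from scipy import stats

def goldfeld_quandt(x, y, n_omit=0):
    """Goldfeld-Quandt test for y = a + b*x + u (illustrative sketch).

    Sort observations by x, optionally omit the middle n_omit of them,
    fit OLS separately to the low-x and high-x groups, and form the
    F-ratio of the two residual variances.
    """
    order = np.argsort(x)
    x, y = x[order], y[order]
    n1 = (len(x) - n_omit) // 2              # observations per group
    s2 = []
    for xg, yg in [(x[:n1], y[:n1]), (x[-n1:], y[-n1:])]:
        X = np.column_stack([np.ones_like(xg), xg])
        b, *_ = np.linalg.lstsq(X, yg, rcond=None)
        resid = yg - X @ b
        s2.append(resid @ resid / (n1 - 2))  # sigma-hat^2, d.f. = n1 - 2
    F = s2[1] / s2[0]                        # group 2 (large x) variance on top
    return F, stats.f.sf(F, n1 - 2, n1 - 2)  # F-ratio and its p-value
```

In the example above the degrees of freedom are 8 and 8; that is, each group has 10 observations and two estimated coefficients.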

Turning now to the Breusch-Pagan test, the regression of $\hat{u}_t^2$ on $x_t$, $x_t^2$, and $x_t^3$ gave the following regression sums of squares. For the linear form,

$S = 40.842$ for the regression of $\hat{u}_t^2$ on $x_t$, $x_t^2$, $x_t^3$

$S = 40.065$ for the regression of $\hat{u}_t^2$ on $x_t$, $x_t^2$

Also, $\hat{\sigma}^2 = \frac{1}{n} \sum \hat{u}_t^2 = 1.726$. The test statistic (using the second regression) is

$$\frac{S}{2\hat{\sigma}^4} = \frac{40.065}{2(2.979)} = 6.724$$

We refer the statistic to the $\chi^2$ distribution with d.f. = 2, since two slope parameters are estimated. It is significant at the 5 percent level, thus rejecting the hypothesis of homoskedasticity.
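The statistic is easy to compute directly from the OLS residuals. The following is a minimal sketch, assuming a numpy-based setup; the function name and interface are illustrative.

```python
import numpy as np

def breusch_pagan(resid, Z):
    """Breusch-Pagan statistic S / (2 * sigma-hat^4) (illustrative sketch).

    resid: OLS residuals u-hat from the original regression.
    Z: (n, p) array of variables suspected to drive the error variance,
       e.g. columns x_t and x_t**2 as in the text.
    Under homoskedasticity the statistic is chi-square with p d.f.
    """
    u2 = resid ** 2
    sigma2 = u2.mean()                          # sigma-hat^2 = (1/n) sum u-hat^2
    X = np.column_stack([np.ones(len(u2)), Z])  # add a constant to the z's
    c, *_ = np.linalg.lstsq(X, u2, rcond=None)
    S = ((X @ c - u2.mean()) ** 2).sum()        # regression sum of squares
    return S / (2 * sigma2 ** 2)
```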

For the log-linear form, using only $x_t$ and $x_t^2$ as regressors, we get $S = 0.000011$ and $\hat{\sigma}^2 = 0.00209$. The test statistic is

$$\frac{S}{2\hat{\sigma}^4} = \frac{0.000011}{2(0.00209)^2} = 1.259$$

Using the $\chi^2$ tables with 2 d.f., we see that this is not significant even at the 50 percent level. Thus the test does not reject the hypothesis of homoskedasticity for the log-linear form.

An Intuitive Justification for the Breusch-Pagan Test

In Section 4.12 we discussed the LM, W, and LR tests. There we showed that the LM test statistic was $LM = nR^2$, which has a $\chi^2$-distribution. Arguing from analogy, we would get a test statistic $nR^2$, where $R^2$ is the squared multiple correlation coefficient in a regression of $\hat{u}_t^2$ on $z_{1t}, z_{2t}, z_{3t}, \ldots$. To see the relationship between this and the Breusch-Pagan test statistic, note that since $R^2$ is the ratio of the regression sum of squares to the total sum of squares, and the total sum of squares is $n$ times the sample variance of $\hat{u}_t^2$,

$$nR^2 = \frac{\text{Regr.SS from a regression of } \hat{u}_t^2 \text{ on } z_{1t}, z_{2t}, z_{3t}, \ldots}{\operatorname{var}(\hat{u}_t^2)} = \frac{S}{\operatorname{var}(\hat{u}_t^2)}$$

Now, under the null hypothesis that the errors are homoskedastic, $u_t^2/\sigma^2$ has a $\chi^2$-distribution with 1 d.f. Hence $\operatorname{var}(u_t^2/\sigma^2) = 2$ (because the variance of a $\chi^2$ variable is twice its degrees of freedom). Thus

$$\operatorname{var}(u_t^2) = 2\sigma^4$$

In large samples we can write $\operatorname{var}(\hat{u}_t^2) = \operatorname{var}(u_t^2)$ and $\hat{\sigma}^2 = \sigma^2$. Hence we get $\operatorname{var}(\hat{u}_t^2) = 2\hat{\sigma}^4$, and the test statistic is $S/2\hat{\sigma}^4$, as described earlier. The statistic can also be viewed as half the regression sum of squares from a regression of $g_t = \hat{u}_t^2/\hat{\sigma}^2$ on $z_{1t}, z_{2t}, z_{3t}, \ldots$. Breusch and Pagan argue that in discussions of heteroskedasticity, if one is going to plot any quantity, it is more reasonable to plot $g_t$ than quantities like $\hat{u}_t$.
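A short simulated check, with an arbitrary illustrative data-generating process, shows the three forms of the statistic side by side: $nR^2$ written as $S/\operatorname{var}(\hat{u}_t^2)$, the Breusch-Pagan form $S/2\hat{\sigma}^4$, and half the regression sum of squares of $g_t$. The last two coincide exactly; the first is close in large samples.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
x = rng.uniform(1, 10, n)
y = 2.0 + 0.5 * x + rng.normal(0, 1, n) * x    # error s.d. proportional to x

X = np.column_stack([np.ones(n), x])           # original regression
b, *_ = np.linalg.lstsq(X, y, rcond=None)
u2 = (y - X @ b) ** 2                          # squared OLS residuals
sigma2 = u2.mean()                             # sigma-hat^2

c, *_ = np.linalg.lstsq(X, u2, rcond=None)     # regress u-hat^2 on z = x
S = ((X @ c - u2.mean()) ** 2).sum()           # regression sum of squares

nR2 = n * S / ((u2 - u2.mean()) ** 2).sum()    # n R^2 = S / var(u-hat^2)
bp = S / (2 * sigma2 ** 2)                     # S / (2 sigma-hat^4)

g = u2 / sigma2                                # g_t = u-hat_t^2 / sigma-hat^2
cg, *_ = np.linalg.lstsq(X, g, rcond=None)
half = ((X @ cg - g.mean()) ** 2).sum() / 2    # half the regression SS of g

print(nR2, bp, half)                           # bp == half; nR2 is close
```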

5.3 Consequences of Heteroskedasticity

Before we attempt solutions to the heteroskedasticity problem, we will study its consequences for the least squares estimators. We will show that:

1. The least squares estimators are still unbiased but inefficient.

2. The estimates of the variances are also biased, thus invalidating the tests of significance.

To see this, consider a very simple model with no constant term.

$$y_t = \beta x_t + u_t \qquad V(u_t) = \sigma_t^2 \qquad (5.1)$$

The least squares estimator of $\beta$ is

$$\hat{\beta} = \frac{\sum x_t y_t}{\sum x_t^2} = \beta + \frac{\sum x_t u_t}{\sum x_t^2}$$

If $E(u_t) = 0$ and the $u_t$ are independent of the $x_t$, we have $E\left(\sum x_t u_t / \sum x_t^2\right) = 0$ and hence $E(\hat{\beta}) = \beta$. Thus $\hat{\beta}$ is unbiased. If the $u_t$ are mutually independent, denoting $\sum x_t^2$ by $S_x$, we can write

$$V(\hat{\beta}) = V\left(\frac{\sum x_t u_t}{S_x}\right) = \frac{1}{S_x^2} \sum x_t^2 \sigma_t^2 \qquad (5.2)$$

Suppose that we write $\sigma_t^2 = \sigma^2 z_t^2$, where the $z_t$ are known; that is, we know the variances up to a multiplicative constant. Then, dividing (5.1) by $z_t$, we have the model

$$\frac{y_t}{z_t} = \beta \frac{x_t}{z_t} + v_t \qquad (5.3)$$

where $v_t = u_t/z_t$ has a constant variance $\sigma^2$. Since we are "weighting" the $t$th observation by $1/z_t$, the OLS estimation of (5.3) is called weighted least squares (WLS). If $\beta^*$ is the WLS estimator of $\beta$, we have

$$\beta^* = \frac{\sum (x_t/z_t)(y_t/z_t)}{\sum (x_t/z_t)^2} = \beta + \frac{\sum (x_t/z_t) v_t}{\sum (x_t/z_t)^2}$$

and since the latter term has expectation zero, we have $E(\beta^*) = \beta$. Thus the WLS estimator $\beta^*$ is also unbiased. We will show that $\beta^*$ is more efficient than the OLS estimator $\hat{\beta}$. We have

$$V(\beta^*) = \frac{\sigma^2}{\sum (x_t^2/z_t^2)}$$

and substituting $\sigma_t^2 = \sigma^2 z_t^2$ in (5.2), we have

$$V(\hat{\beta}) = \frac{\sigma^2 \sum x_t^2 z_t^2}{(\sum x_t^2)^2}$$

Thus

$$\frac{V(\beta^*)}{V(\hat{\beta})} = \frac{(\sum x_t^2)^2}{\sum (x_t^2/z_t^2) \cdot \sum x_t^2 z_t^2}$$

By the Cauchy-Schwarz inequality, $(\sum x_t^2)^2 = \left[\sum (x_t/z_t)(x_t z_t)\right]^2 \le \sum (x_t^2/z_t^2) \cdot \sum x_t^2 z_t^2$, so this ratio is at most 1: the WLS estimator $\beta^*$ is at least as efficient as $\hat{\beta}$.
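Both conclusions, unbiasedness of $\hat{\beta}$ and $\beta^*$ and the efficiency advantage of weighting, can be checked with a small Monte Carlo sketch; the parameter values and the choice $z_t = x_t$ are arbitrary illustrations.

```python
import numpy as np

rng = np.random.default_rng(1)
beta, sigma, n, reps = 2.0, 1.0, 50, 20000
x = rng.uniform(1, 5, n)
z = x                                    # suppose V(u_t) = sigma^2 z_t^2 with z_t = x_t

ols, wls = [], []
for _ in range(reps):
    u = rng.normal(0.0, sigma * z)       # heteroskedastic errors
    y = beta * x + u                     # model (5.1), no constant term
    ols.append((x @ y) / (x @ x))        # beta-hat = sum(x y) / sum(x^2)
    xw, yw = x / z, y / z                # divide (5.1) through by z_t
    wls.append((xw @ yw) / (xw @ xw))    # beta* from the weighted model (5.3)

print(np.mean(ols), np.mean(wls))        # both approx. beta: unbiased
print(np.var(ols) / np.var(wls))         # ratio > 1: beta* is more efficient
```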



