




This is significant at the 1% level. Thus we reject the hypothesis ρ = 0, even though the DW statistic is close to 2 and the estimate ρ̂ from the OLS residuals is only 0.1.

Let us keep all the other numbers the same and just change the standard error of α̂. The results are as follows:

SE(α̂)    V(α̂)     1 − n·V(α̂)    h       Conclusion
0.13      0.0169    0.155          1.80    Not significant at the 5% level
0.15      0.0225    −0.125         —       Test not defined

Thus, other things being equal, the precision with which α̂ is estimated has a significant effect on the outcome of the h-test.
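To make the arithmetic concrete, here is a minimal sketch of the h-statistic computation, h = ρ̂·√(n/(1 − n·V(α̂))). The sample size n = 50 is an assumption inferred from the tabulated values (1 − 50 × 0.0169 = 0.155 and 1 − 50 × 0.0225 = −0.125); it is not stated on this page.

```python
import math

def durbin_h(rho_hat, n, var_a):
    """Durbin's h-statistic; None when 1 - n*var_a <= 0 (test not defined).

    n = 50 here is an assumption inferred from the tabulated values.
    """
    denom = 1 - n * var_a
    if denom <= 0:
        return None
    return rho_hat * math.sqrt(n / denom)

n, rho_hat = 50, 0.1
h1 = durbin_h(rho_hat, n, 0.14 ** 2)   # about 5.0: significant at the 1% level
h2 = durbin_h(rho_hat, n, 0.13 ** 2)   # about 1.80: not significant at the 5% level
h3 = durbin_h(rho_hat, n, 0.15 ** 2)   # None: 1 - n*V < 0, test not defined
```

A small change in SE(α̂), from 0.13 to 0.15, moves the test from "borderline" to "not defined", which is the point of the table above.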

In the case where the h-test cannot be used, we can use the alternative test suggested by Durbin. However, the Monte Carlo study by Maddala and Rao suggests that this test does not have good power in those cases where the h-test cannot be used. On the other hand, in cases where the h-test can be used, Durbin's second test is almost as powerful. It is not often used because it involves more computation. However, we will show that Durbin's second test can be generalized to higher-order autoregressions, whereas the h-test cannot be.

6.8 A General Test for Higher-Order Serial Correlation: The LM Test

The h-test we have discussed is, like the Durbin-Watson test, a test for first-order autoregression. Breusch and Godfrey discuss some general tests that are easy to apply and are valid for very general hypotheses about the serial

"Maddala and Rao, "Tests for Serial Correlation."

T. S. Breusch, "Testing for Autocorrelation in Dynamic Linear Models," Australian Economic Papers, Vol. 17, 1978, pp. 334-355.

L. G. Godfrey, "Testing for Higher Order Serial Correlation in Regression Equations When the Regressors Include Lagged Dependent Variables," Econometrica, Vol. 46, 1978, pp. 1303-1310.

We have

α̂ = 0.65    V(α̂) = (0.14)² = 0.0196

ρ̂ = 0.1 since DW ≈ 2(1 − ρ̂). Hence Durbin's h-statistic is

h = ρ̂ √[n/(1 − n·V(α̂))] = 0.1 √[50/(1 − 50 × 0.0196)] = 0.1 √2500 = 5.0



correlation in the errors. These tests are derived from a general principle called the Lagrange multiplier (LM) principle. A discussion of this principle is beyond the scope of this book. For the present we will explain what the test is. The test is similar to Durbin's second test that we have discussed. Consider the regression model

y_t = Σ_{i=1}^{k} β_i x_{it} + u_t,    t = 1, 2, ..., n    (6.14)

u_t = ρ_1 u_{t-1} + ρ_2 u_{t-2} + ··· + ρ_p u_{t-p} + e_t,    e_t ~ IN(0, σ²)    (6.15)

We are interested in testing H_0: ρ_1 = ρ_2 = ··· = ρ_p = 0. The x's in equation (6.14) can include lagged dependent variables as well. The LM test is as follows:

First, estimate (6.14) by OLS and obtain the least squares residuals û_t. Next, estimate the regression equation

û_t = Σ_{i=1}^{k} β_i x_{it} + Σ_{j=1}^{p} ρ_j û_{t-j} + η_t    (6.16)

and test whether the coefficients of û_{t-j} are all zero. We take the conventional F-statistic and use p·F as χ² with p degrees of freedom. We use the χ²-test rather than the F-test because the LM test is a large-sample test.

The test can be used for different specifications of the error process. For instance, for the problem of testing for fourth-order autocorrelation

u_t = ρ_4 u_{t-4} + e_t    (6.17)

we just estimate

û_t = Σ_{i=1}^{k} β_i x_{it} + ρ_4 û_{t-4} + η_t    (6.18)

instead of (6.16) and test ρ_4 = 0.

The test procedure is the same for autoregressive or moving average errors. For instance, if we have a moving average (MA) error

u_t = e_t + ρ_4 e_{t-4}

instead of (6.17), the test procedure is still to estimate (6.18) and test ρ_4 = 0. Consider the following types of errors:

AR(2): u_t = ρ_1 u_{t-1} + ρ_2 u_{t-2} + e_t

MA(2): u_t = e_t + ρ_1 e_{t-1} + ρ_2 e_{t-2}

AR(2) with interaction: u_t = ρ_1 u_{t-1} + ρ_2 u_{t-2} − ρ_1 ρ_2 u_{t-3} + e_t

In all these cases, we just carry out the test by estimating equation (6.16) with p = 2 and testing ρ_1 = ρ_2 = 0. What is of importance is the degree of the autoregression, not its particular nature.
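The point that the same auxiliary regression catches moving-average as well as autoregressive errors can be illustrated on simulated data. In this sketch (all numbers and the MA coefficient 0.8 are hypothetical), the error series plays the role of the OLS residuals, and the t-ratio on the lag-4 term is large even though the error is MA, not AR:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400
e = rng.normal(size=n)
u = e.copy()
u[4:] += 0.8 * e[:-4]          # MA error: u_t = e_t + 0.8 e_{t-4}

# Regress u_t on a constant and u_{t-4}, as in (6.18), and test rho_4 = 0.
y, x = u[4:], u[:-4]
X = np.column_stack([np.ones(len(y)), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
s2 = resid @ resid / (len(y) - 2)
se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
t_ratio = beta[1] / se         # well above 1.96: the MA error is detected
```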



Thus the LM test for serial correlation is:

1. Estimate equation (6.14) by OLS and get the residuals û_t.

2. Estimate equation (6.16) or (6.19) by OLS and compute the F-statistic for testing the hypothesis H_0: ρ_1 = ρ_2 = ··· = ρ_p = 0.

3. Use p·F as χ² with p degrees of freedom.
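The three steps can be sketched in code. This is a minimal illustration on simulated data (the regression coefficients, AR parameters, and sample size are all assumptions for the example, not from the text):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 2

# A regression y_t = 1 + 2 x_t + u_t with AR(2) errors (illustrative values).
x = rng.normal(size=n)
e = rng.normal(size=n)
u = np.zeros(n)
for t in range(2, n):
    u[t] = 0.6 * u[t - 1] + 0.2 * u[t - 2] + e[t]
y = 1.0 + 2.0 * x + u

def resid(y, X):
    """OLS residuals of y on X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

# Step 1: OLS on the original equation (6.14); save the residuals u_hat.
X = np.column_stack([np.ones(n), x])
u_hat = resid(y, X)

# Step 2: auxiliary regression (6.16) of u_hat on the x's and p lags of
# u_hat; F-statistic for H0 via restricted vs. unrestricted sums of squares.
lags = np.column_stack([u_hat[p - j:n - j] for j in range(1, p + 1)])
Xr = X[p:]                           # restricted: x's only
Xu = np.column_stack([Xr, lags])     # unrestricted: x's plus lagged residuals
ssr_r = np.sum(resid(u_hat[p:], Xr) ** 2)
ssr_u = np.sum(resid(u_hat[p:], Xu) ** 2)
F = ((ssr_r - ssr_u) / p) / (ssr_u / (n - p - Xu.shape[1]))

# Step 3: p * F is asymptotically chi-square with p degrees of freedom;
# the 5% critical value of chi-square(2) is 5.99.
lm = p * F
```

With strongly autocorrelated errors, as here, p·F comfortably exceeds the χ²(2) critical value.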

6.9 Strategies When the DW Test Statistic is Significant

The DW test is designed as a test of the hypothesis ρ = 0 when the errors follow a first-order autoregressive process u_t = ρu_{t-1} + e_t. However, the test has been found to be robust against other alternatives such as AR(2), MA(1), ARMA(1, 1), and so on. Further, and more disturbingly, it also catches specification errors such as omitted variables that are themselves autocorrelated, and misspecified dynamics (a term that we will explain). Thus the strategy to adopt when the DW test statistic is significant is not clear. We discuss three different strategies.

1. Assume that the significant DW statistic is an indication of serial correlation but may not be due to AR(1) errors.

2. Test whether serial correlation is due to omitted variables.

3. Test whether serial correlation is due to misspecified dynamics.

Errors Not AR(1)

In case 1, since a significant DW statistic does not necessarily mean that the errors are AR(1), we should check for higher-order autoregressions by estimating equations of the form

u_t = ρ_1 u_{t-1} + ρ_2 u_{t-2} + e_t

Once the order has been determined, we can estimate the model with the appropriate assumptions about the error structure by the methods described in Section 6.4. Actually, there are two ways of going about this problem of determining the appropriate order of the autoregression. The first is to progressively complicate the model by testing for higher-order autoregressions. The second is to start with an autoregression of sufficiently high order and progressively simplify it. Although the former approach is the one commonly used, the latter is better from the theoretical point of view.
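The second, general-to-specific approach can be sketched as follows. Here the series u stands in for the OLS residuals, the starting order 4 and the 5% cutoff 1.96 are illustrative assumptions, and the longest lag is dropped while its t-ratio is insignificant:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300
e = rng.normal(size=n)
u = np.zeros(n)
for t in range(1, n):
    u[t] = 0.7 * u[t - 1] + e[t]        # true errors are AR(1) (assumed)

def last_t_ratio(y, X):
    """t-ratio of the last coefficient in an OLS regression of y on X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    s2 = r @ r / (len(y) - X.shape[1])
    cov = s2 * np.linalg.inv(X.T @ X)
    return beta[-1] / np.sqrt(cov[-1, -1])

# Start at order 4; drop the longest lag while it is insignificant at 5%.
order = 4
while order > 0:
    X = np.column_stack([u[order - j:n - j] for j in range(1, order + 1)])
    if abs(last_t_ratio(u[order:], X)) > 1.96:
        break
    order -= 1
# With AR(1) errors the procedure should usually stop at order 1.
```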

Finally, an alternative to the estimation of (6.16) is to estimate the equation

y_t = Σ_{i=1}^{k} β_i x_{it} + Σ_{j=1}^{p} ρ_j û_{t-j} + η_t    (6.19)


