
thumb (for samples of size 20) that one can use the methods that take account of autocorrelation if $\hat{\rho} > 0.3$, where $\hat{\rho}$ is the estimated first-order serial correlation from an OLS regression." In samples of larger size it would be worthwhile to use these methods even when $\hat{\rho}$ is smaller than 0.3. (A sketch of estimating $\hat{\rho}$ from OLS residuals is given after point 4 below.)

2. The discussion above assumes that the true errors are first-order autoregressive. If they have a more complicated structure (e.g., second-order autoregressive), it might be thought that it would still be better to proceed on the assumption that the errors are first-order autoregressive rather than ignore the problem completely and use the OLS method. Engle shows that this is not necessarily true (i.e., sometimes one can be worse off making the assumption of first-order autocorrelation than ignoring the problem completely).

3. In regressions with quarterly (or monthly) data, one might find that the errors exhibit fourth-order (or twelfth-order) autocorrelation because adequate allowance has not been made for seasonal effects. In such cases, if one looks for only first-order autocorrelation, one might not find any. This does not mean that autocorrelation is not a problem. In this case the appropriate specification for the error term may be $u_t = \rho u_{t-4} + e_t$ for quarterly data and $u_t = \rho u_{t-12} + e_t$ for monthly data. (A sketch of this seasonal check is also given after point 4.)

4. Finally, and most important, it is often possible to confuse misspecified dynamics with serial correlation in the errors. For instance, a static regression model with first-order autocorrelation in the errors, that is, $y_t = \beta x_t + u_t$ with $u_t = \rho u_{t-1} + e_t$, can be written as

$$y_t = \rho y_{t-1} + \beta x_t - \rho\beta x_{t-1} + e_t \qquad (6.11)$$

This model is the same as

$$y_t = \alpha_1 y_{t-1} + \alpha_2 x_t + \alpha_3 x_{t-1} + e_t \qquad (6.12)$$

with the restriction $\alpha_1\alpha_2 + \alpha_3 = 0$. We can estimate the model (6.12) and test this restriction. If it is rejected, clearly it is not valid to estimate (6.11). (The test procedure is described in Section 6.8.)

Suppose instead that the true model is (6.12) but we estimate the static regression of $y_t$ on $x_t$ alone. The errors would be serially correlated, but not because the errors follow a first-order autoregressive process; rather, it is because the terms $x_{t-1}$ and $y_{t-1}$ have been omitted. This is what is meant by "misspecified dynamics." Thus a significant serial correlation in the estimated residuals does not necessarily imply that we should estimate a serial correlation model. Some further tests are necessary (like the test of the restriction $\alpha_1\alpha_2 + \alpha_3 = 0$ in the case above). In fact, it is always best to start with an equation like (6.12) and test this restriction before applying any tests for serial correlation. Short computational sketches of these points follow.
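First, a minimal sketch of the rule of thumb in point 1: fit OLS, then estimate the first-order serial correlation $\hat{\rho}$ of the residuals by regressing $e_t$ on $e_{t-1}$. The data are simulated and all variable names are illustrative, not from the text.

```python
# Sketch: estimate rho-hat from OLS residuals (rule of thumb in point 1).
# Simulated data; names (y, X) are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n = 20
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 2.0]) + rng.normal(size=n)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
e = y - X @ beta                                   # OLS residuals

# First-order serial correlation: slope of e_t on e_{t-1}, no constant.
rho_hat = (e[1:] @ e[:-1]) / (e[:-1] @ e[:-1])
print(f"rho_hat = {rho_hat:.3f}; AR(1)-corrected methods advisable: {abs(rho_hat) > 0.3}")
```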
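Next, a sketch of the seasonal case in point 3, assuming quarterly data: errors generated as $u_t = \rho u_{t-4} + e_t$ show little first-order but pronounced fourth-order serial correlation, so a check confined to lag 1 would miss the problem.

```python
# Sketch: fourth-order autocorrelation in quarterly errors (point 3).
# The series u is simulated as u_t = rho*u_{t-4} + e_t.
import numpy as np

rng = np.random.default_rng(1)
n, rho = 80, 0.5
eps = rng.normal(size=n)
u = np.zeros(n)
for t in range(n):
    u[t] = (rho * u[t - 4] if t >= 4 else 0.0) + eps[t]

lag = lambda k: (u[k:] @ u[:-k]) / (u[:-k] @ u[:-k])
print(f"lag-1 correlation = {lag(1):.3f}")         # close to zero
print(f"lag-4 correlation = {lag(4):.3f}")         # substantial
```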
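Finally, a sketch of the check in point 4: estimate the unrestricted model (6.12) by OLS and test $\alpha_1\alpha_2 + \alpha_3 = 0$. The delta-method Wald statistic used here is one convenient way to test this nonlinear restriction; the procedure the text itself refers to is the one in Section 6.8.

```python
# Sketch: estimate (6.12) and test alpha1*alpha2 + alpha3 = 0 via a
# delta-method Wald statistic. Data are simulated from the restricted
# model, so the restriction holds in truth.
import numpy as np

rng = np.random.default_rng(2)
n, rho, beta = 200, 0.5, 2.0
x = rng.normal(size=n)
u = np.zeros(n)
y = np.zeros(n)
y[0] = beta * x[0]
for t in range(1, n):
    u[t] = rho * u[t - 1] + rng.normal()
    y[t] = beta * x[t] + u[t]

# OLS on (6.12): regress y_t on a constant, y_{t-1}, x_t, x_{t-1}.
Z = np.column_stack([np.ones(n - 1), y[:-1], x[1:], x[:-1]])
a, *_ = np.linalg.lstsq(Z, y[1:], rcond=None)
resid = y[1:] - Z @ a
sigma2 = resid @ resid / (len(resid) - Z.shape[1])
V = sigma2 * np.linalg.inv(Z.T @ Z)                # OLS covariance matrix

g = a[1] * a[2] + a[3]                             # value of the restriction
grad = np.array([0.0, a[2], a[1], 1.0])            # gradient of g in (const, a1, a2, a3)
wald = g**2 / (grad @ V @ grad)                    # approx. chi-square(1) under H0
print(f"g = {g:.3f}, Wald = {wald:.2f} (5% critical value about 3.84)")
```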

Of course, it is not sufficient to argue in favor of OLS on the basis of mean-square errors of the estimators alone. What is also relevant is how seriously the sampling variances are biased.

Robert F. Engle, "Specification of the Disturbance for Efficient Estimation," Econometrica, 1973.




6.6 Some Further Comments on the DW Test

In Section 6.2 we discussed the Durbin-Watson test for first-order autocorrelation, which is based on least squares residuals. There are two other tests that are also commonly used to test for first-order autocorrelation. These are

1. The von Neumann ratio.

2. The Berenblut-Webb test.

We will briefly describe what they are:

The von Neumann Ratio

The von Neumann ratio is defined as

$$\frac{\delta^2}{s^2} = \frac{\sum_{t=2}^{n}(e_t - e_{t-1})^2/(n-1)}{\sum_{t=1}^{n}(e_t - \bar{e})^2/n}$$

where $e_t$ are the residuals. The von Neumann ratio can be used only when the $e_t$ are independent (under the null hypothesis) and have a common variance. The least squares residuals $\hat{u}_t$ do not satisfy these conditions, and hence one cannot use the von Neumann ratio with least squares residuals.

For large samples $\delta^2/s^2$ can be taken as normally distributed with mean and variance given by

$$E\left(\frac{\delta^2}{s^2}\right) = \frac{2n}{n-1} \qquad \mathrm{var}\left(\frac{\delta^2}{s^2}\right) = \frac{4n^2(n-2)}{(n+1)(n-1)^3}$$

For finite samples one can use the tables prepared by B. I. Hart, published in Annals of Mathematical Statistics, 1942, pp. 207-214.

In recent years a large number of alternative residuals have been suggested for the linear regression model. Many of these, particularly the "recursive residuals," satisfy the properties that they are independent and have a common variance. These different types of residuals are useful for diagnostic checking of the regression model and are discussed in Chapter 12. The recursive residuals, in particular, can easily be computed. Since they are independent and have a common variance, one can use them to compute the von Neumann ratio, as suggested by Phillips and Harvey.

J. von Neumann, "Distribution of the Ratio of the Mean Square Successive Difference to the Variance," Annals of Mathematical Statistics, 1941, pp. 367-395.

G. D. A. Phillips and A. C. Harvey, "A Simple Test for Serial Correlation in Regression Analysis," Journal of the American Statistical Association, December 1974, pp. 935-939.

There are many other residuals suggested in the literature for the purpose of testing serial correlation: the Durbin residuals, the Sims residuals, and so on. But all of these are more complicated to compute. The recursive residuals, which are useful for the analysis of stability of regression relationships and are easy to compute, can be used to test for serial correlation when the Durbin-Watson test is inconclusive.
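As a compact illustration, here is a sketch of the large-sample von Neumann ratio test, assuming it is applied to a residual series that is independent with common variance under the null (recursive residuals, for example). The function name is illustrative.

```python
# Sketch: von Neumann ratio and its large-sample normal approximation,
# using the mean and variance formulas given above.
import numpy as np

def von_neumann_test(e):
    """Return (delta^2/s^2, z-score) for a residual series e."""
    e = np.asarray(e, dtype=float)
    n = len(e)
    delta2 = np.sum(np.diff(e) ** 2) / (n - 1)     # mean square successive difference
    s2 = np.sum((e - e.mean()) ** 2) / n           # sample variance (divisor n)
    ratio = delta2 / s2
    mean = 2.0 * n / (n - 1)
    var = 4.0 * n**2 * (n - 2) / ((n + 1) * (n - 1) ** 3)
    return ratio, (ratio - mean) / np.sqrt(var)

rng = np.random.default_rng(3)
ratio, z = von_neumann_test(rng.normal(size=100))  # independent series: z near 0
print(f"ratio = {ratio:.3f}, z = {z:.2f}")
```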

The Berenblut-Webb Test

The Berenblut-Webb test is based on the statistic

$$g = \frac{\sum_{t=2}^{n} \tilde{e}_t^2}{\sum_{t=1}^{n} \hat{u}_t^2}$$

where $\tilde{e}_t$ are the estimated residuals from a regression of first differences of $y$ on first differences of the explanatory variables (with no constant term), and $\hat{u}_t$ are the least squares residuals from the levels equation. If the original equation contains a constant term, we can use the Durbin-Watson tables of bounds with the g-statistic. The g-statistic is useful when values of $|\rho| > 1$ are possible.
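Below is a sketch of computing the $g$-statistic as reconstructed above: the residual sum of squares from the first-difference regression (no constant term) divided by the residual sum of squares from the levels regression. The data, names, and denominator all follow that reconstruction and are illustrative.

```python
# Sketch: Berenblut-Webb g-statistic, following the reconstruction above.
import numpy as np

def g_statistic(y, X):
    """g for a levels regression of y on X (X includes a constant column)."""
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    u = y - X @ b                                  # levels OLS residuals
    dy = np.diff(y)
    dX = np.diff(X[:, 1:], axis=0)                 # difference, dropping the constant
    c, *_ = np.linalg.lstsq(dX, dy, rcond=None)    # no constant term
    e = dy - dX @ c                                # first-difference residuals
    return np.sum(e**2) / np.sum(u**2)

rng = np.random.default_rng(4)
n = 50
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 2.0]) + rng.normal(size=n)
print(f"g = {g_statistic(y, X):.3f}")              # compare with DW bounds, like d
```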

The literature on the DW test and the problem of testing for autocorrelation is enormous. We will summarize a few of the important conclusions:

A. Since the DW statistic is printed out by almost all computer programs, and the tables for its use are readily available, one should use this test with least squares residuals. However, with most economic data it is better to use the upper bound $d_U$ as the true significance point (i.e., treat the inconclusive region as a rejection region). For instance, with $n = 25$ and 4 explanatory variables, we have $d_L = 1.04$ and $d_U = 1.77$ as the 5% level significance points. Thus if the computed DW statistic is $d = 1.5$, we would normally say that the test is inconclusive at the 5% level. Treating $d_U$ as the 5% significance point, we would reject the null hypothesis $\rho = 0$ at the 5% level. If more accuracy is required when $d$ is in the inconclusive region, there are a number of alternatives suggested, but all are computationally burdensome. The whole idea of testing for serial correlation is that if we do not reject the hypothesis $\rho = 0$, we can stay with OLS and avoid excessive computational burden. Thus trying to use all these other tests is more burdensome than estimating the model assuming $\rho \neq 0$. If we generate the recursive residuals for some other purpose, we can apply the von Neumann ratio test using these residuals. Also,

"I. I. Berenblut and G. I. Webb, "A New Test for Autocorrelated Errors in the Linear Regression Model," Journal of the Royal Statistical Society, Series B, Vol. 35, 1973, pp. 33-50.

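To make conclusion A concrete, here is a sketch of the suggested decision rule: compute the DW statistic $d$ and treat the upper bound $d_U$ as the significance point, so the classically inconclusive region counts as rejection. The bounds are the 5% values quoted above for $n = 25$ and four explanatory variables.

```python
# Sketch: DW test with the inconclusive region treated as rejection.
import numpy as np

def durbin_watson(e):
    """d = sum of squared successive differences over sum of squares."""
    e = np.asarray(e, dtype=float)
    return np.sum(np.diff(e) ** 2) / np.sum(e**2)

rng = np.random.default_rng(5)
print(f"white-noise d = {durbin_watson(rng.normal(size=25)):.2f}")  # near 2

d_L, d_U = 1.04, 1.77        # 5% bounds for n = 25, k = 4 (from the text)
d = 1.5                      # the computed value in the text's example
if d < d_L:
    verdict = "reject rho = 0 under either rule"
elif d < d_U:
    verdict = "classically inconclusive; rejected when d_U is the significance point"
else:
    verdict = "do not reject rho = 0"
print(f"d = {d:.2f}: {verdict}")
```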


