




Autocorrelation

6.1 Introduction

6.2 Durbin-Watson Test

6.3 Estimation in Levels Versus First Differences

6.4 Estimation Procedures with Autocorrelated Errors

6.5 Effect of AR(1) errors on OLS Estimates

6.6 Some Further Comments on the DW Test

6.7 Tests for Serial Correlation in Models with Lagged Dependent Variables

6.8 A General Test for Higher-Order Serial Correlation: The LM Test

6.9 Strategies When the DW Test Statistic Is Significant

*6.10 Trends and Random Walks

*6.11 ARCH Models and Serial Correlation

Summary

Exercises

6.1 Introduction

In Chapter 5 we considered the consequences of relaxing the assumption that the variance of the error term is constant. We now come to the next assumption, namely that the error terms in the regression model are independent. In this chapter we study the consequences of relaxing this assumption.




There are two situations under which error terms in the regression model can be correlated. In cross-section data it can arise among contiguous units. For instance, if we are studying consumption patterns of households, the error terms for households in the same neighborhood can be correlated. This is because the error term picks up the effect of omitted variables and these variables tend to be correlated for households in the same neighborhood (because of the "keeping up with the Joneses" effect). Similarly, if our data are on states, the error terms for contiguous states tend to be correlated. All these examples fall in the category of spatial correlation. In this chapter we will not be concerned with this type of correlation. Some of the factors that produce this type of correlation among the error terms can be taken care of by the use of dummy variables discussed in Chapter 8.

What we will be discussing in this chapter is the correlation between the error terms arising in time-series data. This type of correlation is called autocorrelation or serial correlation. The error term $u_t$ at time period $t$ is correlated with the error terms $u_{t+1}, u_{t+2}, \ldots$ and $u_{t-1}, u_{t-2}, \ldots$ and so on. Such correlation in the error terms often arises from the correlation of the omitted variables that the error term captures.

The correlation between $u_t$ and $u_{t-k}$ is called an autocorrelation of order $k$. The correlation between $u_t$ and $u_{t-1}$ is the first-order autocorrelation and is usually denoted by $\rho_1$. The correlation between $u_t$ and $u_{t-2}$ is called the second-order autocorrelation and is denoted by $\rho_2$, and so on. There are $(n - 1)$ such autocorrelations if we have $n$ observations. However, we cannot hope to estimate all of these from our data. Hence we often assume that these $(n - 1)$ autocorrelations can be represented in terms of one or two parameters.
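As a sketch of these definitions (the function name and the AR(1) example are ours, not the book's), the order-$k$ autocorrelation of a series can be estimated by correlating the series with its own $k$-period lag:

```python
import numpy as np

def autocorrelation(u, k):
    """Sample autocorrelation of order k: correlation of u_t with u_{t-k}."""
    u = np.asarray(u, dtype=float)
    u = u - u.mean()
    return float(np.sum(u[k:] * u[:-k]) / np.sum(u * u))

# An AR(1) series u_t = 0.7 u_{t-1} + e_t has autocorrelations that
# decay roughly like 0.7 ** k, so a single parameter summarizes
# all (n - 1) of them.
rng = np.random.default_rng(0)
e = rng.standard_normal(5000)
u = np.zeros_like(e)
for t in range(1, len(e)):
    u[t] = 0.7 * u[t - 1] + e[t]
print(autocorrelation(u, 1), autocorrelation(u, 2))  # roughly 0.7 and 0.49
```

This illustrates why the $(n - 1)$ autocorrelations are usually summarized by one or two parameters: under an AR(1) assumption they are all powers of a single $\rho$.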

In the following sections we discuss how to

1. Test for the presence of serial correlation.

2. Estimate the regression equation when the errors are serially correlated.

6.2 Durbin-Watson Test

The simplest and most commonly used model is one where the errors $u_t$ and $u_{t-1}$ have a correlation $\rho$. For this model one can think of testing hypotheses about $\rho$ on the basis of $\hat{\rho}$, the correlation between the least squares residuals $\hat{u}_t$ and $\hat{u}_{t-1}$. A commonly used statistic for this purpose (which is related to $\hat{\rho}$) is the Durbin-Watson (DW) statistic, which we will denote by $d$. It is defined as

$$d = \frac{\sum_{t=2}^{n} (\hat{u}_t - \hat{u}_{t-1})^2}{\sum_{t=1}^{n} \hat{u}_t^2}$$

where $\hat{u}_t$ is the estimated residual for period $t$. We can write $d$ as

$$d = \frac{\sum \hat{u}_t^2 + \sum \hat{u}_{t-1}^2 - 2 \sum \hat{u}_t \hat{u}_{t-1}}{\sum \hat{u}_t^2}$$

Since $\sum \hat{u}_t^2$ and $\sum \hat{u}_{t-1}^2$ are approximately equal if the sample is large, we have $d \simeq 2(1 - \hat{\rho})$. If $\hat{\rho} = +1$, then $d = 0$, and if $\hat{\rho} = -1$, then $d = 4$. We have $d = 2$ if $\hat{\rho} = 0$. If $d$ is close to 0 or 4, the residuals are highly correlated.
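A quick numerical check of the DW statistic and of the approximation $d \simeq 2(1 - \hat{\rho})$ (the helper names below are ours, not the book's):

```python
import numpy as np

def durbin_watson(resid):
    """d = sum_{t=2..n} (u_t - u_{t-1})^2 / sum_{t=1..n} u_t^2."""
    resid = np.asarray(resid, dtype=float)
    return float(np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2))

def rho_hat(resid):
    """First-order serial correlation of the residuals."""
    resid = np.asarray(resid, dtype=float)
    return float(np.sum(resid[1:] * resid[:-1]) / np.sum(resid ** 2))

# For serially uncorrelated residuals, both d and 2(1 - rho_hat)
# should be close to 2.
rng = np.random.default_rng(1)
resid = rng.standard_normal(2000)
d = durbin_watson(resid)
print(d, 2 * (1 - rho_hat(resid)))
```

The two quantities differ only by the end terms $(\hat{u}_1^2 + \hat{u}_n^2)/\sum \hat{u}_t^2$, which vanish as the sample grows; this is why the approximation holds only in large samples.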

The sampling distribution of $d$ depends on the values of the explanatory variables, and hence Durbin and Watson derived upper ($d_U$) and lower ($d_L$) limits for the significance levels of $d$. There are tables to test the hypothesis of zero autocorrelation against the hypothesis of first-order positive autocorrelation. (For negative autocorrelation we interchange $d_L$ and $d_U$.)

If $d < d_L$, we reject the null hypothesis of no autocorrelation. If $d > d_U$, we do not reject the null hypothesis. If $d_L < d < d_U$, the test is inconclusive.

Hannan and Terrell show that the upper bound $d_U$ of the DW statistic is a good approximation to its distribution when the regressors are slowly changing. They argue that economic time series are slowly changing and hence one can use $d_U$ as the correct significance point.

The significance points in the DW tables at the end of the book are tabulated for testing $\rho = 0$ against $\rho > 0$. If $d > 2$ and we wish to test the hypothesis $\rho = 0$ against $\rho < 0$, we consider $4 - d$ and refer to the Durbin-Watson tables as if we were testing for positive autocorrelation.
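The three-region decision rule, together with the $4 - d$ device for negative autocorrelation, can be sketched as a small function. The bounds $d_L$ and $d_U$ must be taken from the DW tables for the relevant $n$ and $k$; the numbers in the usage lines below are purely illustrative, not table entries.

```python
def dw_decision(d, d_lower, d_upper):
    """Three-region DW decision for H0: rho = 0 vs. rho > 0.
    For rho < 0, call dw_decision(4 - d, d_lower, d_upper)."""
    if d < d_lower:
        return "reject"          # evidence of positive autocorrelation
    if d > d_upper:
        return "do not reject"
    return "inconclusive"

# Illustrative bounds only (look up real ones in the DW tables):
print(dw_decision(0.86, 1.38, 1.60))      # reject
print(dw_decision(1.50, 1.38, 1.60))      # inconclusive
print(dw_decision(4 - 2.30, 1.38, 1.60))  # negative-autocorrelation test
```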

Although we have said that $d \simeq 2(1 - \hat{\rho})$, this approximation is valid only in large samples. The mean of $d$ when $\rho = 0$ has been shown to be given approximately by (the proof is rather complicated for our purpose)

$$E(d) \simeq 2 + \frac{2(k - 1)}{n - k}$$

where $k$ is the number of regression parameters estimated (including the constant term) and $n$ is the sample size. Thus, even for zero serial correlation, the statistic is biased upward from 2. If $k = 5$ and $n = 15$, the bias is as large as 0.8. We illustrate the use of the DW test with an example.
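The bias figure quoted above is easy to verify from the approximate-mean formula (with $k$ counting the constant term, as stated):

```python
def expected_dw(n, k):
    """Approximate mean of d when rho = 0: E(d) = 2 + 2(k - 1)/(n - k)."""
    return 2 + 2 * (k - 1) / (n - k)

print(expected_dw(15, 5) - 2)  # upward bias of 0.8 for k = 5, n = 15
```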

Illustrative Example

Consider the data in Table 3.11. The estimated production function is

$$\log X = -3.938 + 1.451 \log L_1 + 0.384 \log K_1$$

(The figures in parentheses under the coefficients are standard errors: 0.237, 0.083, and 0.048, respectively.)

$$R^2 = 0.9946 \qquad \text{DW} = 0.858 \qquad \hat{\rho} = 0.559$$

Referring to the DW tables with $k = 2$ and $n = 39$ for the 5% significance level, we see that $d_L = 1.38$. Since the observed $d = 0.858 < d_L$, we reject the hypothesis $\rho = 0$ at the 5% level.
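Table 3.11 itself is not reproduced here, so the following sketch applies the same calculation to simulated data: a regression whose errors follow an AR(1) process with an assumed $\rho = 0.6$ (all numbers are illustrative, not from the book).

```python
import numpy as np

rng = np.random.default_rng(2)
n = 39                                  # same sample size as the example
x = np.linspace(0.0, 10.0, n)
e = rng.standard_normal(n)
u = np.zeros(n)
for t in range(1, n):                   # AR(1) errors with rho = 0.6
    u[t] = 0.6 * u[t - 1] + e[t]
y = 1.0 + 2.0 * x + u

# OLS fit, then the DW statistic of the residuals
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
d = float(np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2))
print(d)  # typically well below 2, signalling positive autocorrelation
```

As in the production-function example, a value of $d$ far below 2 would be compared with the tabulated $d_L$ for the given $n$ and $k$ before rejecting $\rho = 0$.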

J. Durbin and G. S. Watson, "Testing for Serial Correlation in Least Squares Regression," Biometrika, 1950, pp. 409-428; 1951, pp. 159-178.

E. J. Hannan and R. D. Terrell, "Testing for Serial Correlation After Least Squares Regression," Econometrica, 1966, pp. 646-660.


