
AUTOCORRELATION

Although the DW test is the most commonly used test for serial correlation, it has several limitations.

1. It tests for only first-order serial correlation.

2. The test is inconclusive if the computed value lies between $d_L$ and $d_U$.

3. The test cannot be applied in models with lagged dependent variables.

At this point it would be distracting to answer all these criticisms. We will present answers to these points in later sections of this chapter. First we discuss some simple solutions to the serial correlation problem.
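Since the DW statistic $d$ plays a central role in everything that follows, it is worth recalling how it is computed from the OLS residuals: $d = \sum_t (e_t - e_{t-1})^2 / \sum_t e_t^2$. A minimal sketch in Python (the function name is ours):

```python
import numpy as np

def durbin_watson(e):
    """DW statistic: d = sum((e_t - e_{t-1})^2) / sum(e_t^2).

    d lies in [0, 4]; values near 2 indicate no first-order serial
    correlation, since approximately d = 2(1 - rho_hat).
    """
    e = np.asarray(e, dtype=float)
    return float(np.sum(np.diff(e) ** 2) / np.sum(e ** 2))
```

Perfectly positively correlated residuals drive $d$ toward 0; perfectly negatively correlated residuals drive it toward 4.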

6.3 Estimation in Levels Versus First Differences

If the DW test rejects the hypothesis of zero serial correlation, what is the next step?

In such cases one estimates a regression by transforming all the variables by $\rho$-differencing, that is, regressing $y_t - \hat{\rho} y_{t-1}$ on $x_t - \hat{\rho} x_{t-1}$, where $\hat{\rho}$ is the estimated $\rho$. However, since $\hat{\rho}$ is subject to sampling errors, one alternative that is followed when the DW statistic $d$ is small is to use a first-difference equation. In fact, a rough rule of thumb is: estimate an equation in first differences whenever the DW statistic is $< R^2$. In first-difference equations, we regress $(y_t - y_{t-1})$ on $(x_t - x_{t-1})$ (with all the explanatory variables differenced similarly). The implicit assumption is that the first differences of the errors, $(u_t - u_{t-1})$, are independent. For instance, if

$$y_t = \alpha + \beta x_t + u_t$$

is the regression equation, then

$$y_{t-1} = \alpha + \beta x_{t-1} + u_{t-1}$$

and we have by subtraction

$$(y_t - y_{t-1}) = \beta(x_t - x_{t-1}) + (u_t - u_{t-1})$$

If the errors in this equation are independent, we can estimate the equation by OLS. However, since the constant term a disappears under subtraction, we should be estimating the regression equation with no constant term. Often, we find a constant term also included in regression equations with first differences. This procedure is valid only if there is a linear trend term in the original equation. If the regression equation is

$$y_t = \alpha + \delta t + \beta x_t + u_t$$

then

$$y_{t-1} = \alpha + \delta(t-1) + \beta x_{t-1} + u_{t-1}$$




and on subtraction we get

$$(y_t - y_{t-1}) = \delta + \beta(x_t - x_{t-1}) + (u_t - u_{t-1})$$

which is an equation with a constant term.
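The derivations above are easy to verify on simulated data. The sketch below (the data-generating process and all names are our own) generates $y_t = \alpha + \beta x_t + u_t$ with AR(1) errors, then estimates $\beta$ both in levels and in first differences without a constant, as the no-trend case requires:

```python
import numpy as np

rng = np.random.default_rng(0)
n, alpha, beta, rho = 200, 2.0, 1.5, 0.8

# AR(1) errors: u_t = rho * u_{t-1} + eps_t
eps = rng.standard_normal(n)
u = np.zeros(n)
for t in range(1, n):
    u[t] = rho * u[t - 1] + eps[t]

x = rng.standard_normal(n).cumsum()   # a slowly wandering regressor
y = alpha + beta * x + u

# Levels regression: y_t on a constant and x_t
X = np.column_stack([np.ones(n), x])
b_levels, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ b_levels
dw = np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2)  # roughly 2(1 - rho)

# First-difference regression: (y_t - y_{t-1}) on (x_t - x_{t-1}),
# with NO constant, since alpha drops out under subtraction
dy, dx = np.diff(y), np.diff(x)
b_diff = float(np.sum(dx * dy) / np.sum(dx * dx))
```

Both estimates of $\beta$ are close to the true value, while the small DW statistic of the levels residuals signals the serial correlation that motivates differencing in the first place.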

When comparing equations in levels and first differences, one cannot compare the $R^2$'s because the explained variables are different. One can compare the residual sums of squares, but only after making a rough adjustment. Note that if $\operatorname{var}(u_t) = \sigma^2$, then the variance of the error term in the first-difference equation is

$$\operatorname{var}(u_t - u_{t-1}) = \operatorname{var}(u_t) + \operatorname{var}(u_{t-1}) - 2\operatorname{cov}(u_t, u_{t-1}) = \sigma^2 + \sigma^2 - 2\rho\sigma^2 = 2\sigma^2(1 - \rho)$$

where $\rho$ is the correlation coefficient between $u_t$ and $u_{t-1}$. Since the residual sum of squares divided by the appropriate degrees of freedom gives a consistent estimator for the error variance, the two residual sums of squares can be made roughly comparable if we multiply the residual sum of squares from the levels equation by

$$\frac{n-k-1}{n-k} \cdot 2(1 - \hat{\rho})$$

where $k$ is the number of regressors. If $\hat{\rho}$ is an estimate of $\rho$ from the levels equation, since $\hat{\rho} = (2 - d)/2$, where $d$ is the DW test statistic, we get $2(1 - \hat{\rho}) = d$. Thus, we can multiply the residual sum of squares from the levels equation by

$$\frac{n-k-1}{n-k} \cdot d$$

or, if $n$ is large, just by $d$. For instance, if the residual sum of squares is, say, 1.2 for the levels equation and 0.8 for the first-difference equation, and $n = 11$, $k = 1$, DW $= 0.9$, then the adjusted residual sum of squares for the levels equation is $(9/10)(0.9)(1.2) = 0.972$, which is the number to be compared with 0.8.
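The arithmetic of this numerical illustration can be transcribed directly (values from the text; the variable names are ours):

```python
n, k = 11, 1
d = 0.9            # DW statistic of the levels equation
rss_levels = 1.2   # residual sum of squares, levels equation
rss_diff = 0.8     # residual sum of squares, first-difference equation

# Adjustment factor: ((n - k - 1)/(n - k)) * d = (9/10)(0.9)
factor = (n - k - 1) / (n - k) * d
adjusted = factor * rss_levels   # the number comparable with rss_diff
```

Here the adjusted levels RSS still exceeds the first-difference RSS, so the first-difference equation fits better on this rough comparison.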

All this discussion, however, assumes that there are no lagged dependent variables among the explanatory variables. If there are lagged dependent variables in the equation, then the estimators of the parameters are not consistent and the above arguments do not hold.

Since we have comparable residual sums of squares, we can get comparable $R^2$'s as well, using the relationship $\text{RSS} = S_{yy}(1 - R^2)$. Define

$R_1^2$ = $R^2$ from the first-difference equation
$R_0^2$ = comparable $R^2$ from the levels equation
$\text{RSS}_0$ = residual sum of squares from the levels equation
$\text{RSS}_1$ = residual sum of squares from the first-difference equation

Then

$$\frac{1 - R_0^2}{1 - R_1^2} = \frac{\text{RSS}_0}{\text{RSS}_1} \cdot d \cdot \frac{n-k-1}{n-k}$$
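Rearranged for the comparable levels $R^2$, this relationship gives $R_0^2 = 1 - (1 - R_1^2)(\text{RSS}_0/\text{RSS}_1) \cdot d \cdot (n-k-1)/(n-k)$. A sketch using the rule-of-thumb numbers from the text together with a hypothetical first-difference $R_1^2 = 0.8$ (that value is our assumption, used only for illustration):

```python
n, k, d = 11, 1, 0.9
rss0, rss1 = 1.2, 0.8   # RSS of the levels and first-difference equations
r2_1 = 0.8              # hypothetical R^2 of the first-difference equation

ratio = (rss0 / rss1) * d * (n - k - 1) / (n - k)
r2_0 = 1.0 - (1.0 - r2_1) * ratio   # comparable R^2 for the levels equation
```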

An alternative formula by Harvey which does not contain the last term will be presented after some illustrative examples.

Some Illustrative Examples

Consider the simple Keynesian model discussed by Friedman and Meiselman. The equation estimated in levels is

$$C_t = \alpha + \beta A_t + u_t \qquad t = 1, 2, \ldots, n$$

where

$C_t$ = personal consumption expenditure (current dollars)
$A_t$ = autonomous expenditures (current dollars)

The model fitted for the 1929-1939 period gave:

1. $C_t = 58{,}335.9 + \underset{(0.312)}{2.498}\,A_t$

   $R^2 = 0.8771$, DW $= 0.89$, RSS $= 11{,}943 \times 10^{\dots}$

2. $\Delta C_t = \underset{(0.324)}{1.993}\,\Delta A_t$

   $R^2 = 0.8096$, DW $= 1.51$, RSS $= 8387 \times 10^{\dots}$

(Figures in parentheses are standard errors.) There is a reduction in $R^2$, but the $R^2$ values are not comparable. The equation in first differences is better because of the larger DW statistic and the lower residual sum of squares than for the equation in levels (even after the adjustments described). For the production-function data in Table 3.11 the first-difference equation is

$$\Delta \log X_t = \underset{(0.158)}{0.987}\,\Delta \log L_t + \underset{(0.134)}{0.502}\,\Delta \log K_t$$

$R^2 = 0.8405$, DW $= 1.177$, RSS $= 0.0278$

The comparable figures for the levels equation reported earlier in Chapter 4, equation (4.24), are

$R^2 = 0.9946$, DW $= 0.858$, RSS $= 0.0434$
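Since the sample size for the production-function data is not restated here, only the large-$n$ version of the adjustment (multiply the levels RSS by $d$) can be applied from the figures given; a sketch (variable names are ours):

```python
dw_levels = 0.858    # DW of the levels equation (4.24)
rss_levels = 0.0434  # RSS of the levels equation
rss_diff = 0.0278    # RSS of the first-difference equation

# Large-n rule of thumb: comparable levels RSS is approximately d * RSS
adjusted = dw_levels * rss_levels
# adjusted still exceeds rss_diff, so the first-difference equation
# fits better even after the adjustment
```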

M. Friedman and D. Meiselman, "The Relative Stability of Monetary Velocity and the Investment Multiplier in the U.S., 1897-1958," in Stabilization Policies (Commission on Money and Credit) (Englewood Cliffs, N.J.: Prentice-Hall, 1963).

"A. C. Harvey, "On Comparing Regression Models in Levels and First Differences," International Economic Review, Vol. 21, No. 3, October 1980, pp. 707-720.



