
6.5 Effect of AR(1) Errors on OLS Estimates

In Section 6.4 we described different procedures for the estimation of regression models with AR(1) errors. We will now answer two questions that might arise with the use of these procedures:

1. What do we gain from using these procedures?

2. When should we not use these procedures?

First, in the case we are considering (i.e., the case where the explanatory variable $x_t$ is independent of the error $u_t$), the ordinary least squares (OLS) estimates are unbiased. However, they will not be efficient. Further, the tests of significance we apply, which will be based on the wrong covariance matrix, will be wrong. In the case where the explanatory variables include lagged dependent variables, we will have some further problems, which we discuss in Section 6.7. For the present, let us consider the simple regression model

$$y_t = \beta x_t + u_t \tag{6.9}$$

Let $\operatorname{var}(u_t) = \sigma_u^2$ and $\operatorname{cov}(u_t, u_{t-s}) = \rho_s \sigma_u^2$. If the $u_t$ are AR(1), we have $\rho_s = \rho^s$. The OLS estimator of $\beta$ is

$$\hat{\beta} = \frac{\sum x_t y_t}{\sum x_t^2}$$

Hence

$$\hat{\beta} - \beta = \frac{\sum x_t u_t}{\sum x_t^2} \quad \text{and} \quad E(\hat{\beta} - \beta) = 0$$

$$V(\hat{\beta}) = \frac{1}{\left(\sum x_t^2\right)^2} \operatorname{var}\left(\sum x_t u_t\right) = \frac{\sigma_u^2}{\left(\sum x_t^2\right)^2} \left(\sum x_t^2 + 2\rho \sum x_t x_{t-1} + 2\rho^2 \sum x_t x_{t-2} + \cdots\right)$$

since $\operatorname{cov}(u_t, u_{t-s}) = \rho^s \sigma_u^2$. Thus we have

$$V(\hat{\beta}) = \frac{\sigma_u^2}{\sum x_t^2} \left(1 + 2\rho \frac{\sum x_t x_{t-1}}{\sum x_t^2} + 2\rho^2 \frac{\sum x_t x_{t-2}}{\sum x_t^2} + \cdots\right) \tag{6.10}$$
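To make this concrete, here is a minimal numerical sketch (ours, not part of the text's apparatus; the variable names and settings are illustrative). It evaluates $V(\hat{\beta})$ from (6.10) for one fixed $x$ series under the AR(1) covariance structure just derived and, anticipating the next paragraph, compares it with the naive formula $\sigma_u^2 / \sum x_t^2$:

```python
# Illustrative check of equation (6.10): for a fixed x series and AR(1)
# errors with parameter rho, compare the true variance of the OLS slope
# with the naive formula sigma_u^2 / sum(x_t^2). Names are ours.
import numpy as np

rng = np.random.default_rng(0)
T, rho, sigma_u2 = 50, 0.8, 1.0
x = rng.normal(size=T)                      # any fixed regressor series

# cov(u_t, u_s) = sigma_u^2 * rho^|t-s|
lags = np.abs(np.subtract.outer(np.arange(T), np.arange(T)))
omega = sigma_u2 * rho ** lags

sxx = np.sum(x**2)
true_var = x @ omega @ x / sxx**2           # V(beta_hat) from (6.10)
naive_var = sigma_u2 / sxx                  # what standard OLS output uses
print(true_var, naive_var)
```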

If we ignore the autocorrelation problem, we would be computing $V(\hat{\beta})$ as $\sigma_u^2 / \sum x_t^2$. Thus we would be ignoring the expression in the parentheses of (6.10). To get an idea of the magnitude of this expression, let us assume that the $x_t$ series also follow an AR(1) process with $\operatorname{var}(x_t) = \sigma_x^2$ and $\operatorname{cor}(x_t, x_{t-s}) = r^s$. Since we are now assuming $x_t$ to be stochastic, we will consider the asymptotic variance of $\hat{\beta}$. The expression in parentheses in (6.10) is now

$$1 + 2\rho r + 2\rho^2 r^2 + \cdots = 1 + \frac{2\rho r}{1 - \rho r} = \frac{1 + \rho r}{1 - \rho r}$$

Thus

$$V(\hat{\beta}) = \frac{\sigma_u^2}{T \sigma_x^2} \cdot \frac{1 + \rho r}{1 - \rho r}$$

where $T$ is the number of observations. If $r = \rho = 0.8$, then

$$\frac{1 + r\rho}{1 - r\rho} = \frac{1.64}{0.36} \approx 4.56$$

Thus ignoring the expression in the parentheses of (6.10) results in an underestimation by close to 78% for the variance of $\hat{\beta}$.

One further error is also involved. This is that we use $\sum \hat{u}_t^2 / (T - 1)$ as an estimate of $\sigma_u^2$. If $\rho = 0$, this is an unbiased estimate of $\sigma_u^2$. If $\rho \neq 0$, then under the assumptions we are making, we have approximately¹

$$E\left(\frac{\sum \hat{u}_t^2}{T - 1}\right) \simeq \frac{\sigma_u^2}{T - 1} \left(T - \frac{1 + r\rho}{1 - r\rho}\right)$$

Again if $\rho = r = 0.8$ and $T = 20$, we have

$$E\left(\frac{\sum \hat{u}_t^2}{T - 1}\right) \simeq \frac{\sigma_u^2}{19}(20 - 4.56) \approx 0.81\,\sigma_u^2$$

Thus there is a further underestimation of 19%. Taken together, the two effects mean that the estimated variance of $\hat{\beta}$ understates the true variance by more than 80% (the estimated standard error is less than half the true one).
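The arithmetic behind these percentages can be verified directly. The following sketch (ours) uses the text's values $r = \rho = 0.8$ and $T = 20$ to reproduce the 78%, 19%, and over-80% figures:

```python
# The arithmetic behind the 78% and 19% figures, for r = rho = 0.8, T = 20.
r = rho = 0.8
factor = (1 + r * rho) / (1 - r * rho)   # 1.64/0.36 ~ 4.56
print(1 - 1 / factor)                    # ~0.78: variance understated by 78%

T = 20
ratio = (T - factor) / (T - 1)           # E(sigma_hat^2)/sigma_u^2 ~ 0.81
print(1 - ratio)                         # ~0.19: sigma_u^2 understated by 19%
print(1 - ratio / factor)                # ~0.82: combined variance understatement, >80%
```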

We can also derive the asymptotic variance of the ML estimator $\tilde{\beta}$ when both $x_t$ and $u_t$ are first-order autoregressive as follows. Note that the ML estimator of $\beta$ is asymptotically equivalent to the estimator obtained from a regression of $(y_t - \rho y_{t-1})$ on $(x_t - \rho x_{t-1})$. Hence

$$\tilde{\beta} - \beta = \frac{\sum (x_t - \rho x_{t-1}) e_t}{\sum (x_t - \rho x_{t-1})^2}$$

and

$$V(\tilde{\beta}) = \operatorname{var}\left[\frac{\sum (x_t - \rho x_{t-1})(u_t - \rho u_{t-1})}{\sum (x_t - \rho x_{t-1})^2}\right] = \frac{\sigma_e^2}{\sum (x_t - \rho x_{t-1})^2}$$

where $e_t = u_t - \rho u_{t-1}$ is the innovation in the AR(1) process for $u_t$ and $\sigma_e^2 = \operatorname{var}(e_t)$.

When $x_t$ is autoregressive we have

$$\sum (x_t - \rho x_{t-1})^2 = T \sigma_x^2 (1 + \rho^2 - 2r\rho)$$

Also,

$$\operatorname{var}(u_t) = \sigma_u^2 = \frac{\sigma_e^2}{1 - \rho^2} \quad \text{or} \quad \sigma_e^2 = \sigma_u^2 (1 - \rho^2)$$

Hence by substitution we get the asymptotic variance of $\tilde{\beta}$ as

$$V(\tilde{\beta}) = \frac{\sigma_u^2 (1 - \rho^2)}{T \sigma_x^2 (1 + \rho^2 - 2r\rho)}$$

Thus the efficiency of the OLS estimator is

$$\frac{V(\tilde{\beta})}{V(\hat{\beta})} = \frac{1 - r\rho}{1 + r\rho} \cdot \frac{1 - \rho^2}{1 + \rho^2 - 2r\rho}$$
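A minimal sketch (ours) that evaluates this efficiency expression on a small grid of $r$ and $\rho$ values:

```python
# Efficiency V(beta_tilde)/V(beta_hat) as a function of r and rho.
def efficiency(r: float, rho: float) -> float:
    return ((1 - r * rho) / (1 + r * rho)) * (
        (1 - rho**2) / (1 + rho**2 - 2 * r * rho)
    )

for r in (0.4, 0.8):
    for rho in (0.4, 0.8):
        print(f"r = {r}, rho = {rho}: {efficiency(r, rho):.2f}")
# r = rho = 0.8 gives 0.22, i.e. the text's figure of 0.21 up to rounding.
```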

One can compute this for different values of $r$ and $\rho$, as in the sketch above. For $r = \rho = 0.8$ this efficiency is 0.21. Thus the consequences of autocorrelated errors are:

1. The least squares estimators are unbiased but are not efficient. Sometimes they are considerably less efficient than estimators obtained from procedures that take account of the autocorrelation.

2. The sampling variances are biased and sometimes likely to be seriously understated. Thus $R^2$ as well as the $t$ and $F$ statistics tend to be exaggerated, as the simulation sketch below illustrates.
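The following Monte Carlo sketch (ours; the settings $r = \rho = 0.8$, $T = 50$, and all names are illustrative) exhibits both consequences: the OLS slope stays centered on the true $\beta$, while the conventionally computed standard error is far smaller than the actual sampling spread.

```python
# Simulation sketch: with AR(1) x and AR(1) u (r = rho = 0.8), the OLS
# slope is unbiased, but the conventional standard error is far too small.
import numpy as np

rng = np.random.default_rng(1)
T, reps, beta, rho = 50, 2000, 1.0, 0.8
scale = 1.0 / np.sqrt(1 - rho**2)            # stationary start for AR(1)
estimates, naive_se = [], []
for _ in range(reps):
    x = np.empty(T)
    u = np.empty(T)
    x[0] = rng.normal() * scale
    u[0] = rng.normal() * scale
    for t in range(1, T):
        x[t] = rho * x[t - 1] + rng.normal()
        u[t] = rho * u[t - 1] + rng.normal()
    y = beta * x + u
    b = (x @ y) / (x @ x)                    # OLS slope
    s2 = np.sum((y - b * x) ** 2) / (T - 1)  # conventional sigma_u^2 estimate
    estimates.append(b)
    naive_se.append(np.sqrt(s2 / (x @ x)))

print(np.mean(estimates))   # close to beta = 1: unbiased
print(np.std(estimates))    # true sampling spread of the OLS slope
print(np.mean(naive_se))    # less than half the true spread: understated
```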

The solution to these problems is to use the maximum likelihood procedure or some other procedure mentioned earlier that takes account of the autocorrelation. However, there are four important points to note:

1. If $\rho$ is known, it is true that one can get estimators better than OLS that take account of autocorrelation. However, in practice $\rho$ is not known and has to be estimated. In small samples it is not necessarily true that one gains (in terms of mean-square error for $\hat{\beta}$) by estimating $\rho$. This problem has been investigated by Rao and Griliches,² who suggest the rule of

¹Note that $\hat{u}_t = u_t - (\hat{\beta} - \beta)x_t$. Hence $E(\sum \hat{u}_t^2) = E(\sum u_t^2) + E\left[(\hat{\beta} - \beta)^2 \sum x_t^2\right] - 2E\left[(\hat{\beta} - \beta) \sum x_t u_t\right]$. The first term is $T\sigma_u^2$. The second term is $\sigma_u^2(1 + r\rho)/(1 - r\rho)$. The last term is $2\sigma_u^2(1 + r\rho)/(1 - r\rho)$. In all this note that we take probability limits rather than expectations, since these results are all asymptotic.

²P. Rao and Z. Griliches, "Small-Sample Properties of Several Two-Stage Regression Methods in the Context of Auto-Correlated Errors," Journal of the American Statistical Association, March 1969.



