




14.7 The Cointegrating Regression

To fix ideas, we shall consider the simple example given by Engle and Granger (1987, p. 263). Consider two possibly correlated white-noise errors ε_{1t} and ε_{2t}, and let x_{1t} and x_{2t} be two series generated by the following model:

x_{1t} + β x_{2t} = u_{1t},    u_{1t} = u_{1,t−1} + ε_{1t}    (14.6)

x_{1t} + α x_{2t} = u_{2t},    u_{2t} = ρ u_{2,t−1} + ε_{2t},    |ρ| < 1    (14.7)

Note that u_{1t} ~ I(1) and u_{2t} ~ I(0). The model is internally consistent only if α ≠ β. The reason for this constraint is that if α = β, it is impossible to find any values of x_{1t} and x_{2t} that satisfy both equalities simultaneously. The parameters α and β are unidentified in the usual sense because there are no exogenous variables and the errors are correlated. The reduced forms for x_{1t} and x_{2t} are

x_{1t} = [α/(α − β)] u_{1t} − [β/(α − β)] u_{2t}

x_{2t} = −[1/(α − β)] u_{1t} + [1/(α − β)] u_{2t}

They are linear combinations of u_{1t} and u_{2t}, and hence they are both I(1). Note that equation (14.7) describes a linear combination of two I(1) variables that is stationary. Thus x_{1t} and x_{2t} are cointegrated.

In this case a linear least squares regression of x_{1t} on x_{2t} produces a consistent estimate of α that is actually "super-consistent," that is, it tends to the true value faster than the usual OLS estimator. In the usual case, if β̂ is the least squares estimator of β, (β̂ − β) tends to zero at the rate 1/√T, whereas in the case here (α̂ − α) tends to zero at the rate 1/T as T → ∞. This regression of x_{1t} on x_{2t} is called the "cointegrating regression." All linear combinations of x_{1t} and x_{2t} other than the cointegrating regression (14.7) will have an infinite variance. There is no simultaneous-equations bias in the estimation by OLS of equation (14.7) because the correlation between x_{2t} and u_{2t} is of a lower order in T than the variance of x_{2t}, which tends to infinity as T → ∞. This is the case whether we regress x_{1t} on x_{2t} to get an estimate of (−α) or we regress x_{2t} on x_{1t} to get an estimate of (−1/α). Note that if ρ = 1, u_{2t} is also I(1) and then we do not have a cointegrating regression.
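The super-consistency claim is easy to see in a small simulation. The sketch below is mine, not from the text: it uses illustrative values α = 2, β = 1, ρ = 0.5, builds x_{1t} and x_{2t} from the reduced forms above, and regresses x_{1t} on x_{2t}. The OLS slope settles down near −α even at moderate sample sizes:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, beta, rho = 2.0, 1.0, 0.5       # illustrative values, not from the text

def simulate(T):
    e1, e2 = rng.standard_normal(T), rng.standard_normal(T)
    u1 = np.cumsum(e1)                 # u1: random walk, so I(1)
    u2 = np.zeros(T)                   # u2: stationary AR(1), so I(0)
    for t in range(1, T):
        u2[t] = rho * u2[t - 1] + e2[t]
    # reduced forms from the text
    x1 = (alpha * u1 - beta * u2) / (alpha - beta)
    x2 = (u2 - u1) / (alpha - beta)
    return x1, x2

for T in (100, 10000):
    x1, x2 = simulate(T)
    slope = np.polyfit(x2, x1, 1)[0]   # OLS slope of x1 on x2
    print(T, slope)                    # converges quickly toward -alpha
```

The slope estimates −α because, by (14.7), x_{1t} = −α x_{2t} + u_{2t} with a stationary error.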

Note that equations (14.6) and (14.7) can be written in the autoregressive form

Δx_{1t} = βδ x_{1,t−1} + αβδ x_{2,t−1} + η_{1t}
Δx_{2t} = −δ x_{1,t−1} − αδ x_{2,t−1} + η_{2t}    (14.8)

where δ = (1 − ρ)/(α − β) and η_{1t} and η_{2t} are linear combinations of the ε's. If we define z_t = x_{1t} + α x_{2t}, we can write these as

Δx_{1t} = βδ z_{t−1} + η_{1t}
Δx_{2t} = −δ z_{t−1} + η_{2t}    (14.9)



*A. Banerjee, J. Dolado, D. F. Hendry, and G. Smith, "Exploring Equilibrium Relationships in Econometrics Through Static Models: Some Monte Carlo Evidence," Oxford Bulletin of Economics and Statistics, Vol. 48, 1986, pp. 253-277.

Equations (14.8) give the VAR (vector autoregression) representation for this simple model. An error correction model (ECM) is of the form

Δy_t = β₁ Δx_t + γ (y − βx)_{t−1} + u_t

It relates the change in y to the change in x and the previous period's disequilibrium. The ECM in this form for the model we have been considering can be derived simply by noting that we have defined z_t = x_{1t} + α x_{2t}. Hence, by equation (14.7), we have

z_t = ρ z_{t−1} + ε_{2t}    or

Δz_t = (ρ − 1) z_{t−1} + ε_{2t}    or    (14.10)

Δx_{1t} = −α Δx_{2t} + (ρ − 1) z_{t−1} + ε_{2t}

However, note that when estimated by OLS, this equation gives inconsistent estimates of the parameters because of the correlation between Δx_{2t} and ε_{2t}. Note also that all the variables in this equation are I(0).
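The inconsistency can be checked numerically. The sketch below is an illustration with assumed values α = 2, β = 1, ρ = 0.5 (not taken from the text): OLS applied to (14.10), even using the true z_{t−1}, does not recover −α as the coefficient on Δx_{2t}, because Δx_{2t} is correlated with ε_{2t}:

```python
import numpy as np

rng = np.random.default_rng(2)
alpha, beta, rho, T = 2.0, 1.0, 0.5, 50000   # assumed illustrative values
e1, e2 = rng.standard_normal(T), rng.standard_normal(T)
u1 = np.cumsum(e1)                       # random walk, I(1)
u2 = np.zeros(T)
for t in range(1, T):
    u2[t] = rho * u2[t - 1] + e2[t]      # stationary AR(1), I(0)
x1 = (alpha * u1 - beta * u2) / (alpha - beta)
x2 = (u2 - u1) / (alpha - beta)

z = x1 + alpha * x2                      # = u2, the true disequilibrium
dx1, dx2, z1 = np.diff(x1), np.diff(x2), z[:-1]

# OLS of dx1 on dx2 and z_{t-1}, i.e. equation (14.10)
X = np.column_stack([dx2, z1])
b = np.linalg.lstsq(X, dx1, rcond=None)[0]
print(b[0])   # biased away from -alpha = -2 (roughly -1.5 here)
```

With these values the coefficient on Δx_{2t} converges to roughly −1.5 rather than −2, which is exactly the simultaneity bias the text describes.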

Equations (14.9) can also be regarded as ECM representations, except that in this model Δx_{1t} does not involve Δx_{2t} and vice versa. When estimated by OLS, equations (14.9) give consistent estimates of the parameters because η_{1t} and η_{2t} are serially uncorrelated. We can get a consistent estimate of β from the estimation of equations (14.9).

One question we might ask at this stage is: How have we managed to identify the parameters α and β in equations (14.6) and (14.7)? The answer is that we have done this by exploiting the information in the specification of the error terms: u_{1t} is a random walk and u_{2t} is I(0). Although by considering a linear combination of the two equations we can generate an equation that looks like each, no linear combination can generate an I(0) error in equation (14.7). Hence α is identified. Similarly, no linear combination can generate a random-walk error as in (14.6). Thus β is identified. Equation (14.7) can be estimated by OLS to get a consistent estimate of α. This is free of the simultaneity bias because of the nature of x_{2t}, which is I(1), and u_{2t}, which is I(0). We then construct z_t and get an estimate of ρ from equations (14.9).
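This two-step idea can be sketched in a few lines (again with the illustrative values α = 2, β = 1, ρ = 0.5; the simulation itself is not part of the text): estimate α from the static regression, construct ẑ_t, run the two regressions in (14.9), and back out β and ρ:

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, beta, rho, T = 2.0, 1.0, 0.5, 20000   # assumed illustrative values
e1, e2 = rng.standard_normal(T), rng.standard_normal(T)
u1 = np.cumsum(e1)                        # u1: random walk, I(1)
u2 = np.zeros(T)
for t in range(1, T):
    u2[t] = rho * u2[t - 1] + e2[t]       # u2: stationary AR(1), I(0)
x1 = (alpha * u1 - beta * u2) / (alpha - beta)
x2 = (u2 - u1) / (alpha - beta)

# Step 1: the cointegrating regression of x1 on x2 estimates -alpha.
alpha_hat = -np.polyfit(x2, x1, 1)[0]

# Step 2: construct z_t = x1 + alpha_hat*x2 and run the regressions in
# (14.9): dx1 on z_{t-1} gives beta*delta, dx2 on z_{t-1} gives -delta.
z = x1 + alpha_hat * x2
z1, dx1, dx2 = z[:-1], np.diff(x1), np.diff(x2)
b1 = z1 @ dx1 / (z1 @ z1)                 # approx.  beta * delta
b2 = z1 @ dx2 / (z1 @ z1)                 # approx. -delta
beta_hat = -b1 / b2
rho_hat = 1 + b2 * (alpha_hat - beta_hat) # from delta = (1-rho)/(alpha-beta)
print(alpha_hat, beta_hat, rho_hat)
```

Because ẑ_{t−1} is predetermined and the η's are serially uncorrelated, both step-2 regressions are consistent, as the text notes.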

Engle and Granger suggest estimating the cointegrating regression first (note that this is a static regression, that is, a regression with no dynamics or lags) and then estimating the short-run dynamics through variants of the ECM by a two-stage estimation method using the estimated coefficient from the cointegrating regression. As discussed in Section 6.10, others have suggested estimating the long-run parameters and short-run dynamics simultaneously. Banerjee et al.* perform a Monte Carlo study based on a model similar to that given by equations (14.6) and (14.7) and find that in small samples, the estimates of α from the static regression (14.7) are biased. They suggest that it is better to estimate the long-run parameter through a dynamic model.

What has been said about regressions in I(1) variables can also be said about seasonal data. Our discussion of regressions involving variables with stochastic trends suggests that if y is differenced and x is not, so that Δy is I(0) and x is I(1), a regression of Δy on x does not make sense. If both y and x have trends, so that y ~ I(1) and x ~ I(1), a regression of y on x does not make sense unless they are cointegrated, that is, unless there exists a β such that y − βx ~ I(0). This is a case of common stochastic trends. Similar is the case with seasonal data. If y is seasonally adjusted and x is not, a regression of y on x does not make sense. If both y and x have stochastic seasonal elements, a regression of y on x makes sense only if they are seasonally cointegrated, that is, if there are common seasonal elements. Also note that if y_t and x_t are both I(1), a regression of the form Δy_t = β Δx_t + γ x_{t−2} + w_t does not make sense, because Δy_t, Δx_t, and w_t are all I(0) but x_{t−2} is not; it is I(1). Thus the variables are not all of the same order of integration.

14.8 Vector Autoregressions and Cointegration

There is a simple relationship between vector autoregressions and cointegration. In the two-variable case we have considered, if the characteristic roots of the matrix of coefficients in the VAR model are both equal to unity, the series are both I(1) but not cointegrated; if precisely one of the roots is unity, the series are cointegrated. If neither of the roots is unity, the series are stationary, so they are neither integrated nor cointegrated. In the example we have considered, the VAR model given by equation (14.8) can be written as

x_{1t} = (1 + βδ) x_{1,t−1} + αβδ x_{2,t−1} + η_{1t}
x_{2t} = −δ x_{1,t−1} + (1 − αδ) x_{2,t−1} + η_{2t}

The matrix of coefficients is

A = | 1 + βδ     αβδ    |
    |  −δ       1 − αδ  |

The characteristic roots* are 1 and 1 − αδ + βδ = ρ. Thus the series are cointegrated. Note that if ρ = 1, then δ = 0 and we have two unit roots. In this case x_{1t} and x_{2t} are not cointegrated. If we consider the matrix of coefficients in the equations (14.8), we have to talk of zero roots rather than unit roots, since that matrix of coefficients is A − I. Note that A − I is a singular matrix. It can be written as

*See Appendix to Chapter 7 for a discussion of characteristic roots.
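As a quick numerical check (with the same illustrative values assumed earlier, α = 2, β = 1, ρ = 0.5, so δ = 0.5; these are not from the text), the eigenvalues of the coefficient matrix of the levels VAR can be computed directly. One root is unity and the other equals ρ, matching the condition for cointegration:

```python
import numpy as np

alpha, beta, rho = 2.0, 1.0, 0.5           # assumed illustrative values
delta = (1 - rho) / (alpha - beta)

# Coefficient matrix A of the levels VAR implied by (14.8)
A = np.array([[1 + beta * delta, alpha * beta * delta],
              [-delta,           1 - alpha * delta]])

roots = np.sort(np.linalg.eigvals(A))
print(roots)   # one root equals 1, the other equals rho
```

Setting rho = 1 in this snippet makes delta = 0 and A the identity matrix, reproducing the two-unit-root (non-cointegrated) case described above.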
