10.3 The Adaptive Expectations Model

Thus the weights are $(k + 1)\beta$, $k\beta$, $(k - 1)\beta$, $(k - 2)\beta$, and so on. The sum of the weights is

$$\tfrac{1}{2}(k + 1)(k + 2)\,\beta$$

We might want to restrict this sum to 1. The only problem with this is that if there is a trend in $x_t$, say $x_t$ is increasing over time, then $x^*_{t+1}$ given by (10.7) will continuously underpredict the actual values. We can make an adjustment for this by multiplying $x^*_{t+1}$ by $(1 + g)$, where $g$ is the average growth rate of $x_t$. Thus in using distributed lag models we make adjustments for the growth rate observed in the past [which is actually the idea in formulas like (10.4)].

The distributed lag models received greater attention in the 1950s when Koyck, Cagan, and Nerlove suggested using an infinite lag distribution with geometrically declining weights. Equation (10.7) will now be written as

$$x^*_{t+1} = \sum_{i=0}^{\infty} \beta_i x_{t-i} \qquad (10.8)$$

If the $\beta_i$ are geometrically decreasing we can write

$$\beta_i = \beta_0 \lambda^i \qquad 0 < \lambda < 1$$

The sum of the infinite series is $\beta_0/(1 - \lambda)$, and if this sum is equal to 1 we should have $\beta_0 = 1 - \lambda$. Thus we get

$$x^*_{t+1} = \sum_{i=0}^{\infty} (1 - \lambda)\lambda^i x_{t-i} \qquad (10.9)$$

Figure 10.1 shows the graph of successive values of $\beta_i$. There is one interesting property with this relationship. Lag equation (10.9) by one time period and multiply by $\lambda$. We get

$$\lambda x^*_t = \lambda \sum_{i=0}^{\infty} (1 - \lambda)\lambda^i x_{t-1-i} = \sum_{i=0}^{\infty} (1 - \lambda)\lambda^{i+1} x_{t-1-i}$$

Substituting $j = i + 1$, we get

$$\lambda x^*_t = \sum_{j=1}^{\infty} (1 - \lambda)\lambda^j x_{t-j} \qquad (10.10)$$

Subtracting (10.10) from (10.9) we are left with only the first term on the right-hand side of (10.9). We thus get

L. M. Koyck, Distributed Lags and Investment Analysis (Amsterdam: North-Holland, 1954). A thorough discussion of the Koyck model can be found in M. Nerlove, Distributed Lags and Demand Analysis, U.S.D.A. Handbook 141 (Washington, D.C.: U.S. Government Printing Office, 1958).

Phillip D. Cagan, "The Monetary Dynamics of Hyperinflations," in M. Friedman (ed.), Studies in the Quantity Theory of Money (Chicago: University of Chicago Press, 1956), pp. 25-117. Marc Nerlove, The Dynamics of Supply: Estimation of Farmers' Response to Price (Baltimore: The Johns Hopkins Press, 1958).



Figure 10.1. Geometric or Koyck Lag.

$$x^*_{t+1} - \lambda x^*_t = (1 - \lambda)x_t \qquad (10.11)$$

or

$$\underbrace{x^*_{t+1} - x^*_t}_{\text{revision in expectation}} = (1 - \lambda)\underbrace{(x_t - x^*_t)}_{\text{last period's error}} \qquad (10.12)$$

Equation (10.12) says that expectations are revised (upward or downward) based on the most recent error. Suppose that $x^*_t$ was 100 but $x_t$ was 120. The error in prediction or expectation is 20. The prediction for $(t + 1)$ will be revised upward, but by less than the last period's error. The prediction will, therefore, be > 100 but < 120 (since $0 < \lambda < 1$). This is the reason why the model given by (10.9) is called the adaptive expectations model (adaptive based on the most recent error).
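To make the revision rule concrete, here is a minimal Python sketch (the value $\lambda = 0.4$, the simulated series, and the starting expectation are all assumptions made for illustration). It checks that the recursive update implied by (10.12) reproduces the geometric sum in (10.9), and that an expectation of 100, revised after observing 120, lands between 100 and 120.

```python
import numpy as np

lam = 0.4                                  # assumed value of lambda, 0 < lambda < 1
rng = np.random.default_rng(0)
x = rng.normal(loc=100.0, scale=10.0, size=200)   # hypothetical series x_t

# Recursive form of (10.12): x*_{t+1} = x*_t + (1 - lam) * (x_t - x*_t)
x_star = np.empty(len(x) + 1)
x_star[0] = x[0]                           # arbitrary starting expectation
for t in range(len(x)):
    x_star[t + 1] = x_star[t] + (1 - lam) * (x[t] - x_star[t])

# Geometric-lag form of (10.9), truncated at the available sample:
# x*_{t+1} = sum_i (1 - lam) * lam**i * x_{t-i}
t = len(x) - 1
weights = (1 - lam) * lam ** np.arange(t + 1)
direct = np.dot(weights, x[t::-1])

print(x_star[-1], direct)                  # virtually identical once lam**t is negligible

# The worked example from the text: x*_t = 100, x_t = 120, error = 20
print(100 + (1 - lam) * (120 - 100))       # 112.0, between 100 and 120
```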

Again, since the coefficients in (10.9) sum to 1, if there is a trend in $x_t$, the formula for $x^*_{t+1}$ has to be adjusted so that $x^*_{t+1}$ is multiplied by $(1 + g)$, where $g$ is the average growth rate in $x_t$. Otherwise, the adaptive expectations model can continuously underpredict the true value.
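A brief sketch of the underprediction under a trend and the $(1 + g)$ correction just described; the 2 percent growth rate, the starting level of 100, and $\lambda = 0.5$ are assumptions made purely for illustration.

```python
import numpy as np

lam, g = 0.5, 0.02                        # assumed smoothing parameter and growth rate
x = 100.0 * (1 + g) ** np.arange(120)     # hypothetical steadily growing series x_t

# Adaptive expectation from (10.11): x*_{t+1} = lam * x*_t + (1 - lam) * x_t
x_star = x[0]
for t in range(len(x) - 1):
    x_star = lam * x_star + (1 - lam) * x[t]

print(x[-1], x_star, x_star * (1 + g))
# x_star falls short of the last actual value; multiplying by (1 + g) narrows the
# gap, which is the growth-rate adjustment described in the text.
```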

10.4 Estimation with the Adaptive Expectations Model

Consider now the estimation of the investment equation (10.1) where the expected profits are given by the adaptive expectations model (10.9). We can substitute (10.9) in (10.1) and try to estimate the equation in that form. This is called estimation in the distributed lag form.




Alternatively, we can try to use equation (10.11) to eliminate the unobserved $x^*_{t+1}$ and estimate the resulting equation. This is called estimation in the autoregressive form. Since this is easier, we will discuss this first.

Estimation in the Autoregressive Form

Consider equation (10.1), which we want to estimate.

$$y_t = a + b x^*_{t+1} + u_t \qquad (10.1)$$

Lag this equation by one time period and multiply throughout by $\lambda$. We get

$$\lambda y_{t-1} = a\lambda + b\lambda x^*_t + \lambda u_{t-1} \qquad (10.13)$$

Subtracting (10.13) from (10.1) and using the definition of the adaptive expectations model as given in (10.11), we get

$$y_t - \lambda y_{t-1} = a(1 - \lambda) + b(x^*_{t+1} - \lambda x^*_t) + u_t - \lambda u_{t-1}$$
$$= a(1 - \lambda) + b(1 - \lambda)x_t + u_t - \lambda u_{t-1}$$

or

$$y_t = a' + \lambda y_{t-1} + b' x_t + v_t \qquad (10.14)$$

where $a' = a(1 - \lambda)$, $b' = b(1 - \lambda)$, and $v_t = u_t - \lambda u_{t-1}$. We have eliminated the unobserved $x^*_{t+1}$ and obtained an equation in the observed variable $x_t$.

Since equation (10.14) involves a regression of $y_t$ on $y_{t-1}$, we call this the autoregressive form. One can think of estimating equation (10.14) by ordinary least squares and, in fact, this is what was done in the 1950s, and that accounted for the popularity of the adaptive expectations model. However, notice that the error term $v_t$ is equal to $u_t - \lambda u_{t-1}$ and is thus autocorrelated. Since $y_{t-1}$ involves $u_{t-1}$, we see that $y_{t-1}$ is correlated with the error term $v_t$. Thus estimation of equation (10.14) by ordinary least squares gives us inconsistent estimates of the parameters.

What is the solution? We can use the instrumental variable method. Use $x_{t-1}$ as an instrument for $y_{t-1}$. Thus the normal equations will be

$$\sum x_t v_t = 0 \qquad \text{and} \qquad \sum x_{t-1} v_t = 0$$
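To see the difference between ordinary least squares and the instrumental variable method in equation (10.14), the following simulation sketch generates data from the model with assumed values of $a$, $b$, and $\lambda$, and then solves the moment conditions above with $x_{t-1}$ as the instrument for $y_{t-1}$. All numerical choices are assumptions made for the illustration; since $v_t$ is negatively correlated with $y_{t-1}$, the OLS estimate of $\lambda$ should come out below its true value, while the IV estimate should recover it.

```python
import numpy as np

rng = np.random.default_rng(1)
a, b, lam = 2.0, 0.8, 0.5                 # assumed "true" values of a, b, lambda
n = 20000

x = rng.normal(scale=2.0, size=n)         # hypothetical driving series x_t
u = rng.normal(size=n)

# Adaptive expectation x*_{t+1} = lam * x*_t + (1 - lam) * x_t, then (10.1):
# y_t = a + b * x*_{t+1} + u_t
x_star = np.zeros(n + 1)
for t in range(n):
    x_star[t + 1] = lam * x_star[t] + (1 - lam) * x[t]
y = a + b * x_star[1:] + u

# Autoregressive form (10.14): y_t = a' + lam * y_{t-1} + b' * x_t + v_t
Y, Ylag, X, Xlag = y[1:], y[:-1], x[1:], x[:-1]
Z = np.column_stack([np.ones_like(Y), Ylag, X])   # regressors
W = np.column_stack([np.ones_like(Y), Xlag, X])   # instruments: x_{t-1} for y_{t-1}

ols = np.linalg.lstsq(Z, Y, rcond=None)[0]
iv = np.linalg.solve(W.T @ Z, W.T @ Y)    # moment conditions above, plus the intercept
print("true:", [a * (1 - lam), lam, b * (1 - lam)])
print("OLS :", ols)   # estimate of lambda is inconsistent (v_t correlates with y_{t-1})
print("IV  :", iv)
```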

The other alternative is to take the error structure of $v_t$ explicitly into account (note that it depends on $\lambda$). But this amounts to not making the transformation we have made and thus is exactly the same as estimation in the distributed lag form, which is as follows.

Estimation in Distributed Lag Form

Substituting the expression (10.9) in (10.1), we get

$$y_t = a + b \sum_{i=0}^{\infty} (1 - \lambda)\lambda^i x_{t-i} + u_t$$
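Since $\lambda$ enters this equation nonlinearly, one simple way to fit it (a sketch of the general idea; the function names below are mine, and the text's own treatment may differ) is a grid search: for each trial value of $\lambda$, construct the truncated regressor $z_t(\lambda) = \sum_{i=0}^{t}(1 - \lambda)\lambda^i x_{t-i}$, regress $y_t$ on a constant and $z_t(\lambda)$, and keep the $\lambda$ giving the smallest residual sum of squares.

```python
import numpy as np

def z_of_lambda(x, lam):
    """Truncated regressor z_t = sum_i (1 - lam) * lam**i * x_{t-i}, built recursively."""
    x = np.asarray(x, dtype=float)
    z = np.empty_like(x)
    z[0] = (1 - lam) * x[0]
    for t in range(1, len(x)):
        z[t] = lam * z[t - 1] + (1 - lam) * x[t]
    return z

def grid_search(y, x, grid=np.linspace(0.01, 0.99, 99)):
    """Regress y_t on [1, z_t(lam)] for each lam in the grid; return the best fit."""
    best = None
    for lam in grid:
        Z = np.column_stack([np.ones_like(y), z_of_lambda(x, lam)])
        coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
        rss = float(np.sum((y - Z @ coef) ** 2))
        if best is None or rss < best[0]:
            best = (rss, lam, coef)          # coef holds the estimates of a and b
    return best
```

In practice one would drop the first few observations, where the truncated sum still depends heavily on the arbitrary start, before comparing residual sums of squares across values of $\lambda$.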


